{"id": "q-en-7710bis-228772fd3523d0efa7c7bd31a958f920ac1e066c02e23add7c5c43e8bc26a47b", "old_text": "authenticating to, and interacting with the captive portal is out of scope of this document. RFC7710 used DHCP code point 160. Due to a conflict, this document specifies TBD. [ This document is being collaborated on in Github at: https://github.com/capport-wg/7710bis. The most recent version of", "comments": "Note that I do need you both to confirm that you have fulfilled IPR obligations.\nThank you!", "new_text": "authenticating to, and interacting with the captive portal is out of scope of this document. This document replaces RFC 7710. RFC 7710 used DHCP code point 160. Due to a conflict, this document specifies TBD. [ This document is being collaborated on in Github at: https://github.com/capport-wg/7710bis. The most recent version of"}
{"id": "q-en-7710bis-228772fd3523d0efa7c7bd31a958f920ac1e066c02e23add7c5c43e8bc26a47b", "old_text": "clients that they are behind a captive-portal enforcement device and how to contact an API for more information. 1.1. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\",", "comments": "Note that I do need you both to confirm that you have fulfilled IPR obligations.\nThank you!", "new_text": "clients that they are behind a captive-portal enforcement device and how to contact an API for more information. This document replaces RFC 7710 RFC7710. 1.1. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\","}
{"id": "q-en-7710bis-228772fd3523d0efa7c7bd31a958f920ac1e066c02e23add7c5c43e8bc26a47b", "old_text": "Networks with no captive portals MAY explicitly indicate this condition by using this option with the IANA-assigned URI for this purpose (see ietf_params_capport-unrestricted). Clients observing the URI value \"urn:ietf:params:capport-unrestricted\" MAY forego time- consuming forms of captive portal detection. 2.1.", "comments": "Note that I do need you both to confirm that you have fulfilled IPR obligations.\nThank you!", "new_text": "Networks with no captive portals MAY explicitly indicate this condition by using this option with the IANA-assigned URI for this purpose (see iana-urn). Clients observing the URI value \"urn:ietf:params:capport-unrestricted\" MAY forego time-consuming forms of captive portal detection. 2.1."}
{"id": "q-en-7710bis-228772fd3523d0efa7c7bd31a958f920ac1e066c02e23add7c5c43e8bc26a47b", "old_text": "4.1. 4.1.1. Registry name: Captive Portal Unrestricted Identifier URN: urn:ietf:params:capport-unrestricted Specification: RFC TBD (this document) Repository: RFC TBD (this document) Index value: Only one value is defined (see URN above). No hierarchy is defined and therefore no sub-namespace registrations are possible. 4.2.", "comments": "Note that I do need you both to confirm that you have fulfilled IPR obligations.\nThank you!", "new_text": "4.1. This document registers a new entry under the IETF URN Sub-namespace defined in RFC3553: Captive Portal Unrestricted Identifier urn:ietf:params:capport-unrestricted RFC TBD (this document) RFC TBD (this document) Only one value is defined (see URN above). No hierarchy is defined and therefore no sub-namespace registrations are possible. 4.2."}
{"id": "q-en-ack-frequency-1de67cffa29ef23e6568c3b08a7e28194496055a0a102719d311a0e06436c7d7", "old_text": "To avoid additional delays to connection migration confirmation when using this extension, a client can bundle an IMMEDIATE_ACK frame with the first non-probing frame (Section 9.2 of QUIC-TRANSPORT) it sends or it can simply send an IMMEDIATE_ACK frame, which is a non-probing frame. An endpoint's congestion controller and RTT estimator are reset upon", "comments": "Connection Migration .... \" it sends or it can simply send an IMMEDIATE_ACK frame\" What is the intended difference between the words \"it sends\" or \"it can simply send\"?", "new_text": "To avoid additional delays to connection migration confirmation when using this extension, a client can bundle an IMMEDIATE_ACK frame with the first non-probing frame (Section 9.2 of QUIC-TRANSPORT) it sends or it can send only an IMMEDIATE_ACK frame, which is a non-probing frame. An endpoint's congestion controller and RTT estimator are reset upon"}
{"id": "q-en-ack-frequency-f6fca47092480459514087e075a264c3357be34c9934869b0402edb94dda02ca", "old_text": "ACK-Eliciting Threshold, an endpoint can expect that the peer will not need to wait for its \"max_ack_delay\" period before sending an acknowledgement. In such cases, the endpoint MAY therefore exclude the peer's 'max_ack_delay' from its PTO calculation. Note that this optimization requires some care in implementation, since it can cause premature PTOs under packet loss when \"ignore_order\" is enabled. 9.", "comments": "Provides more details on why excluding maxackdelay when Ignore Order is used may cause premature PTOs.\nI'm merging this -- I think this is good, but if we need to add more cautionary text, we can still do that.\nWhen the number of in-flight ack-eliciting packets is larger than the ACK-Eliciting Threshold, an endpoint can expect that the peer will not need to wait for its maxackdelay period before sending an acknowledgement. In such cases, the endpoint MAY therefore exclude the peer's 'maxackdelay' from its PTO calculation. Note that this optimization requires some care in implementation, since it can cause premature PTOs under packet loss when ignoreorder is enabled. Isn't this oversimplifying things so that the aspect of time until an ACK is expected to be sent is removed? To me it is not that maxackdelay is removed completely from URL PTO = smoothedrtt + max(4*rttvar, kGranularity) + maxackdelay Instead it is replaced with expected time until the sender have sent ACK-Eliciting Threshold number of packets when that is less than maxackdelay that is configured. Secondly, are you expecting the sender to make an estimate of how many packets more are received before an ACK is sent, i.e. care about what phase it is currently when calculating the PTO in regards to which packet is expected to trigger the next ACK? 
However, that is clearly prone to be wrong if there is a loss occurring currently whose loss-indicating ack has not yet been received by the sender. So it appears to me that this allowance is by default creating a situation which will cause premature PTOs, and it would be better to put some restrictions on it.\nOn the second point, the sender always knows how many packets are outstanding. So, it always knows whether the PTO computation -- which is always on the latest sent packet -- should include maxackdelay or not. That said, you are right the ignoreorder causes trouble here, so how about \"MAY therefore exclude the peer's 'maxackdelay' from its PTO calculation unless ignoreorder is enabled. When ignore_order is enabled, premature PTOs are significantly more likely under packet loss.\"", "new_text": "ACK-Eliciting Threshold, an endpoint can expect that the peer will not need to wait for its \"max_ack_delay\" period before sending an acknowledgement. In such cases, the endpoint MAY therefore exclude the peer's 'max_ack_delay' from its PTO calculation. When Ignore Order is enabled and loss causes the peer to not receive enough packets to trigger an immediate acknowledgement, the receiver will wait 'max_ack_delay', increasing the chances of a premature PTO. Therefore, if Ignore Order is enabled, the PTO MUST be larger than the peer's 'max_ack_delay'. 9."}
{"id": "q-en-ack-frequency-f4a23e2a6dcb8c58cfaf3e723efc7f0b0856391fa770fa1b8b4e932cca10ff45", "old_text": "intensive, and reducing acknowledgement frequency reduces this cost at a data sender. Severely asymmetric link technologies, such as DOCSIS, LTE, and satellite links, connection throughput in the data direction becomes constrained when the reverse bandwidth is filled by acknowledgment packets. When traversing such links, reducing the number of acknowledgments allows connection throughput to scale much further. As discussed in implementation however, there can be undesirable consequences to congestion control and loss recovery if a receiver", "comments": "I tried to work in what I thought were the important parts of Gorry's suggested text, PTAL if I captured everything.\nIt's also true that for many RF media, the performance of the forward path can be impacted by the rate of acknowledgment packets when the link bandwidth is shared.\n\u201cSeverely asymmetric link technologies, such as DOCSIS, LTE, and satellite links, connection throughput in the data direction becomes constrained when the reverse bandwidth is filled by acknowledgment packets. When traversing such links, reducing the number of acknowledgments allows connection throughput to scale much further.\u201d I\u2019m going to push back again on this as largely wrong. Links such as DOCSIS, LTE, satellite and WiFi are asymmetric. But it\u2019s not just that the design decisions made for these types of link resulted in asymmetric capacity (throughput limit), the designs also share the capacity, and it is important to mitigate the impact on that sharing of capacity, rather than purely thinking of it in terms of bytes/sec of capacity used. 
A lower feedback rate can help avoid inducing flow starvation to other flows that share resources along the path they use. On mobile devices sending data consumes RF and battery power and is more costly for a host of reasons not just asymmetry when present.\nNAME -- I don't follow why you say that the statement is wrong. It seems to me that you think that the statement is inadequate. Would you mind responding here with some text you would prefer instead, and I'm happy to wordsmith and turn it into a PR?\nI suggest some text something like: For some link technologies (e.g., DOCSIS, LTE, WiFi, and satellite links) transmission of acknowledgment packets contribute to link utilization and can incur significant forwarding costs. In some cases, the connection throughput in the data direction can become constrained when the reverse bandwidth is filled by acknowledgment packets. In other cases, the rate of acknowledgment packets can impact link efficiency (e.g. transmission opportunities or battery life). A lower ACK frequency can be beneficial when using paths that include such links.\nNAME -- Is the text proposal in the issue enough for you to make a PR?\nThis is much better.", "new_text": "intensive, and reducing acknowledgement frequency reduces this cost at a data sender. For severely asymmetric link technologies, such as DOCSIS, LTE, and satellite links, connection throughput in the forward path can become constrained when the reverse path is filled by acknowledgment packets. When traversing such links, reducing the number of acknowledgments can achieve higher connection throughput. The rate of acknowledgment packets can impact link efficiency, including transmission opportunities or battery life. As discussed in implementation however, there can be undesirable consequences to congestion control and loss recovery if a receiver"}
{"id": "q-en-ack-frequency-fb4f9d30b1a242eea453e33a16c86551dfa596f64cdef778a93b992a3543dfbf", "old_text": "A variable-length integer that indicates how many out of order packets can arrive before eliciting an immediate ACK. If no ACK_FREQUENCY frames have been received, this value defaults to 3, which is the recommended packet threhold for loss detection in (Section 18.2 of QUIC-RECOVERY). A value of 0 indicates immediate ACKs SHOULD never be sent due to receiving an out-of-order packet. ACK_FREQUENCY frames are ack-eliciting. However, their loss does not require retransmission if an ACK_FREQUENCY frame with a larger", "comments": "Sec #ack-frequency-frame says: and Sec #out-of-order says:\nIt seems optimal to default it to 3, because that's the default packet threshold, but the principle of least surprise argues for 1 (ie: no behavior change until an ACK_FREQUENCY frame is received), so I lean toward 1.\nThis PR adds a SHOULD NOT, but it seems sensible.", "new_text": "A variable-length integer that indicates how many out of order packets can arrive before eliciting an immediate ACK. If no ACK_FREQUENCY frames have been received, the endpoint immediately acknowledges any subsequent packets that are received out of order, as specified in Section 13.2 of QUIC-TRANSPORT; as such, the default value is 1. A value of 0 indicates immediate ACKs SHOULD never be sent due to receiving an out-of-order packet. ACK_FREQUENCY frames are ack-eliciting. However, their loss does not require retransmission if an ACK_FREQUENCY frame with a larger"}
{"id": "q-en-ack-frequency-fb4f9d30b1a242eea453e33a16c86551dfa596f64cdef778a93b992a3543dfbf", "old_text": "reordered ack-eliciting packet. This extension modifies this behavior. If an endpoint has not yet received an ACK_FREQUENCY frame, the endpoint immediately acknowledges any subsequent packets that are received out of order, as specified in Section 13.2 of QUIC- TRANSPORT. An endpoint, that receives an ACK_FREQUENCY frame with a Reordering Threshold value other than 0x00, MUST immediately send an ACK frame when the packet number of largest unacknowledged packet since the last detected reordering event exceeds the Reordering Threshold. If the most recent ACK_FREQUENCY frame received from the peer has a \"Reordering Threshold\" value of 0x00, the endpoint does not make this", "comments": "Sec #ack-frequency-frame says: and Sec #out-of-order says:\nIt seems optimal to default it to 3, because that's the default packet threshold, but the principle of least surprise argues for 1 (ie: no behavior change until an ACK_FREQUENCY frame is received), so I lean toward 1.\nThis PR adds a SHOULD NOT, but it seems sensible.", "new_text": "reordered ack-eliciting packet. This extension modifies this behavior. In order to ensure timely loss detection, the Reordering Threshold provided in the ACK_FREQUENCY frame SHOULD NOT be larger than the reordering threshold used by the data sender for loss detection. (Section 18.2 of QUIC-RECOVERY) recommends a default packet threshold for loss detection of 3. An endpoint that receives an ACK_FREQUENCY frame with a Reordering Threshold value other than 0x00 MUST immediately send an ACK frame when the packet number of largest unacknowledged packet since the last detected reordering event exceeds the Reordering Threshold. If the most recent ACK_FREQUENCY frame received from the peer has a \"Reordering Threshold\" value of 0x00, the endpoint does not make this"}
{"id": "q-en-ack-frequency-0c4fc45cc1caf17631e75a22dbb3baacdc223b7c950318dbc3b55341ed30afb4", "old_text": "receivers to ignore obsolete frames. A variable-length integer representing the maximum number of ack- eliciting packets the recipient of this frame can receive without sending an acknowledgment. In other words, an acknowledgement is sent when more than this number of ack-eliciting packets have been received. Since this is a maximum value, a receiver can send an acknowledgement earlier. A value of 0 results in a receiver immediately acknowledging every ack-eliciting packet. A variable-length integer representing the value to which the endpoint requests the peer update its \"max_ack_delay\"", "comments": "We have inconsistent definitions for Ack-Eliciting Threshold, which probably has people implementing this differently. In Section 4: In Section 7: The intended definition is the one in Section 4, so fixes and makes this consistent.\nThe document uses MUST in multiple places, however, as discussed in the new proposed text for the security considerations (see PR ) the requester cannot enforce any ACK strategy and there might be reason like practical limitation or an overload situation where the ACK sender simply decides to send less ACKs. So the question is really can to requirements be MUSTs or is SHOULD more realistic and correct?", "new_text": "receivers to ignore obsolete frames. A variable-length integer representing the maximum number of ack-eliciting packets the recipient of this frame receives before sending an acknowledgment. A receiving endpoint SHOULD send at least one ACK frame when more than this number of ack-eliciting packets have been received. A value of 0 results in a receiver immediately acknowledging every ack-eliciting packet. By default, an endpoint sends an ACK frame for every other ack-eliciting packet, as specified in Section 13.2.2 of QUIC-TRANSPORT, which corresponds to a value of 1. 
A variable-length integer representing the value to which the endpoint requests the peer update its \"max_ack_delay\""}
{"id": "q-en-ack-frequency-0c4fc45cc1caf17631e75a22dbb3baacdc223b7c950318dbc3b55341ed30afb4", "old_text": "which is in milliseconds. Sending a value smaller than the \"min_ack_delay\" advertised by the peer is invalid. Receipt of an invalid value MUST be treated as a connection error of type PROTOCOL_VIOLATION. A variable-length integer that indicates how many out of order packets can arrive before eliciting an immediate ACK. If no", "comments": "We have inconsistent definitions for Ack-Eliciting Threshold, which probably has people implementing this differently. In Section 4: In Section 7: The intended definition is the one in Section 4, so fixes and makes this consistent.\nThe document uses MUST in multiple places, however, as discussed in the new proposed text for the security considerations (see PR ) the requester cannot enforce any ACK strategy and there might be reason like practical limitation or an overload situation where the ACK sender simply decides to send less ACKs. So the question is really can to requirements be MUSTs or is SHOULD more realistic and correct?", "new_text": "which is in milliseconds. Sending a value smaller than the \"min_ack_delay\" advertised by the peer is invalid. Receipt of an invalid value MUST be treated as a connection error of type PROTOCOL_VIOLATION. On receiving a valid value in this field, the endpoint MUST update its \"max_ack_delay\" to the value provided by the peer. A variable-length integer that indicates how many out of order packets can arrive before eliciting an immediate ACK. If no"}
{"id": "q-en-ack-frequency-0c4fc45cc1caf17631e75a22dbb3baacdc223b7c950318dbc3b55341ed30afb4", "old_text": "the Sequence Number value in the frame is not greater than the largest processed thus far. An endpoint will have committed a \"max_ack_delay\" value to the peer, which specifies the maximum amount of time by which the endpoint will delay sending acknowledgments. When the endpoint receives an ACK_FREQUENCY frame, it MUST update this maximum time to the value proposed by the peer in the Request Max Ack Delay field. 5. A sender can use an ACK_FREQUENCY frame to reduce the number of", "comments": "We have inconsistent definitions for Ack-Eliciting Threshold, which probably has people implementing this differently. In Section 4: In Section 7: The intended definition is the one in Section 4, so fixes and makes this consistent.\nThe document uses MUST in multiple places, however, as discussed in the new proposed text for the security considerations (see PR ) the requester cannot enforce any ACK strategy and there might be reason like practical limitation or an overload situation where the ACK sender simply decides to send less ACKs. So the question is really can to requirements be MUSTs or is SHOULD more realistic and correct?", "new_text": "the Sequence Number value in the frame is not greater than the largest processed thus far. 5. A sender can use an ACK_FREQUENCY frame to reduce the number of"}
{"id": "q-en-ack-frequency-0c4fc45cc1caf17631e75a22dbb3baacdc223b7c950318dbc3b55341ed30afb4", "old_text": "Prior to receiving an ACK_FREQUENCY frame, endpoints send acknowledgements as specified in Section 13.2.1 of QUIC-TRANSPORT. On receiving an ACK_FREQUENCY frame and updating its recorded \"max_ack_delay\" and \"Ack-Eliciting Threshold\" values (ack-frequency- frame), the endpoint MUST send an acknowledgement when one of the following conditions are met: Since the last acknowledgement was sent, the number of received ack-eliciting packets is greater than or equal to the recorded \"Ack-Eliciting Threshold\". Since the last acknowledgement was sent, \"max_ack_delay\" amount of time has passed.", "comments": "We have inconsistent definitions for Ack-Eliciting Threshold, which probably has people implementing this differently. In Section 4: In Section 7: The intended definition is the one in Section 4, so fixes and makes this consistent.\nThe document uses MUST in multiple places, however, as discussed in the new proposed text for the security considerations (see PR ) the requester cannot enforce any ACK strategy and there might be reason like practical limitation or an overload situation where the ACK sender simply decides to send less ACKs. So the question is really can to requirements be MUSTs or is SHOULD more realistic and correct?", "new_text": "Prior to receiving an ACK_FREQUENCY frame, endpoints send acknowledgements as specified in Section 13.2.1 of QUIC-TRANSPORT. On receiving an ACK_FREQUENCY frame and updating its \"max_ack_delay\" and \"Ack-Eliciting Threshold\" values (ack-frequency-frame), the endpoint sends an acknowledgement when one of the following conditions is met: Since the last acknowledgement was sent, the number of received ack-eliciting packets is greater than the \"Ack-Eliciting Threshold\". Since the last acknowledgement was sent, \"max_ack_delay\" amount of time has passed."}
{"id": "q-en-ack-frequency-65adebbb180f8a1b8a0d6548616c5f83986a577321a16b1d00880e6499891102", "old_text": "3. Delaying acknowledgements as much as possible reduces both work done by the endpoints and network load. An endpoint's loss detection and congestion control mechanisms however need to be tolerant of this", "comments": "This PR changes a few things: Introduces a TP for negotiating extension use Changes to Makes the fields required Specifies invalid values\nI'm merging this, let's address issues in later PRs.\nI think it'll be easier to make both parameters required. Not doing so saves 1 to 2 bytes, which is really not much in a frame I expect will be sent a few times a connection. Also, I think 0 should be an invalid value for both params, and 1 should be invalid for the number of packets, though I can see someone arguing that point.\naddresses these issues, with the exception that 1 is allowed for packet tolerance. I think that's actually useful for some startup schemes, so we shouldn't disallow it.", "new_text": "3. Endpoints advertise their support of the extension described in this document by sending the following transport parameter: A variable-length integer representing the minimum amount of time in microseconds by which the endpoint can delay an acknowledgement. Values of 0 and 2^14 or greater are invalid, and receipt of these values MUST be treated as a connection error of type PROTOCOL_VIOLATION. An endpoint's min_ack_delay MUST NOT be greater than its max_ack_delay. Endpoints that support this extension MUST treat receipt of a min_ack_delay that is greater than the received max_ack_delay as a connection error of type PROTOCOL_VIOLATION. Note that while the endpoint's max_ack_delay transport parameter is in milliseconds (Section 18.2 in QUIC-TRANSPORT), min_ack_delay is specified in microseconds. 4. Delaying acknowledgements as much as possible reduces both work done by the endpoints and network load. 
An endpoint's loss detection and congestion control mechanisms however need to be tolerant of this"}
{"id": "q-en-ack-frequency-65adebbb180f8a1b8a0d6548616c5f83986a577321a16b1d00880e6499891102", "old_text": "ACK-FREQUENCY frames have a type of 0xXX, and contain the following fields: An optional variable-length integer representing the maximum number of ack-eliciting packets after which the receiver sends an acknowledgement. An optional variable-length integer representing the maximum amount of time, in microseconds, after which the receiver sends an acknowledgement. The value of the Time Tolerance field is scaled by multiplying the encoded value by 2 to the power of the value of the \"ack_delay_exponent\" transport parameter set by the sender of the ACK-FREQUENCY frame (see Section XX in QUIC-TRANSPORT). Scaling in this fashion allows for a larger range of values with a shorter encoding at the cost of lower resolution. ACK-FREQUENCY frames are ack-eliciting. However, their loss does not require retransmission.", "comments": "This PR changes a few things: Introduces a TP for negotiating extension use Changes to Makes the fields required Specifies invalid values\nI'm merging this, let's address issues in later PRs.\nI think it'll be easier to make both parameters required. Not doing so saves 1 to 2 bytes, which is really not much in a frame I expect will be sent a few times a connection. Also, I think 0 should be an invalid value for both params, and 1 should be invalid for the number of packets, though I can see someone arguing that point.\naddresses these issues, with the exception that 1 is allowed for packet tolerance. I think that's actually useful for some startup schemes, so we shouldn't disallow it.", "new_text": "ACK-FREQUENCY frames have a type of 0xXX, and contain the following fields: A variable-length integer representing the maximum number of ack-eliciting packets after which the receiver sends an acknowledgement. A value of 1 will result in an acknowledgement being sent for every ack-eliciting packet received. A value of 0 is invalid. 
A variable-length integer representing an update to the peer's \"max_ack_delay\" transport parameter (Section 18.2 in QUIC-TRANSPORT). The value of this field is in microseconds. Any value smaller than the min_ack_delay advertised by this endpoint is invalid. Receipt of invalid values in an ACK-FREQUENCY frame MUST be treated as a connection error of type PROTOCOL_VIOLATION. ACK-FREQUENCY frames are ack-eliciting. However, their loss does not require retransmission."}
{"id": "q-en-ack-frequency-65adebbb180f8a1b8a0d6548616c5f83986a577321a16b1d00880e6499891102", "old_text": "An endpoint MAY send ACK-FREQUENCY frames multiple times during a connection and with different values. 4. An endpoint could receive an ACK-FREQUENCY frame with either or both Time Tolerance and Packet Tolerance values specified. In addition, the endpoint will have committed a max_ack_delay value to the peer, which specifies the maximum amount of time by which the endpoint will delay sending acknowledgments; see Section XX of QUIC-TRANSPORT. The endpoint MUST send an acknowledgement when any of the three thresholds are met. Doing so results in the fewest acknowledgements possible within the sender's tolerance and honors the receiver's commitment. loss and batch section describes exceptions to this strategy. An endpoint is expected to bundle acknowledgements when possible. Every time an acknowledgement is sent, bundled or otherwise, all counters and timers related to delaying of acknowledgments are reset. 4.1. To expedite loss detection, endpoints SHOULD send an acknowledgement immediately on receiving an ack-eliciting packet that is out of", "comments": "This PR changes a few things: Introduces a TP for negotiating extension use Changes to Makes the fields required Specifies invalid values\nI'm merging this, let's address issues in later PRs.\nI think it'll be easier to make both parameters required. Not doing so saves 1 to 2 bytes, which is really not much in a frame I expect will be sent a few times a connection. Also, I think 0 should be an invalid value for both params, and 1 should be invalid for the number of packets, though I can see someone arguing that point.\naddresses these issues, with the exception that 1 is allowed for packet tolerance. I think that's actually useful for some startup schemes, so we shouldn't disallow it.", "new_text": "An endpoint MAY send ACK-FREQUENCY frames multiple times during a connection and with different values. 
An endpoint will have committed a max_ack_delay value to the peer, which specifies the maximum amount of time by which the endpoint will delay sending acknowledgments. When the endpoint receives an ACK-FREQUENCY frame, it MUST update this maximum time to the value proposed by the peer in the Update Max Ack Delay field. 5. On receiving an ACK-FREQUENCY frame, an endpoint will have an updated maximum delay for sending an acknowledgement, and a packet threshold as well. The endpoint MUST send an acknowledgement when either of the two thresholds is met. loss and batch section describes exceptions to this strategy. An endpoint is expected to bundle acknowledgements when possible. Every time an acknowledgement is sent, bundled or otherwise, all counters and timers related to delaying of acknowledgments are reset. 5.1. To expedite loss detection, endpoints SHOULD send an acknowledgement immediately on receiving an ack-eliciting packet that is out of"}
{"id": "q-en-ack-frequency-65adebbb180f8a1b8a0d6548616c5f83986a577321a16b1d00880e6499891102", "old_text": "codepoint in the IP header SHOULD be acknowledged immediately, to reduce the peer's response time to congestion events. 4.2. For performance reasons, an endpoint can receive incoming packets from the underlying platform in a batch of multiple packets. This", "comments": "This PR changes a few things: Introduces a TP for negotiating extension use Changes to Makes the fields required Specifies invalid values\nI'm merging this, let's address issues in later PRs.\nI think it'll be easier to make both parameters required. Not doing so saves 1 to 2 bytes, which is really not much in a frame I expect will be sent a few times a connection. Also, I think 0 should be an invalid value for both params, and 1 should be invalid for the number of packets, though I can see someone arguing that point.\naddresses these issues, with the exception that 1 is allowed for packet tolerance. I think that's actually useful for some startup schemes, so we shouldn't disallow it.", "new_text": "codepoint in the IP header SHOULD be acknowledged immediately, to reduce the peer's response time to congestion events. 5.2. For performance reasons, an endpoint can receive incoming packets from the underlying platform in a batch of multiple packets. This"}
{"id": "q-en-ack-frequency-65adebbb180f8a1b8a0d6548616c5f83986a577321a16b1d00880e6499891102", "old_text": "whether a threshold has been met and an acknowledgement is to be sent in response. 5. An endpoint can send multiple ACK-FREQUENCY frames, and each one of them can have different values.", "comments": "This PR changes a few things: Introduces a TP for negotiating extension use Changes to Makes the fields required Specifies invalid values\nI'm merging this, let's address issues in later PRs.\nI think it'll be easier to make both parameters required. Not doing so saves 1 to 2 bytes, which is really not much in a frame I expect will be sent a few times a connection. Also, I think 0 should be an invalid value for both params, and 1 should be invalid for the number of packets, though I can see someone arguing that point.\naddresses these issues, with the exception that 1 is allowed for packet tolerance. I think that's actually useful for some startup schemes, so we shouldn't disallow it.", "new_text": "whether a threshold has been met and an acknowledgement is to be sent in response. 6. An endpoint can send multiple ACK-FREQUENCY frames, and each one of them can have different values."}
{"id": "q-en-ack-frequency-65adebbb180f8a1b8a0d6548616c5f83986a577321a16b1d00880e6499891102", "old_text": "If the enclosing packet number is not greater than the recorded one, the endpoint MUST ignore this ACK-FREQUENCY frame. 6. TBD. 7. TBD.", "comments": "This PR changes a few things: Introduces a TP for negotiating extension use Changes to Makes the fields required Specifies invalid values\nI'm merging this, let's address issues in later PRs.\nI think it'll be easier to make both parameters required. Not doing so saves 1 to 2 bytes, which is really not much in a frame I expect will be sent a few times a connection. Also, I think 0 should be an invalid value for both params, and 1 should be invalid for the number of packets, though I can see someone arguing that point.\naddresses these issues, with the exception that 1 is allowed for packet tolerance. I think that's actually useful for some startup schemes, so we shouldn't disallow it.", "new_text": "If the enclosing packet number is not greater than the recorded one, the endpoint MUST ignore this ACK-FREQUENCY frame. 7. TBD. 8. TBD."}
{"id": "q-en-acme-284964f4a11d3c07c829660b0d7f16183ab73b3d257144e74fe6ba40672e1b16", "old_text": "A \"revoke-certificate\" resource For the \"new-X\" resources above, the server MUST have exactly one resource for each function. This resource may be addressed by multiple URIs, but all must provide equivalent functionality.", "comments": "This uses two JWS signatures to achieve the two-signature property during rollover, instead of using extra layers of encoding and embedding. It also simplifies the logic and splits it out into its own resource.", "new_text": "A \"revoke-certificate\" resource A \"key-change\" resource For the \"new-X\" resources above, the server MUST have exactly one resource for each function. This resource may be addressed by multiple URIs, but all must provide equivalent functionality."}
{"id": "q-en-acme-284964f4a11d3c07c829660b0d7f16183ab73b3d257144e74fe6ba40672e1b16", "old_text": "6.3.1. A client may wish to change the public key that is associated with a registration, e.g., in order to mitigate the risk of key compromise. To do this, the client first constructs a JSON object representing a request to update the registration: The string \"reg\", indicating an update to the registration. The JWK thumbprint of the new key RFC7638, base64url-encoded The client signs this object with the old key pair and encodes the object and signature as a JWS. The client then sends this JWS to the server in the \"rollover\" field of a request to update the registration. On receiving a request to the registration URL with the \"rollover\" attribute set, the server MUST perform the following steps: Check that the contents of the \"rollover\" attribute are a valid JWS Check that the \"rollover\" JWS verifies using the account key corresponding to this registration Check that the payload of the JWS is a valid JSON object Check that the \"resource\" field of the object has the value \"reg\" Check that the \"newKey\" field of the object contains the JWK thumbprint of the account key used to sign the request If all of these checks pass, then the server updates the registration by replacing the old account key with the public key carried in the \"jwk\" header of the request JWS. If the update was successful, then the server sends a response with status code 200 (OK) and the updated registration object as its body. If the update was not successful, then the server responds with an error status code and a problem document describing the error. 6.3.2.", "comments": "This uses two JWS signatures to achieve the two-signature property during rollover, instead of using extra layers of encoding and embedding. It also simplifies the logic and splits it out into its own resource.", "new_text": "6.3.1. 
A client may wish to change the public key that is associated with a registration in order to recover from a key compromise or proactively mitigate the impact of an unnoticed key compromise. To change the key associated with an account, the client POSTs a key-change object with a \"key\" field containing a JWK representation of the new public key. The JWS of this POST must have two signatures: one signature from the existing key on the account, and one signature from the new key that the client proposes to use. This demonstrates that the client actually has control of the private key corresponding to the new public key. The protected header must contain a JWK field containing the current account key. On receiving a key-change request, the server MUST perform the following steps in addition to the typical JWS validation: Check that the JWS protected header contains a \"jwk\" field containing a key that matches a currently active account. Check that there are exactly two signatures on the JWS. Check that one of the signatures validates using the account key from (1). Check that the \"key\" field contains a well-formed JWK that meets key strength requirements. Check that the \"key\" field is not equivalent to the current account key or any other currently active account key. Check that one of the two signatures on the JWS validates using the JWK from the \"key\" field. If all of these checks pass, then the server updates the corresponding registration by replacing the old account key with the new public key and returns status code 200. Otherwise, the server responds with an error status code and a problem document describing the error. 6.3.2."}
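The two-signature key-change validation in the record above can be sketched as follows. This is a deliberately simplified model: real ACME uses asymmetric JWS (RFC 7515 general serialization, with each signature covering its own protected header), while HMAC-SHA256 over key-id strings stands in here only so the example is self-contained. All names are illustrative.

```python
import hashlib
import hmac
import json

def _sig(key: bytes, signed: bytes) -> bytes:
    # Stand-in for a real JWS signature over the payload.
    return hmac.new(key, signed, hashlib.sha256).digest()

def validate_key_change(jws: dict, account_keys: dict) -> bool:
    """Sketch of the server-side key-change checks.

    jws: {"protected": {"jwk": <current key id>},
          "payload": {"key": <new key id>},
          "signatures": [sig1, sig2]}
    account_keys: mapping of active key id -> secret bytes.
    """
    current = jws["protected"].get("jwk")
    if current not in account_keys:                       # check 1
        return False
    sigs = jws["signatures"]
    if len(sigs) != 2:                                    # check 2
        return False
    signed = json.dumps(jws["payload"], sort_keys=True).encode()
    old_ok = any(hmac.compare_digest(s, _sig(account_keys[current], signed))
                 for s in sigs)                           # check 3
    new_key = jws["payload"].get("key")
    if not new_key or new_key in account_keys:            # checks 4-5 (well-
        return False                                      # formedness reduced
                                                          # to presence)
    new_ok = any(hmac.compare_digest(s, _sig(new_key.encode(), signed))
                 for s in sigs)                           # check 6
    return old_ok and new_ok
```

The key property preserved from the spec text is that both the current account key and the proposed new key must each validate one of the two signatures; a request signed twice by the old key alone is rejected.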
{"id": "q-en-acme-1472af68817dc1dc5a8d76f64ea12430ed448f1c01bb345116de59d4b85efece", "old_text": "The public key of the account key pair, encoded as a JSON Web Key object RFC7517. \"valid\" or \"deactivated\" An array of URIs that the server can use to contact the client for issues related to this authorization. For example, the server may", "comments": "what's the difference between \"deactivated\" and \"revoked\"?\nDeactivated is done by the user; revoked is done by an administrator.\nOK, so we need to either say that or use clearer terms :) I talked with NAME offline, and I think he's going to propose something.", "new_text": "The public key of the account key pair, encoded as a JSON Web Key object RFC7517. The status of this registration. Possible values are: \"valid\", \"deactivated\", and \"revoked\". \"deactivated\" should be used to indicate user-initiated deactivation, whereas \"revoked\" should be used to indicate administratively initiated deactivation. An array of URIs that the server can use to contact the client for issues related to this authorization. For example, the server may"}
{"id": "q-en-acme-31cc97c68018fe7c33a6cd17d810c4ba10dfb47b7fc1c264711f9aaab02efc3c", "old_text": "Fulfill the server's requirements for issuance Finalize the application and request issuance The client's application for a certificate describes the desired certificate using a PKCS#10 Certificate Signing Request (CSR) plus a", "comments": "Based on the acme mailing list discussion[0] on \"proactive\" vs \"on-finalization\" issuance it seems there is more support for the proactive approach. Speaking on behalf of Let's Encrypt and the Boulder developers we're willing to compromise and support proactive issuance as the sole issuance method in order to simplify the protocol & implementations. This PR changes the language around certificate issuance to reflect that the server MUST issue the certificate proactively once all the required challenges for an application have been fulfilled. The open issue corresponding to making this decision is removed now that there is one supported option. In an effort to close out open questions around applications this commit also removes the open issue around multiple certificates per application. If there is a demand for automating repeat issuance there should be a new thread started with a concrete proposal. 0: URL\nNAME apologies! I missed your comments until today. Both are resolved in URL Thank you for the feedback.\nOh! They were fresh and I misread the dates. Not such a bad turn around after all :-)\nNAME Fantastic, thanks. Merging. Hi folks, I wanted to revisit a discussion on the server proactively issuing certificates. Section 4, \"Protocol Overview\"[0] currently has a paragraph that reads: the the application be to the There is also an \"Open Issue\" remark related to proactive issuance in Section [1]. I think the specification should decide whether proactive issuance or on-finalization issuance should be used instead of allowing both with no indication to the client which will happen. 
It's the preference of Let's Encrypt that the specification only support on-finalization issuance. This is cheaper to implement and seems to be the path of least surprise. It also seems to better support large volume issuance when the client may want explicit control over when the certificates for multiple applications are issued. In the earlier \"Re: [Acme] Preconditions\" thread[2] Andrew Ayer seems to agree that there should be only one issuance method to simplify client behaviours, though he favours proactive issuance instead of on-finalization issuance. While it saves a round-trip message I'm not sure that this alone is convincing enough to choose proactive issuance as the one issuance method. What are the thoughts of other list members? Daniel/cpu [0] URL [1] URL [2] URL", "new_text": "Fulfill the server's requirements for issuance Await issuance and download the issued certificate The client's application for a certificate describes the desired certificate using a PKCS#10 Certificate Signing Request (CSR) plus a"}
{"id": "q-en-acme-31cc97c68018fe7c33a6cd17d810c4ba10dfb47b7fc1c264711f9aaab02efc3c", "old_text": "the client has accomplished the challenge. Once the validation process is complete and the server is satisfied that the client has met its requirements, the server can either proactively issue the requested certificate or wait for the client to request that the application be \"finalized\", at which point the certificate will be issued and provided to the client. To revoke a certificate, the client simply sends a revocation request indicating the certificate to be revoked, signed with an authorized", "comments": "Based on the acme mailing list discussion[0] on \"proactive\" vs \"on-finalization\" issuance it seems there is more support for the proactive approach. Speaking on behalf of Let's Encrypt and the Boulder developers we're willing to compromise and support proactive issuance as the sole issuance method in order to simplify the protocol & implementations. This PR changes the language around certificate issuance to reflect that the server MUST issue the certificate proactively once all the required challenges for an application have been fulfilled. The open issue corresponding to making this decision is removed now that there is one supported option. In an effort to close out open questions around applications this commit also removes the open issue around multiple certificates per application. If there is a demand for automating repeat issuance there should be a new thread started with a concrete proposal. 0: URL\nNAME apologies! I missed your comments until today. Both are resolved in URL Thank you for the feedback.\nOh! They were fresh and I misread the dates. Not such a bad turn around after all :-)\nNAME Fantastic, thanks. Merging. Hi folks, I wanted to revisit a discussion on the server proactively issuing certificates. 
Section 4, \"Protocol Overview\"[0] currently has a paragraph that reads: the the application be to the There is also an \"Open Issue\" remark related to proactive issuance in Section [1]. I think the specification should decide whether proactive issuance or on-finalization issuance should be used instead of allowing both with no indication to the client which will happen. It's the preference of Let's Encrypt that the specification only support on-finalization issuance. This is cheaper to implement and seems to be the path of least surprise. It also seems to better support large volume issuance when the client may want explicit control over when the certificates for multiple applications are issued. In the earlier \"Re: [Acme] Preconditions\" thread[2] Andrew Ayer seems to agree that there should be only one issuance method to simplify client behaviours, though he favours proactive issuance instead of on-finalization issuance. While it saves a round-trip message I'm not sure that this alone is convincing enough to choose proactive issuance as the one issuance method. What are the thoughts of other list members? Daniel/cpu [0] URL [1] URL [2] URL", "new_text": "the client has accomplished the challenge. Once the validation process is complete and the server is satisfied that the client has met its requirements, the server will issue the requested certificate and make it available to the client. To revoke a certificate, the client simply sends a revocation request indicating the certificate to be revoked, signed with an authorized"}
{"id": "q-en-acme-31cc97c68018fe7c33a6cd17d810c4ba10dfb47b7fc1c264711f9aaab02efc3c", "old_text": "A URL for the certificate that has been issued in response to this application. [[ Open issue: There are two possible behaviors for the CA here. Either (a) the CA automatically issues once all the requirements are fulfilled, or (b) the CA waits for confirmation from the client that it should issue. If we allow both, we will need a signal in the application object of whether confirmation is required. I would prefer that auto-issue be the default, which would imply a syntax like \"confirm\": true ]] [[ Open issue: Should this syntax allow multiple certificates? That would support reissuance / renewal in a straightforward way, especially if the CSR / notBefore / notAfter could be updated. ]] The elements of the \"requirements\" array are immutable once set, except for their \"status\" fields. If any other part of the object changes after the object is created, the client MUST consider the", "comments": "Based on the acme mailing list discussion[0] on \"proactive\" vs \"on-finalization\" issuance it seems there is more support for the proactive approach. Speaking on behalf of Let's Encrypt and the Boulder developers we're willing to compromise and support proactive issuance as the sole issuance method in order to simplify the protocol & implementations. This PR changes the language around certificate issuance to reflect that the server MUST issue the certificate proactively once all the required challenges for an application have been fulfilled. The open issue corresponding to making this decision is removed now that there is one supported option. In an effort to close out open questions around applications this commit also removes the open issue around multiple certificates per application. If there is a demand for automating repeat issuance there should be a new thread started with a concrete proposal. 0: URL\nNAME apologies! I missed your comments until today. 
Both are resolved in URL Thank you for the feedback.\nOh! They were fresh and I misread the dates. Not such a bad turn around after all :-)\nNAME Fantastic, thanks. Merging. Hi folks, I wanted to revisit a discussion on the server proactively issuing certificates. Section 4, \"Protocol Overview\"[0] currently has a paragraph that reads: the the application be to the There is also an \"Open Issue\" remark related to proactive issuance in Section [1]. I think the specification should decide whether proactive issuance or on-finalization issuance should be used instead of allowing both with no indication to the client which will happen. It's the preference of Let's Encrypt that the specification only support on-finalization issuance. This is cheaper to implement and seems to be the path of least surprise. It also seems to better support large volume issuance when the client may want explicit control over when the certificates for multiple applications are issued. In the earlier \"Re: [Acme] Preconditions\" thread[2] Andrew Ayer seems to agree that there should be only one issuance method to simplify client behaviours, though he favours proactive issuance instead of on-finalization issuance. While it saves a round-trip message I'm not sure that this alone is convincing enough to choose proactive issuance as the one issuance method. What are the thoughts of other list members? Daniel/cpu [0] URL [1] URL [2] URL", "new_text": "A URL for the certificate that has been issued in response to this application. The elements of the \"requirements\" array are immutable once set, except for their \"status\" fields. If any other part of the object changes after the object is created, the client MUST consider the"}
{"id": "q-en-acme-31cc97c68018fe7c33a6cd17d810c4ba10dfb47b7fc1c264711f9aaab02efc3c", "old_text": "then the server SHOULD change the status of the application to \"invalid\" and MAY delete the application resource. The server SHOULD issue the requested certificate and update the application resource with a URL for the certificate as soon as the client has fulfilled the server's requirements. If the client has already satisfied the server's requirements at the time of this request (e.g., by obtaining authorization for all of the identifiers in the certificate in previous transactions), then the server MAY proactively issue the requested certificate and provide a URL for it in the \"certificate\" field of the application. The server MUST, however, still list the satisfied requirements in the \"requirements\"", "comments": "Based on the acme mailing list discussion[0] on \"proactive\" vs \"on-finalization\" issuance it seems there is more support for the proactive approach. Speaking on behalf of Let's Encrypt and the Boulder developers we're willing to compromise and support proactive issuance as the sole issuance method in order to simplify the protocol & implementations. This PR changes the language around certificate issuance to reflect that the server MUST issue the certificate proactively once all the required challenges for an application have been fulfilled. The open issue corresponding to making this decision is removed now that there is one supported option. In an effort to close out open questions around applications this commit also removes the open issue around multiple certificates per application. If there is a demand for automating repeat issuance there should be a new thread started with a concrete proposal. 0: URL\nNAME apologies! I missed your comments until today. Both are resolved in URL Thank you for the feedback.\nOh! They were fresh and I misread the dates. Not such a bad turn around after all :-)\nNAME Fantastic, thanks. Merging. 
Hi folks, I wanted to revisit a discussion on the server proactively issuing certificates. Section 4, \"Protocol Overview\"[0] currently has a paragraph that reads: the the application be to the There is also an \"Open Issue\" remark related to proactive issuance in Section [1]. I think the specification should decide whether proactive issuance or on-finalization issuance should be used instead of allowing both with no indication to the client which will happen. It's the preference of Let's Encrypt that the specification only support on-finalization issuance. This is cheaper to implement and seems to be the path of least surprise. It also seems to better support large volume issuance when the client may want explicit control over when the certificates for multiple applications are issued. In the earlier \"Re: [Acme] Preconditions\" thread[2] Andrew Ayer seems to agree that there should be only one issuance method to simplify client behaviours, though he favours proactive issuance instead of on-finalization issuance. While it saves a round-trip message I'm not sure that this alone is convincing enough to choose proactive issuance as the one issuance method. What are the thoughts of other list members? Daniel/cpu [0] URL [1] URL [2] URL", "new_text": "then the server SHOULD change the status of the application to \"invalid\" and MAY delete the application resource. The server MUST issue the requested certificate and update the application resource with a URL for the certificate as soon as the client has fulfilled the server's requirements. If the client has already satisfied the server's requirements at the time of this request (e.g., by obtaining authorization for all of the identifiers in the certificate in previous transactions), then the server MUST proactively issue the requested certificate and provide a URL for it in the \"certificate\" field of the application. The server MUST, however, still list the satisfied requirements in the \"requirements\""}
{"id": "q-en-acme-bcc1584545e5fb32cc44ef6a89005bcd4b079539f8d77f598ab5ed8c37ac7245", "old_text": "\"alg\" \"jwk\" (only for requests to new-reg and revoke-cert resources) \"kid\" (for all other requests).", "comments": "In the term \"registration\" was changed to \"account\". In and in some of the remaining references to \"registration\" were removed, but some 'new-reg' and references to \"registration\" remained. This PR removes, hopefully, the last references to \"registration\".", "new_text": "\"alg\" \"jwk\" (only for requests to new-account and revoke-cert resources) \"kid\" (for all other requests)."}
{"id": "q-en-acme-bcc1584545e5fb32cc44ef6a89005bcd4b079539f8d77f598ab5ed8c37ac7245", "old_text": "The \"jwk\" and \"kid\" fields are mutually exclusive. Servers MUST reject requests that contain both. For new-reg requests, and for revoke-cert requests authenticated by certificate key, there MUST be a \"jwk\" field. For all other requests, there MUST be a \"kid\" field. This field must contain the account URI received by POSTing to the new-reg resource. Note that authentication via signed JWS request bodies implies that GET requests are not authenticated. Servers MUST NOT respond to GET", "comments": "In the term \"registration\" was changed to \"account\". In and in some of the remaining references to \"registration\" were removed, but some 'new-reg' and references to \"registration\" remained. This PR removes, hopefully, the last references to \"registration\".", "new_text": "The \"jwk\" and \"kid\" fields are mutually exclusive. Servers MUST reject requests that contain both. For new-account requests, and for revoke-cert requests authenticated by certificate key, there MUST be a \"jwk\" field. For all other requests, there MUST be a \"kid\" field. This field must contain the account URI received by POSTing to the new-account resource. Note that authentication via signed JWS request bodies implies that GET requests are not authenticated. Servers MUST NOT respond to GET"}
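The "jwk"/"kid" mutual-exclusion rule in the record above can be sketched as a small header check. The resource names and the treatment of revoke-cert (which may be authenticated either by account "kid" or by certificate "jwk") are a simplification of the spec text, and the function name is illustrative.

```python
# Sketch of the JWS protected-header authentication-field rules:
# "jwk" and "kid" are mutually exclusive, new-account requires "jwk",
# revoke-cert accepts either, everything else requires "kid".

def check_auth_fields(protected: dict, resource: str) -> bool:
    has_jwk = "jwk" in protected
    has_kid = "kid" in protected
    if has_jwk == has_kid:            # both present or both absent: reject
        return False
    if resource == "new-account":
        return has_jwk                # new-account MUST carry "jwk"
    if resource == "revoke-cert":
        return True                   # account "kid" or certificate "jwk"
    return has_kid                    # all other requests MUST carry "kid"
```

A server would perform this check before any signature verification, since it determines which key the signature is validated against.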
{"id": "q-en-acme-bcc1584545e5fb32cc44ef6a89005bcd4b079539f8d77f598ab5ed8c37ac7245", "old_text": "The server MUST ignore any updates to the \"key\", or \"order\" fields or any other fields it does not recognize. The server MUST verify that the request is signed with the private key corresponding to the \"key\" field of the request before updating the registration. For example, to update the contact information in the above account, the client could send the following request:", "comments": "In the term \"registration\" was changed to \"account\". In and in some of the remaining references to \"registration\" were removed, but some 'new-reg' and references to \"registration\" remained. This PR removes, hopefully, the last references to \"registration\".", "new_text": "The server MUST ignore any updates to the \"key\", or \"order\" fields or any other fields it does not recognize. The server MUST verify that the request is signed with the private key corresponding to the \"key\" field of the request before updating the account object. For example, to update the contact information in the above account, the client could send the following request:"}
{"id": "q-en-acme-bcc1584545e5fb32cc44ef6a89005bcd4b079539f8d77f598ab5ed8c37ac7245", "old_text": "the CA may consider the new account associated with the external account corresponding to the MAC key, and MUST reflect value of the \"external-account-binding\" field in the resulting account object. If any of these checks fail, then the CA MUST reject the new- registration request. 6.3.3.", "comments": "In the term \"registration\" was changed to \"account\". In and in some of the remaining references to \"registration\" were removed, but some 'new-reg' and references to \"registration\" remained. This PR removes, hopefully, the last references to \"registration\".", "new_text": "the CA may consider the new account associated with the external account corresponding to the MAC key, and MUST reflect the value of the \"external-account-binding\" field in the resulting account object. If any of these checks fail, then the CA MUST reject the new-account request. 6.3.3."}
{"id": "q-en-acme-8a84ab0d6d27677388c00e4a94e3debeb2b4fda718309250517ce58a20d85036", "old_text": "supported scheme but an invalid value then the server MUST return an error of type \"invalidContact\". The server creates an account and stores the public key used to verify the JWS (i.e., the \"jwk\" element of the JWS header) to authenticate future requests from the account. The server returns this account object in a 201 (Created) response, with the account URL in a Location header field. If the server already has an account registered with the provided account key, then it MUST return a response with a 200 (OK) status code and provide the URL of that account in the Location header field. This allows a client that has an account key but not the corresponding account URI to recover the account URL. If the server wishes to present the client with terms under which the ACME service is to be used, it MUST indicate the URL where such terms can be accessed in the \"terms-of-service\" subfield of the \"meta\" field in the directory object, and the server MUST reject new-account requests that do not have the \"terms-of-service-agreed\" set to \"true\". Clients SHOULD NOT automatically agree to terms by default. Rather, they SHOULD require some user interaction for agreement to terms. If the client wishes to update this information in the future, it sends a POST request with updated information to the account URL. The server MUST ignore any updates to \"order\" fields or any other fields it does not recognize. For example, to update the contact information in the above account, the client could send the following request: Servers SHOULD NOT respond to GET requests for account resources as these requests are not authenticated. 
If a client wishes to query the server for information about its account (e.g., to examine the", "comments": "This PR is a first go at addressing the following points I raised on the mailing list.\nThis allows us to address the account URL recovery use case in without having special handling for empty payloads.", "new_text": "supported scheme but an invalid value then the server MUST return an error of type \"invalidContact\". If the server wishes to present the client with terms under which the ACME service is to be used, it MUST indicate the URL where such terms can be accessed in the \"terms-of-service\" subfield of the \"meta\" field in the directory object, and the server MUST reject new-account requests that do not have the \"terms-of-service-agreed\" set to \"true\". Clients SHOULD NOT automatically agree to terms by default. Rather, they SHOULD require some user interaction for agreement to terms. The server creates an account and stores the public key used to verify the JWS (i.e., the \"jwk\" element of the JWS header) to authenticate future requests from the account. The server returns this account object in a 201 (Created) response, with the account URL in a Location header field. 7.3.1. If the server already has an account registered with the provided account key, then it MUST return a response with a 200 (OK) status code and provide the URL of that account in the Location header field. This allows a client that has an account key but not the corresponding account URL to recover the account URL. If a client wishes to recover an existing account and does not want a nonexistent account to be created, then it SHOULD do so by sending a POST request with an empty update. That is, it should send a JWS whose payload is an empty object ({}). 7.3.2. If the client wishes to update this information in the future, it sends a POST request with updated information to the account URL. 
The server MUST ignore any updates to \"order\" fields or any other fields it does not recognize. If the server accepts the update, it MUST return a response with a 200 (OK) status code and the resulting account object. For example, to update the contact information in the above account, the client could send the following request: 7.3.3. Servers SHOULD NOT respond to GET requests for account resources as these requests are not authenticated. If a client wishes to query the server for information about its account (e.g., to examine the"}
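The account-update rule in the record above (ignore "order", server-managed fields like "key", and anything unrecognized) can be sketched as a whitelist merge. The exact set of client-mutable field names below is an assumption for illustration, not an exhaustive list from the spec.

```python
# Sketch of server-side account update: only known, client-mutable
# fields are applied; everything else in the POSTed object is ignored.

MUTABLE_FIELDS = {"contact", "terms-of-service-agreed", "status"}

def apply_account_update(account: dict, update: dict) -> dict:
    """Return the account with only the mutable fields applied."""
    updated = dict(account)
    for field, value in update.items():
        if field in MUTABLE_FIELDS:   # all other fields MUST be ignored
            updated[field] = value
    return updated
```

Note that an empty update ({}) returns the account unchanged, which is exactly why the empty-payload POST works as an account-recovery probe.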
{"id": "q-en-acme-8a84ab0d6d27677388c00e4a94e3debeb2b4fda718309250517ce58a20d85036", "old_text": "a POST request with an empty update. That is, it should send a JWS whose payload is an empty object ({}). 7.3.1. As described above, a client can indicate its agreement with the CA's terms of service by setting the \"terms-of-service-agreed\" field in", "comments": "This PR is a first go at addressing the following points I raised on the mailing list.\nThis allows us to address the account URL recovery use case in without having special handling for empty payloads.", "new_text": "a POST request with an empty update. That is, it should send a JWS whose payload is an empty object ({}). 7.3.4. As described above, a client can indicate its agreement with the CA's terms of service by setting the \"terms-of-service-agreed\" field in"}
{"id": "q-en-acme-8a84ab0d6d27677388c00e4a94e3debeb2b4fda718309250517ce58a20d85036", "old_text": "human user to visit in order for instructions on how to agree to the terms. 7.3.2. The server MAY require a value to be present for the \"external- account-binding\" field. This can be used to an ACME account with an", "comments": "This PR is a first go at addressing the following points I raised on the mailing list.\nThis allows us to address the account URL recovery use case in without having special handling for empty payloads.", "new_text": "human user to visit for instructions on how to agree to the terms. 7.3.5. The server MAY require a value to be present for the \"external-account-binding\" field. This can be used to associate an ACME account with an"}
{"id": "q-en-acme-8a84ab0d6d27677388c00e4a94e3debeb2b4fda718309250517ce58a20d85036", "old_text": "any of these checks fail, then the CA MUST reject the new-account request. 7.3.3. A client may wish to change the public key that is associated with an account in order to recover from a key compromise or proactively", "comments": "This PR is a first go at addressing the following points I raised on the mailing list.\nThis allows us to address the account URL recovery use case in without having special handling for empty payloads.", "new_text": "any of these checks fail, then the CA MUST reject the new-account request. 7.3.6. A client may wish to change the public key that is associated with an account in order to recover from a key compromise or proactively"}
{"id": "q-en-acme-8a84ab0d6d27677388c00e4a94e3debeb2b4fda718309250517ce58a20d85036", "old_text": "responds with an error status code and a problem document describing the error. 7.3.4. A client can deactivate an account by posting a signed update to the server with a status field of \"deactivated.\" Clients may wish to do", "comments": "This PR is a first go at addressing the following points I raised on the mailing list.\nThis allows us to address the account URL recovery use case in without having special handling for empty payloads.", "new_text": "responds with an error status code and a problem document describing the error. 7.3.7. A client can deactivate an account by posting a signed update to the server with a status field of \"deactivated.\" Clients may wish to do"}
{"id": "q-en-acme-e3c2800e0f9ce09a90b5c7442ff7cc78755a2626128ad7b961b398377693b28a", "old_text": "then the server SHOULD change the status of the order to \"invalid\" and MAY delete the order resource. The server MUST issue the requested certificate and update the order resource with a URL for the certificate shortly after the client has fulfilled the server's requirements. If the client has already satisfied the server's requirements at the time of this request (e.g., by obtaining authorization for all of the identifiers in the certificate in previous transactions), then the server MUST proactively issue the requested certificate and provide a URL for it in the \"certificate\" field of the order. The server MUST, however, still list the completed authorizations in the \"authorizations\" array. Once the client believes it has fulfilled the server's requirements, it should send a GET request to the order resource to obtain its", "comments": "This change replaces \"shortly\" in the normative language of Section 7.4, Applying for Certificate Issuance, with the more precise expectation that the server begin the issuance process, as discussed on the mailing list. URL We could say \"MUST begin the issuance process\" The main things on my mind that could delay issuance slightly: Submitting to CT Checking CAA Internal queuing for available capacity Manual vetting I think \"MUST begin\" covers for all of those, while allowing some vagueness as to how long they will take. On 03/22/2017 09:39 AM, Daniel McCarney wrote:", "new_text": "then the server SHOULD change the status of the order to \"invalid\" and MAY delete the order resource. The server MUST begin the issuance process for the requested certificate and update the order resource with a URL for the certificate once the client has fulfilled the server's requirements. 
If the client has already satisfied the server's requirements at the time of this request (e.g., by obtaining authorization for all of the identifiers in the certificate in previous transactions), then the server MUST proactively issue the requested certificate and provide a URL for it in the \"certificate\" field of the order. The server MUST, however, still list the completed authorizations in the \"authorizations\" array. Once the client believes it has fulfilled the server's requirements, it should send a GET request to the order resource to obtain its"}
{"id": "q-en-acme-d2f8ed9d3749c2a35a7189051fcb67e11aef9991fc26dbc9c555e610f90a6213", "old_text": "not want an account to be created if one does not already exist, then it SHOULD do so by sending a POST request to the new-account URL with a JWS whose payload has an \"only-return-existing\" field set to \"true\" ({\"only-return-existing\": true}). 7.3.2.", "comments": "Is 200 really correct? This seems like a 404 maybe? Also we should specify the body (empty).\nWould a problem document be appropriate (with either a pre-existing ACME error type or a new one like \"accountDoesNotExist\")?\n404 talks about the effective request URI, so that's not going to work. I think that a 400 would be better on the basis that the request checked, but failed (NAME suggestion is a fine enhancement). The reason you make this request is to check; and having a 4xx series code is easier to test than the absence of a header field.\nThanks NAME and NAME implemented this in the latest commit.\nAs discussed; this is fine.", "new_text": "not want an account to be created if one does not already exist, then it SHOULD do so by sending a POST request to the new-account URL with a JWS whose payload has an \"only-return-existing\" field set to \"true\" ({\"only-return-existing\": true}). If a client sends such a request and an account does not exist, then the server MUST return an error response with status code 400 (Bad Request) and type \"urn:ietf:params:acme:error:accountDoesNotExist\". 7.3.2."}
{"id": "q-en-acme-0355d49974cb216d1557a7d3ddc85dea09a806057e8a2261ccf7fe52f13d56df", "old_text": "(OK) status code and the current contents of the account object. Once an account is deactivated, the server MUST NOT accept further requests authorized by that account's key. A server may take a variety of actions in response to an account deactivation, e.g., deleting data related to that account or sending mail to the account's contacts. Servers SHOULD NOT revoke certificates issued by the deactivated account, since this could cause operational disruption for servers using these certificates. ACME does not provide a way to reactivate a deactivated account. 7.4.", "comments": "It makes sense to cancel any pending cert orders when an account is deactivated, where the cert has not been issued yet.", "new_text": "(OK) status code and the current contents of the account object. Once an account is deactivated, the server MUST NOT accept further requests authorized by that account's key. The server SHOULD cancel any pending operations authorized by the account's key, such as certificate orders. A server may take a variety of actions in response to an account deactivation, e.g., deleting data related to that account or sending mail to the account's contacts. Servers SHOULD NOT revoke certificates issued by the deactivated account, since this could cause operational disruption for servers using these certificates. ACME does not provide a way to reactivate a deactivated account. 7.4."}
{"id": "q-en-acme-b3d4ef6c46e9ba8e55903c2a697f116c66928f3be01f494a10876183a534a925", "old_text": "To request authorization for an identifier, the client sends a POST request to the new-authorization resource specifying the identifier for which authorization is being requested and how the server should behave with respect to existing authorizations for this identifier. The identifier that the account is authorized to represent:", "comments": "Previously there was some notion of being able to hint to the server whether you wanted to reuse authorizations. This was removed from the spec but the pre-authorization flow still made a reference to it. This commit removes that obsolete reference. Thanks to NAME for pointing out the error. Resolves URL\nHi in the section on pre-authz there is this sentence: To request authorization for an identifier, the client sends a POST request to the new-authorization resource specifying the identifier for which authorization is being requested and how the server should behave with respect to existing authorizations for this identifier. the part \"and how the server should behave with respect to existing authorizations for this identifier\" probably is obsoleted\nHi NAME I submitted URL to fix this. Thanks for spotting & reporting :-)", "new_text": "To request authorization for an identifier, the client sends a POST request to the new-authorization resource specifying the identifier for which authorization is being requested. The identifier that the account is authorized to represent:"}
{"id": "q-en-acme-341338e4e4e20aebbd9ea0aa38a10862fb2c81177776a18f1b65c9423cc915a5", "old_text": "8.1. Several of the challenges in this document make use of a key authorization string. A key authorization is a string that expresses a domain holder's authorization for a specified key to satisfy a specified challenge, by concatenating the token for the challenge with a key fingerprint, separated by a \".\" character: The \"JWK_Thumbprint\" step indicates the computation specified in RFC7638, using the SHA-256 digest FIPS180-4. As noted in JWA RFC7518 any prepended zero octets in the fields of a JWK object MUST be stripped before doing the computation. As specified in the individual challenges below, the token for a challenge is a string comprised entirely of characters in the URL-", "comments": "I think without OOB all challenges use key-authorization \"JWA\" is not defined in this document. I think it can just be omitted\nNAME Thank you for the change! :smile:\nThanks again NAME I really appreciate your editing & the accompanying PRs! :-)", "new_text": "8.1. All challenges defined in this document make use of a key authorization string. A key authorization is a string that expresses a domain holder's authorization for a specified key to satisfy a specified challenge, by concatenating the token for the challenge with a key fingerprint, separated by a \".\" character: The \"JWK_Thumbprint\" step indicates the computation specified in RFC7638, using the SHA-256 digest FIPS180-4. As noted in RFC7518 any prepended zero octets in the fields of a JWK object MUST be stripped before doing the computation. As specified in the individual challenges below, the token for a challenge is a string comprised entirely of characters in the URL-"}
{"id": "q-en-acme-b2044467e584844d550510713d8cacc11307c391e6dc488ea70088e1a4572e71", "old_text": "Any identifier of type \"dns\" in a new-order request MAY have a wildcard domain name as its value. A wildcard domain name consists of a single asterisk character followed by a single full stop character (\"_.\") followed by a domain name as defined for use in the Subject Alternate Name Extension by RFC 5280 . An authorization returned by the server for a wildcard domain name identifier MUST NOT include the asterisk and full stop (\"_RFC5280.\") prefix in the authorization identifier value. The elements of the \"authorizations\" and \"identifiers\" array are immutable once set. The server MUST NOT change the contents either", "comments": "Prior to this commit the literal string \".\" was being rendered as the start of italics in the generated HTML doc. This commit updates the markup to escape the \"*.\"` in the generated document. Originally reported by Joel Sing - thanks!", "new_text": "Any identifier of type \"dns\" in a new-order request MAY have a wildcard domain name as its value. A wildcard domain name consists of a single asterisk character followed by a single full stop character (\"*.\") followed by a domain name as defined for use in the Subject Alternate Name Extension by RFC 5280 RFC5280. An authorization returned by the server for a wildcard domain name identifier MUST NOT include the asterisk and full stop (\"*.\") prefix in the authorization identifier value. The elements of the \"authorizations\" and \"identifiers\" array are immutable once set. The server MUST NOT change the contents either"}
{"id": "q-en-acme-e5bcb8df7267c7409ebef4adc3849698357b1f83cd17bf6815a563461ced82ef", "old_text": "a host which only functions as an ACME server could place the directory under the path \"/\". The object MAY additionally contain a field \"meta\". If present, it MUST be a JSON object; each field in the object is an item of metadata relating to the service provided by the ACME server.", "comments": "In Section 7.4.1 \"Pre-Authorization\" the spec says: If a CA wishes to allow pre-authorization within ACME, it can offer a \"new authorization\" resource in its directory by adding the field \"newAuthz\" with a URL for the new authorization resource. That text indicates that the CA may wish to not support pre-authorization in which case the \"newAuthz\" resource will not be present in the directory. One example of an ACME CA choosing to not support pre-authorization is Let's Encrypt. This commit clarifies the \"newAuthz\" field of the directory is optional in Section 7.1.1 \"Directory\" where a developer is likely to look to understand which resources must be supported/present in the directory and which are optional (\"meta\", \"newAuthz\"). If the server doesn't support pre-authorization it MUST omit the \"newAuthz\" field.", "new_text": "a host which only functions as an ACME server could place the directory under the path \"/\". If the ACME server does not implement pre-authorization (Section 7.4.1) it MUST omit the \"newAuthz\" field of the directory. The object MAY additionally contain a field \"meta\". If present, it MUST be a JSON object; each field in the object is an item of metadata relating to the service provided by the ACME server."}
{"id": "q-en-acme-a5bff19cc161777fcec4095bb5f97c84d2985b8c9c7030d363b28b3d416a51e7", "old_text": "problem. The \"type\" and \"detail\" fields MUST be populated. To facilitate automatic response to errors, this document defines the following standard tokens for use in the \"type\" field (within the \"urn:acme:\" namespace): Authorization and challenge objects can also contain error information to indicate why the server was unable to validate", "comments": "In addition to adding the error, as proposed in , this PR fixes some issues discussed in the F2F meeting, namely moving ACME URNs under the IETF parameters space, and having errors be a sub-namespace of an overall ACME namespace.", "new_text": "problem. The \"type\" and \"detail\" fields MUST be populated. To facilitate automatic response to errors, this document defines the following standard tokens for use in the \"type\" field (within the \"urn:ietf:params:acme:error:\" namespace): This list is not exhaustive. The server MAY return errors whose \"type\" field is set to a URI other than those defined above. Servers MUST NOT use the ACME URN namespace for errors other than the standard types. Clients SHOULD display the \"detail\" field of such errors. Authorization and challenge objects can also contain error information to indicate why the server was unable to validate"}
{"id": "q-en-acme-b7cecd46cefcba928190e82b9fae57a63d01bc281075d58725eff558b3361a35", "old_text": "The inner JWS MUST have the same \"url\" header parameter as the outer JWS. The inner JWS MAY omit the \"nonce\" header parameter. The server MUST ignore any value provided for the \"nonce\" header parameter. This transaction has signatures from both the old and new keys so that the server can verify that the holders of the two keys both", "comments": "URL\nThanks NAME On Sat, Sep 15, 2018 at 08:51:55PM -0500, Benjamin Kaduk wrote: My apologies if I missed it when it went by, but did we ever hear more about this requirement from the proponents of externalAccountBinding? Thanks, Benjamin", "new_text": "The inner JWS MUST have the same \"url\" header parameter as the outer JWS. The inner JWS MUST omit the \"nonce\" header parameter. This transaction has signatures from both the old and new keys so that the server can verify that the holders of the two keys both"}
{"id": "q-en-acme-2e550bc46eaf6ef157b515ed47ad11dc0aed4522d6b7afedde1fcb3a70d73805", "old_text": "Cut and paste the CSR into a CA's web page. Prove ownership of the domain by one of the following methods: Put a CA-provided challenge at a specific place on the web server.", "comments": "Some grammar fixes POST-as-GETs should be indicated as having a signature in flow Specify that the JWS profile we define in Section 6.2 is for ACME request bodies, to distinuish it from the profile used for External Account Binding. They have incompatible requirements: MUST NOT use MAC vs MUST use MAC. Clarify that Replay-Nonce is an HTTP header field since it's defined near JWS header parameters. Don't include \"wildcard\": false in examples. Section 7.1.4 requires that it MUST be absent when false. There was lingering language in the Section 7.5.1 about \"updating\" an authorization body in response to fields in a challenge POST. Since the challenge POST is now described explicitly as \"an empty JSON body\", this language didn't make sense. Fixed.\nPer offline review, I added back in a sentence that contained a MUST:", "new_text": "Cut and paste the CSR into a CA's web page. Prove ownership of the domain(s) in the CSR by one of the following methods: Put a CA-provided challenge at a specific place on the web server."}
{"id": "q-en-acme-2e550bc46eaf6ef157b515ed47ad11dc0aed4522d6b7afedde1fcb3a70d73805", "old_text": "The CA verifies that the client controls the requested domain name(s) by having the ACME client perform some action(s) that can only be done with control of the domain name(s). For example, the CA might require a client requesting example.com to provision DNS record under example.com or an HTTP resource under http://example.com. Once the CA is satisfied, it issues the certificate and the ACME", "comments": "Some grammar fixes POST-as-GETs should be indicated as having a signature in flow Specify that the JWS profile we define in Section 6.2 is for ACME request bodies, to distinuish it from the profile used for External Account Binding. They have incompatible requirements: MUST NOT use MAC vs MUST use MAC. Clarify that Replay-Nonce is an HTTP header field since it's defined near JWS header parameters. Don't include \"wildcard\": false in examples. Section 7.1.4 requires that it MUST be absent when false. There was lingering language in the Section 7.5.1 about \"updating\" an authorization body in response to fields in a challenge POST. Since the challenge POST is now described explicitly as \"an empty JSON body\", this language didn't make sense. Fixed.\nPer offline review, I added back in a sentence that contained a MUST:", "new_text": "The CA verifies that the client controls the requested domain name(s) by having the ACME client perform some action(s) that can only be done with control of the domain name(s). For example, the CA might require a client requesting example.com to provision a DNS record under example.com or an HTTP resource under http://example.com. Once the CA is satisfied, it issues the certificate and the ACME"}
{"id": "q-en-acme-2e550bc46eaf6ef157b515ed47ad11dc0aed4522d6b7afedde1fcb3a70d73805", "old_text": "MUST verify the JWS before processing the request. Encapsulating request bodies in JWS provides authentication of requests. JWS objects sent in ACME requests MUST meet the following additional criteria: The JWS MUST be in the Flattened JSON Serialization", "comments": "Some grammar fixes POST-as-GETs should be indicated as having a signature in flow Specify that the JWS profile we define in Section 6.2 is for ACME request bodies, to distinuish it from the profile used for External Account Binding. They have incompatible requirements: MUST NOT use MAC vs MUST use MAC. Clarify that Replay-Nonce is an HTTP header field since it's defined near JWS header parameters. Don't include \"wildcard\": false in examples. Section 7.1.4 requires that it MUST be absent when false. There was lingering language in the Section 7.5.1 about \"updating\" an authorization body in response to fields in a challenge POST. Since the challenge POST is now described explicitly as \"an empty JSON body\", this language didn't make sense. Fixed.\nPer offline review, I added back in a sentence that contained a MUST:", "new_text": "MUST verify the JWS before processing the request. Encapsulating request bodies in JWS provides authentication of requests. A JWS object sent as the body of an ACME request MUST meet the following additional criteria: The JWS MUST be in the Flattened JSON Serialization"}
{"id": "q-en-acme-2e550bc46eaf6ef157b515ed47ad11dc0aed4522d6b7afedde1fcb3a70d73805", "old_text": "6.5.1. The Replay-Nonce header field includes a server-generated value that the server can use to detect unauthorized replay in future client requests. The server MUST generate the value provided in Replay- Nonce in such a way that they are unique to each message, with high probability, and unpredictable to anyone besides the server. For instance, it is acceptable to generate Replay-Nonces randomly. The value of the Replay-Nonce field MUST be an octet string encoded according to the base64url encoding described in Section 2 of RFC7515. Clients MUST ignore invalid Replay-Nonce values. The ABNF RFC5234 for the Replay-Nonce header field follows:", "comments": "Some grammar fixes POST-as-GETs should be indicated as having a signature in flow Specify that the JWS profile we define in Section 6.2 is for ACME request bodies, to distinuish it from the profile used for External Account Binding. They have incompatible requirements: MUST NOT use MAC vs MUST use MAC. Clarify that Replay-Nonce is an HTTP header field since it's defined near JWS header parameters. Don't include \"wildcard\": false in examples. Section 7.1.4 requires that it MUST be absent when false. There was lingering language in the Section 7.5.1 about \"updating\" an authorization body in response to fields in a challenge POST. Since the challenge POST is now described explicitly as \"an empty JSON body\", this language didn't make sense. Fixed.\nPer offline review, I added back in a sentence that contained a MUST:", "new_text": "6.5.1. The Replay-Nonce HTTP header field includes a server-generated value that the server can use to detect unauthorized replay in future client requests. The server MUST generate the values provided in Replay-Nonce header fields in such a way that they are unique to each message, with high probability, and unpredictable to anyone besides the server. 
For instance, it is acceptable to generate Replay-Nonces randomly. The value of the Replay-Nonce header field MUST be an octet string encoded according to the base64url encoding described in Section 2 of RFC7515. Clients MUST ignore invalid Replay-Nonce values. The ABNF RFC5234 for the Replay-Nonce header field follows:"}
{"id": "q-en-acme-2e550bc46eaf6ef157b515ed47ad11dc0aed4522d6b7afedde1fcb3a70d73805", "old_text": "signed with the requested new account key. This \"inner\" JWS becomes the payload for the \"outer\" JWS that is the body of the ACME request. The outer JWS MUST meet the normal requirements for an ACME JWS (see request-authentication). The inner JWS MUST meet the normal requirements, with the following differences: The inner JWS MUST have a \"jwk\" header parameter, containing the public key of the new key pair.", "comments": "Some grammar fixes POST-as-GETs should be indicated as having a signature in flow Specify that the JWS profile we define in Section 6.2 is for ACME request bodies, to distinuish it from the profile used for External Account Binding. They have incompatible requirements: MUST NOT use MAC vs MUST use MAC. Clarify that Replay-Nonce is an HTTP header field since it's defined near JWS header parameters. Don't include \"wildcard\": false in examples. Section 7.1.4 requires that it MUST be absent when false. There was lingering language in the Section 7.5.1 about \"updating\" an authorization body in response to fields in a challenge POST. Since the challenge POST is now described explicitly as \"an empty JSON body\", this language didn't make sense. Fixed.\nPer offline review, I added back in a sentence that contained a MUST:", "new_text": "signed with the requested new account key. This \"inner\" JWS becomes the payload for the \"outer\" JWS that is the body of the ACME request. The outer JWS MUST meet the normal requirements for an ACME JWS request body (see request-authentication). The inner JWS MUST meet the normal requirements, with the following differences: The inner JWS MUST have a \"jwk\" header parameter, containing the public key of the new key pair."}
{"id": "q-en-acme-2e550bc46eaf6ef157b515ed47ad11dc0aed4522d6b7afedde1fcb3a70d73805", "old_text": "old key. The \"outer\" JWS represents the current account holder's assent to this request. On receiving keyChange request, the server MUST perform the following steps in addition to the typical JWS validation: Validate the POST request belongs to a currently active account, as described in message-transport.", "comments": "Some grammar fixes POST-as-GETs should be indicated as having a signature in flow Specify that the JWS profile we define in Section 6.2 is for ACME request bodies, to distinuish it from the profile used for External Account Binding. They have incompatible requirements: MUST NOT use MAC vs MUST use MAC. Clarify that Replay-Nonce is an HTTP header field since it's defined near JWS header parameters. Don't include \"wildcard\": false in examples. Section 7.1.4 requires that it MUST be absent when false. There was lingering language in the Section 7.5.1 about \"updating\" an authorization body in response to fields in a challenge POST. Since the challenge POST is now described explicitly as \"an empty JSON body\", this language didn't make sense. Fixed.\nPer offline review, I added back in a sentence that contained a MUST:", "new_text": "old key. The \"outer\" JWS represents the current account holder's assent to this request. On receiving a keyChange request, the server MUST perform the following steps in addition to the typical JWS validation: Validate the POST request belongs to a currently active account, as described in message-transport."}
{"id": "q-en-acme-2e550bc46eaf6ef157b515ed47ad11dc0aed4522d6b7afedde1fcb3a70d73805", "old_text": "That the client controls the identifier in question. This process may be repeated to associate multiple identifiers to a key pair (e.g., to request certificates with multiple identifiers) or to associate multiple accounts with an identifier (e.g., to allow multiple entities to manage certificates). Authorization resources are created by the server in response to certificate orders or authorization requests submitted by an account key holder; their URLs are provided to the client in the responses to these requests. The authorization object is implicitly tied to the account key used to sign the request. When a client receives an order from the server in reply to a new order request, it downloads the authorization resources by sending POST-as-GET requests to the indicated URLs. If the client initiates authorization using a request to the newAuthz resource, it will have already received the pending authorization object in the response to", "comments": "Some grammar fixes POST-as-GETs should be indicated as having a signature in flow Specify that the JWS profile we define in Section 6.2 is for ACME request bodies, to distinuish it from the profile used for External Account Binding. They have incompatible requirements: MUST NOT use MAC vs MUST use MAC. Clarify that Replay-Nonce is an HTTP header field since it's defined near JWS header parameters. Don't include \"wildcard\": false in examples. Section 7.1.4 requires that it MUST be absent when false. There was lingering language in the Section 7.5.1 about \"updating\" an authorization body in response to fields in a challenge POST. Since the challenge POST is now described explicitly as \"an empty JSON body\", this language didn't make sense. Fixed.\nPer offline review, I added back in a sentence that contained a MUST:", "new_text": "That the client controls the identifier in question. 
This process may be repeated to associate multiple identifiers with an account (e.g., to request certificates with multiple identifiers) or to associate multiple accounts with an identifier (e.g., to allow multiple entities to manage certificates). Authorization resources are created by the server in response to newOrder or newAuthorization requests submitted by an account key holder; their URLs are provided to the client in the responses to these requests. The authorization object is implicitly tied to the account key used to sign the request. When a client receives an order from the server in reply to a newOrder request, it downloads the authorization resources by sending POST-as-GET requests to the indicated URLs. If the client initiates authorization using a request to the newAuthz resource, it will have already received the pending authorization object in the response to"}
{"id": "q-en-acme-2e550bc46eaf6ef157b515ed47ad11dc0aed4522d6b7afedde1fcb3a70d73805", "old_text": "representation of the challenge with the response object provided by the client. The server MUST ignore any fields in the response object that are not specified as response fields for this type of challenge. The server provides a 200 (OK) response with the updated challenge object as its body. If the client's response is invalid for any reason or does not provide the server with appropriate information to validate the", "comments": "Some grammar fixes POST-as-GETs should be indicated as having a signature in flow Specify that the JWS profile we define in Section 6.2 is for ACME request bodies, to distinuish it from the profile used for External Account Binding. They have incompatible requirements: MUST NOT use MAC vs MUST use MAC. Clarify that Replay-Nonce is an HTTP header field since it's defined near JWS header parameters. Don't include \"wildcard\": false in examples. Section 7.1.4 requires that it MUST be absent when false. There was lingering language in the Section 7.5.1 about \"updating\" an authorization body in response to fields in a challenge POST. Since the challenge POST is now described explicitly as \"an empty JSON body\", this language didn't make sense. Fixed.\nPer offline review, I added back in a sentence that contained a MUST:", "new_text": "representation of the challenge with the response object provided by the client. The server MUST ignore any fields in the response object that are not specified as response fields for this type of challenge. Note that the challenges in this document do not define any response fields, but future specifications might define them. The server provides a 200 (OK) response with the updated challenge object as its body. If the client's response is invalid for any reason or does not provide the server with appropriate information to validate the"}
{"id": "q-en-acme-ba2984ce9ab3724ed14c7a07306fbc07dc630e4844d969fa17b4d1aa73a161aa", "old_text": "As a domain may resolve to multiple IPv4 and IPv6 addresses, the server will connect to at least one of the hosts found in A and AAAA records, at its discretion. The HTTP server may be made available over either HTTPS or unencrypted HTTP; the client tells the server in its response which to check. The string \"simpleHttp\"", "comments": "On the mailing list, NAME and NAME pointed out some risks arising from the default virtual host behavior of some web servers. This PR integrates the changes they suggest, namely (1) removing the \"tls\" option from the challenge, and (2) adding iterations to the challenge. This PR is based on top of and .\nSorry, feel free to tell me to go away, but reading the backing decision to drop HTTPS for SimpleHTTP doesn't make sense to me. If I understand everything correctly, you control the originator of the HTTPS requests and the requests are always sent to domains. So why can't you just enforce to always send the Host header and this wouldn't be an issue?\nFrom : behaviour that if a default TLS/HTTPS virtual host is not configured explicitly, one will be selected based on the ordering of vhosts in the webserver configuration (in practice, this often amounts to alphabetical ordering of files in a configuration directory). As far as I know Apache does the exact same thing for nonencrypted HTTP hosts (this is also acknowledged by a later mail in the thread), so the reasoning is a little weird. I suppose the idea is that hosters are more likely to have HTTP configured correctly rather than HTTPS?\nThe section on the \"\" registry says that \"Fields marked as \"client configurable\" may be included in a new-account request\". This seems like a copy-paste-o, and perhaps should say that such fields may be included in a new-order request.\nGood catch. I opened URL to fix. Thanks! 
Apache (and I gather nginx too) have the subtle and non-intuitive behaviour that if a default TLS/HTTPS virtual host is not configured explicitly, one will be selected based on the ordering of vhosts in the webserver configuration (in practice, this often amounts to alphabetical ordering of files in a configuration directory). URL URL This creates a vulnerability for SimpleHTTP and DVSNI in any multiple-tenant virtual hosting environment that failed to explicitly select a default vhost [1]. The vulnerability allows one tenant (typically one with the alphabetically lowest domain name -- a situation that may be easy for an attacker to arrange) to obtain certificates for other tenants. The exact conditions for the vulerability vary: SimpleHTTP is vulnerable if no default vhost is set explicitly, the server accepts the \"tls\": true parameter as currently specified, and the domains served over TLS/HTTPS are a strict subset of those served over HTTP. DVSNI is vulnerable if no default vhost is set explicitly, and the hosting environment provides a way for tenants to upload arbitrary certificates to be served on their vhost. James Kasten and Alex Halderman have done great work proposing a fix for DVSNI: URL That fix also has some significant performance benefits on systems with many virtual hosts, and the resulting challenge type should perhaps be called DVSNI2. The simplest fix for SimpleHTTP is to remove the \"tls\" option from the client's SimpleHTTP response, and require that the challenge be completed on port 80. URL Alternatively, the protocol could allow the server to specify which ports it will perform SimpleHTTP and SimpleHTTPS requests on; HTTPS on port 443 SHOULD be avoided unless the CA can confirm that the server has an explicitly configured default vhost. The above diffs are currently against Let's Encrypt's version of the ACME spec, we'll be happy to rebase them against the IETF version upon request. 
[1] though I haven't confirmed it, it's possible that sites that attempted to set a default vhost are vulernable if they followed advice like this when doing so: URL ; an attacker need only sign up with a lexicographically lower name on that hosting platform to obtain control of the default. Peter Eckersley EMAIL Chief Computer Scientist Tel +1 415 436 9333 x131 Electronic Frontier Foundation Fax +1 415 436 9993", "new_text": "As a domain may resolve to multiple IPv4 and IPv6 addresses, the server will connect to at least one of the hosts found in A and AAAA records, at its discretion. Because many webservers allocate a default HTTPS virtual host to a particular low-privilege tenant user in a subtle and non-intuitive manner, the challenge must be completed over HTTP, not HTTPS. The string \"simpleHttp\""}
{"id": "q-en-acme-ba2984ce9ab3724ed14c7a07306fbc07dc630e4844d969fa17b4d1aa73a161aa", "old_text": "value in the challenge. The client's response to this challenge indicates its agreement to this challenge in particular, and whether it would prefer for the validation request to be sent over TLS: The string \"simpleHttp\" The \"token\" value from the authorized key object in the challenge. If this attribute is present and set to \"false\", the server will perform its validation check over unencrypted HTTP (on port 80) rather than over HTTPS. Otherwise the check will be done over HTTPS, on port 443. On receiving a response, the server MUST verify that the \"token\" value in the response matches the \"token\" field in the authorized key object in the challenge. If they do not match, then the server MUST", "comments": "On the mailing list, NAME and NAME pointed out some risks arising from the default virtual host behavior of some web servers. This PR integrates the changes they suggest, namely (1) removing the \"tls\" option from the challenge, and (2) adding iterations to the challenge. This PR is based on top of and .\nSorry, feel free to tell me to go away, but reading the backing decision to drop HTTPS for SimpleHTTP doesn't make sense to me. If I understand everything correctly, you control the originator of the HTTPS requests and the requests are always sent to domains. So why can't you just enforce to always send the Host header and this wouldn't be an issue?\nFrom : behaviour that if a default TLS/HTTPS virtual host is not configured explicitly, one will be selected based on the ordering of vhosts in the webserver configuration (in practice, this often amounts to alphabetical ordering of files in a configuration directory). As far as I know Apache does the exact same thing for nonencrypted HTTP hosts (this is also acknowledged by a later mail in the thread), so the reasoning is a little weird. 
I suppose the idea is that hosters are more likely to have HTTP configured correctly rather than HTTPS?\nThe section on the \"\" registry says that \"Fields marked as \"client configurable\" may be included in a new-account request\". This seems like a copy-paste-o, and perhaps should say that such fields may be included in a new-order request.\nGood catch. I opened URL to fix. Thanks! Apache (and I gather nginx too) have the subtle and non-intuitive behaviour that if a default TLS/HTTPS virtual host is not configured explicitly, one will be selected based on the ordering of vhosts in the webserver configuration (in practice, this often amounts to alphabetical ordering of files in a configuration directory). URL URL This creates a vulnerability for SimpleHTTP and DVSNI in any multiple-tenant virtual hosting environment that failed to explicitly select a default vhost [1]. The vulnerability allows one tenant (typically one with the alphabetically lowest domain name -- a situation that may be easy for an attacker to arrange) to obtain certificates for other tenants. The exact conditions for the vulerability vary: SimpleHTTP is vulnerable if no default vhost is set explicitly, the server accepts the \"tls\": true parameter as currently specified, and the domains served over TLS/HTTPS are a strict subset of those served over HTTP. DVSNI is vulnerable if no default vhost is set explicitly, and the hosting environment provides a way for tenants to upload arbitrary certificates to be served on their vhost. James Kasten and Alex Halderman have done great work proposing a fix for DVSNI: URL That fix also has some significant performance benefits on systems with many virtual hosts, and the resulting challenge type should perhaps be called DVSNI2. The simplest fix for SimpleHTTP is to remove the \"tls\" option from the client's SimpleHTTP response, and require that the challenge be completed on port 80. 
URL Alternatively, the protocol could allow the server to specify which ports it will perform SimpleHTTP and SimpleHTTPS requests on; HTTPS on port 443 SHOULD be avoided unless the CA can confirm that the server has an explicitly configured default vhost. The above diffs are currently against Let's Encrypt's version of the ACME spec, we'll be happy to rebase them against the IETF version upon request. [1] though I haven't confirmed it, it's possible that sites that attempted to set a default vhost are vulernable if they followed advice like this when doing so: URL ; an attacker need only sign up with a lexicographically lower name on that hosting platform to obtain control of the default. Peter Eckersley EMAIL Chief Computer Scientist Tel +1 415 436 9333 x131 Electronic Frontier Foundation Fax +1 415 436 9993", "new_text": "value in the challenge. The client's response to this challenge indicates its agreement to this challenge: The string \"simpleHttp\" The \"token\" value from the authorized key object in the challenge. On receiving a response, the server MUST verify that the \"token\" value in the response matches the \"token\" field in the authorized key object in the challenge. If they do not match, then the server MUST"}
{"id": "q-en-acme-ba2984ce9ab3724ed14c7a07306fbc07dc630e4844d969fa17b4d1aa73a161aa", "old_text": "as expected. Form a URI by populating the URI template RFC6570 \"{scheme}://{domain}/.well-known/acme-challenge/{token}\", where: * the scheme field is set to \"http\" if the \"tls\" field in the response is present and set to false, and \"https\" otherwise; * the domain field is set to the domain name being verified; and * the token field is set to the token in the authorized key object.", "comments": "On the mailing list, NAME and NAME pointed out some risks arising from the default virtual host behavior of some web servers. This PR integrates the changes they suggest, namely (1) removing the \"tls\" option from the challenge, and (2) adding iterations to the challenge. This PR is based on top of and .\nSorry, feel free to tell me to go away, but reading the backing decision to drop HTTPS for SimpleHTTP doesn't make sense to me. If I understand everything correctly, you control the originator of the HTTPS requests and the requests are always sent to domains. So why can't you just enforce to always send the Host header and this wouldn't be an issue?\nFrom : behaviour that if a default TLS/HTTPS virtual host is not configured explicitly, one will be selected based on the ordering of vhosts in the webserver configuration (in practice, this often amounts to alphabetical ordering of files in a configuration directory). As far as I know Apache does the exact same thing for nonencrypted HTTP hosts (this is also acknowledged by a later mail in the thread), so the reasoning is a little weird. I suppose the idea is that hosters are more likely to have HTTP configured correctly rather than HTTPS?\nThe section on the \"\" registry says that \"Fields marked as \"client configurable\" may be included in a new-account request\". This seems like a copy-paste-o, and perhaps should say that such fields may be included in a new-order request.\nGood catch. I opened URL to fix. Thanks! 
Apache (and I gather nginx too) have the subtle and non-intuitive behaviour that if a default TLS/HTTPS virtual host is not configured explicitly, one will be selected based on the ordering of vhosts in the webserver configuration (in practice, this often amounts to alphabetical ordering of files in a configuration directory). URL URL This creates a vulnerability for SimpleHTTP and DVSNI in any multiple-tenant virtual hosting environment that failed to explicitly select a default vhost [1]. The vulnerability allows one tenant (typically one with the alphabetically lowest domain name -- a situation that may be easy for an attacker to arrange) to obtain certificates for other tenants. The exact conditions for the vulerability vary: SimpleHTTP is vulnerable if no default vhost is set explicitly, the server accepts the \"tls\": true parameter as currently specified, and the domains served over TLS/HTTPS are a strict subset of those served over HTTP. DVSNI is vulnerable if no default vhost is set explicitly, and the hosting environment provides a way for tenants to upload arbitrary certificates to be served on their vhost. James Kasten and Alex Halderman have done great work proposing a fix for DVSNI: URL That fix also has some significant performance benefits on systems with many virtual hosts, and the resulting challenge type should perhaps be called DVSNI2. The simplest fix for SimpleHTTP is to remove the \"tls\" option from the client's SimpleHTTP response, and require that the challenge be completed on port 80. URL Alternatively, the protocol could allow the server to specify which ports it will perform SimpleHTTP and SimpleHTTPS requests on; HTTPS on port 443 SHOULD be avoided unless the CA can confirm that the server has an explicitly configured default vhost. The above diffs are currently against Let's Encrypt's version of the ACME spec, we'll be happy to rebase them against the IETF version upon request. 
[1] though I haven't confirmed it, it's possible that sites that attempted to set a default vhost are vulernable if they followed advice like this when doing so: URL ; an attacker need only sign up with a lexicographically lower name on that hosting platform to obtain control of the default. Peter Eckersley EMAIL Chief Computer Scientist Tel +1 415 436 9333 x131 Electronic Frontier Foundation Fax +1 415 436 9993", "new_text": "as expected. Form a URI by populating the URI template RFC6570 \"http://{domain}/.well-known/acme-challenge/{token}\", where: * the domain field is set to the domain name being verified; and * the token field is set to the token in the authorized key object."}
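The record above fixes the simpleHttp validation URI so the scheme is always "http" (the "tls" option is removed). A minimal Python sketch of the template expansion described in that new_text; the function name and example token are illustrative, not from the spec:

```python
# Sketch: expand the simpleHttp validation URI template
# "http://{domain}/.well-known/acme-challenge/{token}" (RFC 6570 style),
# with the scheme fixed to "http" now that the "tls" option is gone.
def simple_http_uri(domain: str, token: str) -> str:
    return f"http://{domain}/.well-known/acme-challenge/{token}"

# Example (hypothetical token value):
# simple_http_uri("example.com", "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ")
```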
{"id": "q-en-acme-ba2984ce9ab3724ed14c7a07306fbc07dc630e4844d969fa17b4d1aa73a161aa", "old_text": "A serialized authorized key object, base64-encoded. The \"key\" field in this object MUST match the client's account key. In response to the challenge, the client MUST parse the authorized key object and verify that its \"key\" field contains the client's account key. The client then computes the SHA-256 digest Z of the JSON-encoded authorized key object (without base64-encoding), and encodes Z in hexadecimal form. The client will generate a self-signed certificate with the subjectAlternativeName extension containing the dNSName \"..acme.invalid\". The client will then configure the TLS server at the domain such that when a handshake is initiated with the Server Name Indication extension set to \"..acme.invalid\", the generated test certificate is presented. The response to the DVSNI challenge simply acknowledges that the client is ready to fulfill this challenge.", "comments": "On the mailing list, NAME and NAME pointed out some risks arising from the default virtual host behavior of some web servers. This PR integrates the changes they suggest, namely (1) removing the \"tls\" option from the challenge, and (2) adding iterations to the challenge. This PR is based on top of and .\nSorry, feel free to tell me to go away, but reading the backing decision to drop HTTPS for SimpleHTTP doesn't make sense to me. If I understand everything correctly, you control the originator of the HTTPS requests and the requests are always sent to domains. So why can't you just enforce to always send the Host header and this wouldn't be an issue?\nFrom : behaviour that if a default TLS/HTTPS virtual host is not configured explicitly, one will be selected based on the ordering of vhosts in the webserver configuration (in practice, this often amounts to alphabetical ordering of files in a configuration directory). 
As far as I know Apache does the exact same thing for nonencrypted HTTP hosts (this is also acknowledged by a later mail in the thread), so the reasoning is a little weird. I suppose the idea is that hosters are more likely to have HTTP configured correctly rather than HTTPS?\nThe section on the \"\" registry says that \"Fields marked as \"client configurable\" may be included in a new-account request\". This seems like a copy-paste-o, and perhaps should say that such fields may be included in a new-order request.\nGood catch. I opened URL to fix. Thanks! Apache (and I gather nginx too) have the subtle and non-intuitive behaviour that if a default TLS/HTTPS virtual host is not configured explicitly, one will be selected based on the ordering of vhosts in the webserver configuration (in practice, this often amounts to alphabetical ordering of files in a configuration directory). URL URL This creates a vulnerability for SimpleHTTP and DVSNI in any multiple-tenant virtual hosting environment that failed to explicitly select a default vhost [1]. The vulnerability allows one tenant (typically one with the alphabetically lowest domain name -- a situation that may be easy for an attacker to arrange) to obtain certificates for other tenants. The exact conditions for the vulerability vary: SimpleHTTP is vulnerable if no default vhost is set explicitly, the server accepts the \"tls\": true parameter as currently specified, and the domains served over TLS/HTTPS are a strict subset of those served over HTTP. DVSNI is vulnerable if no default vhost is set explicitly, and the hosting environment provides a way for tenants to upload arbitrary certificates to be served on their vhost. James Kasten and Alex Halderman have done great work proposing a fix for DVSNI: URL That fix also has some significant performance benefits on systems with many virtual hosts, and the resulting challenge type should perhaps be called DVSNI2. 
The simplest fix for SimpleHTTP is to remove the \"tls\" option from the client's SimpleHTTP response, and require that the challenge be completed on port 80. URL Alternatively, the protocol could allow the server to specify which ports it will perform SimpleHTTP and SimpleHTTPS requests on; HTTPS on port 443 SHOULD be avoided unless the CA can confirm that the server has an explicitly configured default vhost. The above diffs are currently against Let's Encrypt's version of the ACME spec, we'll be happy to rebase them against the IETF version upon request. [1] though I haven't confirmed it, it's possible that sites that attempted to set a default vhost are vulernable if they followed advice like this when doing so: URL ; an attacker need only sign up with a lexicographically lower name on that hosting platform to obtain control of the default. Peter Eckersley EMAIL Chief Computer Scientist Tel +1 415 436 9333 x131 Electronic Frontier Foundation Fax +1 415 436 9993", "new_text": "A serialized authorized key object, base64-encoded. The \"key\" field in this object MUST match the client's account key. Number of DVSNI iterations In response to the challenge, the client MUST decode and parse the authorized keys object and verify that it contains exactly one entry, whose \"token\" and \"key\" attributes match the token for this challenge and the client's account key. The client then computes the SHA-256 digest Z0 of the JSON-encoded authorized key object (without base64-encoding), and encodes Z0 in UTF-8 lower-case hexadecimal form. The client then generates iterated hash values Z1...Z(n-1) as follows: The client generates a self-signed certificate for each iteration of Zi with a single subjectAlternativeName extension dNSName that is \"..acme.invalid\", where \"Zi[0:32]\" and \"Zi[32:64]\" represent the first 32 and last 32 characters of the hex- encoded value, respectively (following the notation used in Python). 
The client then configures the TLS server at the domain such that when a handshake is initiated with the Server Name Indication extension set to \"..acme.invalid\", the corresponding generated certificate is presented. The response to the DVSNI challenge simply acknowledges that the client is ready to fulfill this challenge."}
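The iterated DVSNI computation above can be sketched in Python. Note that the record elides the exact rule for deriving Z1...Z(n-1); this sketch assumes each Zi is the lower-case hex SHA-256 of the previous hex string, which matches contemporaneous ACME draft text, and the SNI name follows the record's "Zi[0:32].Zi[32:64].acme.invalid" notation. Function names are illustrative:

```python
import hashlib
import json

def dvsni_z_values(authorized_key_obj, n):
    """Compute Z0..Z(n-1) as lower-case hex SHA-256 digests.
    Z0 is the digest of the JSON-encoded authorized key object; the
    iteration rule for Z1..Z(n-1) is assumed (elided in the record)."""
    z = hashlib.sha256(json.dumps(authorized_key_obj).encode("utf-8")).hexdigest()
    zs = [z]
    for _ in range(n - 1):
        z = hashlib.sha256(z.encode("utf-8")).hexdigest()
        zs.append(z)
    return zs

def dvsni_sni_name(zi):
    # First 32 and last 32 hex characters, per the record's Python-style
    # slicing notation: "Zi[0:32].Zi[32:64].acme.invalid".
    return f"{zi[0:32]}.{zi[32:64]}.acme.invalid"
```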
{"id": "q-en-acme-ba2984ce9ab3724ed14c7a07306fbc07dc630e4844d969fa17b4d1aa73a161aa", "old_text": "client's control of the domain by verifying that the TLS server was configured appropriately. Compute the Z-value from the authorized key object in the same way as the client. Open a TLS connection to the domain name being validated on port 443, presenting the value \"..acme.invalid\" in the SNI field (where the comparison is case-insensitive). Verify that the certificate contains a subjectAltName extension with the dNSName of \"..acme.invalid\". It is RECOMMENDED that the ACME server validation TLS connections from multiple vantage points to reduce the risk of DNS hijacking", "comments": "On the mailing list, NAME and NAME pointed out some risks arising from the default virtual host behavior of some web servers. This PR integrates the changes they suggest, namely (1) removing the \"tls\" option from the challenge, and (2) adding iterations to the challenge. This PR is based on top of and .\nSorry, feel free to tell me to go away, but reading the backing decision to drop HTTPS for SimpleHTTP doesn't make sense to me. If I understand everything correctly, you control the originator of the HTTPS requests and the requests are always sent to domains. So why can't you just enforce to always send the Host header and this wouldn't be an issue?\nFrom : behaviour that if a default TLS/HTTPS virtual host is not configured explicitly, one will be selected based on the ordering of vhosts in the webserver configuration (in practice, this often amounts to alphabetical ordering of files in a configuration directory). As far as I know Apache does the exact same thing for nonencrypted HTTP hosts (this is also acknowledged by a later mail in the thread), so the reasoning is a little weird. 
I suppose the idea is that hosters are more likely to have HTTP configured correctly rather than HTTPS?\nThe section on the \"\" registry says that \"Fields marked as \"client configurable\" may be included in a new-account request\". This seems like a copy-paste-o, and perhaps should say that such fields may be included in a new-order request.\nGood catch. I opened URL to fix. Thanks! Apache (and I gather nginx too) have the subtle and non-intuitive behaviour that if a default TLS/HTTPS virtual host is not configured explicitly, one will be selected based on the ordering of vhosts in the webserver configuration (in practice, this often amounts to alphabetical ordering of files in a configuration directory). URL URL This creates a vulnerability for SimpleHTTP and DVSNI in any multiple-tenant virtual hosting environment that failed to explicitly select a default vhost [1]. The vulnerability allows one tenant (typically one with the alphabetically lowest domain name -- a situation that may be easy for an attacker to arrange) to obtain certificates for other tenants. The exact conditions for the vulerability vary: SimpleHTTP is vulnerable if no default vhost is set explicitly, the server accepts the \"tls\": true parameter as currently specified, and the domains served over TLS/HTTPS are a strict subset of those served over HTTP. DVSNI is vulnerable if no default vhost is set explicitly, and the hosting environment provides a way for tenants to upload arbitrary certificates to be served on their vhost. James Kasten and Alex Halderman have done great work proposing a fix for DVSNI: URL That fix also has some significant performance benefits on systems with many virtual hosts, and the resulting challenge type should perhaps be called DVSNI2. The simplest fix for SimpleHTTP is to remove the \"tls\" option from the client's SimpleHTTP response, and require that the challenge be completed on port 80. 
URL Alternatively, the protocol could allow the server to specify which ports it will perform SimpleHTTP and SimpleHTTPS requests on; HTTPS on port 443 SHOULD be avoided unless the CA can confirm that the server has an explicitly configured default vhost. The above diffs are currently against Let's Encrypt's version of the ACME spec, we'll be happy to rebase them against the IETF version upon request. [1] though I haven't confirmed it, it's possible that sites that attempted to set a default vhost are vulernable if they followed advice like this when doing so: URL ; an attacker need only sign up with a lexicographically lower name on that hosting platform to obtain control of the default. Peter Eckersley EMAIL Chief Computer Scientist Tel +1 415 436 9333 x131 Electronic Frontier Foundation Fax +1 415 436 9993", "new_text": "client's control of the domain by verifying that the TLS server was configured appropriately. Choose a subset of the N DVSNI iterations to check, according to local policy. For each iteration, compute the Zi-value from the authorized keys object in the same way as the client. Open a TLS connection to the domain name being validated on the requested port, presenting the value \"..acme.invalid\" in the SNI field (where the comparison is case-insensitive). Verify that the certificate contains a subjectAltName extension with the dNSName of \"..acme.invalid\", and that no other dNSName entries of the form \"*.acme.invalid\" are present in the subjectAltName extension. It is RECOMMENDED that the ACME server verify a random subset of the N iterations, with an appropriately sized sample, to ensure that an attacker who can provision certs for a default virtual host, but not for arbitrary simultaneous virtual hosts, cannot pass the challenge. For instance, testing a subset of 5 of N=25 domains ensures that such an attacker has only a one in 25/5 chance of success if they post certs Zn in random succession. 
(This probability is enforced by the requirement that each certificate have only one Zi value.) It is RECOMMENDED that the ACME server validate TLS connections from multiple vantage points to reduce the risk of DNS hijacking"}
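The server-side spot-check described in the record above (verify a random subset of the N DVSNI iterations, per local policy) can be sketched as follows; the function name and parameters are illustrative, not from the spec:

```python
import random

def choose_iterations(n, k, rng=None):
    """Pick k of the N DVSNI iterations to spot-check, per local policy.

    A uniform random sample keeps an attacker who can only provision
    one default-vhost certificate at a time from predicting which Zi
    values the server will probe."""
    rng = rng or random.Random()
    return sorted(rng.sample(range(n), k))
```

For instance, `choose_iterations(25, 5)` returns 5 distinct iteration indices in the range 0..24, matching the record's example of testing a subset of 5 of N=25.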
{"id": "q-en-anima-brski-prm-07432318f6592df19cc58130a3c24601512886fb4523a2e9077277298daf5838", "old_text": "This is necessary to allow the discovery of pledges by the registrar- agent using mDNS. The list may be provided by administrative means or the registrar agent may get the information via an interaction with the pledge, like scanning of product-serial-number information using a QR code or similar. According to RFC8995 section 5.3, the domain registrar performs the pledge authorization for bootstrapping within his domain based on the", "comments": "Included in the text. Maybe will include further text for\nIn 5.5.1: This kind of suggests that an Agent may also not know the serial number. But in creating the agent-signed-data, this number is required later on. Could we clarify perhaps the Agent MUST know the serial-number; e.g. in one of these ways: configured (as in quoted text above) discovered (in \"standard\" way using mDNS / DNS-SD first part of service instance name) discovered (in vendor-specific way e.g. RF beacons) other (in vendor-specific way, e.g. scan QR code of packaging) - maybe that's the same as category 1 from the Agent point of view.\nAdditional Note: figure 4 doesn't show the part where the Agent discovers the serial number of the Pledge. This was adding to the confusion. Maybe figure 4 can stay as is; this is just to clarify that at the start of Fig 4 the Agent MUST know the serial-number. So some small link is missing to get the complete picture.\nI reopened as the initial point if the registrar-agent MUST know the serial number is not solved by the QR code reference. As the registrar-agent needs to know, which pledges he is bootstrapping, he needs the list of serial numbers in advance to be able to construct the agent signed data. Text proposed will be included into the pre-conditions described in section 5.5.1, In addition, the registrar-agent MUST know the product-serial-number(s) of the pledge(s) to be bootstrapped. 
The registrar-agent MAY be provided with the product-serial-number in different ways: configured, e.g., as a list of pledges to be bootstrapped via QR code scanning discovered by using standard approaches like mDNS as described in 5.4.2. discovered by using a vendor specific approach, e.g., RF beacons\nOk", "new_text": "This is necessary to allow the discovery of pledges by the registrar- agent using mDNS. The list may be provided by administrative means or the registrar agent may get the information via an interaction with the pledge. For instance, RFC9238 describes scanning of a QR code, the product-serial-number would be initialized from the 12N B005 Product Serial Number. According to RFC8995 section 5.3, the domain registrar performs the pledge authorization for bootstrapping within his domain based on the"}
{"id": "q-en-api-drafts-4bf2e8e113cb189771011164dc2f8fe5730df030834a9aa972d3b8f004989f42", "old_text": "the main Connection implementation that delivers events to the custom framer implementation whenever data is ready to be parsed or framed. When a Connection establishment attempt begins, an event can be delivered to notify the framer implementation that a new Connection is being created. Similarly, a stop event can be delivered when a", "comments": "It seems blindingly obvious to me that an implementation would need to take care of this... and this note of caution seems quite appropriate (if unnecessary... but surely it doesn't hurt).\nDynamic framer add and passthrough (last two paragraphs of section 6.1) assume synchronous operation; what happens to bytes in the send/receive buffers when the framer stack changes? There either needs to be an explicit mutex here (allowing the framer/implementation to lock the buffers at the exact point the framer change happens, not allowing any bytes to be processed while the dynamic framer decision is made), or some way to annotate in a buffer the replay point after which a dynamic reframing is necessary. Are we going to leave this up to implementors to figure out, or should we point out that here be dragons?\nWe should clarify that these operations within the data path of the stack should be synchronized.", "new_text": "the main Connection implementation that delivers events to the custom framer implementation whenever data is ready to be parsed or framed. The Transport Services implementation needs to ensure that all of the events and actions taken on a Message Framer are synchronized to ensure consistent behavior. For example, some of the actions defined below (such as PrependFramer and StartPassthrough) modify how data flows in a protocol stack, and require synchronization with sending and parsing data in the Message Framer. 
When a Connection establishment attempt begins, an event can be delivered to notify the framer implementation that a new Connection is being created. Similarly, a stop event can be delivered when a"}
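The record above requires that all events and actions on a Message Framer be synchronized, since actions like StartPassthrough change how data flows. A minimal sketch of one way an implementation might do this, using a single per-framer lock; the class and method names are illustrative, not part of the Transport Services API:

```python
import threading

class MessageFramer:
    """Sketch only: serialize framer actions and data parsing with one
    lock, so stack-modifying actions (e.g. a StartPassthrough analogue)
    never interleave with parsing received data."""

    def __init__(self):
        self._lock = threading.Lock()
        self._passthrough = False

    def start_passthrough(self):
        # Action that modifies how data flows; must not race with parsing.
        with self._lock:
            self._passthrough = True

    def handle_received_data(self, data: bytes) -> bytes:
        with self._lock:
            if self._passthrough:
                return data            # deliver bytes unmodified
            return self._deframe(data) # framer-specific parsing

    def _deframe(self, data: bytes) -> bytes:
        return data  # placeholder for real message deframing
```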
{"id": "q-en-api-drafts-9ce4403a8d8791280bf4d897ab6b6c60d06149002a9cd899cfe4f38e3b51b6f3", "old_text": "4.1.2. Branch types must occur in a specific order relative to one another to avoid creating leaf nodes with invalid or incompatible settings. In the example above, it would be invalid to branch for derived endpoints (the DNS results for www.example.com) before branching between interface paths, since there are situations when the results will be different across networks due to private names or different supported IP versions. Implementations must be careful to branch in an order that results in usable leaf nodes whenever there are multiple branch types that could be used from a single node. The order of operations for branching should be: Alternate Paths Protocol Options", "comments": "SORRY I made a PR on your PR, when I meant to leave a review comment, see 1024. .... Silly to make a PR on a PR - I added a review comment instead...\nIn section 4.1.4 in the impl draft, we have: \"The order of operations for branching, where lower numbers are acted upon first, should be: Alternate Paths Protocol Options Derived Endpoints\" However, I would say there is not really a strict ordering required. The text actually indicates that selection of the protocol is rather independent while there is usually a dependency between the interface and derived endpoint. I think this is related to my other issue about the use of the term path, because a path consists of both the local and remote endpoint and therefore there is of course a dependency. So in summary I'm not sure how useful I find this whole section as currently written...\nWe should conclude if anything more is needed here besides cleaning up the terminology, where Alternate Paths is not a good term - see .\nAny implementation should have a consistent order of operations. This is a recommended example. Reference the arch text that says it needs to work the same way each time.\nReference ICE/HE", "new_text": "4.1.2. 
Branch types ought to occur in a specific order relative to one another to avoid creating leaf nodes with invalid or incompatible settings. In the example above, it would be invalid to branch for derived endpoints (the DNS results for www.example.com) before branching between interface paths, since there are situations when the results will be different across networks due to private names or different supported IP versions. Implementations need to be careful to branch in a consistent order that results in usable leaf nodes whenever there are multiple branch types that could be used from a single node. This document recommends the following order of operations for branching: Network Paths Protocol Options"}
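The recommended branching order in the record above (Network Paths, then Protocol Options, then Derived Endpoints) can be sketched as a nested enumeration; resolving endpoints per path matters because DNS answers can differ across networks. Names and data shapes here are illustrative, not from the spec:

```python
from itertools import product

def gather_candidates(paths, protocols, resolve, host):
    """Branch in the recommended order: paths, then protocols, then
    derived endpoints. resolve(path, host) returns the endpoints
    derived for that host on that particular path, since private
    names or IP-version support can differ per network."""
    leaves = []
    for path, proto in product(paths, protocols):
        for endpoint in resolve(path, host):
            leaves.append((path, proto, endpoint))
    return leaves
```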
{"id": "q-en-api-drafts-6e463665b6451b890ec684b00861b97ed863c46948a8a1ef38859e6f56418928", "old_text": "more. connection-props provides a list of Connection Properties, while Selection Properties are listed in the subsections below. Many properties are only considered during establishment, and can not be changed after a Connection is established; however, they can still be queried. The return type of a queried Selection Property is Boolean, where \"true\" means that the Selection Property has been applied and \"false\" means that the Selection Property has not been applied. Note that \"true\" does not mean that a request has been honored. For example, if \"Congestion control\" was requested with preference level \"Prefer\", but congestion control could not be supported, querying the \"congestionControl\" property yields the value \"false\". If the preference level \"Avoid\" was used for \"Congestion control\", and, as requested, the Connection is not congestion controlled, querying the \"congestionControl\" property also yields the value \"false\". An implementation of the Transport Services API must provide sensible defaults for Selection Properties. The default values for each", "comments": "NAME wrote: \"Why is the text talking about \"many properties\" that can no longer be changed, instead of relying on the earlier, nice and clear distinction between Selection and Connection Properties?\". => In line with the suggestion from NAME this is now changed, it says Selection Properties. NAME wrote: \"Where earlier, Selection Properties were said to remain readable, now they are only queryable?\". => In line with NAME answer, this PR replaces \"query\" with \"read\" in this text. NAME wrote: \"Are \"applied\" Selection Properties still Selection Properties? 
Shouldn't they better be referred to using a different term?\" and here, NAME suggested: \"In the , we define Protocol Stack as a Transport Services implementation concept, and we talk about a certain protocol stacks providing certain features (see . Perhaps we could rephrase this part of the API draft to use that terminology, e.g., the application can query the actually used protocol stack and then know which Transport Features it supports.\" => I don't know how to apply this change, and it generally sounds a bit complicated to me. I think that Selection Properties should keep their name. Regarding the confusing text with the example, I removed this and I think it makes things clearer. Regarding the return type, I think I now finally understood the bigger problem. Returning a Boolean instead of a Preference should make sense and should be easy to implement (as also described in this PR). The problem is that not all Selection Properties are of type Preference. E.g., is an enumeration, representing , or . What sense does it make if this becomes only or ? Similarly, and are of type \"Collection of (Preference, Enumeration)\", and indeed it would make sense to expect a Collection of booleans, not a singular boolean. This PR now explicitly says that, upon reading, the Preference type of a Selection Property (not the whole Selection Property!) changes into Boolean.\nMANY thanks for the proposed fix (which I committed)! 
I hadn't really understood the problem just from reading the issue discussion.\nAs , here follows a bit of feedback on the different Property types in the Interface draft, from the perspective of an academic developer who is currently attempting to follow the draft spec as closely as possible while trying to implement a path-aware transport system in a In , it says: At this point, the reader of the document could be expected to conclude the following: Selection Properties are a set of preferences for protocol/path selection that (obviously) can no longer be meaningfully changed once a connection has been established. Connection Properties are a set of \"run-time preferences\" for connections which could still be adjusted when a connection is already established, but should ideally already be known during protocol/path selection The text goes on to state: With this wording, the reader can now reasonably conclude that the original Selection Properties that have resulted in a given Connection should remain available during the lifetime of that Connection, perhaps in the form of a copy of the original data structure. So far, so straightforward. In however, this conclusion is called into question. After the text has gone on to introduce the 5 different Preference Levels, it states At this point, the reader becomes confused. Why is the text talking about \"many properties\" that can no longer be changed, instead of relying on the earlier, nice and clear distinction between Selection and Connection Properties? Where earlier, Selection Properties were said to remain readable, now they are only queryable? The type of a Selection Property that has been \"applied\" mutates from a Preference Level to become a Boolean? Are \"applied\" Selection Properties still Selection Properties? Shouldn't they better be referred to using a different term? So the original Selection Preferences that resulted in the current Connection are not available after all? 
Only the (Boolean) Properties of the chosen Protocol can be \"queried\"? But wait, the text goes on: This is a complicated way of saying that the values only reflect the actual Properties of the chosen Connection and (in direct contradiction to the previous paragraph) do not actually indicate whether a Selection Property has also been \"applied\" or not. (A Boolean only carries exactly one bit of information after all.) In other words, the Protocol Properties can be queried. The usefulness of this is probably limited however. If we already know that our Connection is using, e.g., TCP, we don't have to issue any \"queries\" to know that the Connection offers reliable transport, is subject to congestion control and preserves packet ordering. (This is indeed how I have chosen to implement this. The chosen Protocol can be queried for the Properties it inherently has. In the Preconnection phase, the Selection Preferences are matched against these and any / conflict results in that Protocol being dropped from consideration.) Hoping not to be unreasonable (or somehow block-headed), I have the following suggestions Either introduce a new term for Properties that have been \"applied\", or Use the term \"Preferences\" (e.g., \"Selection Preferences\", \"Connection Preferences\") to refer to a collection of adjustable \"Settings\" and the term \"Properties\" to refer to measured or queried values that are actual Properties of underlying Connections and Protocols Go into a bit more detail about what the draft really means when talking about \"querying\" Properties (versus setting them), maybe even in terms of exemplary interactions with Property data structures. 
(After all, the draft elsewhere goes into considerable detail with its exemplary Event-based system) Think about whether it in general would clear things up when the explicit notion of Protocol Properties as a (static) counterpoint to (adjustable) Selection Properties is introduced\n(Replying as an individual and co-author) Thanks for raising this issue! Yes, this matches my understanding as well. If the later text confuses this issue for the reader, then perhaps we'll need to tweak it. I'd say that Selection Properties can no longer be changed, Connection Properties still can be changed (though maybe changing a Connection Property later would not have any effect, e.g., if Capacity Profile changes but was only taken into account when selecting a path). Perhaps the text should reflect that. For me, these are synonyms, whereby a \"query\" is what the application has to do in order to read the properties. Perhaps the \"queried\" should become \"read\" to avoid confusion. That is indeed how I think of it on a conceptual level, but I understand how this might confuse implementers, especially if they can't just change a data type of a property. In the , we define Protocol Stack as a Transport Services implementation concept, and we talk about certain protocol stacks providing certain features (see . Perhaps we could rephrase this part of the API draft to use that terminology, e.g., the application can query the actually used protocol stack and then know which Transport Features it supports. I would strongly prefer using our existing terminology to introducing yet another term. Good question. I vaguely remember discussing this, and I think folks felt like querying the Selection Properties that the application had set was not very useful at that stage. The other text you quoted indeed sounds a bit complicated and maybe it can be reworded to be clearer. Curious to hear what my co-authors and/or other TAPS participants think!\nI completely second what NAME wrote. 
With regards to querying the Transport Features selected vs. getting the preferences provided earlier, I remember we had lengthy discussions and somewhat converged to the current state. Primary arguments were Multiple read/query variants are confusing Original Preferences are not really useful - if you really want to read them later, keep the Preferences Object Constrained implementations don't want to be forced to store them. NAME based on these arguments, what option would you prefer a) different get/query operations of selection properties after the connection has been set up b) the current way of \"mutating\" preferences with clearer description in the text\nJust to add: having finally gotten around to carefully reading this, I also completely agree with everything NAME wrote. This is exactly how I would have answered, and her suggestions are exactly what I think should be done to address these comments. I also agree with NAME that these were the arguments regarding the querying issue, and I agree with the arguments as well. The quoted text containing the example is a great catch! Such a terribly convoluted paragraph. I have a nagging feeling that this was my fault! :-(\nPR tries to solve this; the PR itself contains a full explanation of my rationale for the changes. Please take a look!\nGreat, this looks good to me now. I think now it is simple enough that we don't need an example anymore.Looks good, one clarification suggestedWorks for me (with NAME changes). I would propose a slight rewording -> and add a hint that this exposing can also be implemented differently.", "new_text": "more. connection-props provides a list of Connection Properties, while Selection Properties are listed in the subsections below. Selection Properties are only considered during establishment, and can not be changed after a Connection is established. After a Connection is established, Selection Properties can only be read to check the properties used by the Connection. 
Upon reading, the Preference type of a Selection Property changes into Boolean, where \"true\" means that the selected Protocol Stack supports the feature or uses the path associated with the Selection Property, and \"false\" means that the Protocol Stack does not support the feature or use the path. Implementations of Transport Services systems may alternatively use the two Preference values \"Require\" and \"Prohibit\" to represent \"true\" and \"false\", respectively. An implementation of the Transport Services API must provide sensible defaults for Selection Properties. The default values for each"}
{"id": "q-en-api-drafts-da7626fc03fbcaf1ca6439281a420b40331778b283d2750369148088274466c5", "old_text": "highest available capacity, based on the observed maximum throughput. Implementations process the Properties (Section 6.2 of I-D.ietf-taps- interface) in the following order: Prohibit, Require, Prefer, Avoid. If Selection Properties contain any prohibited properties, the", "comments": "Appendix B.1 lists two properties as \"under discussion\": Bounds on Send or Receive Rate: (PR 1036 fixes the name): this is no longer under discussion, these properties exist. Cost Preferences: this is no longer under discussion, I suppose?? We don't have this in the interface draft. So I have the impression that this should no longer be an appendix, the text on Cost Preferences should be removed, and the text on Bounds on Send and Receive Rate should be moved elsewhere, and not be mentioned as \"under discussion\".\nLGMT", "new_text": "highest available capacity, based on the observed maximum throughput. As another example, branch sorting can also be influenced by bounds on the Send or Receive rate (Selection Properties \"minSendRate\" / \"minRecvRate\" / \"maxSendRate\" / \"maxRecvRate\"): if the application indicates a bound on the expected Send or Receive bitrate, an implementation may prefer a path that can likely provide the desired bandwidth, based on cached maximum throughput, see performance- caches. The application may know the Send or Receive Bitrate from metadata in adaptive HTTP streaming, such as MPEG-DASH. Implementations process the Properties (Section 6.2 of I-D.ietf-taps- interface) in the following order: Prohibit, Require, Prefer, Avoid. If Selection Properties contain any prohibited properties, the"}
{"id": "q-en-api-drafts-10bd83f54c46e4cb5e585dd57962e1152a99429a7f419a2a37f075865f38cd07", "old_text": "implementation to communicate control data to the Remote Endpoint that can be used to parse Messages. Similarly, when a Message Framer generates a \"Stop\" event, the framer implementation has the opportunity to write some final data or clear up its local state before the \"Closed\" event is delivered to the Application. The framer implementation can indicate that it has finished with this. At any time if the implementation encounters a fatal error, it can also cause the Connection to fail and provide an error.", "comments": "Capitalization seems to be a problem for us. We need to make things uniform. At first, I reacted to seeing \"connection\" as parameters in this PR, because we use the term \"Connection\" to refer to a TAPS-provided connection (as opposed to a connection of the underlying system), and so, de-capitalizing the term here may look better, but it is also somewhat imprecise. This said, we do the same with other capitalized terms too, here and in the API draft - e.g., \"messageContext\". Should we generally agree to begin terms with a lowercase letter when they are used as parameters in code examples? I think we have this in most places, though at least the API draft has several counterexamples... e.g., the server example code: Preconnection := NewPreconnection(LocalSpecifier, TransportProperties, SecurityParameters) (..) Listener -> ConnectionReceived\nFrom discussion on\nThis was a mistake. As I wanted to do this now, I noticed that all occurrences of capitalized terms as parameters are objects that are defined and used elsewhere in the code examples. For example: Preconnection := NewPreconnection(RemoteSpecifier, TransportProperties, SecurityParameters) refers to RemoteSpecifier etc., being objects that are initialized some lines above. 
Making them appear in lower case everywhere would be awkward, this is then a much more drastic change to our pseudocode. As a result, I think it's fine to leave things as they are, and I suggest to close this issue with no action.\nFramers: What is the difference between the Final message property and the separate IsEndOfMessage event argument?\nAlso there, the term MessageContext could use a definition\nand searching \"isEndOfMessage\" tells me that there is no text describing this parameter. Regarding the question, the difference should be clear: seems self-explanatory, whereas indicates that the Message carrying it is the last one of the Connection. So, this is End of the Message vs. End of the Connection.\nSounds like this needs a reference to the API doc, and change IsEndOfMessage to endOfMessage to match the API.\nProbably fix with\nI think this looks ok (and thanks for doing this!), but see also my comment about making things uniform wrt. capitalization: if we merge this PR, we should also do one that does the same with the API draft.Thanks LGTM! Although I noticed another nit that can perhaps be fixed in the same PR even if not directly related to the parameter names. Regarding the capitalization I think we can merge this PR and then have a separate issue for the capitalization as we will need to fix this either way.", "new_text": "implementation to communicate control data to the Remote Endpoint that can be used to parse Messages. Once the framer implementation has completed its setup or handshake, it can indicate to the application that it is ready to handle data with this call. Similarly, when a Message Framer generates a \"Stop\" event, the framer implementation has the opportunity to write some final data or clear up its local state before the \"Closed\" event is delivered to the Application. The framer implementation can indicate that it has finished with this call. 
At any time if the implementation encounters a fatal error, it can also cause the Connection to fail and provide an error."}
{"id": "q-en-api-drafts-10bd83f54c46e4cb5e585dd57962e1152a99429a7f419a2a37f075865f38cd07", "old_text": "6.2. Message Framers generate an event whenever a Connection sends a new Message. Upon receiving this event, a framer implementation is responsible for performing any necessary transformations and sending the resulting", "comments": "Capitalization seems to be a problem for us. We need to make things uniform. At first, I reacted to seeing \"connection\" as parameters in this PR, because we use the term \"Connection\" to refer to a TAPS-provided connection (as opposed to a connection of the underlying system), and so, de-capitalizing the term here may look better, but it is also somewhat imprecise. This said, we do the same with other capitalized terms too, here and in the API draft - e.g., \"messageContext\". Should we generally agree to begin terms with a lowercase letter when they are used as parameters in code examples? I think we have this in most places, though at least the API draft has several counterexamples... e.g., the server example code: Preconnection := NewPreconnection(LocalSpecifier, TransportProperties, SecurityParameters) (..) Listener -> ConnectionReceived\nFrom discussion on\nThis was a mistake. As I wanted to do this now, I noticed that all occurrences of capitalized terms as parameters are objects that are defined and used elsewhere in the code examples. For example: Preconnection := NewPreconnection(RemoteSpecifier, TransportProperties, SecurityParameters) refers to RemoteSpecifier etc., being objects that are initialized some lines above. Making them appear in lower case everywhere would be awkward, this is then a much more drastic change to our pseudocode. 
As a result, I think it's fine to leave things as they are, and I suggest to close this issue with no action.\nFramers: What is the difference between the Final message property and the separate IsEndOfMessage event argument?\nAlso there, the term MessageContext could use a definition\nand searching \"isEndOfMessage\" tells me that there is no text describing this parameter. Regarding the question, the difference should be clear: seems self-explanatory, whereas indicates that the Message carrying it is the last one of the Connection. So, this is End of the Message vs. End of the Connection.\nSounds like this needs a reference to the API doc, and change IsEndOfMessage to endOfMessage to match the API.\nProbably fix with\nI think this looks ok (and thanks for doing this!), but see also my comment about making things uniform wrt. capitalization: if we merge this PR, we should also do one that does the same with the API draft.Thanks LGTM! Although I noticed another nit that can perhaps be fixed in the same PR even if not directly related to the parameter names. Regarding the capitalization I think we can merge this PR and then have a separate issue for the capitalization as we will need to fix this either way.", "new_text": "6.2. Message Framers generate an event whenever a Connection sends a new Message. The parameters to the event align with the Send call in the API (Section 9.2 of I-D.ietf-taps-interface). Upon receiving this event, a framer implementation is responsible for performing any necessary transformations and sending the resulting"}
{"id": "q-en-api-drafts-10bd83f54c46e4cb5e585dd57962e1152a99429a7f419a2a37f075865f38cd07", "old_text": "In order to parse a received flow of data into Messages, the Message Framer notifies the framer implementation whenever new data is available to parse. Upon receiving this event, the framer implementation can inspect the inbound data. The data is parsed from a particular cursor", "comments": "Capitalization seems to be a problem for us. We need to make things uniform. At first, I reacted to seeing \"connection\" as parameters in this PR, because we use the term \"Connection\" to refer to a TAPS-provided connection (as opposed to a connection of the underlying system), and so, de-capitalizing the term here may look better, but it is also somewhat imprecise. This said, we do the same with other capitalized terms too, here and in the API draft - e.g., \"messageContext\". Should we generally agree to begin terms with a lowercase letter when they are used as parameters in code examples? I think we have this in most places, though at least the API draft has several counterexamples... e.g., the server example code: Preconnection := NewPreconnection(LocalSpecifier, TransportProperties, SecurityParameters) (..) Listener -> ConnectionReceived\nFrom discussion on\nThis was a mistake. As I wanted to do this now, I noticed that all occurrences of capitalized terms as parameters are objects that are defined and used elsewhere in the code examples. For example: Preconnection := NewPreconnection(RemoteSpecifier, TransportProperties, SecurityParameters) refers to RemoteSpecifier etc., being objects that are initialized some lines above. Making them appear in lower case everywhere would be awkward, this is then a much more drastic change to our pseudocode. 
As a result, I think it's fine to leave things as they are, and I suggest to close this issue with no action.\nFramers: What is the difference between the Final message property and the separate IsEndOfMessage event argument?\nAlso there, the term MessageContext could use a definition\nand searching \"isEndOfMessage\" tells me that there is no text describing this parameter. Regarding the question, the difference should be clear: seems self-explanatory, whereas indicates that the Message carrying it is the last one of the Connection. So, this is End of the Message vs. End of the Connection.\nSounds like this needs a reference to the API doc, and change IsEndOfMessage to endOfMessage to match the API.\nProbably fix with\nI think this looks ok (and thanks for doing this!), but see also my comment about making things uniform wrt. capitalization: if we merge this PR, we should also do one that does the same with the API draft.Thanks LGTM! Although I noticed another nit that can perhaps be fixed in the same PR even if not directly related to the parameter names. Regarding the capitalization I think we can merge this PR and then have a separate issue for the capitalization as we will need to fix this either way.", "new_text": "In order to parse a received flow of data into Messages, the Message Framer notifies the framer implementation whenever new data is available to parse. The parameters to the events and calls for receiving data with a framer align with the Receive call in the API (Section 9.3 of I-D.ietf-taps-interface). Upon receiving this event, the framer implementation can inspect the inbound data. The data is parsed from a particular cursor"}
{"id": "q-en-api-drafts-46cdc90734579a8a2adf16b7fa71b6c5fa2f013ea245feb03e707006e8edff02", "old_text": "the IP header can be obtained (GET_ECN.UDP(-Lite)). Calling \"Close\" on a UDP Connection (ABORT.UDP(-Lite)) releases the local port reservation. Calling \"Abort\" on a UDP Connection (ABORT.UDP(-Lite)) is identical to calling \"Close\", except that the Connection will send a ConnectionError Event rather than a Closed Event. Calling \"CloseGroup\" on a UDP Connection (ABORT.UDP(-Lite)) is identical to calling \"Close\" on this Connection and on all", "comments": "Takes care of item 5 and .\nSee discussion on\nProvide a complete template that covers all functions/properties from the API. Indicate how the template should be used (e.g. leave out the parts that are not relevant for a particular protocol or include all fields and mark fields that are not relevant as such.\nI'm wondering if this is really something that's in scope that we should cover. We do have the message properties section that explains how to implement various properties, but I think adding each property to every protocol mapping will be a huge amount of text that will be difficult to get exactly right.\nI agree, we have added additional properties since we opened this issue to check the mapping and make sure we have a complete coverage. We should still be consistent in how we treat things though, and perhaps add a sentence that we do not describe all the properties for instance. I have made a pass through to check the mapping from the API to impl. draft for all the protocol mappings as a basis for making things consistent. Here is the summary. We cover the same actions and events for all the protocols with the exceptions that UDP Multicast does not cover the ConnectionError event and SCTP covers the Priority connection property. And MPTCP and UDP-lite are special as they refer back to the base protocols. There is one terminology inconsistency where the impl. 
draft refers to InitiateError and this has been generalized to EstablishmentError in the API Unless I missed something, there are only two actions that can be made on a connection that are not covered, addRemote and removeRemote. This is probably fine as they do not mean much for most of the protocols (or we could add them to be complete). We should perhaps mention how they relate to MPTCP though if we leave them out. We also do not cover the Preconnection.Rendezvous action. Again this may be fine as it is a bit special (or we can mention it for consistency.) For the events, we cover a subset with main focus on the connection related events. The Closed Event and some events related to connection setup are missing. As we cover ConnectionError we should perhaps cover the Closed event as well? We do not cover any of the Send or Receive events which seems fine. In relation to Connection and Message properties, we do not cover any of them except for the Connection priority covered for SCTP and we talk about some properties in connection to UDP-lite and MPTCP. SCTP also has support for partial reliability and unordered messages so not sure why the priority part was singled out and added. NAME any particular reasoning behind this part? Possibly we should remove the priority part for SCTP or discuss it in running text in a similar fashion as for UDP-lite and MPTCP We may want to update the UDP-lite description to also say something about what properties it supports over UDP. The current description sounds like it is a subset. Once we agree on how to deal with the bullets above, only smaller modifications should be needed to the text.\nConnectionError should be added; SCTP priority should be removed Fix the inconsistency Add addRemote and removeRemote text related to MPTCP No action Mention Closed Event See item 1, no further action Update UDP-Lite description", "new_text": "the IP header can be obtained (GET_ECN.UDP(-Lite)). 
Calling \"Close\" on a UDP Connection (ABORT.UDP(-Lite)) releases the local port reservation. The Connection then issues a \"Closed\" event. Calling \"Abort\" on a UDP Connection (ABORT.UDP(-Lite)) is identical to calling \"Close\", except that the Connection will send a \"ConnectionError\" Event rather than a \"Closed\" Event. Calling \"CloseGroup\" on a UDP Connection (ABORT.UDP(-Lite)) is identical to calling \"Close\" on this Connection and on all"}
{"id": "q-en-api-drafts-46cdc90734579a8a2adf16b7fa71b6c5fa2f013ea245feb03e707006e8edff02", "old_text": "Calling \"Close\" on a UDP Multicast Receive Connection (ABORT.UDP(- Lite)) releases the local port reservation and leaves the group. Calling \"Abort\" on a UDP Multicast Receive Connection (ABORT.UDP(- Lite)) is identical to calling \"Close\". Calling \"CloseGroup\" on a UDP Multicast Receive Connection (ABORT.UDP(-Lite)) is identical to calling \"Close\" on this", "comments": "Takes care of item 5 and .\nSee discussion on\nProvide a complete template that covers all functions/properties from the API. Indicate how the template should be used (e.g. leave out the parts that are not relevant for a particular protocol or include all fields and mark fields that are not relevant as such.\nI'm wondering if this is really something that's in scope that we should cover. We do have the message properties section that explains how to implement various properties, but I think adding each property to every protocol mapping will be a huge amount of text that will be difficult to get exactly right.\nI agree, we have added additional properties since we opened this issue to check the mapping and make sure we have a complete coverage. We should still be consistent in how we treat things though, and perhaps add a sentence that we do not describe all the properties for instance. I have made a pass through to check the mapping from the API to impl. draft for all the protocol mappings as a basis for making things consistent. Here is the summary. We cover the same actions and events for all the protocols with the exceptions that UDP Multicast does not cover the ConnectionError event and SCTP covers the Priority connection property. And MPTCP and UDP-lite are special as they refer back to the base protocols. There is one terminology inconsistency where the impl. 
draft refers to InitiateError and this has been generalized to EstablishmentError in the API Unless I missed something, there are only two actions that can be made on a connection that are not covered, addRemote and removeRemote. This is probably fine as they do not mean much for most of the protocols (or we could add them to be complete). We should perhaps mention how they relate to MPTCP though if we leave them out. We also do not cover the Preconnection.Rendezvous action. Again this may be fine as it is a bit special (or we can mention it for consistency.) For the events, we cover a subset with main focus on the connection related events. The Closed Event and some events related to connection setup are missing. As we cover ConnectionError we should perhaps cover the Closed event as well? We do not cover any of the Send or Receive events which seems fine. In relation to Connection and Message properties, we do not cover any of them except for the Connection priority covered for SCTP and we talk about some properties in connection to UDP-lite and MPTCP. SCTP also has support for partial reliability and unordered messages so not sure why the priority part was singled out and added. NAME any particular reasoning behind this part? Possibly we should remove the priority part for SCTP or discuss it in running text in a similar fashion as for UDP-lite and MPTCP We may want to update the UDP-lite description to also say something about what properties it supports over UDP. The current description sounds like it is a subset. 
Once we agree on how to deal with the bullets above, only smaller modifications should be needed to the text.\nConnectionError should be added; SCTP priority should be removed Fix the inconsistency Add addRemote and removeRemote text related to MPTCP No action Mention Closed Event See item 1, no further action Update UDP-Lite description", "new_text": "Calling \"Close\" on a UDP Multicast Receive Connection (ABORT.UDP(- Lite)) releases the local port reservation and leaves the group. The Connection then issues a \"Closed\" Event. Calling \"Abort\" on a UDP Multicast Receive Connection (ABORT.UDP(- Lite)) is identical to calling \"Close\", except that the Connection will send a \"ConnectionError\" Event rather than a \"Closed\" Event. Calling \"CloseGroup\" on a UDP Multicast Receive Connection (ABORT.UDP(-Lite)) is identical to calling \"Close\" on this"}
{"id": "q-en-api-drafts-e8616ae4e1835748f6ad2983a5797000121a046ef6aaafae2635e70321fadf6f", "old_text": "Protocol-specific Properties MUST use the protocol acronym as the Namespace (e.g., a \"tcp\" Connection could support a TCP-specific Transport Property, such as the user timeout value, in a protocol- specific property called \"tcp.userTimeoutValue\" (see tcp-uto). Vendor or implementation specific properties MUST use a string identifying the vendor or implementation as the Namespace.", "comments": "We don't have \"Protocol Properties\" - two occurrences in -interface should really refer to \"protocol-specific Properties\", and two occurrences in -impl should really refer to properties.\nURL Are Protocol Specific Properties and Protocol Properties same or different? Also note that we are first time defining the Protocol Properties in the interface document ( described along with Connection Properties) and not introduced in the architecture document. I think we should be clear about what we are defining where and describing where. I would suggest use of cross reference rather describing everything again and again, which might create confusion and inconsistency.\nOh, this seems like a mistake (great catch, thanks!) - \"Protocol Properties\" don't exist, I think we only have \"protocol-specific Properties\".\nI caught this when reading the implementation doc. So it should be fixed there as well.", "new_text": "Protocol-specific Properties MUST use the protocol acronym as the Namespace (e.g., a \"tcp\" Connection could support a TCP-specific Transport Property, such as the user timeout value, in a Protocol- specific Property called \"tcp.userTimeoutValue\" (see tcp-uto)). Vendor or implementation specific properties MUST use a string identifying the vendor or implementation as the Namespace."}
{"id": "q-en-api-drafts-e8616ae4e1835748f6ad2983a5797000121a046ef6aaafae2635e70321fadf6f", "old_text": "Namespaces for each of the keywords provided in the IANA protocol numbers registry (see https://www.iana.org/assignments/protocol- numbers/protocol-numbers.xhtml) are reserved for protocol-specific Properties and MUST NOT be used for vendor or implementation-specific properties. Avoid using any of the terms listed as keywords in the protocol numbers registry as any part of a vendor- or implementation-", "comments": "We don't have \"Protocol Properties\" - two occurrences in -interface should really refer to \"protocol-specific Properties\", and two occurrences in -impl should really refer to properties.\nURL Are Protocol Specific Properties and Protocol Properties same or different? Also note that we are first time defining the Protocol Properties in the interface document ( described along with Connection Properties) and not introduced in the architecture document. I think we should be clear about what we are defining where and describing where. I would suggest use of cross reference rather describing everything again and again, which might create confusion and inconsistency.\nOh, this seems like a mistake (great catch, thanks!) - \"Protocol Properties\" don't exist, I think we only have \"protocol-specific Properties\".\nI caught this when reading the implementation doc. So it should be fixed there as well.", "new_text": "Namespaces for each of the keywords provided in the IANA protocol numbers registry (see https://www.iana.org/assignments/protocol- numbers/protocol-numbers.xhtml) are reserved for Protocol-specific Properties and MUST NOT be used for vendor or implementation-specific properties. Avoid using any of the terms listed as keywords in the protocol numbers registry as any part of a vendor- or implementation-"}
{"id": "q-en-api-drafts-e8616ae4e1835748f6ad2983a5797000121a046ef6aaafae2635e70321fadf6f", "old_text": "The optional \"connectionProperties\" parameter allows passing Transport Properties that control the behavior of the underlying stream or connection to be created, e.g., protocol-specific properties to request specific stream IDs for SCTP or QUIC. Message Properties set on a Connection also apply only to that Connection.", "comments": "We don't have \"Protocol Properties\" - two occurrences in -interface should really refer to \"protocol-specific Properties\", and two occurrences in -impl should really refer to properties.\nURL Are Protocol Specific Properties and Protocol Properties same or different? Also note that we are first time defining the Protocol Properties in the interface document ( described along with Connection Properties) and not introduced in the architecture document. I think we should be clear about what we are defining where and describing where. I would suggest use of cross reference rather describing everything again and again, which might create confusion and inconsistency.\nOh, this seems like a mistake (great catch, thanks!) - \"Protocol Properties\" don't exist, I think we only have \"protocol-specific Properties\".\nI caught this when reading the implementation doc. So it should be fixed there as well.", "new_text": "The optional \"connectionProperties\" parameter allows passing Transport Properties that control the behavior of the underlying stream or connection to be created, e.g., Protocol-specific Properties to request specific stream IDs for SCTP or QUIC. Message Properties set on a Connection also apply only to that Connection."}
{"id": "q-en-api-drafts-e8616ae4e1835748f6ad2983a5797000121a046ef6aaafae2635e70321fadf6f", "old_text": "Protocol-specific Properties are defined in a transport- and implementation-specific way to permit more specialized protocol features to be used. Too much reliance by an application on protocol-specific Properties can significantly reduce the flexibility of a transport services implementation to make appropriate selection and configuration choices. Therefore, it is RECOMMENDED that Protocol Properties are used for properties common across different protocols and that Protocol-specific Properties are only used where specific protocols or properties are necessary. The application can set and query Connection Properties on a per- Connection basis. Connection Properties that are not read-only can", "comments": "We don't have \"Protocol Properties\" - two occurrences in -interface should really refer to \"protocol-specific Properties\", and two occurrences in -impl should really refer to properties.\nURL Are Protocol Specific Properties and Protocol Properties same or different? Also note that we are first time defining the Protocol Properties in the interface document ( described along with Connection Properties) and not introduced in the architecture document. I think we should be clear about what we are defining where and describing where. I would suggest use of cross reference rather describing everything again and again, which might create confusion and inconsistency.\nOh, this seems like a mistake (great catch, thanks!) - \"Protocol Properties\" don't exist, I think we only have \"protocol-specific Properties\".\nI caught this when reading the implementation doc. So it should be fixed there as well.", "new_text": "Protocol-specific Properties are defined in a transport- and implementation-specific way to permit more specialized protocol features to be used. 
Too much reliance by an application on Protocol-specific Properties can significantly reduce the flexibility of a transport services implementation to make appropriate selection and configuration choices. Therefore, it is RECOMMENDED that Protocol-specific Properties are used for properties common across different protocols and that Protocol-specific Properties are only used where specific protocols or properties are necessary. The application can set and query Connection Properties on a per- Connection basis. Connection Properties that are not read-only can"}
{"id": "q-en-api-drafts-e8616ae4e1835748f6ad2983a5797000121a046ef6aaafae2635e70321fadf6f", "old_text": "Message Properties can be set and queried using the Message Context: These Message Properties may be generic properties or protocol- specific Properties. For MessageContexts returned by send Events (see send-events) and", "comments": "We don't have \"Protocol Properties\" - two occurrences in -interface should really refer to \"protocol-specific Properties\", and two occurrences in -impl should really refer to properties.\nURL Are Protocol Specific Properties and Protocol Properties same or different? Also note that we are first time defining the Protocol Properties in the interface document ( described along with Connection Properties) and not introduced in the architecture document. I think we should be clear about what we are defining where and describing where. I would suggest use of cross reference rather describing everything again and again, which might create confusion and inconsistency.\nOh, this seems like a mistake (great catch, thanks!) - \"Protocol Properties\" don't exist, I think we only have \"protocol-specific Properties\".\nI caught this when reading the implementation doc. So it should be fixed there as well.", "new_text": "Message Properties can be set and queried using the Message Context: These Message Properties may be generic properties or Protocol- specific Properties. For MessageContexts returned by send Events (see send-events) and"}
{"id": "q-en-api-drafts-e8616ae4e1835748f6ad2983a5797000121a046ef6aaafae2635e70321fadf6f", "old_text": "In the following cases, failure should be detected during pre- establishment: A request by an application for Protocol Properties that cannot be satisfied by any of the available protocols. For example, if an application requires \"perMsgReliability\", but no such feature is available in any protocol on the host running the transport system this should result in an error, e.g., when SCTP is not supported by the operating system. A request by an application for Protocol Properties that are in conflict with each other, i.e., the required and prohibited properties cannot be satisfied by the same protocol. For example, if an application prohibits \"reliability\" but then requires \"perMsgReliability\", this mismatch should result in an error. To avoid allocating resources that are not finally needed, it is", "comments": "We don't have \"Protocol Properties\" - two occurrences in -interface should really refer to \"protocol-specific Properties\", and two occurrences in -impl should really refer to properties.\nURL Are Protocol Specific Properties and Protocol Properties same or different? Also note that we are first time defining the Protocol Properties in the interface document ( described along with Connection Properties) and not introduced in the architecture document. I think we should be clear about what we are defining where and describing where. I would suggest use of cross reference rather describing everything again and again, which might create confusion and inconsistency.\nOh, this seems like a mistake (great catch, thanks!) - \"Protocol Properties\" don't exist, I think we only have \"protocol-specific Properties\".\nI caught this when reading the implementation doc. 
So it should be fixed there as well.", "new_text": "In the following cases, failure should be detected during pre- establishment: A request by an application for properties that cannot be satisfied by any of the available protocols. For example, if an application requires \"perMsgReliability\", but no such feature is available in any protocol on the host running the transport system this should result in an error, e.g., when SCTP is not supported by the operating system. A request by an application for properties that are in conflict with each other, i.e., the required and prohibited properties cannot be satisfied by the same protocol. For example, if an application prohibits \"reliability\" but then requires \"perMsgReliability\", this mismatch should result in an error. To avoid allocating resources that are not finally needed, it is"}
{"id": "q-en-api-drafts-e8616ae4e1835748f6ad2983a5797000121a046ef6aaafae2635e70321fadf6f", "old_text": "implementation should store the properties in storage common to all protocols, and notify all protocol instances in the Protocol Stack whenever the properties have been modified by the application. For protocol-specfic properties, such as the User Timeout that applies to TCP, the Transport Services implementation only needs to update the relevant protocol instance. If an error is encountered in setting a property (for example, if the application tries to set a TCP-specific property on a Connection that", "comments": "We don't have \"Protocol Properties\" - two occurrences in -interface should really refer to \"protocol-specific Properties\", and two occurrences in -impl should really refer to properties.\nURL Are Protocol Specific Properties and Protocol Properties same or different? Also note that we are first time defining the Protocol Properties in the interface document ( described along with Connection Properties) and not introduced in the architecture document. I think we should be clear about what we are defining where and describing where. I would suggest use of cross reference rather describing everything again and again, which might create confusion and inconsistency.\nOh, this seems like a mistake (great catch, thanks!) - \"Protocol Properties\" don't exist, I think we only have \"protocol-specific Properties\".\nI caught this when reading the implementation doc. So it should be fixed there as well.", "new_text": "implementation should store the properties in storage common to all protocols, and notify all protocol instances in the Protocol Stack whenever the properties have been modified by the application. For Protocol-specific Properties, such as the User Timeout that applies to TCP, the Transport Services implementation only needs to update the relevant protocol instance. 
If an error is encountered in setting a property (for example, if the application tries to set a TCP-specific property on a Connection that"}
{"id": "q-en-api-drafts-222d1c53071e27fb4d52bb4eb4c9fb9ce874c578cc5875a4d75c0ce18ccc7b98", "old_text": "SHOULD be made accessible through a unified set of calls using the Transport Services API. As a baseline, any Transport Services API SHOULD allow access to the minimal set of features offered by transport protocols RFC8923. An application can specify constraints and preferences for the protocols, features, and network interfaces it will use via", "comments": "URL Assuming there would be more protocols in the future and the minset functionalities will expand via extension or amendments. It would important to indicate if this architecture should is also extensible or it only supports on the minimum set referred. Definitely we haven't seen the future yet, but it would be important to express if we have considered such expansion of minimum functionalities.\nTo me, the current spec is based on minset that describes the common functions. So, the addition I would expect here is to say that there can be extensions and protocol-specific features.\nI expect that if a common feature is added to the minset, then the architecture and API surface would also be extended to match.\nLGTM modulo a nit", "new_text": "SHOULD be made accessible through a unified set of calls using the Transport Services API. As a baseline, any Transport Services API SHOULD allow access to the minimal set of features offered by transport protocols RFC8923. If that minimal set is updated or expanded in the future, the Transport Services API ought to be extended to match. An application can specify constraints and preferences for the protocols, features, and network interfaces it will use via"}
{"id": "q-en-api-drafts-933c327b5fed1349e7192b6ed58018f62f9a9307318049554db0f24e012c7ad4", "old_text": "setting Connection and Message Properties in Pre-Establishment) or have been omitted for brevity and simplicity. 4.1.1. Endpoint: An endpoint represents an identifier for one side of a", "comments": "URL Unless there is a description the figure, with directed and non-directed lines it is very hard to interpret the figure. I would recommend to at least add some reading guidance for the lines.\nLGTM, thanks!Looks fine.", "new_text": "setting Connection and Message Properties in Pre-Establishment) or have been omitted for brevity and simplicity. In this diagram, the lifetime of a Connection object is broken into three phases: Pre-Establishment, the Established state, and Termination. Pre-Establishment is based around a Preconnection object, that contains various sub-objects that describe the properties and parameters of desired Connections (Local and Remote Endpoints, Transport Properties, and Security Parameters). A Preconnection can be used to start listening for inbound connections, in which case a Listener object is created, or can be used to establish a new connection directly using Initiate() (for outbound connections) or Rendezvous() (for peer-to-peer connections). Once a Connection is in the Established state, an application can send and receive Message objects, and receive state updates. Closing or aborting a connection, either locally or from the peer, can terminate a connection. 4.1.1. Endpoint: An endpoint represents an identifier for one side of a"}
{"id": "q-en-api-drafts-28dd9bd48f4d0a96f92a6def589131ea5a00af8b586523054b7d13b23c2ce816", "old_text": "application. Framer: A Framer is a data translation layer that can be added to a Connection to define how application-layer Messages are transmitted over a Protocol Stack. This is particularly relevant when using a protocol that otherwise presents unstructured streams, such as TCP. 4.1.6.", "comments": "(sorry that it took me so long) This uses some text that is also in the interface draft.\nThis I would like us to discuss. Well I know we have discussed this before. As I am reading the Interface document after a long while and having read it along with the architecture document, I kind of strongly feel that the architecture document lack of description on framer and the states. These are mentioned and pictured in the architecture document but haven't been clear there. Hence, I kind of feel that we need to move this two sections to architecture document. Please discuss so that I can understand the reasoning of not doing so.\nRegarding section 9.1.2, the framer description: I think it's good to be clear and complete in the API document, but I agree that framers should be appropriately described in the architecture document. I have the impression that the text in the -arch document on this matter may indeed be too short, while the text in the -interface document may be too long (although a little bit of repetition surely won't hurt here, it's good for this document to be understandable and clear on its own). Maybe some of it could be moved to -arch, or repeated there? Regarding section 11, Connection states: to me, this is fine as it is. Section 4.1 / figure 4 in the architecture document make state transitions pretty clear - and section 11 / figure 2 are just a more fine-grain look at these states as a way to explain exactly which Events will occur when. 
This seems needed in the API document, where all other Events and Actions are defined, and I would think that this is at a too detailed / fine-grain level for the architecture document.\nNAME after giving it some more thoughts, I think your suggested approach should be fine , meaning we put more description of framer in the architecture document and leave the connection states to the Interface document.\nNAME did you want to take a stab at this?\nLooks great!", "new_text": "application. Framer: A Framer is a data translation layer that can be added to a Connection. Framers allow extending a Connection's protocol stack to define how to encapsulate or encode outbound Messages, and how to decapsulate or decode inbound data into Messages. In this way, message boundaries can be preserved when using a Connection object, even with a protocol that otherwise presents unstructured streams, such as TCP. This is designed based on the fact that many of the current application protocols evolved over TCP, which does not provide message boundary preservation, and since many of these protocols require message boundaries to function, each application layer protocol has defined its own framing. For example, when an HTTP application sends and receives HTTP messages over a byte-stream transport, it must parse the boundaries of HTTP messages from the stream of bytes. 4.1.6."}
{"id": "q-en-api-drafts-69511599c0601aaa3030ceaf26a47984653fc749ae42f79c3bdc324a3fc0bc17", "old_text": "1.4. This subsection provides a glossary of key terms related to the Transport Services architecture. It provides a simple description of key terms that are later defined in this document. Application: An entity that uses the transport layer for end-to-", "comments": "Feel free to approve or propose changes to improve this sentence in Arch.\nBT: \"I\u2019m not convinced of the value of the glossary of terms, as it forwardrefs basically the entire document, and some of the definitions are so generic as to be potentially confusing (e.g. Event), so I think putting these up front reduces the document\u2019s approachability and readability. I understand that some people like these (and other standards orgs make them mandatory, hi ETSI), so I won\u2019t fight it.\" \"Adding a sentence making it clear it\u2019s safe to skip the glossary if you\u2019re going to read the rest of the document would address my approachability concern completely. \"", "new_text": "1.4. This subsection provides a glossary of key terms related to the Transport Services architecture. It provides a short description of key terms that are later defined in this document. Application: An entity that uses the transport layer for end-to-"}
{"id": "q-en-api-drafts-f919ccf5b9589cef92b499c49d45bec9fe3cd9a5edd83f91b1d158a425cdd233", "old_text": "constraints on the selection and configuration of paths and protocols to establish a Connection with a remote endpoint. A Connection represents a transport Protocol Stack on which data can be sent to and received from a remote endpoint. Connections can be created from Preconnections in three ways: by initiating the Preconnection (i.e., actively opening, as in a client), through listening on the Preconnection (i.e., passively opening, as in a server), or rendezvousing on the Preconnection (i.e. peer to peer establishment). Once a Connection is established, data can be sent on it in the form of Messages. The interface supports the preservation of message", "comments": "This closed As we have no methods on a connection object that query properties, cansend/canreceive also became connection properties. As with all the other connection properties, these are indeed a little underspecified. I want to keep it that way till we fix it for all connection properties with and\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? I would prefer we merge it now and make a new issue for the question whether the uni-directional connection should be more reflected in the connection. I am not particularly found on keeping it a selection property but would like to have the concept of unidirectional connections rather sooner than later. Our consensus in was that unidirectional connection should use the same abstraction for QUIC unidirectional streams, multicast sources/sinks, and half-closed connections \u2013 despite they are constructed in completely different ways. The Selection property is just for the QUIC style ones\u2026\n\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? 
Happy to move the discussion back over to the issue if you like. Will continue the thread here for now though since you've got points to address below. fine, but keep open then, because adding the selection property is possibly-necessary but definitely-not-sufficient to fully address unidirectional connection semantics. good, I agree with this. then I'm confused why this PR purports to since it only addresses part of the issue...?\nNAME agreed. The question (and discussion) was about support for including some support for unidirectional connections (with focus on QUIC and half-closed), but omitted details on multicast, as this has not laned yet. So indeed, this pull-requests does everything requested by , but does not fully capture the topic. I will move the open issues (Multicast support, is the Selection Property enough, find some place for more explanation of Uni/Bidi Streams) to once this PR is merged.\nIn Section 9 \u2013 \"Setting and Querying of Connection Properties\" \u2013 the properties of an established connection and path are heavily underspecified. Should be fixed along with .\nI'm working on this issue within our pull request for , using the terminology as simplified there. For Section 9, I'm proposing the following: An application can query Connection Properties on a connection. Connection Properties include the status of the connection and whether it is possible to send and to receive data on it. (One of them might not be possible, e.g., in a unidirectional connection.) Additionally, Connection Properties include the Connection's Transport Properties. Transport Properties include Selection Properties, so the application can know which Protocol Selection Properties had been specified, and which ones are actually supported by the protocol that has been chosen. 
For Establishing connections (i.e., we are not done racing yet, the final protocols have not been selected yet), the application gets the Selection Properties that had been specified on the Preconnection earlier. For Established, Closing, or Closed connections (i.e., the protocols have been selected), the application learns whether the Selection Properties are actually provided by the selected protocols. For example, if the application specified earlier, \"Control Checksum Coverage: Prefer\", now it will get \"Control checksum coverage: True\", indicating that on this connection it can indeed control the checksum coverage. Transport Properties also include Protocol Properties (Generic and Specific). These Protocol Properties can still be modified by the application. Basically, Protocol Properties are the only thing that the application can still set, everything else it can only query at this point. Finally, the application can query properties of the path. Not sure how well we have to specify them, probably this part will be implementation-specific just like, e.g., the Interface Instance or Type. Does everyone agree, or is there anything to discuss here? For me this is not a big change, it's just putting what was already there in terms of . The only thing I'm wondering if whether the Protocol Selection Properties of the finally chosen protocols are still \"Selection Properties\", as the selection is already done. That's why they were called \"Transport Features\" before, but now I want less terms and they're almost the same thing...\nOK - so I commented on these things before - but in these things I think I agree. I'd personally prefer to keep the features as attributes implemented in the transport (etc) and the properties as the things requested. Indeed this begs the question about what to call the resulting transport when you selected properties and now get a transport. I think still calling them (for consistency) properties seems really good to me. 
:-).\nAs QUIC has unidirectional streams, we might want to be able to accommodate them in our transport API. Do we want to handle them fundamentally different from a half-closed connection? This should match the decisions made in .\nSide note: we don't support half-closed connections to begin with. As for unidirectional streams, is this the final decision in QUIC's design, or might it change? I'd suggest postponing this until we can be sure.\nNAME regarding half-closed connections: we don't have any top-level state in the connection for being half-closed. However, being able to express that a message is final in a given direction when sending allows the same functionality. As I've discussed before, we absolutely need the ability to express sending a TCP FIN, but I whole-heartedly agree it doesn't need to be a top level concept. I think this is somewhat different from half-closing, since a unidirectional stream never allows sending on one end. You could imagine that it is a connection object that returns an error if you ever try to send on it, etc. If you wanted it to look like half-closing, you could open a connection (stream), and then mark it complete in a direction immediately? But that's a bit odd.\nAll ok, I agree with your suggestion to fin-mark a message\nI agreee with NAME regarding half-closed. My suggestion is to explicitly state that connections can be uni- or bi-directional and to add a method to query that (cansend / canrecv). This aligns nicely with final messages after which can_send becomes false. I can do a PR for that.\nThanks! I like this\nStill missing unidirectional stream for multicast. \u2013 please continue discussion of multicast aspects in The connection properties (can send/receive data) are still a little under-specified - please discuss in details in \u2013 we should fix this together with .\nWe need unidirectional connections. You know, for multicast. And ICN. 
And unidirectional streaming protocols.\nThis is a dup of , which has more discussion\nI am re-opening this ticket as only dealt with unidirectional streams in unicast context (added with PR ): Current status: The current model for unidirectional streams supports half-closed connections and QUIC style unidirectional streams so far. We don't know yet whether it is sufficient to have canread/canwrite property on the connection level to model unidirectional connections We don't have pre-connection methods to create multicast sessions yet\nthere's a selection property for this, tagged discuss, which we haven't discussed yet. I'm moving this down to the appendix as part of my restructuring in my work on\nIf help is still wanted here, I'm maybe up for an attempt or 2 at adding multicast support. Has there been some prior discussion I can review about rough ideas on pre-connection methods for multicast sessions, or should I just try making something up?\n\nok, with the caveat that we need to keep multicast as an open issueLGTM About the multicast concern: I agree that it would be good if this could function for multicast reception too, and I couldn't tell if this is suitable for multicast. I suspect that multicast support is much harder anyway - see RFC 3678... so for now, this is a good step forward IMO, and let's discuss how this relates to multicast when we see a concrete text proposal that addresses it.", "new_text": "constraints on the selection and configuration of paths and protocols to establish a Connection with a remote endpoint. A Connection represents a transport Protocol Stack on which data can be sent to and/or received from a remote endpoint (i.e., depending on the kind of transport, connections can be bi-directional or unidirectional). 
Connections can be created from Preconnections in three ways: by initiating the Preconnection (i.e., actively opening, as in a client), through listening on the Preconnection (i.e., passively opening, as in a server), or rendezvousing on the Preconnection (i.e. peer to peer establishment). Once a Connection is established, data can be sent on it in the form of Messages. The interface supports the preservation of message"}
{"id": "q-en-api-drafts-f919ccf5b9589cef92b499c49d45bec9fe3cd9a5edd83f91b1d158a425cdd233", "old_text": "5.2.4. Type: Preference This property specifies whether an application would like to supply a", "comments": "This closed As we have no methods on a connection object that query properties, cansend/canreceive also became connection properties. As with all the other connection properties, these are indeed a little underspecified. I want to keep it that way till we fix it for all connection properties with and\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? I would prefer we merge it now and make a new issue for the question whether the uni-directional connection should be more reflected in the connection. I am not particularly fond of keeping it a selection property but would like to have the concept of unidirectional connections rather sooner than later. Our consensus in was that unidirectional connection should use the same abstraction for QUIC unidirectional streams, multicast sources/sinks, and half-closed connections \u2013 despite being constructed in completely different ways. The Selection property is just for the QUIC style ones\u2026\n\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? Happy to move the discussion back over to the issue if you like. Will continue the thread here for now though since you've got points to address below. fine, but keep open then, because adding the selection property is possibly-necessary but definitely-not-sufficient to fully address unidirectional connection semantics. good, I agree with this. then I'm confused why this PR purports to since it only addresses part of the issue...?\nNAME agreed. 
The question (and discussion) was about including some support for unidirectional connections (with focus on QUIC and half-closed), but omitted details on multicast, as this has not landed yet. So indeed, this pull-request does everything requested by , but does not fully capture the topic. I will move the open issues (Multicast support, is the Selection Property enough, find some place for more explanation of Uni/Bidi Streams) to once this PR is merged.\nIn Section 9 \u2013 \"Setting and Querying of Connection Properties\" \u2013 the properties of an established connection and path are heavily underspecified. Should be fixed along with .\nI'm working on this issue within our pull request for , using the terminology as simplified there. For Section 9, I'm proposing the following: An application can query Connection Properties on a connection. Connection Properties include the status of the connection and whether it is possible to send and to receive data on it. (One of them might not be possible, e.g., in a unidirectional connection.) Additionally, Connection Properties include the Connection's Transport Properties. Transport Properties include Selection Properties, so the application can know which Protocol Selection Properties had been specified, and which ones are actually supported by the protocol that has been chosen. For Establishing connections (i.e., we are not done racing yet, the final protocols have not been selected yet), the application gets the Selection Properties that had been specified on the Preconnection earlier. For Established, Closing, or Closed connections (i.e., the protocols have been selected), the application learns whether the Selection Properties are actually provided by the selected protocols. For example, if the application specified earlier, \"Control Checksum Coverage: Prefer\", now it will get \"Control checksum coverage: True\", indicating that on this connection it can indeed control the checksum coverage. 
Transport Properties also include Protocol Properties (Generic and Specific). These Protocol Properties can still be modified by the application. Basically, Protocol Properties are the only thing that the application can still set, everything else it can only query at this point. Finally, the application can query properties of the path. Not sure how well we have to specify them, probably this part will be implementation-specific just like, e.g., the Interface Instance or Type. Does everyone agree, or is there anything to discuss here? For me this is not a big change, it's just putting what was already there in terms of . The only thing I'm wondering is whether the Protocol Selection Properties of the finally chosen protocols are still \"Selection Properties\", as the selection is already done. That's why they were called \"Transport Features\" before, but now I want less terms and they're almost the same thing...\nOK - so I commented on these things before - but in these things I think I agree. I'd personally prefer to keep the features as attributes implemented in the transport (etc) and the properties as the things requested. Indeed this begs the question about what to call the resulting transport when you selected properties and now get a transport. I think still calling them (for consistency) properties seems really good to me. :-).\nAs QUIC has unidirectional streams, we might want to be able to accommodate them in our transport API. Do we want to handle them fundamentally different from a half-closed connection? This should match the decisions made in .\nSide note: we don't support half-closed connections to begin with. As for unidirectional streams, is this the final decision in QUIC's design, or might it change? I'd suggest postponing this until we can be sure.\nNAME regarding half-closed connections: we don't have any top-level state in the connection for being half-closed. 
However, being able to express that a message is final in a given direction when sending allows the same functionality. As I've discussed before, we absolutely need the ability to express sending a TCP FIN, but I whole-heartedly agree it doesn't need to be a top level concept. I think this is somewhat different from half-closing, since a unidirectional stream never allows sending on one end. You could imagine that it is a connection object that returns an error if you ever try to send on it, etc. If you wanted it to look like half-closing, you could open a connection (stream), and then mark it complete in a direction immediately? But that's a bit odd.\nAll ok, I agree with your suggestion to fin-mark a message\nI agree with NAME regarding half-closed. My suggestion is to explicitly state that connections can be uni- or bi-directional and to add a method to query that (cansend / canrecv). This aligns nicely with final messages after which can_send becomes false. I can do a PR for that.\nThanks! I like this\nStill missing unidirectional stream for multicast. \u2013 please continue discussion of multicast aspects in The connection properties (can send/receive data) are still a little under-specified - please discuss in detail in \u2013 we should fix this together with .\nWe need unidirectional connections. You know, for multicast. And ICN. And unidirectional streaming protocols.\nThis is a dup of , which has more discussion\nI am re-opening this ticket as only dealt with unidirectional streams in unicast context (added with PR ): Current status: The current model for unidirectional streams supports half-closed connections and QUIC style unidirectional streams so far. We don't know yet whether it is sufficient to have canread/canwrite property on the connection level to model unidirectional connections We don't have pre-connection methods to create multicast sessions yet\nthere's a selection property for this, tagged discuss, which we haven't discussed yet. 
I'm moving this down to the appendix as part of my restructuring in my work on\nIf help is still wanted here, I'm maybe up for an attempt or 2 at adding multicast support. Has there been some prior discussion I can review about rough ideas on pre-connection methods for multicast sessions, or should I just try making something up?\n\nok, with the caveat that we need to keep multicast as an open issue\nLGTM About the multicast concern: I agree that it would be good if this could function for multicast reception too, and I couldn't tell if this is suitable for multicast. I suspect that multicast support is much harder anyway - see RFC 3678... so for now, this is a good step forward IMO, and let's discuss how this relates to multicast when we see a concrete text proposal that addresses it.", "new_text": "5.2.4. Type: Enum This property specifies whether an application wants to use the connection for sending and/or receiving data. Possible values are: The connection must support sending and receiving data. The connection must support sending data. The connection must support receiving data. In case a unidirectional connection is requested, but unidirectional connections are not supported by the transport protocol, the system should fall back to bidirectional transport. 5.2.5. Type: Preference This property specifies whether an application would like to supply a"}
{"id": "q-en-api-drafts-f919ccf5b9589cef92b499c49d45bec9fe3cd9a5edd83f91b1d158a425cdd233", "old_text": "also send-idempotent. This is a strict requirement. The default is to not have this option. 5.2.5. Type: Preference", "comments": "This closed As we have no methods on a connection object that query properties, cansend/canreceive also became connection properties. As with all the other connection properties, these are indeed a little underspecified. I want to keep it that way till we fix it for all connection properties with and\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? I would prefer we merge it now and make a new issue for the question whether the uni-directional connection should be more reflected in the connection. I am not particularly fond of keeping it a selection property but would like to have the concept of unidirectional connections rather sooner than later. Our consensus in was that unidirectional connection should use the same abstraction for QUIC unidirectional streams, multicast sources/sinks, and half-closed connections \u2013 despite being constructed in completely different ways. The Selection property is just for the QUIC style ones\u2026\n\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? Happy to move the discussion back over to the issue if you like. Will continue the thread here for now though since you've got points to address below. fine, but keep open then, because adding the selection property is possibly-necessary but definitely-not-sufficient to fully address unidirectional connection semantics. good, I agree with this. then I'm confused why this PR purports to since it only addresses part of the issue...?\nNAME agreed. 
The question (and discussion) was about including some support for unidirectional connections (with focus on QUIC and half-closed), but omitted details on multicast, as this has not landed yet. So indeed, this pull-request does everything requested by , but does not fully capture the topic. I will move the open issues (Multicast support, is the Selection Property enough, find some place for more explanation of Uni/Bidi Streams) to once this PR is merged.\nIn Section 9 \u2013 \"Setting and Querying of Connection Properties\" \u2013 the properties of an established connection and path are heavily underspecified. Should be fixed along with .\nI'm working on this issue within our pull request for , using the terminology as simplified there. For Section 9, I'm proposing the following: An application can query Connection Properties on a connection. Connection Properties include the status of the connection and whether it is possible to send and to receive data on it. (One of them might not be possible, e.g., in a unidirectional connection.) Additionally, Connection Properties include the Connection's Transport Properties. Transport Properties include Selection Properties, so the application can know which Protocol Selection Properties had been specified, and which ones are actually supported by the protocol that has been chosen. For Establishing connections (i.e., we are not done racing yet, the final protocols have not been selected yet), the application gets the Selection Properties that had been specified on the Preconnection earlier. For Established, Closing, or Closed connections (i.e., the protocols have been selected), the application learns whether the Selection Properties are actually provided by the selected protocols. For example, if the application specified earlier, \"Control Checksum Coverage: Prefer\", now it will get \"Control checksum coverage: True\", indicating that on this connection it can indeed control the checksum coverage. 
Transport Properties also include Protocol Properties (Generic and Specific). These Protocol Properties can still be modified by the application. Basically, Protocol Properties are the only thing that the application can still set, everything else it can only query at this point. Finally, the application can query properties of the path. Not sure how well we have to specify them, probably this part will be implementation-specific just like, e.g., the Interface Instance or Type. Does everyone agree, or is there anything to discuss here? For me this is not a big change, it's just putting what was already there in terms of . The only thing I'm wondering is whether the Protocol Selection Properties of the finally chosen protocols are still \"Selection Properties\", as the selection is already done. That's why they were called \"Transport Features\" before, but now I want less terms and they're almost the same thing...\nOK - so I commented on these things before - but in these things I think I agree. I'd personally prefer to keep the features as attributes implemented in the transport (etc) and the properties as the things requested. Indeed this begs the question about what to call the resulting transport when you selected properties and now get a transport. I think still calling them (for consistency) properties seems really good to me. :-).\nAs QUIC has unidirectional streams, we might want to be able to accommodate them in our transport API. Do we want to handle them fundamentally different from a half-closed connection? This should match the decisions made in .\nSide note: we don't support half-closed connections to begin with. As for unidirectional streams, is this the final decision in QUIC's design, or might it change? I'd suggest postponing this until we can be sure.\nNAME regarding half-closed connections: we don't have any top-level state in the connection for being half-closed. 
However, being able to express that a message is final in a given direction when sending allows the same functionality. As I've discussed before, we absolutely need the ability to express sending a TCP FIN, but I whole-heartedly agree it doesn't need to be a top level concept. I think this is somewhat different from half-closing, since a unidirectional stream never allows sending on one end. You could imagine that it is a connection object that returns an error if you ever try to send on it, etc. If you wanted it to look like half-closing, you could open a connection (stream), and then mark it complete in a direction immediately? But that's a bit odd.\nAll ok, I agree with your suggestion to fin-mark a message\nI agree with NAME regarding half-closed. My suggestion is to explicitly state that connections can be uni- or bi-directional and to add a method to query that (cansend / canrecv). This aligns nicely with final messages after which can_send becomes false. I can do a PR for that.\nThanks! I like this\nStill missing unidirectional stream for multicast. \u2013 please continue discussion of multicast aspects in The connection properties (can send/receive data) are still a little under-specified - please discuss in detail in \u2013 we should fix this together with .\nWe need unidirectional connections. You know, for multicast. And ICN. And unidirectional streaming protocols.\nThis is a dup of , which has more discussion\nI am re-opening this ticket as only dealt with unidirectional streams in unicast context (added with PR ): Current status: The current model for unidirectional streams supports half-closed connections and QUIC style unidirectional streams so far. We don't know yet whether it is sufficient to have canread/canwrite property on the connection level to model unidirectional connections We don't have pre-connection methods to create multicast sessions yet\nthere's a selection property for this, tagged discuss, which we haven't discussed yet. 
I'm moving this down to the appendix as part of my restructuring in my work on\nIf help is still wanted here, I'm maybe up for an attempt or 2 at adding multicast support. Has there been some prior discussion I can review about rough ideas on pre-connection methods for multicast sessions, or should I just try making something up?\n\nok, with the caveat that we need to keep multicast as an open issue\nLGTM About the multicast concern: I agree that it would be good if this could function for multicast reception too, and I couldn't tell if this is suitable for multicast. I suspect that multicast support is much harder anyway - see RFC 3678... so for now, this is a good step forward IMO, and let's discuss how this relates to multicast when we see a concrete text proposal that addresses it.", "new_text": "also send-idempotent. This is a strict requirement. The default is to not have this option. 5.2.6. Type: Preference"}
{"id": "q-en-api-drafts-f919ccf5b9589cef92b499c49d45bec9fe3cd9a5edd83f91b1d158a425cdd233", "old_text": "single underlying transport connection where possible. This is not a strict requirement. The default is to not have this option. 5.2.6. Type: Boolean", "comments": "This closed As we have no methods on a connection object that query properties, cansend/canreceive also became connection properties. As with all the other connection properties, these are indeed a little underspecified. I want to keep it that way till we fix it for all connection properties with and\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? I would prefer we merge it now and make a new issue for the question whether the uni-directional connection should be more reflected in the connection. I am not particularly fond of keeping it a selection property but would like to have the concept of unidirectional connections rather sooner than later. Our consensus in was that unidirectional connection should use the same abstraction for QUIC unidirectional streams, multicast sources/sinks, and half-closed connections \u2013 despite being constructed in completely different ways. The Selection property is just for the QUIC style ones\u2026\n\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? Happy to move the discussion back over to the issue if you like. Will continue the thread here for now though since you've got points to address below. fine, but keep open then, because adding the selection property is possibly-necessary but definitely-not-sufficient to fully address unidirectional connection semantics. good, I agree with this. then I'm confused why this PR purports to since it only addresses part of the issue...?\nNAME agreed. 
The question (and discussion) was about including some support for unidirectional connections (with focus on QUIC and half-closed), but omitted details on multicast, as this has not landed yet. So indeed, this pull-request does everything requested by , but does not fully capture the topic. I will move the open issues (Multicast support, is the Selection Property enough, find some place for more explanation of Uni/Bidi Streams) to once this PR is merged.\nIn Section 9 \u2013 \"Setting and Querying of Connection Properties\" \u2013 the properties of an established connection and path are heavily underspecified. Should be fixed along with .\nI'm working on this issue within our pull request for , using the terminology as simplified there. For Section 9, I'm proposing the following: An application can query Connection Properties on a connection. Connection Properties include the status of the connection and whether it is possible to send and to receive data on it. (One of them might not be possible, e.g., in a unidirectional connection.) Additionally, Connection Properties include the Connection's Transport Properties. Transport Properties include Selection Properties, so the application can know which Protocol Selection Properties had been specified, and which ones are actually supported by the protocol that has been chosen. For Establishing connections (i.e., we are not done racing yet, the final protocols have not been selected yet), the application gets the Selection Properties that had been specified on the Preconnection earlier. For Established, Closing, or Closed connections (i.e., the protocols have been selected), the application learns whether the Selection Properties are actually provided by the selected protocols. For example, if the application specified earlier, \"Control Checksum Coverage: Prefer\", now it will get \"Control checksum coverage: True\", indicating that on this connection it can indeed control the checksum coverage. 
Transport Properties also include Protocol Properties (Generic and Specific). These Protocol Properties can still be modified by the application. Basically, Protocol Properties are the only thing that the application can still set, everything else it can only query at this point. Finally, the application can query properties of the path. Not sure how well we have to specify them, probably this part will be implementation-specific just like, e.g., the Interface Instance or Type. Does everyone agree, or is there anything to discuss here? For me this is not a big change, it's just putting what was already there in terms of . The only thing I'm wondering is whether the Protocol Selection Properties of the finally chosen protocols are still \"Selection Properties\", as the selection is already done. That's why they were called \"Transport Features\" before, but now I want less terms and they're almost the same thing...\nOK - so I commented on these things before - but in these things I think I agree. I'd personally prefer to keep the features as attributes implemented in the transport (etc) and the properties as the things requested. Indeed this begs the question about what to call the resulting transport when you selected properties and now get a transport. I think still calling them (for consistency) properties seems really good to me. :-).\nAs QUIC has unidirectional streams, we might want to be able to accommodate them in our transport API. Do we want to handle them fundamentally different from a half-closed connection? This should match the decisions made in .\nSide note: we don't support half-closed connections to begin with. As for unidirectional streams, is this the final decision in QUIC's design, or might it change? I'd suggest postponing this until we can be sure.\nNAME regarding half-closed connections: we don't have any top-level state in the connection for being half-closed. 
However, being able to express that a message is final in a given direction when sending allows the same functionality. As I've discussed before, we absolutely need the ability to express sending a TCP FIN, but I whole-heartedly agree it doesn't need to be a top level concept. I think this is somewhat different from half-closing, since a unidirectional stream never allows sending on one end. You could imagine that it is a connection object that returns an error if you ever try to send on it, etc. If you wanted it to look like half-closing, you could open a connection (stream), and then mark it complete in a direction immediately? But that's a bit odd.\nAll ok, I agree with your suggestion to fin-mark a message\nI agree with NAME regarding half-closed. My suggestion is to explicitly state that connections can be uni- or bi-directional and to add a method to query that (cansend / canrecv). This aligns nicely with final messages after which can_send becomes false. I can do a PR for that.\nThanks! I like this\nStill missing unidirectional stream for multicast. \u2013 please continue discussion of multicast aspects in The connection properties (can send/receive data) are still a little under-specified - please discuss in detail in \u2013 we should fix this together with .\nWe need unidirectional connections. You know, for multicast. And ICN. And unidirectional streaming protocols.\nThis is a dup of , which has more discussion\nI am re-opening this ticket as only dealt with unidirectional streams in unicast context (added with PR ): Current status: The current model for unidirectional streams supports half-closed connections and QUIC style unidirectional streams so far. We don't know yet whether it is sufficient to have canread/canwrite property on the connection level to model unidirectional connections We don't have pre-connection methods to create multicast sessions yet\nthere's a selection property for this, tagged discuss, which we haven't discussed yet. 
I'm moving this down to the appendix as part of my restructuring in my work on\nIf help is still wanted here, I'm maybe up for an attempt or 2 at adding multicast support. Has there been some prior discussion I can review about rough ideas on pre-connection methods for multicast sessions, or should I just try making something up?\n\nok, with the caveat that we need to keep multicast as an open issue\nLGTM About the multicast concern: I agree that it would be good if this could function for multicast reception too, and I couldn't tell if this is suitable for multicast. I suspect that multicast support is much harder anyway - see RFC 3678... so for now, this is a good step forward IMO, and let's discuss how this relates to multicast when we see a concrete text proposal that addresses it.", "new_text": "single underlying transport connection where possible. This is not a strict requirement. The default is to not have this option. 5.2.7. Type: Boolean"}
{"id": "q-en-api-drafts-f919ccf5b9589cef92b499c49d45bec9fe3cd9a5edd83f91b1d158a425cdd233", "old_text": "This property applies to Connections and Connection Groups. The default is to have this option. 5.2.7. Type: Boolean", "comments": "This closed As we have no methods on a connection object that query properties, cansend/canreceive also became connection properties. As with all the other connection properties, these are indeed a little underspecified. I want to keep it that way till we fix it for all connection properties with and\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? I would prefer we merge it now and make a new issue for the question whether the uni-directional connection should be more reflected in the connection. I am not particularly fond of keeping it a selection property but would like to have the concept of unidirectional connections rather sooner than later. Our consensus in was that unidirectional connection should use the same abstraction for QUIC unidirectional streams, multicast sources/sinks, and half-closed connections \u2013 despite being constructed in completely different ways. The Selection property is just for the QUIC style ones\u2026\n\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? Happy to move the discussion back over to the issue if you like. Will continue the thread here for now though since you've got points to address below. fine, but keep open then, because adding the selection property is possibly-necessary but definitely-not-sufficient to fully address unidirectional connection semantics. good, I agree with this. then I'm confused why this PR purports to since it only addresses part of the issue...?\nNAME agreed. 
The question (and discussion) was about including some support for unidirectional connections (with focus on QUIC and half-closed), but omitted details on multicast, as this has not landed yet. So indeed, this pull-request does everything requested by , but does not fully capture the topic. I will move the open issues (Multicast support, is the Selection Property enough, find some place for more explanation of Uni/Bidi Streams) to once this PR is merged.\nIn Section 9 \u2013 \"Setting and Querying of Connection Properties\" \u2013 the properties of an established connection and path are heavily underspecified. Should be fixed along with .\nI'm working on this issue within our pull request for , using the terminology as simplified there. For Section 9, I'm proposing the following: An application can query Connection Properties on a connection. Connection Properties include the status of the connection and whether it is possible to send and to receive data on it. (One of them might not be possible, e.g., in a unidirectional connection.) Additionally, Connection Properties include the Connection's Transport Properties. Transport Properties include Selection Properties, so the application can know which Protocol Selection Properties had been specified, and which ones are actually supported by the protocol that has been chosen. For Establishing connections (i.e., we are not done racing yet, the final protocols have not been selected yet), the application gets the Selection Properties that had been specified on the Preconnection earlier. For Established, Closing, or Closed connections (i.e., the protocols have been selected), the application learns whether the Selection Properties are actually provided by the selected protocols. For example, if the application specified earlier, \"Control Checksum Coverage: Prefer\", now it will get \"Control checksum coverage: True\", indicating that on this connection it can indeed control the checksum coverage. 
Transport Properties also include Protocol Properties (Generic and Specific). These Protocol Properties can still be modified by the application. Basically, Protocol Properties are the only thing that the application can still set, everything else it can only query at this point. Finally, the application can query properties of the path. Not sure how well we have to specify them, probably this part will be implementation-specific just like, e.g., the Interface Instance or Type. Does everyone agree, or is there anything to discuss here? For me this is not a big change, it's just putting what was already there in terms of . The only thing I'm wondering is whether the Protocol Selection Properties of the finally chosen protocols are still \"Selection Properties\", as the selection is already done. That's why they were called \"Transport Features\" before, but now I want less terms and they're almost the same thing...\nOK - so I commented on these things before - but in these things I think I agree. I'd personally prefer to keep the features as attributes implemented in the transport (etc) and the properties as the things requested. Indeed this begs the question about what to call the resulting transport when you selected properties and now get a transport. I think still calling them (for consistency) properties seems really good to me. :-).\nAs QUIC has unidirectional streams, we might want to be able to accommodate them in our transport API. Do we want to handle them fundamentally different from a half-closed connection? This should match the decisions made in .\nSide note: we don't support half-closed connections to begin with. As for unidirectional streams, is this the final decision in QUIC's design, or might it change? I'd suggest postponing this until we can be sure.\nNAME regarding half-closed connections: we don't have any top-level state in the connection for being half-closed. 
However, being able to express that a message is final in a given direction when sending allows the same functionality. As I've discussed before, we absolutely need the ability to express sending a TCP FIN, but I whole-heartedly agree it doesn't need to be a top level concept. I think this is somewhat different from half-closing, since a unidirectional stream never allows sending on one end. You could imagine that it is a connection object that returns an error if you ever try to send on it, etc. If you wanted it to look like half-closing, you could open a connection (stream), and then mark it complete in a direction immediately? But that's a bit odd.\nAll ok, I agree with your suggestion to fin-mark a message\nI agree with NAME regarding half-closed. My suggestion is to explicitly state that connections can be uni- or bi-directional and to add a method to query that (cansend / canrecv). This aligns nicely with final messages after which can_send becomes false. I can do a PR for that.\nThanks! I like this\nStill missing unidirectional stream for multicast. \u2013 please continue discussion of multicast aspects in The connection properties (can send/receive data) are still a little under-specified - please discuss in detail in \u2013 we should fix this together with .\nWe need unidirectional connections. You know, for multicast. And ICN. And unidirectional streaming protocols.\nThis is a dup of , which has more discussion\nI am re-opening this ticket as only dealt with unidirectional streams in unicast context (added with PR ): Current status: The current model for unidirectional streams supports half-closed connections and QUIC style unidirectional streams so far. We don't know yet whether it is sufficient to have canread/canwrite property on the connection level to model unidirectional connections We don't have pre-connection methods to create multicast sessions yet\nthere's a selection property for this, tagged discuss, which we haven't discussed yet. 
I'm moving this down to the appendix as part of my restructuring in my work on\nIf help is still wanted here, I'm maybe up for an attempt or 2 at adding multicast support. Has there been some prior discussion I can review about rough ideas on pre-connection methods for multicast sessions, or should I just try making something up?\n\nok, with the caveat that we need to keep multicast as an open issue\nLGTM About the multicast concern: I agree that it would be good if this could function for multicast reception too, and I couldn't tell if this is suitable for multicast. I suspect that multicast support is much harder anyway - see RFC 3678... so for now, this is a good step forward IMO, and let's discuss how this relates to multicast when we see a concrete text proposal that addresses it.", "new_text": "This property applies to Connections and Connection Groups. The default is to have this option. 5.2.8. Type: Boolean"}
{"id": "q-en-api-drafts-f919ccf5b9589cef92b499c49d45bec9fe3cd9a5edd83f91b1d158a425cdd233", "old_text": "This property applies to Connections and Connection Groups. The default is not to have this option. 5.2.8. Type: Preference", "comments": "This closed As we have no methods on a connection object that query properties, cansend/canreceive also became connection properties. As with all the other connection properties, these are indeed a little underspecified. I want to keep it that way till we fix it for all connection properties with and\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? I would prefer we merge it now and make a new issue for the question whether the uni-directional connection should be more reflected in the connection. I am not particularly fond of keeping it a selection property but would like to have the concept of unidirectional connections rather sooner than later. Our consensus in was that unidirectional connection should use the same abstraction for QUIC unidirectional streams, multicast sources/sinks, and half-closed connections \u2013 despite being constructed in completely different ways. The Selection property is just for the QUIC style ones\u2026\n\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? Happy to move the discussion back over to the issue if you like. Will continue the thread here for now though since you've got points to address below. fine, but keep open then, because adding the selection property is possibly-necessary but definitely-not-sufficient to fully address unidirectional connection semantics. good, I agree with this. then I'm confused why this PR purports to since it only addresses part of the issue...?\nNAME agreed. 
The question (and discussion) was about including some support for unidirectional connections (with focus on QUIC and half-closed), but omitted details on multicast, as this has not landed yet. So indeed, this pull-request does everything requested by , but does not fully capture the topic. I will move the open issues (Multicast support, is the Selection Property enough, find some place for more explanation of Uni/Bidi Streams) to once this PR is merged.\nIn Section 9 \u2013 \"Setting and Querying of Connection Properties\" \u2013 the properties of an established connection and path are heavily underspecified. Should be fixed along with .\nI'm working on this issue within our pull request for , using the terminology as simplified there. For Section 9, I'm proposing the following: An application can query Connection Properties on a connection. Connection Properties include the status of the connection and whether it is possible to send and to receive data on it. (One of them might not be possible, e.g., in a unidirectional connection.) Additionally, Connection Properties include the Connection's Transport Properties. Transport Properties include Selection Properties, so the application can know which Protocol Selection Properties had been specified, and which ones are actually supported by the protocol that has been chosen. For Establishing connections (i.e., we are not done racing yet, the final protocols have not been selected yet), the application gets the Selection Properties that had been specified on the Preconnection earlier. For Established, Closing, or Closed connections (i.e., the protocols have been selected), the application learns whether the Selection Properties are actually provided by the selected protocols. For example, if the application specified earlier, \"Control Checksum Coverage: Prefer\", now it will get \"Control checksum coverage: True\", indicating that on this connection it can indeed control the checksum coverage. 
Transport Properties also include Protocol Properties (Generic and Specific). These Protocol Properties can still be modified by the application. Basically, Protocol Properties are the only thing that the application can still set, everything else it can only query at this point. Finally, the application can query properties of the path. Not sure how well we have to specify them, probably this part will be implementation-specific just like, e.g., the Interface Instance or Type. Does everyone agree, or is there anything to discuss here? For me this is not a big change, it's just putting what was already there in terms of . The only thing I'm wondering is whether the Protocol Selection Properties of the finally chosen protocols are still \"Selection Properties\", as the selection is already done. That's why they were called \"Transport Features\" before, but now I want less terms and they're almost the same thing...\nOK - so I commented on these things before - but in these things I think I agree. I'd personally prefer to keep the features as attributes implemented in the transport (etc) and the properties as the things requested. Indeed this begs the question about what to call the resulting transport when you selected properties and now get a transport. I think still calling them (for consistency) properties seems really good to me. :-).\nAs QUIC has unidirectional streams, we might want to be able to accommodate them in our transport API. Do we want to handle them fundamentally different from a half-closed connection? This should match the decisions made in .\nSide note: we don't support half-closed connections to begin with. As for unidirectional streams, is this the final decision in QUIC's design, or might it change? I'd suggest postponing this until we can be sure.\nNAME regarding half-closed connections: we don't have any top-level state in the connection for being half-closed. 
However, being able to express that a message is final in a given direction when sending allows the same functionality. As I've discussed before, we absolutely need the ability to express sending a TCP FIN, but I whole-heartedly agree it doesn't need to be a top level concept. I think this is somewhat different from half-closing, since a unidirectional stream never allows sending on one end. You could imagine that it is a connection object that returns an error if you ever try to send on it, etc. If you wanted it to look like half-closing, you could open a connection (stream), and then mark it complete in a direction immediately? But that's a bit odd.\nAll ok, I agree with your suggestion to fin-mark a message\nI agreee with NAME regarding half-closed. My suggestion is to explicitly state that connections can be uni- or bi-directional and to add a method to query that (cansend / canrecv). This aligns nicely with final messages after which can_send becomes false. I can do a PR for that.\nThanks! I like this\nStill missing unidirectional stream for multicast. \u2013 please continue discussion of multicast aspects in The connection properties (can send/receive data) are still a little under-specified - please discuss in details in \u2013 we should fix this together with .\nWe need unidirectional connections. You know, for multicast. And ICN. And unidirectional streaming protocols.\nThis is a dup of , which has more discussion\nI am re-opening this ticket as only dealt with unidirectional streams in unicast context (added with PR ): Current status: The current model for unidirectional streams supports half-closed connections and QUIC style unidirectional streams so far. We don't know yet whether it is sufficient to have canread/canwrite property on the connection level to model unidirectional connections We don't have pre-connection methods to create multicast sessions yet\nthere's a selection property for this, tagged discuss, which we haven't discussed yet. 
I'm moving this down to the appendix as part of my restructuring in my work on\nIf help is still wanted here, I'm maybe up for an attempt or 2 at adding multicast support. Has there been some prior discussion I can review about rough ideas on pre-connection methods for multicast sessions, or should I just try making something up?\n\nok, with the caveat that we need to keep multicast as an open issueLGTM About the multicast concern: I agree that it would be good if this could function for multicast reception too, and I couldn't tell if this is suitable for multicast. I suspect that multicast support is much harder anyway - see RFC 3678... so for now, this is a good step forward IMO, and let's discuss how this relates to multicast when we see a concrete text proposal that addresses it.", "new_text": "This property applies to Connections and Connection Groups. The default is not to have this option. 5.2.9. Type: Preference"}
{"id": "q-en-api-drafts-f919ccf5b9589cef92b499c49d45bec9fe3cd9a5edd83f91b1d158a425cdd233", "old_text": "The default is full checksum coverage without being able to change it, and requiring a checksum when receiving. 5.2.9. Type: Tuple (Enumeration, Preference)", "comments": "This closed As we have no methods on a connection object that query properties, cansend/canreceive also became connection properties. As with all the other connection properties, these are indeed a little underspecified. I want to keep it that way till we fix it for all connection properties with and\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? I would prefer we merge it now and make a new issue for the question whether the uni-directional connection should be more reflected in the connection. I am not particularly found on keeping it a selection property but would like to have the concept of unidirectional connections rather sooner than later. Our consensus in was that unidirectional connection should use the same abstraction for QUIC unidirectional streams, multicast sources/sinks, and half-closed connections \u2013 despite they are constructed in completely different ways. The Selection property is just for the QUIC style ones\u2026\n\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? Happy to move the discussion back over to the issue if you like. Will continue the thread here for now though since you've got points to address below. fine, but keep open then, because adding the selection property is possibly-necessary but definitely-not-sufficient to fully address unidirectional connection semantics. good, I agree with this. then I'm confused why this PR purports to since it only addresses part of the issue...?\nNAME agreed. 
The question (and discussion) was about support for including some support for unidirectional connections (with focus on QUIC and half-closed), but omitted details on multicast, as this has not laned yet. So indeed, this pull-requests does everything requested by , but does not fully capture the topic. I will move the open issues (Multicast support, is the Selection Property enough, find some place for more explanation of Uni/Bidi Streams) to once this PR is merged.\nIn Section 9 \u2013 \"Setting and Querying of Connection Properties\" \u2013 the properties of an established connection and path are heavily underspecified. Should be fixed along with .\nI'm working on this issue within our pull request for , using the terminology as simplified there. For Section 9, I'm proposing the following: An application can query Connection Properties on a connection. Connection Properties include the status of the connection and whether it is possible to send and to receive data on it. (One of them might not be possible, e.g., in a unidirectional connection.) Additionally, Connection Properties include the Connection's Transport Properties. Transport Properties include Selection Properties, so the application can know which Protocol Selection Properties had been specified, and which ones are actually supported by the protocol that has been chosen. For Establishing connections (i.e., we are not done racing yet, the final protocols have not been selected yet), the application gets the Selection Properties that had been specified on the Preconnection earlier. For Established, Closing, or Closed connections (i.e., the protocols have been selected), the application learns whether the Selection Properties are actually provided by the selected protocols. For example, if the application specified earlier, \"Control Checksum Coverage: Prefer\", now it will get \"Control checksum coverage: True\", indicating that on this connection it can indeed control the checksum coverage. 
Transport Properties also include Protocol Properties (Generic and Specific). These Protocol Properties can still be modified by the application. Basically, Protocol Properties are the only thing that the application can still set, everything else it can only query at this point. Finally, the application can query properties of the path. Not sure how well we have to specify them, probably this part will be implementation-specific just like, e.g., the Interface Instance or Type. Does everyone agree, or is there anything to discuss here? For me this is not a big change, it's just putting what was already there in terms of . The only thing I'm wondering if whether the Protocol Selection Properties of the finally chosen protocols are still \"Selection Properties\", as the selection is already done. That's why they were called \"Transport Features\" before, but now I want less terms and they're almost the same thing...\nOK - so I commented on these things before - but in these things I think I agree. I'd personally prefer to keep the features as attributes implemented in the transport (etc) and the properties as the things requested. Indeed this begs the question about what to call the resulting transport when you selected properties and now get a transport. I think still calling them (for consistency) properties seems really good to me. :-).\nAs QUIC has unidirectional streams, we might want to be able to accommodate them in our transport API. Do we want to handle them fundamentally different from a half-closed connection? This should match the decisions made in .\nSide note: we don't support half-closed connections to begin with. As for unidirectional streams, is this the final decision in QUIC's design, or might it change? I'd suggest postponing this until we can be sure.\nNAME regarding half-closed connections: we don't have any top-level state in the connection for being half-closed. 
However, being able to express that a message is final in a given direction when sending allows the same functionality. As I've discussed before, we absolutely need the ability to express sending a TCP FIN, but I whole-heartedly agree it doesn't need to be a top level concept. I think this is somewhat different from half-closing, since a unidirectional stream never allows sending on one end. You could imagine that it is a connection object that returns an error if you ever try to send on it, etc. If you wanted it to look like half-closing, you could open a connection (stream), and then mark it complete in a direction immediately? But that's a bit odd.\nAll ok, I agree with your suggestion to fin-mark a message\nI agreee with NAME regarding half-closed. My suggestion is to explicitly state that connections can be uni- or bi-directional and to add a method to query that (cansend / canrecv). This aligns nicely with final messages after which can_send becomes false. I can do a PR for that.\nThanks! I like this\nStill missing unidirectional stream for multicast. \u2013 please continue discussion of multicast aspects in The connection properties (can send/receive data) are still a little under-specified - please discuss in details in \u2013 we should fix this together with .\nWe need unidirectional connections. You know, for multicast. And ICN. And unidirectional streaming protocols.\nThis is a dup of , which has more discussion\nI am re-opening this ticket as only dealt with unidirectional streams in unicast context (added with PR ): Current status: The current model for unidirectional streams supports half-closed connections and QUIC style unidirectional streams so far. We don't know yet whether it is sufficient to have canread/canwrite property on the connection level to model unidirectional connections We don't have pre-connection methods to create multicast sessions yet\nthere's a selection property for this, tagged discuss, which we haven't discussed yet. 
I'm moving this down to the appendix as part of my restructuring in my work on\nIf help is still wanted here, I'm maybe up for an attempt or 2 at adding multicast support. Has there been some prior discussion I can review about rough ideas on pre-connection methods for multicast sessions, or should I just try making something up?\n\nok, with the caveat that we need to keep multicast as an open issueLGTM About the multicast concern: I agree that it would be good if this could function for multicast reception too, and I couldn't tell if this is suitable for multicast. I suspect that multicast support is much harder anyway - see RFC 3678... so for now, this is a good step forward IMO, and let's discuss how this relates to multicast when we see a concrete text proposal that addresses it.", "new_text": "The default is full checksum coverage without being able to change it, and requiring a checksum when receiving. 5.2.10. Type: Tuple (Enumeration, Preference)"}
{"id": "q-en-api-drafts-f919ccf5b9589cef92b499c49d45bec9fe3cd9a5edd83f91b1d158a425cdd233", "old_text": "in the system policy. The valid values for the access network interface kinds are implementation specific. 5.2.10. Type: Enumeration", "comments": "This closed As we have no methods on a connection object that query properties, cansend/canreceive also became connection properties. As with all the other connection properties, these are indeed a little underspecified. I want to keep it that way till we fix it for all connection properties with and\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? I would prefer we merge it now and make a new issue for the question whether the uni-directional connection should be more reflected in the connection. I am not particularly found on keeping it a selection property but would like to have the concept of unidirectional connections rather sooner than later. Our consensus in was that unidirectional connection should use the same abstraction for QUIC unidirectional streams, multicast sources/sinks, and half-closed connections \u2013 despite they are constructed in completely different ways. The Selection property is just for the QUIC style ones\u2026\n\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? Happy to move the discussion back over to the issue if you like. Will continue the thread here for now though since you've got points to address below. fine, but keep open then, because adding the selection property is possibly-necessary but definitely-not-sufficient to fully address unidirectional connection semantics. good, I agree with this. then I'm confused why this PR purports to since it only addresses part of the issue...?\nNAME agreed. 
The question (and discussion) was about support for including some support for unidirectional connections (with focus on QUIC and half-closed), but omitted details on multicast, as this has not laned yet. So indeed, this pull-requests does everything requested by , but does not fully capture the topic. I will move the open issues (Multicast support, is the Selection Property enough, find some place for more explanation of Uni/Bidi Streams) to once this PR is merged.\nIn Section 9 \u2013 \"Setting and Querying of Connection Properties\" \u2013 the properties of an established connection and path are heavily underspecified. Should be fixed along with .\nI'm working on this issue within our pull request for , using the terminology as simplified there. For Section 9, I'm proposing the following: An application can query Connection Properties on a connection. Connection Properties include the status of the connection and whether it is possible to send and to receive data on it. (One of them might not be possible, e.g., in a unidirectional connection.) Additionally, Connection Properties include the Connection's Transport Properties. Transport Properties include Selection Properties, so the application can know which Protocol Selection Properties had been specified, and which ones are actually supported by the protocol that has been chosen. For Establishing connections (i.e., we are not done racing yet, the final protocols have not been selected yet), the application gets the Selection Properties that had been specified on the Preconnection earlier. For Established, Closing, or Closed connections (i.e., the protocols have been selected), the application learns whether the Selection Properties are actually provided by the selected protocols. For example, if the application specified earlier, \"Control Checksum Coverage: Prefer\", now it will get \"Control checksum coverage: True\", indicating that on this connection it can indeed control the checksum coverage. 
Transport Properties also include Protocol Properties (Generic and Specific). These Protocol Properties can still be modified by the application. Basically, Protocol Properties are the only thing that the application can still set, everything else it can only query at this point. Finally, the application can query properties of the path. Not sure how well we have to specify them, probably this part will be implementation-specific just like, e.g., the Interface Instance or Type. Does everyone agree, or is there anything to discuss here? For me this is not a big change, it's just putting what was already there in terms of . The only thing I'm wondering if whether the Protocol Selection Properties of the finally chosen protocols are still \"Selection Properties\", as the selection is already done. That's why they were called \"Transport Features\" before, but now I want less terms and they're almost the same thing...\nOK - so I commented on these things before - but in these things I think I agree. I'd personally prefer to keep the features as attributes implemented in the transport (etc) and the properties as the things requested. Indeed this begs the question about what to call the resulting transport when you selected properties and now get a transport. I think still calling them (for consistency) properties seems really good to me. :-).\nAs QUIC has unidirectional streams, we might want to be able to accommodate them in our transport API. Do we want to handle them fundamentally different from a half-closed connection? This should match the decisions made in .\nSide note: we don't support half-closed connections to begin with. As for unidirectional streams, is this the final decision in QUIC's design, or might it change? I'd suggest postponing this until we can be sure.\nNAME regarding half-closed connections: we don't have any top-level state in the connection for being half-closed. 
However, being able to express that a message is final in a given direction when sending allows the same functionality. As I've discussed before, we absolutely need the ability to express sending a TCP FIN, but I whole-heartedly agree it doesn't need to be a top level concept. I think this is somewhat different from half-closing, since a unidirectional stream never allows sending on one end. You could imagine that it is a connection object that returns an error if you ever try to send on it, etc. If you wanted it to look like half-closing, you could open a connection (stream), and then mark it complete in a direction immediately? But that's a bit odd.\nAll ok, I agree with your suggestion to fin-mark a message\nI agreee with NAME regarding half-closed. My suggestion is to explicitly state that connections can be uni- or bi-directional and to add a method to query that (cansend / canrecv). This aligns nicely with final messages after which can_send becomes false. I can do a PR for that.\nThanks! I like this\nStill missing unidirectional stream for multicast. \u2013 please continue discussion of multicast aspects in The connection properties (can send/receive data) are still a little under-specified - please discuss in details in \u2013 we should fix this together with .\nWe need unidirectional connections. You know, for multicast. And ICN. And unidirectional streaming protocols.\nThis is a dup of , which has more discussion\nI am re-opening this ticket as only dealt with unidirectional streams in unicast context (added with PR ): Current status: The current model for unidirectional streams supports half-closed connections and QUIC style unidirectional streams so far. We don't know yet whether it is sufficient to have canread/canwrite property on the connection level to model unidirectional connections We don't have pre-connection methods to create multicast sessions yet\nthere's a selection property for this, tagged discuss, which we haven't discussed yet. 
I'm moving this down to the appendix as part of my restructuring in my work on\nIf help is still wanted here, I'm maybe up for an attempt or 2 at adding multicast support. Has there been some prior discussion I can review about rough ideas on pre-connection methods for multicast sessions, or should I just try making something up?\n\nok, with the caveat that we need to keep multicast as an open issueLGTM About the multicast concern: I agree that it would be good if this could function for multicast reception too, and I couldn't tell if this is suitable for multicast. I suspect that multicast support is much harder anyway - see RFC 3678... so for now, this is a good step forward IMO, and let's discuss how this relates to multicast when we see a concrete text proposal that addresses it.", "new_text": "in the system policy. The valid values for the access network interface kinds are implementation specific. 5.2.11. Type: Enumeration"}
{"id": "q-en-api-drafts-f919ccf5b9589cef92b499c49d45bec9fe3cd9a5edd83f91b1d158a425cdd233", "old_text": "The status of the Connection, which can be one of the following: Establishing, Established, Closing, or Closed. Transport Features of the protocols that conform to the Required and Prohibited Transport Preferences, which might be selected by the transport system during Establishment. These features", "comments": "This closed As we have no methods on a connection object that query properties, cansend/canreceive also became connection properties. As with all the other connection properties, these are indeed a little underspecified. I want to keep it that way till we fix it for all connection properties with and\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? I would prefer we merge it now and make a new issue for the question whether the uni-directional connection should be more reflected in the connection. I am not particularly found on keeping it a selection property but would like to have the concept of unidirectional connections rather sooner than later. Our consensus in was that unidirectional connection should use the same abstraction for QUIC unidirectional streams, multicast sources/sinks, and half-closed connections \u2013 despite they are constructed in completely different ways. The Selection property is just for the QUIC style ones\u2026\n\nAehm\u2026 that comes surprising \u2013 NAME why didn't you bring this up in ? Do you think this pull-request is the right place to discuss this issue? Happy to move the discussion back over to the issue if you like. Will continue the thread here for now though since you've got points to address below. fine, but keep open then, because adding the selection property is possibly-necessary but definitely-not-sufficient to fully address unidirectional connection semantics. good, I agree with this. 
then I'm confused why this PR purports to since it only addresses part of the issue...?\nNAME agreed. The question (and discussion) was about support for including some support for unidirectional connections (with focus on QUIC and half-closed), but omitted details on multicast, as this has not laned yet. So indeed, this pull-requests does everything requested by , but does not fully capture the topic. I will move the open issues (Multicast support, is the Selection Property enough, find some place for more explanation of Uni/Bidi Streams) to once this PR is merged.\nIn Section 9 \u2013 \"Setting and Querying of Connection Properties\" \u2013 the properties of an established connection and path are heavily underspecified. Should be fixed along with .\nI'm working on this issue within our pull request for , using the terminology as simplified there. For Section 9, I'm proposing the following: An application can query Connection Properties on a connection. Connection Properties include the status of the connection and whether it is possible to send and to receive data on it. (One of them might not be possible, e.g., in a unidirectional connection.) Additionally, Connection Properties include the Connection's Transport Properties. Transport Properties include Selection Properties, so the application can know which Protocol Selection Properties had been specified, and which ones are actually supported by the protocol that has been chosen. For Establishing connections (i.e., we are not done racing yet, the final protocols have not been selected yet), the application gets the Selection Properties that had been specified on the Preconnection earlier. For Established, Closing, or Closed connections (i.e., the protocols have been selected), the application learns whether the Selection Properties are actually provided by the selected protocols. 
For example, if the application specified earlier, \"Control Checksum Coverage: Prefer\", now it will get \"Control checksum coverage: True\", indicating that on this connection it can indeed control the checksum coverage. Transport Properties also include Protocol Properties (Generic and Specific). These Protocol Properties can still be modified by the application. Basically, Protocol Properties are the only thing that the application can still set, everything else it can only query at this point. Finally, the application can query properties of the path. Not sure how well we have to specify them, probably this part will be implementation-specific just like, e.g., the Interface Instance or Type. Does everyone agree, or is there anything to discuss here? For me this is not a big change, it's just putting what was already there in terms of . The only thing I'm wondering if whether the Protocol Selection Properties of the finally chosen protocols are still \"Selection Properties\", as the selection is already done. That's why they were called \"Transport Features\" before, but now I want less terms and they're almost the same thing...\nOK - so I commented on these things before - but in these things I think I agree. I'd personally prefer to keep the features as attributes implemented in the transport (etc) and the properties as the things requested. Indeed this begs the question about what to call the resulting transport when you selected properties and now get a transport. I think still calling them (for consistency) properties seems really good to me. :-).\nAs QUIC has unidirectional streams, we might want to be able to accommodate them in our transport API. Do we want to handle them fundamentally different from a half-closed connection? This should match the decisions made in .\nSide note: we don't support half-closed connections to begin with. As for unidirectional streams, is this the final decision in QUIC's design, or might it change? 
I'd suggest postponing this until we can be sure.\nNAME regarding half-closed connections: we don't have any top-level state in the connection for being half-closed. However, being able to express that a message is final in a given direction when sending allows the same functionality. As I've discussed before, we absolutely need the ability to express sending a TCP FIN, but I whole-heartedly agree it doesn't need to be a top level concept. I think this is somewhat different from half-closing, since a unidirectional stream never allows sending on one end. You could imagine that it is a connection object that returns an error if you ever try to send on it, etc. If you wanted it to look like half-closing, you could open a connection (stream), and then mark it complete in a direction immediately? But that's a bit odd.\nAll ok, I agree with your suggestion to fin-mark a message\nI agreee with NAME regarding half-closed. My suggestion is to explicitly state that connections can be uni- or bi-directional and to add a method to query that (cansend / canrecv). This aligns nicely with final messages after which can_send becomes false. I can do a PR for that.\nThanks! I like this\nStill missing unidirectional stream for multicast. \u2013 please continue discussion of multicast aspects in The connection properties (can send/receive data) are still a little under-specified - please discuss in details in \u2013 we should fix this together with .\nWe need unidirectional connections. You know, for multicast. And ICN. And unidirectional streaming protocols.\nThis is a dup of , which has more discussion\nI am re-opening this ticket as only dealt with unidirectional streams in unicast context (added with PR ): Current status: The current model for unidirectional streams supports half-closed connections and QUIC style unidirectional streams so far. 
We don't know yet whether it is sufficient to have canread/canwrite property on the connection level to model unidirectional connections We don't have pre-connection methods to create multicast sessions yet\nthere's a selection property for this, tagged discuss, which we haven't discussed yet. I'm moving this down to the appendix as part of my restructuring in my work on\nIf help is still wanted here, I'm maybe up for an attempt or 2 at adding multicast support. Has there been some prior discussion I can review about rough ideas on pre-connection methods for multicast sessions, or should I just try making something up?\n\nok, with the caveat that we need to keep multicast as an open issue\nLGTM About the multicast concern: I agree that it would be good if this could function for multicast reception too, and I couldn't tell if this is suitable for multicast. I suspect that multicast support is much harder anyway - see RFC 3678... so for now, this is a good step forward IMO, and let's discuss how this relates to multicast when we see a concrete text proposal that addresses it.", "new_text": "The status of the Connection, which can be one of the following: Establishing, Established, Closing, or Closed. Whether the connection can be used to send data. A connection cannot be used for sending if the connection was created unidirectional receive only or if a message with the final property was sent over this connection. Whether the connection can be used to receive data. A connection cannot be used for reading if the connection was created unidirectional send only or if a message with the final property was received. The latter is only supported by certain transport protocols, e.g., by TCP as a half-closed connection. Transport Features of the protocols that conform to the Required and Prohibited Transport Preferences, which might be selected by the transport system during Establishment. These features"}
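The direction rules quoted in the record above (a connection created unidirectional, or one whose final message has been sent, stops being writable) can be sketched as a toy Python model. The `Connection` class, the `final` flag, and the method names are illustrative assumptions, not the normative TAPS interface.

```python
# Toy model of uni-/bi-directional connections and final-message half-close.
class Connection:
    def __init__(self, direction="bidirectional"):
        # direction: "bidirectional", "send-only", or "receive-only"
        self._can_send = direction in ("bidirectional", "send-only")
        self._can_recv = direction in ("bidirectional", "receive-only")

    def can_send(self):
        return self._can_send

    def can_recv(self):
        return self._can_recv

    def send(self, data, final=False):
        if not self._can_send:
            raise RuntimeError("connection is not writable")
        # A message marked final closes the sending direction,
        # mirroring e.g. a TCP FIN (half-close).
        if final:
            self._can_send = False


conn = Connection()
conn.send(b"last chunk", final=True)
assert not conn.can_send() and conn.can_recv()
```

A receive-only connection would simply start with `can_send()` returning false, which matches the "returns an error if you ever try to send on it" behavior discussed in the thread.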
{"id": "q-en-api-drafts-32b20825e2d45947ceab035d5d27d0094e3b71e51d76055261ff7f7040e55578", "old_text": "Changing one of these Protocol Properties on one Connection in the group changes it for all others. Per-Message Protocol Properties, however, are not entangled. For example, changing \"Timeout for aborting Connection\" (see timeout) on one Connection in a group will automatically change this Protocol Property for all Connections in the group in the same way. However, changing \"Lifetime\" (see msg- lifetime) of a Message will only affect a single Message on a single Connection, entangled or not.", "comments": "This is the second step towards applying our new Properties Classification. It moves up the Connection Properties and annotates most places with stuff to do.\nAs discussed in\nThanks for moving this forward! I only found a few nits. Also, I have a question about whether two of these Properties should maybe be changed to be Selection Properties, but that's more of a discussion and not something that needs to be done in this URL modulo NAME commentsI carefully checked it, but I find it very hard to be sure that this is 100% ok (as is usually the case IMO with diff's about text blocks being moved around). My suggestion is to land it and see.", "new_text": "Changing one of these Protocol Properties on one Connection in the group changes it for all others. Per-Message Protocol Properties, however, are not entangled. For example, changing \"Timeout for aborting Connection\" (see conn-timeout) on one Connection in a group will automatically change this Protocol Property for all Connections in the group in the same way. However, changing \"Lifetime\" (see msg- lifetime) of a Message will only affect a single Message on a single Connection, entangled or not."}
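The entanglement rule in the record above (changing a Connection Property on one member of a Connection Group changes it for all members, while per-Message properties stay independent) can be sketched with shared group state. The `ConnectionGroup` and `clone` shapes here are illustrative assumptions, not the normative API.

```python
# Toy model of entangled Connection Properties in a Connection Group.
class ConnectionGroup:
    def __init__(self):
        self.props = {"connTimeout": 30}    # shared, entangled state


class Connection:
    def __init__(self, group=None):
        self.group = group or ConnectionGroup()

    def clone(self):
        # A clone joins the same group and therefore shares its properties.
        return Connection(self.group)

    def set_property(self, prop, value):
        self.group.props[prop] = value      # entangled: changes all members

    def get_property(self, prop):
        return self.group.props[prop]


a = Connection()
b = a.clone()
a.set_property("connTimeout", 10)
assert b.get_property("connTimeout") == 10
```

A per-Message property such as Lifetime would instead live on the individual Message object, untouched by the group.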
{"id": "q-en-api-drafts-32b20825e2d45947ceab035d5d27d0094e3b71e51d76055261ff7f7040e55578", "old_text": "9. The application can set per-connection Protocol Properties (see protocol-props). Certain Procotol Properties may be read-only, on a protocol- and property-specific basis. ~~~ Connection.SetProperty(property, value) ~~~ At any point, the application can query Connection Properties. ~~~ ConnectionProperties := Connection.GetProperties() ~~~", "comments": "This is the second step towards applying our new Properties Classification. It moves up the Connection Properties and annotates most places with stuff to do.\nAs discussed in\nThanks for moving this forward! I only found a few nits. Also, I have a question about whether two of these Properties should maybe be changed to be Selection Properties, but that's more of a discussion and not something that needs to be done in this URL modulo NAME commentsI carefully checked it, but I find it very hard to be sure that this is 100% ok (as is usually the case IMO with diff's about text blocks being moved around). My suggestion is to land it and see.", "new_text": "9. 9.1. The application can set per-connection Properties. Certain Connection Properties may be read-only, on a protocol- and property- specific basis. ~~~ Connection.SetProperty(property, value) ~~~ At any point, the application can query Connection Properties. ~~~ ConnectionProperties := Connection.GetProperties() ~~~"}
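The `Connection.SetProperty(property, value)` and `Connection.GetProperties()` calls in the record above, including the note that certain properties may be read-only, can be sketched as follows. The property names and the read-only set are illustrative assumptions.

```python
# Minimal sketch of SetProperty / GetProperties with read-only properties.
class Connection:
    READ_ONLY = {"maxMessageSize"}          # e.g. reported by the stack

    def __init__(self):
        self._props = {"connTimeout": 30, "maxMessageSize": 65535}

    def set_property(self, prop, value):
        # Certain properties may be read-only on a per-protocol basis.
        if prop in self.READ_ONLY:
            raise ValueError(prop + " is read-only")
        self._props[prop] = value

    def get_properties(self):
        # At any point, the application can query Connection Properties.
        return dict(self._props)


conn = Connection()
conn.set_property("connTimeout", 10)
assert conn.get_properties()["connTimeout"] == 10
```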
{"id": "q-en-api-drafts-32b20825e2d45947ceab035d5d27d0094e3b71e51d76055261ff7f7040e55578", "old_text": "Depending on the status of the connection, the queried Connection Properties will include different information: The status of the connection, which can be one of the following: Establishing, Established, Closing, or Closed.", "comments": "This is the second step towards applying our new Properties Classification. It moves up the Connection Properties and annotates most places with stuff to do.\nAs discussed in\nThanks for moving this forward! I only found a few nits. Also, I have a question about whether two of these Properties should maybe be changed to be Selection Properties, but that's more of a discussion and not something that needs to be done in this URL modulo NAME commentsI carefully checked it, but I find it very hard to be sure that this is 100% ok (as is usually the case IMO with diff's about text blocks being moved around). My suggestion is to land it and see.", "new_text": "Depending on the status of the connection, the queried Connection Properties will include different information: [TODO: turn this list into actual properties or move up into the explaining text] The status of the connection, which can be one of the following: Establishing, Established, Closing, or Closed."}
{"id": "q-en-api-drafts-32b20825e2d45947ceab035d5d27d0094e3b71e51d76055261ff7f7040e55578", "old_text": "domain RFC7556, measurements by the Protocol Stack, or other sources. Slightly different from \"querying\", a SoftError Event can also occur, informing the application about the receipt of an ICMP error message related to the Connection. This will only happen if the underlying", "comments": "This is the second step towards applying our new Properties Classification. It moves up the Connection Properties and annotates most places with stuff to do.\nAs discussed in\nThanks for moving this forward! I only found a few nits. Also, I have a question about whether two of these Properties should maybe be changed to be Selection Properties, but that's more of a discussion and not something that needs to be done in this URL modulo NAME commentsI carefully checked it, but I find it very hard to be sure that this is 100% ok (as is usually the case IMO with diff's about text blocks being moved around). My suggestion is to land it and see.", "new_text": "domain RFC7556, measurements by the Protocol Stack, or other sources. 9.1.1. Boolean This property specifies whether an application considers it useful to be informed in case sent data was retransmitted more often than a certain threshold. When set to true, the effect is twofold: The application may receive events in case of excessive retransmissions. In addition, the transport system considers this as a preference to use transport stacks that can provide this notification. This is not a strict requirement. If set to false, no notification of excessive retransmissions will be sent and this transport feature is ignored for protocol selection. The default is to have this option. 9.1.2. Integer This property specifies after how many retransmissions to inform the application about \"Excessive Retransmissions\". 9.1.3. 
Boolean This property specifies whether an application considers it useful to be informed when an ICMP error message arrives that does not force termination of a connection. When set to true, received ICMP errors will be available as SoftErrors. Note that even if a protocol supporting this property is selected, not all ICMP errors will necessarily be delivered, so applications cannot rely on receiving them. Setting this option also implies a preference for transport stacks that can provide this notification. If not set, no events will be sent for ICMP soft error messages and this transport feature is ignored for protocol selection. This property applies to Connections and Connection Groups. The default is not to have this option. 9.1.4. Integer This property specifies the part of the received data that needs to be covered by a checksum. It is given in Bytes. A value of 0 means that no checksum is required, and a special value (e.g., -1) indicates full checksum coverage. 9.1.5. Integer This Property is a non-negative integer representing the relative inverse priority of this Connection relative to other Connections in the same Connection Group. It has no effect on Connections not part of a Connection Group. As noted in groups, this property is not entangled when Connections are cloned. 9.1.6. Integer This property specifies how long to wait before aborting a Connection during establishment, or before deciding that a Connection has failed after establishment. It is given in seconds. 9.1.7. Enum This property specifies which scheduler should be used among Connections within a Connection Group, see groups. The set of schedulers can be taken from I-D.ietf-tsvwg-sctp-ndata. 9.1.8. Integer (read only) This property represents the maximum Message size that can be sent before or during Connection establishment, see also msg-idempotent. It is given in Bytes. 9.1.9. 
Integer (read only) This property, if applicable, represents the maximum Message size that can be sent without incurring network-layer fragmentation or transport layer segmentation at the sender. 9.1.10. Integer (read only) This property represents the maximum Message size that can be sent. 9.1.11. Integer (read only) This numeric property represents the maximum Message size that can be received. 9.2. Slightly different from \"querying\", a SoftError Event can also occur, informing the application about the receipt of an ICMP error message related to the Connection. This will only happen if the underlying"}
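The "Notification of excessive retransmissions" property and its companion threshold (9.1.1/9.1.2 in the record above) amount to a counter that fires a one-shot event once retransmissions exceed the configured value. A hypothetical sketch; the callback shape and names are assumptions:

```python
# Toy model: notify the application once retransmissions exceed a threshold.
class RetransmissionMonitor:
    def __init__(self, threshold, on_excessive):
        self.threshold = threshold
        self.count = 0
        self.on_excessive = on_excessive
        self._notified = False

    def record_retransmission(self):
        self.count += 1
        # Fire the "Excessive Retransmissions" event at most once.
        if self.count > self.threshold and not self._notified:
            self._notified = True
            self.on_excessive(self.count)


events = []
mon = RetransmissionMonitor(threshold=3, on_excessive=events.append)
for _ in range(5):
    mon.record_retransmission()
assert events == [4]
```

As the draft text notes, this is only a preference for stacks that can report retransmissions; a stack without that visibility would simply never call the monitor.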
{"id": "q-en-api-drafts-32b20825e2d45947ceab035d5d27d0094e3b71e51d76055261ff7f7040e55578", "old_text": "12. Having discussed Transport Properties in connection-props and listed a number of Message Properties (\"Ordered\", \"Idempotent\" etc.) in message-props, we now provide a complete overview of a transport system's defined Transport Properties. Properties are structured in two ways: By how they influence the transport system, which leads to a classification into \"Selection Properties\", \"Protocol Properties\", \"Control Properties\" and \"Intents\". By the object they can be applied to: Preconnections, see connection-props, Connections, see introspection, and Messages, see message-props. Because some properties can be applied or queried on multiple objects, all Transport Properties are organized within a single namespace. Note that it is possible for a set of specified Transport Properties to be internally inconsistent, or to be inconsistent with the later", "comments": "This is the second step towards applying our new Properties Classification. It moves up the Connection Properties and annotates most places with stuff to do.\nAs discussed in\nThanks for moving this forward! I only found a few nits. Also, I have a question about whether two of these Properties should maybe be changed to be Selection Properties, but that's more of a discussion and not something that needs to be done in this URL modulo NAME commentsI carefully checked it, but I find it very hard to be sure that this is 100% ok (as is usually the case IMO with diff's about text blocks being moved around). My suggestion is to land it and see.", "new_text": "12. Having discussed Selection Properties in selection-props, Connection Properties connection-props, and Message Properties (message-props), we now provide a complete overview of a transport system's defined Transport Properties. 
Transport Properties are structured by the phase and object to which they apply: - Selection Properties apply to Preconnections - see selection-props - Connection Properties apply to Connections - see connection-props - Message Properties apply to Messages - see All Transport Properties are organized within a single namespace. This enables setting them as defaults in earlier stages and querying them in later stages: - Connection Properties can be set on Preconnections - Message Properties can be set on Preconnections and Connections - The effect of Selection Properties can be queried on Connections and Messages. Note that it is possible for a set of specified Transport Properties to be internally inconsistent, or to be inconsistent with the later"}
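The single-namespace rule in the record above (properties set in an earlier stage act as defaults for later stages, e.g. Connection Properties set on a Preconnection) can be illustrated with a toy model. Class and property names are illustrative assumptions, not the normative API.

```python
# Toy model: Preconnection defaults are inherited by derived Connections.
class Preconnection:
    def __init__(self):
        self._defaults = {}

    def set_property(self, prop, value):
        # Connection Properties can be set on Preconnections as defaults.
        self._defaults[prop] = value

    def initiate(self):
        return Connection(self._defaults)


class Connection:
    def __init__(self, defaults):
        self._props = dict(defaults)   # snapshot of the earlier-stage defaults

    def get_properties(self):
        return dict(self._props)


pre = Preconnection()
pre.set_property("connPriority", 2)
conn = pre.initiate()
assert conn.get_properties()["connPriority"] == 2
```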
{"id": "q-en-api-drafts-32b20825e2d45947ceab035d5d27d0094e3b71e51d76055261ff7f7040e55578", "old_text": "12.2. Transport Properties - whether they apply to connections, preconnections, or messages - differ in the way they affect the transport system and protocols exposed through the transport system. The classification proposed below emphasizes two aspects of how properties affect the transport system, so applications know what to expect: Whether properties affect protocols exposed through the transport system (Protocol Properties) or the transport system itself (Control Properties) Whether properties have a clearly defined behavior that is likely to be invariant across implementations and environments (Protocol Properties and Control Properties) or whether the properties are interpreted by the transport system to provide a best effort service that matches the application needs as closely as possible (Intents). 12.2.1.", "comments": "This is the second step towards applying our new Properties Classification. It moves up the Connection Properties and annotates most places with stuff to do.\nAs discussed in\nThanks for moving this forward! I only found a few nits. Also, I have a question about whether two of these Properties should maybe be changed to be Selection Properties, but that's more of a discussion and not something that needs to be done in this URL modulo NAME commentsI carefully checked it, but I find it very hard to be sure that this is 100% ok (as is usually the case IMO with diff's about text blocks being moved around). My suggestion is to land it and see.", "new_text": "12.2. [TODO: This section is mostly obsolete due to our consensus on how to structure properties - double-check whether text needs to be moved and delete this section afterwards] 12.2.1."}
{"id": "q-en-api-drafts-32b20825e2d45947ceab035d5d27d0094e3b71e51d76055261ff7f7040e55578", "old_text": "12.3.11. Control Property [TODO: Discuss] Boolean Preconnection, Connection This property specifies whether an application considers it useful to be informed in case sent data was retransmitted more often than a certain threshold. When set to true, the effect is twofold: The application may receive events in case excessive retransmissions. In addition, the transport system considers this as a preference to use transports stacks that can provide this notification. This is not a strict requirement. If set to false, no notification of excessive retransmissions will be sent and this transport feature is ignored for protocol selection. The default is to have this option. 12.3.12. Control Property [TODO: Discuss] Integer Preconnection, Connection This property specifies after how many retransmissions to inform the application about \"Excessive Retransmissions\". 12.3.13. Control Property [TODO: Discuss] Boolean Preconnection, Connection This property specifies whether an application considers it useful to be informed when an ICMP error message arrives that does not force termination of a connection. When set to true, received ICMP errors will be available as SoftErrors. Note that even if a protocol supporting this property is selected, not all ICMP errors will necessarily be delivered, so applications cannot rely on receiving them. Setting this option also implies a preference to prefer transports stacks that can provide this notification. If not set, no events will be sent for ICMP soft error message and this transport feature is ignored for protocol selection. This property applies to Connections and Connection Groups. The default is not to have this option. 12.3.14.", "comments": "This is the second step towards applying our new Properties Classification. 
It moves up the Connection Properties and annotates most places with stuff to do.\nAs discussed in\nThanks for moving this forward! I only found a few nits. Also, I have a question about whether two of these Properties should maybe be changed to be Selection Properties, but that's more of a discussion and not something that needs to be done in this URL modulo NAME commentsI carefully checked it, but I find it very hard to be sure that this is 100% ok (as is usually the case IMO with diff's about text blocks being moved around). My suggestion is to land it and see.", "new_text": "12.3.11. Boolean Connection Property - see conn-retrans-notify. 12.3.12. Integer Connection Property - see 12.3.13. Boolean Connection Property - see 12.3.14."}
{"id": "q-en-api-drafts-32b20825e2d45947ceab035d5d27d0094e3b71e51d76055261ff7f7040e55578", "old_text": "12.3.16. Protocol Property (Generic) Integer Connection This property specifies the part of the received data that needs to be covered by a checksum. It is given in Bytes. A value of 0 means that no checksum is required, and a special value (e.g., -1) indicates full checksum coverage. 12.3.17.", "comments": "This is the second step towards applying our new Properties Classification. It moves up the Connection Properties and annotates most places with stuff to do.\nAs discussed in\nThanks for moving this forward! I only found a few nits. Also, I have a question about whether two of these Properties should maybe be changed to be Selection Properties, but that's more of a discussion and not something that needs to be done in this URL modulo NAME commentsI carefully checked it, but I find it very hard to be sure that this is 100% ok (as is usually the case IMO with diff's about text blocks being moved around). My suggestion is to land it and see.", "new_text": "12.3.16. Integer Connection Property - see 12.3.17."}
{"id": "q-en-api-drafts-32b20825e2d45947ceab035d5d27d0094e3b71e51d76055261ff7f7040e55578", "old_text": "12.3.21. Protocol Property (Generic) Integer Connection This Property is a non-negative integer representing the relative inverse priority of this Connection relative to other Connections in the same Connection Group. It has no effect on Connections not part of a Connection Group. As noted in groups, this property is not entangled when Connections are cloned. 12.3.22. [TODO: Discuss: should we remove this? Whether we need this or the other depends on how we want to implement multi-streaming. We don't need both, so we should make a decision.] Integer Message Property - see msg-niceness.", "comments": "This is the second step towards applying our new Properties Classification. It moves up the Connection Properties and annotates most places with stuff to do.\nAs discussed in\nThanks for moving this forward! I only found a few nits. Also, I have a question about whether two of these Properties should maybe be changed to be Selection Properties, but that's more of a discussion and not something that needs to be done in this URL modulo NAME commentsI carefully checked it, but I find it very hard to be sure that this is 100% ok (as is usually the case IMO with diff's about text blocks being moved around). My suggestion is to land it and see.", "new_text": "12.3.21. Integer Connection Property - see 12.3.22. [TODO: Discuss: should we remove this? Whether we need this or the other depends on how we want to implement multi-streaming. We don't need both, so we should make a decision. @mwelzl - These are really two different things @philsbln] Integer Message Property - see msg-niceness."}
{"id": "q-en-api-drafts-32b20825e2d45947ceab035d5d27d0094e3b71e51d76055261ff7f7040e55578", "old_text": "12.3.23. Control Property [TODO: Discuss] Integer Preconnection, Connection This property specifies how long to wait before aborting a Connection during establishment, or before deciding that a Connection has failed after establishment. It is given in seconds. 12.3.24. Protocol Property (Generic) / Control Property [TODO: Discuss] Enum Preconnection, Connection This property specifies which scheduler should be used among Connections within a Connection Group, see groups. The set of schedulers can be taken from I-D.ietf-tsvwg-sctp-ndata. 12.3.25. Protocol Property (Generic) Integer Connection (read only) This property represents the maximum Message size that can be sent before or during Connection establishment, see also msg-idempotent. It is given in Bytes. This property is read-only. 12.3.26. Protocol Property (Generic) Integer Connection (read only) This property, if applicable, represents the maximum Message size that can be sent without incurring network-layer fragmentation or transport layer segmentation at the sender. This property is read- only. 12.3.27. Protocol Property (Generic) Integer Connection (read only) This property represents the maximum Message size that can be sent. This property is read-only. 12.3.28. Protocol Property (Generic) Integer Connection (read only) This numeric property represents the maximum Message size that can be received. This property is read-only. 12.3.29.", "comments": "This is the second step towards applying our new Properties Classification. It moves up the Connection Properties and annotates most places with stuff to do.\nAs discussed in\nThanks for moving this forward! I only found a few nits. 
Also, I have a question about whether two of these Properties should maybe be changed to be Selection Properties, but that's more of a discussion and not something that needs to be done in this URL modulo NAME commentsI carefully checked it, but I find it very hard to be sure that this is 100% ok (as is usually the case IMO with diff's about text blocks being moved around). My suggestion is to land it and see.", "new_text": "12.3.23. Integer Connection Property - see 12.3.24. Enum Connection Property - see 12.3.25. Integer Connection Property (read-only) - see 12.3.26. Integer Connection Property (read-only) - see 12.3.27. Integer Connection Property (read-only) - see 12.3.28. Integer Connection Property (read-only) - see 12.3.29."}
{"id": "q-en-api-drafts-d0b8263feefc71d2b1d1d14db6b274c9ce5948c181130903c92d196809b553b8", "old_text": "used to fine-tune the eventually established connection. These Connection Properties can also be used to monitor and fine-tune established connections. The behavior of the selected protocol stack(s) when sending Messages is controlled by Message Context Properties message-props. Collectively, Selection, Connection, and Message Context Properties can be referred to as Transport Properties. All Transport Properties, regardless of the phase in which they are used, are organized within a single namespace. This enables setting them as defaults in earlier stages and querying them in later stages: - Connection Properties can be set on Preconnections - Message Properties can be set on Preconnections and Connections - The effect of Selection Properties can be queried on Connections and Messages Transport Properties can have one of a set of data types:", "comments": "Message Context Parameters/Properties are now Message Properties, and associated terminology cleanup. Addresses part of the resolution to .\nLooks good, thanks for the cleanup!LGTM", "new_text": "used to fine-tune the eventually established connection. These Connection Properties can also be used to monitor and fine-tune established connections. The behavior of the selected protocol stack(s) when sending Messages is controlled by Message Properties message-props. Collectively, Selection, Connection, and Message Properties can be referred to as Transport Properties. All Transport Properties, regardless of the phase in which they are used, are organized within a single namespace. 
This enables setting them as defaults in earlier stages and querying them in later stages: - Connection Properties can be set on Preconnections - Message Properties can be set on Preconnections and Connections - The effect of Selection Properties can be queried on Connections and Messages Transport Properties can have one of a set of data types:"}
{"id": "q-en-api-drafts-d0b8263feefc71d2b1d1d14db6b274c9ce5948c181130903c92d196809b553b8", "old_text": "Initiate. An InitiateError occurs either when the set of transport properties and cryptographic parameters cannot be fulfilled on a Connection for initiation (e.g. the set of available Paths and/or Protocol Stacks meeting the constraints is empty) or reconciled with the local and/or remote endpoints; when the remote specifier cannot be resolved; or", "comments": "Message Context Parameters/Properties are now Message Properties, and associated terminology cleanup. Addresses part of the resolution to .\nLooks good, thanks for the cleanup!LGTM", "new_text": "Initiate. An InitiateError occurs either when the set of transport properties and security parameters cannot be fulfilled on a Connection for initiation (e.g. the set of available Paths and/or Protocol Stacks meeting the constraints is empty) or reconciled with the local and/or remote endpoints; when the remote specifier cannot be resolved; or"}
{"id": "q-en-api-drafts-d0b8263feefc71d2b1d1d14db6b274c9ce5948c181130903c92d196809b553b8", "old_text": "Applications may need to annotate the Messages they send with extra information to control how data is scheduled and processed by the transport protocols in the Connection. A messageContext object contains parameters for sending Messages, and can be passed to the Send Action. Note that these properties are per-Message, not per- Send if partial Messages are sent (send-partial). All data blocks associated with a single Message share properties. For example, it", "comments": "Message Context Parameters/Properties are now Message Properties, and associated terminology cleanup. Addresses part of the resolution to .\nLooks good, thanks for the cleanup!LGTM", "new_text": "Applications may need to annotate the Messages they send with extra information to control how data is scheduled and processed by the transport protocols in the Connection. A MessageContext object contains properties for sending Messages, and can be passed to the Send Action. Note that these properties are per-Message, not per- Send if partial Messages are sent (send-partial). All data blocks associated with a single Message share properties. For example, it"}
{"id": "q-en-api-drafts-d0b8263feefc71d2b1d1d14db6b274c9ce5948c181130903c92d196809b553b8", "old_text": "allow the end of a Message to still be sent. The simpler form of Send that does not take any messageContext is equivalent to passing a default messageContext with not values added. If an application wants to override Message Properties for a specific message, it can acquire an empty messageContext Object and add all desired Message Properties to that Object. It can then reuse the same messageContext Object for sending multiple Messages with the same properties. Parameters may be added to a messageContext object only before the context is used for sending. Once a messageContext has been used with a Send call, modifying any of its parameters is invalid. Message Properties may be inconsistent with the properties of the Protocol Stacks underlying the Connection on which a given Message is", "comments": "Message Context Parameters/Properties are now Message Properties, and associated terminology cleanup. Addresses part of the resolution to .\nLooks good, thanks for the cleanup!LGTM", "new_text": "allow the end of a Message to still be sent. The simpler form of Send that does not take any messageContext is equivalent to passing a default MessageContext with no values added. If an application wants to override Message Properties for a specific message, it can acquire an empty MessageContext Object and add all desired Message Properties to that Object. It can then reuse the same messageContext Object for sending multiple Messages with the same properties. Properties may be added to a MessageContext object only before the context is used for sending. Once a messageContext has been used with a Send call, modifying any of its properties is invalid. Message Properties may be inconsistent with the properties of the Protocol Stacks underlying the Connection on which a given Message is"}
{"id": "q-en-api-drafts-d0b8263feefc71d2b1d1d14db6b274c9ce5948c181130903c92d196809b553b8", "old_text": "Sending a Message with Message Properties inconsistent with the Selection Properties of the Connection yields an error. The following Message Context Parameters are supported: 7.3.1.", "comments": "Message Context Parameters/Properties are now Message Properties, and associated terminology cleanup. Addresses part of the resolution to .\nLooks good, thanks for the cleanup!LGTM", "new_text": "Sending a Message with Message Properties inconsistent with the Selection Properties of the Connection yields an error. The following Message Properties are supported: 7.3.1."}
{"id": "q-en-api-drafts-d0b8263feefc71d2b1d1d14db6b274c9ce5948c181130903c92d196809b553b8", "old_text": "The following example sends a Message in two separate calls to Send. All messageData sent with the same messageContext object will be treated as belonging to the same Message, and will constitute an in- order series until the endOfMessage is marked. Once the end of the Message is marked, the messageContext object may be re-used as a new Message with identical parameters. 7.5.", "comments": "Message Context Parameters/Properties are now Message Properties, and associated terminology cleanup. Addresses part of the resolution to .\nLooks good, thanks for the cleanup!LGTM", "new_text": "The following example sends a Message in two separate calls to Send. All data sent with the same MessageContext object will be treated as belonging to the same Message, and will constitute an in-order series until the endOfMessage is marked. Once the end of the Message is marked, the MessageContext object may be re-used as a new Message with identical parameters. 7.5."}
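The two-call Send described in the record above (all data sent with the same MessageContext belongs to one Message until endOfMessage, after which the context may be re-used) can be modeled as follows. The `MessageContext` and `send` shapes are illustrative assumptions, not the normative interface.

```python
# Toy model: one Message sent in multiple Send calls via a shared context.
class MessageContext:
    def __init__(self):
        self.complete = False


class Connection:
    def __init__(self):
        self._partial = {}      # context -> accumulated in-order data
        self.messages = []      # completed Messages

    def send(self, data, context, end_of_message=False):
        if context.complete:
            # A completed context may be re-used as a fresh Message.
            context.complete = False
            self._partial.pop(context, None)
        buf = self._partial.setdefault(context, bytearray())
        buf.extend(data)        # same context => same Message, in order
        if end_of_message:
            self.messages.append(bytes(buf))
            context.complete = True


conn = Connection()
ctx = MessageContext()
conn.send(b"hello, ", ctx)
conn.send(b"world", ctx, end_of_message=True)
assert conn.messages == [b"hello, world"]
```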
{"id": "q-en-api-drafts-d0b8263feefc71d2b1d1d14db6b274c9ce5948c181130903c92d196809b553b8", "old_text": "invoke Receive again. Multiple invocations of ReceivedPartial deliver data for the same Message by passing the same messageContext, until the endOfMessage flag is delivered or a ReceiveError occurs. All partial blocks of a single Message are delivered in order without gaps. This event does not support delivering discontiguous partial Messages.", "comments": "Message Context Parameters/Properties are now Message Properties, and associated terminology cleanup. Addresses part of the resolution to .\nLooks good, thanks for the cleanup!LGTM", "new_text": "invoke Receive again. Multiple invocations of ReceivedPartial deliver data for the same Message by passing the same MessageContext, until the endOfMessage flag is delivered or a ReceiveError occurs. All partial blocks of a single Message are delivered in order without gaps. This event does not support delivering discontiguous partial Messages."}
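The receive side in the record above (ReceivedPartial events deliver ordered, gap-free blocks sharing one MessageContext until the endOfMessage flag) reduces to a small reassembly loop. All names here are illustrative, not the normative TAPS events.

```python
# Toy reassembly of ReceivedPartial deliveries into complete Messages.
class MessageContext:
    pass


def reassemble(events):
    """events: iterable of (context, data, end_of_message), in delivery order."""
    partial = {}
    complete = []
    for ctx, data, end in events:
        buf = partial.setdefault(ctx, bytearray())
        buf.extend(data)                 # partial blocks arrive without gaps
        if end:                          # endOfMessage completes the Message
            complete.append(bytes(partial.pop(ctx)))
    return complete


ctx = MessageContext()
msgs = reassemble([(ctx, b"par", False), (ctx, b"tial", True)])
assert msgs == [b"partial"]
```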
{"id": "q-en-api-drafts-d0b8263feefc71d2b1d1d14db6b274c9ce5948c181130903c92d196809b553b8", "old_text": "conditions that irrevocably lead to the termination of the Connection are signaled using ConnectionError instead (see termination). The ReceiveError event passes an optional associated messageContext. This may indicate that a Message that was being partially received previously, but had not completed, encountered an error and will not be completed.", "comments": "Message Context Parameters/Properties are now Message Properties, and associated terminology cleanup. Addresses part of the resolution to .\nLooks good, thanks for the cleanup!LGTM", "new_text": "conditions that irrevocably lead to the termination of the Connection are signaled using ConnectionError instead (see termination). The ReceiveError event passes an optional associated MessageContext. This may indicate that a Message that was being partially received previously, but had not completed, encountered an error and will not be completed."}
{"id": "q-en-api-drafts-d0b8263feefc71d2b1d1d14db6b274c9ce5948c181130903c92d196809b553b8", "old_text": "RFC7657 apply. The Capacity Profile for a selected protocol stack may be modified on a per-Message basis using the Transmission Profile Message Context Property; see send-profile. 9.2.", "comments": "Message Context Parameters/Properties are now Message Properties, and associated terminology cleanup. Addresses part of the resolution to .\nLooks good, thanks for the cleanup!LGTM", "new_text": "RFC7657 apply. The Capacity Profile for a selected protocol stack may be modified on a per-Message basis using the Transmission Profile Message Property; see send-profile. 9.2."}
{"id": "q-en-api-drafts-ec7a9b561ba84224064766c6b0dbd3614242118d36cb0c23d81a9d4c213d8adf", "old_text": "Abstract This document provides an overview of the architecture of Transport Services, a system for exposing the features of transport protocols to applications. This architecture serves as a basis for Application Programming Interfaces (APIs) and implementations that provide flexible transport networking services. It defines the common set of terminology and concepts to be used in more detailed discussion of Transport Services. 1.", "comments": "Thanks for these fixes! I did a bit of reworking of the abstract, but it keeps the main part of the changes you made.\nRegarding the sentence you removed, that was an adaption of an existing sentence, which I believe aimed to mention the words implementation and API to implicitly refere to the other docs (which was probably too subtle for anybody to notice anyway). However, the other change I made to that sentence was to mention one of the benefits / deployment incentives for taps (dynamic protocol and path selection). I wasn't sure about the wording here, but I think it would be good to (re-)add a sentence to the abstract saying why taps is so great and that everybody should deploy it.\nYes, the original second sentence was aimed at mentioning that there were two other documents, API and Implementation, that would be based on this one. I liked how in your revision, it did try to make things clearer than just saying \"flexible transport networking services\", but I felt that the specific example of dynamic path and protocol selection was a bit too specific for that one line. (I was chatting with NAME about this as I was editing, btw). 
My thought was that the list of features that you added in right after that, which go through the top-level bullet points of the advantages, do a more comprehensive job of summarizing the benefit.", "new_text": "Abstract This document provides an overview of the architecture of Transport Services, a model for exposing transport protocol features to applications for network communication. In contrast to what is provided by most existing Application Programming Interfaces (APIs), it is based on an asynchronous, event-driven interaction pattern; it uses messages for representing data transfer to applications; and it assumes an implementation that can use multiple IP addresses, multiple protocols, multiple paths, and provide multiple application streams. This document further defines the common set of terminology and concepts to be used in definitions of APIs and implementations. 1."}
{"id": "q-en-api-drafts-ec7a9b561ba84224064766c6b0dbd3614242118d36cb0c23d81a9d4c213d8adf", "old_text": "applications adopt this interface, they will benefit from a wide set of transport features that can evolve over time, and ensure that the system providing the interface can optimize its behavior based on the application requirements and network conditions. This document is developed in parallel with the specification of the Transport Services API I-D.ietf-taps-interface and Implementation I- D.ietf-taps-impl documents. 1.1.", "comments": "Thanks for these fixes! I did a bit of reworking of the abstract, but it keeps the main part of the changes you made.\nRegarding the sentence you removed, that was an adaption of an existing sentence, which I believe aimed to mention the words implementation and API to implicitly refere to the other docs (which was probably too subtle for anybody to notice anyway). However, the other change I made to that sentence was to mention one of the benefits / deployment incentives for taps (dynamic protocol and path selection). I wasn't sure about the wording here, but I think it would be good to (re-)add a sentence to the abstract saying why taps is so great and that everybody should deploy it.\nYes, the original second sentence was aimed at mentioning that there were two other documents, API and Implementation, that would be based on this one. I liked how in your revision, it did try to make things clearer than just saying \"flexible transport networking services\", but I felt that the specific example of dynamic path and protocol selection was a bit too specific for that one line. (I was chatting with NAME about this as I was editing, btw). 
My thought was that the list of features that you added in right after that, which go through the top-level bullet points of the advantages, do a more comprehensive job of summarizing the benefit.", "new_text": "applications adopt this interface, they will benefit from a wide set of transport features that can evolve over time, and ensure that the system providing the interface can optimize its behavior based on the application requirements and network conditions, without requiring a change to the application. Further, this flexibility not only enables faster deployment of new features and protocols, it can also support applications with racing and fallback mechanisms, which today usually need to be implemented separately in each application. This document is developed in parallel with the specification of the Transport Services API I-D.ietf-taps-interface and Implementation I-D.ietf-taps-impl documents. Although following the Transport Services Architecture does not, of course, mean that all APIs and implementations have to be identical, agreeing on a common minimal set of features and representing them in a similar fashion improves the ability to easily port applications from one system to another. 1.1."}
{"id": "q-en-api-drafts-26de4ee2d3a976ba775c719819e513b55f6c457a54c3212b732679376db1f117", "old_text": "3. During pre-establishment the application specifies the Endpoints to be used for communication as well as its preferences regarding Protocol and Path Selection. The implementation stores these objects and properties as part of the Preconnection object for use during connection establishment. For Protocol and Path Selection Properties that are not provided by the application, the implementation must use the default values specified in the Transport Services API (I-D.ietf- taps-interface). 3.1.", "comments": "Several small updates, mostly to clean things up a bit, and align property names and such.\nNAME - I think you're wrong: my understanding (and also the way the text is written - otherwise we'd have to change it in the API draft) was that Connection Properties can (and should) also be set on Preconnections, to give as much input as possible for Connection establishment. In the API draft, the Selection Properties in Section 5.2 can only be used on Preconnections: you can't ask for congestion control after establishment. You can't say that you'd rather have multistreaming support after establishment. But that doesn't exclude Connection properties from being usable earlier. Required minimum checksum coverage could influence choosing UDP-Lite vs. UDP. Bounds on Send or Receive Rate could influence the interface choice. The capacity profile could influence various things (and no, there isn't a matching Selection Property now - there is a per-Message override). So... I think that either you're wrong, or I'm missing something and the API draft heavily lags behind a decision that was made. I hope not!\nNAME The API drafts says: I interpret this as only the Selection Properties influence selection and the Connection properties are used to further configure the connections according to user preferences. 
Otherwise what is the difference between them?\nIf you interpret it as you do, I think the text isn't clear enough (indeed it isn't super clear - in particular, \"eventually established\" is misleading IMO). Though, note that it says that Connection properties can also be used later. Regarding the difference between them: Selection Properties only make sense for Preconnections.\nNAME - sorry lost this thread. But I saw that you have started cleaning up some Connection properties that had mixed Connection and Selection properties up in the API draft. Does it mean that we are now in synch and agree that only Selection properties are used in the selection and Connection properties are used for configuration (both during establisment and also after the connection setup).\nNAME the other PR means nothing about what I said here: it only tried to address issue which is about a wording inconsistency in a specific property. You say that Connection properties can also be used during establishment - if you agree with allowing this, then why not allow to configure them as early as possible and allow them to influence selection? What's bad about having more information available that can guide the selection process?\nNAME the problem is that the Connection properties are not expressed in a way suitable for selection. If they should also be used for Selection they would have to be reworked to be of type Preference. Otherwise I do not know what they mean for selection. What does it mean if I pick say a particular connection group transmission scheduler? Is it required or preferred? Most connection properties also do not look very useful to me for selection so in case there are some we need I think it would be simpler to just define a matching selection property.\nNAME we're only discussing whether it's allowed to use them as additional input or not. I agree, some (like the scheduler choice) don't seem useful, but some do... 
generally we may not have good guidance on how they should be used by an implementation, but giving the system freedom to use as many inputs as possible is probably a good thing. Regarding the type, require/prefer//ignore/avoid/prohibit obviously works fine for a \"wishlist\" of boolean character: having something, or not having it. The selection properties are generally like that - except \"Interface Instance or Type\" and \"Provisioning Domain Instance or Type\" which are, well... wishlist-collections. Consider the capacity profile. Surely configuring this early can be useful input - e.g. if someone picks low latency / interactive, you may even want to put this traffic on a specific network interface! But if you'd want to make a copy of the capacity profile as a \"proper\" selection property, how would you do it? Apply require/prefer/ignore/avoid/prohibit for each of its values? Then I could say \"prefer\" low latency/non-interactive but \"avoid\" low latency/non-interactive, which would surely be odd. Apply the qualifier to the whole thing? Then it's about, e.g. \"avoiding\" to use a capacity profile at all. That's odd as well. I think it's in the nature of the current selection properties that they merely say \"I want to be able to do this, and this is how important it is to me\", which is different from configuring something. That's also why, as written, they simply wouldn't work for later connection configuration. Connection properties really are about configuring something, but I still say that being able to use them as early input is good. I agree that the implementation isn't super clear as they don't come with a require/prefer/... etc. 
qualifier, but then we just offer some freedom to implementers.\nNAME I just pushed a commit that updates the selection and connection properties text, removing \"connection\" almost everywhere, but adding a disclaimer in the beginning that says that connection properties should be configured as early as possible because they can influence decisions made by the implementation, and \"In the remainder of this document, we only refer to Selection Properties because they are the more typical case.\" Frankly I'm not sure this is better... but it does remove most of the \"selection and connection\" double mentions that you pointed out.\nFixed up branch building and resolved merge conflicts. I'll merge this in later today before the posting deadline.\nCool thanks! Sorry if I messed something up\nAs discussed in\nThank you for updating this \u2013 looks good! just needs s/send-params/msg-properties/ and merging", "new_text": "3. During pre-establishment the application specifies the Endpoints to be used for communication as well as its preferences via Selection Properties and, if desired, also Connection Properties. Generally, Connection Properties should be configured as early as possible, as they may serve as input to decisions that are made by the implementation (the Capacity Profile may guide usage of a protocol offering scavenger-type congestion control, for example). In the remainder of this document, we only refer to Selection Properties because they are the more typical case and have to be handled by all implementations. The implementation stores these objects and properties as part of the Preconnection object for use during connection establishment. For Selection Properties that are not provided by the application, the implementation must use the default values specified in the Transport Services API (I-D.ietf-taps-interface). 3.1."}
{"id": "q-en-api-drafts-26de4ee2d3a976ba775c719819e513b55f6c457a54c3212b732679376db1f117", "old_text": "3.2. The properties specified during pre-establishment has a close connection to system policy. The implementation is responsible for combining and reconciling several different sources of preferences when establishing Connections. These include, but are not limited to: Application preferences, i.e., preferences specified during the pre-establishment such as Local Endpoint, Remote Endpoint, Path Selection Properties, and Protocol Selection Properties. Dynamic system policy, i.e., policy compiled from internally and externally acquired information about available network", "comments": "Several small updates, mostly to clean things up a bit, and align property names and such.\nNAME - I think you're wrong: my understanding (and also the way the text is written - otherwise we'd have to change it in the API draft) was that Connection Properties can (and should) also be set on Preconnections, to give as much input as possible for Connection establishment. In the API draft, the Selection Properties in Section 5.2 can only be used on Preconnections: you can't ask for congestion control after establishment. You can't say that you'd rather have multistreaming support after establishment. But that doesn't exclude Connection properties from being usable earlier. Required minimum checksum coverage could influence choosing UDP-Lite vs. UDP. Bounds on Send or Receive Rate could influence the interface choice. The capacity profile could influence various things (and no, there isn't a matching Selection Property now - there is a per-Message override). So... I think that either you're wrong, or I'm missing something and the API draft heavily lags behind a decision that was made. 
I hope not!\nNAME The API drafts says: I interpret this as only the Selection Properties influence selection and the Connection properties are used to further configure the connections according to user preferences. Otherwise what is the difference between them?\nIf you interpret it as you do, I think the text isn't clear enough (indeed it isn't super clear - in particular, \"eventually established\" is misleading IMO). Though, note that it says that Connection properties can also be used later. Regarding the difference between them: Selection Properties only make sense for Preconnections.\nNAME - sorry lost this thread. But I saw that you have started cleaning up some Connection properties that had mixed Connection and Selection properties up in the API draft. Does it mean that we are now in synch and agree that only Selection properties are used in the selection and Connection properties are used for configuration (both during establisment and also after the connection setup).\nNAME the other PR means nothing about what I said here: it only tried to address issue which is about a wording inconsistency in a specific property. You say that Connection properties can also be used during establishment - if you agree with allowing this, then why not allow to configure them as early as possible and allow them to influence selection? What's bad about having more information available that can guide the selection process?\nNAME the problem is that the Connection properties are not expressed in a way suitable for selection. If they should also be used for Selection they would have to be reworked to be of type Preference. Otherwise I do not know what they mean for selection. What does it mean if I pick say a particular connection group transmission scheduler? Is it required or preferred? 
Most connection properties also do not look very useful to me for selection so in case there are some we need I think it would be simpler to just define a matching selection property.\nNAME we're only discussing whether it's allowed to use them as additional input or not. I agree, some (like the scheduler choice) don't seem useful, but some do... generally we may not have good guidance on how they should be used by an implementation, but giving the system freedom to use as many inputs as possible is probably a good thing. Regarding the type, require/prefer//ignore/avoid/prohibit obviously works fine for a \"wishlist\" of boolean character: having something, or not having it. The selection properties are generally like that - except \"Interface Instance or Type\" and \"Provisioning Domain Instance or Type\" which are, well... wishlist-collections. Consider the capacity profile. Surely configuring this early can be useful input - e.g. if someone picks low latency / interactive, you may even want to put this traffic on a specific network interface! But if you'd want to make a copy of the capacity profile as a \"proper\" selection property, how would you do it? Apply require/prefer/ignore/avoid/prohibit for each of its values? Then I could say \"prefer\" low latency/non-interactive but \"avoid\" low latency/non-interactive, which would surely be odd. Apply the qualifier to the whole thing? Then it's about, e.g. \"avoiding\" to use a capacity profile at all. That's odd as well. I think it's in the nature of the current selection properties that they merely say \"I want to be able to do this, and this is how important it is to me\", which is different from configuring something. That's also why, as written, they simply wouldn't work for later connection configuration. Connection properties really are about configuring something, but I still say that being able to use them as early input is good. 
I agree that the implementation isn't super clear as they don't come with a require/prefer/... etc. qualifier, but then we just offer some freedom to implementers.\nNAME I just pushed a commit that updates the selection and connection properties text, removing \"connection\" almost everywhere, but adding a disclaimer in the beginning that says that connection properties should be configured as early as possible because they can influence decisions made by the implementation, and \"In the remainder of this document, we only refer to Selection Properties because they are the more typical case.\" Frankly I'm not sure this is better... but it does remove most of the \"selection and connection\" double mentions that you pointed out.\nFixed up branch building and resolved merge conflicts. I'll merge this in later today before the posting deadline.\nCool thanks! Sorry if I messed something up\nAs discussed in\nThank you for updating this \u2013 looks good! just needs s/send-params/msg-properties/ and merging", "new_text": "3.2. The properties specified during pre-establishment have a close connection to system policy. The implementation is responsible for combining and reconciling several different sources of preferences when establishing Connections. These include, but are not limited to: Application preferences, i.e., preferences specified during the pre-establishment via Selection Properties. Dynamic system policy, i.e., policy compiled from internally and externally acquired information about available network"}
{"id": "q-en-api-drafts-26de4ee2d3a976ba775c719819e513b55f6c457a54c3212b732679376db1f117", "old_text": "The step of gathering candidates involves identifying which paths, protocols, and endpoints may be used for a given Connection. This list is determined by the requirements, prohibitions, and preferences of the application as specified in the Path Selection Properties and Protocol Selection Properties. 4.1.1.", "comments": "Several small updates, mostly to clean things up a bit, and align property names and such.\nNAME - I think you're wrong: my understanding (and also the way the text is written - otherwise we'd have to change it in the API draft) was that Connection Properties can (and should) also be set on Preconnections, to give as much input as possible for Connection establishment. In the API draft, the Selection Properties in Section 5.2 can only be used on Preconnections: you can't ask for congestion control after establishment. You can't say that you'd rather have multistreaming support after establishment. But that doesn't exclude Connection properties from being usable earlier. Required minimum checksum coverage could influence choosing UDP-Lite vs. UDP. Bounds on Send or Receive Rate could influence the interface choice. The capacity profile could influence various things (and no, there isn't a matching Selection Property now - there is a per-Message override). So... I think that either you're wrong, or I'm missing something and the API draft heavily lags behind a decision that was made. I hope not!\nNAME The API drafts says: I interpret this as only the Selection Properties influence selection and the Connection properties are used to further configure the connections according to user preferences. Otherwise what is the difference between them?\nIf you interpret it as you do, I think the text isn't clear enough (indeed it isn't super clear - in particular, \"eventually established\" is misleading IMO). 
Though, note that it says that Connection properties can also be used later. Regarding the difference between them: Selection Properties only make sense for Preconnections.\nNAME - sorry lost this thread. But I saw that you have started cleaning up some Connection properties that had mixed Connection and Selection properties up in the API draft. Does it mean that we are now in synch and agree that only Selection properties are used in the selection and Connection properties are used for configuration (both during establisment and also after the connection setup).\nNAME the other PR means nothing about what I said here: it only tried to address issue which is about a wording inconsistency in a specific property. You say that Connection properties can also be used during establishment - if you agree with allowing this, then why not allow to configure them as early as possible and allow them to influence selection? What's bad about having more information available that can guide the selection process?\nNAME the problem is that the Connection properties are not expressed in a way suitable for selection. If they should also be used for Selection they would have to be reworked to be of type Preference. Otherwise I do not know what they mean for selection. What does it mean if I pick say a particular connection group transmission scheduler? Is it required or preferred? Most connection properties also do not look very useful to me for selection so in case there are some we need I think it would be simpler to just define a matching selection property.\nNAME we're only discussing whether it's allowed to use them as additional input or not. I agree, some (like the scheduler choice) don't seem useful, but some do... generally we may not have good guidance on how they should be used by an implementation, but giving the system freedom to use as many inputs as possible is probably a good thing. 
Regarding the type, require/prefer//ignore/avoid/prohibit obviously works fine for a \"wishlist\" of boolean character: having something, or not having it. The selection properties are generally like that - except \"Interface Instance or Type\" and \"Provisioning Domain Instance or Type\" which are, well... wishlist-collections. Consider the capacity profile. Surely configuring this early can be useful input - e.g. if someone picks low latency / interactive, you may even want to put this traffic on a specific network interface! But if you'd want to make a copy of the capacity profile as a \"proper\" selection property, how would you do it? Apply require/prefer/ignore/avoid/prohibit for each of its values? Then I could say \"prefer\" low latency/non-interactive but \"avoid\" low latency/non-interactive, which would surely be odd. Apply the qualifier to the whole thing? Then it's about, e.g. \"avoiding\" to use a capacity profile at all. That's odd as well. I think it's in the nature of the current selection properties that they merely say \"I want to be able to do this, and this is how important it is to me\", which is different from configuring something. That's also why, as written, they simply wouldn't work for later connection configuration. Connection properties really are about configuring something, but I still say that being able to use them as early input is good. I agree that the implementation isn't super clear as they don't come with a require/prefer/... etc. 
qualifier, but then we just offer some freedom to implementers.\nNAME I just pushed a commit that updates the selection and connection properties text, removing \"connection\" almost everywhere, but adding a disclaimer in the beginning that says that connection properties should be configured as early as possible because they can influence decisions made by the implementation, and \"In the remainder of this document, we only refer to Selection Properties because they are the more typical case.\" Frankly I'm not sure this is better... but it does remove most of the \"selection and connection\" double mentions that you pointed out.\nFixed up branch building and resolved merge conflicts. I'll merge this in later today before the posting deadline.\nCool thanks! Sorry if I messed something up\nAs discussed in\nThank you for updating this \u2013 looks good! just needs s/send-params/msg-properties/ and merging", "new_text": "The step of gathering candidates involves identifying which paths, protocols, and endpoints may be used for a given Connection. This list is determined by the requirements, prohibitions, and preferences of the application as specified in the Selection Properties. 4.1.1."}
{"id": "q-en-api-drafts-26de4ee2d3a976ba775c719819e513b55f6c457a54c3212b732679376db1f117", "old_text": "multiple layers of derivation or resolution, such as DNS service resolution and DNS hostname resolution. 4.3. Implementations should sort the branches of the tree of connection", "comments": "Several small updates, mostly to clean things up a bit, and align property names and such.\nNAME - I think you're wrong: my understanding (and also the way the text is written - otherwise we'd have to change it in the API draft) was that Connection Properties can (and should) also be set on Preconnections, to give as much input as possible for Connection establishment. In the API draft, the Selection Properties in Section 5.2 can only be used on Preconnections: you can't ask for congestion control after establishment. You can't say that you'd rather have multistreaming support after establishment. But that doesn't exclude Connection properties from being usable earlier. Required minimum checksum coverage could influence choosing UDP-Lite vs. UDP. Bounds on Send or Receive Rate could influence the interface choice. The capacity profile could influence various things (and no, there isn't a matching Selection Property now - there is a per-Message override). So... I think that either you're wrong, or I'm missing something and the API draft heavily lags behind a decision that was made. I hope not!\nNAME The API drafts says: I interpret this as only the Selection Properties influence selection and the Connection properties are used to further configure the connections according to user preferences. Otherwise what is the difference between them?\nIf you interpret it as you do, I think the text isn't clear enough (indeed it isn't super clear - in particular, \"eventually established\" is misleading IMO). Though, note that it says that Connection properties can also be used later. 
Regarding the difference between them: Selection Properties only make sense for Preconnections.\nNAME - sorry lost this thread. But I saw that you have started cleaning up some Connection properties that had mixed Connection and Selection properties up in the API draft. Does it mean that we are now in synch and agree that only Selection properties are used in the selection and Connection properties are used for configuration (both during establisment and also after the connection setup).\nNAME the other PR means nothing about what I said here: it only tried to address issue which is about a wording inconsistency in a specific property. You say that Connection properties can also be used during establishment - if you agree with allowing this, then why not allow to configure them as early as possible and allow them to influence selection? What's bad about having more information available that can guide the selection process?\nNAME the problem is that the Connection properties are not expressed in a way suitable for selection. If they should also be used for Selection they would have to be reworked to be of type Preference. Otherwise I do not know what they mean for selection. What does it mean if I pick say a particular connection group transmission scheduler? Is it required or preferred? Most connection properties also do not look very useful to me for selection so in case there are some we need I think it would be simpler to just define a matching selection property.\nNAME we're only discussing whether it's allowed to use them as additional input or not. I agree, some (like the scheduler choice) don't seem useful, but some do... generally we may not have good guidance on how they should be used by an implementation, but giving the system freedom to use as many inputs as possible is probably a good thing. Regarding the type, require/prefer//ignore/avoid/prohibit obviously works fine for a \"wishlist\" of boolean character: having something, or not having it. 
The selection properties are generally like that - except \"Interface Instance or Type\" and \"Provisioning Domain Instance or Type\" which are, well... wishlist-collections. Consider the capacity profile. Surely configuring this early can be useful input - e.g. if someone picks low latency / interactive, you may even want to put this traffic on a specific network interface! But if you'd want to make a copy of the capacity profile as a \"proper\" selection property, how would you do it? Apply require/prefer/ignore/avoid/prohibit for each of its values? Then I could say \"prefer\" low latency/non-interactive but \"avoid\" low latency/non-interactive, which would surely be odd. Apply the qualifier to the whole thing? Then it's about, e.g. \"avoiding\" to use a capacity profile at all. That's odd as well. I think it's in the nature of the current selection properties that they merely say \"I want to be able to do this, and this is how important it is to me\", which is different from configuring something. That's also why, as written, they simply wouldn't work for later connection configuration. Connection properties really are about configuring something, but I still say that being able to use them as early input is good. I agree that the implementation isn't super clear as they don't come with a require/prefer/... etc. qualifier, but then we just offer some freedom to implementers.\nNAME I just pushed a commit that updates the selection and connection properties text, removing \"connection\" almost everywhere, but adding a disclaimer in the beginning that says that connection properties should be configured as early as possible because they can influence decisions made by the implementation, and \"In the remainder of this document, we only refer to Selection Properties because they are the more typical case.\" Frankly I'm not sure this is better... 
but it does remove most of the \"selection and connection\" double mentions that you pointed out.\nFixed up branch building and resolved merge conflicts. I'll merge this in later today before the posting deadline.\nCool thanks! Sorry if I messed something up\nAs discussed in\nThank you for updating this \u2013 looks good! just needs s/send-params/msg-properties/ and merging", "new_text": "multiple layers of derivation or resolution, such as DNS service resolution and DNS hostname resolution. For example, if the application has indicated both a preference for WiFi over LTE and for a feature only available in SCTP, branches will be first sorted according to path selection, with WiFi at the top. Then, branches with SCTP will be sorted to the top within their subtree according to the properties influencing protocol selection. However, if the implementation has cached the information that SCTP is not available on the path over WiFi, there is no SCTP node in the WiFi subtree. Here, the path over WiFi will be tried first, and, if connection establishment succeeds, TCP will be used. So the Selection Property of preferring WiFi takes precedence over the Property that led to a preference for SCTP. 4.3. Implementations should sort the branches of the tree of connection"}
{"id": "q-en-api-drafts-26de4ee2d3a976ba775c719819e513b55f6c457a54c3212b732679376db1f117", "old_text": "with higher rankings represent connection attempts that will be raced first. Implementations should order the branches to reflect the preferences expressed by the application for its new connection, including Protocol and Path Selection Properties, which are specified in I-D.ietf-taps-interface. In addition to the properties provided by the application, an implementation may include additional criteria such as cached performance estimates, see performance-caches, or system policy, see role-of-system-policy, in the ranking. Two examples of how the Protocol and Path Selection Properties may be used to sort branches are provided below: Interface Type: If the application specifies an interface type to be preferred or avoided, implementations should rank paths accordingly. If the application specifies an interface type to be required or prohibited, we expect an implementation to not include the non-conforming paths into the three. Capacity Profile: An implementation may use the Capacity Profile to prefer paths optimized for the application's expected traffic pattern according to cached performance estimates, see performance-caches: Interactive/Low Latency: Prefer paths with the lowest expected Round Trip Time Constant Rate: Prefer paths that can satisfy the requested Stream Send or Stream Receive Bitrate, based on observed maximum throughput Scavenger/Bulk: Prefer paths with the highest expected available bandwidth, based on observed maximum throughput [Note: See branch-sorting-non-consensus for additional examples related to Properties under discussion.] Implementations should process properties in the following order: Prohibit, Require, Prefer, Avoid. If Protocol or Path Selection Properties contain any prohibited properties, the implementation should first purge branches containing nodes with these properties. 
For required properties, it should only keep branches that satisfy these requirements. Finally, it should order branches according to preferred properties, and finally use avoided properties as a tiebreaker. For Require and Avoid, Path Selection Properties take precedence over Protocol Selection Properties. For example, if the application has indicated both a preference for WiFi over LTE and for a feature only available in SCTP, branches will be first sorted accord to the Path Selection Property, with WiFi at the top. Then, branches with SCTP will be sorted to the top within their subtree according to the Protocol Selection Property. However, if the implementation has cached the information that SCTP is not available on the path over WiFi, there is no SCTP node in the WiFi subtree. Here, the path over WiFi will be tried first, and, if connection establishment succeeds, TCP will be used. So the Path Selection Property of preferring WiFi takes precedence over the Protocol Selection Property of preferring SCTP. 4.4. The primary goal of the Candidate Racing process is to successfully", "comments": "Several small updates, mostly to clean things up a bit, and align property names and such.\nNAME - I think you're wrong: my understanding (and also the way the text is written - otherwise we'd have to change it in the API draft) was that Connection Properties can (and should) also be set on Preconnections, to give as much input as possible for Connection establishment. In the API draft, the Selection Properties in Section 5.2 can only be used on Preconnections: you can't ask for congestion control after establishment. You can't say that you'd rather have multistreaming support after establishment. But that doesn't exclude Connection properties from being usable earlier. Required minimum checksum coverage could influence choosing UDP-Lite vs. UDP. Bounds on Send or Receive Rate could influence the interface choice. 
The capacity profile could influence various things (and no, there isn't a matching Selection Property now - there is a per-Message override). So... I think that either you're wrong, or I'm missing something and the API draft heavily lags behind a decision that was made. I hope not!\nNAME The API drafts says: I interpret this as only the Selection Properties influence selection and the Connection properties are used to further configure the connections according to user preferences. Otherwise what is the difference between them?\nIf you interpret it as you do, I think the text isn't clear enough (indeed it isn't super clear - in particular, \"eventually established\" is misleading IMO). Though, note that it says that Connection properties can also be used later. Regarding the difference between them: Selection Properties only make sense for Preconnections.\nNAME - sorry lost this thread. But I saw that you have started cleaning up some Connection properties that had mixed Connection and Selection properties up in the API draft. Does it mean that we are now in synch and agree that only Selection properties are used in the selection and Connection properties are used for configuration (both during establisment and also after the connection setup).\nNAME the other PR means nothing about what I said here: it only tried to address issue which is about a wording inconsistency in a specific property. You say that Connection properties can also be used during establishment - if you agree with allowing this, then why not allow to configure them as early as possible and allow them to influence selection? What's bad about having more information available that can guide the selection process?\nNAME the problem is that the Connection properties are not expressed in a way suitable for selection. If they should also be used for Selection they would have to be reworked to be of type Preference. Otherwise I do not know what they mean for selection. 
What does it mean if I pick say a particular connection group transmission scheduler? Is it required or preferred? Most connection properties also do not look very useful to me for selection so in case there are some we need I think it would be simpler to just define a matching selection property.\nNAME we're only discussing whether it's allowed to use them as additional input or not. I agree, some (like the scheduler choice) don't seem useful, but some do... generally we may not have good guidance on how they should be used by an implementation, but giving the system freedom to use as many inputs as possible is probably a good thing. Regarding the type, require/prefer//ignore/avoid/prohibit obviously works fine for a \"wishlist\" of boolean character: having something, or not having it. The selection properties are generally like that - except \"Interface Instance or Type\" and \"Provisioning Domain Instance or Type\" which are, well... wishlist-collections. Consider the capacity profile. Surely configuring this early can be useful input - e.g. if someone picks low latency / interactive, you may even want to put this traffic on a specific network interface! But if you'd want to make a copy of the capacity profile as a \"proper\" selection property, how would you do it? Apply require/prefer/ignore/avoid/prohibit for each of its values? Then I could say \"prefer\" low latency/non-interactive but \"avoid\" low latency/non-interactive, which would surely be odd. Apply the qualifier to the whole thing? Then it's about, e.g. \"avoiding\" to use a capacity profile at all. That's odd as well. I think it's in the nature of the current selection properties that they merely say \"I want to be able to do this, and this is how important it is to me\", which is different from configuring something. That's also why, as written, they simply wouldn't work for later connection configuration. 
Connection properties really are about configuring something, but I still say that being able to use them as early input is good. I agree that the implementation isn't super clear as they don't come with a require/prefer/... etc. qualifier, but then we just offer some freedom to implementers.\nNAME I just pushed a commit that updates the selection and connection properties text, removing \"connection\" almost everywhere, but adding a disclaimer in the beginning that says that connection properties should be configured as early as possible because they can influence decisions made by the implementation, and \"In the remainder of this document, we only refer to Selection Properties because they are the more typical case.\" Frankly I'm not sure this is better... but it does remove most of the \"selection and connection\" double mentions that you pointed out.\nFixed up branch building and resolved merge conflicts. I'll merge this in later today before the posting deadline.\nCool thanks! Sorry if I messed something up\nAs discussed in\nThank you for updating this \u2013 looks good! just needs s/send-params/msg-properties/ and merging", "new_text": "with higher rankings represent connection attempts that will be raced first. Implementations should order the branches to reflect the preferences expressed by the application for its new connection, including Selection Properties, which are specified in I-D.ietf-taps- interface. In addition to the properties provided by the application, an implementation may include additional criteria such as cached performance estimates, see performance-caches, or system policy, see role-of-system-policy, in the ranking. Two examples of how Selection and Connection Properties may be used to sort branches are provided below: \"Interface Instance or Type\": If the application specifies an interface type to be preferred or avoided, implementations should rank paths accordingly. 
If the application specifies an interface type to be required or prohibited, we expect an implementation to not include the non-conforming paths into the tree. \"Capacity Profile\": An implementation may use the Capacity Profile to prefer paths optimized for the application's expected traffic pattern according to cached performance estimates, see performance-caches: Scavenger: Prefer paths with the highest expected available bandwidth, based on observed maximum throughput Low Latency/Interactive: Prefer paths with the lowest expected Round Trip Time Constant-Rate Streaming: Prefer paths that can satisfy the requested Stream Send or Stream Receive Bitrate, based on observed maximum throughput Implementations should process properties in the following order: Prohibit, Require, Prefer, Avoid. If Selection Properties contain any prohibited properties, the implementation should first purge branches containing nodes with these properties. For required properties, it should only keep branches that satisfy these requirements. Finally, it should order branches according to preferred properties, then use avoided properties as a tiebreaker. 4.4. The primary goal of the Candidate Racing process is to successfully"}
{"id": "q-en-api-drafts-26de4ee2d3a976ba775c719819e513b55f6c457a54c3212b732679376db1f117", "old_text": "Local Endpoint is specified, the implementation should either use an ephemeral port or generate an error. If the Path Selection Properties do not require a single network interface or path, but allow the use of multiple paths, the Listener object should register for incoming traffic on all of the network interfaces or paths that conform to the Path Selection Properties. The set of available paths can change over time, so the implementation should monitor network path changes and register and de-register the Listener across all usable paths. When using multiple paths, the Listener is generally expected to use the same port for listening on each. If the Protocol Selection Properties allow multiple protocols to be used for listening, and the implementation supports it, the Listener object should register across the eligble protocols for each path. This means that inbound Connections delivered by the implementation may have heterogeneous protocol stacks. 4.8.1.", "comments": "Several small updates, mostly to clean things up a bit, and align property names and such.\nNAME - I think you're wrong: my understanding (and also the way the text is written - otherwise we'd have to change it in the API draft) was that Connection Properties can (and should) also be set on Preconnections, to give as much input as possible for Connection establishment. In the API draft, the Selection Properties in Section 5.2 can only be used on Preconnections: you can't ask for congestion control after establishment. You can't say that you'd rather have multistreaming support after establishment. But that doesn't exclude Connection properties from being usable earlier. Required minimum checksum coverage could influence choosing UDP-Lite vs. UDP. Bounds on Send or Receive Rate could influence the interface choice. 
The capacity profile could influence various things (and no, there isn't a matching Selection Property now - there is a per-Message override). So... I think that either you're wrong, or I'm missing something and the API draft heavily lags behind a decision that was made. I hope not!\nNAME The API drafts says: I interpret this as only the Selection Properties influence selection and the Connection properties are used to further configure the connections according to user preferences. Otherwise what is the difference between them?\nIf you interpret it as you do, I think the text isn't clear enough (indeed it isn't super clear - in particular, \"eventually established\" is misleading IMO). Though, note that it says that Connection properties can also be used later. Regarding the difference between them: Selection Properties only make sense for Preconnections.\nNAME - sorry lost this thread. But I saw that you have started cleaning up some Connection properties that had mixed Connection and Selection properties up in the API draft. Does it mean that we are now in synch and agree that only Selection properties are used in the selection and Connection properties are used for configuration (both during establisment and also after the connection setup).\nNAME the other PR means nothing about what I said here: it only tried to address issue which is about a wording inconsistency in a specific property. You say that Connection properties can also be used during establishment - if you agree with allowing this, then why not allow to configure them as early as possible and allow them to influence selection? What's bad about having more information available that can guide the selection process?\nNAME the problem is that the Connection properties are not expressed in a way suitable for selection. If they should also be used for Selection they would have to be reworked to be of type Preference. Otherwise I do not know what they mean for selection. 
What does it mean if I pick say a particular connection group transmission scheduler? Is it required or preferred? Most connection properties also do not look very useful to me for selection so in case there are some we need I think it would be simpler to just define a matching selection property.\nNAME we're only discussing whether it's allowed to use them as additional input or not. I agree, some (like the scheduler choice) don't seem useful, but some do... generally we may not have good guidance on how they should be used by an implementation, but giving the system freedom to use as many inputs as possible is probably a good thing. Regarding the type, require/prefer//ignore/avoid/prohibit obviously works fine for a \"wishlist\" of boolean character: having something, or not having it. The selection properties are generally like that - except \"Interface Instance or Type\" and \"Provisioning Domain Instance or Type\" which are, well... wishlist-collections. Consider the capacity profile. Surely configuring this early can be useful input - e.g. if someone picks low latency / interactive, you may even want to put this traffic on a specific network interface! But if you'd want to make a copy of the capacity profile as a \"proper\" selection property, how would you do it? Apply require/prefer/ignore/avoid/prohibit for each of its values? Then I could say \"prefer\" low latency/non-interactive but \"avoid\" low latency/non-interactive, which would surely be odd. Apply the qualifier to the whole thing? Then it's about, e.g. \"avoiding\" to use a capacity profile at all. That's odd as well. I think it's in the nature of the current selection properties that they merely say \"I want to be able to do this, and this is how important it is to me\", which is different from configuring something. That's also why, as written, they simply wouldn't work for later connection configuration. 
Connection properties really are about configuring something, but I still say that being able to use them as early input is good. I agree that the implementation isn't super clear as they don't come with a require/prefer/... etc. qualifier, but then we just offer some freedom to implementers.\nNAME I just pushed a commit that updates the selection and connection properties text, removing \"connection\" almost everywhere, but adding a disclaimer in the beginning that says that connection properties should be configured as early as possible because they can influence decisions made by the implementation, and \"In the remainder of this document, we only refer to Selection Properties because they are the more typical case.\" Frankly I'm not sure this is better... but it does remove most of the \"selection and connection\" double mentions that you pointed out.\nFixed up branch building and resolved merge conflicts. I'll merge this in later today before the posting deadline.\nCool thanks! Sorry if I messed something up\nAs discussed in\nThank you for updating this \u2013 looks good! just needs s/send-params/msg-properties/ and merging", "new_text": "Local Endpoint is specified, the implementation should either use an ephemeral port or generate an error. If the Selection Properties do not require a single network interface or path, but allow the use of multiple paths, the Listener object should register for incoming traffic on all of the network interfaces or paths that conform to the Properties. The set of available paths can change over time, so the implementation should monitor network path changes and register and de-register the Listener across all usable paths. When using multiple paths, the Listener is generally expected to use the same port for listening on each. If the Selection Properties allow multiple protocols to be used for listening, and the implementation supports it, the Listener object should register across the eligible protocols for each path.
This means that inbound Connections delivered by the implementation may have heterogeneous protocol stacks. 4.8.1."}
{"id": "q-en-api-drafts-26de4ee2d3a976ba775c719819e513b55f6c457a54c3212b732679376db1f117", "old_text": "support removing a Message from its buffer, this property may be ignored. Niceness: this represents the ability to de-prioritize a Message in favor of other Messages. This can be implemented by the system re-ordering Messages that have yet to be handed to the Protocol Stack, or by giving relative priority hints to protocols that support priorities per Message. For example, an implementation of HTTP/2 could choose to send Messages of different niceness on streams of different priority. Ordered: when this is false, it disables the requirement of in- order-delivery for protocols that support configurable ordering.", "comments": "Several small updates, mostly to clean things up a bit, and align property names and such.\nNAME - I think you're wrong: my understanding (and also the way the text is written - otherwise we'd have to change it in the API draft) was that Connection Properties can (and should) also be set on Preconnections, to give as much input as possible for Connection establishment. In the API draft, the Selection Properties in Section 5.2 can only be used on Preconnections: you can't ask for congestion control after establishment. You can't say that you'd rather have multistreaming support after establishment. But that doesn't exclude Connection properties from being usable earlier. Required minimum checksum coverage could influence choosing UDP-Lite vs. UDP. Bounds on Send or Receive Rate could influence the interface choice. The capacity profile could influence various things (and no, there isn't a matching Selection Property now - there is a per-Message override). So... I think that either you're wrong, or I'm missing something and the API draft heavily lags behind a decision that was made. 
I hope not!\nNAME The API drafts says: I interpret this as only the Selection Properties influence selection and the Connection properties are used to further configure the connections according to user preferences. Otherwise what is the difference between them?\nIf you interpret it as you do, I think the text isn't clear enough (indeed it isn't super clear - in particular, \"eventually established\" is misleading IMO). Though, note that it says that Connection properties can also be used later. Regarding the difference between them: Selection Properties only make sense for Preconnections.\nNAME - sorry lost this thread. But I saw that you have started cleaning up some Connection properties that had mixed Connection and Selection properties up in the API draft. Does it mean that we are now in synch and agree that only Selection properties are used in the selection and Connection properties are used for configuration (both during establisment and also after the connection setup).\nNAME the other PR means nothing about what I said here: it only tried to address issue which is about a wording inconsistency in a specific property. You say that Connection properties can also be used during establishment - if you agree with allowing this, then why not allow to configure them as early as possible and allow them to influence selection? What's bad about having more information available that can guide the selection process?\nNAME the problem is that the Connection properties are not expressed in a way suitable for selection. If they should also be used for Selection they would have to be reworked to be of type Preference. Otherwise I do not know what they mean for selection. What does it mean if I pick say a particular connection group transmission scheduler? Is it required or preferred? 
Most connection properties also do not look very useful to me for selection so in case there are some we need I think it would be simpler to just define a matching selection property.\nNAME we're only discussing whether it's allowed to use them as additional input or not. I agree, some (like the scheduler choice) don't seem useful, but some do... generally we may not have good guidance on how they should be used by an implementation, but giving the system freedom to use as many inputs as possible is probably a good thing. Regarding the type, require/prefer//ignore/avoid/prohibit obviously works fine for a \"wishlist\" of boolean character: having something, or not having it. The selection properties are generally like that - except \"Interface Instance or Type\" and \"Provisioning Domain Instance or Type\" which are, well... wishlist-collections. Consider the capacity profile. Surely configuring this early can be useful input - e.g. if someone picks low latency / interactive, you may even want to put this traffic on a specific network interface! But if you'd want to make a copy of the capacity profile as a \"proper\" selection property, how would you do it? Apply require/prefer/ignore/avoid/prohibit for each of its values? Then I could say \"prefer\" low latency/non-interactive but \"avoid\" low latency/non-interactive, which would surely be odd. Apply the qualifier to the whole thing? Then it's about, e.g. \"avoiding\" to use a capacity profile at all. That's odd as well. I think it's in the nature of the current selection properties that they merely say \"I want to be able to do this, and this is how important it is to me\", which is different from configuring something. That's also why, as written, they simply wouldn't work for later connection configuration. Connection properties really are about configuring something, but I still say that being able to use them as early input is good. 
I agree that the implementation isn't super clear as they don't come with a require/prefer/... etc. qualifier, but then we just offer some freedom to implementers.\nNAME I just pushed a commit that updates the selection and connection properties text, removing \"connection\" almost everywhere, but adding a disclaimer in the beginning that says that connection properties should be configured as early as possible because they can influence decisions made by the implementation, and \"In the remainder of this document, we only refer to Selection Properties because they are the more typical case.\" Frankly I'm not sure this is better... but it does remove most of the \"selection and connection\" double mentions that you pointed out.\nFixed up branch building and resolved merge conflicts. I'll merge this in later today before the posting deadline.\nCool thanks! Sorry if I messed something up\nAs discussed in\nThank you for updating this \u2013 looks good! just needs s/send-params/msg-properties/ and merging", "new_text": "support removing a Message from its buffer, this property may be ignored. Priority: this represents the ability to prioritize a Message over other Messages. This can be implemented by the system re-ordering Messages that have yet to be handed to the Protocol Stack, or by giving relative priority hints to protocols that support priorities per Message. For example, an implementation of HTTP/2 could choose to send Messages of different Priority on streams of different priority. Ordered: when this is false, it disables the requirement of in- order-delivery for protocols that support configurable ordering."}
{"id": "q-en-api-drafts-26de4ee2d3a976ba775c719819e513b55f6c457a54c3212b732679376db1f117", "old_text": "as a result of racing multiple transports or as part of TCP Fast Open. Corruption Protection Length: when this is set to any value other than -1, it limits the required checksum in protocols that allow limiting the checksum length (e.g. UDP-Lite). Immediate Acknowledgement: this informs the implementation that the sender intends to execute tight control over the send buffer, and therefore wants to avoid delayed acknowledgements. In case of SCTP, a request to immediately send acknowledgements can be implemented using the \"sack-immediately flag\" described in Section 4.2 of RFC8303 for the SEND.SCTP primitive. Instantaneous Capacity Profile: when this is set to \"Interactive/ Low Latency\", the Message should be sent immediately, even when this comes at the cost of using the network capacity less efficiently. For example, small messages can sometimes be bundled to fit into a single data packet for the sake of reducing header overhead; such bundling should not be used. For example, in case of TCP, the Nagle algorithm should be disabled when Interactive/ Low Latency is selected as the capacity profile. Scavenger/Bulk can translate into usage of a congestion control mechanism such as LEDBAT, and/or the capacity profile can lead to a choice of a DSCP value as described in I-D.ietf-taps-minset). [Note: See also send-params-non-consensus for additional Send Parameters under discussion.] 5.1.1.2.", "comments": "Several small updates, mostly to clean things up a bit, and align property names and such.\nNAME - I think you're wrong: my understanding (and also the way the text is written - otherwise we'd have to change it in the API draft) was that Connection Properties can (and should) also be set on Preconnections, to give as much input as possible for Connection establishment. 
In the API draft, the Selection Properties in Section 5.2 can only be used on Preconnections: you can't ask for congestion control after establishment. You can't say that you'd rather have multistreaming support after establishment. But that doesn't exclude Connection properties from being usable earlier. Required minimum checksum coverage could influence choosing UDP-Lite vs. UDP. Bounds on Send or Receive Rate could influence the interface choice. The capacity profile could influence various things (and no, there isn't a matching Selection Property now - there is a per-Message override). So... I think that either you're wrong, or I'm missing something and the API draft heavily lags behind a decision that was made. I hope not!\nNAME The API drafts says: I interpret this as only the Selection Properties influence selection and the Connection properties are used to further configure the connections according to user preferences. Otherwise what is the difference between them?\nIf you interpret it as you do, I think the text isn't clear enough (indeed it isn't super clear - in particular, \"eventually established\" is misleading IMO). Though, note that it says that Connection properties can also be used later. Regarding the difference between them: Selection Properties only make sense for Preconnections.\nNAME - sorry lost this thread. But I saw that you have started cleaning up some Connection properties that had mixed Connection and Selection properties up in the API draft. Does it mean that we are now in synch and agree that only Selection properties are used in the selection and Connection properties are used for configuration (both during establisment and also after the connection setup).\nNAME the other PR means nothing about what I said here: it only tried to address issue which is about a wording inconsistency in a specific property. 
You say that Connection properties can also be used during establishment - if you agree with allowing this, then why not allow to configure them as early as possible and allow them to influence selection? What's bad about having more information available that can guide the selection process?\nNAME the problem is that the Connection properties are not expressed in a way suitable for selection. If they should also be used for Selection they would have to be reworked to be of type Preference. Otherwise I do not know what they mean for selection. What does it mean if I pick say a particular connection group transmission scheduler? Is it required or preferred? Most connection properties also do not look very useful to me for selection so in case there are some we need I think it would be simpler to just define a matching selection property.\nNAME we're only discussing whether it's allowed to use them as additional input or not. I agree, some (like the scheduler choice) don't seem useful, but some do... generally we may not have good guidance on how they should be used by an implementation, but giving the system freedom to use as many inputs as possible is probably a good thing. Regarding the type, require/prefer/ignore/avoid/prohibit obviously works fine for a \"wishlist\" of boolean character: having something, or not having it. The selection properties are generally like that - except \"Interface Instance or Type\" and \"Provisioning Domain Instance or Type\" which are, well... wishlist-collections. Consider the capacity profile. Surely configuring this early can be useful input - e.g. if someone picks low latency / interactive, you may even want to put this traffic on a specific network interface! But if you'd want to make a copy of the capacity profile as a \"proper\" selection property, how would you do it? Apply require/prefer/ignore/avoid/prohibit for each of its values? 
Then I could say \"prefer\" low latency/non-interactive but \"avoid\" low latency/non-interactive, which would surely be odd. Apply the qualifier to the whole thing? Then it's about, e.g. \"avoiding\" to use a capacity profile at all. That's odd as well. I think it's in the nature of the current selection properties that they merely say \"I want to be able to do this, and this is how important it is to me\", which is different from configuring something. That's also why, as written, they simply wouldn't work for later connection configuration. Connection properties really are about configuring something, but I still say that being able to use them as early input is good. I agree that the implementation isn't super clear as they don't come with a require/prefer/... etc. qualifier, but then we just offer some freedom to implementers.\nNAME I just pushed a commit that updates the selection and connection properties text, removing \"connection\" almost everywhere, but adding a disclaimer in the beginning that says that connection properties should be configured as early as possible because they can influence decisions made by the implementation, and \"In the remainder of this document, we only refer to Selection Properties because they are the more typical case.\" Frankly I'm not sure this is better... but it does remove most of the \"selection and connection\" double mentions that you pointed out.\nFixed up branch building and resolved merge conflicts. I'll merge this in later today before the posting deadline.\nCool thanks! Sorry if I messed something up\nAs discussed in\nThank you for updating this \u2013 looks good! just needs s/send-params/msg-properties/ and merging", "new_text": "as a result of racing multiple transports or as part of TCP Fast Open. Final: when this is true, it means that a transport connection can be closed immediately after its transmission. 
Corruption Protection Length: when this is set to any value other than -1, it limits the required checksum in protocols that allow limiting the checksum length (e.g. UDP-Lite). Transmission Profile: TBD - because it's not final in the API yet. Old text follows: when this is set to \"Interactive/Low Latency\", the Message should be sent immediately, even when this comes at the cost of using the network capacity less efficiently. For example, small messages can sometimes be bundled to fit into a single data packet for the sake of reducing header overhead; such bundling should not be used. For example, in case of TCP, the Nagle algorithm should be disabled when Interactive/Low Latency is selected as the capacity profile. Scavenger/Bulk can translate into usage of a congestion control mechanism such as LEDBAT, and/ or the capacity profile can lead to a choice of a DSCP value as described in I-D.ietf-taps-minset). Singular Transmission: when this is true, the application requests to avoid transport-layer segmentation or network-layer fragmentation. Some transports implement network-layer fragmentation avoidance (Path MTU Discovery) without exposing this functionality to the application; in this case, only transport- layer segmentation should be avoided, by fitting the message into a single transport-layer segment or otherwise failing. Otherwise, network-layer fragmentation should be avoided--e.g. by requesting the IP Don't Fragment bit to be set in case of UDP(-Lite) and IPv4 (SET_DF in RFC8304). 5.1.1.2."}
{"id": "q-en-api-drafts-26de4ee2d3a976ba775c719819e513b55f6c457a54c3212b732679376db1f117", "old_text": "6.1. Appendix A.1 of I-D.ietf-taps-minset explains, using primitives that are described in RFC8303 and RFC8304, how to implement changing the following protocol properties of an established connection with TCP and UDP. Below, we amend this description for other protocols (if applicable): Relative niceness: for SCTP, this can be done using the primitive CONFIGURE_STREAM_SCHEDULER.SCTP described in section 4 of RFC8303. Timeout for aborting Connection: for SCTP, this can be done using the primitive CHANGE_TIMEOUT.SCTP described in section 4 of RFC8303. Abort timeout to suggest to the Remote Endpoint: for TCP, this can be done using the primitive CHANGE_TIMEOUT.TCP described in section 4 of RFC8303. Retransmission threshold before excessive retransmission notification: for TCP, this can be done using ERROR.TCP described in section 4 of RFC8303. Required minimum coverage of the checksum for receiving: for UDP- Lite, this can be done using the primitive SET_MIN_CHECKSUM_COVERAGE.UDP-Lite described in section 4 of RFC8303. Connection group transmission scheduler: for SCTP, this can be done using the primitive SET_STREAM_SCHEDULER.SCTP described in section 4 of RFC8303. It may happen that the application attempts to set a Protocol Property which does not apply to the actually chosen protocol. In this case, the implementation should fail gracefully, i.e., it may", "comments": "Several small updates, mostly to clean things up a bit, and align property names and such.\nNAME - I think you're wrong: my understanding (and also the way the text is written - otherwise we'd have to change it in the API draft) was that Connection Properties can (and should) also be set on Preconnections, to give as much input as possible for Connection establishment. 
In the API draft, the Selection Properties in Section 5.2 can only be used on Preconnections: you can't ask for congestion control after establishment. You can't say that you'd rather have multistreaming support after establishment. But that doesn't exclude Connection properties from being usable earlier. Required minimum checksum coverage could influence choosing UDP-Lite vs. UDP. Bounds on Send or Receive Rate could influence the interface choice. The capacity profile could influence various things (and no, there isn't a matching Selection Property now - there is a per-Message override). So... I think that either you're wrong, or I'm missing something and the API draft heavily lags behind a decision that was made. I hope not!\nNAME The API draft says: I interpret this as only the Selection Properties influence selection and the Connection properties are used to further configure the connections according to user preferences. Otherwise what is the difference between them?\nIf you interpret it as you do, I think the text isn't clear enough (indeed it isn't super clear - in particular, \"eventually established\" is misleading IMO). Though, note that it says that Connection properties can also be used later. Regarding the difference between them: Selection Properties only make sense for Preconnections.\nNAME - sorry lost this thread. But I saw that you have started cleaning up some Connection properties that had mixed Connection and Selection properties up in the API draft. Does it mean that we are now in sync and agree that only Selection properties are used in the selection and Connection properties are used for configuration (both during establishment and also after the connection setup).\nNAME the other PR means nothing about what I said here: it only tried to address issue which is about a wording inconsistency in a specific property. 
You say that Connection properties can also be used during establishment - if you agree with allowing this, then why not allow to configure them as early as possible and allow them to influence selection? What's bad about having more information available that can guide the selection process?\nNAME the problem is that the Connection properties are not expressed in a way suitable for selection. If they should also be used for Selection they would have to be reworked to be of type Preference. Otherwise I do not know what they mean for selection. What does it mean if I pick say a particular connection group transmission scheduler? Is it required or preferred? Most connection properties also do not look very useful to me for selection so in case there are some we need I think it would be simpler to just define a matching selection property.\nNAME we're only discussing whether it's allowed to use them as additional input or not. I agree, some (like the scheduler choice) don't seem useful, but some do... generally we may not have good guidance on how they should be used by an implementation, but giving the system freedom to use as many inputs as possible is probably a good thing. Regarding the type, require/prefer/ignore/avoid/prohibit obviously works fine for a \"wishlist\" of boolean character: having something, or not having it. The selection properties are generally like that - except \"Interface Instance or Type\" and \"Provisioning Domain Instance or Type\" which are, well... wishlist-collections. Consider the capacity profile. Surely configuring this early can be useful input - e.g. if someone picks low latency / interactive, you may even want to put this traffic on a specific network interface! But if you'd want to make a copy of the capacity profile as a \"proper\" selection property, how would you do it? Apply require/prefer/ignore/avoid/prohibit for each of its values? 
Then I could say \"prefer\" low latency/non-interactive but \"avoid\" low latency/non-interactive, which would surely be odd. Apply the qualifier to the whole thing? Then it's about, e.g. \"avoiding\" to use a capacity profile at all. That's odd as well. I think it's in the nature of the current selection properties that they merely say \"I want to be able to do this, and this is how important it is to me\", which is different from configuring something. That's also why, as written, they simply wouldn't work for later connection configuration. Connection properties really are about configuring something, but I still say that being able to use them as early input is good. I agree that the implementation isn't super clear as they don't come with a require/prefer/... etc. qualifier, but then we just offer some freedom to implementers.\nNAME I just pushed a commit that updates the selection and connection properties text, removing \"connection\" almost everywhere, but adding a disclaimer in the beginning that says that connection properties should be configured as early as possible because they can influence decisions made by the implementation, and \"In the remainder of this document, we only refer to Selection Properties because they are the more typical case.\" Frankly I'm not sure this is better... but it does remove most of the \"selection and connection\" double mentions that you pointed out.\nFixed up branch building and resolved merge conflicts. I'll merge this in later today before the posting deadline.\nCool thanks! Sorry if I messed something up\nAs discussed in\nThank you for updating this \u2013 looks good! just needs s/send-params/msg-properties/ and merging", "new_text": "6.1. Appendix A.1 of I-D.ietf-taps-minset explains, using primitives from RFC8303 and RFC8304, how to implement changing some of the following protocol properties of an established connection with TCP and UDP. 
Below, we amend this description for other protocols (if applicable) and extend it with Connection Properties that are not contained in I- D.ietf-taps-minset. Notification of excessive retransmissions: TODO Retransmission threshold before excessive retransmission notification: TODO; for TCP, this can be done using ERROR.TCP described in section 4 of RFC8303. Notification of ICMP soft error message arrival: TODO Required minimum coverage of the checksum for receiving: for UDP- Lite, this can be done using the primitive SET_MIN_CHECKSUM_COVERAGE.UDP-Lite described in section 4 of RFC8303. Priority (Connection): TODO; for SCTP, this can be done using the primitive CONFIGURE_STREAM_SCHEDULER.SCTP described in section 4 of RFC8303. Timeout for aborting Connection: for SCTP, this can be done using the primitive CHANGE_TIMEOUT.SCTP described in section 4 of RFC8303. Connection group transmission scheduler: for SCTP, this can be done using the primitive SET_STREAM_SCHEDULER.SCTP described in section 4 of RFC8303. Maximum message size concurrent with Connection establishment: TODO Maximum Message size before fragmentation or segmentation: TODO Maximum Message size on send: TODO Maximum Message size on receive: TODO Capacity Profile: TODO Bounds on Send or Receive Rate: TODO TCP-specific Property: User Timeout: for TCP, this can be configured using the primitive CHANGE_TIMEOUT.TCP described in section 4 of RFC8303. It may happen that the application attempts to set a Protocol Property which does not apply to the actually chosen protocol. In this case, the implementation should fail gracefully, i.e., it may"}
{"id": "q-en-api-drafts-26de4ee2d3a976ba775c719819e513b55f6c457a54c3212b732679376db1f117", "old_text": "If the Protocol Stack includes a transport protocol that supports multipath connectivity, an update to the available paths should inform the Protocol Instance of the new set of paths that are permissible based on the Path Selection Properties passed by the application. A multipath protocol can establish new subflows over new paths, and should tear down subflows over paths that are no longer available. If the Protocol Stack includes a transport", "comments": "Several small updates, mostly to clean things up a bit, and align property names and such.\nNAME - I think you're wrong: my understanding (and also the way the text is written - otherwise we'd have to change it in the API draft) was that Connection Properties can (and should) also be set on Preconnections, to give as much input as possible for Connection establishment. In the API draft, the Selection Properties in Section 5.2 can only be used on Preconnections: you can't ask for congestion control after establishment. You can't say that you'd rather have multistreaming support after establishment. But that doesn't exclude Connection properties from being usable earlier. Required minimum checksum coverage could influence choosing UDP-Lite vs. UDP. Bounds on Send or Receive Rate could influence the interface choice. The capacity profile could influence various things (and no, there isn't a matching Selection Property now - there is a per-Message override). So... I think that either you're wrong, or I'm missing something and the API draft heavily lags behind a decision that was made. I hope not!\nNAME The API drafts says: I interpret this as only the Selection Properties influence selection and the Connection properties are used to further configure the connections according to user preferences. 
Otherwise what is the difference between them?\nIf you interpret it as you do, I think the text isn't clear enough (indeed it isn't super clear - in particular, \"eventually established\" is misleading IMO). Though, note that it says that Connection properties can also be used later. Regarding the difference between them: Selection Properties only make sense for Preconnections.\nNAME - sorry lost this thread. But I saw that you have started cleaning up some Connection properties that had mixed Connection and Selection properties up in the API draft. Does it mean that we are now in sync and agree that only Selection properties are used in the selection and Connection properties are used for configuration (both during establishment and also after the connection setup).\nNAME the other PR means nothing about what I said here: it only tried to address issue which is about a wording inconsistency in a specific property. You say that Connection properties can also be used during establishment - if you agree with allowing this, then why not allow to configure them as early as possible and allow them to influence selection? What's bad about having more information available that can guide the selection process?\nNAME the problem is that the Connection properties are not expressed in a way suitable for selection. If they should also be used for Selection they would have to be reworked to be of type Preference. Otherwise I do not know what they mean for selection. What does it mean if I pick say a particular connection group transmission scheduler? Is it required or preferred? Most connection properties also do not look very useful to me for selection so in case there are some we need I think it would be simpler to just define a matching selection property.\nNAME we're only discussing whether it's allowed to use them as additional input or not. I agree, some (like the scheduler choice) don't seem useful, but some do... 
generally we may not have good guidance on how they should be used by an implementation, but giving the system freedom to use as many inputs as possible is probably a good thing. Regarding the type, require/prefer/ignore/avoid/prohibit obviously works fine for a \"wishlist\" of boolean character: having something, or not having it. The selection properties are generally like that - except \"Interface Instance or Type\" and \"Provisioning Domain Instance or Type\" which are, well... wishlist-collections. Consider the capacity profile. Surely configuring this early can be useful input - e.g. if someone picks low latency / interactive, you may even want to put this traffic on a specific network interface! But if you'd want to make a copy of the capacity profile as a \"proper\" selection property, how would you do it? Apply require/prefer/ignore/avoid/prohibit for each of its values? 
qualifier, but then we just offer some freedom to implementers.\nNAME I just pushed a commit that updates the selection and connection properties text, removing \"connection\" almost everywhere, but adding a disclaimer in the beginning that says that connection properties should be configured as early as possible because they can influence decisions made by the implementation, and \"In the remainder of this document, we only refer to Selection Properties because they are the more typical case.\" Frankly I'm not sure this is better... but it does remove most of the \"selection and connection\" double mentions that you pointed out.\nFixed up branch building and resolved merge conflicts. I'll merge this in later today before the posting deadline.\nCool thanks! Sorry if I messed something up\nAs discussed in\nThank you for updating this \u2013 looks good! just needs s/send-params/msg-properties/ and merging", "new_text": "If the Protocol Stack includes a transport protocol that supports multipath connectivity, an update to the available paths should inform the Protocol Instance of the new set of paths that are permissible based on the Selection Properties passed by the application. A multipath protocol can establish new subflows over new paths, and should tear down subflows over paths that are no longer available. If the Protocol Stack includes a transport"}
{"id": "q-en-api-drafts-0765f7f0c3e5c0a7ac1ae19d2ccf547bc6b6012454020fc159769bf842cac238", "old_text": "Many application programming interfaces (APIs) to perform transport networking have been deployed, perhaps the most widely known and imitated being the BSD socket() POSIX interface. The naming of objects and functions across these APIs is not consistent, and varies depending on the protocol being used. For example, sending and receiving streams of data is conceptually the same for both an unencrypted Transmission Control Protocol (TCP) stream and operating on an encrypted Transport Layer Security (TLS) RFC8446 stream over TCP, but applications cannot use the same socket send() and recv() calls on top of both kinds of connections. Similarly, terminology for the implementation of transport protocols varies based on the context of the protocols themselves: terms such as \"flow\", \"stream\", \"message\", and \"connection\" can take on many different meanings. This variety can lead to confusion when trying to understand the similarities and differences between protocols, and how applications can use them effectively. The goal of the Transport Services architecture is to provide a common, flexible, and reusable interface for transport protocols. As", "comments": "Closed\nI just read Arch and have the following NiTs which I think we should address: /Listeners by default register with multiple paths/ ought this to be: /Listeners, by default, register with multiple paths/ /A Framer is a data translation layer that can be added to a Connection to define how application-level Messages are transmitted over a transport protocol. / I think this should be (because I don\u2019t think they are protocol specific) /A Framer is a data translation layer that can be added to a Connection to define how application-level Messages are transmitted over the transport service. 
/ /to retransmit in the event of packet loss,/ better to avoid event here :-) /to retransmit when there is packet loss,/\nI am just gonna add a few more nits here: The Listener object in figure 4 is a bit awkward, it appears as if the Listen() call appears out of thin air while per the API it gets called on the preconnection object like Initiate() and Rendezvous(). In section 4.1.1, there are 5 references to section 4.1.2, is it really necessary to have that many? Point 2 in section 4.2.3 only talks about required transport services, the same should probably be said about prohibited services, i.e. \"both protocol stacks MUST NOT offer transport services prohibited by the application\"\nLet me add to this: Section 1: print send() and recv() in fixed-width font [HTML/XML issue only] Section 2 - Figure 1: socket() [with the brackets] looks odd in the figure title\nNAME I fear framers may be specific to the underlying protocol, e.g., for SIP slicing messages out of TCP stream whereby doing nothing for SCTP and UDP\nIn relation to frames I think I only asked to add /service/ instead of protocol... I don't understand how Phil's comment reacts to that\nNAME I think protocol is correct here and would not change the sentence.\nOK - so I think you are saying the framer depends only on the specific next protocol, rather than upon the transport service that was offered? - It is still not clear to me that this is correct.\nNAME that is a fairly good question\u2026 The behaviour of the framer definitely depends on the transport service provided (which may be a subset or superset of the transport service requested). A framer needs to support all possible outcomes of the transport services requested. It may optimize based on the protocols it actually finds, e.g., turn into a no-op when the transport preserves message boundaries. Whether the next protocol is a sufficient representation of this is unclear to me. 
Maybe not, if one thinks of supporting arbitrarily large messages and UDP over IPv6 with and without fragmentation\u2026\nLGTM, thanks!", "new_text": "Many application programming interfaces (APIs) to perform transport networking have been deployed, perhaps the most widely known and imitated being the BSD Socket POSIX interface. The naming of objects and functions across these APIs is not consistent, and varies depending on the protocol being used. For example, sending and receiving streams of data is conceptually the same for both an unencrypted Transmission Control Protocol (TCP) stream and operating on an encrypted Transport Layer Security (TLS) RFC8446 stream over TCP, but applications cannot use the same socket \"send()\" and \"recv()\" calls on top of both kinds of connections. Similarly, terminology for the implementation of transport protocols varies based on the context of the protocols themselves: terms such as \"flow\", \"stream\", \"message\", and \"connection\" can take on many different meanings. This variety can lead to confusion when trying to understand the similarities and differences between protocols, and how applications can use them effectively. The goal of the Transport Services architecture is to provide a common, flexible, and reusable interface for transport protocols. As"}
{"id": "q-en-api-drafts-0765f7f0c3e5c0a7ac1ae19d2ccf547bc6b6012454020fc159769bf842cac238", "old_text": "The traditional model of using sockets for networking can be represented as follows: Applications create connections and transfer data using the socket API. The socket API provides the interface to the implementations of TCP and UDP (typically implemented in the system's kernel). TCP and UDP in the kernel send and receive data over the available", "comments": "Closed\nI just read Arch and have the following NiTs which I think we should address: /Listeners by default register with multiple paths/ ought this to be: /Listeners, by default, register with multiple paths/ /A Framer is a data translation layer that can be added to a Connection to define how application-level Messages are transmitted over a transport protocol. / I think this should be (because I don\u2019t think they are protocol specific) /A Framer is a data translation layer that can be added to a Connection to define how application-level Messages are transmitted over the transport service. / /to retransmit in the event of packet loss,/ better to avoid event here :-) /to retransmit when there is packet loss,/\nI am just gonna add a few more nits here: The Listener object in figure 4 is a bit awkward, it appears as if the Listen() call appears out of thin air while per the API it gets called on the preconnection object like Initiate() and Rendezvous(). In section 4.1.1, there are 5 references to section 4.1.2, is it really necessary to have that many? Point 2 in section 4.2.3 only talks about required transport services, the same should probably be said about prohibited services, i.e. 
\"both protocol stacks MUST NOT offer transport services prohibited by the application\"\nLet me add to this: Section 1: print send() and recv() in fixed-with font [HTML/XML issue only] Section 2 - Figure 1: socket() [with the brackets] looks odd in the figure title\nNAME I fear framers may be specific to the underlaying protocol, e.g., for SIP slicing messages out of TCP stream whereby doing nothing for SCTP and UDP\nIn relation to frames I think I only asked to add /service/ instead of protocol... I don't understand how Phil's comment reacts to that\nNAME I think protocol is correct here and would not change the sentence.\nOK - so I think you are saying the framer depends only on the specific next protocol, rather than upon the transport service that was offered? - It is still not clear to me that this is correct.\nNAME that is a fairly good question\u2026 The behaviour of the framer defiantly depends on the transport service provided (which may be a subset or superset of the transport service requested). A framer needs to support all possible outcomes of the transport services requested It may optimize based on the protocols it actually finds, e.g., turn into a no-op with the transport preserves message boundaries. Whether the next protocol is a sufficient representation of this is unclear to me. Maybe not, if one thinks of supporting arbitrarily large messages and UDP over IPv6 with and without fragmentation\u2026\nLGTM, thanks!", "new_text": "The traditional model of using sockets for networking can be represented as follows: Applications create connections and transfer data using the Socket API. The Socket API provides the interface to the implementations of TCP and UDP (typically implemented in the system's kernel). TCP and UDP in the kernel send and receive data over the available"}
{"id": "q-en-api-drafts-0765f7f0c3e5c0a7ac1ae19d2ccf547bc6b6012454020fc159769bf842cac238", "old_text": "that care about timing; the ability to provide control of reliability, choosing which messages to retransmit in the event of packet loss, and how best to make use of the data that arrived; the ability to manage dependencies between messages, when the transport system could decide to not deliver a message, either", "comments": "Closed\nI just read Arch and have the following NiTs which I think we should address: /Listeners by default register with multiple paths/ ought this to be: /Listeners, by default, register with multiple paths/ /A Framer is a data translation layer that can be added to a Connection to define how application-level Messages are transmitted over a transport protocol. / I think this should be (because I don\u2019t think they are protocol specific) /A Framer is a data translation layer that can be added to a Connection to define how application-level Messages are transmitted over the transport service. / /to retransmit in the event of packet loss,/ better to avoid event here :-) /to retransmit when there is packet loss,/\nI am just gonna add a few more nits here: The Listener object in figure 4 is a bit awkward, it appears as if the Listen() call appears out of thin air while per the API it gets called on the preconnection object like Initiate() and Rendezvous(). In section 4.1.1, there are 5 references to section 4.1.2, is it really necessary to have that many? Point 2 in section 4.2.3 only talks about required transport services, the same should probably be said about prohibited services, i.e. 
\"both protocol stacks MUST NOT offer transport services prohibited by the application\"\nLet me add to this: Section 1: print send() and recv() in fixed-with font [HTML/XML issue only] Section 2 - Figure 1: socket() [with the brackets] looks odd in the figure title\nNAME I fear framers may be specific to the underlaying protocol, e.g., for SIP slicing messages out of TCP stream whereby doing nothing for SCTP and UDP\nIn relation to frames I think I only asked to add /service/ instead of protocol... I don't understand how Phil's comment reacts to that\nNAME I think protocol is correct here and would not change the sentence.\nOK - so I think you are saying the framer depends only on the specific next protocol, rather than upon the transport service that was offered? - It is still not clear to me that this is correct.\nNAME that is a fairly good question\u2026 The behaviour of the framer defiantly depends on the transport service provided (which may be a subset or superset of the transport service requested). A framer needs to support all possible outcomes of the transport services requested It may optimize based on the protocols it actually finds, e.g., turn into a no-op with the transport preserves message boundaries. Whether the next protocol is a sufficient representation of this is unclear to me. Maybe not, if one thinks of supporting arbitrarily large messages and UDP over IPv6 with and without fragmentation\u2026\nLGTM, thanks!", "new_text": "that care about timing; the ability to provide control of reliability, choosing which messages to retransmit when there is packet loss, and how best to make use of the data that arrived; the ability to manage dependencies between messages, when the transport system could decide to not deliver a message, either"}
{"id": "q-en-api-drafts-0765f7f0c3e5c0a7ac1ae19d2ccf547bc6b6012454020fc159769bf842cac238", "old_text": "its own, but needs a framing protocol on top to determine message boundaries. Both stacks MUST offer the transport services that are required by the application. For example, if an application specifies that it requires reliable transmission of data, then a Protocol Stack using UDP without any reliability layer on top would not be allowed to replace a Protocol Stack using TCP. However, if the application does not require reliability, then a Protocol Stack", "comments": "Closed\nI just read Arch and have the following NiTs which I think we should address: /Listeners by default register with multiple paths/ ought this to be: /Listeners, by default, register with multiple paths/ /A Framer is a data translation layer that can be added to a Connection to define how application-level Messages are transmitted over a transport protocol. / I think this should be (because I don\u2019t think they are protocol specific) /A Framer is a data translation layer that can be added to a Connection to define how application-level Messages are transmitted over the transport service. / /to retransmit in the event of packet loss,/ better to avoid event here :-) /to retransmit when there is packet loss,/\nI am just gonna add a few more nits here: The Listener object in figure 4 is a bit awkward, it appears as if the Listen() call appears out of thin air while per the API it gets called on the preconnection object like Initiate() and Rendezvous(). In section 4.1.1, there are 5 references to section 4.1.2, is it really necessary to have that many? Point 2 in section 4.2.3 only talks about required transport services, the same should probably be said about prohibited services, i.e. 
\"both protocol stacks MUST NOT offer transport services prohibited by the application\"\nLet me add to this: Section 1: print send() and recv() in fixed-with font [HTML/XML issue only] Section 2 - Figure 1: socket() [with the brackets] looks odd in the figure title\nNAME I fear framers may be specific to the underlaying protocol, e.g., for SIP slicing messages out of TCP stream whereby doing nothing for SCTP and UDP\nIn relation to frames I think I only asked to add /service/ instead of protocol... I don't understand how Phil's comment reacts to that\nNAME I think protocol is correct here and would not change the sentence.\nOK - so I think you are saying the framer depends only on the specific next protocol, rather than upon the transport service that was offered? - It is still not clear to me that this is correct.\nNAME that is a fairly good question\u2026 The behaviour of the framer defiantly depends on the transport service provided (which may be a subset or superset of the transport service requested). A framer needs to support all possible outcomes of the transport services requested It may optimize based on the protocols it actually finds, e.g., turn into a no-op with the transport preserves message boundaries. Whether the next protocol is a sufficient representation of this is unclear to me. Maybe not, if one thinks of supporting arbitrarily large messages and UDP over IPv6 with and without fragmentation\u2026\nLGTM, thanks!", "new_text": "its own, but needs a framing protocol on top to determine message boundaries. Both stacks MUST offer the transport services that are requested by the application. For example, if an application specifies that it requires reliable transmission of data, then a Protocol Stack using UDP without any reliability layer on top would not be allowed to replace a Protocol Stack using TCP. However, if the application does not require reliability, then a Protocol Stack"}
{"id": "q-en-api-drafts-9a75929c97768b468224c5679037be7ee5dacc840e2019142921f8df24a0c53a", "old_text": "Delimiter-separated formats, such as HTTP/1.1. Common Message Framers can be provided by the Transport Services implementation, but an implemention ought to allow custom Message Framers to be defined by the application or some other piece of software. This section describes one possible interface for defining Message Framers as an example.", "comments": "Just some points about the new text on framers in the API draft: In 10.2 it is specified in which order multiple framers run on incoming and outgoing messages, it is however never specified what the order looks like on Start and Stop events. In 10.6, \"MessageFramer.DeliverAndAdvanceReceiveCursor\" appears to be missing the \"Data\" argument that is present in \"MessageFramer.Deliver\". Also in 10.6, in the second paragraph it says: When it is probably meant to say While the framer implementation is able to notify the Connection that it has finished handling a Start event (by calling MessageFramer.MakeConnectionReady), there is no such way for Stop events.\nAs notes for the responses to these points: If there are multiple framers, the \"lowest\" one (the one closest to the transport) would receive start first, and the next would get the start event when the lower one is ready. Similar for stop. Good calls on the missing data argument and the typo re application vs framer. I was thinking to use the failConnection call to complete a stop, potentially with no error?", "new_text": "Delimiter-separated formats, such as HTTP/1.1. Common Message Framers can be provided by the Transport Services implementation, but an implementation ought to allow custom Message Framers to be defined by the application or some other piece of software. This section describes one possible interface for defining Message Framers as an example."}
{"id": "q-en-api-drafts-9a75929c97768b468224c5679037be7ee5dacc840e2019142921f8df24a0c53a", "old_text": "implementation to communicate control data to the remote endpoint that can be used to parse Messages. At any time if the implementation encounters a fatal error, it can also cause the Connection to fail and provide an error. Before an implementation marks a Message Framer as ready, it can also dynamically add a protocol or framer above it in the stack. This allows protocols like STARTTLS, that need to add TLS conditionally,", "comments": "Just some points about the new text on framers in the API draft: In 10.2 it is specified in which order multiple framers run on incoming and outgoing messages, it is however never specified what the order looks like on Start and Stop events. In 10.6, \"MessageFramer.DeliverAndAdvanceReceiveCursor\" appears to be missing the \"Data\" argument that is present in \"MessageFramer.Deliver\". Also in 10.6, in the second paragraph it says: When it is probably meant to say While the framer implementation is able to notify the Connection that it has finished handling a Start event (by calling MessageFramer.MakeConnectionReady), there is no such way for Stop events.\nAs notes for the responses to these points: If there are multiple framers, the \"lowest\" one (the one closest to the transport) would receive start first, and the next would get the start event when the lower one is ready. Similar for stop. Good calls on the missing data argument and the typo re application vs framer. I was thinking to use the failConnection call to complete a stop, potentially with no error?", "new_text": "implementation to communicate control data to the remote endpoint that can be used to parse Messages. Similarly, when a Message Framer generates a \"Stop\" event, the framer implementation has the opportunity to write some final data or clear up its local state before the \"Closed\" event is delivered to the Application. 
The framer implementation can indicate that it has finished with this. At any time, if the implementation encounters a fatal error, it can also cause the Connection to fail and provide an error. Should the framer implementation deem the candidate selected during racing unsuitable, it can signal this by failing the Connection prior to marking it as ready. If there are no other candidates available, the Connection will fail. Otherwise, the Connection will select a different candidate and the Message Framer will generate a new \"Start\" event. Before an implementation marks a Message Framer as ready, it can also dynamically add a protocol or framer above it in the stack. This allows protocols like STARTTLS, that need to add TLS conditionally,"}
{"id": "q-en-api-drafts-9a75929c97768b468224c5679037be7ee5dacc840e2019142921f8df24a0c53a", "old_text": "Upon receiving this event, a framer implementation is responsible for performing any necessary transformations and sending the resulting data to the next protocol. Implementations SHOULD ensure that there is a way to pass the original data through without copying to improve performance. To provide an example, a simple protocol that adds a length as a", "comments": "Just some points about the new text on framers in the API draft: In 10.2 it is specified in which order multiple framers run on incoming and outgoing messages, it is however never specified what the order looks like on Start and Stop events. In 10.6, \"MessageFramer.DeliverAndAdvanceReceiveCursor\" appears to be missing the \"Data\" argument that is present in \"MessageFramer.Deliver\". Also in 10.6, in the second paragraph it says: When it is probably meant to say While the framer implementation is able to notify the Connection that it has finished handling a Start event (by calling MessageFramer.MakeConnectionReady), there is no such way for Stop events.\nAs notes for the responses to these points: If there are multiple framers, the \"lowest\" one (the one closest to the transport) would receive start first, and the next would get the start event when the lower one is ready. Similar for stop. Good calls on the missing data argument and the typo re application vs framer. I was thinking to use the failConnection call to complete a stop, potentially with no error?", "new_text": "Upon receiving this event, a framer implementation is responsible for performing any necessary transformations and sending the resulting data back to the Message Framer, which will in turn send it to the next protocol. Implementations SHOULD ensure that there is a way to pass the original data through without copying to improve performance. To provide an example, a simple protocol that adds a length as a"}
{"id": "q-en-api-drafts-9a75929c97768b468224c5679037be7ee5dacc840e2019142921f8df24a0c53a", "old_text": "discard a header once the content has been parsed). To deliver a Message to the application, the framer implementation can either directly deliever data that it has allocated, or deliver a range of data directly from the underlying transport and simulatenously advance the receive cursor. Note that \"MessageFramer.DeliverAndAdvanceReceiveCursor\" allows the framer implementation to earmark bytes as part of a Message even", "comments": "Just some points about the new text on framers in the API draft: In 10.2 it is specified in which order multiple framers run on incoming and outgoing messages, it is however never specified what the order looks like on Start and Stop events. In 10.6, \"MessageFramer.DeliverAndAdvanceReceiveCursor\" appears to be missing the \"Data\" argument that is present in \"MessageFramer.Deliver\". Also in 10.6, in the second paragraph it says: When it is probably meant to say While the framer implementation is able to notify the Connection that it has finished handling a Start event (by calling MessageFramer.MakeConnectionReady), there is no such way for Stop events.\nAs notes for the responses to these points: If there are multiple framers, the \"lowest\" one (the one closest to the transport) would receive start first, and the next would get the start event when the lower one is ready. Similar for stop. Good calls on the missing data argument and the typo re application vs framer. I was thinking to use the failConnection call to complete a stop, potentially with no error?", "new_text": "discard a header once the content has been parsed). To deliver a Message to the application, the framer implementation can either directly deliver data that it has allocated, or deliver a range of data directly from the underlying transport and simultaneously advance the receive cursor. 
Note that \"MessageFramer.DeliverAndAdvanceReceiveCursor\" allows the framer implementation to earmark bytes as part of a Message even"}
{"id": "q-en-api-drafts-237a6a86d8a1759b703a733f620c5a530cb291d279329d7b7bb4e13a6ef91f0f", "old_text": "A specialized feature could be required by an application only when using a specific protocol, and not when using others. For example, if an application is using UDP, it could require control over the checksum or fragmentation behavior for UDP; if it used a protocol to frame its data over a byte stream like TCP, it would not need these options. In such cases, the API ought to expose the features in such a way that they take effect when a particular protocol is selected, but do not imply that only that protocol could be used. For example, if the API allows an application to specify a preference to use a partial checksum, communication would not fail when a protocol such as TCP is selected, which uses a checksum covering the entire payload. Other specialized features, however, could be strictly required by an application and thus constrain the set of protocols that can be used.", "comments": "Addresses\nThe architecture draft uses UDP checksums as an example of a protocol-specific property in Section 3.2 and 4.1.2. I find this a bit confusing because in the Interface draft, checksum coverage is a Generic Connection Property (Section 11.1.2). Should we maybe use a different example of a protocol-specific property, such as the TCP User Timeout we have in the Interface draft?\nSure, we can switch the example around\nThanks, LGTM!Thanks \u2013 works", "new_text": "A specialized feature could be required by an application only when using a specific protocol, and not when using others. For example, if an application is using TCP, it could require control over the User Timeout Option for TCP; these options would not take effect for other transport protocols. In such cases, the API ought to expose the features in such a way that they take effect when a particular protocol is selected, but do not imply that only that protocol could be used. 
For example, if the API allows an application to specify a preference to use the User Timeout Option, communication would not fail when a protocol such as QUIC is selected. Other specialized features, however, could be strictly required by an application and thus constrain the set of protocols that can be used."}
{"id": "q-en-api-drafts-237a6a86d8a1759b703a733f620c5a530cb291d279329d7b7bb4e13a6ef91f0f", "old_text": "Connection Properties: The Connection Properties are used to configure protocol-specific options and control per-connection behavior of the Transport Services system; for example, a protocol-specific Connection Property can express that if UDP is used, the implementation ought to use checksums. Note that the presence of such a property does not require that a specific protocol will be used. In general, these properties do not explicitly determine the selection of paths or protocols, but can be used in this way by an implementation during connection establishment. Connection Properties are specified on a Preconnection prior to Connection establishment, and can be modified on the Connection later. Changes made to Connection", "comments": "Addresses\nThe architecture draft uses UDP checksums as an example of a protocol-specific property in Section 3.2 and 4.1.2. I find this a bit confusing because in the Interface draft, checksum coverage is a Generic Connection Property (Section 11.1.2). Should we maybe use a different example of a protocol-specific property, such as the TCP User Timeout we have in the Interface draft?\nSure, we can switch the example around\nThanks, LGTM!Thanks \u2013 works", "new_text": "Connection Properties: The Connection Properties are used to configure protocol-specific options and control per-connection behavior of the Transport Services system; for example, a protocol-specific Connection Property can express that if TCP is used, the implementation ought to use the User Timeout Option. Note that the presence of such a property does not require that a specific protocol will be used. In general, these properties do not explicitly determine the selection of paths or protocols, but can be used in this way by an implementation during connection establishment. 
Connection Properties are specified on a Preconnection prior to Connection establishment, and can be modified on the Connection later. Changes made to Connection"}
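The behavior described in this record (a protocol-specific Connection Property, such as the TCP User Timeout Option, that takes effect only when its protocol is selected but never constrains selection) can be sketched as follows. The `Preconnection` methods and the `tcp.user_timeout_value` property name are hypothetical stand-ins for illustration, not the actual TAPS interface.

```python
class Preconnection:
    """Minimal sketch of protocol-specific Connection Properties (names assumed)."""

    def __init__(self):
        self.connection_properties = {}

    def set_property(self, name, value):
        # Setting a protocol-specific property does not restrict which
        # protocols the transport system may select.
        self.connection_properties[name] = value

    def effective_properties(self, chosen_protocol):
        # A protocol-specific property (here modeled by a "tcp." prefix)
        # only takes effect if that protocol was actually selected.
        return {
            name: value
            for name, value in self.connection_properties.items()
            if not name.startswith("tcp.") or chosen_protocol == "tcp"
        }
```

With this model, specifying a TCP User Timeout preference and then ending up on QUIC simply yields no effective TCP-specific properties, rather than a connection failure.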
{"id": "q-en-api-drafts-4acc396139ca62ede4174a04e01d929a99b0073c8495327e071c2846c7b180fc", "old_text": "will never show up when queuing the value of a preference - the effective preference must be returned instead. Internally, the transport system will first exclude all protocols and paths that match a Prohibit, then exclude all protocols and paths that do not match a Require, then sort candidates according to Preferred properties, and then use Avoided properties as a tiebreaker. Selection Properties that select paths take preference over those that select protocols. For example, if an application indicates a preference for a specific path by specifying an interface, but also a preference for a protocol not available on this path, the transport system will try the path first, ignoring the protocol preference. Selection and Connection Properties, as well as defaults for Message Properties, can be added to a Preconnection to configure the", "comments": "Took a stab at normative language for the rules to process Selection Properties, as discussed in . As far as I remember, we always said that outcomes have to be deterministic/consistent, so I made everything that matters a MUST. I guess our example illustrates that even given the same Selection Properties, the outcome might change depending on the situation, e.g., your path might still be Wifi but it might or might not support your preferred protocol. So, does this really have to be all MUST, or is there a SHOULD in there? Discuss. :) Also, if anyone can name a good reason why the outcome has to be deterministic, please suggest text - maybe something like \"avoid surprises for the application\", but this didn't really sound right, so I left it out for now.\nIn section 5.2. 
in the API doc: \"Internally, the transport system will first exclude all protocols and paths that match a Prohibit, then exclude all protocols and paths that do not match a Require, then sort candidates according to Preferred properties, and then use Avoided properties as a tiebreaker. Selection Properties that select paths take preference over those that select protocols. For example, if an application indicates a preference for a specific path by specifying an interface, but also a preference for a protocol not available on this path, the transport system will try the path first, ignoring the preference.\" This sounds rather like an example algorithm that should maybe go into the implementation doc instead? Or is there any part of this that we need to specify normatively in order to ensure that all systems implement the same?\nThis goes back to issue on \"conflicting policies\". At the time, we decided that there must be a deterministic outcome, therefore, text was added to both the API and the Implementation document, see . If we keep it that way, perhaps the text here should point out that the goal is to resolve conflicting policies, so we achieve a deterministic outcome.\nAh okay, thanks for the background info; missed that. I think we need to use normative language here then. Maybe also a separate subsection makes this more prominent.\nThis reads to me like the rules for processing polices rather than an example algorithm. I think it is part of the API... Normative language would be good and a stronger opening that explains why the rules exist.\nI don't think a separate subsection would make this a better read - this fits fine as it stands, as an intro to the transport property specification IMO. Regarding normative language, that's a separate discussion. So, can we close this issue?\nAgree that we don't need a separate subsection. I can try to tweak the beginning of this part a bit to explain why the rules exist as NAME suggested. 
Regarding normative language - we definitely have it in the API draft, right? So why not have the discussion on it here as well? if we're serious about deterministic outcomes, I guess the whole thing needs to be a MUST, right?\nSorry, true, that normative language debate was about the arch draft, and we definitely have it in the API draft. So yes, let's discuss it here. I also think a MUST would make sense.\nSo the order of filtering out require/prohibit is not important (outcome is the same) - making the order a MUST does not make sense. Also, first sorting by preference and then filtering has the same outcome as doing it the other way around. the only thing we need to say is that filtering MUST happen and sorting SHOULD happen.\nok - I have to admit I didn't think this through, I just agree that we want a deterministic outcome from this.\nWhy did we require a deterministic behavior in first place? Does this prevent TAPS implementations from running experiments like Apple did with TCP SACK and ECN? Does this prevent probabilistic path probing?\nThis was discussed in the TAPS BoF? ... I thought it was about developers being assured that X will happen if they choose Y and not finding the ground shifting as they use the interface.\nInterim: Prefers and avoids allow the implementation to decide the ordering and weighting; require and prohibit are the only hard requirements. Applications should be able to cope with any possibility that isn't prohibited. Likely should add implementation text to say how preferences weight racing.\nDuring the interim discussion, we found that was too strict and limiting, so we opted for another approach that moves the selection algorithm to the Implementation draft. I'm preparing another PR implementing these changes.\nThe text that ended up in the implementation draft is not fully consistent with the racing text, opening a new issue for this and closing this issue.\nLGTM, merging this. thanks, Theresa!This reads good to me. 
Regarding the request for text, I don't have any ideas... but I think this could be fine as it stands? Needs more eyes though.", "new_text": "will never show up when querying the value of a preference - the effective preference must be returned instead. The implementation MUST ensure a consistent outcome given the same Selection Properties. Internally, it MUST exclude all protocols and paths that match a Prohibit and exclude all protocols and paths that do not match a Require. It MUST then sort candidates according to Preferred properties, and then use Avoided properties as a tiebreaker. Note that the protocols and paths that are available on a specific system may vary, such that application preferences may conflict with each other. For example, if an application indicates a preference for a specific path by specifying an interface, but also a preference for a protocol, a situation might occur in which the preferred protocol is not available on the preferred path. In such cases, implementations MUST ensure a deterministic outcome by prioritizing Selection Properties that select paths over those that select protocols. Therefore, the transport system MUST try the path first, ignoring the protocol preference if the protocol does not work on the path. Selection and Connection Properties, as well as defaults for Message Properties, can be added to a Preconnection to configure the"}
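The normative processing order in this record's new_text (exclude candidates matching a Prohibit, exclude candidates missing a Require, then sort by Preferred properties with Avoided properties as a tiebreaker) can be sketched as a ranking function. The data shapes, names, and feature strings below are assumptions for illustration only, not part of the TAPS specification.

```python
from enum import Enum

class Pref(Enum):
    REQUIRE = 1
    PREFER = 2
    AVOID = 3
    PROHIBIT = 4

def rank_candidates(candidates, selection_props):
    """Sketch of the normative ordering for protocol/path candidates.

    candidates: list of {"name": str, "features": set of feature strings}
    selection_props: dict mapping feature string -> Pref
    Returns candidate names, best first, with excluded candidates removed.
    """
    viable = []
    for cand in candidates:
        feats = cand["features"]
        # Exclude anything matching a Prohibit.
        if any(f in feats for f, p in selection_props.items() if p == Pref.PROHIBIT):
            continue
        # Exclude anything that does not match every Require.
        if any(f not in feats for f, p in selection_props.items() if p == Pref.REQUIRE):
            continue
        # Sort by Preferred count (descending), Avoided count as tiebreaker.
        preferred = sum(1 for f, p in selection_props.items()
                        if p == Pref.PREFER and f in feats)
        avoided = sum(1 for f, p in selection_props.items()
                      if p == Pref.AVOID and f in feats)
        viable.append((-preferred, avoided, cand["name"]))
    return [name for _, _, name in sorted(viable)]
```

Given the same Selection Properties and the same candidate set, this ordering is deterministic, matching the consistency requirement in the text above.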
{"id": "q-en-api-drafts-ec1775cfb1baf29d943c556da9293d5492e9d82de7835d85bc89b54edeab871c", "old_text": "configuration details for transport security protocols, as discussed in security-parameters. It does not recommend use (or disuse) of specific algorithms or protocols. Any API-compatible transport security protocol should work in a TAPS system. ", "comments": "This ended up in more text that I thought. I'm not certain if we need all of it. Further I'm unsure about the concrete wording in many cases, so any comments are more than welcome!\nThis is supposed to address issue\nI do mentioned name resolution, however, leakage though DNS might rather be a DNS problem and not necessarily an issue that needs to be considered in taps. Sharing cache state is a good point. I vaguely remember that we already address that somewhere else (could also be in the implementation draft as this is rather an implementation issue). But maybe we can add a pointer...?\nLeakage though DNS may need to be addressed by TAPS, as with the current sockets API, the application has complete control about when DNS resolution is done (and, at least as long as things like DoH/DoT are predominantly implemented by nonsystem resolver libraries, how). TAPS as envisioned gives the system more control over resolution.\n(IIRC the architecture does explicitly address and mention the shared cache issue; adding a line to refer to that in privacy considerations might be nice but isn't IMO necessary.)\nGorry, I addressed all your nits/comments.\nI think it is a good start, but I miss two aspects: Information leakage through DNS, which can also make addresses on different paths linkable Information leakage within an application though shared caches, e.g., within a browser. 
I agree this is a good starting point; note that I took the liberty to directly fix some nits (such as capitalizing TAPS) and typos; these never change the intended meaning.", "new_text": "configuration details for transport security protocols, as discussed in security-parameters. It does not recommend use (or disuse) of specific algorithms or protocols. Any API-compatible transport security protocol should work in a TAPS system. Security considerations for these protocols should be discussed in the respective specifications. The described API is used to exchange information between an application and the transport system. While it is not necessarily expected that both systems are implemented by the same authority, it is expected that the transport system implementation is either provided as a library that is selected by the application from a trusted party, or that it is part of the operating system that the application also relies on for other tasks. In either case, the TAPS API is an internal interface that is used to exchange information locally between two systems. However, as the transport system is responsible for network communication, it is in the position to potentially share any information provided by the application with the network or another communication peer. Most of the information provided over the TAPS API is useful to configure and select protocols and paths and is not necessarily privacy sensitive. Still, there is some information that could be privacy sensitive because this might reveal usage characteristics and habits of the user of an application. Of course, any communication over a network reveals usage characteristics, as all packets as well as their timing and size are part of the network-visible wire image RFC8546. However, the selection of a protocol and its configuration also impacts which information is visible, potentially in clear text, and which other entities can access it. 
In most cases, information that is provided for protocol and path selection should not directly translate to information that can be observed by network devices on the path. But there might be specific configuration information that is intended for path exposure, such as a DiffServ codepoint setting, that is either provided directly by the application or indirectly configured over a traffic profile. Further, applications should be aware that communication attempts can lead to more than one connection establishment. This is, for example, the case when the transport system also executes name resolution; or when support mechanisms such as TURN or ICE are used to establish connectivity; or if protocols or paths are raced; or if a path fails and fallback or re-establishment is supported in the transport system. These communication activities are not different from what is used today; however, the goal of a TAPS transport system is to support such mechanisms as a generic service within the transport layer. This enables applications to more dynamically benefit from innovations and new protocols in the transport system, but at the same time may reduce transparency of the underlying communication actions to the application itself. The TAPS API is designed such that protocol and path selection can be limited to a small and controlled set if required by the application for functional or security purposes. Further, TAPS implementations should provide an interface to poll information about which protocol and path is currently in use, as well as provide logging about the communication events of each connection. "}
{"id": "q-en-api-drafts-05d343e85ac17c8b30dd03a0c45c0c4239b9fdedd6bd7aa9c641cf991951eea0", "old_text": "ConnectionError<> occurs when a Connection transitions to Closed state due to an error in all other circumstances. The interface provides the following guarantees about the ordering of operations:", "comments": "Would be nice if someone has time to do some ascii art :-)\nWouldn't that be fig. 4 in the architecture draft? Except that, looking at this now, I see inconsistencies! E.g., I don't see \"RendezvousDone\" in the figure. Also, instead of ConnectionReceived, it says \"Connection Received\" - rather, this should probably marked as an event by writing it as \"ConnectionReceived<>\". The figure is also missing InitiateWithSend() ... probably it would be good to make this figure both complete and consistent with section 13, \"Connection State and Ordering of Operations and Events\", of the API draft? I don't think very much is missing there.\nI think it would be nice to have a real state diagram here, probably with states like \"closed\", \"open\" and \"listen\" and the events that create a transition between these state.\nI'm not sure that Figure 4 in the Arch diagram serves the purpose that Mirja's trying to address with the state diagram in API. I also don't think we need FIgure 4 to align with the specific naming and CamelCasingConventions in API.\nThanks! I guess you could add a loop for sent<> and received<> in established state but maybe that also too much.Works for me", "new_text": "ConnectionError<> occurs when a Connection transitions to Closed state due to an error in all other circumstances. The following diagram shows the possible states of a Connection and the events that occur upon a transition from one state to another. The interface provides the following guarantees about the ordering of operations:"}
{"id": "q-en-api-drafts-9fd5f8d814bc86b7acb046aa22650a0fa9ef00bb860d881bca191d6d06379dd7", "old_text": "protocols and paths without requiring major changes to the application. design explains the design principles behind the Transport Services API. These principles are intended to make sure that transport protocols can continue to be enhanced and evolve without requiring too many changes by application developers. concepts presents the Transport Services architecture diagram and defines the concepts that are used by both the API and", "comments": "This proposal consolidates normative language into the section on design principles (now requirements), using the suggestions in .\nPlease review the latest updates!\nEarly art-art review of draft-ietf-taps-architecture My first observation and request is that the group rethink the separation of the architecture and interface documents. It is a warning-sign that the interface document is a normative reference for the architecture document. As the documents are currently constructed, a new reader cannot grasp the architecture without inferring some of it through reading the interface. Consider reformulating the architecture document as an overview - introduce principles, concepts, and terminology, but don't attempt to be normative. (There are signs that this may already be, or have been, a path the document was being pushed towards). If nothing else, rename it, and acknowledge that the actual architecture is specified using the interface definition. Note that the first sentence of the Overview in the interface document also highlights the current structural problem.\nI don't entirely agree with this ART-ART comment: I think I might well agree with moving more of the description to this Arch document, but as discussed before, I also think it useful here to set the actual requirements for the architecture, and to define the architecture. 
It may need to be much clearer that this architecture consists of more than the API.\nFor completeness, this is the first statement of the interface document review: This document reads fairly well. I have some structural concerns (see the early review of the architecture document). In particular I think this document contains the actual definition of the architecture and some of that is inferential.\nDiscussion at September 2021 interim: pass through this document with a view toward making it a high level introduction, not an architecture definition: \"An Introduction to the Transport Services Architecture\". remove normative language from this document (moving it to API, where necessary). move details not useful in a high level intro to API.\nI disagree with this conclusion at first pass\u2014we did a specific analysis and pass on this over a year ago, with PR . The normative language and the standards-track aspect was very intentional and something the WG had consensus on. The point is that the API is a specific incarnation of the TAPS architecture, but there are overarching architectural properties that go beyond a specific abstract API.\nI completely agree with Tommy here, and recommend we fix this in . We should also of course check for any stray text or missing concepts. I think it would also be useful to somehow better explain that in figure 3 the API function is ONE part of the system.\nSee linked PR first please to see if this resolves this?\nA PR was merged that addressed important text issues with separation of requirements and overview, and should more clearly set out that the requirements are for the system. The question of whether the overall TAPS systems requirements belong in this ID; the API; or a separate RFC needs some discussion based on the review, before this can be closed.\nMany good text improvements in the PR NAME but I noticed two smaller updates in the PR that I think were incorrect.
Fixed in\nThe review comment showed there were many places where the wording of architecture can be improved. There are some high level principles that I think should remain in the architecture document. The decision in today's interim is to review the architecture to get this more into a shape that is valuable as a reference, and then check the updated draft.\nI now suggest the issues are addressed and we close this issue without moving text between documents\nI agree with that; closing.\nOverall review: I think we need to keep the normative language for the key features that differentiate the API. I'd really like to use REQUIRED, RECOMMENDED, rather than MUST and SHOULD - because these are not protocol operation requirements, but are implementation requirements to conform. This includes requirements based on Minset - where the WG derived this API: This is after all why the WG worked on Minset as the basis of the API. I suggest we group all the requirements on architecture as short one or two sentence blocks (preferably as a bullet or definition list) collected into a single subsection, so they are easy to find and read and we say that these are the key architectural requirements. We insert this new requirements text either in their own section, or in section 1.3 (although I am open to other places where we can call-out the architectural requirements). The following are key requirements that define the TAPS architecture: It is RECOMMENDED that functionality that is common across multiple transport protocols is made accessible through a unified set of TAPS interface calls. As a baseline, a Transport Services system interface is REQUIRED to allow access to the minimal set of features offered by transport protocols {{?I-D.ietf-taps-minset}}. A Transport Services system automates the selection of protocols and features and network, based on a set of Properties supplied through the TAPS interface.
It is RECOMMENDED that a Transport Services system offers Properties that are common to multiple transport protocols. Specifying common Properties enables a system to appropriately select between protocols that offer equivalent features. This design permits evolution of the transport protocols and functions without affecting the programs using the system. Specifying common Properties enables a Transport Services system to select an appropriate network interface. It is RECOMMENDED that a system offers Properties that are common to a variety of network layer interfaces and paths. This design permits racing of different network paths without affecting the programs using the system. Applications using the Transport Services system interface are REQUIRED to be robust to the automated selection provided by the system, including the possibility to express requirements and preferences that constrain the choices that can be made by the system.
However, a Transport Services system MAY race different security protocols, e.g., if the application explicitly specifies that it considers them equivalent. Transport Services systems MUST NOT automatically fall back from secure protocols to insecure protocols, or to weaker versions of secure protocols. A Transport Services system MAY allow applications to specify that fallback to a specific other version of a protocol is allowed. It is RECOMMENDED that in normal use, a Transport Services system is designed so that all selection decisions result in consistent choices for the same network conditions when they have the same set of preferences. This is intended to provide predictable outcomes to the application using the transport service. I would then suggest to use lower case within other parts of the document - except where pointed to from here, (we could refer to the requirements subsection, if we thought necessary).\nfor discussion at the interim, but IMO we could start the interim with a PR based on these suggestions and work from there.\nI suspect that we need more protocol-specific properties to be able to talk to various kinds of native protocol peers (other than TCP and UDP: these should already work fine with what we have, I believe). A concrete example: we're hiding stream numbers. In NEAT, we needed to make them configurable though, because we wanted a NEAT client to be able to talk to a native SCTP server, where an application may expect a certain type of data to arrive via a certain stream number. I suspect that we may need something similar for a QUIC peer. Should we address this? (I'd say yes) How .. is it a protocol-specific (per-Message) property? (I'd say yes... and perhaps a per-Connection property too, with text explaining that the per-Message one automatically becomes the Connection default unless otherwise specified) What other such things are we missing... what other expectations may e.g.
a native QUIC or SCTP application have that we just cannot fulfil with our current system?", "new_text": "protocols and paths without requiring major changes to the application. requirements explains the fundamental requirements for a Transport Services API. These principles are intended to make sure that transport protocols can continue to be enhanced and evolve without requiring significant changes by application developers. concepts presents the Transport Services architecture diagram and defines the concepts that are used by both the API and"}
{"id": "q-en-api-drafts-9fd5f8d814bc86b7acb046aa22650a0fa9ef00bb860d881bca191d6d06379dd7", "old_text": "performance and functional characteristics; and it can communicate with different remote systems to optimize performance, robustness to failure, or some other metric. Beyond these, if the API for the system remains the same over time, new protocols and features could be added to the system's implementation without requiring changes in applications for adoption. 3.1. Functionality that is common across multiple transport protocols ought to be accessible through a unified set of API calls. An application using a Transport Services API can implement logic for its basic use of transport networking (establishing the transport, and sending and receiving data) once, and expect that implementation to continue to function as the transports change. As a baseline, any Transport Services API needs to allow access to the distilled minimal set of features offered by transport protocols I-D.ietf-taps-minset. 3.2. There are applications that will need to control fine-grained details of transport protocols to optimize their behavior and ensure compatibility with remote systems. A Transport Services system therefore ought to also permit more specialized protocol features to be used. The interface for these specialized options ought to be exposed differently from the common options to ensure flexibility. A specialized feature could be required by an application only when using a specific protocol, and not when using others. For example,", "comments": "This proposal consolidates normative language into the section on design principles (now requirements), using the suggestions in .\nPlease review the latest updates!\nEarly art-art review of draft-ietf-taps-architecture My first observation and request is that the group rethink the separation of the architecture and interface documents. 
It is a warning-sign that the interface document is a normative reference for the architecture document. As the documents are currently constructed, a new reader cannot grasp the architecture without inferring some of it through reading the interface. Consider reformulating the architecture document as an overview - introduce principles, concepts, and terminology, but don't attempt to be normative. (There are signs that this may already be, or have been, a path the document was being pushed towards). If nothing else, rename it, and acknowledge that the actual architecture is specified using the interface definition. Note that the first sentence of the Overview in the interface document also highlights the current structural problem.\nI don't entirely agree with this ART-ART comment: I think I might well agree with moving more of the description to this Arch document, but as discussed before, I also think it useful here to set the actual requirements for the architecture, and to define the architecture. It may need to be much clearer that this architecture consists of more than the API.\nFor completeness, this is the first statement of the interface document review: This document reads fairly well. I have some structural concerns (see the early review of the architecture document). In particular I think this document contains the actual definition of the architecture and some of that is inferential.\nDiscussion at September 2021 interim: pass through this document with a view toward making it a high level introduction, not an architecture definition: \"An Introduction to the Transport Services Architecture\". remove normative language from this document (moving it to API, where necessary). move details not useful in a high level intro to API.\nI disagree with this conclusion at first pass\u2014we did a specific analysis and pass on this over a year ago, with PR .
The normative language and the standards-track aspect was very intentional and something the WG had consensus on. The point is that the API is a specific incarnation of the TAPS architecture, but there are overarching architectural properties that go beyond a specific abstract API.\nI completely agree with Tommy here, and recommend we fix this in . We should also of course check for any stray text or missing concepts. I think it would also be useful to somehow better explain that in figure 3 the API function is ONE part of the system.\nSee linked PR first please to see if this resolves this?\nA PR was merged that addressed important text issues with separation of requirements and overview, and should more clearly set out that the requirements are for the system. The question of whether the overall TAPS systems requirements belong in this ID; the API; or a separate RFC needs some discussion based on the review, before this can be closed.\nMany good text improvements in the PR NAME but I noticed two smaller updates in the PR that I think were incorrect. Fixed in\nThe review comment showed there were many places where the wording of architecture can be improved. There are some high level principles that I think should remain in the architecture document. The decision in today's interim is to review the architecture to get this more into a shape that is valuable as a reference, and then check the updated draft.\nI now suggest the issues are addressed and we close this issue without moving text between documents\nI agree with that; closing.\nOverall review: I think we need to keep the normative language for the key features that differentiate the API. I'd really like to use REQUIRED, RECOMMENDED, rather than MUST and SHOULD - because these are not protocol operation requirements, but are implementation requirements to conform. This includes requirements based on Minset - where the WG derived this API: This is after all why the WG worked on Minset as the basis of the API.
I suggest we group all the requirements on architecture as short one or two sentence blocks (preferably as a bullet or definition list) collected into a single subsection, so they are easy to find and read and we say that these are the key architectural requirements. We insert this new requirements text either in their own section, or in section 1.3 (although I am open to other places where we can call-out the architectural requirements). The following are key requirements that define the TAPS architecture: It is RECOMMENDED that functionality that is common across multiple transport protocols is made accessible through a unified set of TAPS interface calls. As a baseline, a Transport Services system interface is REQUIRED to allow access to the minimal set of features offered by transport protocols {{?I-D.ietf-taps-minset}}. A Transport Services system automates the selection of protocols and features and network, based on a set of Properties supplied through the TAPS interface. It is RECOMMENDED that a Transport Services system offers Properties that are common to multiple transport protocols. Specifying common Properties enables a system to appropriately select between protocols that offer equivalent features. This design permits evolution of the transport protocols and functions without affecting the programs using the system. Specifying common Properties enables a Transport Services system to select an appropriate network interface. It is RECOMMENDED that a system offers Properties that are common to a variety of network layer interfaces and paths. This design permits racing of different network paths without affecting the programs using the system. Applications using the Transport Services system interface are REQUIRED to be robust to the automated selection provided by the system, including the possibility to express requirements and preferences that constrain the choices that can be made by the system.
If two different Protocol Stacks can be safely swapped, or raced in parallel (see Section 4.2.2), by the Transport Services system then they are considered to be \"equivalent\". Equivalent Protocol Stacks need to meet the requirements for racing specified in section XX. Applications using a Transport Services system MAY explicitly require or prevent the use of specific transport features (and transport protocols) and XXXinterfaces using the \"REQUIRED\" and \"PROHIBIT\" preferences. This allows an application to constrain the set of protocols and features that will be selected for the transport service. It is RECOMMENDED that the default usage by applications does not necessarily restrict this selection, so that the system can make informed choices (e.g., based on the availability of interfaces or set of transport protocols available for the specified remote endpoint). When a Transport Services system races between two different Protocol Stacks, both SHOULD use the same security protocols and options. However, a Transport Services system MAY race different security protocols, e.g., if the application explicitly specifies that it considers them equivalent. Transport Services systems MUST NOT automatically fall back from secure protocols to insecure protocols, or to weaker versions of secure protocols. A Transport Services system MAY allow applications to specify that fallback to a specific other version of a protocol is allowed. It is RECOMMENDED that in normal use, a Transport Services system is designed so that all selection decisions result in consistent choices for the same network conditions when they have the same set of preferences. This is intended to provide predictable outcomes to the application using the transport service. 
I would then suggest to use lower case within other parts of the document - except where pointed to from here, (we could refer to the requirements subsection, if we thought necessary).\nfor discussion at the interim, but IMO we could start the interim with a PR based on these suggestions and work from there.\nI suspect that we need more protocol-specific properties to be able to talk to various kinds of native protocol peers (other than TCP and UDP: these should already work fine with what we have, I believe). A concrete example: we're hiding stream numbers. In NEAT, we needed to make them configurable though, because we wanted a NEAT client to be able to talk to a native SCTP server, where an application may expect a certain type of data to arrive via a certain stream number. I suspect that we may need something similar for a QUIC peer. Should we address this? (I'd say yes) How .. is it a protocol-specific (per-Message) property? (I'd say yes... and perhaps a per-Connection property too, with text explaining that the per-Message one automatically becomes the Connection default unless otherwise specified) What other such things are we missing... what other expectations may e.g. a native QUIC or SCTP application have that we just cannot fulfil with our current system?", "new_text": "performance and functional characteristics; and it can communicate with different remote systems to optimize performance, robustness to failure, or some other metric. Beyond these, if the API for the system remains the same over time, new protocols and features can be added to the system's implementation without requiring changes in applications for adoption. The normative requirements described here allow Transport Services APIs and Implementations to provide this functionality without causing incompatibility or introducing security vulnerabilities. The rest of this document describes the architecture non-normatively. 3.1.
Any functionality that is common across multiple transport protocols SHOULD be made accessible through a unified set of Transport Services API calls. As a baseline, any Transport Services API MUST allow access to the minimal set of features offered by transport protocols I-D.ietf-taps-minset. An application can specify constraints and preferences for the protocols, features, and network interfaces it will use via Properties. A Transport Services API SHOULD offer Properties that are common to multiple transport protocols, in order to enable the system to appropriately select between protocols that offer equivalent features. Similarly, a Transport Services API SHOULD offer Properties that are applicable to a variety of network layer interfaces and paths, in order to permit racing of different network paths without affecting the applications using the system. Each Property is expected to have a default value. The default values for Properties SHOULD be selected to ensure correctness for the widest set of applications, while providing the widest set of options for selection. For example, since both applications that require reliability and those that do not require reliability can function correctly when a protocol provides reliability, reliability ought to be enabled by default. As another example, the default value for a Property regarding the selection of network interfaces ought to permit as many interfaces as possible. Applications using a Transport Services system interface are REQUIRED to be robust to the automated selection provided by the system, where the automated selection is constrained by the requirements and preferences expressed by the application. 3.2. There are applications that will need to control fine-grained details of transport protocols to optimize their behavior and ensure compatibility with remote systems. A Transport Services system therefore SHOULD permit more specialized protocol features to be used. 
A specialized feature could be required by an application only when using a specific protocol, and not when using others. For example,"}
{"id": "q-en-api-drafts-9fd5f8d814bc86b7acb046aa22650a0fa9ef00bb860d881bca191d6d06379dd7", "old_text": "preference to use the User Timeout Option, communication would not fail when a protocol such as QUIC is selected. Other specialized features, however, could be strictly required by an application and thus constrain the set of protocols that can be used. For example, if an application requires support for automatic handover or failover for a connection, only protocol stacks that", "comments": "This proposal consolidates normative language into the section on design principles (now requirements), using the suggestions in .\nPlease review the latest updates!\nEarly art-art review of draft-ietf-taps-architecture My first observation and request is that the group rethink the separation of the architecture and interface documents. It is a warning-sign that the interface document is a normative reference for the architecture document. As the documents are currently constructed, a new reader cannot grasp the architecture without inferring some of it through reading the interface. Consider reformulating the architecture document as an overview - introduce principles, concepts, and terminology, but don't attempt to be normative. (There are signs that this may already be, or have been, a path the document was being pushed towards). If nothing else, rename it, and acknowledge that the actual architecture is specified using the interface definition. Note that the first sentence of the Overview in the interface document also highlights the current structural problem.\nI don't entirely agree with this ART-ART comment: I think I might well agree with moving more of the description to this Arch document, but as discussed before, I also think it useful here to set the actual requirements for the architecture, and to define the architecture. 
It may need to be much clearer that this architecture consists of more than the API.\nFor completeness, this is the first statement of the interface document review: This document reads fairly well. I have some structural concerns (see the early review of the architecture document). In particular I think this document contains the actual definition of the architecture and some of that is inferential.\nDiscussion at September 2021 interim: pass through this document with a view toward making it a high level introduction, not an architecture definition: \"An Introduction to the Transport Services Architecture\". remove normative language from this document (moving it to API, where necessary). move details not useful in a high level intro to API.\nI disagree with this conclusion at first pass\u2014we did a specific analysis and pass on this over a year ago, with PR . The normative language and the standards-track aspect was very intentional and something the WG had consensus on. The point is that the API is a specific incarnation of the TAPS architecture, but there are overarching architectural properties that go beyond a specific abstract API.\nI completely agree with Tommy here, and recommend we fix this in . We should also of course check for any stray text or missing concepts. I think it would also be useful to somehow better explain that in figure 3 the API function is ONE part of the system.\nSee linked PR first please to see if this resolves this?\nA PR was merged that addressed important text issues with separation of requirements and overview, and should more clearly set out that the requirements are for the system. The question of whether the overall TAPS systems requirements belong in this ID; the API; or a separate RFC needs some discussion based on the review, before this can be closed.\nMany good text improvements in the PR NAME but I noticed two smaller updates in the PR that I think were incorrect.
Fixed in\nThe review comment showed there were many places where the wording of architecture can be improved. There are some high level principles that I think should remain in the architecture document. The decision in today's interim is to review the architecture to get this more into a shape that is valuable as a reference, and then check the updated draft.\nI now suggest the issues are addressed and we close this issue without moving text between documents\nI agree with that; closing.\nOverall review: I think we need to keep the normative language for the key features that differentiate the API. I'd really like to use REQUIRED, RECOMMENDED, rather than MUST and SHOULD - because these are not protocol operation requirements, but are implementation requirements to conform. This includes requirements based on Minset - where the WG derived this API: This is after all why the WG worked on Minset as the basis of the API. I suggest we group all the requirements on architecture as short one or two sentence blocks (preferably as a bullet or definition list) collected into a single subsection, so they are easy to find and read and we say that these are the key architectural requirements. We insert this new requirements text either in their own section, or in section 1.3 (although I am open to other places where we can call-out the architectural requirements). The following are key requirements that define the TAPS architecture: It is RECOMMENDED that functionality that is common across multiple transport protocols is made accessible through a unified set of TAPS interface calls. As a baseline, a Transport Services system interface is REQUIRED to allow access to the minimal set of features offered by transport protocols {{?I-D.ietf-taps-minset}}. A Transport Services system automates the selection of protocols and features and network, based on a set of Properties supplied through the TAPS interface.
It is RECOMMENDED that a Transport Services system offers Properties that are common to multiple transport protocols. Specifying common Properties enables a system to appropriately select between protocols that offer equivalent features. This design permits evolution of the transport protocols and functions without affecting the programs using the system. Specifying common Properties enables a Transport Services system to select an appropriate network interface. It is RECOMMENDED that a system offers Properties that are common to a variety of network layer interfaces and paths. This design permits racing of different network paths without affecting the programs using the system. Applications using the Transport Services system interface are REQUIRED to be robust to the automated selection provided by the system, including the possibility to express requirements and preferences that constrain the choices that can be made by the system.
However, a Transport Services system MAY race different security protocols, e.g., if the application explicitly specifies that it considers them equivalent. Transport Services systems MUST NOT automatically fall back from secure protocols to insecure protocols, or to weaker versions of secure protocols. A Transport Services system MAY allow applications to specify that fallback to a specific other version of a protocol is allowed. It is RECOMMENDED that in normal use, a Transport Services system is designed so that all selection decisions result in consistent choices for the same network conditions when they have the same set of preferences. This is intended to provide predictable outcomes to the application using the transport service. I would then suggest to use lower case within other parts of the document - except where pointed to from here, (we could refer to the requirements subsection, if we thought necessary).\nfor discussion at the interim, but IMO we could start the interim with a PR based on these suggestions and work from there.\nI suspect that we need more protocol-specific properties to be able to talk to various kinds of native protocol peers (other than TCP and UDP: these should already work fine with what we have, I believe). A concrete example: we're hiding stream numbers. In NEAT, we needed to make them configurable though, because we wanted a NEAT client to be able to talk to a native SCTP server, where an application may expect a certain type of data to arrive via a certain stream number. I suspect that we may need something similar for a QUIC peer. Should we address this? (I'd say yes) How .. is it a protocol-specific (per-Message) property? (I'd say yes... and perhaps a per-Connection property too, with text explaining that the per-Message one automatically becomes the Connection default unless otherwise specified) What other such things are we missing... what other expectations may e.g.
a native QUIC or SCTP application have that we just cannot fulfil with our current system?", "new_text": "preference to use the User Timeout Option, communication would not fail when a protocol such as QUIC is selected. Other specialized features, however, can be strictly required by an application and thus constrain the set of protocols that can be used. For example, if an application requires support for automatic handover or failover for a connection, only protocol stacks that"}
{"id": "q-en-api-drafts-9fd5f8d814bc86b7acb046aa22650a0fa9ef00bb860d881bca191d6d06379dd7", "old_text": "3.3. The Transport Services API is envisioned as the abstract model for a family of APIs that share a common way to expose transport features and encourage flexibility. The abstract API definition I-D.ietf- taps-interface describes this interface and how it can be exposed to application developers. Implementations that provide the Transport Services API I-D.ietf- taps-impl will vary due to system-specific support and the needs of the deployment scenario. It is expected that all implementations of Transport Services will offer the entire mandatory API. All implementations are expected to offer an API that is sufficient to use the distilled minimal set of features offered by transport protocols I-D.ietf-taps-minset, including API support for TCP and UDP transport. However, some features provided by this API will not be functional in certain implementations. For example, it is possible that some very constrained devices might not have a full TCP implementation beneath the API. To preserve flexibility and compatibility with future protocols, top- level features in the Transport Services API ought to avoid referencing particular transport protocols. The mappings of these API features to specific implementations of each feature is explained in the I-D.ietf-taps-impl along with the implications of the feature on existing protocols. It is expected that I-D.ietf-taps-interface will be updated and supplemented as new protocols and protocol features are developed. It is important to note that neither the Transport Services API I- D.ietf-taps-interface nor the Implementation document I-D.ietf-taps- impl define new protocols or protocol capabilities that affect what is communicated across the network. Use of a Transport Services system does not require that a peer on the other side of a connection uses the same API or implementation. 
A Transport Services system acting as a connection initiator can communicate with any existing system that implements the transport protocol(s) selected by the Transport Services system. Similarly, a Transport Services system acting as a listener can receive connections for any protocol that is supported by the system from existing initiators that implement the protocol, independent of whether the initiator uses Transport Services as well or not. 4.", "comments": "This proposal consolidates normative language into the section on design principles (now requirements), using the suggestions in .\nPlease review the latest updates!\nEarly art-art review of draft-ietf-taps-architecture My first observation and request is that the group rethink the separation of the architecture and interface documents. It is a warning-sign that the interface document is a normative reference for the architecture document. As the documents are currently constructed, a new reader cannot grasp the architecture without inferring some of it through reading the interface. Consider reformulating the architecture document as an overview - introduce principles, concepts, and terminology, but don't attempt to be normative. (There are signs that this may already be, or have been, a path the document was being pushed towards). If nothing else, rename it, and acknowledge that the actual architecture is specified using the interface definition. Note that the first sentence of the Overview in the interface document also highlights the current structural problem.\nI don't entirely agree with this ART-ART comment: I think I might well agree with moving more of the description to this Arch document, but as discussed before, I also think it useful here to set the actual requirements for the architecture, and to define the architecture. 
It may need to be much clearer that this architecture consists of more than the API.\nFor completeness, this is the first statement of the interface document review: This document reads fairly well. I have some structural concerns (see the early review of the architecture document). In particular I think this document contains the actual definition of the architecture and some of that is inferential.\nDiscussion at September 2021 interim: pass through this document with a view toward making it a high level introduction, not an architecture definition: \"An Introduction to the Transport Services Architecture\". remove normative language from this document (moving it to API, where necessary). move details not useful in a high level intro to API.\nI disagree with this conclusion at first pass\u2014we did a specific analysis and pass on this over a year ago, with PR . The normative language and the standards-track aspect was very intentional and something the WG had consensus on. The point is that the API is a specific incarnation of the TAPS architecture, but there are overarching architectural properties that go beyond a specific abstract API.\nI completely agree with Tommy here, and recommend we fix this in . We should also of course check for any stray text or missing concepts. I think it would also be useful to somehow better explain that in figure 3 the API function is ONE part of the system.\nSee linked PR first please to see if this resolves this?\nA PR was merged that addressed important text issues with separation of requirements and overview, and should more clearly set out that the requirements are for the system. The question of whether the overall TAPS systems requirements belong in this ID; the API; or a separate RFC needs some discussion based on the review, before this can be closed.\nMany good text improvements in the PR NAME but I noticed two smaller updates in the PR that I think were incorrect. 
Fixed in\nThe review comment showed there were many places where the wording of the architecture can be improved. There are some high level principles that I think should remain in the architecture document. The decision in today's interim is to review the architecture to get this more into a shape that is valuable as a reference, and then check the updated draft.\nI now suggest the issues are addressed and we close this issue without moving text between documents\nI agree with that; closing.\nOverall review: I think we need to keep the normative language for the key features that differentiate the API. I'd really like to use REQUIRED, RECOMMENDED, rather than MUST and SHOULD - because these are not protocol operation requirements, but are implementation requirements to conform. This includes requirements based on Minset - where the WG derived this API: This is after all why the WG worked on Minset as the basis of the API. I suggest we group all the requirements on architecture as short one or two sentence blocks (preferably as a bullet or definition list) collected into a single subsection, so they are easy to find and read and we say that these are the key architectural requirements. We insert this new requirements text either in their own section, or in section 1.3 (although I am open to other places where we can call-out the architectural requirements). The following are key requirements that define the TAPS architecture: It is RECOMMENDED that functionality that is common across multiple transport protocols is made accessible through a unified set of TAPS interface calls. As a baseline, a Transport Services system interface is REQUIRED to allow access to the minimal set of features offered by transport protocols {{?I-D.ietf-taps-minset}}. A Transport Services system automates the selection of protocols, features, and network paths, based on a set of Properties supplied through the TAPS interface. 
It is RECOMMENDED that a Transport Services system offers Properties that are common to multiple transport protocols. Specifying common Properties enables a system to appropriately select between protocols that offer equivalent features. This design permits evolution of the transport protocols and functions without affecting the programs using the system. Specifying common Properties enables a Transport Services system to select an appropriate network interface. It is RECOMMENDED that a system offers Properties that are common to a variety of network layer interfaces and paths. This design permits racing of different network paths without affecting the programs using the system. Applications using the Transport Services system interface are REQUIRED to be robust to the automated selection provided by the system, including the possibility to express requirements and preferences that constrain the choices that can be made by the system. If two different Protocol Stacks can be safely swapped, or raced in parallel (see Section 4.2.2), by the Transport Services system then they are considered to be \"equivalent\". Equivalent Protocol Stacks need to meet the requirements for racing specified in section XX. Applications using a Transport Services system MAY explicitly require or prevent the use of specific transport features (and transport protocols) and XXXinterfaces using the \"REQUIRED\" and \"PROHIBIT\" preferences. This allows an application to constrain the set of protocols and features that will be selected for the transport service. It is RECOMMENDED that the default usage by applications does not necessarily restrict this selection, so that the system can make informed choices (e.g., based on the availability of interfaces or set of transport protocols available for the specified remote endpoint). When a Transport Services system races between two different Protocol Stacks, both SHOULD use the same security protocols and options. 
However, a Transport Services system MAY race different security protocols, e.g., if the application explicitly specifies that it considers them equivalent. Transport Services systems MUST NOT automatically fall back from secure protocols to insecure protocols, or to weaker versions of secure protocols. A Transport Services system MAY allow applications to specify that fallback to a specific other version of a protocol is allowed. It is RECOMMENDED that in normal use, a Transport Services system is designed so that all selection decisions result in consistent choices for the same network conditions when they have the same set of preferences. This is intended to provide predictable outcomes to the application using the transport service. I would then suggest to use lower case within other parts of the document - except where pointed to from here, (we could refer to the requirements subsection, if we thought necessary).\nfor discussion at the interim, but IMO we could start the interim with a PR based on these suggestions and work from there.\nI suspect that we need more protocol-specific properties to be able to talk to various kinds of native protocol peers (other than TCP and UDP: these should already work fine with what we have, I believe). A concrete example: we're hiding stream numbers. In NEAT, we needed to make them configurable though, because we wanted a NEAT client to be able to talk to a native SCTP server, where an application may expect a certain type of data to arrive via a certain stream number. I suspect that we may need something similar for a QUIC peer. Should we address this? (I'd say yes) How .. is it a protocol-specific (per-Message) property? (I'd say yes... and perhaps a per-Connection property too, with text explaining that the per-Message one automatically becomes the Connection default unless otherwise specified) What other such things are we missing... what other expectations may e.g. 
a native QUIC or SCTP application have that we just cannot fulfil with our current system?", "new_text": "3.3. A Transport Services implementation can select Protocol Stacks based on the Properties communicated by the application. If two different Protocol Stacks can be safely swapped, or raced in parallel (see racing), then they are considered to be \"equivalent\". Equivalent Protocol Stacks are defined as stacks that can provide the same transport properties and interface expectations as requested by the application. The following two examples show non-equivalent Protocol Stacks: If the application requires preservation of message boundaries, a Protocol Stack that runs UDP as the top-level interface to the application is not equivalent to a Protocol Stack that runs TCP as the top-level interface. A UDP stack would allow an application to read out message boundaries based on datagrams sent from the remote system, whereas TCP does not preserve message boundaries on its own, but needs a framing protocol on top to determine message boundaries. If the application specifies that it requires reliable transmission of data, then a Protocol Stack using UDP without any reliability layer on top would not be allowed to replace a Protocol Stack using TCP. The following example shows equivalent Protocol Stacks: If the application does not require reliable transmission of data, then a Protocol Stack that adds reliability could be regarded as an equivalent Protocol Stack as long as providing this would not conflict with any other application-requested properties. To ensure that security protocols are not incorrectly swapped, Transport Services systems MUST only select Protocol Stacks that meet application requirements (I-D.ietf-taps-transport-security). Systems SHOULD only race Protocol Stacks where the transport security protocols within the stacks are identical. 
Transport Services systems MUST NOT automatically fall back from secure protocols to insecure protocols, or to weaker versions of secure protocols. A Transport Services system MAY allow applications to explicitly specify that fallback to a specific other version of a protocol is allowed, e.g., to allow fallback to TLS 1.2 if TLS 1.3 is not available. 3.4. It is important to note that neither the Transport Services API I-D.ietf-taps-interface nor the Implementation document I-D.ietf-taps-impl define new protocols or protocol capabilities that affect what is communicated across the network. Use of a Transport Services system MUST NOT require that a peer on the other side of a connection uses the same API or implementation. A Transport Services system acting as a connection initiator is able to communicate with any existing system that implements the transport protocol(s) and all the required properties selected by the Transport Services system. Similarly, a Transport Services system acting as a listener can receive connections for any protocol that is supported by the system from existing initiators that implement the protocol, independent of whether the initiator uses a Transport Services system or not. In normal use, a Transport Services system SHOULD result in consistent protocol and interface selection decisions for the same network conditions given the same set of Properties. This is intended to provide predictable outcomes to the application using the API. 4."}
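The selection rules in the new_text above (a stack must satisfy the application's requested properties; racing is only allowed between stacks with identical security protocols; no automatic fallback to weaker security) can be sketched in code. This is a hypothetical illustration only: the names (`ProtocolStack`, `satisfies`, `may_race`) and the Preference levels are loosely modeled on draft-ietf-taps-interface conventions and are not part of any real Transport Services implementation.

```python
# Hypothetical sketch of the stack-selection rules described above.
# A stack is a candidate only if it meets every REQUIREd property and
# violates no PROHIBITed one; two candidates may be raced only if their
# security protocols are identical (no silent fallback to weaker security).
from dataclasses import dataclass
from enum import Enum
from typing import Dict, FrozenSet, Optional

class Preference(Enum):
    REQUIRE = "require"
    PROHIBIT = "prohibit"
    NO_PREFERENCE = "no-preference"

@dataclass(frozen=True)
class ProtocolStack:
    name: str
    features: FrozenSet[str]   # transport features the stack offers
    security: Optional[str]    # e.g. "TLS1.3"; None means cleartext

def satisfies(stack: ProtocolStack, props: Dict[str, Preference]) -> bool:
    """Rule out stacks that miss a required feature or offer a prohibited one."""
    for feature, pref in props.items():
        if pref is Preference.REQUIRE and feature not in stack.features:
            return False
        if pref is Preference.PROHIBIT and feature in stack.features:
            return False
    return True

def may_race(a: ProtocolStack, b: ProtocolStack,
             props: Dict[str, Preference]) -> bool:
    """Stacks are 'equivalent' for racing only if both satisfy the
    application's properties and use identical security protocols."""
    return satisfies(a, props) and satisfies(b, props) and a.security == b.security

tls_tcp = ProtocolStack("TLS/TCP", frozenset({"reliability"}), "TLS1.3")
quic = ProtocolStack("QUIC",
                     frozenset({"reliability", "preserve-msg-boundaries"}),
                     "TLS1.3")
udp = ProtocolStack("UDP", frozenset({"preserve-msg-boundaries"}), None)

props = {"reliability": Preference.REQUIRE}
candidates = [s.name for s in (tls_tcp, quic, udp) if satisfies(s, props)]
print(candidates)                      # UDP excluded: no reliability
print(may_race(tls_tcp, quic, props))  # identical security -> raceable
print(may_race(tls_tcp, udp, props))   # UDP fails requirements -> not raceable
```

Note how the cleartext UDP stack is never allowed to substitute for the secured candidates: it fails the required-property check, and even a stack that did satisfy the properties could not be raced against a TLS stack because the security protocols differ.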
{"id": "q-en-api-drafts-9fd5f8d814bc86b7acb046aa22650a0fa9ef00bb860d881bca191d6d06379dd7", "old_text": "4.2.3. The Transport Services architecture defines a mechanism that allows applications to easily make use of various network paths and Protocol Stacks without requiring major changes in application logic. In some cases, changing which Protocol Stacks or network paths are used will require updating the preferences expressed by the application that uses the Transport Services system. For example, an application can enable the use of a multipath or multistreaming transport protocol by modifying the properties in its Pre-Connection configuration. In some cases, however, the Transport Services system will be able to automatically change Protocol Stacks without an update to the application, either by selecting a new stack entirely, or by racing multiple candidate Protocol Stacks during connection establishment. This functionality in the API can be a powerful driver of new protocol adoption, but needs to be constrained carefully to avoid unexpected behavior that can lead to functional or security problems. If two different Protocol Stacks can be safely swapped, or raced in parallel (see racing), then they are considered to be \"equivalent\". Equivalent Protocol Stacks need to meet the following criteria: Both stacks MUST offer the interface requested by the application for connection establishment and data transmission. For example, if an application requires preservation of message boundaries, a Protocol Stack that runs UDP as the top-level interface to the application is not equivalent to a Protocol Stack that runs TCP as the top-level interface. A UDP stack would allow an application to read out message boundaries based on datagrams sent from the remote system, whereas TCP does not preserve message boundaries on its own, but needs a framing protocol on top to determine message boundaries. 
Both stacks MUST offer the transport services that are requested by the application. For example, if an application specifies that it requires reliable transmission of data, then a Protocol Stack using UDP without any reliability layer on top would not be allowed to replace a Protocol Stack using TCP. However, if the application does not require reliability, then a Protocol Stack that adds reliability could be regarded as an equivalent Protocol Stack as long as providing this would not conflict with any other application-requested properties. Both stacks MUST offer security protocols and parameters as requested by the application I-D.ietf-taps-transport-security. Security features and properties, such as cryptographic algorithms, peer authentication, and identity privacy vary across security protocols, and across versions of security protocols. Protocol equivalence ought not to be assumed for different protocols or protocol versions, even if they offer similar application configuration options. To ensure that security protocols are not incorrectly swapped, Transport Services systems SHOULD only automatically generate equivalent Protocol Stacks when the transport security protocols within the stacks are identical. Specifically, a Transport Services system would consider protocols identical only if they are of the same type and version. For example, the same version of TLS running over two different transport Protocol Stacks are considered equivalent, whereas TLS 1.2 and TLS 1.3 RFC8446 are not considered equivalent. However, Transport Services systems MAY allow applications to indicate that they consider two different transport protocols equivalent, e.g., to allow fallback to TLS 1.2 if TLS 1.3 is not available. 4.2.4. By default, stored properties of the implementation, such as cached protocol state, cached path state, and heuristics, may be shared (e.g. across multiple connections in an application). 
This provides", "comments": "This proposal consolidates normative language into the section on design principles (now requirements), using the suggestions in .\nPlease review the latest updates!", "new_text": "4.2.3. By default, stored properties of the implementation, such as cached protocol state, cached path state, and heuristics, may be shared (e.g. across multiple connections in an application). This provides"}
{"id": "q-en-api-drafts-9fd5f8d814bc86b7acb046aa22650a0fa9ef00bb860d881bca191d6d06379dd7", "old_text": "D.ietf-taps-transport-security. As described above in equivalence, if a Transport Services system races between two different Protocol Stacks, both SHOULD use the same security protocols and options. However, a Transport Services system MAY race different security protocols, e.g., if the application explicitly specifies that it considers them equivalent. Applications need to ensure that they use security APIs appropriately. In cases where applications use an interface to", "comments": "This proposal consolidates normative language into the section on design principles (now requirements), using the suggestions in .\nPlease review the latest updates!\nEarly art-art review of draft-ietf-taps-architecture My first observation and request is that the group rethink the separation of the architecture and interface documents. It is a warning-sign that the interface document is a normative reference for the architecture document. As the documents are currently constructed, a new reader cannot grasp the architecture without inferring some of it through reading the interface. Consider reformulating the architecture document as an overview - introduce principles, concepts, and terminology, but don't attempt to be normative. (There are signs that this may already be, or have been, a path the document was being pushed towards). If nothing else, rename it, and acknowledge that the actual architecture is specified using the interface definition. Note that the first sentence of the Overview in the interface document also highlights the current structural problem.\nI don't entirely agree with this ART-ART comment: I think I might well agree with moving more of the description to this Arch document, but as discussed before, I also think it useful here to set the actual requirements for the architecture, and to define the architecture. 
It may need to be much clearer that this architecture consists of more the API.\nFor completeness, this is the first statement of the interface document review: This document reads fairly well. I have some structural concerns (see the early review of the architecture document). In particular I think this document contains the actual definition of the architecture and some of that is inferential.\nDiscussion at September 2021 interim: pass through this document with a view toward making it a high level introduction, not an architecture definition: \"An Introduction to the Transport Services Architecture\". remove normative language from this document (moving it to API, where necessary). move details not useful in a high level intro to API.\nI disagree with this conclusion at first pass\u2014we did a specific analysis and pass on this over a year ago, with PR . The normative language and the standards-track aspect was very intentional and something the WG had consensus on. The point is that the API is a specific incarnation of the TAPS architecture, but there are overarching architectural properties that go beyond a specific abstract API.\nI completely agree with Tommy here, and recommend we fix this in . We should also of course check for any stray text or missing concepts. I think it would also be useful to somehow better explain that in figure 3 the API function is ONE part of the system.\nSee linked PR first please to see if this resolves this?\nA PR was merged that addressed important text issues with separation of requirements and overview, and should more clearly set out that the requirements are for the system. The question of whether the overall TAPS systems requirements belong in this ID; the API; or a separate RFC needs some discussion based on the review, before this can be closed.\nMany good text improvements in the PR NAME but I noticed two smaller updates in the PR that I think were incorrect. 
Fixed in\nThe review comment showed were many places where the wording of architecture can be improved. There are some high level principles that I think should remain in the architecture document. The decision in today's interim is to review the architecture to get this more into a shape that is valuable as a reference, and then check the updated draft.\nI now suggest the issues are addressed and we close this issue without moving text between documents\nI agree with that; closing.\nOverall review: I think we need to keep the normative language for the key features that differentiate the API. I'd really like to use REQUIRED, RECOMMENDED, rather MUST and SHOULD - because these are not protocol operation requirements, but are implementation requirements to conform. This includes requirements based on Minset - where the WG dervied this API: This is after all why the WG worked on Minset as the basis of the API. I suggest we group all the requirements on architecture as short one or two sentence blocks (preferably as a bullet or definition list) collected into a single subsection, so they are easy to find and read and we say that these are the key architectural requirements. We insert this new requirements text either their own section, or in section 1.3 (although I am open to other places where we can call-out the architectural requirements). The following are key requirements, that define the TAPS architecture: It is RECOMMENDED that functionality that is common across multiple transport protocols is made accessible through a unified set of TAPS interface calls. As a baseline, a Transport Services system interface is REQUIRED to allow access to the minimal set of features offered by transport protocols {{?I-D.ietf-taps-minset}}. A Transport Services system automates the selection of protocols and features and network, based on a set of Properties supplied through the TAPS interface. 
It is RECOMMENDED that a Transport Services system offers Properties that are common to multiple transport protocols. Specifying common Properties enables a system to appropriately select between protocols that offer equivalent features. This design permits evolution of the transport protocols and functions without affecting the programs using the system. Specifying common Properties enables a Transport Services system to select an appropriate network interface. It is RECOMMENDED that a system offers Properties that are common to a variety of network layer interfaces and paths. This design permits racing of different network paths without affecting the programs using the system. Applications using the Transport Services system interface are REQUIRED to be robust to the automated selection provided by the system, including the possibility to express requirements and preferences that constrain the choices that can be made by the system. If two different Protocol Stacks can be safely swapped, or raced in parallel (see Section 4.2.2), by the Transport Services system then they are considered to be \"equivalent\". Equivalent Protocol Stacks need to meet the requirements for racing specified in section XX. Applications using a Transport Services system MAY explicitly require or prevent the use of specific transport features (and transport protocols) and XXXinterfaces using the \"REQUIRED\" and \"PROHIBIT\" preferences. This allows an application to constrain the set of protocols and features that will be selected for the transport service. It is RECOMMENDED that the default usage by applications does not necessarily restrict this selection, so that the system can make informed choices (e.g., based on the availability of interfaces or set of transport protocols available for the specified remote endpoint). When a Transport Services system races between two different Protocol Stacks, both SHOULD use the same security protocols and options. 
However, a Transport Services system MAY race different security protocols, e.g., if the application explicitly specifies that it considers them equivalent. Transport Services systems MUST NOT automatically fall back from secure protocols to insecure protocols, or to weaker versions of secure protocols. A Transport Services system MAY allow applications to specify that fallback to a specific other version of a protocol is allowed. It is RECOMMENDED that in normal use, a Transport Services system is designed so that all selection decisions result in consistent choices for the same network conditions when they have the same set of preferences. This is intended to provide predictable outcomes to the application using the transport service. I would then suggest using lower case within other parts of the document - except where pointed to from here (we could refer to the requirements subsection, if we thought necessary).\nfor discussion at the interim, but IMO we could start the interim with a PR based on these suggestions and work from there.\nI suspect that we need more protocol-specific properties to be able to talk to various kinds of native protocol peers (other than TCP and UDP: these should already work fine with what we have, I believe). A concrete example: we're hiding stream numbers. In NEAT, we needed to make them configurable though, because we wanted a NEAT client to be able to talk to a native SCTP server, where an application may expect a certain type of data to arrive via a certain stream number. I suspect that we may need something similar for a QUIC peer. Should we address this? (I'd say yes) How ... is it a protocol-specific (per-Message) property? (I'd say yes... and perhaps a per-Connection property too, with text explaining that the per-Message one automatically becomes the Connection default unless otherwise specified) What other such things are we missing... what other expectations may e.g. 
a native QUIC or SCTP application have that we just cannot fulfil with our current system?", "new_text": "D.ietf-taps-transport-security. As described above in equivalence, if a Transport Services system races between two different Protocol Stacks, both need to use the same security protocols and options. However, a Transport Services system can race different security protocols, e.g., if the application explicitly specifies that it considers them equivalent. Applications need to ensure that they use security APIs appropriately. In cases where applications use an interface to"}
{"id": "q-en-api-drafts-9fd5f8d814bc86b7acb046aa22650a0fa9ef00bb860d881bca191d6d06379dd7", "old_text": "QUIC, and applications ought not to use private keys intended for server authentication as keys for client authentication. Moreover, Transport Services systems MUST NOT automatically fall back from secure protocols to insecure protocols, or to weaker versions of secure protocols. For example, if an application requests a specific version of TLS, but the desired version of TLS is not available, its connection will fail. Applications are thus responsible for implementing security protocol fallback or version fallback by creating multiple Transport Services Connections, if so desired. Alternatively, a Transport Services system MAY allow applications to specify that fallback to a specific other version of a protocol is allowed. ", "comments": "This proposal consolidates normative language into the section on design principles (now requirements), using the suggestions in .\nPlease review the latest updates!\nEarly art-art review of draft-ietf-taps-architecture My first observation and request is that the group rethink the separation of the architecture and interface documents. It is a warning-sign that the interface document is a normative reference for the architecture document. As the documents are currently constructed, a new reader cannot grasp the architecture without inferring some of it through reading the interface. Consider reformulating the architecture document as an overview - introduce principles, concepts, and terminology, but don't attempt to be normative. (There are signs that this may already be, or have been, a path the document was being pushed towards). If nothing else, rename it, and acknowledge that the actual architecture is specified using the interface definition. 
Note that the first sentence of the Overview in the interface document also highlights the current structural problem.\nI don't entirely agree with this ART-ART comment: I think I might well agree with moving more of the description to this Arch document, but as discussed before, I also think it useful here to set the actual requirements for the architecture, and to define the architecture. It may need to be much clearer that this architecture consists of more than the API.\nFor completeness, this is the first statement of the interface document review: This document reads fairly well. I have some structural concerns (see the early review of the architecture document). In particular I think this document contains the actual definition of the architecture and some of that is inferential.\nDiscussion at September 2021 interim: pass through this document with a view toward making it a high level introduction, not an architecture definition: \"An Introduction to the Transport Services Architecture\". remove normative language from this document (moving it to API, where necessary). move details not useful in a high level intro to API.\nI disagree with this conclusion at first pass\u2014we did a specific analysis and pass on this over a year ago, with PR . The normative language and the standards-track aspect were very intentional and something the WG had consensus on. The point is that the API is a specific incarnation of the TAPS architecture, but there are overarching architectural properties that go beyond a specific abstract API.\nI completely agree with Tommy here, and recommend we fix this in . We should also of course check for any stray text or missing concepts. 
I think it would also be useful to somehow better explain that in figure 3 the API function is ONE part of the system.\nSee linked PR first please to see if this resolves this?\nA PR was merged that addressed important text issues with separation of requirements and overview, and should more clearly set out that the requirements are for the system. The question of whether the overall TAPS systems requirements belong in this ID; the API; or a separate RFC needs some discussion based on the review, before this can be closed.\nMany good text improvements in the PR NAME but I noticed two smaller updates in the PR that I think were incorrect. Fixed in\nThe review comment showed there were many places where the wording of the architecture can be improved. There are some high level principles that I think should remain in the architecture document. The decision in today's interim is to review the architecture to get this more into a shape that is valuable as a reference, and then check the updated draft.\nI now suggest the issues are addressed and we close this issue without moving text between documents\nI agree with that; closing.\nOverall review: I think we need to keep the normative language for the key features that differentiate the API. I'd really like to use REQUIRED, RECOMMENDED, rather than MUST and SHOULD - because these are not protocol operation requirements, but are implementation requirements to conform. This includes requirements based on Minset - where the WG derived this API: This is after all why the WG worked on Minset as the basis of the API. I suggest we group all the requirements on architecture as short one or two sentence blocks (preferably as a bullet or definition list) collected into a single subsection, so they are easy to find and read and we say that these are the key architectural requirements. 
We insert this new requirements text either in their own section, or in section 1.3 (although I am open to other places where we can call-out the architectural requirements). The following are key requirements that define the TAPS architecture: It is RECOMMENDED that functionality that is common across multiple transport protocols is made accessible through a unified set of TAPS interface calls. As a baseline, a Transport Services system interface is REQUIRED to allow access to the minimal set of features offered by transport protocols {{?I-D.ietf-taps-minset}}. A Transport Services system automates the selection of protocols, features, and networks, based on a set of Properties supplied through the TAPS interface. It is RECOMMENDED that a Transport Services system offers Properties that are common to multiple transport protocols. Specifying common Properties enables a system to appropriately select between protocols that offer equivalent features. This design permits evolution of the transport protocols and functions without affecting the programs using the system. Specifying common Properties enables a Transport Services system to select an appropriate network interface. It is RECOMMENDED that a system offers Properties that are common to a variety of network layer interfaces and paths. This design permits racing of different network paths without affecting the programs using the system. Applications using the Transport Services system interface are REQUIRED to be robust to the automated selection provided by the system, including the possibility to express requirements and preferences that constrain the choices that can be made by the system. If two different Protocol Stacks can be safely swapped, or raced in parallel (see Section 4.2.2), by the Transport Services system then they are considered to be \"equivalent\". Equivalent Protocol Stacks need to meet the requirements for racing specified in section XX. 
Applications using a Transport Services system MAY explicitly require or prevent the use of specific transport features (and transport protocols) and XXXinterfaces using the \"REQUIRED\" and \"PROHIBIT\" preferences. This allows an application to constrain the set of protocols and features that will be selected for the transport service. It is RECOMMENDED that the default usage by applications does not necessarily restrict this selection, so that the system can make informed choices (e.g., based on the availability of interfaces or set of transport protocols available for the specified remote endpoint). When a Transport Services system races between two different Protocol Stacks, both SHOULD use the same security protocols and options. However, a Transport Services system MAY race different security protocols, e.g., if the application explicitly specifies that it considers them equivalent. Transport Services systems MUST NOT automatically fall back from secure protocols to insecure protocols, or to weaker versions of secure protocols. A Transport Services system MAY allow applications to specify that fallback to a specific other version of a protocol is allowed. It is RECOMMENDED that in normal use, a Transport Services system is designed so that all selection decisions result in consistent choices for the same network conditions when they have the same set of preferences. This is intended to provide predictable outcomes to the application using the transport service. 
I would then suggest using lower case within other parts of the document - except where pointed to from here (we could refer to the requirements subsection, if we thought necessary).\nfor discussion at the interim, but IMO we could start the interim with a PR based on these suggestions and work from there.\nI suspect that we need more protocol-specific properties to be able to talk to various kinds of native protocol peers (other than TCP and UDP: these should already work fine with what we have, I believe). A concrete example: we're hiding stream numbers. In NEAT, we needed to make them configurable though, because we wanted a NEAT client to be able to talk to a native SCTP server, where an application may expect a certain type of data to arrive via a certain stream number. I suspect that we may need something similar for a QUIC peer. Should we address this? (I'd say yes) How ... is it a protocol-specific (per-Message) property? (I'd say yes... and perhaps a per-Connection property too, with text explaining that the per-Message one automatically becomes the Connection default unless otherwise specified) What other such things are we missing... what other expectations may e.g. a native QUIC or SCTP application have that we just cannot fulfil with our current system?", "new_text": "QUIC, and applications ought not to use private keys intended for server authentication as keys for client authentication. Moreover, Transport Services systems must not automatically fall back from secure protocols to insecure protocols, or to weaker versions of secure protocols (see equivalence). For example, if an application requests a specific version of TLS, but the desired version of TLS is not available, its connection will fail. Applications are thus responsible for implementing security protocol fallback or version fallback by creating multiple Transport Services Connections, if so desired. 
Alternatively, a Transport Services system MAY allow applications to specify that fallback to a specific other version of a protocol is allowed. "}
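The no-automatic-fallback rule in the record above can be illustrated with a small sketch. All names here (`select_security_protocol`, `EstablishmentError`, the `AVAILABLE` set) are hypothetical illustrations, not a real TAPS API: selection fails rather than silently downgrading, unless the application explicitly opted in to a specific alternate version.

```python
# Sketch of "no automatic fallback to weaker security" (assumed names,
# not an actual Transport Services implementation).

AVAILABLE = {"TLS1.3", "TLS1.2"}  # assumed stack support


class EstablishmentError(Exception):
    """Raised when the requested security protocol cannot be satisfied."""


def select_security_protocol(requested, explicitly_allowed=()):
    """Return the requested protocol version, or one the application
    explicitly allowed as a fallback. Never downgrade silently."""
    if requested in AVAILABLE:
        return requested
    for alt in explicitly_allowed:  # the app opted in to these versions
        if alt in AVAILABLE:
            return alt
    # No automatic fallback to insecure/weaker versions: fail instead.
    raise EstablishmentError(f"{requested} unavailable; no fallback allowed")
```

Requesting an unavailable version without an explicit allow-list raises, leaving fallback decisions (e.g., creating a second Connection) to the application.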
{"id": "q-en-api-drafts-7f04ed21c99e66692b18386351b36800bb3241cee4cb5d1176292a48f7a1ee9b", "old_text": "Transport Properties MUST always be specified while security parameters are OPTIONAL. Message Framers (see framing), if required, should be added to the Preconnection during pre-establishment. 5.1.", "comments": "In the API doc sec 5 says: \"Message Framers (see Section 10), if required, should be added to the Preconnection during pre-establishment.\" Should this be SHOULD or even MUST? If SHOULD is correct I think we need to say slightly more about when it makes sense to add a framer later.\nI think it should be \"can be added...\"; not normative, just descriptive of fact.\nCan we start with the \"If\" perhaps... \"If Message Framers are to be used, they MUST be added to the Preconnection during pre-establishment.\"?", "new_text": "Transport Properties MUST always be specified while security parameters are OPTIONAL. If Message Framers are used (see framing), they MUST be added to the Preconnection during pre-establishment. 5.1."}
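The MUST agreed in the record above (framers may only be added during pre-establishment) could be enforced along these lines. The class and method names are a hypothetical toy, not the abstract API itself:

```python
# Toy illustration of "If Message Framers are used, they MUST be added
# to the Preconnection during pre-establishment". Names are assumed.

class Preconnection:
    def __init__(self):
        self.framers = []
        self.established = False

    def add_framer(self, framer):
        if self.established:
            # Too late: the framer stack is fixed once establishment starts.
            raise RuntimeError("framers can only be added pre-establishment")
        self.framers.append(framer)

    def initiate(self):
        self.established = True
        return self  # stands in for the resulting Connection object
```

A call such as `pc.add_framer("http-framer")` succeeds before `initiate()` and raises afterwards, matching the normative text.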
{"id": "q-en-api-drafts-093e1972f003e145651fcd682bd41f51a410f41edffdab70f35398a0a688d91e", "old_text": "4.2.1. Immediate, or simultaneous, racing should be avoided by implementations. This approach can consume network resources and establish state that will not be used. 4.2.2.", "comments": "I actually really like: \"Simultaneous\" and \"Staggered\" as descriptive terms that avoid the question of \"delaying\".\nOpening this to follow up on a comment by NAME on that is now closed. \"Simultaneous\" and \"Staggered\" would be better names than \"Immediate\" and \"Delayed\". Just from these terms, I would normally think that \"Immediate\" means to start racing right away, whereas \"Delayed\" means that nothing would happen for some time. So these two terms are slightly misleading in my opinion.)\nI agree this would be an improvement.\nIn Section 4.2.1., I didn't really understand what \"immediate\" means from the text. I think it just needs one sentence.\nlgtm (side note - nothing blocking, this is just an idea: to me, \"Simultaneous\" and \"Staggered\" would be better names than \"Immediate\" and \"Delayed\". Just from these terms, I would normally think that \"Immediate\" means to start racing right away, whereas \"Delayed\" means that nothing would happen for some time. So these two terms are slightly misleading in my opinion.)Looks good, thanks! I will close this and open a separate issue for the naming.", "new_text": "4.2.1. Immediate racing is when multiple alternate branches are started without waiting for any one branch to make progress before starting the next alternative. This means the attempts are effectively simultaneous. Immediate racing should be avoided by implementations, since it consumes extra network resources and establishes state that might not be used. 4.2.2."}
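The immediate-versus-staggered distinction in the record above can be sketched as a start-time schedule: immediate racing starts every branch at offset 0, while a positive stagger spaces attempts so an earlier branch can win before extra network state is created. The 250 ms value below is purely illustrative (in the spirit of Happy Eyeballs delays), not mandated by the draft.

```python
def racing_schedule(candidates, stagger_ms=0):
    """Return (candidate, start_offset_ms) pairs for racing branches.

    stagger_ms=0 models immediate (simultaneous) racing: all branches
    start at once, consuming extra network resources. A positive value
    models staggered racing, delaying each later branch."""
    return [(c, i * stagger_ms) for i, c in enumerate(candidates)]
```

For example, `racing_schedule(["v6", "v4"], 250)` starts the IPv6 branch immediately and the IPv4 branch 250 ms later.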
{"id": "q-en-api-drafts-db8e680be01ef465b7303217c9a5697ec6e9b089fba59e7139e79197e95a852a", "old_text": "The Rendezvous() Action causes the Preconnection to listen on the Local Endpoint for an incoming Connection from the Remote Endpoint, while simultaneously trying to establish a Connection from the Local Endpoint to the Remote Endpoint. This corresponds to a TCP simultaneous open, for example. The Rendezvous() Action returns a Connection object. Once Rendezvous() has been called, any changes to the Preconnection MUST", "comments": "NAME 7df2d9c should address this\nTo we need multiple local and remote candidates. Should this be represented as either: 1) taking two lists of objects; or 2) as two objects each with multiple addresses, ports, etc. I suggest option 1, since this makes it easier to resolve a to multiple preconnections, for NAT binding discovery. Relates to but with also different types of endpoint.\nI like option 1 because it contains complexity.\nSee this email thread: URL Copy + pasting a relevant part from an email from NAME To me at least, it wasn't clear from the text that rendezvous() would kick off ICE-style probing. It's kind of obvious (when else would it happen?), but I think it would be good for the draft to spell this out.\nmodulo the do-we-want-to-bind-tightly-to-ICE question... Dear all, I know way too little about rendezvous mechanisms, so I may get this completely wrong - please bear with me! It may be that I describe an issue that\u2019s a non-issue. But, if I do understand this right, maybe that\u2019s something which needs fixing? Ok, here it goes\u2026 Our current Rendezvous text in the API draft says the following: \"The Rendezvous() Action causes the Preconnection to listen on the Local Endpoint for an incoming Connection from the Remote Endpoint, while simultaneously trying to establish a Connection from the Local Endpoint to the Remote Endpoint. 
This corresponds to a TCP simultaneous open, for example.\u201d Now, if an application chooses unreliable data transmission, it may get SCTP with partial reliability, or it may (in the future) get QUIC with datagrams, or whatever\u2026 and then, the protocol may be able to do what\u2019s needed (something similar to a simultaneous open). However, in case of UDP: Listening is quite okay, \u201csimultaneously trying to establish a Connection\u201d really means nothing - no packet will be sent. I believe that applications doing this over UDP will implement their own \u201cSYN\u201d-type signal for this - but then it\u2019s up to the application, which, over TAPS, would need to: create a passive connection, send a Message from that same connection, i.e. using the same local address and port (I believe?) - which is a matter of configuring the local endpoint, and doable if the underlying transport system can pull it off. I think there\u2019s nothing in our API preventing an application from doing this. If all of the above is correct, the problem that I see is that an application may ask for unreliable transmission, and then have one case in which calling \u201cRendezvous\u201d is the right thing to do, and another case in which it is not - so it would need to be able to distinguish between these two cases: is it a protocol with Rendezvous-capability, or do I have to do it myself. Maybe, in case of UDP, the call to Rendezvous should just fail, and our text should say that Rendezvous failing would inform the application that it would have to implement it by itself, by issuing Listen and then sending a Message to the peer. Thoughts? Am I completely off track, or is this something that needs a text fix? Cheers, Michael", "new_text": "The Rendezvous() Action causes the Preconnection to listen on the Local Endpoint for an incoming Connection from the Remote Endpoint, while also simultaneously trying to establish a Connection from the Local Endpoint to the Remote Endpoint. 
If there are multiple Local Endpoints or Remote Endpoints configured, then initiating a rendezvous action will systematically probe the reachability of those endpoints following an approach such as that used in Interactive Connectivity Establishment (ICE) RFC5245. If the endpoints are suspected to be behind a NAT, Rendezvous() can be initiated using Local and Remote Endpoints that support a method of discovering NAT bindings such as Session Traversal Utilities for NAT (STUN) RFC8489 or Traversal Using Relays around NAT (TURN) RFC5766. In this case, the Local Endpoint will resolve to a mixture of local and server reflexive addresses. The Resolve() action on the Preconnection can be used to discover these bindings: The Resolve() call returns a list of Preconnection Objects, that represent the concrete addresses, local and server reflexive, on which a Rendezvous() for the Preconnection will listen for incoming Connections. These resolved Preconnections will share all other Properties with the Preconnection from which they are derived, though some Properties may be made more-specific by the resolution process. An application that uses Rendezvous() to establish a peer-to-peer connection in the presence of NATs will configure the Preconnection object with a Local Endpoint that supports NAT binding discovery. It will then Resolve() on that endpoint, and pass the resulting list of candidate local addresses to the peer via a signalling protocol, for example as part of an ICE RFC5245 exchange within SIP RFC3261 or WebRTC RFC7478. The peer will, via the same signalling channel, return the remote endpoint candidates. These remote endpoint candidates are then configured on the Preconnection, allowing the Rendezvous() Action to be initiated. The Rendezvous() Action returns a Connection object. Once Rendezvous() has been called, any changes to the Preconnection MUST"}
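The systematic probing of local and remote candidates described in the new text above can be sketched as ICE-style checklist formation. The pair-priority expression follows RFC 5245, Section 5.7.2; the helper names and the `(name, priority)` candidate tuples are assumptions for illustration:

```python
def pair_priority(g, d):
    """Candidate-pair priority per RFC 5245, Section 5.7.2: g is the
    controlling agent's candidate priority, d the controlled agent's."""
    return (2 ** 32) * min(g, d) + 2 * max(g, d) + (1 if g > d else 0)


def checklist(local, remote):
    """Form the local x remote candidate pairs and order the checklist
    by decreasing pair priority, as a Rendezvous() probe sequence would.
    Candidates are hypothetical (name, priority) tuples."""
    pairs = [(l, r) for l in local for r in remote]
    return sorted(pairs,
                  key=lambda p: pair_priority(p[0][1], p[1][1]),
                  reverse=True)
```

With a host and a server-reflexive local candidate against a single remote host candidate, the host-host pair is probed first.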
{"id": "q-en-api-drafts-0c401721adf38c6cdf546f7119302b5d6e6716dbb8fd864eb73337b2c513b8df", "old_text": "other, and become part of a Connection Group. Calling Clone on any of these Connections adds another Connection to the Connection Group, and so on. \"Entangled\" Connections share all Connection Properties except \"Priority (Connection)\", see conn-priority. Like all other Properties, Priority is copied to the new Connection when calling Clone(), but it is not entangled: Changing Priority on one Connection does not change it on the other Connections in the same Connection Group. It is also possible to check which Connections belong to the same Connection Group. Calling GroupedConnections() on a specific", "comments": "(well, the priority framework is still confusing, this PR makes it clear that that is intentional, and why)\nMartin (I think) said that he'd check this offline after the Sep 11 interim.\nAt 8.1.2 you say lower values have higher priorities for connPrio. But later, at msgPrio, higher values have higher priority, and in the examples where you speak of the interaction of connPrio and msgPrio, you imply that higher means higher for both properties. Please check for consistency and clarify.\nLower values having higher priorities for connPrio is a remnant that we must have missed when we flipped this, e.g. PR which I don't know... too many iterations have mangled the text.\nAha, now I see what happened: following some more PRs, also , in PR , NAME re-introduced some text saying that sends on Connections with lower-priority values will be prioritized over sends on Connections with a higher-priority value. This reverses the logic for some part of the text - I suspect that NAME (very understandably!) forgot what happened and this created this mix. We have been editing these documents for way too long!\nI'm not sure the priority framework really fits together. 
Sec 6.4 says we 'should' do something that sounds like WFQ, but 7.1.5 allows the application to decide what happens. Meanwhile message priority suggests a strict priority-based scheduler. Meanwhile, in sec 8.1.3.2 \"Note that this property is not a per-message override of the connection Priority - see Section 7.1.3. Both Priority properties may interact, but can be used independently and be realized by different mechanisms.\" At the risk of overindexing on a partly-baked draft in httpbis, how would I implement strict priority of HTTP/3 streams over a taps API that implements QUIC? Would I need to request strict priority in both connection priority and message priority? Or I guess connection priority would be sufficient?\nWe seem comfortable with the mechanisms in the API, but we should have more text connecting the various sections explaining priority.\nMy comment concerned \"7.1.3. Priority (Connection)\", my thought was this just influences the sender-side to control queuing within a connection group. If so, is it worth calling this property \"Group Priority\" and explaining that this is sender-side?\nI find the priority framework perhaps overspecified at the connection level and underspecified at the message level, with the interaction between the two left explicitly unstated. I would almost rather see the priority type and meaning left completely implementation-specific; otherwise, my suspicion is that someone will write an application using numeric priorities, declare victory when it works fine on one platform, and not understand that the behavior is not universal. In general, I think if TAPS is going to produce a consistent, easily-understandable, and portable API, it needs to stick to easily-understood abstractions. 
Message-level priority is sort of like the TCP URG mechanism (though better specified and more useful, for sure), but I'm thinking urgent messages sent explicitly out-of-band (e.g., new Connection) with implementation-specific QoS will produce more consistent, predictable, and understandable outcomes in the pessimal case.\nOk for me (and thanks for doing this!)Some nits, but I believe this addresses my concerns in the issue.", "new_text": "other, and become part of a Connection Group. Calling Clone on any of these Connections adds another Connection to the Connection Group, and so on. \"Entangled\" Connections share all Connection Properties except \"Connection Priority\", see conn-priority. Like all other Properties, Connection Priority is copied to the new Connection when calling Clone(), but it is not entangled: Changing Connection Priority on one Connection does not change it on the other Connections in the same Connection Group. It is also possible to check which Connections belong to the same Connection Group. Calling GroupedConnections() on a specific"}
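The entanglement rule in the new text of the record above (Clone shares all Connection Properties except Connection Priority) can be modeled with a shared-state toy. The class shape and the default priority of 100 are assumptions, not the abstract API:

```python
class Connection:
    """Toy model of Connection Groups: clone() shares ("entangles") all
    properties via a common dict, but Connection Priority is copied and
    thereafter independent per Connection. Hypothetical names."""

    def __init__(self, props=None, priority=100, group=None):
        self.props = props if props is not None else {}  # shared in a group
        self.priority = priority                         # per-Connection
        self.group = group if group is not None else [self]

    def clone(self):
        # Shared props dict entangles properties; priority is copied.
        c = Connection(self.props, self.priority, self.group)
        self.group.append(c)
        return c

    def grouped_connections(self):
        return list(self.group)
```

Changing a shared property on one Connection is visible on its clones, while changing the clone's priority leaves the original untouched.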
{"id": "q-en-api-drafts-0c401721adf38c6cdf546f7119302b5d6e6716dbb8fd864eb73337b2c513b8df", "old_text": "Attempts to clone a Connection can result in a CloneError: The Connection Property \"Priority\" operates on entangled Connections as in msg-priority: when allocating available network capacity among Connections in a Connection Group, sends on Connections with higher Priority values will be prioritized over sends on Connections with lower Priority values. A transport system implementation should, if possible, assign each Connection the capacity share (M-N) x C / M, where N is the Connection's Priority value, M is the maximum Priority value used by all Connections in the group and C is the total available capacity. However, this Priority setting is purely advisory, and provides no guarantees about the way capacity is shared. Each implementation is free to implement a way to share capacity that it sees fit. 7.", "comments": "(well, the priority framework is still confusing, this PR makes it clear that that is intentional, and why)\nMartin (I think) said that he'd check this offline after the Sep 11 interim.\nAt 8.1.2 you say lower values have higher priorities for connPrio. But later, at msgPrio, higher values have higher priority, and in the examples where you speak of the interaction of connPrio and msgPrio, you imply that higher means higher for both properties. Please check for consistency and clarify.\nLower values having higher priorities for connPrio is a remnant that we must have missed when we flipped this, e.g. PR which I don't know... too many iterations have mangled the text.\nAha, now I see what happened: following some more PRs, also , in PR , NAME re-introduced some text saying that sends on Connections with lower-priority values will be prioritized over sends on Connections with a higher-priority value. This reverses the logic for some part of the text - I suspect that NAME (very understandably!) forgot what happened and this created this mix. 
We have been editing these documents for way too long!\nI'm not sure the priority framework really fits together. Sec 6.4 says we 'should' do something that sounds like WFQ, but 7.1.5 allows the application to decide what happens. Meanwhile message priority suggests a strict priority-based scheduler. Meanwhile, in sec 8.1.3.2 \"Note that this property is not a per-message override of the connection Priority - see Section 7.1.3. Both Priority properties may interact, but can be used independently and be realized by different mechanisms.\" At the risk of overindexing on a partly-baked draft in httpbis, how would I implement strict priority of HTTP/3 streams over a taps API that implements QUIC? Would I need to request strict priority in both connection priority and message priority? Or I guess connection priority would be sufficient?\nWe seem comfortable with the mechanisms in the API, but we should have more text connecting the various sections explaining priority.\nMy comment concerned \"7.1.3. Priority (Connection)\", my thought was this just influences the sender-side to control queuing within a connection group. If so, is it worth calling this property \"Group Priority\" and explaining that this is sender-side?\nI find the priority framework perhaps overspecified at the connection level and underspecified at the message level, with the interaction between the two left explicitly unstated. I would almost rather see the priority type and meaning left completely implementation-specific; otherwise, my suspicion is that someone will write an application using numeric priorities, declare victory when it works fine on one platform, and not understand that the behavior is not universal. In general, I think if TAPS is going to produce a consistent, easily-understandable, and portable API, it needs to stick to easily-understood abstractions. 
Message-level priority is sort of like the TCP URG mechanism (though better specified and more useful, for sure), but I'm thinking urgent messages sent explicitly out-of-band (e.g., new Connection) with implementation-specific QoS will produce more consistent, predictable, and understandable outcomes in the pessimal case.\nOk for me (and thanks for doing this!)Some nits, but I believe this addresses my concerns in the issue.", "new_text": "Attempts to clone a Connection can result in a CloneError: The Connection Priority Connection Property operates on entangled Connections using the same approach as in msg-priority: when allocating available network capacity among Connections in a Connection Group, sends on Connections with lower Priority values will be prioritized over sends on Connections with higher Priority values. Capacity will be shared among these Connections according to the Connection Group Transmission Scheduler property (conn-scheduler). See priority-in-taps for more. 7."}
{"id": "q-en-api-drafts-0c401721adf38c6cdf546f7119302b5d6e6716dbb8fd864eb73337b2c513b8df", "old_text": "Group. As noted in groups, this property is not entangled when Connections are cloned, i.e., changing the Priority on one Connection in a Connection Group does not change it on the other Connections in the same Connection Group. 7.1.4.", "comments": "(well, the priority framework is still confusing, this PR makes it clear that that is intentional, and why)\nMartin (I think) said that he'd check this offline after the Sep 11 interim.\nAt 8.1.2 you say lower values have higher priorities for connPrio. But later, at msgPrio, higher values have higher priority, and in the examples where you speak of the interaction of connPrio and msgPrio, you imply that higher means higher for both properties. Please check for consistency and clarify.\nLower values having higher priorities for connPrio is a remnant that we must have missed when we flipped this, e.g. PR which I don't know... too many iterations have mangled the text.\nAha, now I see what happened: following some more PRs, also , in PR , NAME re-introduced some text saying that sends on Connections with lower-priority values will be prioritized over sends on Connections with a higher-priority value. This reverses the logic for some part of the text - I suspect that NAME (very understandably!) forgot what happened and this created this mix. We have been editing these documents for way too long!\nI'm not sure the priority framework really fits together. Sec 6.4 says we 'should' do something that sounds like WFQ, but 7.1.5 allows the application to decide what happens. Meanwhile message priority suggests a strict priority-based scheduler. Meanwhile, in sec 8.1.3.2 \"Note that this property is not a per-message override of the connection Priority - see Section 7.1.3. 
Both Priority properties may interact, but can be used independently and be realized by different mechanisms.\" At the risk of overindexing on a partly-baked draft in httpbis, how would I implement strict priority of HTTP/3 streams over a taps API that implements QUIC? Would I need to request strict priority in both connection priority and message priority? Or I guess connection priority would be sufficient?\nWe seem comfortable with the mechanisms in the API, but we should have more text connecting the various sections explaining priority.\nMy comment concerned \"7.1.3. Priority (Connection)\", my thought was this just influences the sender-side to control queuing within a connection group. If so, if it worth calling this property \"Group Priority\" and explain this is sender-side?\nI find the priority framework perhaps overspecified at the connection level and underspecified at the message level, with the interaction between the two left explicitly unstated. I would almost rather see the priority type and meaning left completely implementation-specific; otherwise, my suspicion is that someone will write an application using numeric priorities, declare victory when it works fine on one platform, and not understand that the behavior is not universal. In general, I think if TAPS is going to produce a consistent, easily-understandable, and portable API, it needs to stick to easily-understood abstractions. Message-level priority is sort of like the TCP URG mechanism (though better specified and more useful, for sure), but I'm thinking urgent messages sent explicitly out-of-band (e.g., new Connection) with implementation-specific QoS will produce more consistent, predictable, and understandable outcomes in the pessimal case.\nOk for me (and thanks for doing this!)Some nits, but I believe this addresses my concerns in the issue.", "new_text": "Group. 
As noted in groups, this property is not entangled when Connections are cloned, i.e., changing the Priority on one Connection in a Connection Group does not change it on the other Connections in the same Connection Group. See priority-in-taps. 7.1.4."}
{"id": "q-en-api-drafts-0c401721adf38c6cdf546f7119302b5d6e6716dbb8fd864eb73337b2c513b8df", "old_text": "prioritization. Note that this property is not a per-message override of the connection Priority - see conn-priority. Both Priority properties may interact, but can be used independently and be realized by different mechanisms. 8.1.3.3.", "comments": "(well, the priority framework is still confusing, this PR makes it clear that that is intentional, and why)\nMartin (I think) said that he'd check this offline after the Sep 11 interim.\nAt 8.1.2 you say lower values have higher priorities for connPrio. But later, at msgPrio, higher values have higher priority, and in the examples where you speak of the interaction of connPrio and msgPrio, you imply that higher means higher for both properties. Please check for consistency and clarify.\nLower values having higher priorities for connPrio is a remnant that we must have missed when we flipped this, e.g. PR which I don't know... too many iterations have mangled the text.\nAha, now I see what happened: following some more PRs, also , in PR , NAME re-introduced some text saying that sends on Connections with lower-priority values will be prioritized over sends on Connections with a higher-priority value. This reverses the logic for some part of the text - I suspect that NAME (very understandably!) forgot what happened and this created this mix. We have been editing these documents for way too long!\nI'm not sure the priority framework really fits together. Sec 6.4 says we 'should' do something that sounds like WFQ, but 7.1.5 allows the application to decide what happens. Meanwhile message priority suggests a strict priority-based scheduler. Meanwhile, in sec 8.1.3.2 \"Note that this property is not a per-message override of the connection Priority - see Section 7.1.3. 
Both Priority properties may interact, but can be used independently and be realized by different mechanisms.\" At the risk of overindexing on a partly-baked draft in httpbis, how would I implement strict priority of HTTP/3 streams over a taps API that implements QUIC? Would I need to request strict priority in both connection priority and message priority? Or I guess connection priority would be sufficient?\nWe seem comfortable with the mechanisms in the API, but we should have more text connecting the various sections explaining priority.\nMy comment concerned \"7.1.3. Priority (Connection)\", my thought was this just influences the sender-side to control queuing within a connection group. If so, if it worth calling this property \"Group Priority\" and explain this is sender-side?\nI find the priority framework perhaps overspecified at the connection level and underspecified at the message level, with the interaction between the two left explicitly unstated. I would almost rather see the priority type and meaning left completely implementation-specific; otherwise, my suspicion is that someone will write an application using numeric priorities, declare victory when it works fine on one platform, and not understand that the behavior is not universal. In general, I think if TAPS is going to produce a consistent, easily-understandable, and portable API, it needs to stick to easily-understood abstractions. Message-level priority is sort of like the TCP URG mechanism (though better specified and more useful, for sure), but I'm thinking urgent messages sent explicitly out-of-band (e.g., new Connection) with implementation-specific QoS will produce more consistent, predictable, and understandable outcomes in the pessimal case.\nOk for me (and thanks for doing this!)Some nits, but I believe this addresses my concerns in the issue.", "new_text": "prioritization. Note that this property is not a per-message override of the connection Priority - see conn-priority. 
The Priority properties may interact, but can be used independently and be realized by different mechanisms; see priority-in-taps. 8.1.3.3."}
{"id": "q-en-api-drafts-0c401721adf38c6cdf546f7119302b5d6e6716dbb8fd864eb73337b2c513b8df", "old_text": "will not result in a SendError separate from the InitiateError signaling the failure of Connection establishment. 8.3. Once a Connection is established, it can be used for receiving data", "comments": "(well, the priority framework is still confusing, this PR makes it clear that that is intentional, and why)\nMartin (I think) said that he'd check this offline after the Sep 11 interim.\nAt 8.1.2 you say lower values have higher priorities for connPrio. But later, at msgPrio, higher values have higher priority, and in the examples where you speak of the interaction of connPrio and msgPrio, you imply that higher means higher for both properties. Please check for consistency and clarify.\nLower values having higher priorities for connPrio is a remnant that we must have missed when we flipped this, e.g. PR which I don't know... too many iterations have mangled the text.\nAha, now I see what happened: following some more PRs, also , in PR , NAME re-introduced some text saying that sends on Connections with lower-priority values will be prioritized over sends on Connections with a higher-priority value. This reverses the logic for some part of the text - I suspect that NAME (very understandably!) forgot what happened and this created this mix. We have been editing these documents for way too long!\nI'm not sure the priority framework really fits together. Sec 6.4 says we 'should' do something that sounds like WFQ, but 7.1.5 allows the application to decide what happens. Meanwhile message priority suggests a strict priority-based scheduler. Meanwhile, in sec 8.1.3.2 \"Note that this property is not a per-message override of the connection Priority - see Section 7.1.3. 
Both Priority properties may interact, but can be used independently and be realized by different mechanisms.\" At the risk of overindexing on a partly-baked draft in httpbis, how would I implement strict priority of HTTP/3 streams over a taps API that implements QUIC? Would I need to request strict priority in both connection priority and message priority? Or I guess connection priority would be sufficient?\nWe seem comfortable with the mechanisms in the API, but we should have more text connecting the various sections explaining priority.\nMy comment concerned \"7.1.3. Priority (Connection)\", my thought was this just influences the sender-side to control queuing within a connection group. If so, if it worth calling this property \"Group Priority\" and explain this is sender-side?\nI find the priority framework perhaps overspecified at the connection level and underspecified at the message level, with the interaction between the two left explicitly unstated. I would almost rather see the priority type and meaning left completely implementation-specific; otherwise, my suspicion is that someone will write an application using numeric priorities, declare victory when it works fine on one platform, and not understand that the behavior is not universal. In general, I think if TAPS is going to produce a consistent, easily-understandable, and portable API, it needs to stick to easily-understood abstractions. Message-level priority is sort of like the TCP URG mechanism (though better specified and more useful, for sure), but I'm thinking urgent messages sent explicitly out-of-band (e.g., new Connection) with implementation-specific QoS will produce more consistent, predictable, and understandable outcomes in the pessimal case.\nOk for me (and thanks for doing this!)Some nits, but I believe this addresses my concerns in the issue.", "new_text": "will not result in a SendError separate from the InitiateError signaling the failure of Connection establishment. 8.2.7. 
The Transport Services interface provides two properties to allow a sender to signal the relative priority of data transmission: the Priority Message Property msg-priority, and the Connection Priority Connection Property conn-priority. These properties are designed to allow the expression and implementation of a wide variety of approaches to transmission priority in the transport and application layer, including those which do not appear on the wire (affecting only sender-side transmission scheduling) as well as those that do (e.g., I-D.ietf-httpbis-priority). A Transport Services system gives no guarantees about how its expression of relative priorities will be realized; for example, if a transport stack that only provides a single in-order reliable stream is selected, prioritization information will simply be ignored. However, the Transport Services system will seek to ensure that performance of relatively-prioritized connections and messages is not worse with respect to those connections and messages than an equivalent configuration in which all prioritization properties are left at their defaults. The Transport Services interface does order Connection Priority over the Priority Message Property. In the absence of other externalities (e.g., transport-layer flow control), a priority 1 Message on a priority 0 Connection will be sent before a priority 0 Message on a priority 1 Connection in the same group. 8.3. Once a Connection is established, it can be used for receiving data"}
{"id": "q-en-api-drafts-9220e53f190070fdef15bbd5d7476fda07a0d6e2af62b4a54f3dfb579d5bdf92", "old_text": "to an inbound SYN with a SYN-ACK. Calling \"Clone\" on a TCP Connection creates a new Connection with equivalent parameters. The two Connections are otherwise independent. SEND.TCP. TCP does not on its own preserve Message boundaries. Calling \"Send\" on a TCP connection lays out the bytes on the TCP", "comments": "This provides more detail on how to implement Clone with TCP, and clarifies that SCTP's \"Initiate\" should not yield a stream of an already existing SCTP Association in case \"activeReadBeforeSend\" is Preferred or Required.\nCurrently the \"Clone\" part of, e.g., TCP (section 10.1) only says: \"Calling Clone on a TCP Connection creates a new Connection with equivalent parameters. The two Connections are otherwise independent.\" This text should reflect that equivalent behavior must be maintained upon later Connection configuration calls. And, there should be more text in general on cloning (see issue ).\nThis text must refer to a new \"do I want a Connection that supports initiate followed by listen?\" property. This property will be added to address - so I'll address this issue when the PR addressing has landed.\nAdded a question for NAME", "new_text": "to an inbound SYN with a SYN-ACK. Calling \"Clone\" on a TCP Connection creates a new Connection with equivalent parameters. These Connections, and Connections generated via later calls to \"Clone\" on one of them, form a Connection Group. To realize \"entanglement\" for these Connections, with the exception of \"Connection Priority\", changing a Connection Property on one of them must affect the Connection Properties of the others too. No guarantees of honoring the Connection Property \"Connection Priority\" are given, and thus it is safe for an implementation of a transport system to ignore this property. 
When it is reasonable to assume that Connections traverse the same path (e.g., when they share the same encapsulation), support for it can also experimentally be implemented using a congestion control coupling mechanism (see for example TCP-COUPLING or RFC3124). SEND.TCP. TCP does not on its own preserve Message boundaries. Calling \"Send\" on a TCP connection lays out the bytes on the TCP"}
{"id": "q-en-api-drafts-9220e53f190070fdef15bbd5d7476fda07a0d6e2af62b4a54f3dfb579d5bdf92", "old_text": "If this is the only Connection object that is assigned to the SCTP association or stream mapping has not been negotiated, CONNECT.SCTP is called. Else, a new stream is used: if there are enough streams available, \"Initiate\" is just a local operation that assigns a new stream number to the Connection object. The number of streams is negotiated as a parameter of the prior CONNECT.SCTP call, and it represents a trade-off between local resource usage and the number of Connection objects that can be mapped without requiring a reconfiguration signal. When running out of streams, ADD_STREAM.SCTP must be called. If this is the only Connection object that is assigned to the SCTP association or stream mapping has not been negotiated,", "comments": "This provides more detail on how to implement Clone with TCP, and clarifies that SCTP's \"Initiate\" should not yield a stream of an already existing SCTP Association in case \"activeReadBeforeSend\" is Preferred or Required.\nCurrently the \"Clone\" part of, e.g., TCP (section 10.1) only says: \"Calling Clone on a TCP Connection creates a new Connection with equivalent parameters. The two Connections are otherwise independent.\" This text should reflect that equivalent behavior must be maintained upon later Connection configuration calls. And, there should be more text in general on cloning (see issue ).\nThis text must refer to a new \"do I want a Connection that supports initiate followed by listen?\" property. This property will be added to address - so I'll address this issue when the PR addressing has landed.\nAdded a question for NAME", "new_text": "If this is the only Connection object that is assigned to the SCTP association or stream mapping has not been negotiated, CONNECT.SCTP is called. 
Else, unless the Selection Property \"activeReadBeforeSend\" is Preferred or Required, a new stream is used: if there are enough streams available, \"Initiate\" is just a local operation that assigns a new stream number to the Connection object. The number of streams is negotiated as a parameter of the prior CONNECT.SCTP call, and it represents a trade-off between local resource usage and the number of Connection objects that can be mapped without requiring a reconfiguration signal. When running out of streams, ADD_STREAM.SCTP must be called. If this is the only Connection object that is assigned to the SCTP association or stream mapping has not been negotiated,"}
{"id": "q-en-api-drafts-a668122bca696bcb2ef87e2a576c617a3be57493f3b2583681af31ce31ef0232", "old_text": "be sent in multiple parts. Protocols that provide the framing (such as length-value protocols, or protocols that use delimiters) provide data boundaries that may be longer than the typical datagram. Each Message for framing protocols corresponds to a single frame, which may be sent either as a complete Message, or in multiple parts. 5.1.", "comments": "Now I feel like I'm shaving the yak. This is more than what we talked about in the interim, but it felt like the following sentence required additional clarity. Let me know if I misunderstood the intended meaning, in which case I will rephrase.", "new_text": "be sent in multiple parts. Protocols that provide the framing (such as length-value protocols, or protocols that use delimiters) may support Message sizes that do not fit within a single datagram. Each Message for framing protocols corresponds to a single frame, which may be sent either as a complete Message in the underlying protocol, or in multiple parts. 5.1."}
{"id": "q-en-api-drafts-706c2a71a77deda77271a736ceb223b118da801cc6917ad39674d0fc4a9c26b8", "old_text": "5.4. Entangled Connections can be created using the Clone Action: Calling Clone on a Connection yields a group of Connections: the parent Connection on which Clone was called, and a resulting cloned Connection. The new Connection is actively openend, and it will send a Ready Event or an EstablishmentError Event. The Connections within a group are \"entangled\" with each other, and become part of a Connection Group. Calling Clone on any of these Connections adds another Connection to the Connection Group, and so on. \"Entangled\" Connections share all Connection Properties except \"Connection Priority\" (see conn-priority) . Like all other Properties, Connection Priority is copied to the new Connection when calling Clone(), but it is not entangled: Changing Connection Priority on one Connection does not change it on the other Connections in the same Connection Group. The stack of Message Framers associated with a Connection are also copied to the cloned Connection when calling Clone. In other words, a cloned Connection has the same stack of Message Framers as the Connection from which they are Cloned, but these Framers may internally maintain per-Connection state. It is also possible to check which Connections belong to the same Connection Group. Calling GroupedConnections() on a specific", "comments": "This changes the use of the term \"entanglement\" as discussed in , and moves some text around in this section for better clarity and less repetition. Also, addressing , it introduces a framer as an optional parameter to the Clone call.\nNAME I incorporated all of your suggestions, thanks!\nThis is not fully clear from section 5.4., however, if connections in a group are each QUIC streams, I think it should be possible to use different framers on each stream. 
So it might be okay that a framers is copied when clone is called but changing a framer on one connection should not change the framer on the other connection in a group...?\nAlso an related editorial comment (that could be addressed in the same PR eventually): this section on groups only at the very end has a paragraph starting with \"Note that calling Clone() can result in on-the-wire signaling...\" which kind of explains that groups are supposed to be used for multistream protocol but can also be mapped to separate connection if the protocol doesn't not support multiplexing of stream on one connection. I think this information should be added at the very beginning of this section.\nActually, now looking at the framer section, I guess you would rather like to already set the new framer before you clone a connection...?\nI agree with everything you say here; about your last suggestion, maybe a framer should be an optional argument of the clone call?\nI don\u2019t actually understand the difference between a connection in a Connection Group and an \u201centangled connection\u201d ? Both terms are used in API, but in different ways (possibly by different authors) - do we really really need the word \u201centangled\u201d?? if we do we need, can we define and explain this.\nI think that having the word is good as it indicates a deeper relationship than mere \"grouping\". Just based on terms, I'd (correctly) imagine that I can define priorities between members of a group, and that they somehow belong together... as in: Close with a common call, etc. ... but I don't think that I would expect, just from this term, that changing a property on one of them affects all the others. Hence the phrasing: \"The connections within a group are 'entangled' with each other\". However, I agree that in most other places, we should talk about \"groups\" instead. I can try to fix this.\nSounds good - we should add that explanation upfront in the definitions then also?\n[Yes, I think so.] 
EDIT: no, now I don't think so anymore. (see )\nI have addressed this in PR . Note that the text interchangeably talked about both Connections and Connection Properties as being \"entangled\". I have now written this such that the term \"entanglement\" applies to Connection Properties only, whereas Connections in a group are, well, members of the same Connection Group.\nLooks good to me.", "new_text": "5.4. Connection Groups can be created using the Clone Action: Calling Clone on a Connection yields a Connection Group containing two Connections: the parent Connection on which Clone was called, and a resulting cloned Connection. The new Connection is actively opened, and it will send a Ready Event or an EstablishmentError Event. Calling Clone on any of these Connections adds another Connection to the Connection Group. Connections in a Connection Group share all Connection Properties except \"Connection Priority\" (see conn-priority), and these Connection Properties are entangled: Changing one of the Connection Properties on one Connection in the Connection Group automatically changes the Connection Property for all others. For example, changing \"Timeout for aborting Connection\" (see conn-timeout) on one Connection in a Connection Group will automatically make the same change to this Connection Property for all other Connections in the Connection Group. Like all other Properties, \"Connection Priority\" is copied to the new Connection when calling Clone(), but in this case, a later change to the \"Connection Priority\" on one Connection does not change it on the other Connections in the same Connection Group. Message Properties are also not entangled. For example, changing \"Lifetime\" (see msg-lifetime) of a Message will only affect a single Message on a single Connection. A new Connection created by Clone can have a Message Framer assigned via the optional \"framer\" parameter of the Clone Action. 
If this parameter is not supplied, the stack of Message Framers associated with a Connection is copied to the cloned Connection when calling Clone. In this case, the cloned Connection has the same stack of Message Framers as the Connection from which it was cloned, but these Framers may internally maintain per-Connection state. It is also possible to check which Connections belong to the same Connection Group. Calling GroupedConnections() on a specific"}
{"id": "q-en-api-drafts-706c2a71a77deda77271a736ceb223b118da801cc6917ad39674d0fc4a9c26b8", "old_text": "is just a new stream of an already active multi-streaming protocol instance. Changing one of the Connection Properties on one Connection in the group changes it for all others. Message Properties, however, are not entangled. For example, changing \"Timeout for aborting Connection\" (see conn-timeout) on one Connection in a group will automatically change this Connection Property for all Connections in the group in the same way. However, changing \"Lifetime\" (see msg- lifetime) of a Message will only affect a single Message on a single Connection, entangled or not. If the underlying protocol supports multi-streaming, it is natural to use this functionality to implement Clone. In that case, entangled Connections are multiplexed together, giving them similar treatment not only inside endpoints, but also across the end-to-end Internet path. Note that calling Clone() can result in on-the-wire signaling, e.g., to open a new connection, depending on the underlying Protocol Stack. When Clone() leads to multiple connections being opened instead of multi-streaming, the Transport Services system will ensure consistency of Connection Properties by uniformly applying them to all underlying connections in a group. Even in such a case, there are possibilities for a Transport Services system to implement prioritization within a Connection Group TCP-COUPLING RFC8699. Attempts to clone a Connection can result in a CloneError: The Connection Priority Connection Property operates on entangled Connections using the same approach as in msg-priority: when allocating available network capacity among Connections in a Connection Group, sends on Connections with lower Priority values will be prioritized over sends on Connections with higher Priority values. 
Capacity will be shared among these Connections according to", "comments": "This changes the use of the term \"entanglement\" as discussed in , and moves some text around in this section for better clarity and less repetition. Also, addressing , it introduces a framer as an optional parameter to the Clone call.\nNAME I incorporated all of your suggestions, thanks!\nThis is not fully clear from section 5.4., however, if connections in a group are each QUIC streams, I think it should be possible to use different framers on each stream. So it might be okay that a framers is copied when clone is called but changing a framer on one connection should not change the framer on the other connection in a group...?\nAlso an related editorial comment (that could be addressed in the same PR eventually): this section on groups only at the very end has a paragraph starting with \"Note that calling Clone() can result in on-the-wire signaling...\" which kind of explains that groups are supposed to be used for multistream protocol but can also be mapped to separate connection if the protocol doesn't not support multiplexing of stream on one connection. I think this information should be added at the very beginning of this section.\nActually, now looking at the framer section, I guess you would rather like to already set the new framer before you clone a connection...?\nI agree with everything you say here; about your last suggestion, maybe a framer should be an optional argument of the clone call?\nI don\u2019t actually understand the difference between a connection in a Connection Group and an \u201centangled connection\u201d ? Both terms are used in API, but in different ways (possibly by different authors) - do we really really need the word \u201centangled\u201d?? if we do we need, can we define and explain this.\nI think that having the word is good as it indicates a deeper relationship than mere \"grouping\". 
Just based on terms, I'd (correctly) imagine that I can define priorities between members of a group, and that they somehow belong together... as in: Close with a common call, etc. ... but I don't think that I would expect, just from this term, that changing a property on one of them affects all the others. Hence the phrasing: \"The connections within a group are 'entangled' with each other\". However, I agree that in most other places, we should talk about \"groups\" instead. I can try to fix this.\nSounds good - we should add that explanation upfront in the definitions then also?\n[Yes, I think so.] EDIT: no, now I don't think so anymore. (see )\nI have addressed this in PR . Note that the text interchangeably talked about both Connections and Connection Properties as being \"entangled\". I have now written this such that the term \"entanglement\" applies to Connection Properties only, whereas Connections in a group are, well, members of the same Connection Group.\nLooks good to me.", "new_text": "is just a new stream of an already active multi-streaming protocol instance. If the underlying protocol supports multi-streaming, it is natural to use this functionality to implement Clone. In that case, Connections in a Connection Group are multiplexed together, giving them similar treatment not only inside endpoints, but also across the end-to-end Internet path. Note that calling Clone() can result in on-the-wire signaling, e.g., to open a new transport connection, depending on the underlying Protocol Stack. When Clone() leads to the opening of multiple such connections, the Transport Services system will ensure consistency of Connection Properties by uniformly applying them to all underlying connections in a group. Even in such a case, there are possibilities for a Transport Services system to implement prioritization within a Connection Group TCP-COUPLING RFC8699. 
Attempts to clone a Connection can result in a CloneError: The \"Connection Priority\" Connection Property operates on Connections in a Connection Group using the same approach as in msg-priority: when allocating available network capacity among Connections in a Connection Group, sends on Connections with lower Priority values will be prioritized over sends on Connections with higher Priority values. Capacity will be shared among these Connections according to"}
{"id": "q-en-api-drafts-706c2a71a77deda77271a736ceb223b118da801cc6917ad39674d0fc4a9c26b8", "old_text": "When set to true, this property will initiate new Connections using as little cached information (such as session tickets or cookies) as possible from previous connections that are not entangled with it. Any state generated by this Connection will only be shared with entangled connections. Cloned Connections will use saved state from within the Connection Group. This is used for separating Connection Contexts as specified in I-D.ietf-taps-arch. Note that this does not guarantee no leakage of information, as implementations may not be able to fully isolate all caches (e.g.", "comments": "This changes the use of the term \"entanglement\" as discussed in , and moves some text around in this section for better clarity and less repetition. Also, addressing , it introduces a framer as an optional parameter to the Clone call.\nNAME I incorporated all of your suggestions, thanks!\nThis is not fully clear from section 5.4., however, if connections in a group are each QUIC streams, I think it should be possible to use different framers on each stream. So it might be okay that a framers is copied when clone is called but changing a framer on one connection should not change the framer on the other connection in a group...?\nAlso an related editorial comment (that could be addressed in the same PR eventually): this section on groups only at the very end has a paragraph starting with \"Note that calling Clone() can result in on-the-wire signaling...\" which kind of explains that groups are supposed to be used for multistream protocol but can also be mapped to separate connection if the protocol doesn't not support multiplexing of stream on one connection. 
I think this information should be added at the very beginning of this section.\nActually, now looking at the framer section, I guess you would rather like to already set the new framer before you clone a connection...?\nI agree with everything you say here; about your last suggestion, maybe a framer should be an optional argument of the clone call?\nI don\u2019t actually understand the difference between a connection in a Connection Group and an \u201centangled connection\u201d ? Both terms are used in API, but in different ways (possibly by different authors) - do we really really need the word \u201centangled\u201d?? if we do we need, can we define and explain this.\nI think that having the word is good as it indicates a deeper relationship than mere \"grouping\". Just based on terms, I'd (correctly) imagine that I can define priorities between members of a group, and that they somehow belong together... as in: Close with a common call, etc. ... but I don't think that I would expect, just from this term, that changing a property on one of them affects all the others. Hence the phrasing: \"The connections within a group are 'entangled' with each other\". However, I agree that in most other places, we should talk about \"groups\" instead. I can try to fix this.\nSounds good - we should add that explanation upfront in the definitions then also?\n[Yes, I think so.] EDIT: no, now I don't think so anymore. (see )\nI have addressed this in PR . Note that the text interchangeably talked about both Connections and Connection Properties as being \"entangled\". 
I have now written this such that the term \"entanglement\" applies to Connection Properties only, whereas Connections in a group are, well, members of the same Connection Group.\nLooks good to me.", "new_text": "When set to true, this property will initiate new Connections using as little cached information (such as session tickets or cookies) as possible from previous connections that are not in the same Connection Group. Any state generated by this Connection will only be shared with Connections in the same Connection Group. Cloned Connections will use saved state from within the Connection Group. This is used for separating Connection Contexts as specified in I-D.ietf-taps-arch. Note that this does not guarantee no leakage of information, as implementations may not be able to fully isolate all caches (e.g.
{"id": "q-en-api-drafts-706c2a71a77deda77271a736ceb223b118da801cc6917ad39674d0fc4a9c26b8", "old_text": "prioritization. Note that this property is not a per-message override of the connection Priority - see conn-priority. The Priority properties may interact, but can be used independently and be realized by different mechanisms; see priority-in-taps.", "comments": "This changes the use of the term \"entanglement\" as discussed in , and moves some text around in this section for better clarity and less repetition. Also, addressing , it introduces a framer as an optional parameter to the Clone call.\nNAME I incorporated all of your suggestions, thanks!\nThis is not fully clear from section 5.4., however, if connections in a group are each QUIC streams, I think it should be possible to use different framers on each stream. So it might be okay that a framers is copied when clone is called but changing a framer on one connection should not change the framer on the other connection in a group...?\nAlso an related editorial comment (that could be addressed in the same PR eventually): this section on groups only at the very end has a paragraph starting with \"Note that calling Clone() can result in on-the-wire signaling...\" which kind of explains that groups are supposed to be used for multistream protocol but can also be mapped to separate connection if the protocol doesn't not support multiplexing of stream on one connection. I think this information should be added at the very beginning of this section.\nActually, now looking at the framer section, I guess you would rather like to already set the new framer before you clone a connection...?\nI agree with everything you say here; about your last suggestion, maybe a framer should be an optional argument of the clone call?\nI don\u2019t actually understand the difference between a connection in a Connection Group and an \u201centangled connection\u201d ? 
Both terms are used in API, but in different ways (possibly by different authors) - do we really really need the word \u201centangled\u201d?? if we do we need, can we define and explain this.\nI think that having the word is good as it indicates a deeper relationship than mere \"grouping\". Just based on terms, I'd (correctly) imagine that I can define priorities between members of a group, and that they somehow belong together... as in: Close with a common call, etc. ... but I don't think that I would expect, just from this term, that changing a property on one of them affects all the others. Hence the phrasing: \"The connections within a group are 'entangled' with each other\". However, I agree that in most other places, we should talk about \"groups\" instead. I can try to fix this.\nSounds good - we should add that explanation upfront in the definitions then also?\n[Yes, I think so.] EDIT: no, now I don't think so anymore. (see )\nI have addressed this in PR . Note that the text interchangeably talked about both Connections and Connection Properties as being \"entangled\". I have now written this such that the term \"entanglement\" applies to Connection Properties only, whereas Connections in a group are, well, members of the same Connection Group.\nLooks good to me.", "new_text": "prioritization. Note that this property is not a per-message override of the Connection Priority - see conn-priority. The Priority properties may interact, but can be used independently and be realized by different mechanisms; see priority-in-taps."}
{"id": "q-en-api-drafts-706c2a71a77deda77271a736ceb223b118da801cc6917ad39674d0fc4a9c26b8", "old_text": "example, if reliable delivery was requested for a Message handed over before calling Close, the Closed Event will signify that this Message has indeed been delivered. This action does not affect any other Connection that is entangled with this one in a Connection Group. The Closed Event informs the application that the Remote Endpoint has closed the Connection. There is no guarantee that a remote Close will indeed be signaled. Abort terminates a Connection without delivering any remaining Messages. This action does not affect any other Connection that is entangled with this one in a Connection Group. CloseGroup gracefully terminates a Connection and any other Connections that are entangled with this one in a Connection Group. For example, all of the Connections in a group might be streams of a single session for a multistreaming protocol; closing the entire group will close the underlying session. See also groups. As with Close, any Messages remaining to be processed on a Connection will be handled prior to closing. AbortGroup terminates a Connection and any other Connections that are entangled with this one in a Connection Group without delivering any remaining Messages. A ConnectionError informs the application that: 1) data could not be delivered to the peer after a timeout, or 2) the Connection has been", "comments": "This changes the use of the term \"entanglement\" as discussed in , and moves some text around in this section for better clarity and less repetition. Also, addressing , it introduces a framer as an optional parameter to the Clone call.\nNAME I incorporated all of your suggestions, thanks!\nThis is not fully clear from section 5.4., however, if connections in a group are each QUIC streams, I think it should be possible to use different framers on each stream. 
So it might be okay that a framers is copied when clone is called but changing a framer on one connection should not change the framer on the other connection in a group...?\nAlso an related editorial comment (that could be addressed in the same PR eventually): this section on groups only at the very end has a paragraph starting with \"Note that calling Clone() can result in on-the-wire signaling...\" which kind of explains that groups are supposed to be used for multistream protocol but can also be mapped to separate connection if the protocol doesn't not support multiplexing of stream on one connection. I think this information should be added at the very beginning of this section.\nActually, now looking at the framer section, I guess you would rather like to already set the new framer before you clone a connection...?\nI agree with everything you say here; about your last suggestion, maybe a framer should be an optional argument of the clone call?\nI don\u2019t actually understand the difference between a connection in a Connection Group and an \u201centangled connection\u201d ? Both terms are used in API, but in different ways (possibly by different authors) - do we really really need the word \u201centangled\u201d?? if we do we need, can we define and explain this.\nI think that having the word is good as it indicates a deeper relationship than mere \"grouping\". Just based on terms, I'd (correctly) imagine that I can define priorities between members of a group, and that they somehow belong together... as in: Close with a common call, etc. ... but I don't think that I would expect, just from this term, that changing a property on one of them affects all the others. Hence the phrasing: \"The connections within a group are 'entangled' with each other\". However, I agree that in most other places, we should talk about \"groups\" instead. I can try to fix this.\nSounds good - we should add that explanation upfront in the definitions then also?\n[Yes, I think so.] 
EDIT: no, now I don't think so anymore. (see )\nI have addressed this in PR . Note that the text interchangeably talked about both Connections and Connection Properties as being \"entangled\". I have now written this such that the term \"entanglement\" applies to Connection Properties only, whereas Connections in a group are, well, members of the same Connection Group.\nLooks good to me.", "new_text": "example, if reliable delivery was requested for a Message handed over before calling Close, the Closed Event will signify that this Message has indeed been delivered. This action does not affect any other Connection in the same Connection Group. The Closed Event informs the application that the Remote Endpoint has closed the Connection. There is no guarantee that a remote Close will indeed be signaled. Abort terminates a Connection without delivering any remaining Messages. This action does not affect any other Connection in the same Connection Group. CloseGroup gracefully terminates a Connection and any other Connections in the same Connection Group. For example, all of the Connections in a group might be streams of a single session for a multistreaming protocol; closing the entire group will close the underlying session. See also groups. As with Close, any Messages remaining to be processed on a Connection will be handled prior to closing. AbortGroup terminates a Connection and any other Connections in the same Connection Group without delivering any remaining Messages. A ConnectionError informs the application that: 1) data could not be delivered to the peer after a timeout, or 2) the Connection has been
{"id": "q-en-api-drafts-8823eb843c4119aa54bec9d58acae8e6a86a7b3630d8a0f8d9f290596a134434", "old_text": "during Connection establishment. This Message can potentially be received multiple times (i.e., multiple copies of the message data may be passed to the Remote Endpoint). See also msg- safelyreplayable. Note that disabling this property has no effect for protocols that are not connection-oriented and do not protect against duplicated messages, e.g., UDP. 4.2.6.", "comments": "The text says What does disabling mean? Does this mean that it is valid to set this property to \"Prohibit\" but still set \"safelyReplayable\" to \"Require\" on the PreConnection. Should that not be an error? This is also related to . The use of safelyReplayable for both reliable and unreliable services makes the text a bit confusing. Would it not be sufficient to state that if you do not Require Reliable Data Transfer you have to be prepared to deal with duplication?\nWe should remove this sentence in 4.2.5:", "new_text": "during Connection establishment. This Message can potentially be received multiple times (i.e., multiple copies of the message data may be passed to the Remote Endpoint). See also msg-safelyreplayable. 4.2.6."}
{"id": "q-en-api-drafts-c3ecd0fd8ff7a2998ad0f6e6bc8914f8a28d2b3f06dc037d9c8900836eca0772", "old_text": "a Preconnection allows applications to override this for more detailed control. Transport Properties MUST always be specified while security parameters are OPTIONAL. If Message Framers are used (see framing), they MUST be added to the Preconnection during pre-establishment.", "comments": "For other parameters you have to pass in an empty object if you do not want to set anything but not for SecurityParameters. Unclear why the parameters are treated differently?\nMake security parameters non-optional Remove \"Transport Properties MUST always be specified while security parameters are OPTIONAL.\" Define that there is an implementation-specific security parameter to disable security\nLGTM, thanks!", "new_text": "a Preconnection allows applications to override this for more detailed control. If Message Framers are used (see framing), they MUST be added to the Preconnection during pre-establishment."}
{"id": "q-en-api-drafts-c3ecd0fd8ff7a2998ad0f6e6bc8914f8a28d2b3f06dc037d9c8900836eca0772", "old_text": "Session cache management: Used to tune session cache capacity, lifetime, and other policies. Representation of Security Parameters in implementations should parallel that chosen for Transport Property names as recommended in scope-of-interface-defn.", "comments": "For other parameters you have to pass in an empty object if you do not want to set anything but not for SecurityParameters. Unclear why the parameters are treated differently?\nMake security parameters non-optional Remove \"Transport Properties MUST always be specified while security parameters are OPTIONAL.\" Define that there is an implementation-specific security parameter to disable security\nLGTM, thanks!", "new_text": "Session cache management: Used to tune session cache capacity, lifetime, and other policies. Connections that use Transport Services SHOULD use security in general. However, for compatibility with endpoints that do not support transport security protocols (such as a TCP endpoint that does not support TLS), applications can initialize their security parameters to indicate that security can be disabled, or can be opportunistic. If security is disabled, the Transport Services system will not attempt to add transport security automatically. If security is opportunistic, it will allow Connections without transport security, but will still attempt to use security if available. Representation of Security Parameters in implementations should parallel that chosen for Transport Property names as recommended in scope-of-interface-defn."}
{"id": "q-en-api-drafts-7879c35ccb9995ef783b420ea93a7d8117668296bd2b7fd462719cedbf6d6403", "old_text": "transport layer, in line with developments in modern platforms and programming languages; Selection between alternate network paths that can be informed by additional information that is available about the networks over which an endpoint can operate (e.g. Provisioning Domain (PvD) information RFC7556);", "comments": "This tries to write something small but specific about Network Policies\nLGTM\nIn Sec 2: Should Section 3.2 also talk about network policies, e.g. a certain network supports a DSCP that the application desires?\nAs discussed in Feb 2021 interim, this seems like a rabbithole. Gorry will figure out whether it is and write language if not, or close if so.\nI'd like to add a couple of lines of text here to bullet 2 that discusses how you might learn that an attached network supports certain features - e.g. through a PvD, routing advertisement, or local discovery method, and suggest that this could enable a system to know which paths have specific characteristics.\nI propose this as a starter: When additional information (such as Provisioning Domain (PvD) information {{RFC7556}}) is provided about the networks over which an endpoint can operate, this can inform the selection between alternate network paths. Such information includes network segment PMTU, set of supported DSCPs, expected usage, cost, etc. The usage of this information by the Transport Services API is generally independent of the specific mechanism/protocol that was used to receive the information (e.g. zero-conf, DHCP, or IPv6 RA). Thoughts... 
?\nTried to create a PR to add some policy to the path selection\nI think the policy is separate from the API.\nLGTM", "new_text": "transport layer, in line with developments in modern platforms and programming languages; Selection between alternate network paths that can be informed when additional information is available about the networks over which an endpoint can operate (e.g. Provisioning Domain (PvD) information RFC7556);
{"id": "q-en-api-drafts-9a3ecb472be58d7ee18fb10a7d07ce6d1f3e5760f5ece79eda1798f19587bae7", "old_text": "A Preconnection Object holds properties reflecting the application's requirements and preferences for the transport. These include Selection Properties for selecting protocol stacks and paths, as well as Connection Properties for configuration of the detailed operation of the selected Protocol Stacks. The protocol(s) and path(s) selected as candidates during establishment are determined and configured using these properties.", "comments": "Applied requested changes\nIn 3.2 it is clear that you can set Message Properties on Connections and Preconnections, but the rest of the document is not very clear about this. In 4.2 for instance the text only talks about setting Selection Properties and Connection Properties on PreConnections. In 7.1.3 the text says \"Connection Properties describe the default behavior for all Messages on a Connection\". But I think that Message Properties set on a Connection or PreConnection may also describe the default behavior so this relationship should be clarified. Also, the Message Property defaults given for Properties that do not depend on other Properties sets the defaults for all Messages in a Connection I assume? I also assume that a Message Property set on a Message Context overrides a Message Property set on a PreConnection or Connection, but this is also not specified. In 5.4. Connection Groups, I assume that setting Message Properties on a Connection applies to all Connections in the Connection Group? This is not clear.\nIn general, I agree. If you set a message property dynamically on a connection, though, I don't think it would affect the group\u2014I could have one stream in a multiplexed protocol have separate behavior, such as priority.\n+1 to tfpauly. But this is already written in 5.4, maybe you missed it (Anna)? \"Message Properties are also not entangled. 
For example, changing Lifetime (see Section 7.1.3.1) of a Message will only affect a single Message on a single Connection.\"\nI am fine with an update of a Message Property to a connection applying only to that connection. Priority is not a good example as it is already a special case, but my main issue was that we need to be more clear about the behavior in the specification. NAME the issue is about setting a Message Property on a Connection. The text you refer to talk about setting Message Properties on a message. You could interpret the first part as a general statement, but the way the text is structured it is easy to think it refers to setting Message Properties on a message so I think we should be explicit also about setting Message Properties on a Connection. Could be a second example.\nSo after looking at the text again, I think the example talking about setting Message Properties on a message should be removed, see comment in the pull request.\nThanks!!", "new_text": "A Preconnection Object holds properties reflecting the application's requirements and preferences for the transport. These include Selection Properties for selecting protocol stacks and paths, as well as Connection Properties and Message Properties for configuration of the detailed operation of the selected Protocol Stacks on a per-Connection and Message level. The protocol(s) and path(s) selected as candidates during establishment are determined and configured using these properties."}
{"id": "q-en-api-drafts-9a3ecb472be58d7ee18fb10a7d07ce6d1f3e5760f5ece79eda1798f19587bae7", "old_text": "\"Connection Priority\" on one Connection does not change it on the other Connections in the same Connection Group. Message Properties are also not entangled. For example, changing \"Lifetime\" (see msg-lifetime) of a Message will only affect a single Message on a single Connection. A new Connection created by Clone can have a Message Framer assigned via the optional \"framer\" parameter of the Clone Action. If this", "comments": "Applied requested changes\nIn 3.2 it is clear that you can set Message Properties on Connections and Preconnections, but the rest of the document is not very clear about this. In 4.2 for instance the text only talks about setting Selection Properties and Connection Properties on PreConnections. In 7.1.3 the text says \"Connection Properties describe the default behavior for all Messages on a Connection\". But I think that Message Properties set on a Connection or PreConnection may also describe the default behavior so this relationship should be clarified. Also, the Message Property defaults given for Properties that do not depend on other Properties sets the defaults for all Messages in a Connection I assume? I also assume that a Message Property set on a Message Context overrides a Message Property set on a PreConnection or Connection, but this is also not specified. In 5.4. Connection Groups, I assume that setting Message Properties on a Connection applies to all Connections in the Connection Group? This is not clear.\nIn general, I agree. If you set a message property dynamically on a connection, though, I don't think it would affect the group\u2014I could have one stream in a multiplexed protocol have separate behavior, such as priority.\n+1 to tfpauly. But this is already written in 5.4, maybe you missed it (Anna)? \"Message Properties are also not entangled. 
For example, changing Lifetime (see Section 7.1.3.1) of a Message will only affect a single Message on a single Connection.\"\nI am fine with an update of a Message Property to a connection applying only to that connection. Priority is not a good example as it is already a special case, but my main issue was that we need to be more clear about the behavior in the specification. NAME the issue is about setting a Message Property on a Connection. The text you refer to talk about setting Message Properties on a message. You could interpret the first part as a general statement, but the way the text is structured it is easy to think it refers to setting Message Properties on a message so I think we should be explicit also about setting Message Properties on a Connection. Could be a second example.\nSo after looking at the text again, I think the example talking about setting Message Properties on a message should be removed, see comment in the pull request.\nThanks!!", "new_text": "\"Connection Priority\" on one Connection does not change it on the other Connections in the same Connection Group. Message Properties set on a Connection also apply only to that Connection. A new Connection created by Clone can have a Message Framer assigned via the optional \"framer\" parameter of the Clone Action. If this"}
{"id": "q-en-api-drafts-9a3ecb472be58d7ee18fb10a7d07ce6d1f3e5760f5ece79eda1798f19587bae7", "old_text": "Sending a Message with Message Properties inconsistent with the Selection Properties of the Connection yields an error. Connection Properties describe the default behavior for all Messages on a Connection. If a Message Property contradicts a Connection Property, and if this per-Message behavior can be supported, it overrides the Connection Property for the specific Message. For example, if \"Reliable Data Transfer (Connection)\" is set to \"Require\" and a protocol with configurable per-Message reliability is used, setting \"Reliable Data Transfer (Message)\" to \"false\" for a particular Message will allow this Message to be sent without any reliability guarantees. Changing the Reliable Data Transfer property on Messages is only possible for Connections that were established enabling the Selection Property \"Configure Per-Message Reliability\". The following Message Properties are supported:", "comments": "Applied requested changes\nIn 3.2 it is clear that you can set Message Properties on Connections and Preconnections, but the rest of the document is not very clear about this. In 4.2 for instance the text only talks about setting Selection Properties and Connection Properties on PreConnections. In 7.1.3 the text says \"Connection Properties describe the default behavior for all Messages on a Connection\". But I think that Message Properties set on a Connection or PreConnection may also describe the default behavior so this relationship should be clarified. Also, the Message Property defaults given for Properties that do not depend on other Properties sets the defaults for all Messages in a Connection I assume? I also assume that a Message Property set on a Message Context overrides a Message Property set on a PreConnection or Connection, but this is also not specified. In 5.4. 
Connection Groups, I assume that setting Message Properties on a Connection applies to all Connections in the Connection Group? This is not clear.\nIn general, I agree. If you set a message property dynamically on a connection, though, I don't think it would affect the group\u2014I could have one stream in a multiplexed protocol have separate behavior, such as priority.\n+1 to tfpauly. But this is already written in 5.4, maybe you missed it (Anna)? \"Message Properties are also not entangled. For example, changing Lifetime (see Section 7.1.3.1) of a Message will only affect a single Message on a single Connection.\"\nI am fine with an update of a Message Property to a connection applying only to that connection. Priority is not a good example as it is already a special case, but my main issue was that we need to be more clear about the behavior in the specification. NAME the issue is about setting a Message Property on a Connection. The text you refer to talk about setting Message Properties on a message. You could interpret the first part as a general statement, but the way the text is structured it is easy to think it refers to setting Message Properties on a message so I think we should be explicit also about setting Message Properties on a Connection. Could be a second example.\nSo after looking at the text again, I think the example talking about setting Message Properties on a message should be removed, see comment in the pull request.\nThanks!!", "new_text": "Sending a Message with Message Properties inconsistent with the Selection Properties of the Connection yields an error. If a Message Property contradicts a Connection Property, and if this per-Message behavior can be supported, it overrides the Connection Property for the specific Message. 
For example, if \"Reliable Data Transfer (Connection)\" is set to \"Require\" and a protocol with configurable per-Message reliability is used, setting \"Reliable Data Transfer (Message)\" to \"false\" for a particular Message will allow this Message to be sent without any reliability guarantees. Changing the Reliable Data Transfer property on Messages is only possible for Connections that were established enabling the Selection Property \"Configure Per-Message Reliability\". The following Message Properties are supported:"}
{"id": "q-en-api-drafts-95719253b38fc8d19dbaceb0ff88149b6824380fc0feffaa4309ecbd2b0b974b", "old_text": "IP address (IPv4 or IPv6 address): Interface name (string): Note that an IPv6 address specified with a scope (e.g. \"2001:db8:4920:e29d:a420:7461:7073:0a%en0\") is equivalent to", "comments": "Explain the different use-cases on specifying interfaces to\nThere's something obvious I'm not seeing: if I want to specify a multicast or link-local address for an endpoint, why not simply specify that address? What's the use of adding the interface into it? Some addresses will inherently limit interfaces; I thought interface preference was about preferring wifi to 4G, etc.\nLike, in section 6.1.1:\nMulticast groups can be scoped (see RFC7346), thus, may only be available on certain interfaces or even have different meanings based on the interface. So for Interface- and Link-Local scope, specifying the interface is strictly required. For others use cases (like coping with non-unique use of RFC1918 addresses) this may also be helpful.\nWhy does the API have both the ability to say LocalSpecifier.WithInterface and to set interface requirements on TransportProperties? Why not just keep the latter and remove the former?\nThere's history here: LocalSpecifier.WithInterface predates explicitly setting the interface name in interface requirements. There might be reasons to keep this: LocalSpecifier is more likely to show up in application UI than in transport properties. Any text proposed to keep both would need to have an unambiguous way to resolve conflicts between the two. This might be a source of hard-to-debug errors.\nTo discuss next interim", "new_text": "IP address (IPv4 or IPv6 address): Interface name (string), e.g., to qualify link-local or multicast addresses (see ifspec for details): Note that an IPv6 address specified with a scope (e.g. \"2001:db8:4920:e29d:a420:7461:7073:0a%en0\") is equivalent to"}
{"id": "q-en-api-drafts-95719253b38fc8d19dbaceb0ff88149b6824380fc0feffaa4309ecbd2b0b974b", "old_text": "6.1.2. An Endpoint can have an alternative definition when using different protocols. For example, a server that supports both TLS/TCP and QUIC may be accessible on two different port numbers depending on which", "comments": "Explain the different use-cases on specifying interfaces to\nThere's something obvious I'm not seeing: if I want to specify a multicast or link-local address for an endpoint, why not simply specify that address? What's the use of adding the interface into it? Some addresses will inherently limit interfaces; I thought interface preference was about preferring wifi to 4G, etc.\nLike, in section 6.1.1:\nMulticast groups can be scoped (see RFC7346), thus, may only be available on certain interfaces or even have different meanings based on the interface. So for Interface- and Link-Local scope, specifying the interface is strictly required. For others use cases (like coping with non-unique use of RFC1918 addresses) this may also be helpful.\nWhy does the API have both the ability to say LocalSpecifier.WithInterface and to set interface requirements on TransportProperties? Why not just keep the latter and remove the former?\nThere's history here: LocalSpecifier.WithInterface predates explicitly setting the interface name in interface requirements. There might be reasons to keep this: LocalSpecifier is more likely to show up in application UI than in transport properties. Any text proposed to keep both would need to have an unambiguous way to resolve conflicts between the two. This might be a source of hard-to-debug errors.\nTo discuss next interim", "new_text": "6.1.2. Note that this API has multiple ways to constrain and prioritize endpoint candidates based on the network interface: Specifying an interface on a RemoteEndpoint qualifies the scope of the remote endpoint, e.g., for link-local addresses. Specifying an interface on a LocalEndpoint explicitly binds all candidates derived from this endpoint to use the specified interface. Specifying an interface using the \"interface\" Selection Property (prop-interface) or indirectly via the \"pvd\" Selection Property (prop-pvd) influences the selection among the available candidates. While specifying an interface on an endpoint restricts the candidates available for connection establishment in the Pre-Establishment Phase, the Selection Properties prioritize and constrain the connection establishment. 6.1.3. An Endpoint can have an alternative definition when using different protocols. For example, a server that supports both TLS/TCP and QUIC may be accessible on two different port numbers depending on which"}
{"id": "q-en-api-drafts-95719253b38fc8d19dbaceb0ff88149b6824380fc0feffaa4309ecbd2b0b974b", "old_text": "The following example shows a case where \"example.com\" has a server running on port 443, with an alternate port of 8443 for QUIC. 6.1.3. The following examples of Endpoints show common usage patterns.", "comments": "Explain the different use-cases on specifying interfaces to\nThere's something obvious I'm not seeing: if I want to specify a multicast or link-local address for an endpoint, why not simply specify that address? What's the use of adding the interface into it? Some addresses will inherently limit interfaces; I thought interface preference was about preferring wifi to 4G, etc.\nLike, in section 6.1.1:\nMulticast groups can be scoped (see RFC7346), thus, may only be available on certain interfaces or even have different meanings based on the interface. So for Interface- and Link-Local scope, specifying the interface is strictly required. For others use cases (like coping with non-unique use of RFC1918 addresses) this may also be helpful.\nWhy does the API have both the ability to say LocalSpecifier.WithInterface and to set interface requirements on TransportProperties? Why not just keep the latter and remove the former?\nThere's history here: LocalSpecifier.WithInterface predates explicitly setting the interface name in interface requirements. There might be reasons to keep this: LocalSpecifier is more likely to show up in application UI than in transport properties. Any text proposed to keep both would need to have an unambiguous way to resolve conflicts between the two. This might be a source of hard-to-debug errors.\nTo discuss next interim", "new_text": "The following example shows a case where \"example.com\" has a server running on port 443, with an alternate port of 8443 for QUIC. 6.1.4. The following examples of Endpoints show common usage patterns."}
{"id": "q-en-api-drafts-95719253b38fc8d19dbaceb0ff88149b6824380fc0feffaa4309ecbd2b0b974b", "old_text": "Specify a Local Endpoint using a STUN server: 6.1.4. Specify a Local Endpoint using an Any-Source Multicast group to join on a named local interface:", "comments": "Explain the different use-cases on specifying interfaces to\nThere's something obvious I'm not seeing: if I want to specify a multicast or link-local address for an endpoint, why not simply specify that address? What's the use of adding the interface into it? Some addresses will inherently limit interfaces; I thought interface preference was about preferring wifi to 4G, etc.\nLike, in section 6.1.1:\nMulticast groups can be scoped (see RFC7346), thus, may only be available on certain interfaces or even have different meanings based on the interface. So for Interface- and Link-Local scope, specifying the interface is strictly required. For others use cases (like coping with non-unique use of RFC1918 addresses) this may also be helpful.\nWhy does the API have both the ability to say LocalSpecifier.WithInterface and to set interface requirements on TransportProperties? Why not just keep the latter and remove the former?\nThere's history here: LocalSpecifier.WithInterface predates explicitly setting the interface name in interface requirements. There might be reasons to keep this: LocalSpecifier is more likely to show up in application UI than in transport properties. Any text proposed to keep both would need to have an unambiguous way to resolve conflicts between the two. This might be a source of hard-to-debug errors.\nTo discuss next interim", "new_text": "Specify a Local Endpoint using a STUN server: 6.1.5. Specify a Local Endpoint using an Any-Source Multicast group to join on a named local interface:"}
{"id": "q-en-api-drafts-95719253b38fc8d19dbaceb0ff88149b6824380fc0feffaa4309ecbd2b0b974b", "old_text": "specified via Provisioning Domain attributes (see prop-pvd) or another specific property. 6.2.12. pvd", "comments": "Explain the different use-cases on specifying interfaces to\nThere's something obvious I'm not seeing: if I want to specify a multicast or link-local address for an endpoint, why not simply specify that address? What's the use of adding the interface into it? Some addresses will inherently limit interfaces; I thought interface preference was about preferring wifi to 4G, etc.\nLike, in section 6.1.1:\nMulticast groups can be scoped (see RFC7346), thus, may only be available on certain interfaces or even have different meanings based on the interface. So for Interface- and Link-Local scope, specifying the interface is strictly required. For others use cases (like coping with non-unique use of RFC1918 addresses) this may also be helpful.\nWhy does the API have both the ability to say LocalSpecifier.WithInterface and to set interface requirements on TransportProperties? Why not just keep the latter and remove the former?\nThere's history here: LocalSpecifier.WithInterface predates explicitly setting the interface name in interface requirements. There might be reasons to keep this: LocalSpecifier is more likely to show up in application UI than in transport properties. Any text proposed to keep both would need to have an unambiguous way to resolve conflicts between the two. This might be a source of hard-to-debug errors.\nTo discuss next interim", "new_text": "specified via Provisioning Domain attributes (see prop-pvd) or another specific property. Note that this property is not used to specify an interface scope for a particular endpoint. ifspec provides details about how to qualify endpoint candidates on a per-interface basis. 6.2.12. pvd"}
{"id": "q-en-api-drafts-1b9dce729795c271cd1221f28cc5d49a13863796b518792711076a93583c301f", "old_text": "Interface name (string): An Endpoint cannot have multiple identifiers of a same type set. That is, an endpoint cannot have two IP addresses specified. Two separate IP addresses are represented as two Endpoint Objects. If a", "comments": "Sections 4.1 and 4.1.3 introduce to configure an Endpoint Object. The address supplied in examples appears to be an address literal. However, Section 1.1 does not define any types explicitly associated with an internet address -- or could work, but I don't think that's the intent for these APIs, either. Section 3.2.2 does not extend this. It's unclear whether this literal is intended to support IPv6 scoped addresses as described in [RFC4007]. While this RFC describes a textual representation (possibly not in line with the goals of the API), [RFC6874] describes how to represent zone indexes in an address literal. If zone indexes are not intended to be supported, this could be clarified by, when \"IPv4 and IPv6 address\" is mentioned in section 4.1, also citing [RFC791] and [RFC4007] with the respective address family. If zone indexes are intended to be supported, ambiguity exists around the use of and on a Local Endpoint, where the supplied address has a zone index that does not match the supplied interface. This could be clarified by: Defining type names that represent IPv4 and IPv6 addresses in section 1.1. Specifying the properties of an address literal supported by that type (referencing [RFC791] for IPv4 and both [RFC4007] and [RFC6874] for IPv6). Clarifying in section 4.1 that \"IPv4 or IPv6 address\" are of the respective type described in section 1.1 when used in or . Clarifying the behavior of zone index and interface choices on a Local Endpoint in a new section 4.1.4. This may tie in to to some extent in that this is unlikely to be discoverable until is called. [RFC791]: URL [RFC4007]: URL [RFC6874]: URL\n(note this is in \u00a76.1/6.1.3 post-)\nMention that the %en0 addition to the IPv6 address is equivalent to explicitly setting WithInterface. Also mention that these are implied to be system types for addresses\nLooks sane", "new_text": "Interface name (string): Note that an IPv6 address specified with a scope (e.g. \"2001:db8:4920:e29d:a420:7461:7073:0a%en0\") is equivalent to \"WithIPv6Address\" with an unscoped address and \"WithInterface \" together. An Endpoint cannot have multiple identifiers of a same type set. That is, an endpoint cannot have two IP addresses specified. Two separate IP addresses are represented as two Endpoint Objects. If a"}
{"id": "q-en-api-drafts-95a39908888afd61070bd5e76fbd34e4c061f5dd67f3d4296e747d9c40a03e6d", "old_text": "the other hand, the Remote Endpoint specifies a hostname but no addresses, the Connection can perform name resolution and attempt using any address derived from the original hostname of the Remote Endpoint. The Transport Services system resolves names internally, when the Initiate(), Listen(), or Rendezvous() method is called to establish a", "comments": "I think there's a need for more discussion and clarity around the restriction that \"An Endpoint cannot have multiple identifiers of a same type set.\" Why not? You effectively are letting the endpoint have (potentially) multiple IP addresses when you create it with an identifier that is a DNS name. This feels like an artificial, inconsistent, restriction given the rest of the document (look at the 3rd paragraph of 6.1).\nI think the right thing to do is to delete \"An Endpoint cannot have multiple identifiers of a same type set. That is, an endpoint cannot have two IP addresses specified. Two separate IP addresses are represented as two Endpoint Objects.\"; if you agree, please do so.\nI don't agree that we should delete this. This has been a quite important point in our experience with the API. To make the system work well, resolution should be handled by the TAPS system. If you want to connect by a hostname, provide the hostname\u2014don't try to cache the addresses (often incorrectly) and just pass the same set of addresses again. Note that this doesn't stop multiple addresses from being added to a preconnection. We have this example: Just, one endpoint is a single identifier. You can pass multiple if you want multiple.\nNAME\nDoes this mean the API can't distinguish a remote endpoint that's known to be a single interface with two IP addresses assigned, which would have to be passed as two objects, from two remote endpoints each with a single interface? There's no way of saying \"these two IP addresses are known to be the same remote interface\"? If so, does this matter?", "new_text": "the other hand, the Remote Endpoint specifies a hostname but no addresses, the Connection can perform name resolution and attempt using any address derived from the original hostname of the Remote Endpoint. Note that multiple Remote Endpoints can be added to a Preconnection, as discussed in add-endpoints. The Transport Services system resolves names internally, when the Initiate(), Listen(), or Rendezvous() method is called to establish a"}
{"id": "q-en-api-e7ba696dc0f978d059d004dffd6632804908dcfca9332311f98f7dd0c98ec160", "old_text": "\"permitted\" (required, boolean): indicates whether or not the Captive Portal is open to the requesting host \"hmac-key\" (required, string): provides a per-host key that can be used to authenticate messages from the Captive Portal enforcement server \"user-portal-url\" (required, string): provides the URL of a web portal that can be presented to a user to interact with", "comments": "Satisfies\nlooks good.\nShould we hold the addition of this until we have something that depends on it being in the API?\nWe certainly can, although if we want to use a key as discussed in the WG meeting for the purposes of ICMP validation, having a way to add a key here is nice for removing a bootstrapping step. Hopefully we can get an updated ICMP document out with the API document, and they can reference one another in this way.\nDiscussed in London, remove this.\naccepted", "new_text": "\"permitted\" (required, boolean): indicates whether or not the Captive Portal is open to the requesting host \"user-portal-url\" (required, string): provides the URL of a web portal that can be presented to a user to interact with"}
{"id": "q-en-api-e7ba696dc0f978d059d004dffd6632804908dcfca9332311f98f7dd0c98ec160", "old_text": "\"bytes-remaining\" (optional, integer): indicates the number of bytes left, after which the host will be in a captive state Note that the use of the hmac-key is not defined in this document, but is intended for use in the enforcement step of the Captive Portal Architecture. 3.3. To request the Captive Portal JSON content, a host sends an HTTP GET", "comments": "Satisfies\nlooks good.\nShould we hold the addition of this until we have something that depends on it being in the API?\nWe certainly can, although if we want to use a key as discussed in the WG meeting for the purposes of ICMP validation, having a way to add a key here is nice for removing a bootstrapping step. Hopefully we can get an updated ICMP document out with the API document, and they can reference one another in this way.\nDiscussed in London, remove this.\naccepted", "new_text": "\"bytes-remaining\" (optional, integer): indicates the number of bytes left, after which the host will be in a captive state 3.3. To request the Captive Portal JSON content, a host sends an HTTP GET"}
{"id": "q-en-api-2fcd6b82c58e001e659eac0ce096ff66a918d986f45806f92d562e737b5a659d", "old_text": "it wishes to share with the user (e.g., store info, maps, flight status, or entertainment). \"seconds-remaining\" (optional, integer): indicates the number of seconds remaining, after which the client will be placed into a captive state. The API server SHOULD include this value if the", "comments": "Add key for extending sessions. Implements .\nI want to be clear that there is no extend-session API here. It's just an indication that the user could do that somehow, in a browser.\nNAME correct, there's no specific API to do so. It just indicates if there's a possibility of the user doing so. Future mechanisms for doing so automatically are left for the future =)\nBoolean value to add. Suggested names have been and .\nLGTM", "new_text": "it wishes to share with the user (e.g., store info, maps, flight status, or entertainment). \"can-extend-session\" (optional, boolean): indicates that the URL specified as \"user-portal-url\" allows the user to extend a session once the client is no longer in a state of captivity. This provides a hint that a client system can suggest accessing the portal URL to the user when the session is near its limit in terms of time or bytes. \"seconds-remaining\" (optional, integer): indicates the number of seconds remaining, after which the client will be placed into a captive state. The API server SHOULD include this value if the"}
{"id": "q-en-bundled-responses-6659d95f5271c9e4823416a36e53201960ba12eb78611f2a85747acc7b1eb5b6", "old_text": "\"\"index\"\" (index-section) \"\"manifest\"\" (manifest-section) \"\"critical\"\" (critical-section) \"\"responses\"\" (responses-section)", "comments": "This PR removes the manifest section. The motivation is to factor the \"core\" Web Bundles specification to include just the minimal amount needed to support cases like subresource loading, which does not need a manifest. This and other removed sections can be defined in other documents in the future. Adapted from URL\nWe might want to make sure the impact of removing manifest section from the bundle format on the chromium. NAME NAME Does Chrome have any usages of manifest section? It seems doesn't use it, however, I am wondering whether there is existing use case for that, I am not aware of.\nI never really understood the motivation of this section, since the primary URL response can contain a header pointing to the manifest.\nImagine the situation of application distribution, it is very useful to have a manifest section. For example, we can display the application icon without parsing the HTML of the primary URL. But I agree that the manifest section can be outside the scope of the \"core\" bundle specification. So removing it from the core spec looks good to me.\nLGTM (just for record). Thanks for working on this.", "new_text": "\"\"index\"\" (index-section) \"\"critical\"\" (critical-section) \"\"responses\"\" (responses-section)"}
{"id": "q-en-bundled-responses-6659d95f5271c9e4823416a36e53201960ba12eb78611f2a85747acc7b1eb5b6", "old_text": "4.2.2. The \"manifest\" section records a single URL identifying the manifest of the bundle. The URL MUST refer to a resource with representations contained in the bundle itself. The bundle can contain multiple representations at this URL, and the client is expected to content-negotiate for the best one. For example, a client might select the one matching an \"accept\" header of \"application/manifest+json\" (appmanifest) and an \"accept-language\" header of \"es-419\". 4.2.3. The \"critical\" section consists of the names of sections of the bundle that the client needs to understand in order to load the bundle correctly. Other sections are assumed to be optional.", "comments": "This PR removes the manifest section. The motivation is to factor the \"core\" Web Bundles specification to include just the minimal amount needed to support cases like subresource loading, which does not need a manifest. This and other removed sections can be defined in other documents in the future. Adapted from URL\nWe might want to make sure the impact of removing manifest section from the bundle format on the chromium. NAME NAME Does Chrome have any usages of manifest section? It seems doesn't use it, however, I am wondering whether there is existing use case for that, I am not aware of.\nI never really understood the motivation of this section, since the primary URL response can contain a header pointing to the manifest.\nImagine the situation of application distribution, it is very useful to have a manifest section. For example, we can display the application icon without parsing the HTML of the primary URL. But I agree that the manifest section can be outside the scope of the \"core\" bundle specification. So removing it from the core spec looks good to me.\nLGTM (just for record). Thanks for working on this.", "new_text": "4.2.2. The \"critical\" section consists of the names of sections of the bundle that the client needs to understand in order to load the bundle correctly. Other sections are assumed to be optional."}
{"id": "q-en-bundled-responses-6659d95f5271c9e4823416a36e53201960ba12eb78611f2a85747acc7b1eb5b6", "old_text": "Do all of: If the bundle's contained URLs (e.g. in the manifest and index) are derived from the request for the bundle, percent-encode [1] (URL) any bytes that are greater than 0x7E or are not URL code points [2] (URL) in these URLs. It is particularly important to make sure no unescaped nulls (0x00) or angle brackets (0x3C and 0x3E) appear. Similarly, if the request headers for any contained resource are based on the headers sent while requesting the bundle, only", "comments": "This PR removes the manifest section. The motivation is to factor the \"core\" Web Bundles specification to include just the minimal amount needed to support cases like subresource loading, which does not need a manifest. This and other removed sections can be defined in other documents in the future. Adapted from URL\nWe might want to make sure the impact of removing manifest section from the bundle format on the chromium. NAME NAME Does Chrome have any usages of manifest section? It seems doesn't use it, however, I am wondering whether there is existing use case for that, I am not aware of.\nI never really understood the motivation of this section, since the primary URL response can contain a header pointing to the manifest.\nImagine the situation of application distribution, it is very useful to have a manifest section. For example, we can display the application icon without parsing the HTML of the primary URL. But I agree that the manifest section can be outside the scope of the \"core\" bundle specification. So removing it from the core spec looks good to me.\nLGTM (just for record). Thanks for working on this.", "new_text": "Do all of: If the bundle's contained URLs (e.g. in the index) are derived from the request for the bundle, percent-encode [1] (URL) any bytes that are greater than 0x7E or are not URL code points [2] (URL) in these URLs. It is particularly important to make sure no unescaped nulls (0x00) or angle brackets (0x3C and 0x3E) appear. Similarly, if the request headers for any contained resource are based on the headers sent while requesting the bundle, only"}
{"id": "q-en-capport-wg-architecture-658dbb9aa1626048695481c94064ea77177065685dd87cf35ac8650883f6d5c8", "old_text": "Abstract This document describes a captive portal architecture. DHCP or Router Advertisements, an optional signaling protocol, and an HTTP API are used to provide the solution. The role of Provisioning Domains (PvDs) is described. 1.", "comments": "We never defined what RA was. Do so by expanding it to Router Advertisement in the abstract.\nNote: it's still not clear to me whether we need to cite the RA/DHCP/etc RFCs, or whether they're considered 'well-known' enough to be left alone. Or, is that a question for the rfc editors?\nI'm comfortable with taking DHCP and Router Advertisements as assumed knowledge. Both are on the .\nWe never define what an 'RA' is, but we use that term in section 4.1. We should consider expanding it to the full term (Router Advertisement). We may also want to add an informative reference to it when we first mention it. Note that we have the same problem with DHCP here -- I'm not sure offhand what the best practice is regarding terms that have fairly wide use, but may not be understood by everyone (and even what that set of terms is).\nRA is listed in the abbreviations document: URL but of course \"expand on first use\" seems like sound editorial practice.", "new_text": "Abstract This document describes a captive portal architecture. DHCP or Router Advertisements (RAs), an optional signaling protocol, and an HTTP API are used to provide the solution. The role of Provisioning Domains (PvDs) is described. 1."}
{"id": "q-en-capport-wg-architecture-ab987f38948654c75b4fd7ebb91ef1271e370a5fc7fabc54549b779cc428b221", "old_text": "such cases, it is possible for the components to still uniquely identify the device if they are aware of the port mapping. In some situations, the User Equipment may have multiple IP addresses, while still satisfying all of the recommended properties. This raises some challenges to the components of the network. For example, if the User Equipment tries to access the network with multiple IP addresses, should the Enforcement Device and API Server treat each IP address as a unique User Equipment, or should it tie the multiple addresses together into one view of the subscriber? An implementation MAY do either. Attention should be paid to IPv6 and the fact that it is expected for a device to have multiple IPv6 addresses on a single link. In such cases, identification could be", "comments": "If we're using an IP as an identifier, dual stack can add complexity. Mention this.\nRaised during review that dual-stack is a use-case worth calling out explicitly. Some suggested text: \"Further attention should be paid to a device using dual-stack [rfc4213]: it could have both an IPv4 and an IPv6 address at the same time. There could be no properties in common between the two addresses, meaning that some form of mapping solution could be required to form a single identity from the two address.\"", "new_text": "such cases, it is possible for the components to still uniquely identify the device if they are aware of the port mapping. In some situations, the User Equipment may have multiple IP addresses (either IPv4, IPv6 or a dual-stack RFC4213 combination), while still satisfying all of the recommended properties. This raises some challenges to the components of the network. For example, if the User Equipment tries to access the network with multiple IP addresses, should the Enforcement Device and API Server treat each IP address as a unique User Equipment, or should it tie the multiple addresses together into one view of the subscriber? An implementation MAY do either. Attention should be paid to IPv6 and the fact that it is expected for a device to have multiple IPv6 addresses on a single link. In such cases, identification could be"}
{"id": "q-en-capport-wg-architecture-ef1fb5e881f0d81563cacece8a214300eb5a52913c2e4cc4685f3ef94e829192", "old_text": "2.6. The following diagram shows the communication between each component. In the diagram:", "comments": "Whether or not the user visits the User Portal is optional: the user could choose not to, or the Captive Portal may not provide one. Clarify that the case presented in the component diagram is the one where the User Portal is present, and where the user chooses to visit it.", "new_text": "2.6. The following diagram shows the communication between each component in the case where the Captive Network has a User Portal, and the User Equipment chooses to visit the User Portal in response to discovering and interacting with the API Server. In the diagram:"}
{"id": "q-en-capport-wg-architecture-18958366e9c15061e0eff49aa1246b524041557ca209bb416464acd393b84928", "old_text": "Captive Portal API Server: Also known as API Server. A server hosting the Captive Portal API. Captive Portal Signal: A notification from the network used to inform the User Equipment that the state of its captivity could have changed. Captive Portal Signaling Protocol: Also known as Signaling Protocol.", "comments": "A reviewer pointed out that we used notifying, informing, signalling, and alerting to mean the same thing: sending the Captive Portal Signal. I've chosen to use signal as the verb of choice. The phrasing seems a bit awkward in a few places, so I'm not 100% confident in my choice here.\nAlvaro writes, Let's make this consistent across the entire document. Doesn't have to be alert, but should be consistent.\nAre we using User Equipment or end-user device?", "new_text": "Captive Portal API Server: Also known as API Server. A server hosting the Captive Portal API. Captive Portal Signal: A notification from the network used to signal to the User Equipment that the state of its captivity could have changed. Captive Portal Signaling Protocol: Also known as Signaling Protocol."}
{"id": "q-en-capport-wg-architecture-18958366e9c15061e0eff49aa1246b524041557ca209bb416464acd393b84928", "old_text": "7.5. The Captive Portal Signal could inform the User Equipment that it is being held captive. There is no requirement that the User Equipment do something about this. Devices MAY permit users to disable automatic reaction to Captive Portal Signals indications for privacy reasons. However, there would be the trade-off that the user doesn't get notified when network access is restricted. Hence, end-user devices MAY allow users to manually control captive portal interactions, possibly on the granularity of Provisioning Domains.", "comments": "A reviewer pointed out that we used notifying, informing, signalling, and alerting to mean the same thing: sending the Captive Portal Signal. I've chosen to use signal as the verb of choice. The phrasing seems a bit awkward in a few places, so I'm not 100% confident in my choice here.\nAlvaro writes, Let's make this consistent across the entire document. Doesn't have to be alert, but should be consistent.\nAre we using User Equipment or end-user device?", "new_text": "7.5. The Captive Portal Signal could signal to the User Equipment that it is being held captive. There is no requirement that the User Equipment do something about this. Devices MAY permit users to disable automatic reaction to Captive Portal Signals indications for privacy reasons. However, there would be the trade-off that the user doesn't get notified when network access is restricted. Hence, end- user devices MAY allow users to manually control captive portal interactions, possibly on the granularity of Provisioning Domains."}
{"id": "q-en-capport-wg-architecture-969294612ec558dc6397d4f71191c2164a35ced30c562855230bb52d118c90d8", "old_text": "This document describes a captive portal architecture. Network provisioning protocols such as DHCP or Router Advertisements (RAs), an optional signaling protocol, and an HTTP API are used to provide the solution. The role of Provisioning Domains (PvDs) is described. 1.", "comments": "The abstract seems weak. It should mention the problem domain and all of the components. Not sure if it needs to mention PvD at all.\nPersonally, I think that aside from the PvD mention, which can be removed, the current is mostly OK. Brief is good.", "new_text": "This document describes a captive portal architecture. Network provisioning protocols such as DHCP or Router Advertisements (RAs), an optional signaling protocol, and an HTTP API are used to provide the solution. 1."}
{"id": "q-en-capport-wg-architecture-bbf38e460f61f113a3a126cf77228a04e4f1b74a10aea2d20950f3bbe54be2e1", "old_text": "The User Equipment's access to the outside network continues uninterrupted 5. The authors thank Lorenzo Colitti for providing the majority of the", "comments": "Just realized that, this section may also apply to the case of initial provisioning of Portal URI if multiple ways are used, such as: DHCPv4/DHCPv6/RA could be used at the same time, and they return different values; to quote 7710bis: However, if the URIs learned are not in fact all identical the captive device MUST prioritize URIs learned from network provisioning or configuration mechanisms before all other URIs. Specifically, URIs learned via any of the options in Section 2 should take precedence over any URI learned via some other mechanism, such as a redirect. Shall we mention anything on the initial provisioning case too here? or that will only in 7710bis?\nProbably not a lot, except that future requests might go there. Because we don't hold any state for these resources, that's probably OK.\nNAME do you think that you could take a swing at this?\nBy portal URI, you meant the URL to the json file? If so, it seems the ways how it could be changed are: In the case of DHCPv4/v6,in a renewal/rebind of DHCPv4/v6 lease, the end user equipment could receive a new portal URI in the DHCP renew/rebind response. In the case of RA, the end user device may receive a new portal URI in a new RA message. (In RA case, is it possible that there are multiple RA servers sending out different portal URI?) Is there any other possibilities? Do I understand correctly that you are suggesting add something like the following? If a new Portal URI is provided via DHCPv4/v6 renew/rebind or RA, use the new one. Where shall we add this section please? a new paragraph under 2.2.1. DHCP or Router Advertisements? or a new sub section 4.3 under 4 Solution Workflow?\nRight. I forget whether this was about the API URL (the JSON) or the user portal URL. In either case, they can change and I think that's OK. The RA case is interesting in the sense that it might imply a different router advertising different coordinates, as you say, but if it is the same router, then that should be regarded as a change. This came up in discussion and some people felt that it needed to be acknowledged. Your suggested outcome is probably the only sensible one: if you need to contact the API, use the most recent value you have. As for location in the document, a new Section 4.x describing how updated details can be acquired seems right.\nFixed in\nThanks for doing this. I have a few small suggestions.Looks good. Thanks!", "new_text": "The User Equipment's access to the outside network continues uninterrupted 4.3. A different Captive Portal API URI could be returned in the following cases: If DHCP is used, a lease renewal/rebind may return a different Captive Portal API URI. If RA is used, a new Captive Portal API URI may be specified in a new RA message received by end user equipment. Whenever a new Portal URI is received by end user equipment, it SHOULD discard the old URI and use the new one for future requests to the API. 5. The authors thank Lorenzo Colitti for providing the majority of the"}
{"id": "q-en-capport-wg-architecture-93ab77d80b7036706a9f2f8a126156027806ab54a3c18714722cbe1abf8cd751", "old_text": "addresses on a single link. In such cases, identification could be performed by subnet, such as the /64 to which the IP belongs. 3.4.3. The URIs provided in the API SHOULD contain all the information necessary to render the resources requested: the resources should not depend on ambient information, such as remote address on the connection. This is to ensure that the content served from these URIs is correct and meaningful to the User Equipment, even when accessed from a network other than the one that contains the captive portal. One consequence of this is that URIs provided in the API are expected to be resolved using public global DNS (as defined in Section 2 of RFC8499). Though a URI might still correctly resolve when the UE makes the request from a different network, it is possible that some functions", "comments": "This rewrites the context-free URI section to give more context. It also moves the section out from underneath the example identifiers section, and into a proper sub-section of the top level \"User Equipment Identity\" section. Closes issue\nNAME , I mostly took your text word-for-word. Thanks for it!\nA reviewer writes, I think this is trying to say the server should not be looking at Ethernet addresses, for example, because the server is probably not on the same subnet as the User Equipment. So the info needs to be in the URL. Figure out what we're trying to say, and whether this is the right section for it.\nThat's exactly what it's talking about. That said, while it is related to identifying the UE, it's not an example identifier. It has been a while since I thought about this, and I wasn't at the meeting where it was discussed, but I think the intention here was to limit any URIs provide in the API to these context-free URIs. That is, the resource pointed to by the URI cannot depend on information not present in the URI. 
For example, the API draft (URL) provides a 'user-portal-url'. Suppose that URL is \"captive-URL\". That resource could be implemented by checking the source ethernet address of the connection to see if you're already logged in. But, that won't be correct if you're not in the same subnet. Instead, the URL should be something like: \"captive-URL\" So, this is more of a constraint on the API and how it uses identifiers than an example of an identifier. Did we just get the indentation wrong? Or was this a misunderstanding of what the working group requested?\nMaybe all this needs is a preface explaining how some systems operate. The last two paragraphs are a direct copy of existing text.\nHardly surprising that I think this is OK :)", "new_text": "addresses on a single link. In such cases, identification could be performed by subnet, such as the /64 to which the IP belongs. 3.5. A Captive Portal API needs to present information to clients that is unique to that client. To do this, some systems use information from the context of a request, such as the source address, to identify the UE. Using information from context rather than information from the URI allows the same URI to be used for different clients. However, it also means that the resource is unable to provide relevant information if the UE makes a request using a different network path. This might happen when UE has multiple network interfaces. It might also happen if the address of the API provided by DNS depends on where the query originates (as in split DNS RFC8499). Accessing the API MAY depend on contextual information. However, the URIs provided in the API SHOULD be unique to the UE and not dependent on contextual information to function correctly. Though a URI might still correctly resolve when the UE makes the request from a different network, it is possible that some functions"}
{"id": "q-en-capport-wg-architecture-9dada0de0daa808ca7a948d10ad4a897c5168b22ac2b48d7a30e133f86734c3c", "old_text": "the equipment. At minimum, the API MUST provide: (1) the state of captivity and (2) a URI for the Captive Portal Server. The API SHOULD provide evidence to the caller that it supports the present architecture. When User Equipment receives Captive Portal Signals, the User Equipment MAY query the API to check the state. The User Equipment", "comments": "Everyone was confused by the phrasing that the API needed to provide evidence it supported the architecture. Since it's a proper component of the architecture, it doesn't really need to; that is assumed. Rather, it needs to inform the client of versions/types/etc so that the client can determine whether it supports the API. Rephrase accordingly. Fixes issue\nThanks again for the suggested text, NAME\nA reviewer says, Suggestion: state that a version of the protocol should be included in the API response.\nIsn't the content-type of the API response sufficient indication? That implies that we might need a new content-type if we want to make incompatible changes in a subsequent revision, but I think that is a good plan.\nAt the architecture level, can we state the point more clearly? Maybe better to state what the User Equipment should do if the API response doesn't make sense?\nSo, the fact that we've discovered the API likely means that we're interacting with the architecture -- i.e. we got there from PvD or the dhcp option. So, I'm not sure what it would mean to not support the architecture in that case. It would be a provisioning mistake, I guess? We should suggest what the UE could do in that case, though, for robustness. So, maybe mention that the API should have a content-type in the response to indicate that it is indeed a captive portal API, and then what to do if the UE doesn't recognize that?\nSo I think that the reviewer here was simply confused about the choice of words and phrasing. 
How about: A caller to the API needs to be presented with evidence that the content it is receiving is for a version of the API that it supports. For an HTTP-based interaction, such as in {{API}}, this might be achieved by using a content type that is unique to the protocol.", "new_text": "the equipment. At minimum, the API MUST provide: (1) the state of captivity and (2) a URI for the Captive Portal Server. A caller to the API needs to be presented with evidence that the content it is receiving is for a version of the API that it supports. For an HTTP-based interaction, such as in I-D.ietf-capport-api, this might be achieved by using a content type that is unique to the protocol. When User Equipment receives Captive Portal Signals, the User Equipment MAY query the API to check the state. The User Equipment"}
{"id": "q-en-certificate-compression-7661445a968875726503f2603802fa0a2863b9f5d7ae946d12f1cd8defbd1efe", "old_text": "The length of the Certificate message once it is uncompressed. If after decompression the specified length does not match the actual length, the party receiving the invalid message MUST abort the connection with the \"bad_certificate\" alert. The compressed body of the Certificate message, in the same format as it would normally be expressed in. The compression algorithm", "comments": "State explicitly that we do not define any compression dictionaries. Explain the rationale behind the uncompressed_length field. Addresses comments from John Mattsson on the list.\nActually, I have a small nit.", "new_text": "The length of the Certificate message once it is uncompressed. If after decompression the specified length does not match the actual length, the party receiving the invalid message MUST abort the connection with the \"bad_certificate\" alert. The presence of this field allows the receiver to pre-allocate the buffer for the uncompressed Certificate message and to enforce limits on the message size before performing decompression. The compressed body of the Certificate message, in the same format as it would normally be expressed in. The compression algorithm"}
{"id": "q-en-certificate-compression-7661445a968875726503f2603802fa0a2863b9f5d7ae946d12f1cd8defbd1efe", "old_text": "brotli, the Certificate message MUST be compressed with the Brotli compression algorithm as defined in RFC7932. If the received CompressedCertificate message cannot be decompressed, the connection MUST be torn down with the \"bad_certificate\" alert.", "comments": "State explicitly that we do not define any compression dictionaries. Explain the rationale behind the uncompressed_length field. Addresses comments from John Mattsson on the list.\nActually, I have a small nit.", "new_text": "brotli, the Certificate message MUST be compressed with the Brotli compression algorithm as defined in RFC7932. It is possible to define a certificate compression algorithm that uses a pre-shared dictionary to achieve higher compression ratio. This document does not define any such algorithms. If the received CompressedCertificate message cannot be decompressed, the connection MUST be torn down with the \"bad_certificate\" alert."}
{"id": "q-en-certificate-compression-6d7d57a8f9ad445c99f71bd9d9a449af05e7080140d66ba1a8d66396bf8270d1", "old_text": "message MUST be compressed with the ZLIB compression algorithm, as defined in RFC1950. If the specified compression algorithm is brotli, the Certificate message MUST be compressed with the Brotli compression algorithm as defined in RFC7932. It is possible to define a certificate compression algorithm that uses a pre-shared dictionary to achieve higher compression ratio.", "comments": "Parking this here; if we have a consensus on the list to add it, we should merge this.", "new_text": "message MUST be compressed with the ZLIB compression algorithm, as defined in RFC1950. If the specified compression algorithm is brotli, the Certificate message MUST be compressed with the Brotli compression algorithm as defined in RFC7932. If the specified compression algorithm is zstd, the Certificate message MUST be compressed with the Zstandard compression algorithm as defined in RFC8478. It is possible to define a certificate compression algorithm that uses a pre-shared dictionary to achieve higher compression ratio."}
{"id": "q-en-coap-tcp-tls-f823ec3f6e07113f31105cfef778f332767aa903fca48f9ff37a4ca633e2adb2", "old_text": "4.3.1. An endpoint can use the critical Default-Uri-Host Option to indicate the default value for the Uri-Host Options in the messages that it sends to the other endpoint. Its value MUST be a valid value for the Uri-Host Option (Section 5.10.1 of RFC7252). For TLS, the base value for the Default-Uri-Host Option in the direction from the Connection Initiator to the Connection Acceptor is given by the SNI value. For WebSockets, the base value for the Default-Uri-Host Option in the direction from the Connection Initiator to the Connection Acceptor is given by the HTTP Host header field. The active value of the Default-Uri-Host Option is replaced each time the option is sent with a modified value. Its starting value is its base value (if available). 4.3.2. The sender can use the elective Max-Message-Size Option to indicate the maximum message size in bytes that it can receive.", "comments": "The next pass will add text related to the default URI-Host for TLS SNI cases.", "new_text": "4.3.1. The sender can use the elective Max-Message-Size Option to indicate the maximum message size in bytes that it can receive."}
{"id": "q-en-coap-tcp-tls-f823ec3f6e07113f31105cfef778f332767aa903fca48f9ff37a4ca633e2adb2", "old_text": "the option is sent with a modified value. Its starting value is its base value. 4.3.3. A sender can use the elective Block-wise Transfer Option to indicate that it supports the block-wise transfer protocol RFC7959.", "comments": "The next pass will add text related to the default URI-Host for TLS SNI cases.", "new_text": "the option is sent with a modified value. Its starting value is its base value. 4.3.2. A sender can use the elective Block-wise Transfer Option to indicate that it supports the block-wise transfer protocol RFC7959."}
{"id": "q-en-coap-tcp-tls-f823ec3f6e07113f31105cfef778f332767aa903fca48f9ff37a4ca633e2adb2", "old_text": "Release messages can indicate one or more reasons using elective options. The following options are defined: The elective Bad-Default-Uri-Host Option indicates that the default indicated by the CSM Default-Uri-Host Option is unlikely to be useful for this server. The elective Alternative-Address Option requests the peer to instead open a connection of the same scheme as the present connection to the alternative transport address given. Its value is in the form", "comments": "The next pass will add text related to the default URI-Host for TLS SNI cases.", "new_text": "Release messages can indicate one or more reasons using elective options. The following options are defined: The elective Alternative-Address Option requests the peer to instead open a connection of the same scheme as the present connection to the alternative transport address given. Its value is in the form"}
{"id": "q-en-coap-tcp-tls-f823ec3f6e07113f31105cfef778f332767aa903fca48f9ff37a4ca633e2adb2", "old_text": "successful connection to the Alternative-Address inherits all the security properties of the current connection. SNI vs. Default-Uri-Host: Any security negotiated in the TLS handshake is for the SNI name exchanged in the TLS handshake and checked against the certificate provided by the server. The Default-Uri-Host Option cannot be used to extend these security properties to the additional server name. 9. 9.1.", "comments": "The next pass will add text related to the default URI-Host for TLS SNI cases.", "new_text": "successful connection to the Alternative-Address inherits all the security properties of the current connection. 9. 9.1."}
{"id": "q-en-coap-tcp-tls-efd240c2c38b06fed2c6211767a338fc1553830e362008e558a7c8d833a5975a", "old_text": "environments, CoAP needs to support different transport protocols, namely TCP RFC0793, in some situations secured by TLS RFC5246. In addition, some corporate networks only allow Internet access via a HTTP proxy. In this case, the best transport for CoAP might be the RFC6455. The WebSocket protocol provides two-way communication between a WebSocket client and a WebSocket server after upgrading an RFC7230 connection and may be available in an environment that blocks CoAP over UDP. Another scenario for CoAP over WebSockets is a CoAP application running inside a web browser without access to connectivity other than HTTP and WebSockets. This document specifies how to access resources using CoAP requests and responses over the TCP, TLS and WebSocket protocols. This allows", "comments": "I realize that the specific review feedback was for the Introduction - but might we also want to \"minimize\" the number of WS configurations discussed in Section 4 to focus on the browser motivation?\nWhile the third configuration is not mentioned in the introduction, all three are motivated by in-browser app use cases. So I don\u2019t see a need to cut down the discussion in Section 4. Gr\u00fc\u00dfe, Carsten\n: The introduction motivates the layering of COAP over WebSockets to allow it to be used in a network that only allows external access through a HTTP proxy. In fact, it is not necessary to use WebSockets to do so; HTTP CONNECT provides a tunnel in this scenario. WebSockets is a specialised protocol that allows bidirectional, stream-based transport from a Web browser -- it is effectively TCP wrapped into the Web browser's security model, to make doing so safe. Using it for other purposes (i.e., non-browser use cases) isn't really appropriate. I'd suggest either removing the WebSockets binding, or changing its motivation in the Introduction. 
For reference, the original URL motivation:", "new_text": "environments, CoAP needs to support different transport protocols, namely TCP RFC0793, in some situations secured by TLS RFC5246. CoAP applications running inside a web browser without access to connectivity other than HTTP and the RFC6455 may cross-proxy their CoAP requests via HTTP to a HTTP-to-CoAP cross-proxy or transport them via the WebSocket protocol, which provides two-way communication between a WebSocket client and a WebSocket server after upgrading an RFC7230 connection. This document specifies how to access resources using CoAP requests and responses over the TCP, TLS and WebSocket protocols. This allows"}
{"id": "q-en-coap-tcp-tls-a2cf8951b6cd9360241cd0e3f14d0bc8924270bf8b8874bb8e5dd777f0069039", "old_text": "can send a CoAP Ping Signaling message (see sec-ping) to test the connection and verify that the CoAP server is responsive. 4. CoAP over WebSockets is intentionally similar to CoAP over TCP;", "comments": "1: It is not clear how the protocol reacts to errors from transport layers (e.g. connection failure). The protocol will just inform apps of the events and the app will decide what to do or the protocol itself will do something? The WebSockets case is addressed by RFC6455: -and- After review with the authors, we'll append a clarification for CoAP over TCP in its Connection Health section.", "new_text": "can send a CoAP Ping Signaling message (see sec-ping) to test the connection and verify that the CoAP server is responsive. When the underlying TCP connection is closed or reset, the signaling state and any observation state (see observe-cancel) associated with the reliable connection are removed. In-flight messages may or may not be lost. 4. CoAP over WebSockets is intentionally similar to CoAP over TCP;"}
{"id": "q-en-coap-tcp-tls-a4f5e8912cf8bce90ed23561cb8c0764e0bc42ee8533a2640277ea6e7499cd1a", "old_text": "RFC2119. This document assumes that readers are familiar with the terms and concepts that are used in RFC6455 and RFC7252. The term \"reliable transport\" only refers to stream-based transport protocols such as TCP.", "comments": "Added tag for updates: Removed Observe references scattered throughout the draft Added appendix to collect clarifications to RFC7641 in one place\nCaptured from : There needs to be a section on how resources can be observed over TCP/TLS. Questions include: Do servers need to include a sequence number in notifications, given that TCP/TLS provides reliable, in-order delivery? How do servers throttle the stream of notifications when the resource changes its state more frequently than the server can send notifications? Does a client have to re-register its interest in a resource if it hasn't received a notification for a while but the connection is still up?\nURL In a conference call with Klaus, Simon, Bill and myself we discussed the issue and the raised questions. QUESTION 1: Do servers need to include a sequence number in notifications, given that TCP/TLS provides reliable, in-order delivery? The text from the CoAP over Websockets could be re-used. Here is what it could say: Since the TCP provides ordered delivery of messages, the mechanism for reordering detection when observing resources [RFC7641] is not needed. The value of the Observe Option in notifications therefore MAY be empty on transmission and MUST be ignored on reception. The Observe Option still has to be sent since it contains other information as well. QUESTION 2: Does a client have to re-register its interest in a resource if it hasn't received a notification for a while but the connection is still up? We believe it is enough to send keep-alive message for the CoAP over TCP draft (as opposed to send a re-registration message when used over UDP). 
QUESTION 3: How do servers throttle the stream of notifications when the resource changes its state more frequently than the server can send notifications? TCP offers a flow control mechanism as a means for the receiver to govern the amount of data sent by the sender. This is functionality that can be re-used.\nURL (1) -- I agree with the proposed solution. (2) -- we need text that empty messages are no-ops and can be used as a keep-alive mechanism. (3) -- this is a problem that most applications that start using reliable streams run into. Early implementations sometimes ignore the need for proper flow control. That is not a reason to add more mechanism (that would also not be in those early implementations), but the solution should be to use the flow control mechanisms we already have. Maybe an LWIG document could have information about flow control support in implementations, e.g., URL\nURL Has someone thought about the following setup, that is, will it work with keep-alives only? There is a cluster behind a load balancer. A client sets up a connection, is assigned to one node and starts observing. For some reason the node fails and through a fail-over strategy the client is handed over to another node. All further requests are answered transparently by the new node. However, the Observe relationship got lost...\nURL Subissue 1 is covered by the Websockets draft; close this when merged. Subissue 2 is covered by text in the Websockets draft; the failover problem described can be addressed by proper implementation of failover. This may however be costly, so we should have some more discussion about this. Subissue 3 is an implementation quality issue. 
Just don't implement stream-based without some flow control.\nOn the 7/6 call with the authors, NAME agreed to review and determine whether all issues were addressed.\nThis issue does not include a question about the use of confirmable messages for notifications in to determine client aliveness and interest: A notification can be confirmable or non-confirmable ... An acknowledgement message signals to the server that the client is alive and interested in receiving further notifications; if the server does not receive an acknowledgement in reply to a confirmable notification, it will assume that the client is no longer interested and will eventually remove the associated entry from the list of observers This is inappropriate for CoAP over reliable transports.\nThe phrasing is a bit unfortunate. What is meant is \"if the confirmable notification times out\". The same would be true in TCP -- if the connection times out, you remove the interests. We could clarify this in a section of CoAP-TCP about the impact of no longer sending back and forth acknowledgements.\nDiscussing the scope of this issue with the authors. This probably needs a more complete treatment rather than small references like: Since the TCP protocol provides ordered delivery of messages, the mechanism for reordering detection when observing resources [RFC7641] is not needed. The value of the Observe Option in notifications MAY be empty on transmission and MUST be ignored on reception.\nMy reading of the discussion thread gives me the impression that we had closed the initial set of questions. The subsequently raised issue regarding the wording in RFC 7641 I am wondering whether there is any change needed. We definitely are not going to change RFC 7641. I personally don't think that the highlighted text from RFC7641 will be wrongly implemented. I would suggest to close issue.\nNAME - this would be - any comments on that pull request?", "new_text": "RFC2119. 
This document assumes that readers are familiar with the terms and concepts that are used in RFC6455, RFC7252, and RFC7641. The term \"reliable transport\" only refers to stream-based transport protocols such as TCP."}
{"id": "q-en-coap-tcp-tls-a4f5e8912cf8bce90ed23561cb8c0764e0bc42ee8533a2640277ea6e7499cd1a", "old_text": "Retransmission and deduplication of messages is provided by the TCP/ TLS protocol. Since the TCP protocol provides ordered delivery of messages, the mechanism for reordering detection when RFC7641 is not needed. The value of the Observe Option in notifications MAY be empty on transmission and MUST be ignored on reception. 3. CoAP over WebSockets can be used in a number of configurations. The", "comments": "Added tag for updates: Removed Observe references scattered throughout the draft Added appendix to collect clarifications to RFC7641 in one place\nCaptured from : There needs to be a section on how resources can be observed over TCP/TLS. Questions include: Do servers need to include a sequence number in notifications, given that TCP/TLS provides reliable, in-order delivery? How do servers throttle the stream of notifications when the resource changes its state more frequently than the server can send notifications? Does a client have to re-register its interest in a resource if it hasn't received a notification for a while but the connection is still up?\nURL In a conference call with Klaus, Simon, Bill and myself we discussed the issue and the raised questions. QUESTION 1: Do servers need to include a sequence number in notifications, given that TCP/TLS provides reliable, in-order delivery? The text from the CoAP over Websockets could be re-used. Here is what it could say: Since the TCP provides ordered delivery of messages, the mechanism for reordering detection when observing resources [RFC7641] is not needed. The value of the Observe Option in notifications therefore MAY be empty on transmission and MUST be ignored on reception. The Observe Option still has to be sent since it contains other information as well. 
QUESTION 2: Does a client have to re-register its interest in a resource if it hasn't received a notification for a while but the connection is still up? We believe it is enough to send keep-alive message for the CoAP over TCP draft (as opposed to send a re-registration message when used over UDP). QUESTION 3: How do servers throttle the stream of notifications when the resource changes its state more frequently than the server can send notifications? TCP offers a flow control mechanism as a means for the receiver to govern the amount of data sent by the sender. This is functionality that can be re-used.\nURL (1) -- I agree with the proposed solution. (2) -- we need text that empty messages are no-ops and can be used as a keep-alive mechanism. (3) -- this is a problem that most applications that start using reliable streams run into. Early implementations sometimes ignore the need for proper flow control. That is not a reason to add more mechanism (that would also not be in those early implementations), but the solution should be to use the flow control mechanisms we already have. Maybe an LWIG document could have information about flow control support in implementations, e.g., URL\nURL Has someone thought about the following setup, that is, will it work with keep-alives only? There is a cluster behind a load balancer. A client sets up a connection, is assigned to one node and starts observing. For some reason the node fails and through a fail-over strategy the client is handed over to another node. All further requests are answered transparently by the new node. However, the Observe relationship got lost...\nURL Subissue 1 is covered by the Websockets draft; close this when merged. Subissue 2 is covered by text in the Websockets draft; the failover problem described can be addressed by proper implementation of failover. This may however be costly, so we should have some more discussion about this. Subissue 3 is an implementation quality issue. 
Just don't implement stream-based without some flow control.\nOn the 7/6 call with the authors, NAME agreed to review and determine whether all issues were addressed.\nThis issue does not include a question about the use of confirmable messages for notifications in to determine client aliveness and interest: A notification can be confirmable or non-confirmable ... An acknowledgement message signals to the server that the client is alive and interested in receiving further notifications; if the server does not receive an acknowledgement in reply to a confirmable notification, it will assume that the client is no longer interested and will eventually remove the associated entry from the list of observers This is inappropriate for CoAP over reliable transports.\nThe phrasing is a bit unfortunate. What is meant is \"if the confirmable notification times out\". The same would be true in TCP -- if the connection times out, you remove the interests. We could clarify this in a section of CoAP-TCP about the impact of no longer sending back and forth acknowledgements.\nDiscussing the scope of this issue with the authors. This probably needs a more complete treatment rather than small references like: Since the TCP protocol provides ordered delivery of messages, the mechanism for reordering detection when observing resources [RFC7641] is not needed. The value of the Observe Option in notifications MAY be empty on transmission and MUST be ignored on reception.\nMy reading of the discussion thread gives me the impression that we had closed the initial set of questions. The subsequently raised issue regarding the wording in RFC 7641 I am wondering whether there is any change needed. We definitely are not going to change RFC 7641. I personally don't think that the highlighted text from RFC7641 will be wrongly implemented. 
I would suggest to close issue.\nNAME - this would be - any comments on that pull request?", "new_text": "Retransmission and deduplication of messages is provided by the TCP/ TLS protocol. 3. CoAP over WebSockets can be used in a number of configurations. The"}
{"id": "q-en-coap-tcp-tls-a4f5e8912cf8bce90ed23561cb8c0764e0bc42ee8533a2640277ea6e7499cd1a", "old_text": "distinction between Confirmable or Non-Confirmable messages, and does not provide Acknowledgement or Reset messages. Since the WebSocket protocol provides ordered delivery of messages, the mechanism for reordering detection when RFC7641 is not needed. The value of the Observe Option in notifications MAY be empty on transmission and MUST be ignored on reception. 3.4. When a client does not receive any response for some time after", "comments": "Added tag for updates: Removed Observe references scattered throughout the draft Added appendix to collect clarifications to RFC7641 in one place\nCaptured from : There needs to be a section on how resources can be observed over TCP/TLS. Questions include: Do servers need to include a sequence number in notifications, given that TCP/TLS provides reliable, in-order delivery? How do servers throttle the stream of notifications when the resource changes its state more frequently than the server can send notifications? Does a client have to re-register its interest in a resource if it hasn't received a notification for a while but the connection is still up?\nURL In a conference call with Klaus, Simon, Bill and myself we discussed the issue and the raised questions. QUESTION 1: Do servers need to include a sequence number in notifications, given that TCP/TLS provides reliable, in-order delivery? The text from the CoAP over Websockets could be re-used. Here is what it could say: Since the TCP provides ordered delivery of messages, the mechanism for reordering detection when observing resources [RFC7641] is not needed. The value of the Observe Option in notifications therefore MAY be empty on transmission and MUST be ignored on reception. The Observe Option still has to be sent since it contains other information as well. 
QUESTION 2: Does a client have to re-register its interest in a resource if it hasn't received a notification for a while but the connection is still up? We believe it is enough to send keep-alive message for the CoAP over TCP draft (as opposed to send a re-registration message when used over UDP). QUESTION 3: How do servers throttle the stream of notifications when the resource changes its state more frequently than the server can send notifications? TCP offers a flow control mechanism as a means for the receiver to govern the amount of data sent by the sender. This is functionality that can be re-used.\nURL (1) -- I agree with the proposed solution. (2) -- we need text that empty messages are no-ops and can be used as a keep-alive mechanism. (3) -- this is a problem that most applications that start using reliable streams run into. Early implementations sometimes ignore the need for proper flow control. That is not a reason to add more mechanism (that would also not be in those early implementations), but the solution should be to use the flow control mechanisms we already have. Maybe an LWIG document could have information about flow control support in implementations, e.g., \u200bURL\nURL Has someone thought about the following setup, that is, will it work with keep-alives only? There is a cluster behind a load balancer. I client sets up a connection, is assigned to one node and starts observing. For some reason the node fails and through a fail-over strategy the client is handed over to another node. All further requests are answered transparently be the new node. However, the Observe relationship got lost...\nURL Subissue 1 is covered by the Websockets draft; close this when merged. Subissue 2 is covered by text in the Websockets draft; the failover problem described can be addressed by proper implementation of failover. This may however be costly, so we should have some more discussion about this. Subissue 3 is an implementation quality issue. 
Just don't implement stream-based without some flow control.\nOn the 7/6 call with the authors, NAME agreed to review and determine whether all issues were addressed.\nThis issue does not include a question about the use of confirmable messages for notifications in to determine client aliveness and interest: A notification can be confirmable or non-confirmable ... An acknowledgement message signals to the server that the client is alive and interested in receiving further notifications; if the server does not receive an acknowledgement in reply to a confirmable notification, it will assume that the client is no longer interested and will eventually remove the associated entry from the list of observers This is inappropriate for CoAP over reliable transports.\nThe phrasing is a bit unfortunate. What is meant is \"if the confirmable notification times out\". The same would be true in TCP -- if the connection times out, you remove the interests. We could clarify this in a section of CoAP-TCP about the impact of no longer sending back and forth acknowledgements.\nDiscussing the scope of this issue with the authors. This probably needs a more complete treatment rather than small references like: Since the TCP protocol provides ordered delivery of messages, the mechanism for reordering detection when observing resources [RFC7641] is not needed. The value of the Observe Option in notifications MAY be empty on transmission and MUST be ignored on reception.\nMy reading of the discussion thread gives me the impression that we had closed the initial set of questions. The subsequently raised issue regarding the wording in RFC 7641 I am wondering whether there is any change needed. We definitely are not going to change RFC 7641. I personally don't think that the highlighted text from RFC7641 will be wrongly implemented. 
I would suggest to close issue.\nNAME - this would be - any comments on that pull request?", "new_text": "distinction between Confirmable or Non-Confirmable messages, and does not provide Acknowledgement or Reset messages. 3.4. When a client does not receive any response for some time after"}
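For contrast, the reordering test that ordered delivery makes unnecessary can be sketched. This is the RFC 7641 freshness rule for notifications over UDP (24-bit Observe values, 128-second guard time), written as an illustrative Python check; function and parameter names are hypothetical, not from either draft.

```python
OBSERVE_WINDOW = 1 << 23  # half the 24-bit Observe sequence-number space

def is_newer(v1: int, t1: float, v2: int, t2: float) -> bool:
    """RFC 7641 rule: incoming notification (v2, received at t2) is newer
    than the last accepted one (v1, received at t1) if the sequence number
    moved forward less than 2^23 (mod 2^24), or 128 seconds have passed."""
    return ((v1 < v2 and v2 - v1 < OBSERVE_WINDOW) or
            (v1 > v2 and v1 - v2 > OBSERVE_WINDOW) or
            t2 > t1 + 128)
```

Over TCP, TLS, or WebSockets this whole check collapses to "accept every notification", which is why the records above allow an empty Observe value.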
{"id": "q-en-coap-tcp-tls-a4f5e8912cf8bce90ed23561cb8c0764e0bc42ee8533a2640277ea6e7499cd1a", "old_text": "RFC6455. All requests for which the CoAP client has not received a response yet are cancelled when the connection is closed. If the client observes one or more resources over the WebSocket connection, then the CoAP server (or intermediary in the role of the CoAP server) MUST remove all entries associated with the client from the lists of observers when the connection is closed. 4.", "comments": "Added tag for updates: Removed Observe references scattered throughout the draft Added appendix to collect clarifications to RFC7641 in one place\nCaptured from : There needs to be a section on how resources can be observed over TCP/TLS. Questions include: Do servers need to include a sequence number in notifications, given that TCP/TLS provides reliable, in-order delivery? How do servers throttle the stream of notifications when the resource changes its state more frequently than the server can send notifications? Does a client have to re-register its interest in a resource if it hasn't received a notification for a while but the connection is still up?\nURL In a conference call with Klaus, Simon, Bill and myself we discussed the issue and the raised questions. QUESTION 1: Do servers need to include a sequence number in notifications, given that TCP/TLS provides reliable, in-order delivery? The text from the CoAP over Websockets could be re-used. Here is what it could say: Since the TCP provides ordered delivery of messages, the mechanism for reordering detection when observing resources [RFC7641] is not needed. The value of the Observe Option in notifications therefore MAY be empty on transmission and MUST be ignored on reception. The Observe Option still has to be sent since it contains other information as well. QUESTION 2: Does a client have to re-register its interest in a resource if it hasn't received a notification for a while but the connection is still up? 
We believe it is enough to send keep-alive message for the CoAP over TCP draft (as opposed to send a re-registration message when used over UDP). QUESTION 3: How do servers throttle the stream of notifications when the resource changes its state more frequently than the server can send notifications? TCP offers a flow control mechanism as a means for the receiver to govern the amount of data sent by the sender. This is functionality that can be re-used.\nURL (1) -- I agree with the proposed solution. (2) -- we need text that empty messages are no-ops and can be used as a keep-alive mechanism. (3) -- this is a problem that most applications that start using reliable streams run into. Early implementations sometimes ignore the need for proper flow control. That is not a reason to add more mechanism (that would also not be in those early implementations), but the solution should be to use the flow control mechanisms we already have. Maybe an LWIG document could have information about flow control support in implementations, e.g., \u200bURL\nURL Has someone thought about the following setup, that is, will it work with keep-alives only? There is a cluster behind a load balancer. I client sets up a connection, is assigned to one node and starts observing. For some reason the node fails and through a fail-over strategy the client is handed over to another node. All further requests are answered transparently be the new node. However, the Observe relationship got lost...\nURL Subissue 1 is covered by the Websockets draft; close this when merged. Subissue 2 is covered by text in the Websockets draft; the failover problem described can be addressed by proper implementation of failover. This may however be costly, so we should have some more discussion about this. Subissue 3 is an implementation quality issue. 
Just don't implement stream-based without some flow control.\nOn the 7/6 call with the authors, NAME agreed to review and determine whether all issues were addressed.\nThis issue does not include a question about the use of confirmable messages for notifications in to determine client aliveness and interest: A notification can be confirmable or non-confirmable ... An acknowledgement message signals to the server that the client is alive and interested in receiving further notifications; if the server does not receive an acknowledgement in reply to a confirmable notification, it will assume that the client is no longer interested and will eventually remove the associated entry from the list of observers This is inappropriate for CoAP over reliable transports.\nThe phrasing is a bit unfortunate. What is meant is \"if the confirmable notification times out\". The same would be true in TCP -- if the connection times out, you remove the interests. We could clarify this in a section of CoAP-TCP about the impact of no longer sending back and forth acknowledgements.\nDiscussing the scope of this issue with the authors. This probably needs a more complete treatment rather than small references like: Since the TCP protocol provides ordered delivery of messages, the mechanism for reordering detection when observing resources [RFC7641] is not needed. The value of the Observe Option in notifications MAY be empty on transmission and MUST be ignored on reception.\nMy reading of the discussion thread gives me the impression that we had closed the initial set of questions. The subsequently raised issue regarding the wording in RFC 7641 I am wondering whether there is any change needed. We definitely are not going to change RFC 7641. I personally don't think that the highlighted text from RFC7641 will be wrongly implemented. I would suggest to close issue.\nNAME - this would be - any comments on that pull request?", "new_text": "RFC6455. 
All requests for which the CoAP client has not received a response yet are cancelled when the connection is closed. 4."}
{"id": "q-en-cose-spec-3f8676c05b600c2d77464b39f8d7b96bcf3909b182478623e21e81e3a447502c", "old_text": "method allowed for doing MAC-ed messages. In COSE, all of the key management methods can be used for MAC-ed messages. 5.2.2. In key wrapping mode, the CEK is randomly generated and that key is", "comments": "Correct descriptions of Direct and Key Agreement Direct Change examples in the back to have \"real\" examples - they are JSON rather than CBOR, but that makes them shorter and easier to read.", "new_text": "method allowed for doing MAC-ed messages. In COSE, all of the key management methods can be used for MAC-ed messages. The COSE_encrypt structure for the recipient is organized as follows: The 'protected', 'iv', 'aad', 'ciphertext' and 'recipients' fields MUST be null. The plain text to be encrypted is the key from the next layer down (usually the content layer). At a minimum, the 'unprotected' field SHOULD contain the 'alg' parameter as well as a parameter identifying the shared secret. 5.2.2. In key wrapping mode, the CEK is randomly generated and that key is"}
{"id": "q-en-cose-spec-3f8676c05b600c2d77464b39f8d7b96bcf3909b182478623e21e81e3a447502c", "old_text": "The COSE_encrypt structure for the recipient is organized as follows: 5.2.5. Key Agreement with Key Wrapping uses a randomly generated CEK. The", "comments": "Correct descriptions of Direct and Key Agreement Direct Change examples in the back to have \"real\" examples - they are JSON rather than CBOR, but that makes them shorter and easier to read.", "new_text": "The COSE_encrypt structure for the recipient is organized as follows: The 'protected', 'aad', 'iv', and 'tag' fields all use the 'null' value. At a minimum, the 'unprotected' field SHOULD contain the 'alg' parameter as well as a parameter identifying the asymmetric key. The 'unprotected' field MUST contain the 'epk' parameter. 5.2.5. Key Agreement with Key Wrapping uses a randomly generated CEK. The"}
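The two "direct" recipient layouts described in these records can be illustrated with a small sketch. As in the records' own comments, JSON-style dicts are used instead of CBOR for readability; the function names and placeholder values are illustrative, not normative COSE structures.

```python
def direct_shared_secret_recipient(kid):
    # Direct use of a shared secret: the structural fields are null and the
    # 'unprotected' map carries 'alg' plus a parameter identifying the secret.
    return {"protected": None, "iv": None, "aad": None,
            "ciphertext": None, "recipients": None,
            "unprotected": {"alg": "dir", "kid": kid}}

def direct_key_agreement_recipient(alg, kid, epk):
    # Direct key agreement: 'protected', 'aad', 'iv' and 'tag' are null;
    # the 'epk' (ephemeral public key) parameter MUST be present.
    return {"protected": None, "aad": None, "iv": None, "tag": None,
            "unprotected": {"alg": alg, "kid": kid, "epk": epk}}
```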
{"id": "q-en-cose-spec-70798cdb730ea6dbb8d73307bc10d27d5cf2a6db34e527fff50e611d474158b0", "old_text": "This parameter holds a part of the IV value. When using the COSE_Encrypted structure, frequently a portion of the IV is part of the context associated with the key value. This field is used to carry the portion of the IV that changes for each message. As the IV is authenticated by the encryption process, this value can be placed in the unprotected header bucket. The 'Initialization Vector' and 'Partial Initialization Vector' parameters MUST NOT be present in the same security layer. The final IV is generated by concatenating the fixed portion of the IV, a zero string and the changing portion of the IV. The length of the zero string is computed by taking the required IV length and subtracting the lengths of the fixed and changing IV portions. This parameter holds a counter signature value. Counter signatures provide a method of having a second party sign some", "comments": "This was inspired by the IV computation for TLS. It may or may not make things more secure. The old behavior is easy to get by making the context IV have trailing zeros.", "new_text": "This parameter holds a part of the IV value. When using the COSE_Encrypted structure, frequently a portion of the IV is part of the context associated with the key value. This field is used to carry a value that causes the IV to be changed for each message. As the IV is authenticated by the encryption process, this value can be placed in the unprotected header bucket. The 'Initialization Vector' and 'Partial Initialization Vector' parameters MUST NOT be present in the same security layer. The message IV is generated by the following steps: Left pad the partial IV with zeros to the length of IV. XOR the padded partial IV with the context IV. This parameter holds a counter signature value. Counter signatures provide a method of having a second party sign some"}
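The two IV-construction steps in the replacement text above (left-pad the Partial IV with zeros to the IV length, then XOR with the context IV) translate directly to code. A minimal sketch, with hypothetical function and argument names:

```python
def message_iv(context_iv: bytes, partial_iv: bytes) -> bytes:
    """Derive the per-message IV from the fixed context IV and the
    Partial IV carried in the unprotected header bucket."""
    if len(partial_iv) > len(context_iv):
        raise ValueError("Partial IV longer than the IV")
    # Step 1: left-pad the partial IV with zeros to the IV length.
    padded = partial_iv.rjust(len(context_iv), b"\x00")
    # Step 2: XOR the padded partial IV with the context IV.
    return bytes(a ^ b for a, b in zip(context_iv, padded))
```

Note that the pre-TLS-style behavior mentioned in the comment (concatenating fixed and changing portions) falls out of this scheme whenever the context IV has trailing zeros.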
{"id": "q-en-cose-spec-d217840cb387fed0d6159ea2a053797eedb6873f0d7f499c791d2037e87e50d0", "old_text": "If the 'alg' field present, it MUST match the HMAC algorithm being used. If the 'key_ops' field is present, it MUST include 'sign' when creating an HMAC authentication tag. If the 'key_ops' field is present, it MUST include 'verify' when verifying an HMAC authentication tag. Implementations creating and validating MAC values MUST validate that the key type, key length, and algorithm are correct and appropriate", "comments": "Create MAC as separate key operations per issue\nShould we attempt to add the MACverify and MACcreate elements to the key_ops field since we have decided to distinguish between a signature and a MAC - or do we keep the same signature/verify tags that are currently used in the JWK document?", "new_text": "If the 'alg' field is present, it MUST match the HMAC algorithm being used. If the 'key_ops' field is present, it MUST include 'MAC create' when creating an HMAC authentication tag. If the 'key_ops' field is present, it MUST include 'MAC verify' when verifying an HMAC authentication tag. Implementations creating and validating MAC values MUST validate that the key type, key length, and algorithm are correct and appropriate"}
{"id": "q-en-cose-spec-d217840cb387fed0d6159ea2a053797eedb6873f0d7f499c791d2037e87e50d0", "old_text": "If the 'alg' field present, it MUST match the AES-MAC algorithm being used. If the 'key_ops' field is present, it MUST include 'sign' when creating an AES-MAC authentication tag. If the 'key_ops' field is present, it MUST include 'verify' when verifying an AES-MAC authentication tag. 9.2.1.", "comments": "Create MAC as separate key operations per issue\nShould we attempt to add the MACverify and MACcreate elements to the key_ops field since we have decided to distinguish between a signature and a MAC - or do we keep the same signature/verify tags that are currently used in the JWK document?", "new_text": "If the 'alg' field is present, it MUST match the AES-MAC algorithm being used. If the 'key_ops' field is present, it MUST include 'MAC create' when creating an AES-MAC authentication tag. If the 'key_ops' field is present, it MUST include 'MAC verify' when verifying an AES-MAC authentication tag. 9.2.1."}
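The 'MAC create' / 'MAC verify' key_ops rules above amount to a simple gate before using a key. A minimal sketch; the dict-based key representation is an assumption for illustration, not COSE wire format:

```python
def check_key_ops(key: dict, operation: str) -> bool:
    """Return True if the key may be used for `operation`.
    key_ops is optional: when absent, no restriction applies; when
    present, the requested operation MUST be listed."""
    ops = key.get("key_ops")
    return ops is None or operation in ops
```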
{"id": "q-en-cose-spec-5617b04aad226fc780d9e5bd5238a1dc2256de49a5f0bb6a78bf188633f1ee16", "old_text": "cryptographic key, we use the term \"label\" for this usage of either an integer or a string to identify map keys and choice data items. 2. The COSE_MSG structure is a top level CBOR object that corresponds to", "comments": "MTI Algorithm requirements Expand RSASSA-PSS section Correct one coordinate EC for endian Hint to get binary versions of examples Correct kid to binary in the examples (except for static public key one).", "new_text": "cryptographic key, we use the term \"label\" for this usage of either an integer or a string to identify map keys and choice data items. 1.5. One of the standard issues that is specified in IETF cryptographic algorithms is a requirement that a standard specify a set of minimal algorithms that are required to be implemented. This is done to promote interoperability as it provides a minimal set of algorithms that all devices can be sure will exist at both ends. However, we have elected not to specify a set of mandatory algorithms in this document. It is expected that COSE is going to be used in a wide variety of applications and on a wide variety of devices. Many of the constrained devices are going to be set up to use a small fixed set of algorithms, and this set of algorithms may not match those available on a device. We therefore have deferred to the application protocols the decision of what to specify for mandatory algorithms. Since the set of algorithms in an environment of constrained devices may depend on what the set of devices are and how long they have been in operation, we want to highlight that application protocols will need to specify some type of discovery method of algorithm capabilities. The discovery method may range from requiring preconfiguration of the set of algorithms to providing a discovery method built into the protocol. 
S/MIME provided a number of different ways to approach the problem: Advertising in the message (S/MIME capabilities) Advertising in the certificate (capabilities extension) Minimum requirements for S/MIME, which have been updated over time 2. The COSE_MSG structure is a top level CBOR object that corresponds to"}
{"id": "q-en-cose-spec-5617b04aad226fc780d9e5bd5238a1dc2256de49a5f0bb6a78bf188633f1ee16", "old_text": "9. 9.1. 9.2. ECDSA DSS defines a signature algorithm using ECC.", "comments": "MTI Algorithm requirements Expand RSASSA-PSS section Correct one coordinate EC for endian Hint to get binary versions of examples Correct kid to binary in the examples (except for static public key one).", "new_text": "9. M00TODO: Put in generalities about signature algorithms and parameters. 9.1. ECDSA DSS defines a signature algorithm using ECC."}
{"id": "q-en-cose-spec-5617b04aad226fc780d9e5bd5238a1dc2256de49a5f0bb6a78bf188633f1ee16", "old_text": "Signature = I2OSP(R, n) | I2OSP(S, n) where n = ceiling(key_length / 8) 9.2.1. On of the issues that needs to be discussed is substitution attacks. There are two different things that can potentially be substituted in", "comments": "MTI Algorithm requirements Expand RSASSA-PSS section Correct one coordinate EC for endian Hint to get binary versions of examples Correct kid to binary in the examples (except for static public key one).", "new_text": "Signature = I2OSP(R, n) | I2OSP(S, n) where n = ceiling(key_length / 8) 9.1.1. One of the issues that needs to be discussed is substitution attacks. There are two different things that can potentially be substituted in"}
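The encoding quoted in this record, Signature = I2OSP(R, n) | I2OSP(S, n) with n = ceiling(key_length / 8), can be written directly; I2OSP is the big-endian integer-to-octet-string primitive from PKCS #1 (RFC 3447). A minimal sketch with illustrative names:

```python
import math

def encode_ecdsa_signature(r: int, s: int, key_length_bits: int) -> bytes:
    """Concatenate fixed-width big-endian encodings of R and S.
    I2OSP(x, n) pads with leading zero octets rather than trimming them."""
    n = math.ceil(key_length_bits / 8)
    return r.to_bytes(n, "big") + s.to_bytes(n, "big")
```

For P-256 (key_length 256) this always yields a 64-octet signature, regardless of how small R and S happen to be.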
{"id": "q-en-cose-spec-5617b04aad226fc780d9e5bd5238a1dc2256de49a5f0bb6a78bf188633f1ee16", "old_text": "including the signature algorithm identifier in the data to be signed. 10. Message Authentication Codes (MACs) provide data authentication and", "comments": "MTI Algorithm requirements Expand RSASSA-PSS section Correct one coordinate EC for endian Hint to get binary versions of examples Correct kid to binary in the examples (except for static public key one).", "new_text": "including the signature algorithm identifier in the data to be signed. 9.2. The RSASSA-PSS signature algorithm is defined in RFC3447. The RSASSA-PSS signature algorithm is parameterized with a hash function, a mask generation function and a salt length (sLen). For this specification, the mask generation function is fixed to be MGF1 as defined in RFC3447. It has been recommended that the same hash function be used for hashing the data as well as in the mask generation function; for this specification we follow this recommendation. The salt length is the same length as the hash function output. Three algorithms are defined in this document. These algorithms are: This uses the hash algorithm SHA-256 for signature processing. The value used for this algorithm is -10. The key type used for this algorithm is 'RSA'. This uses the hash algorithm SHA-384 for signature processing. The value used for this algorithm is \"PS384\". The key type used for this algorithm is 'RSA'. This uses the hash algorithm SHA-512 for signature processing. The value used for this algorithm is -11. The key type used for this algorithm is 'RSA'. There are no algorithm parameters defined for these signature algorithms. A summary of the algorithm definitions can be found in table-rsa-algs. 9.2.1. Key size: is there a MUST for 2048, or do we need to specify a minimum here? 10. Message Authentication Codes (MACs) provide data authentication and"}
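MGF1, which the RSASSA-PSS text above fixes as the mask generation function, is short enough to sketch from its RFC 3447 definition: concatenate Hash(seed || counter) blocks until enough mask octets are produced. The hash-name parameter is illustrative; the record pairs the MGF hash with the message hash.

```python
import hashlib

def mgf1(seed: bytes, length: int, hash_name: str = "sha256") -> bytes:
    """MGF1 (RFC 3447): T = Hash(seed || C_0) || Hash(seed || C_1) || ...
    where C_i is a 4-octet big-endian counter; return the first `length`
    octets of T."""
    hlen = hashlib.new(hash_name).digest_size
    blocks = (length + hlen - 1) // hlen
    t = b"".join(hashlib.new(hash_name, seed + c.to_bytes(4, "big")).digest()
                 for c in range(blocks))
    return t[:length]
```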
{"id": "q-en-cose-spec-5617b04aad226fc780d9e5bd5238a1dc2256de49a5f0bb6a78bf188633f1ee16", "old_text": "the future and private curves can be used as well. contains the x coordinate for the EC point. The integer is converted to an octet string as defined in SEC1. Zero octets MUST NOT be removed from the front of the octet string. [CREF21] contains either the sign bit or the value of y coordinate for the EC point. For the value, the integer is converted to an octet string as defined in SEC1. Zero octets MUST NOT be removed from the front of the octet string. For the sign bit, the value is true if the value of y is positive. contains the private key.", "comments": "MTI Algorithm requirements Expand RSASSA-PSS section Correct one coordinate EC for endian Hint to get binary versions of examples Correct kid to binary in the examples (except for static public key one).", "new_text": "the future and private curves can be used as well. contains the x coordinate for the EC point. The integer is converted to an octet string using ???. Note that the octet string represents a little-endian encoding of x. [CREF21] contains the private key."}
{"id": "q-en-cose-spec-5617b04aad226fc780d9e5bd5238a1dc2256de49a5f0bb6a78bf188633f1ee16", "old_text": "This contains a pointer to the public specification for the field if one exists This registry will be initially populated by the values in COSE_KEY_PARAM_KEYS. The specification column for all of these entries will be this document. 15.8.", "comments": "MTI Algorithm requirements Expand RSASSA-PSS section Correct one coordinate EC for endian Hint to get binary versions of examples Correct kid to binary in the examples (except for static public key one).", "new_text": "This contains a pointer to the public specification for the field if one exists This registry will be initially populated by the values in --Multiple Tables--. The specification column for all of these entries will be this document. 15.8."}
{"id": "q-en-cose-spec-5617b04aad226fc780d9e5bd5238a1dc2256de49a5f0bb6a78bf188633f1ee16", "old_text": "[CREF20] JLS: Do we create a registry for curves? Is is the same registry for both EC1 and EC2? [CREF21] JLS: Should we use the integer encoding for x, y and d instead of bstr? [CREF22] JLS: Should we use the integer encoding for x, y and d instead of bstr?", "comments": "MTI Algorithm requirements Expand RSASSA-PSS section Correct one coordinate EC for endian Hint to get binary versions of examples Correct kid to binary in the examples (except for static public key one).", "new_text": "[CREF20] JLS: Do we create a registry for curves? Is it the same registry for both EC1 and EC2? [CREF21] JLS: Should we use the integer encoding for x and d instead of bstr? [CREF22] JLS: Should we use the integer encoding for x, y and d instead of bstr?"}
{"id": "q-en-data-plane-drafts-d4531563203a37f7d570208da5e346b1df46b1389f0b2a22dbc03804a586f861", "old_text": "bounded end-to-end delivery latency. General background and concepts of DetNet can be found in the DetNet Architecture RFC8655. The DetNet Architecture models the DetNet related data plane functions decomposed into two sub-layers: a service sub-layer and a forwarding sub-layer. The service sub-layer is used to provide DetNet service functions such as protection and reordering. The forwarding sub-layer is used to provide forwarding assurance (low loss, assured latency, and limited out-of-order delivery). This document specifies the DetNet data plane operation and the on-wire encapsulation of DetNet flows over an MPLS-based Packet Switched", "comments": "Proposed changes: 0, fixing Roman Danyliw comment 1, change introduction to highlight additional functions of data plane and the focus of the draft 2, highlight that implementation of PREOF is out of scope, pointing to related IEEE work (i.e., 802.1CB) 3, fixing the POF section and its configuration via controller plane", "new_text": "bounded end-to-end delivery latency. General background and concepts of DetNet can be found in the DetNet Architecture RFC8655. The purpose of this document is to describe the use of the MPLS data plane to establish and support DetNet flows. The DetNet Architecture models the DetNet related data plane functions decomposed into two sub-layers: a service sub-layer and a forwarding sub-layer. The service sub-layer is used to provide DetNet service functions such as protection and reordering. At the DetNet data plane, a new set of functions (PREOF) provides the service sub-layer specific tasks. The forwarding sub-layer is used to provide forwarding assurance (low loss, assured latency, and limited out-of-order delivery) via already existing Traffic Engineering and queuing tools. 
The use of the functionalities of the DetNet service sub-layer and the DetNet forwarding sub-layer requires careful design and control by the controller plane in addition to the DetNet specific use of MPLS encapsulation as specified by this document. This document specifies the DetNet data plane operation and the on-wire encapsulation of DetNet flows over an MPLS-based Packet Switched"}
{"id": "q-en-data-plane-drafts-d4531563203a37f7d570208da5e346b1df46b1389f0b2a22dbc03804a586f861", "old_text": "encapsulations for consistency but there is no hard requirement in this regard. 4.2.2.1. The Packet Replication Function (PRF) function MAY be supported by an", "comments": "Proposed changes: 0, fixing Roman Danyliw comment 1, change introduction to highlight additional functions of data plane and the focus of the draft 2, highlight that implementation of PREOF is out of scope, pointing to related IEEE work (i.e., 802.1CB) 3, fixing the POF section and its configuration via controller plane", "new_text": "encapsulations for consistency but there is no hard requirement in this regard. Implementation details of PREOF functions are out of scope for this document. IEEE802.1CB-2017 specifies replication and elimination specific aspects, which can be applied to PRF and PEF. 4.2.2.1. The Packet Replication Function (PRF) function MAY be supported by an"}
{"id": "q-en-data-plane-drafts-d4531563203a37f7d570208da5e346b1df46b1389f0b2a22dbc03804a586f861", "old_text": "basis. The number of out of order packets that can be processed also impacts the latency of a flow. 4.2.3. F-Labels support the DetNet forwarding sub-layer. F-Labels are used", "comments": "Proposed changes: 0, fixing Roman Danyliw comment 1, change introduction to highlight additional functions of data plane and the focus of the draft 2, highlight that implementation of PREOF is out of scope, pointing to related IEEE work (i.e., 802.1CB) 3, fixing the POF section and its configuration via controller plane", "new_text": "basis. The number of out of order packets that can be processed also impacts the latency of a flow. The latency impact on the system resources needed to support a specific DetNet flow will need to be evaluated by the controller plane based on that flow's traffic specification. An example traffic specification that can be used with MPLS with Traffic Engineering (MPLS-TE) can be found in RFC2212. DetNet uses flow-specific requirements (e.g., maximum number of out-of-order packets, maximum latency of the flow) for configuration of POF-related buffers. POF implementation details are out of scope for this document, and POF configuration parameters are implementation specific. The Controller Plane determines and sets the POF configuration parameters. If PEF is combined with POF, then their configurations need to be coordinated to ensure the combined functionality. 4.2.3. F-Labels support the DetNet forwarding sub-layer. F-Labels are used"}
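The POF behavior this record describes (flow-specific buffers sized from maximum out-of-order packets, parameters set by the controller plane) can be illustrated with a toy reordering buffer. This is a hypothetical sketch, not the 802.1CB or DetNet-specified logic; class and parameter names are invented for illustration.

```python
class PacketOrderingFunction:
    """Toy POF: release packets in sequence order, buffering out-of-order
    arrivals up to a configured window. In DetNet the window size would be
    derived from the flow's requirements by the controller plane."""

    def __init__(self, max_buffered: int = 4):
        self.next_seq = 0
        self.max_buffered = max_buffered
        self.buffer = {}

    def receive(self, seq: int, payload):
        """Return the payloads releasable in order after this arrival."""
        if seq < self.next_seq or seq in self.buffer:
            return []  # late or duplicate packet: drop (PEF-like behavior)
        self.buffer[seq] = payload
        if len(self.buffer) > self.max_buffered:
            # window exceeded: give up on the gap and skip ahead
            self.next_seq = min(self.buffer)
        released = []
        while self.next_seq in self.buffer:
            released.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return released
```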
{"id": "q-en-data-plane-drafts-d4531563203a37f7d570208da5e346b1df46b1389f0b2a22dbc03804a586f861", "old_text": "protect against DOS attacks, excess traffic due to malicious or malfunctioning devices is prevented or mitigated through the use of existing mechanisms, for example by policing and shaping incoming traffic. To prevent DetNet packets from being delayed by an entity external to a DetNet domain, DetNet technology definition can allow for the mitigation of on-path attackers, for example through use of authentication and authorization of devices within the DetNet domain. 7.", "comments": "Proposed changes: 0, fixing Roman Danyliw comment 1, change introduction to highlight additional functions of data plane and the focus of the draft 2, highlight that implementation of PREOF is out of scope, pointing to related IEEE work (i.e., 802.1CB) 3, fixing the POF section and its configuration via controller plane", "new_text": "protect against DOS attacks, excess traffic due to malicious or malfunctioning devices is prevented or mitigated through the use of existing mechanisms, for example by policing and shaping incoming traffic. To prevent DetNet packets having their delay manipulated by an external entity, precautions need to be taken to ensure that all devices on an LSP are those intended to be there by the network operator and that they are well behaved. In addition to physical security, technical methods such as authentication and authorization of network equipment and the instrumentation and monitoring of the LSP packet delay may be used. If a delay attack is suspected, traffic may be moved to an alternate path, for example through changing the LSP or management of the PREOF configuration. 7."}
{"id": "q-en-data-plane-drafts-82dd900a53f1f17e2d9c43757e1fbf0c172b9398156c0ab9b16aba7514fe41f1", "old_text": "General background and concepts of DetNet can be found in the DetNet Architecture RFC8655. I-D.ietf-detnet-ip specifies the DetNet data plane operation for IP hosts and routers that provide DetNet service to IP encapsulated data. This document focuses on the scenario where DetNet IP nodes are interconnected by a TSN sub-network. The DetNet Architecture decomposes the DetNet related data plane functions into two sub-layers: a service sub-layer and a forwarding sub-layer. The service sub-layer is used to provide DetNet service protection and reordering. The forwarding sub-layer is used to provides congestion protection (low loss, assured latency, and limited reordering). As described in I-D.ietf-detnet-ip no DetNet specific headers are added to support DetNet IP flows, only the forwarding sub-layer functions are supported inside the DetNet domain. Service protection can be provided on a per sub-network basis as shown here for the IEEE802.1 TSN sub-network scenario. 2.", "comments": "ip-over-tsn (nits) mpls-over-tsn (nits) tsn-vpn-over-mpls (nits, ch5.2, ch5.3)", "new_text": "General background and concepts of DetNet can be found in the DetNet Architecture RFC8655. RFC8939 specifies the DetNet data plane operation for IP hosts and routers that provide DetNet service to IP encapsulated data. This document focuses on the scenario where DetNet IP nodes are interconnected by a TSN sub-network. The DetNet Architecture decomposes the DetNet related data plane functions into two sub-layers: a service sub-layer and a forwarding sub-layer. The service sub-layer is used to provide DetNet service protection and reordering. The forwarding sub-layer is used to provides congestion protection (low loss, assured latency, and limited reordering). 
As described in RFC8939, no DetNet specific headers are added to support DetNet IP flows; only the forwarding sub-layer functions are supported inside the DetNet domain. Service protection can be provided on a per sub-network basis as shown here for the IEEE802.1 TSN sub-network scenario. 2."}
{"id": "q-en-data-plane-drafts-82dd900a53f1f17e2d9c43757e1fbf0c172b9398156c0ab9b16aba7514fe41f1", "old_text": "Time-Sensitive Networking, TSN is a Task Group of the IEEE 802.1 Working Group. 2.3. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. 3. I-D.ietf-detnet-ip describes how IP is used by DetNet nodes, i.e., hosts and routers, to identify DetNet flows and provide a DetNet service. From a data plane perspective, an end-to-end IP model is followed. DetNet uses \"6-tuple\" based flow identification, where \"6-tuple\" refers to information carried in IP and higher layer protocol headers. DetNet flow aggregation may be enabled via the use of wildcards, masks, prefixes and ranges. IP tunnels may also be used to support", "comments": "ip-over-tsn (nits) mpls-over-tsn (nits) tsn-vpn-over-mpls (nits, ch5.2, ch5.3)", "new_text": "Time-Sensitive Networking, TSN is a Task Group of the IEEE 802.1 Working Group. 3. RFC8939 describes how IP is used by DetNet nodes, i.e., hosts and routers, to identify DetNet flows and provide a DetNet service. From a data plane perspective, an end-to-end IP model is followed. DetNet uses \"6-tuple\" based flow identification, where \"6-tuple\" refers to information carried in IP and higher layer protocol headers. DetNet flow aggregation may be enabled via the use of wildcards, masks, prefixes and ranges. IP tunnels may also be used to support"}
{"id": "q-en-data-plane-drafts-82dd900a53f1f17e2d9c43757e1fbf0c172b9398156c0ab9b16aba7514fe41f1", "old_text": "configure DetNet IP over TSN: DetNet IP related configuration information according to the DetNet role of the DetNet IP node, as per I-D.ietf-detnet-ip. TSN related configuration information according to the TSN role of the DetNet IP node, as per IEEE8021Q, IEEE8021CB and IEEEP8021CBdb. Mapping between DetNet IP flow(s) (as flow identification defined in I-D.ietf-detnet-ip, it is summarized in Section 5.1 of that document, and includes all wildcards, port ranges and the ability to ignore specific IP fields) and TSN Stream(s) (as stream identification information defined in IEEE8021CB and IEEEP8021CBdb). Note, that managed objects for TSN Stream identification can be found in IEEEP8021CBcv. This information must be provisioned per DetNet flow.", "comments": "ip-over-tsn (nits) mpls-over-tsn (nits) tsn-vpn-over-mpls (nits, ch5.2, ch5.3)", "new_text": "configure DetNet IP over TSN: DetNet IP related configuration information according to the DetNet role of the DetNet IP node, as per RFC8939. TSN related configuration information according to the TSN role of the DetNet IP node, as per IEEE8021Q, IEEE8021CB and IEEEP8021CBdb. Mapping between DetNet IP flow(s) (as flow identification defined in RFC8939, it is summarized in Section 5.1 of that document, and includes all wildcards, port ranges and the ability to ignore specific IP fields) and TSN Stream(s) (as stream identification information defined in IEEE8021CB and IEEEP8021CBdb). Note, that managed objects for TSN Stream identification can be found in IEEEP8021CBcv. This information must be provisioned per DetNet flow."}
{"id": "q-en-data-plane-drafts-82dd900a53f1f17e2d9c43757e1fbf0c172b9398156c0ab9b16aba7514fe41f1", "old_text": "Security considerations for DetNet are described in detail in I- D.ietf-detnet-security. General security considerations are described in RFC8655. DetNet IP data plane specific considerations are summarized in I-D.ietf-detnet-ip. This section considers exclusively security considerations which are specific to the DetNet IP over TSN sub-network scenario. The sub-network between DetNet nodes needs to be subject to appropriate confidentiality. Additionally, knowledge of what DetNet/", "comments": "ip-over-tsn (nits) mpls-over-tsn (nits) tsn-vpn-over-mpls (nits, ch5.2, ch5.3)", "new_text": "Security considerations for DetNet are described in detail in I- D.ietf-detnet-security. General security considerations are described in RFC8655. DetNet IP data plane specific considerations are summarized in RFC8939. This section considers exclusively security considerations which are specific to the DetNet IP over TSN sub-network scenario. The sub-network between DetNet nodes needs to be subject to appropriate confidentiality. Additionally, knowledge of what DetNet/"}
{"id": "q-en-data-plane-drafts-82dd900a53f1f17e2d9c43757e1fbf0c172b9398156c0ab9b16aba7514fe41f1", "old_text": "Time-Sensitive Network. 2.3. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. 3. The basic approach defined in I-D.ietf-detnet-mpls supports the", "comments": "ip-over-tsn (nits) mpls-over-tsn (nits) tsn-vpn-over-mpls (nits, ch5.2, ch5.3)", "new_text": "Time-Sensitive Network. 3. The basic approach defined in I-D.ietf-detnet-mpls supports the"}
{"id": "q-en-data-plane-drafts-82dd900a53f1f17e2d9c43757e1fbf0c172b9398156c0ab9b16aba7514fe41f1", "old_text": "2.1. This document uses the terminology and concepts established in the DetNet architecture RFC8655 and I-D.ietf-detnet-data-plane-framework, and I-D.ietf-detnet-mpls. The reader is assumed to be familiar with these documents and their terminology. 2.2.", "comments": "ip-over-tsn (nits) mpls-over-tsn (nits) tsn-vpn-over-mpls (nits, ch5.2, ch5.3)", "new_text": "2.1. This document uses the terminology and concepts established in the DetNet architecture RFC8655 and RFC8938, and I-D.ietf-detnet-mpls. The reader is assumed to be familiar with these documents and their terminology. 2.2."}
{"id": "q-en-data-plane-drafts-82dd900a53f1f17e2d9c43757e1fbf0c172b9398156c0ab9b16aba7514fe41f1", "old_text": "from TSN (L2) to PW (MPLS) encapsulation. However, such interworking is out-of-scope in this document and left for further study. A MPLS DetNet flow is configured to carry any number of TSN flows. The DetNet flow specific bandwidth profile SHOULD match the required bandwidth of the App-flow aggregate. 5.3. In the design of I-D.ietf-detnet-mpls an MPLS service label (the", "comments": "ip-over-tsn (nits) mpls-over-tsn (nits) tsn-vpn-over-mpls (nits, ch5.2, ch5.3)", "new_text": "from TSN (L2) to PW (MPLS) encapsulation. However, such interworking is out-of-scope in this document and left for further study. 5.3. In the design of I-D.ietf-detnet-mpls an MPLS service label (the"}
{"id": "q-en-data-plane-drafts-82dd900a53f1f17e2d9c43757e1fbf0c172b9398156c0ab9b16aba7514fe41f1", "old_text": "(DetNet) Edge node terminates all related information elements encoded in the MPLS labels. The LSP used to forward the DetNet packet may be of any type (MPLS- LDP, MPLS-TE, MPLS-TP RFC5921, or MPLS-SR RFC8660). The LSP (F-Label) label and/or the S-Label may be used to indicate the queue processing as well as the forwarding parameters. When a PE receives a packet from the Service Proxy function it MUST add to the packet the DetNet flow-ID specific S-label and create a d-CW. The PE MUST forward the packet according to the configured DetNet Service and Forwarding sub-layer rules to other PE nodes. When a PE receives an MPLS packet from a remote PE, then, after processing the MPLS label stack, if the top MPLS label ends up being", "comments": "ip-over-tsn (nits) mpls-over-tsn (nits) tsn-vpn-over-mpls (nits, ch5.2, ch5.3)", "new_text": "(DetNet) Edge node terminates all related information elements encoded in the MPLS labels. When a PE receives a packet from the Service Proxy function it MUST handle the packet as defined in I-D.ietf-detnet-mpls. When a PE receives an MPLS packet from a remote PE, then, after processing the MPLS label stack, if the top MPLS label ends up being"}
{"id": "q-en-data-plane-drafts-9fa1852480ee56c37aff2b999f9303dc57f9792cc5d2290750baf64fdcdd0eb2", "old_text": "3. This document builds on the specification of MPLS over UDP and IP defined in RFC7510. It replaces the F-Label(s) used in I-D.ietf- detnet-mpls with UDP and IP headers. The UDP and IP header information is used to identify DetNet flows, including member flows, per I-D.ietf-detnet-ip. The resulting encapsulation is shown in IP- encap-dn. Note that this encapsulation works equally well with IPv4, IPv6, and IPv6-based Segment Routing I-D.ietf-6man-segment-routing-header.", "comments": "Based on 06.17 discussions. F-lables(s) added to Fig1 and text fixed. Adding procedures sections. Multiple notes added for clarifications. Adding controller plane section.\nreviewed in Jun 24 meeting", "new_text": "3. This document builds on the specification of MPLS over UDP and IP defined in RFC7510. It may replace partly or entirely the F-Label(s) used in I-D.ietf-detnet-mpls with UDP and IP headers. The UDP and IP header information is used to identify DetNet flows, including member flows, per I-D.ietf-detnet-ip. The resulting encapsulation is shown in IP-encap-dn. There may be zero or more F-label(s) between the S-label and the UDP header. Note that this encapsulation works equally well with IPv4, IPv6, and IPv6-based Segment Routing I-D.ietf-6man-segment-routing-header."}
{"id": "q-en-data-plane-drafts-9fa1852480ee56c37aff2b999f9303dc57f9792cc5d2290750baf64fdcdd0eb2", "old_text": "are not modified by this document. In case of aggregates the A-Label is treated as an S-Label and it is not modified as well. To support outgoing DetNet MPLS over IP, an implementation MUST support the provisioning of IP/UDP header information in place of sets of F-Labels. Note that multiple sets of F-Labels can be provisioned to support PRF on transmitted DetNet flows and therefore, when PRF is supported, multiple IP/UDP headers MAY be provisioned. When multiple IP/UDP headers are provisioned for a particular outgoing DetNet IP flow, a copy of the outgoing packet, including the pushed S-Label, MUST be made for each. The headers for each outgoing packet MUST be based on the configuration information and as defined in RFC7510, with one exception. The one exception is that the UDP Source Port value MUST be set to uniquely identify the DetNet (forwarding sub-layer) flow. The packet MUST then be handed as a DetNet IP packet, per I-D.ietf-detnet-ip. To support receive processing an implementation MUST also support the provisioning of received IP/UDP header information. When S-Labels are taken from platform label space, all that is required is to provision that receiving IP/UDP encapsulated DetNet MPLS packets is permitted. Once the IP/UDP header is stripped, the S-label uniquely identifies the app-flow. When S-Labels are not taken from platform label space, IP/UDP header information MUST be provisioned. The provisioned information MUST then be used to identify incoming app- flows based on the combination of S-Label and incoming IP/UDP header. Normal receive processing, including PEOF can then take place. 4. The security considerations of DetNet in general are discussed in I- D.ietf-detnet-architecture and I-D.ietf-detnet-security. MPLS and IP", "comments": "Based on 06.17 discussions. F-lables(s) added to Fig1 and text fixed. Adding procedures sections. 
Multiple notes added for clarifications. Adding controller plane section.\nreviewed in Jun 24 meeting", "new_text": "are not modified by this document. In case of aggregates the A-Label is treated as an S-Label and it is not modified as well. 4. To support outgoing DetNet MPLS over UDP/IP, an implementation MUST support the provisioning of UDP/IP header information in addition to or in place of sets of F-Label(s). Note that multiple sets of F-Labels can be provisioned to support PRF on transmitted DetNet MPLS flows and therefore, when PRF is supported, multiple UDP/IP headers MAY be provisioned. [Note: PRF is an MPLS function. It produces member flows. Member flows have 1:1 mapping to UDP/IP headers. So why are multiple headers provisioned? Delete next sentence?] When multiple UDP/IP headers are provisioned for a particular outgoing DetNet IP flow, a copy of the outgoing packet, including the pushed S-Label (and optional F-label(s)), MUST be made for each. The headers for each outgoing packet MUST be based on the configuration information and as defined in RFC7510, with one exception. The one exception is that the UDP Source Port value MUST be set to uniquely identify the DetNet (forwarding sub-layer) flow. The packet MUST then be handled as a DetNet IP packet, per I-D.ietf-detnet-ip. [Note: What about IPv4 Type of Service and IPv6 Traffic Class Fields? Do we intend to set them?] To support receive processing an implementation MUST also support the provisioning of received UDP/IP header information. [Note: Are the receiving nodes T-PE nodes? Why?] When S-Labels are taken from platform label space, all that is required is to provision that receiving UDP/IP encapsulated DetNet MPLS packets is permitted. Once the UDP/IP header is stripped, the S-label uniquely identifies the app-flow. When S-Labels are not taken from platform label space, UDP/IP header information MUST be provisioned. 
The provisioned information MUST then be used to identify incoming app-flows based on the combination of S-Label and incoming IP/UDP header. Normal receive processing, including PEOF, can then take place. 5. The following summarizes the set of information that is needed to identify individual and aggregated DetNet flows: IPv4 and IPv6 source address field. IPv4 and IPv6 source address prefix length, where a zero (0) value effectively means that the address field is ignored. IPv4 and IPv6 destination address field. IPv4 and IPv6 destination address prefix length, where a zero (0) effectively means that the address field is ignored. IPv4 protocol field. Only UDP is allowed, and the ability to ignore this field, e.g., via configuration of the value zero (0), is desirable. IPv6 next header field. Only UDP is allowed, and the ability to ignore this field, e.g., via configuration of the value zero (0), is desirable. IPv4 Type of Service and IPv6 Traffic Class Fields. IPv4 Type of Service and IPv6 Traffic Class Field Bitmask, where a zero (0) effectively means that these fields are ignored. UDP Source Port. Exact and wildcard matching is required. Port ranges can optionally be used. This information MUST be provisioned per DetNet flow via configuration, e.g., via the controller or management plane. Information identifying a DetNet flow is ordered and implementations use the first match. This can, for example, be used to provide a DetNet service for a specific UDP flow, with unique Source and Destination Port field values, while providing a different service for the aggregate of all other flows with that same UDP Destination Port value. It is the responsibility of the DetNet controller plane to properly provision both flow identification information and the flow specific resources needed to provide the traffic treatment needed to meet each flow's service requirements. This applies for aggregated and individual flows. 6. 
The security considerations of DetNet in general are discussed in I- D.ietf-detnet-architecture and I-D.ietf-detnet-security. MPLS and IP"}
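The flow identification summary in the record above (address prefixes where a zero-length prefix means "ignore", an optional protocol/next-header match, wildcardable UDP source ports, and ordered first-match semantics) can be sketched as a small classifier. The `FlowRule` shape and field names are hypothetical, not taken from any draft's data model.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class FlowRule:
    """One ordered flow identification entry (hypothetical shape)."""
    src: str                 # prefix, e.g. "192.0.2.0/24"; "0.0.0.0/0" ignores it
    dst: str
    protocol: Optional[int]  # 17 = UDP; None means "ignore this field"
    src_port: Optional[int]  # None acts as a wildcard
    label: str               # the DetNet service this rule selects

def classify(rules, src, dst, protocol, src_port):
    """Return the label of the FIRST matching rule, mirroring the
    'implementations use the first match' requirement quoted above."""
    for r in rules:
        if ip_address(src) not in ip_network(r.src):
            continue
        if ip_address(dst) not in ip_network(r.dst):
            continue
        if r.protocol is not None and r.protocol != protocol:
            continue
        if r.src_port is not None and r.src_port != src_port:
            continue
        return r.label
    return None  # no DetNet service; treat as best-effort traffic
```

With a specific rule ordered ahead of an aggregate rule, a unique UDP flow gets its own service while remaining traffic to the same destination falls into the aggregate — the example the text itself gives.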
{"id": "q-en-data-plane-drafts-9fa1852480ee56c37aff2b999f9303dc57f9792cc5d2290750baf64fdcdd0eb2", "old_text": "mpls and I-D.ietf-detnet-ip. This draft does not have additional security considerations. 5. This document makes no IANA requests.", "comments": "Based on 06.17 discussions. F-lables(s) added to Fig1 and text fixed. Adding procedures sections. Multiple notes added for clarifications. Adding controller plane section.\nreviewed in Jun 24 meeting", "new_text": "mpls and I-D.ietf-detnet-ip. This draft does not have additional security considerations. 7. This document makes no IANA requests."}
{"id": "q-en-dnssec-chain-extension-900a33e0218a06845f7ec2ef72b72b81740c98b0b01235988e08cca0303b9c7c", "old_text": "This sequence of native DNS wire format records enables easier generation of the data structure on the server and easier verification of the data on client by means of existing DNS library functions. However this document describes the data structure in sufficient detail that implementers if they desire can write their own code to do this. Each RRset in the chain is composed of a sequence of wire format DNS resource records. The format of the resource record is described in RFC1035, Section 3.2.1. Each RRset in the sequence is followed by its associated RRsig record set. The RRsig record wire format is described in RFC4034, Section 3.1. The signature portion of the RDATA, as described in the same section, is the following: where RRSIG_RDATA is the wire format of the RRSIG RDATA fields with the Signer's Name field in canonical form and the signature field", "comments": "Remove statement that the document describes the data structure in sufficient detail for implementers and describe more clearly how individual RRsets and the successive RRSig record set are identified by referencing the relevant sections in RFC's; and remove the in response to Eric Rescorla's DISCUSS point: IMPORTANT: I'm not sure that this is actually sufficient to allow an independent implementation without referring to the other documents. I mean, I think I pretty clearly can't validate this chain from the above. Similarly, although I think this is enough to break apart the RRSETs into RRs, it doesn't tell me how to separate the RRSETs from each other. I think you need to either make this a lot more complete or alternately stop saying it's sufficient.", "new_text": "This sequence of native DNS wire format records enables easier generation of the data structure on the server and easier verification of the data on client by means of existing DNS library functions. 
Each RRset in the chain is composed of a sequence of wire format DNS resource records. The format of the resource record is described in RFC1035, Section 3.2.1. The resource records that make up an RRset all have the same owner, type and class, but different RDATA as specified in RFC2181, Section 5. Each RRset in the sequence is followed by its associated RRsig record set. This RRset has the same owner and class as the preceding RRset, but has type RRSIG. The Type Covered field in the RDATA of the RRsigs identifies the type of the preceding RRset as described in RFC4034, Section 3. The RRsig record wire format is described in RFC4034, Section 3.1. The signature portion of the RDATA, as described in the same section, is the following: where RRSIG_RDATA is the wire format of the RRSIG RDATA fields with the Signer's Name field in canonical form and the signature field"}
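The chain structure the record above describes — consecutive records with the same owner, class, and type forming an RRset, each RRset followed by RRSIGs whose Type Covered field names the preceding type — can be sketched over already-parsed records. The tuple record shape below is illustrative; it is not a wire-format parser.

```python
def build_chain(records):
    """Group parsed records into (RRset, RRSIG-set) pairs (sketch).

    Records are (owner, class, type, rdata...) tuples — an illustrative
    shape, not DNS wire format.  Consecutive records sharing owner, class
    and type form one RRset; the RRSIGs that follow share the owner and
    class, and their first RDATA element plays the role of the Type
    Covered field identifying the preceding RRset's type.
    """
    chain = []
    i = 0
    while i < len(records):
        owner, rclass, rtype = records[i][:3]
        rrset = []
        while i < len(records) and records[i][:3] == (owner, rclass, rtype):
            rrset.append(records[i])
            i += 1
        rrsigs = []
        while (i < len(records)
               and records[i][:3] == (owner, rclass, "RRSIG")
               and records[i][3] == rtype):
            rrsigs.append(records[i])
            i += 1
        chain.append((rrset, rrsigs))
    return chain
```

A real implementation would additionally canonicalize owner names and verify each RRSIG over its RRset per RFC 4034; this only shows the grouping rule.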
{"id": "q-en-documentsigning-eku-c2861e462192a27b75cfdf35f1f5df53cad3d4144e2f0153119e2b72a4e0da45", "old_text": "1. RFC5280 specifies several extended key purpose identifiers for X.509 certificates. In addition, several extended key purpose identifiers were addedRFC7299 as public OIDs under the IANA PKIX Extended key purpose identifiers arc. While usage of the any extended key usage value is bad practice for publicly trusted certificates, there is no public and general extended key purpose identifier explicitly assigned for Document Signing certificates. The current practice is to use id-kp-emailProtection, id-kp-codeSigning or vendor defined Object IDs for general document signing purposes. In circumstances where code signing and S/MIME certificates are also widely used for document signing, the technical or policy changes", "comments": "I reviewed and approved this pull request.\nUpon further reflection it seems like RFC 7299 and 8358 are informative instead of normative references.", "new_text": "1. RFC5280 specifies several extended key usages for X.509 certificates. In addition, several extended key usage had been addedRFC7299 as public OID under the IANA repository. While usage of any extended key usage is bad practice for publicly trusted certificates, there are no public and general extended key usage explicitly assigned for Document Signing certificates. The current practice is to use id-kp- emailProtection, id-kp-codeSigning or vendor defined Object ID for general document signing purposes. In circumstances where code signing and S/MIME certificates are also widely used for document signing, the technical or policy changes"}
{"id": "q-en-draft-ietf-add-ddr-4f6144652abed222f817c5e74f9a51343122ab6ed7d34c02747353b4fa8eeba5", "old_text": "5. A DNS client that already knows the name of an Encrypted Resolver can use DEER to discover details about all supported encrypted DNS protocols. This situation can arise if a client has been configured to use a given Encrypted Resolver, or if a network provisioning protocol (such as DHCP or IPv6 Router Advertisements) provides a name", "comments": "Changed 2 instances of DEER to DDR\nI was so focused on finding \"equivalent\" that I missed a couple deer apparently. Thank you!", "new_text": "5. A DNS client that already knows the name of an Encrypted Resolver can use DDR to discover details about all supported encrypted DNS protocols. This situation can arise if a client has been configured to use a given Encrypted Resolver, or if a network provisioning protocol (such as DHCP or IPv6 Router Advertisements) provides a name"}
{"id": "q-en-draft-ietf-add-ddr-4f6144652abed222f817c5e74f9a51343122ab6ed7d34c02747353b4fa8eeba5", "old_text": "6. Resolver deployments that support DEER are advised to consider the following points. 6.1.", "comments": "Changed 2 instances of DEER to DDR\nI was so focused on finding \"equivalent\" that I missed a couple deer apparently. Thank you!", "new_text": "6. Resolver deployments that support DDR are advised to consider the following points. 6.1."}
{"id": "q-en-draft-ietf-add-ddr-411fe2a0029a5109cbc237d96d5a74cb16a102e4e701f4bc464f18370f730a58", "old_text": "6.1. If a caching forwarder consults multiple resolvers, it may be possible for it to cache records for the \"resolver.arpa\" Special Use Domain Name (SUDN) for multiple resolvers. This may result in clients sending queries intended to discover Designated Resolvers for resolver \"foo\" and receiving answers for resolvers \"foo\" and \"bar\". A client will successfully reject unintended connections because the authenticated discovery will fail or the resolver addresses do not match. Clients that attempt unauthenticated connections to resolvers discovered through SVCB queries run the risk of connecting to the wrong server in this scenario. To prevent unnecessary traffic from clients to incorrect resolvers, DNS caching resolvers SHOULD NOT cache results for the \"resolver.arpa\" SUDN other than for Designated Resolvers under their control. 6.2.", "comments": "Addressing issues and to make the SUDN forwarding a SHOULD NOT unless the operator understands and accepts the implications to authenticated discovery.\nThis is exactly what was intended, so we're on the same page.\nMigrated from original DEER issue: URL\nThe Special Use Domain Name (SUDN) URL needs more clarification. Specifically: Queries against the .arpa nameservers The draft should mention that .arpa is a delegated zone, currently served by the root nameservers. Clients that query a resolver that is unaware of this requirement will \"leak\" queries to the root nameservers. When URL is not present in a resolver configuration Section 6.1 discusses how a forwarder should behave and suggests that results should not be cached other than for Equivalent Encrypted Resolvers under their control. How is that determined? There is no discussion around a non-forwarding resolver's behaviour. Should it return NODATA? 
Forwarding queries to URL will attempt to attach a client to an upstream resource's equivalent resolver. If that equivalent resolver's cert doesn't include the forwarder, the client must reject the answer. Is it not therefore the case that the forwarder caching behaviour is irrelevant? All valid responses to this query must be equivalent to the forwarder itself. When URL is present in a resolver configuration I would suggest some clarification wording something like this: The expectation is that this record set will detail equivalent resolvers. A client that chooses not to interact with any of these equivalent resolvers SHOULD continue to use the unencrypted resolver, honouring the SVCB TTL. When the SVCB TTL expires, the client may choose to re-query in an attempt to discover newly available equivalents. I would suggest (prefer?) changing the SUDN behaviour so that queries against it are never sent upstream. Resolvers (and forwarders) that do not have a configured URL RRset should return NODATA. This would mean that URL SUDN RRsets are always specific to the IP queried.\nIt may be good to model the advice after RFC8880, which defines URL\nI am leaving this open so I can re-review the text after addressing other PRs; PR 14 addressed some of the points raised in this issue but not all of them.\nMigrated from original DEER issue: URL\nThe text says This is not true of any reasonable forwarder. RRsets are atomic collections, and cannot be mixed from multiple sources. Doing so would entirely break DNSSEC, for example. Unless you know of a forwarder that exhibits this seriously broken behavior, I don't think it bears mentioning.\nAddressed with PR 14\nTypo, but this seems like a reasonable tweak. The description on list was a bit worrying regarding IP address, but this just says don't forward unless it is deliberate. And by deliberate, I mean that the forwarder knows it will work or that clients might not care. That seems ok.", "new_text": "6.1. 
A DNS forwarder SHOULD NOT forward queries for \"resolver.arpa\" upstream. This prevents a client from receiving an SVCB record that will fail to authenticate because the forwarder's IP address is not in the upstream resolver's Designated Resolver's TLS certificate SAN field. A DNS forwarder which already acts as a completely blind forwarder MAY choose to forward these queries when the operator expects that this does not apply, either because the operator knows the upstream resolver does have the forwarder's IP address in its TLS certificate's SAN field or that the operator expects clients of the unencrypted resolver to use the SVCB information opportunistically. Operators who choose to forward queries for \"resolver.arpa\" upstream should note that client behavior is never guaranteed and use of DDR by a resolver does not communicate a requirement for clients to use the SVCB record when it cannot be authenticated. 6.2."}
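The forwarder behavior in the record above (SHOULD NOT forward "resolver.arpa" queries upstream; a completely blind forwarder MAY do so only when the operator has deliberately opted in) can be sketched as a decision helper. The function and flag names are hypothetical.

```python
RESOLVER_ARPA = "resolver.arpa."

def should_forward(qname, blind_forwarder=False, operator_opt_in=False):
    """Decide whether a forwarder sends a query upstream (sketch).

    Queries at or under the "resolver.arpa" SUDN SHOULD NOT be forwarded;
    a completely blind forwarder MAY forward them, but only when the
    operator has deliberately opted in.  Flag names are hypothetical.
    """
    name = qname.lower().rstrip(".") + "."     # normalize to an FQDN
    in_sudn = name == RESOLVER_ARPA or name.endswith("." + RESOLVER_ARPA)
    if not in_sudn:
        return True
    return blind_forwarder and operator_opt_in
```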
{"id": "q-en-draft-ietf-add-ddr-410063f073b488a4d3b3f4a98cc953ec8b8e69f898a2ae33d0b34e49769bb94c", "old_text": "mechanism for DNS clients to use DNS records to discover a resolver's encrypted DNS configuration. This mechanism can be used to move from unencrypted DNS to encrypted DNS when only the IP address of a resolver is known. It can also be used to discover support for encrypted DNS protocols when the name of an encrypted resolver is known. This mechanism is designed to be limited to cases where unencrypted resolvers and their designated resolvers are operated by the same entity or cooperating entities. 1.", "comments": "Three changes Re-ordered statements in the abstract to place the restriction on unencrypted bootstrapping next to the unencrypted statement (instead of after the encrypted bootstrapping statement, which makes the restriction seem like a conflict where it excludes bootstrapping from a name instead of an IP address) Re-ordered security considerations (placed the \"clients can opt to do other things\" statement at the end after the MUSTs/SHOULDs for this doc) s/client/clients for grammatical correctness", "new_text": "mechanism for DNS clients to use DNS records to discover a resolver's encrypted DNS configuration. This mechanism can be used to move from unencrypted DNS to encrypted DNS when only the IP address of a resolver is known. This mechanism is designed to be limited to cases where unencrypted resolvers and their designated resolvers are operated by the same entity or cooperating entities. It can also be used to discover support for encrypted DNS protocols when the name of an encrypted resolver is known. 1."}
{"id": "q-en-draft-ietf-add-ddr-410063f073b488a4d3b3f4a98cc953ec8b8e69f898a2ae33d0b34e49769bb94c", "old_text": "7. Since client can receive DNS SVCB answers over unencrypted DNS, on- path attackers can prevent successful discovery by dropping SVCB packets. Clients should be aware that it might not be possible to distinguish between resolvers that do not have any Designated", "comments": "Three changes Re-ordered statements in the abstract to place the restriction on unencrypted bootstrapping next to the unencrypted statement (instead of after the encrypted bootstrapping statement, which makes the restriction seem like a conflict where it excludes bootstrapping from a name instead of an IP address) Re-ordered security considerations (placed the \"clients can opt to do other things\" statement at the end after the MUSTs/SHOULDs for this doc) s/client/clients for grammatical correctness", "new_text": "7. Since clients can receive DNS SVCB answers over unencrypted DNS, on- path attackers can prevent successful discovery by dropping SVCB packets. Clients should be aware that it might not be possible to distinguish between resolvers that do not have any Designated"}
{"id": "q-en-draft-ietf-add-ddr-410063f073b488a4d3b3f4a98cc953ec8b8e69f898a2ae33d0b34e49769bb94c", "old_text": "but be operated by an attacker that is trying to observe or modify user queries without the knowledge of the client or network. The constraints on validation of Designated Resolvers specified here apply specifically to the automatic discovery mechanisms defined in this documents, which are referred to as Authenticated Discovery and Opportunistic Discovery. Clients MAY use some other mechanism to validate and use Designated Resolvers discovered using the DNS SVCB record. However, use of such an alternate mechanism needs to take into account the attack scenarios detailed here. If the IP address of a Designated Resolver differs from that of an Unencrypted Resolver, clients applying Authenicated Discovery (authenticated) MUST validate that the IP address of the Unencrypted", "comments": "Three changes Re-ordered statements in the abstract to place the restriction on unencrypted bootstrapping next to the unencrypted statement (instead of after the encrypted bootstrapping statement, which makes the restriction seem like a conflict where it excludes bootstrapping from a name instead of an IP address) Re-ordered security considerations (placed the \"clients can opt to do other things\" statement at the end after the MUSTs/SHOULDs for this doc) s/client/clients for grammatical correctness", "new_text": "but be operated by an attacker that is trying to observe or modify user queries without the knowledge of the client or network. If the IP address of a Designated Resolver differs from that of an Unencrypted Resolver, clients applying Authenicated Discovery (authenticated) MUST validate that the IP address of the Unencrypted"}
{"id": "q-en-draft-ietf-add-ddr-410063f073b488a4d3b3f4a98cc953ec8b8e69f898a2ae33d0b34e49769bb94c", "old_text": "to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. 8. 8.1.", "comments": "Three changes Re-ordered statements in the abstract to place the restriction on unencrypted bootstrapping next to the unencrypted statement (instead of after the encrypted bootstrapping statement, which makes the restriction seem like a conflict where it excludes bootstrapping from a name instead of an IP address) Re-ordered security considerations (placed the \"clients can opt to do other things\" statement at the end after the MUSTs/SHOULDs for this doc) s/client/clients for grammatical correctness", "new_text": "to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. The constraints on validation of Designated Resolvers specified here apply specifically to the automatic discovery mechanisms defined in this document, which are referred to as Authenticated Discovery and Opportunistic Discovery. Clients MAY use some other mechanism to validate and use Designated Resolvers discovered using the DNS SVCB record. However, use of such an alternate mechanism needs to take into account the attack scenarios detailed here. 8. 8.1."}
{"id": "q-en-draft-ietf-add-ddr-ccd7e1174853ca1af325c8fc33fce67e721d77d0d81cc24bbfb30225e1222d60", "old_text": "queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. While the IP address of the Unencrypted Resolver is often provisioned over insecure mechanisms, it can also be provisioned securely, such as via manual configuration, a VPN, or on a network with protections", "comments": "NAME does this work for you?\nOK, so I said LGTM, but how does a server know that it is such a server? Can't I just publish a random DDR record for 1.1.1.1?\nAs I mentioned in the session, this is partly covered in SVCB-DNS, but perhaps we need a tighter rule. I've opened an issue to discuss it: URL I'm not entirely sure which draft this kind of text ought to go in. If you think this logic applies beyond DDR (e.g. in ADoT cases), then it probably should go in the SVCB-DNS draft. Otherwise, maybe DDR is the place.\nNAME for security considerations, I'm OK with multiple drafts having warnings. I think this particular attack should be mentioned in DDR, since other uses of DNS SVCB could be over more trusted channels.\nNAME if your resolver responds to URL, then we expect the address of the resolver I asked to be covered in the cert. So, if I send a packet to 1.1.1.1, 1.1.1.1 has to be covered. So, if you just provide a \"random\" DDR record in response to URL, you can only point to some resolver that has your IP address in its cert, which presumably is sufficient coordination for the resolver to \"know\" it has this set up.\nRight. Good point.\nAs mentioned at the microphone, if we are to allow the query-path to be specified in the clear like this, we need to require that servers not have different responses with different query paths.\nLGTM", "new_text": "queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. DoH resolvers that allow discovery using DNS SVCB answers over unencrypted DNS MUST NOT provide differentiated behavior based on the HTTP path alone, since an attacker could modify the \"dohpath\" parameter. While the IP address of the Unencrypted Resolver is often provisioned over insecure mechanisms, it can also be provisioned securely, such as via manual configuration, a VPN, or on a network with protections"}
{"id": "q-en-draft-ietf-add-ddr-aed8c0a21128a907c4162b2165ced910f7b325c9e7b40a8fff1925cfab44bdea", "old_text": "Since clients can receive DNS SVCB answers over unencrypted DNS, on- path attackers can prevent successful discovery by dropping SVCB queries or answers. Clients should be aware that it might not be possible to distinguish between resolvers that do not have any Designated Resolver and such an active attack. To limit the impact of discovery queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. DoH resolvers that allow discovery using DNS SVCB answers over unencrypted DNS MUST NOT provide differentiated behavior based on the", "comments": "I think that the answer here is that DDR resolution is SVCB-reliant as defined in the SVCB spec. Can you make that explicit please?\nsays It sounds like you're saying that DDR should adopt this without adjustments. That seems reasonable to me.\nRight. I sort of assumed that it was SVCB-reliant, but it doesn't say that. Certainly, this is not a case where any deviation from spec is necessary. (The more of those we have, the better.)\nDDR predates when the reliant and optional terms came about, which is why it wasn\u2019t covered. I\u2019m not clear on why anything would need to be said here. Reliant and optional seem to be modes for the client to block on getting SVCB records or falling back and ignoring them if they don\u2019t arrive. Discovery based on IP addresses is done with an SVCB query for URL, and no other query types are allowed for URL By definition, this only works with SVCB. That doesn\u2019t mean that a client can\u2019t be SVCB optional for its general operation \u2014 it just means that if SVCB is blocked or fails, you don\u2019t get to discover the encrypted resolver. What do we need to say here that would actually benefit things?\nSo SVCB-optional doesn't even make sense for this specification. I was just looking for a nudge in the form of: \"The resolver/client uses SVCB in a SVCB-reliant mode.\" Just so that there is no confusion.\nYes, optional doesn\u2019t make sense. But I still think that the entire reliant/optional choice is orthogonal to how DDR works. These are the two lines that mention reliant mode in the SVCB spec: Both of these seem to be about client behavior for connection establishment to a hostname. It\u2019s specifically about connecting. It doesn\u2019t make sense to connect to URL; this is a special purpose SVCB query used for discovery, which can lead to a connection. I don\u2019t see any requirement from the SVCB spec that protocols that use SVCB need to define themselves as \u201creliant\u201d or \u201coptional\u201d. Is there something I\u2019m missing there?\nHow is this not about connecting from the point at which you feed the name into a SVCB query? A DDR client is trying to connect to a (secure) resolver. To that end, it is making a SVCB query. That it uses \"URL\" doesn't change the overall goal.\nUntil a client uses the SVCB query for discovery, it doesn\u2019t know if an encrypted resolver exists that is designated. I\u2019m not sure if saying a DDR-aware client is \u201cSVCB-reliant\u201d means that it will not use the unencrypted resolver for application queries until it discovers an encrypted resolver (which is not a requirement, obviously, since clients can be allowed to continue using unencrypted DNS if their local policy allows it), or it means that the connection to the discovered encrypted DNS server is SVCB-reliant, which is tautologically true since the existence of said server was only discovered via SVCB. If it\u2019s the former, we shouldn\u2019t say it because it\u2019s really about client policy for allowing unencrypted DNS. If it\u2019s the latter, I don\u2019t see the value of adding anything, and I\u2019m not sure it really fits the meaning of \u201creliant\u201d.\nI believe DDR clients' recommended behavior is exactly the behavior recommended in Section 8.2 of SVCB-DNS, quoted above. It is initially SVCB-optional, and becomes SVCB-reliant once SVCB resolution succeeds. Specifically: if you do discover an encrypted resolver, and you attempt connection, you SHOULD NOT fall back to unencrypted if that connection fails. Otherwise, you've added a downgrade vulnerability to attackers on the \"transport path\" (which may not be the same as the bootstrap-query path). As this is the default recommendation for SVCB-DNS, I don't think any additional normative language is strictly required. However, I think a reminder might be appropriate. This does not address the question of whether to wait for SVCB resolution before issuing any initial queries. As a practical matter, I don't think we can require that.\nOkay, I agree with the behavior you describe there of being initially optional and then becoming reliant. I can try to find a place where it makes sense to reference that section.", "new_text": "Since clients can receive DNS SVCB answers over unencrypted DNS, on- path attackers can prevent successful discovery by dropping SVCB queries or answers, and thus prevent clients from switching to use encrypted DNS. Clients should be aware that it might not be possible to distinguish between resolvers that do not have any Designated Resolver and such an active attack. To limit the impact of discovery queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. Section 8.2 of I-D.ietf-add-svcb-dns describes a second downgrade attack where an attacker can block connections to the encrypted DNS server, and recommends that clients prevent it by switching to SVCB- reliant behavior once SVCB resolution does succeed. For DDR, this means that once a client discovers a compatible Designated Resolver, it SHOULD NOT use unencrypted DNS until the SVCB record expires, unless verification of the resolver fails. DoH resolvers that allow discovery using DNS SVCB answers over unencrypted DNS MUST NOT provide differentiated behavior based on the"}
{"id": "q-en-draft-ietf-add-ddr-067e4d4c72d70d4001842c759dbca95b070eb66eda6b644d48e8f987e9c72b1e", "old_text": "Served DNS Zones\" registry for 'resolver.arpa.' with the description \"DNS Resolver Special-Use Domain\", listing this document as the reference. ", "comments": "I have committed all request changes. Thank you for the refinements.\nThe IANA Functions Operator has a question about one of the actions requested in the IANA Considerations section of this document. The IANA Functions Operator understands that, upon approval of this document, there are two actions which we must complete. First, IANA understands that this document calls for the addition of \"URL\" to the Special-Use Domain Names (SUDN) registry established by RFC6761. RFC6761 specifically requires that any request for a Special-Use Domain Name needs to contain a subsection of the \"IANA Considerations\" section titled Domain Name Reservation Considerations. The current draft does not have such a section. IANA Question --> Could the authors either conform to the requirements of RFC6761 or provide explicit guidance as to why that section is not appropriate? Second, in the Transport-Independent Locally-Served DNS Zone Registry on the Locally-Served DNS Zones registry page located at: URL a new registration will be made as follows: Zone: URL Description: DNS Resolver Special-Use Domain Reference: [ RFC-to-be ] The IANA Functions Operator understands that these are the only actions required to be completed upon approval of this document. Note: The actions requested in this document will not be completed until the document has been approved for publication as an RFC. This message is meant only to confirm the list of actions that will be performed. For definitions of IANA review states, please see: URL", "new_text": "Served DNS Zones\" registry for 'resolver.arpa.' with the description \"DNS Resolver Special-Use Domain\", listing this document as the reference. 8.2. In accordance with Section 5 of RFC6761, the answers to the following questions are provided relative to this document: Are human users expected to recognize these names as special and use them differently? In what way? No. This name is used automatically by DNS stub resolvers running on client devices on behalf of users, and users will never see this name directly. Are writers of application software expected to make their software recognize these names as special and treat them differently? In what way? No. There is no use case where a non-DNS application (covered by the next question) would need to use this name. Are writers of name resolution APIs and libraries expected to make their software recognize these names as special and treat them differently? If so, how? Yes. DNS client implementors are expected to use this name when querying for a resolver's properties instead of records for the name itself. DNS servers are expected to respond to queries for this name with their own properties instead of checking the matching zone as it would for normal domain names. Are developers of caching domain name servers expected to make their implementations recognize these names as special and treat them differently? If so, how? Yes. Caching domain name servers should not forward queries for this name to avoid causing validation failures due to IP address mismatch. Are developers of authoritative domain name servers expected to make their implementations recognize these names as special and treat them differently? If so, how? No. DDR is designed for use by recursive resolvers. Theoretically, an authoritative server could choose to support this name if it wants to advertise support for encrypted DNS protocols over plain-text DNS, but that scenario is covered by other work in the IETF DNSOP working group. Does this reserved Special-Use Domain Name have any potential impact on DNS server operators? If they try to configure their authoritative DNS server as authoritative for this reserved name, will compliant name server software reject it as invalid? Do DNS server operators need to know about that and understand why? Even if the name server software doesn't prevent them from using this reserved name, are there other ways that it may not work as expected, of which the DNS server operator should be aware? This name is locally served, and any resolver which supports this name should never forward the query. DNS server operators should be aware that records for this name will be used by clients to modify the way they connect to their resolvers. How should DNS Registries/Registrars treat requests to register this reserved domain name? Should such requests be denied? Should such requests be allowed, but only to a specially- designated entity? IANA should hold the registration for this name. Non-IANA requests to register this name should always be denied by DNS Registries/ Registrars. "}
{"id": "q-en-draft-ietf-add-ddr-295e6abcd5432cecb32dd21904c58acb6994b8cf14d71fac3faecae7351d1af6", "old_text": "should be able to use those records while safely ignoring the third record. To avoid name lookup deadlock, Designated Resolvers SHOULD follow the guidance in Section 10 of RFC8484 regarding the avoidance of DNS- based references that block the completion of the TLS handshake. This document focuses on discovering DoH, DoT, and DoQ Designated Resolvers. Other protocols can also use the format defined by I-", "comments": "DISCUSS: CC NAME See my discuss on draft-ietf-add-ddr-11 for my generic ADD DNR/DDR concerns Resolver without some sort of validation Why not? What is the alternative? Using unencrypted DNS to the party that told you how and where to contact these (unvalidated) Designated Resolvers. Resolver that does not use a private or local IP address and can be verified using the mechanism described in Section 4.2, it MAY be used on different network connections so long as the subsequent connections over other networks can also be successfully verified using the mechanism described in Section 4.2. This seems to go against a lot of advise in the documents that networks must be treated independently. Also, the network wants you to use their designated resolver, so to say it is okay to use a totally different Designated Resolvers seems to violate the whole DNR/DDR architecture. This will also cause failure if there is split-DNS or internal-only data. Also, using \"public IP\" is not a good trustworthy method to prove that these IPs are truly the same Designated Resolver used by both networks. I can easily use an internal only zone with . IN A 8.8.8.8 and get a valid ACME'd certificate with SAN=. The user connecting to another network might now be switching from my private DNS on my stolen 8.8.8.8 IP to the real google DNS. address of the designating Unencrypted Resolver in a subjectAltName extension. How feasible is it to get a certificate with SAN=ip from one of the generally accepted root store CA's? Can you give me one example where I can get a certificate for with SAN=193.110.157.123 ? I do not think this is currently possible or easy. If I am right, that means all of Section 4.2 is wishful thinking and not realistic to get rolled out. (I know we can see the cert for 1.1.1.1, so it is possible, but I know of no ACME supported CA that issues these) discovered Designated Resolver. This creates a strange policy when compared to DNR where if you get a DHCP or RA for the Designated Resolver, which is also not validatable, that you do end up using it? And section 6.5 states DNR trumps DDR. address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (Section 4.3). Why only when the IP is the same? Why not for when the IP is different? What's to lose, since the only fallback option is stick to using the unencrypted resolver? yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Designated Resolver name. How does the appearance of an (unsigned) Additional Answer entry convey any kind of real world trust/authentication? In other words, why should it not ALWAYS pass the same TLS certificate checks ? is to use unencrypted DNS, why not just use it anyway? use DDR to discover details about all supported encrypted DNS protocols. It's a little odd to mention this as the DNR draft really tries hard to say to only use the network's offered encrypted DNS servers, so this option is in conflict with the DNR draft. Unfortunately, all currently deployed software does not know this, so it will be very common that this happens. URL SHOULD treat URL as a locally served zone per [RFC6303] Why is this not a MUST ? queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. I don't see how this would improve security for the client. It also mixes up behaviour of proper DNS clients that have their own retransmit logic for if they get no answer. attack where an attacker can block connections to the encrypted DNS server, and recommends that clients prevent it by switching to SVCB- reliant behavior once SVCB resolution does succeed. For DDR, this means that once a client discovers a compatible Designated Resolver, it SHOULD NOT use unencrypted DNS until the SVCB record expires I wonder which attacker can block encrypted DNS connections but not spoof (or block!) an unsigned SVCB record to the local client? And even if that is the case, this would be a denial of service attack if the DNS client cannot fallback to unencrypted DNS. Spoofing an SVCB to a bogus IP with an SVCB TTL of a few hours would be a very cheap 1 packet attack to keep the DNS client down for hours. This seems dangerous to implement. unencrypted DNS MUST NOT provide differentiated behavior based on the HTTP path alone, since an attacker could modify the \"dohpath\" parameter. For example, if a DoH resolver provides provides a filtering service for one URI path, and a non-filtered service for another URI path, [...] It seems likely that this advise will get ignored a lot, eg by people who have limited public IPs to use to spin up a DoH server, or who have to pay-per-IP. This advise seems more appropriate for local private IP DoH servers. So I would change this MUST NOT to SHOULD NOT for that reason. Unencrypted Resolver, clients applying Verified Discovery (Section 4.2) MUST validate that the IP address of the Unencrypted Resolver is covered by the SubjectAlternativeName of the Designated Resolver's TLS certificate. How would one obtain such a certificate via ACME? Since on the IP of the unencrypted resolver, there would be no HTTP server running? Additionally, if the client notices a failed verification of the Designated Resolver's TLS certificate, wouldn't it fallback to the Unencrypted Resolver? But now it may not? So this becomes a denial of service then ? to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. Based on earlier text, this should read \"the same non-public IP address\". you use \"unilateral probing\" (draft-ietf-dprive-unilateral-probing) to just connect to the DoT or DoQ ports of the Unencrypted Resolver and see if you get anything back. It might be unauthenticated TLS but better than port 53? Special-Use Domain Names (SUDN) registry established by [RFC6761]. Why not pick URL As .local is already handled to not be forwarded by DNS forwarders. It would also not require an addition to SUDN. Even if someone was already using URL, it would not interfere with that usage because that usage would not be using SVCB records. To avoid name lookup deadlock, Designated Resolvers SHOULD follow the guidance in Section 10 of [RFC8484] regarding the avoidance of DNS- based references that block the completion of the TLS handshake. I find that reference to list more the issues than solutions that can be followed. The solution to use another DoH server is not really appropriate in most cases The client's other interface likely resides on other networks where its private DoH server cannot resolve the name of another network's private DoH server. It is also not easy to get an IP based certificate (eg SAN=ip) that is accepted by general webpki root stores. And things like OCSP stapling doesn't help if the target is malicious - they just would omit the stapling that proves against its malicious use. The only real advise usable from RFC8484 is \"use the local unencrypted resolver to resolve the encrypted resolver\". Might as well say that and remove the 8484 reference. as the protocol is mentioned throughout the document as interacting with this protocol. when accessing web content on a dual-stack network: A, AAAA, and HTTPS queries. Discovering a Designated Resolver as part of one of these queries, without having to add yet another query, minimizes the total number of queries clients send. I don't understand this paragraph. Once clients can send SVCB queries for web content, they already have had to do all the DDR/DNR related queries, so I don't understand the argument of \"saving queries\" ? Also, it is adding queries for a specific target, eg URL, and this cannot be \"saved\" either? What am I misunderstanding here? provides provides -> provides\nWe should reference Security Considerations, because if you don't have any validation, an attacker can point you to any server they want (someone other than the server you were configured with by DHCP, etc).\nWe can probably just remove this, not using it in implementations right now.\nIt's already done by the major public resolvers, and there is an RFC to support this (RFC 8738). The IP address validation is clearly important and a matter of WG consensus. Hopefully, this will push adoption of RFC 8738 by more ACME services.\nWe can reference the DNR section. The MUST NOT should only apply to DDR, but doesn't invalidate other ways of using the resolver.\nIf they are different, it can be a random attacker (see security considerations). The client may also have other better options to fall back to.\nAgree, this should be simplified to say \"even if you connect over a different IP address for any reason, still validate the original IP address\"\nThere are definitely cases where the client can have directly selected their own resolver name, etc. We don't have to use the network resolver.\nCorrect, and that's why we need this document, and it was added to the IANA section.\nSure, we can make this a MUST.\nIt makes the attacker's job harder by requiring them to be persistent. Normal retransmission logic is fine too.\nThis is a direct outcome of the svcb-dns document. Spoofing SVCB shouldn't be enough to cause a DoS here, because we should continue to use the previously used encrypted DNS server while on this network (even if you get a bad SVCB packet).\nFor the security reasons, I think this needs to stay as a MUST NOT. However, if you differentiate based on hostname + path, you can still differentiate, it just isn't a differentiation that can be discovered by IP address queries.\nYes, you'd fall back to unencrypted resolution if you can't have a secure trusted connection for encrypted resolution.\nYes, but that part is SHOULD. Will fix.\nI think that's just opportunistic DoT / DoQ. That's already specified.\n.local is the SUDN for multicast DNS, but it is also squatted on by some internal networks. Following the patterns for URL, etc, .arpa is a much more appropriate name. This is what the group had consensus on years ago and has been deployed widely already.\nI think we can say \"as explained in RFC 8484, don't loop on the resolver name.\"\nSure, that's fine.\nYeah, this paragraph is old and can be removed.\nLooks good! Just silly editorial nits", "new_text": "should be able to use those records while safely ignoring the third record. To avoid name lookup deadlock, clients that use Designated Resolvers need to ensure that a specific Encrypted Resolver is not used for any queries that are needed to resolve the name of the resolver itself or to perform certificate revocation checks for the resolver, as described in Section 10 of RFC8484. Designated Resolvers need to ensure this deadlock is avoidable as described in Section 10 of RFC8484. This document focuses on discovering DoH, DoT, and DoQ Designated Resolvers. Other protocols can also use the format defined by I-"}
{"id": "q-en-draft-ietf-add-ddr-295e6abcd5432cecb32dd21904c58acb6994b8cf14d71fac3faecae7351d1af6", "old_text": "policy or user input. Details of such policy are out of scope of this document. Clients MUST NOT automatically use a Designated Resolver without some sort of validation, such as the two methods defined in this document or a future mechanism. A client MUST NOT re-use a designation discovered using the IP address of one Unencrypted DNS Resolver in place of any other", "comments": "DISCUSS: CC NAME See my discuss on draft-ietf-add-ddr-11 for my generic ADD DNR/DDR concerns Resolver without some sort of validation Why not? What is the alternative? Using unencrypted DNS to the party that told you how and where to contact these (unvalidated) Designated Resolvers. Resolver that does not use a private or local IP address and can be verified using the mechanism described in Section 4.2, it MAY be used on different network connections so long as the subsequent connections over other networks can also be successfully verified using the mechanism described in Section 4.2. This seems to go against a lot of advise in the documents that networks must be treated independently. Also, the network wants you to use their designated resolver, so to say it is okay to use a totally different Designated Resolvers seems to violate the whole DNR/DDR architecture. This will also cause failure if there is split-DNS or internal-only data. Also, using \"public IP\" is not a good trustworthy method to prove that these IPs are truly the same Designated Resolver used by both networks. I can easily use an internal only zone with . IN A 8.8.8.8 and get a valid ACME'd certificate with SAN=. The user connecting to another network might now be switching from my private DNS on my stolen 8.8.8.8 IP to the real google DNS. address of the designating Unencrypted Resolver in a subjectAltName extension. 
How feasible is it to get a certificate with SAN=ip from one of the generally accepted root store CA's? Can you give me one example where I can get a certificate for with SAN=193.110.157.123 ? I do not think this is currently possible or easy. If I am right, that means all of Section 4.2 is wishful thinking and not realistic to get rolled out. (I know we can see the cert for 1.1.1.1, so it is possible, but I know of no ACME supported CA that issues these) discovered Designated Resolver. This creates a strange policy when compared to DNR where if you get a DHCP or RA for the Designated Resolver, which is also not validatable, that you do end up using it? And section 6.5 states DNR trumps DDR. address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (Section 4.3). Why only when the IP is the same? Why not for when the IP is different? What's to lose, since the only fallback option is stick to using the unencrypted resolver? yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Designated Resolver name. How does the appearance of an (unsigned) Additional Answer entry convey any kind of real world trust/authentication? In other words, why should it not ALWAYS pass the same TLS certificate checks ? is to use unencrypted DNS, why not just use it anyway? use DDR to discover details about all supported encrypted DNS protocols. It's a little odd to mention this as the DNR draft really tries hard to say to only use the network's offered encrypted DNS servers, so this option is in conflict with the DNR draft. Unfortunately, all currently deployed software does not know this, so it will be very common that this happens. 
URL SHOULD treat URL as a locally served zone per [RFC6303] Why is this not a MUST ? queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. I don't see how this would improve security for the client. It also mixes up behaviour of proper DNS clients that have their own retransmit logic for if they get no answer. attack where an attacker can block connections to the encrypted DNS server, and recommends that clients prevent it by switching to SVCB- reliant behavior once SVCB resolution does succeed. For DDR, this means that once a client discovers a compatible Designated Resolver, it SHOULD NOT use unencrypted DNS until the SVCB record expires I wonder which attacker can block encrypted DNS connections but not spoof (or block!) an unsigned SVCB record to the local client? And even if that is the case, this would be a denial of service attack if the DNS client cannot fallback to unencrypted DNS. Spoofing an SVCB to a bogus IP with an SVCB TTL of a few hours would be a very cheap 1 packet attack to keep the DNS client down for hours. This seems dangerous to implement. unencrypted DNS MUST NOT provide differentiated behavior based on the HTTP path alone, since an attacker could modify the \"dohpath\" parameter. For example, if a DoH resolver provides provides a filtering service for one URI path, and a non-filtered service for another URI path, [...] It seems likely that this advise will get ignored a lot, eg by people who have limited public IPs to use to spin up a DoH server, or who have to pay-per-IP. This advise seems more appropriate for local private IP DoH servers. So I would change this MUST NOT to SHOULD NOT for that reason. Unencrypted Resolver, clients applying Verified Discovery (Section 4.2) MUST validate that the IP address of the Unencrypted Resolver is covered by the SubjectAlternativeName of the Designated Resolver's TLS certificate. How would one obtain such a certificate via ACME? 
Since on the IP of the unencrypted resolver, there would be no HTTP server running? Additionally, if the client notices a failed verification of the Designated Resolver's TLS certificate, wouldn't it fallback to the Unencrypted Resolver? But now it may not? So this becomes a denial of service then ? to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. Based on earlier text, this should read \"the same non-public IP address\". you use \"unilateral probing\" (draft-ietf-dprive-unilateral-probing) to just connect to the DoT or DoQ ports of the Unencrypted Resolver and see if you get anything back. It might be unauthenticated TLS but better than port 53? Special-Use Domain Names (SUDN) registry established by [RFC6761]. Why not pick URL As .local is already handled to not be forwarded by DNS forwarders. It would also not require an addition to SUDN. Even if someone was already using URL, it would not interfere with that usage because that usage would not be using SVCB records. To avoid name lookup deadlock, Designated Resolvers SHOULD follow the guidance in Section 10 of [RFC8484] regarding the avoidance of DNS- based references that block the completion of the TLS handshake. I find that reference to list more the issues than solutions that can be followed. The solution to use another DoH server is not really appropriate in most cases The client's other interface likely resides on other networks where its private DoH server cannot resolve the name of another network's private DoH server. It is also not easy to get an IP based certificate (eg SAN=ip) that is accepted by general webpki root stores. And things like OCSP stapling doesn't help if the target is malicious - they just would omit the stapling that proves against its malicious use. The only real advise usable from RFC8484 is \"use the local unencrypted resolver to resolve the encrypted resolver\". Might as well say that and remove the 8484 reference. 
as the protocol is mentioned throughout the document as interacting with this protocol. when accessing web content on a dual-stack network: A, AAAA, and HTTPS queries. Discovering a Designated Resolver as part of one of these queries, without having to add yet another query, minimizes the total number of queries clients send. I don't understand this paragraph. Once clients can send SVCB queries for web content, they already have had to do all the DDR/DNR related queries, so I don't understand the argument of \"saving queries\" ? Also, it is adding queries for a specific target, eg URL, and this cannot be \"saved\" either? What am I misunderstanding here? provides provides -> provides\nWe should reference Security Considerations, because if you don't have any validation, an attacker can point you to any server they want (someone other than the server you were configured with by DHCP, etc).\nWe can probably just remove this, not using it in implementations right now.\nIt's already done by the major public resolvers, and there is an RFC to support this (RFC 8738). The IP address validation is clearly important and a matter of WG consensus. Hopefully, this will push adoption of RFC 8738 by more ACME services.\nWe can reference the DNR section. The MUST NOT should only apply to DDR, but doesn't invalidate other ways of using the resolver.\nIf they are different, it can be a random attacker (see security considerations). The client may also have other better options to fall back to.\nAgree, this should be simplified to say \"even if you connect over a different IP address for any reason, still validate the original IP address\"\nThere are definitely cases where the client can have directly selected their own resolver name, etc. We don't have to use the network resolver.\nCorrect, and that's why we need this document, and it was added to the IANA section.\nSure, we can make this a MUST.\nIt makes the attacker's job harder by requiring them to be persistent. 
Normal retransmission logic is fine too.\nThis is a direct outcome of the svcb-dns document. Spoofing SVCB shouldn't be enough to cause a DoS here, because we should continue to use the previously used encrypted DNS server while on this network (even if you get a bad SVCB packet).\nFor the security reasons, I think this needs to stay as a MUST NOT. However, if you differentiate based on hostname + path, you can still differentiate, it just isn't a differentiation that can be discovered by IP address queries.\nYes, you'd fall back to unencrypted resolution if you can't have a secure trusted connection for encrypted resolution.\nYes, but that part is SHOULD. Will fix.\nI think that's just opportunistic DoT / DoQ. That's already specified.\n.local is the SUDN for multicast DNS, but it is also squatted on by some internal networks. Following the patterns for URL, etc, .arpa is a much more appropriate name. This is what the group had consensus on years ago and has been deployed widely already.\nI think we can say \"as explained in RFC 8484, don't loop on the resolver name.\"\nSure, that's fine.\nYeah, this paragraph is old and can be removed.\nLooks good! Just silly editorial nits", "new_text": "policy or user input. Details of such policy are out of scope of this document. Clients MUST NOT automatically use a Designated Resolver without some sort of validation, such as the two methods defined in this document or a future mechanism. Use without validation can allow an attacker to direct traffic to an Encrypted Resolver that is unrelated to the original Unencrypted DNS Resolver, as described in security. A client MUST NOT re-use a designation discovered using the IP address of one Unencrypted DNS Resolver in place of any other"}
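The Verified Discovery check debated in the record above (the Unencrypted Resolver's IP address must be covered by the SubjectAlternativeName of the Designated Resolver's TLS certificate, per Section 4.2) can be sketched as follows. This is an illustrative sketch, not the draft's normative algorithm: the SAN tuple format mimics what Python's `ssl.SSLSocket.getpeercert()["subjectAltName"]` returns, and all names and addresses here are hypothetical documentation values.

```python
import ipaddress

def designation_is_verified(san_entries, unencrypted_resolver_ip):
    # DDR Verified Discovery (Section 4.2): accept the Designated
    # Resolver only if the IP address of the designating Unencrypted
    # Resolver appears as an IP-address SAN in the Designated
    # Resolver's (already PKI-validated) TLS certificate.
    target = ipaddress.ip_address(unencrypted_resolver_ip)
    for kind, value in san_entries:
        if kind == "IP Address" and ipaddress.ip_address(value) == target:
            return True
    return False

# Hypothetical SAN list in the shape returned by Python's
# ssl.SSLSocket.getpeercert()["subjectAltName"].
san = (("DNS", "resolver.example.net"),
       ("IP Address", "192.0.2.53"),
       ("IP Address", "2001:db8::53"))

print(designation_is_verified(san, "192.0.2.53"))    # True: IP is covered
print(designation_is_verified(san, "198.51.100.1"))  # False: do not auto-use
```

A client failing this check would fall back per the draft's guidance (unencrypted resolution, or another trusted configuration) rather than silently trusting the discovered server.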
{"id": "q-en-draft-ietf-add-ddr-295e6abcd5432cecb32dd21904c58acb6994b8cf14d71fac3faecae7351d1af6", "old_text": "address on multiple different networks, a Designated Resolver that has been discovered on one network SHOULD NOT be reused on any of the other networks without repeating the discovery process for each network. However, if a given Unencrypted DNS Resolver designates a Designated Resolver that does not use a private or local IP address and can be verified using the mechanism described in verified, it MAY be used on different network connections so long as the subsequent connections over other networks can also be successfully verified using the mechanism described in verified. This is a tradeoff between performance (by having no delay in establishing an encrypted DNS connection on the new network) and functionality (if the Unencrypted DNS Resolver intends to designate different Designated Resolvers based on the network from which clients connect). 4.2.", "comments": "DISCUSS: CC NAME See my discuss on draft-ietf-add-ddr-11 for my generic ADD DNR/DDR concerns Resolver without some sort of validation Why not? What is the alternative? Using unencrypted DNS to the party that told you how and where to contact these (unvalidated) Designated Resolvers. Resolver that does not use a private or local IP address and can be verified using the mechanism described in Section 4.2, it MAY be used on different network connections so long as the subsequent connections over other networks can also be successfully verified using the mechanism described in Section 4.2. This seems to go against a lot of advise in the documents that networks must be treated independently. Also, the network wants you to use their designated resolver, so to say it is okay to use a totally different Designated Resolvers seems to violate the whole DNR/DDR architecture. This will also cause failure if there is split-DNS or internal-only data. 
Also, using \"public IP\" is not a good trustworthy method to prove that these IPs are truly the same Designated Resolver used by both networks. I can easily use an internal only zone with . IN A 8.8.8.8 and get a valid ACME'd certificate with SAN=. The user connecting to another network might now be switching from my private DNS on my stolen 8.8.8.8 IP to the real google DNS. address of the designating Unencrypted Resolver in a subjectAltName extension. How feasible is it to get a certificate with SAN=ip from one of the generally accepted root store CA's? Can you give me one example where I can get a certificate for with SAN=193.110.157.123 ? I do not think this is currently possible or easy. If I am right, that means all of Section 4.2 is wishful thinking and not realistic to get rolled out. (I know we can see the cert for 1.1.1.1, so it is possible, but I know of no ACME supported CA that issues these) discovered Designated Resolver. This creates a strange policy when compared to DNR where if you get a DHCP or RA for the Designated Resolver, which is also not validatable, that you do end up using it? And section 6.5 states DNR trumps DDR. address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (Section 4.3). Why only when the IP is the same? Why not for when the IP is different? What's to lose, since the only fallback option is stick to using the unencrypted resolver? yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Designated Resolver name. How does the appearance of an (unsigned) Additional Answer entry convey any kind of real world trust/authentication? In other words, why should it not ALWAYS pass the same TLS certificate checks ? 
is to use unencrypted DNS, why not just use it anyway? use DDR to discover details about all supported encrypted DNS protocols. It's a little odd to mention this as the DNR draft really tries hard to say to only use the network's offered encrypted DNS servers, so this option is in conflict with the DNR draft. Unfortunately, all currently deployed software does not know this, so it will be very common that this happens. URL SHOULD treat URL as a locally served zone per [RFC6303] Why is this not a MUST ? queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. I don't see how this would improve security for the client. It also mixes up behaviour of proper DNS clients that have their own retransmit logic for if they get no answer. attack where an attacker can block connections to the encrypted DNS server, and recommends that clients prevent it by switching to SVCB- reliant behavior once SVCB resolution does succeed. For DDR, this means that once a client discovers a compatible Designated Resolver, it SHOULD NOT use unencrypted DNS until the SVCB record expires I wonder which attacker can block encrypted DNS connections but not spoof (or block!) an unsigned SVCB record to the local client? And even if that is the case, this would be a denial of service attack if the DNS client cannot fallback to unencrypted DNS. Spoofing an SVCB to a bogus IP with an SVCB TTL of a few hours would be a very cheap 1 packet attack to keep the DNS client down for hours. This seems dangerous to implement. unencrypted DNS MUST NOT provide differentiated behavior based on the HTTP path alone, since an attacker could modify the \"dohpath\" parameter. For example, if a DoH resolver provides provides a filtering service for one URI path, and a non-filtered service for another URI path, [...] 
It seems likely that this advise will get ignored a lot, eg by people who have limited public IPs to use to spin up a DoH server, or who have to pay-per-IP. This advise seems more appropriate for local private IP DoH servers. So I would change this MUST NOT to SHOULD NOT for that reason. Unencrypted Resolver, clients applying Verified Discovery (Section 4.2) MUST validate that the IP address of the Unencrypted Resolver is covered by the SubjectAlternativeName of the Designated Resolver's TLS certificate. How would one obtain such a certificate via ACME? Since on the IP of the unencrypted resolver, there would be no HTTP server running? Additionally, if the client notices a failed verification of the Designated Resolver's TLS certificate, wouldn't it fallback to the Unencrypted Resolver? But now it may not? So this becomes a denial of service then ? to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. Based on earlier text, this should read \"the same non-public IP address\". you use \"unilateral probing\" (draft-ietf-dprive-unilateral-probing) to just connect to the DoT or DoQ ports of the Unencrypted Resolver and see if you get anything back. It might be unauthenticated TLS but better than port 53? Special-Use Domain Names (SUDN) registry established by [RFC6761]. Why not pick URL As .local is already handled to not be forwarded by DNS forwarders. It would also not require an addition to SUDN. Even if someone was already using URL, it would not interfere with that usage because that usage would not be using SVCB records. To avoid name lookup deadlock, Designated Resolvers SHOULD follow the guidance in Section 10 of [RFC8484] regarding the avoidance of DNS- based references that block the completion of the TLS handshake. I find that reference to list more the issues than solutions that can be followed. 
The solution to use another DoH server is not really appropriate in most cases The client's other interface likely resides on other networks where its private DoH server cannot resolve the name of another network's private DoH server. It is also not easy to get an IP based certificate (eg SAN=ip) that is accepted by general webpki root stores. And things like OCSP stapling doesn't help if the target is malicious - they just would omit the stapling that proves against its malicious use. The only real advise usable from RFC8484 is \"use the local unencrypted resolver to resolve the encrypted resolver\". Might as well say that and remove the 8484 reference. as the protocol is mentioned throughout the document as interacting with this protocol. when accessing web content on a dual-stack network: A, AAAA, and HTTPS queries. Discovering a Designated Resolver as part of one of these queries, without having to add yet another query, minimizes the total number of queries clients send. I don't understand this paragraph. Once clients can send SVCB queries for web content, they already have had to do all the DDR/DNR related queries, so I don't understand the argument of \"saving queries\" ? Also, it is adding queries for a specific target, eg URL, and this cannot be \"saved\" either? What am I misunderstanding here? provides provides -> provides\nWe should reference Security Considerations, because if you don't have any validation, an attacker can point you to any server they want (someone other than the server you were configured with by DHCP, etc).\nWe can probably just remove this, not using it in implementations right now.\nIt's already done by the major public resolvers, and there is an RFC to support this (RFC 8738). The IP address validation is clearly important and a matter of WG consensus. Hopefully, this will push adoption of RFC 8738 by more ACME services.\nWe can reference the DNR section. 
The MUST NOT should only apply to DDR, but doesn't invalidate other ways of using the resolver.\nIf they are different, it can be a random attacker (see security considerations). The client may also have other better options to fall back to.\nAgree, this should be simplified to say \"even if you connect over a different IP address for any reason, still validate the original IP address\"\nThere are definitely cases where the client can have directly selected their own resolver name, etc. We don't have to use the network resolver.\nCorrect, and that's why we need this document, and it was added to the IANA section.\nSure, we can make this a MUST.\nIt makes the attacker's job harder by requiring them to be persistent. Normal retransmission logic is fine too.\nThis is a direct outcome of the svcb-dns document. Spoofing SVCB shouldn't be enough to cause a DoS here, because we should continue to use the previously used encrypted DNS server while on this network (even if you get a bad SVCB packet).\nFor the security reasons, I think this needs to stay as a MUST NOT. However, if you differentiate based on hostname + path, you can still differentiate, it just isn't a differentiation that can be discovered by IP address queries.\nYes, you'd fall back to unencrypted resolution if you can't have a secure trusted connection for encrypted resolution.\nYes, but that part is SHOULD. Will fix.\nI think that's just opportunistic DoT / DoQ. That's already specified.\n.local is the SUDN for multicast DNS, but it is also squatted on by some internal networks. Following the patterns for URL, etc, .arpa is a much more appropriate name. This is what the group had consensus on years ago and has been deployed widely already.\nI think we can say \"as explained in RFC 8484, don't loop on the resolver name.\"\nSure, that's fine.\nYeah, this paragraph is old and can be removed.\nLooks good! 
Just silly editorial nits", "new_text": "address on multiple different networks, a Designated Resolver that has been discovered on one network SHOULD NOT be reused on any of the other networks without repeating the discovery process for each network, since the same IP address may be used for different servers on the different networks. 4.2."}
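The record above says a designation discovered on one network SHOULD NOT be reused on another without repeating discovery, since the same resolver IP may name different servers on different networks. One way a client might enforce that is to key its discovery cache on a network identifier as well as the resolver address. A minimal sketch, assuming a client-chosen network identifier such as an SSID or interface name (the draft does not define one):

```python
class DesignationCache:
    """Per-network cache of DDR discoveries.  Keying on both a network
    identifier and the Unencrypted Resolver's IP means a designation
    learned on one network is never reused on another; a lookup miss
    signals that discovery must be repeated on the current network."""

    def __init__(self):
        self._entries = {}

    def store(self, network_id, resolver_ip, designated_resolver):
        self._entries[(network_id, resolver_ip)] = designated_resolver

    def lookup(self, network_id, resolver_ip):
        # None means: repeat the discovery process for this network.
        return self._entries.get((network_id, resolver_ip))

cache = DesignationCache()
cache.store("home-wifi", "192.168.1.1", "doh.example.net")
print(cache.lookup("home-wifi", "192.168.1.1"))    # reused on same network
print(cache.lookup("coffee-shop", "192.168.1.1"))  # None: rediscover
```

The `network_id` keying is an illustrative design choice for the SHOULD NOT above, not a mechanism the draft specifies.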
{"id": "q-en-draft-ietf-add-ddr-295e6abcd5432cecb32dd21904c58acb6994b8cf14d71fac3faecae7351d1af6", "old_text": "possible. If these checks fail, the client MUST NOT automatically use the discovered Designated Resolver. Additionally, the client SHOULD suppress any further queries for Designated Resolvers using this Unencrypted DNS Resolver for the length of time indicated by the SVCB record's Time to Live (TTL) in order to avoid excessive queries that", "comments": "DISCUSS: CC NAME See my discuss on draft-ietf-add-ddr-11 for my generic ADD DNR/DDR concerns Resolver without some sort of validation Why not? What is the alternative? Using unencrypted DNS to the party that told you how and where to contact these (unvalidated) Designated Resolvers. Resolver that does not use a private or local IP address and can be verified using the mechanism described in Section 4.2, it MAY be used on different network connections so long as the subsequent connections over other networks can also be successfully verified using the mechanism described in Section 4.2. This seems to go against a lot of advise in the documents that networks must be treated independently. Also, the network wants you to use their designated resolver, so to say it is okay to use a totally different Designated Resolvers seems to violate the whole DNR/DDR architecture. This will also cause failure if there is split-DNS or internal-only data. Also, using \"public IP\" is not a good trustworthy method to prove that these IPs are truly the same Designated Resolver used by both networks. I can easily use an internal only zone with . IN A 8.8.8.8 and get a valid ACME'd certificate with SAN=. The user connecting to another network might now be switching from my private DNS on my stolen 8.8.8.8 IP to the real google DNS. address of the designating Unencrypted Resolver in a subjectAltName extension. How feasible is it to get a certificate with SAN=ip from one of the generally accepted root store CA's? 
Can you give me one example where I can get a certificate for with SAN=193.110.157.123 ? I do not think this is currently possible or easy. If I am right, that means all of Section 4.2 is wishful thinking and not realistic to get rolled out. (I know we can see the cert for 1.1.1.1, so it is possible, but I know of no ACME supported CA that issues these) discovered Designated Resolver. This creates a strange policy when compared to DNR where if you get a DHCP or RA for the Designated Resolver, which is also not validatable, that you do end up using it? And section 6.5 states DNR trumps DDR. address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (Section 4.3). Why only when the IP is the same? Why not for when the IP is different? What's to lose, since the only fallback option is stick to using the unencrypted resolver? yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Designated Resolver name. How does the appearance of an (unsigned) Additional Answer entry convey any kind of real world trust/authentication? In other words, why should it not ALWAYS pass the same TLS certificate checks ? is to use unencrypted DNS, why not just use it anyway? use DDR to discover details about all supported encrypted DNS protocols. It's a little odd to mention this as the DNR draft really tries hard to say to only use the network's offered encrypted DNS servers, so this option is in conflict with the DNR draft. Unfortunately, all currently deployed software does not know this, so it will be very common that this happens. URL SHOULD treat URL as a locally served zone per [RFC6303] Why is this not a MUST ? 
queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. I don't see how this would improve security for the client. It also mixes up behaviour of proper DNS clients that have their own retransmit logic for if they get no answer. attack where an attacker can block connections to the encrypted DNS server, and recommends that clients prevent it by switching to SVCB- reliant behavior once SVCB resolution does succeed. For DDR, this means that once a client discovers a compatible Designated Resolver, it SHOULD NOT use unencrypted DNS until the SVCB record expires I wonder which attacker can block encrypted DNS connections but not spoof (or block!) an unsigned SVCB record to the local client? And even if that is the case, this would be a denial of service attack if the DNS client cannot fallback to unencrypted DNS. Spoofing an SVCB to a bogus IP with an SVCB TTL of a few hours would be a very cheap 1 packet attack to keep the DNS client down for hours. This seems dangerous to implement. unencrypted DNS MUST NOT provide differentiated behavior based on the HTTP path alone, since an attacker could modify the \"dohpath\" parameter. For example, if a DoH resolver provides provides a filtering service for one URI path, and a non-filtered service for another URI path, [...] It seems likely that this advise will get ignored a lot, eg by people who have limited public IPs to use to spin up a DoH server, or who have to pay-per-IP. This advise seems more appropriate for local private IP DoH servers. So I would change this MUST NOT to SHOULD NOT for that reason. Unencrypted Resolver, clients applying Verified Discovery (Section 4.2) MUST validate that the IP address of the Unencrypted Resolver is covered by the SubjectAlternativeName of the Designated Resolver's TLS certificate. How would one obtain such a certificate via ACME? Since on the IP of the unencrypted resolver, there would be no HTTP server running? 
Additionally, if the client notices a failed verification of the Designated Resolver's TLS certificate, wouldn't it fallback to the Unencrypted Resolver? But now it may not? So this becomes a denial of service then ? to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. Based on earlier text, this should read \"the same non-public IP address\". you use \"unilateral probing\" (draft-ietf-dprive-unilateral-probing) to just connect to the DoT or DoQ ports of the Unencrypted Resolver and see if you get anything back. It might be unauthenticated TLS but better than port 53? Special-Use Domain Names (SUDN) registry established by [RFC6761]. Why not pick URL As .local is already handled to not be forwarded by DNS forwarders. It would also not require an addition to SUDN. Even if someone was already using URL, it would not interfere with that usage because that usage would not be using SVCB records. To avoid name lookup deadlock, Designated Resolvers SHOULD follow the guidance in Section 10 of [RFC8484] regarding the avoidance of DNS- based references that block the completion of the TLS handshake. I find that reference to list more the issues than solutions that can be followed. The solution to use another DoH server is not really appropriate in most cases The client's other interface likely resides on other networks where its private DoH server cannot resolve the name of another network's private DoH server. It is also not easy to get an IP based certificate (eg SAN=ip) that is accepted by general webpki root stores. And things like OCSP stapling doesn't help if the target is malicious - they just would omit the stapling that proves against its malicious use. The only real advise usable from RFC8484 is \"use the local unencrypted resolver to resolve the encrypted resolver\". Might as well say that and remove the 8484 reference. as the protocol is mentioned throughout the document as interacting with this protocol. 
when accessing web content on a dual-stack network: A, AAAA, and HTTPS queries. Discovering a Designated Resolver as part of one of these queries, without having to add yet another query, minimizes the total number of queries clients send. I don't understand this paragraph. Once clients can send SVCB queries for web content, they already have had to do all the DDR/DNR related queries, so I don't understand the argument of \"saving queries\" ? Also, it is adding queries for a specific target, eg URL, and this cannot be \"saved\" either? What am I misunderstanding here? provides provides -> provides\nWe should reference Security Considerations, because if you don't have any validation, an attacker can point you to any server they want (someone other than the server you were configured with by DHCP, etc).\nWe can probably just remove this, not using it in implementations right now.\nIt's already done by the major public resolvers, and there is an RFC to support this (RFC 8738). The IP address validation is clearly important and a matter of WG consensus. Hopefully, this will push adoption of RFC 8738 by more ACME services.\nWe can reference the DNR section. The MUST NOT should only apply to DDR, but doesn't invalidate other ways of using the resolver.\nIf they are different, it can be a random attacker (see security considerations). The client may also have other better options to fall back to.\nAgree, this should be simplified to say \"even if you connect over a different IP address for any reason, still validate the original IP address\"\nThere are definitely cases where the client can have directly selected their own resolver name, etc. We don't have to use the network resolver.\nCorrect, and that's why we need this document, and it was added to the IANA section.\nSure, we can make this a MUST.\nIt makes the attacker's job harder by requiring them to be persistent. Normal retransmission logic is fine too.\nThis is a direct outcome of the svcb-dns document. 
Spoofing SVCB shouldn't be enough to cause a DoS here, because we should continue to use the previously used encrypted DNS server while on this network (even if you get a bad SVCB packet).\nFor the security reasons, I think this needs to stay as a MUST NOT. However, if you differentiate based on hostname + path, you can still differentiate, it just isn't a differentiation that can be discovered by IP address queries.\nYes, you'd fall back to unencrypted resolution if you can't have a secure trusted connection for encrypted resolution.\nYes, but that part is SHOULD. Will fix.\nI think that's just opportunistic DoT / DoQ. That's already specified.\n.local is the SUDN for multicast DNS, but it is also squatted on by some internal networks. Following the patterns for URL, etc, .arpa is a much more appropriate name. This is what the group had consensus on years ago and has been deployed widely already.\nI think we can say \"as explained in RFC 8484, don't loop on the resolver name.\"\nSure, that's fine.\nYeah, this paragraph is old and can be removed.\nLooks good! Just silly editorial nits", "new_text": "possible. If these checks fail, the client MUST NOT automatically use the discovered Designated Resolver if this designation was only discovered via a \"_dns.resolver.arpa.\" query (if the designation was advertised directly by the network as described in dnr-interaction, the server can still be used). Additionally, the client SHOULD suppress any further queries for Designated Resolvers using this Unencrypted DNS Resolver for the length of time indicated by the SVCB record's Time to Live (TTL) in order to avoid excessive queries that"}
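The TTL-based suppression described in the record above (after a failed validation, suppress further queries for Designated Resolvers via this Unencrypted Resolver for the SVCB record's TTL) can be sketched as a small bookkeeping class. The injectable clock is purely for illustration and testing; a real client would use a monotonic clock:

```python
class QuerySuppressor:
    """Tracks failed DDR validations so the client suppresses further
    discovery queries to the same Unencrypted Resolver for the SVCB
    record's Time to Live, avoiding excessive queries that will
    predictably keep failing."""

    def __init__(self, clock):
        self._clock = clock
        self._suppressed_until = {}

    def record_failure(self, resolver_ip, svcb_ttl_seconds):
        self._suppressed_until[resolver_ip] = self._clock() + svcb_ttl_seconds

    def may_query(self, resolver_ip):
        until = self._suppressed_until.get(resolver_ip)
        return until is None or self._clock() >= until

now = [1000.0]                                   # fake clock for the sketch
s = QuerySuppressor(clock=lambda: now[0])
s.record_failure("203.0.113.53", svcb_ttl_seconds=300)
print(s.may_query("203.0.113.53"))  # False: still within the SVCB TTL
now[0] += 301
print(s.may_query("203.0.113.53"))  # True: TTL expired, may retry discovery
```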
{"id": "q-en-draft-ietf-add-ddr-295e6abcd5432cecb32dd21904c58acb6994b8cf14d71fac3faecae7351d1af6", "old_text": "If the Designated Resolver and the Unencrypted DNS Resolver share an IP address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (opportunistic). If resolving the name of a Designated Resolver from an SVCB record yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Designated Resolver name. 4.3.", "comments": "DISCUSS: CC NAME See my discuss on draft-ietf-add-ddr-11 for my generic ADD DNR/DDR concerns Resolver without some sort of validation Why not? What is the alternative? Using unencrypted DNS to the party that told you how and where to contact these (unvalidated) Designated Resolvers. Resolver that does not use a private or local IP address and can be verified using the mechanism described in Section 4.2, it MAY be used on different network connections so long as the subsequent connections over other networks can also be successfully verified using the mechanism described in Section 4.2. This seems to go against a lot of advise in the documents that networks must be treated independently. Also, the network wants you to use their designated resolver, so to say it is okay to use a totally different Designated Resolvers seems to violate the whole DNR/DDR architecture. This will also cause failure if there is split-DNS or internal-only data. Also, using \"public IP\" is not a good trustworthy method to prove that these IPs are truly the same Designated Resolver used by both networks. I can easily use an internal only zone with . IN A 8.8.8.8 and get a valid ACME'd certificate with SAN=. 
The user connecting to another network might now be switching from my private DNS on my stolen 8.8.8.8 IP to the real google DNS. address of the designating Unencrypted Resolver in a subjectAltName extension. How feasible is it to get a certificate with SAN=ip from one of the generally accepted root store CA's? Can you give me one example where I can get a certificate for with SAN=193.110.157.123 ? I do not think this is currently possible or easy. If I am right, that means all of Section 4.2 is wishful thinking and not realistic to get rolled out. (I know we can see the cert for 1.1.1.1, so it is possible, but I know of no ACME supported CA that issues these) discovered Designated Resolver. This creates a strange policy when compared to DNR where if you get a DHCP or RA for the Designated Resolver, which is also not validatable, that you do end up using it? And section 6.5 states DNR trumps DDR. address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (Section 4.3). Why only when the IP is the same? Why not for when the IP is different? What's to lose, since the only fallback option is stick to using the unencrypted resolver? yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Designated Resolver name. How does the appearance of an (unsigned) Additional Answer entry convey any kind of real world trust/authentication? In other words, why should it not ALWAYS pass the same TLS certificate checks ? is to use unencrypted DNS, why not just use it anyway? use DDR to discover details about all supported encrypted DNS protocols. 
It's a little odd to mention this as the DNR draft really tries hard to say to only use the network's offered encrypted DNS servers, so this option is in conflict with the DNR draft. Unfortunately, all currently deployed software does not know this, so it will be very common that this happens. URL SHOULD treat URL as a locally served zone per [RFC6303] Why is this not a MUST ? queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. I don't see how this would improve security for the client. It also mixes up behaviour of proper DNS clients that have their own retransmit logic for if they get no answer. attack where an attacker can block connections to the encrypted DNS server, and recommends that clients prevent it by switching to SVCB- reliant behavior once SVCB resolution does succeed. For DDR, this means that once a client discovers a compatible Designated Resolver, it SHOULD NOT use unencrypted DNS until the SVCB record expires I wonder which attacker can block encrypted DNS connections but not spoof (or block!) an unsigned SVCB record to the local client? And even if that is the case, this would be a denial of service attack if the DNS client cannot fallback to unencrypted DNS. Spoofing an SVCB to a bogus IP with an SVCB TTL of a few hours would be a very cheap 1 packet attack to keep the DNS client down for hours. This seems dangerous to implement. unencrypted DNS MUST NOT provide differentiated behavior based on the HTTP path alone, since an attacker could modify the \"dohpath\" parameter. For example, if a DoH resolver provides provides a filtering service for one URI path, and a non-filtered service for another URI path, [...] It seems likely that this advise will get ignored a lot, eg by people who have limited public IPs to use to spin up a DoH server, or who have to pay-per-IP. This advise seems more appropriate for local private IP DoH servers. 
So I would change this MUST NOT to SHOULD NOT for that reason. Unencrypted Resolver, clients applying Verified Discovery (Section 4.2) MUST validate that the IP address of the Unencrypted Resolver is covered by the SubjectAlternativeName of the Designated Resolver's TLS certificate. How would one obtain such a certificate via ACME? Since on the IP of the unencrypted resolver, there would be no HTTP server running? Additionally, if the client notices a failed verification of the Designated Resolver's TLS certificate, wouldn't it fall back to the Unencrypted Resolver? But now it may not? So this becomes a denial of service then? to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. Based on earlier text, this should read \"the same non-public IP address\". you use \"unilateral probing\" (draft-ietf-dprive-unilateral-probing) to just connect to the DoT or DoQ ports of the Unencrypted Resolver and see if you get anything back. It might be unauthenticated TLS but better than port 53? Special-Use Domain Names (SUDN) registry established by [RFC6761]. Why not pick URL? As .local is already handled to not be forwarded by DNS forwarders. It would also not require an addition to SUDN. Even if someone was already using URL, it would not interfere with that usage because that usage would not be using SVCB records. To avoid name lookup deadlock, Designated Resolvers SHOULD follow the guidance in Section 10 of [RFC8484] regarding the avoidance of DNS-based references that block the completion of the TLS handshake. I find that reference lists more of the issues than solutions that can be followed. The solution to use another DoH server is not really appropriate in most cases. The client's other interface likely resides on other networks where its private DoH server cannot resolve the name of another network's private DoH server. It is also not easy to get an IP-based certificate (e.g. SAN=ip) that is accepted by general webpki root stores.
And things like OCSP stapling don't help if the target is malicious - they just would omit the stapling that proves against its malicious use. The only real advice usable from RFC8484 is \"use the local unencrypted resolver to resolve the encrypted resolver\". Might as well say that and remove the 8484 reference. as the protocol is mentioned throughout the document as interacting with this protocol. when accessing web content on a dual-stack network: A, AAAA, and HTTPS queries. Discovering a Designated Resolver as part of one of these queries, without having to add yet another query, minimizes the total number of queries clients send. I don't understand this paragraph. Once clients can send SVCB queries for web content, they already have had to do all the DDR/DNR-related queries, so I don't understand the argument of \"saving queries\"? Also, it is adding queries for a specific target, e.g. URL, and this cannot be \"saved\" either? What am I misunderstanding here? provides provides -> provides\nWe should reference Security Considerations, because if you don't have any validation, an attacker can point you to any server they want (someone other than the server you were configured with by DHCP, etc).\nWe can probably just remove this, not using it in implementations right now.\nIt's already done by the major public resolvers, and there is an RFC to support this (RFC 8738). The IP address validation is clearly important and a matter of WG consensus. Hopefully, this will push adoption of RFC 8738 by more ACME services.\nWe can reference the DNR section. The MUST NOT should only apply to DDR, but doesn't invalidate other ways of using the resolver.\nIf they are different, it can be a random attacker (see security considerations).
The client may also have other better options to fall back to.\nAgree, this should be simplified to say \"even if you connect over a different IP address for any reason, still validate the original IP address\"\nThere are definitely cases where the client can have directly selected their own resolver name, etc. We don't have to use the network resolver.\nCorrect, and that's why we need this document, and it was added to the IANA section.\nSure, we can make this a MUST.\nIt makes the attacker's job harder by requiring them to be persistent. Normal retransmission logic is fine too.\nThis is a direct outcome of the svcb-dns document. Spoofing SVCB shouldn't be enough to cause a DoS here, because we should continue to use the previously used encrypted DNS server while on this network (even if you get a bad SVCB packet).\nFor the security reasons, I think this needs to stay as a MUST NOT. However, if you differentiate based on hostname + path, you can still differentiate, it just isn't a differentiation that can be discovered by IP address queries.\nYes, you'd fall back to unencrypted resolution if you can't have a secure trusted connection for encrypted resolution.\nYes, but that part is SHOULD. Will fix.\nI think that's just opportunistic DoT / DoQ. That's already specified.\n.local is the SUDN for multicast DNS, but it is also squatted on by some internal networks. Following the patterns for URL, etc, .arpa is a much more appropriate name. This is what the group had consensus on years ago and has been deployed widely already.\nI think we can say \"as explained in RFC 8484, don't loop on the resolver name.\"\nSure, that's fine.\nYeah, this paragraph is old and can be removed.\nLooks good! Just silly editorial nits", "new_text": "If the Designated Resolver and the Unencrypted DNS Resolver share an IP address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (opportunistic). 
If the IP address is not shared, opportunistic use allows for attackers to redirect queries to an unrelated Encrypted Resolver, as described in security. Connections to a Designated Resolver can use a different IP address than the IP address of the Unencrypted DNS Resolver, such as if the process of resolving the SVCB service yields additional addresses. Even when a different IP address is used for the connection, the TLS certificate checks described in this section still apply for the original IP address of the Unencrypted DNS Resolver. 4.3."}
{"id": "q-en-draft-ietf-add-ddr-295e6abcd5432cecb32dd21904c58acb6994b8cf14d71fac3faecae7351d1af6", "old_text": "6.4. DNS resolvers that support DDR by responding to queries for _dns.resolver.arpa SHOULD treat resolver.arpa as a locally served zone per RFC6303. In practice, this means that resolvers SHOULD respond to queries of any type other than SVCB for _dns.resolver.arpa with NODATA and queries of any type for any domain name under resolver.arpa with NODATA. 6.5.", "comments": "DISCUSS: CC NAME See my discuss on draft-ietf-add-ddr-11 for my generic ADD DNR/DDR concerns Resolver without some sort of validation Why not? What is the alternative? Using unencrypted DNS to the party that told you how and where to contact these (unvalidated) Designated Resolvers. Resolver that does not use a private or local IP address and can be verified using the mechanism described in Section 4.2, it MAY be used on different network connections so long as the subsequent connections over other networks can also be successfully verified using the mechanism described in Section 4.2. This seems to go against a lot of advice in the documents that networks must be treated independently. Also, the network wants you to use their designated resolver, so to say it is okay to use a totally different Designated Resolver seems to violate the whole DNR/DDR architecture. This will also cause failure if there is split-DNS or internal-only data. Also, using \"public IP\" is not a good trustworthy method to prove that these IPs are truly the same Designated Resolver used by both networks. I can easily use an internal-only zone with . IN A 8.8.8.8 and get a valid ACME'd certificate with SAN=.
Can you give me one example where I can get a certificate for with SAN=193.110.157.123 ? I do not think this is currently possible or easy. If I am right, that means all of Section 4.2 is wishful thinking and not realistic to get rolled out. (I know we can see the cert for 1.1.1.1, so it is possible, but I know of no ACME supported CA that issues these) discovered Designated Resolver. This creates a strange policy when compared to DNR where if you get a DHCP or RA for the Designated Resolver, which is also not validatable, that you do end up using it? And section 6.5 states DNR trumps DDR. address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (Section 4.3). Why only when the IP is the same? Why not for when the IP is different? What's to lose, since the only fallback option is stick to using the unencrypted resolver? yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Designated Resolver name. How does the appearance of an (unsigned) Additional Answer entry convey any kind of real world trust/authentication? In other words, why should it not ALWAYS pass the same TLS certificate checks ? is to use unencrypted DNS, why not just use it anyway? use DDR to discover details about all supported encrypted DNS protocols. It's a little odd to mention this as the DNR draft really tries hard to say to only use the network's offered encrypted DNS servers, so this option is in conflict with the DNR draft. Unfortunately, all currently deployed software does not know this, so it will be very common that this happens. URL SHOULD treat URL as a locally served zone per [RFC6303] Why is this not a MUST ? 
queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. I don't see how this would improve security for the client. It also mixes up behaviour of proper DNS clients that have their own retransmit logic for when they get no answer. attack where an attacker can block connections to the encrypted DNS server, and recommends that clients prevent it by switching to SVCB-reliant behavior once SVCB resolution does succeed. For DDR, this means that once a client discovers a compatible Designated Resolver, it SHOULD NOT use unencrypted DNS until the SVCB record expires I wonder which attacker can block encrypted DNS connections but not spoof (or block!) an unsigned SVCB record to the local client? And even if that is the case, this would be a denial of service attack if the DNS client cannot fall back to unencrypted DNS. Spoofing an SVCB to a bogus IP with an SVCB TTL of a few hours would be a very cheap one-packet attack to keep the DNS client down for hours. This seems dangerous to implement. unencrypted DNS MUST NOT provide differentiated behavior based on the HTTP path alone, since an attacker could modify the \"dohpath\" parameter. For example, if a DoH resolver provides provides a filtering service for one URI path, and a non-filtered service for another URI path, [...] It seems likely that this advice will get ignored a lot, e.g. by people who have limited public IPs to use to spin up a DoH server, or who have to pay-per-IP. This advice seems more appropriate for local private IP DoH servers. So I would change this MUST NOT to SHOULD NOT for that reason. Unencrypted Resolver, clients applying Verified Discovery (Section 4.2) MUST validate that the IP address of the Unencrypted Resolver is covered by the SubjectAlternativeName of the Designated Resolver's TLS certificate. How would one obtain such a certificate via ACME? Since on the IP of the unencrypted resolver, there would be no HTTP server running?
Additionally, if the client notices a failed verification of the Designated Resolver's TLS certificate, wouldn't it fall back to the Unencrypted Resolver? But now it may not? So this becomes a denial of service then? to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. Based on earlier text, this should read \"the same non-public IP address\". you use \"unilateral probing\" (draft-ietf-dprive-unilateral-probing) to just connect to the DoT or DoQ ports of the Unencrypted Resolver and see if you get anything back. It might be unauthenticated TLS but better than port 53? Special-Use Domain Names (SUDN) registry established by [RFC6761]. Why not pick URL? As .local is already handled to not be forwarded by DNS forwarders. It would also not require an addition to SUDN. Even if someone was already using URL, it would not interfere with that usage because that usage would not be using SVCB records. To avoid name lookup deadlock, Designated Resolvers SHOULD follow the guidance in Section 10 of [RFC8484] regarding the avoidance of DNS-based references that block the completion of the TLS handshake. I find that reference lists more of the issues than solutions that can be followed. The solution to use another DoH server is not really appropriate in most cases. The client's other interface likely resides on other networks where its private DoH server cannot resolve the name of another network's private DoH server. It is also not easy to get an IP-based certificate (e.g. SAN=ip) that is accepted by general webpki root stores. And things like OCSP stapling don't help if the target is malicious - they just would omit the stapling that proves against its malicious use. The only real advice usable from RFC8484 is \"use the local unencrypted resolver to resolve the encrypted resolver\". Might as well say that and remove the 8484 reference. as the protocol is mentioned throughout the document as interacting with this protocol.
when accessing web content on a dual-stack network: A, AAAA, and HTTPS queries. Discovering a Designated Resolver as part of one of these queries, without having to add yet another query, minimizes the total number of queries clients send. I don't understand this paragraph. Once clients can send SVCB queries for web content, they already have had to do all the DDR/DNR-related queries, so I don't understand the argument of \"saving queries\"? Also, it is adding queries for a specific target, e.g. URL, and this cannot be \"saved\" either? What am I misunderstanding here? provides provides -> provides\nWe should reference Security Considerations, because if you don't have any validation, an attacker can point you to any server they want (someone other than the server you were configured with by DHCP, etc).\nWe can probably just remove this, not using it in implementations right now.\nIt's already done by the major public resolvers, and there is an RFC to support this (RFC 8738). The IP address validation is clearly important and a matter of WG consensus. Hopefully, this will push adoption of RFC 8738 by more ACME services.\nWe can reference the DNR section. The MUST NOT should only apply to DDR, but doesn't invalidate other ways of using the resolver.\nIf they are different, it can be a random attacker (see security considerations). The client may also have other better options to fall back to.\nAgree, this should be simplified to say \"even if you connect over a different IP address for any reason, still validate the original IP address\"\nThere are definitely cases where the client can have directly selected their own resolver name, etc. We don't have to use the network resolver.\nCorrect, and that's why we need this document, and it was added to the IANA section.\nSure, we can make this a MUST.\nIt makes the attacker's job harder by requiring them to be persistent. Normal retransmission logic is fine too.\nThis is a direct outcome of the svcb-dns document.
Spoofing SVCB shouldn't be enough to cause a DoS here, because we should continue to use the previously used encrypted DNS server while on this network (even if you get a bad SVCB packet).\nFor the security reasons, I think this needs to stay as a MUST NOT. However, if you differentiate based on hostname + path, you can still differentiate, it just isn't a differentiation that can be discovered by IP address queries.\nYes, you'd fall back to unencrypted resolution if you can't have a secure trusted connection for encrypted resolution.\nYes, but that part is SHOULD. Will fix.\nI think that's just opportunistic DoT / DoQ. That's already specified.\n.local is the SUDN for multicast DNS, but it is also squatted on by some internal networks. Following the patterns for URL, etc, .arpa is a much more appropriate name. This is what the group had consensus on years ago and has been deployed widely already.\nI think we can say \"as explained in RFC 8484, don't loop on the resolver name.\"\nSure, that's fine.\nYeah, this paragraph is old and can be removed.\nLooks good! Just silly editorial nits", "new_text": "6.4. DNS resolvers that support DDR by responding to queries for _dns.resolver.arpa MUST treat resolver.arpa as a locally served zone per RFC6303. In practice, this means that resolvers SHOULD respond to queries of any type other than SVCB for _dns.resolver.arpa with NODATA and queries of any type for any domain name under resolver.arpa with NODATA. 6.5."}
{"id": "q-en-draft-ietf-add-ddr-295e6abcd5432cecb32dd21904c58acb6994b8cf14d71fac3faecae7351d1af6", "old_text": "Section 8.2 of I-D.ietf-add-svcb-dns describes a second downgrade attack where an attacker can block connections to the encrypted DNS server, and recommends that clients prevent it by switching to SVCB- reliant behavior once SVCB resolution does succeed. For DDR, this means that once a client discovers a compatible Designated Resolver, it SHOULD NOT use unencrypted DNS until the SVCB record expires, unless verification of the resolver fails. DoH resolvers that allow discovery using DNS SVCB answers over unencrypted DNS MUST NOT provide differentiated behavior based on the", "comments": "DISCUSS: CC NAME See my discuss on draft-ietf-add-ddr-11 for my generic ADD DNR/DDR concerns Resolver without some sort of validation Why not? What is the alternative? Using unencrypted DNS to the party that told you how and where to contact these (unvalidated) Designated Resolvers. Resolver that does not use a private or local IP address and can be verified using the mechanism described in Section 4.2, it MAY be used on different network connections so long as the subsequent connections over other networks can also be successfully verified using the mechanism described in Section 4.2. This seems to go against a lot of advice in the documents that networks must be treated independently. Also, the network wants you to use their designated resolver, so to say it is okay to use a totally different Designated Resolver seems to violate the whole DNR/DDR architecture. This will also cause failure if there is split-DNS or internal-only data. Also, using \"public IP\" is not a good trustworthy method to prove that these IPs are truly the same Designated Resolver used by both networks. I can easily use an internal-only zone with . IN A 8.8.8.8 and get a valid ACME'd certificate with SAN=.
The user connecting to another network might now be switching from my private DNS on my stolen 8.8.8.8 IP to the real google DNS. address of the designating Unencrypted Resolver in a subjectAltName extension. How feasible is it to get a certificate with SAN=ip from one of the generally accepted root store CAs? Can you give me one example where I can get a certificate for with SAN=193.110.157.123? I do not think this is currently possible or easy. If I am right, that means all of Section 4.2 is wishful thinking and not realistic to get rolled out. (I know we can see the cert for 1.1.1.1, so it is possible, but I know of no ACME-supported CA that issues these) discovered Designated Resolver. This creates a strange policy when compared to DNR where if you get a DHCP or RA for the Designated Resolver, which is also not validatable, that you do end up using it? And section 6.5 states DNR trumps DDR. address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (Section 4.3). Why only when the IP is the same? Why not for when the IP is different? What's to lose, since the only fallback option is to stick to using the unencrypted resolver? yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Designated Resolver name. How does the appearance of an (unsigned) Additional Answer entry convey any kind of real-world trust/authentication? In other words, why should it not ALWAYS pass the same TLS certificate checks? is to use unencrypted DNS, why not just use it anyway? use DDR to discover details about all supported encrypted DNS protocols.
It's a little odd to mention this as the DNR draft really tries hard to say to only use the network's offered encrypted DNS servers, so this option is in conflict with the DNR draft. Unfortunately, all currently deployed software does not know this, so it will be very common that this happens. URL SHOULD treat URL as a locally served zone per [RFC6303] Why is this not a MUST? queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. I don't see how this would improve security for the client. It also mixes up behaviour of proper DNS clients that have their own retransmit logic for when they get no answer. attack where an attacker can block connections to the encrypted DNS server, and recommends that clients prevent it by switching to SVCB-reliant behavior once SVCB resolution does succeed. For DDR, this means that once a client discovers a compatible Designated Resolver, it SHOULD NOT use unencrypted DNS until the SVCB record expires I wonder which attacker can block encrypted DNS connections but not spoof (or block!) an unsigned SVCB record to the local client? And even if that is the case, this would be a denial of service attack if the DNS client cannot fall back to unencrypted DNS. Spoofing an SVCB to a bogus IP with an SVCB TTL of a few hours would be a very cheap one-packet attack to keep the DNS client down for hours. This seems dangerous to implement. unencrypted DNS MUST NOT provide differentiated behavior based on the HTTP path alone, since an attacker could modify the \"dohpath\" parameter. For example, if a DoH resolver provides provides a filtering service for one URI path, and a non-filtered service for another URI path, [...] It seems likely that this advice will get ignored a lot, e.g. by people who have limited public IPs to use to spin up a DoH server, or who have to pay-per-IP. This advice seems more appropriate for local private IP DoH servers.
So I would change this MUST NOT to SHOULD NOT for that reason. Unencrypted Resolver, clients applying Verified Discovery (Section 4.2) MUST validate that the IP address of the Unencrypted Resolver is covered by the SubjectAlternativeName of the Designated Resolver's TLS certificate. How would one obtain such a certificate via ACME? Since on the IP of the unencrypted resolver, there would be no HTTP server running? Additionally, if the client notices a failed verification of the Designated Resolver's TLS certificate, wouldn't it fall back to the Unencrypted Resolver? But now it may not? So this becomes a denial of service then? to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. Based on earlier text, this should read \"the same non-public IP address\". you use \"unilateral probing\" (draft-ietf-dprive-unilateral-probing) to just connect to the DoT or DoQ ports of the Unencrypted Resolver and see if you get anything back. It might be unauthenticated TLS but better than port 53? Special-Use Domain Names (SUDN) registry established by [RFC6761]. Why not pick URL? As .local is already handled to not be forwarded by DNS forwarders. It would also not require an addition to SUDN. Even if someone was already using URL, it would not interfere with that usage because that usage would not be using SVCB records. To avoid name lookup deadlock, Designated Resolvers SHOULD follow the guidance in Section 10 of [RFC8484] regarding the avoidance of DNS-based references that block the completion of the TLS handshake. I find that reference lists more of the issues than solutions that can be followed. The solution to use another DoH server is not really appropriate in most cases. The client's other interface likely resides on other networks where its private DoH server cannot resolve the name of another network's private DoH server. It is also not easy to get an IP-based certificate (e.g. SAN=ip) that is accepted by general webpki root stores.
And things like OCSP stapling don't help if the target is malicious - they just would omit the stapling that proves against its malicious use. The only real advice usable from RFC8484 is \"use the local unencrypted resolver to resolve the encrypted resolver\". Might as well say that and remove the 8484 reference. as the protocol is mentioned throughout the document as interacting with this protocol. when accessing web content on a dual-stack network: A, AAAA, and HTTPS queries. Discovering a Designated Resolver as part of one of these queries, without having to add yet another query, minimizes the total number of queries clients send. I don't understand this paragraph. Once clients can send SVCB queries for web content, they already have had to do all the DDR/DNR-related queries, so I don't understand the argument of \"saving queries\"? Also, it is adding queries for a specific target, e.g. URL, and this cannot be \"saved\" either? What am I misunderstanding here? provides provides -> provides\nWe should reference Security Considerations, because if you don't have any validation, an attacker can point you to any server they want (someone other than the server you were configured with by DHCP, etc).\nWe can probably just remove this, not using it in implementations right now.\nIt's already done by the major public resolvers, and there is an RFC to support this (RFC 8738). The IP address validation is clearly important and a matter of WG consensus. Hopefully, this will push adoption of RFC 8738 by more ACME services.\nWe can reference the DNR section. The MUST NOT should only apply to DDR, but doesn't invalidate other ways of using the resolver.\nIf they are different, it can be a random attacker (see security considerations).
The client may also have other better options to fall back to.\nAgree, this should be simplified to say \"even if you connect over a different IP address for any reason, still validate the original IP address\"\nThere are definitely cases where the client can have directly selected their own resolver name, etc. We don't have to use the network resolver.\nCorrect, and that's why we need this document, and it was added to the IANA section.\nSure, we can make this a MUST.\nIt makes the attacker's job harder by requiring them to be persistent. Normal retransmission logic is fine too.\nThis is a direct outcome of the svcb-dns document. Spoofing SVCB shouldn't be enough to cause a DoS here, because we should continue to use the previously used encrypted DNS server while on this network (even if you get a bad SVCB packet).\nFor the security reasons, I think this needs to stay as a MUST NOT. However, if you differentiate based on hostname + path, you can still differentiate, it just isn't a differentiation that can be discovered by IP address queries.\nYes, you'd fall back to unencrypted resolution if you can't have a secure trusted connection for encrypted resolution.\nYes, but that part is SHOULD. Will fix.\nI think that's just opportunistic DoT / DoQ. That's already specified.\n.local is the SUDN for multicast DNS, but it is also squatted on by some internal networks. Following the patterns for URL, etc, .arpa is a much more appropriate name. This is what the group had consensus on years ago and has been deployed widely already.\nI think we can say \"as explained in RFC 8484, don't loop on the resolver name.\"\nSure, that's fine.\nYeah, this paragraph is old and can be removed.\nLooks good! Just silly editorial nits", "new_text": "Section 8.2 of I-D.ietf-add-svcb-dns describes a second downgrade attack where an attacker can block connections to the encrypted DNS server. 
For DDR, clients need to validate a Designated Resolver using a connection to the server before trusting it, so attackers that can block these connections can prevent clients from switching to use encrypted DNS. DoH resolvers that allow discovery using DNS SVCB answers over unencrypted DNS MUST NOT provide differentiated behavior based on the"}
{"id": "q-en-draft-ietf-add-ddr-295e6abcd5432cecb32dd21904c58acb6994b8cf14d71fac3faecae7351d1af6", "old_text": "client or network. If the IP address of a Designated Resolver differs from that of an Unencrypted DNS Resolver, clients applying Verified Discovery (verified) MUST validate that the IP address of the Unencrypted DNS Resolver is covered by the SubjectAlternativeName of the Designated", "comments": "DISCUSS: CC NAME See my discuss on draft-ietf-add-ddr-11 for my generic ADD DNR/DDR concerns Resolver without some sort of validation Why not? What is the alternative? Using unencrypted DNS to the party that told you how and where to contact these (unvalidated) Designated Resolvers. Resolver that does not use a private or local IP address and can be verified using the mechanism described in Section 4.2, it MAY be used on different network connections so long as the subsequent connections over other networks can also be successfully verified using the mechanism described in Section 4.2. This seems to go against a lot of advice in the documents that networks must be treated independently. Also, the network wants you to use their designated resolver, so to say it is okay to use a totally different Designated Resolver seems to violate the whole DNR/DDR architecture. This will also cause failure if there is split-DNS or internal-only data. Also, using \"public IP\" is not a good trustworthy method to prove that these IPs are truly the same Designated Resolver used by both networks. I can easily use an internal-only zone with . IN A 8.8.8.8 and get a valid ACME'd certificate with SAN=. The user connecting to another network might now be switching from my private DNS on my stolen 8.8.8.8 IP to the real google DNS. address of the designating Unencrypted Resolver in a subjectAltName extension. How feasible is it to get a certificate with SAN=ip from one of the generally accepted root store CAs?
Can you give me one example where I can get a certificate for with SAN=193.110.157.123? I do not think this is currently possible or easy. If I am right, that means all of Section 4.2 is wishful thinking and not realistic to get rolled out. (I know we can see the cert for 1.1.1.1, so it is possible, but I know of no ACME-supported CA that issues these) discovered Designated Resolver. This creates a strange policy when compared to DNR where if you get a DHCP or RA for the Designated Resolver, which is also not validatable, that you do end up using it? And section 6.5 states DNR trumps DDR. address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (Section 4.3). Why only when the IP is the same? Why not for when the IP is different? What's to lose, since the only fallback option is to stick to using the unencrypted resolver? yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Designated Resolver name. How does the appearance of an (unsigned) Additional Answer entry convey any kind of real-world trust/authentication? In other words, why should it not ALWAYS pass the same TLS certificate checks? is to use unencrypted DNS, why not just use it anyway? use DDR to discover details about all supported encrypted DNS protocols. It's a little odd to mention this as the DNR draft really tries hard to say to only use the network's offered encrypted DNS servers, so this option is in conflict with the DNR draft. Unfortunately, all currently deployed software does not know this, so it will be very common that this happens. URL SHOULD treat URL as a locally served zone per [RFC6303] Why is this not a MUST?
queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. I don't see how this would improve security for the client. It also mixes up behaviour of proper DNS clients that have their own retransmit logic for if they get no answer. attack where an attacker can block connections to the encrypted DNS server, and recommends that clients prevent it by switching to SVCB- reliant behavior once SVCB resolution does succeed. For DDR, this means that once a client discovers a compatible Designated Resolver, it SHOULD NOT use unencrypted DNS until the SVCB record expires I wonder which attacker can block encrypted DNS connections but not spoof (or block!) an unsigned SVCB record to the local client? And even if that is the case, this would be a denial of service attack if the DNS client cannot fallback to unencrypted DNS. Spoofing an SVCB to a bogus IP with an SVCB TTL of a few hours would be a very cheap 1 packet attack to keep the DNS client down for hours. This seems dangerous to implement. unencrypted DNS MUST NOT provide differentiated behavior based on the HTTP path alone, since an attacker could modify the \"dohpath\" parameter. For example, if a DoH resolver provides provides a filtering service for one URI path, and a non-filtered service for another URI path, [...] It seems likely that this advise will get ignored a lot, eg by people who have limited public IPs to use to spin up a DoH server, or who have to pay-per-IP. This advise seems more appropriate for local private IP DoH servers. So I would change this MUST NOT to SHOULD NOT for that reason. Unencrypted Resolver, clients applying Verified Discovery (Section 4.2) MUST validate that the IP address of the Unencrypted Resolver is covered by the SubjectAlternativeName of the Designated Resolver's TLS certificate. How would one obtain such a certificate via ACME? Since on the IP of the unencrypted resolver, there would be no HTTP server running? 
Additionally, if the client notices a failed verification of the Designated Resolver's TLS certificate, wouldn't it fallback to the Unencrypted Resolver? But now it may not? So this becomes a denial of service then ? to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. Based on earlier text, this should read \"the same non-public IP address\". you use \"unilateral probing\" (draft-ietf-dprive-unilateral-probing) to just connect to the DoT or DoQ ports of the Unencrypted Resolver and see if you get anything back. It might be unauthenticated TLS but better than port 53? Special-Use Domain Names (SUDN) registry established by [RFC6761]. Why not pick URL As .local is already handled to not be forwarded by DNS forwarders. It would also not require an addition to SUDN. Even if someone was already using URL, it would not interfere with that usage because that usage would not be using SVCB records. To avoid name lookup deadlock, Designated Resolvers SHOULD follow the guidance in Section 10 of [RFC8484] regarding the avoidance of DNS- based references that block the completion of the TLS handshake. I find that reference to list more the issues than solutions that can be followed. The solution to use another DoH server is not really appropriate in most cases The client's other interface likely resides on other networks where its private DoH server cannot resolve the name of another network's private DoH server. It is also not easy to get an IP based certificate (eg SAN=ip) that is accepted by general webpki root stores. And things like OCSP stapling doesn't help if the target is malicious - they just would omit the stapling that proves against its malicious use. The only real advise usable from RFC8484 is \"use the local unencrypted resolver to resolve the encrypted resolver\". Might as well say that and remove the 8484 reference. as the protocol is mentioned throughout the document as interacting with this protocol. 
when accessing web content on a dual-stack network: A, AAAA, and HTTPS queries. Discovering a Designated Resolver as part of one of these queries, without having to add yet another query, minimizes the total number of queries clients send. I don't understand this paragraph. Once clients can send SVCB queries for web content, they already have had to do all the DDR/DNR related queries, so I don't understand the argument of \"saving queries\" ? Also, it is adding queries for a specific target, eg URL, and this cannot be \"saved\" either? What am I misunderstanding here? provides provides -> provides\nWe should reference Security Considerations, because if you don't have any validation, an attacker can point you to any server they want (someone other than the server you were configured with by DHCP, etc).\nWe can probably just remove this, not using it in implementations right now.\nIt's already done by the major public resolvers, and there is an RFC to support this (RFC 8738). The IP address validation is clearly important and a matter of WG consensus. Hopefully, this will push adoption of RFC 8738 by more ACME services.\nWe can reference the DNR section. The MUST NOT should only apply to DDR, but doesn't invalidate other ways of using the resolver.\nIf they are different, it can be a random attacker (see security considerations). The client may also have other better options to fall back to.\nAgree, this should be simplified to say \"even if you connect over a different IP address for any reason, still validate the original IP address\"\nThere are definitely cases where the client can have directly selected their own resolver name, etc. We don't have to use the network resolver.\nCorrect, and that's why we need this document, and it was added to the IANA section.\nSure, we can make this a MUST.\nIt makes the attacker's job harder by requiring them to be persistent. Normal retransmission logic is fine too.\nThis is a direct outcome of the svcb-dns document. 
Spoofing SVCB shouldn't be enough to cause a DoS here, because we should continue to use the previously used encrypted DNS server while on this network (even if you get a bad SVCB packet).\nFor the security reasons, I think this needs to stay as a MUST NOT. However, if you differentiate based on hostname + path, you can still differentiate, it just isn't a differentiation that can be discovered by IP address queries.\nYes, you'd fall back to unencrypted resolution if you can't have a secure trusted connection for encrypted resolution.\nYes, but that part is SHOULD. Will fix.\nI think that's just opportunistic DoT / DoQ. That's already specified.\n.local is the SUDN for multicast DNS, but it is also squatted on by some internal networks. Following the patterns for URL, etc, .arpa is a much more appropriate name. This is what the group had consensus on years ago and has been deployed widely already.\nI think we can say \"as explained in RFC 8484, don't loop on the resolver name.\"\nSure, that's fine.\nYeah, this paragraph is old and can be removed.\nLooks good! Just silly editorial nits", "new_text": "client or network. If the IP address of a Designated Resolver differs from that of an Unencrypted DNS Resolver, clients applying Verified Discovery (verified) MUST validate that the IP address of the Unencrypted DNS Resolver is covered by the SubjectAlternativeName of the Designated"}
{"id": "q-en-draft-ietf-add-ddr-295e6abcd5432cecb32dd21904c58acb6994b8cf14d71fac3faecae7351d1af6", "old_text": "Clients using Opportunistic Discovery (opportunistic) MUST be limited to cases where the Unencrypted DNS Resolver and Designated Resolver have the same IP address. Clients which do not follow Opportunistic Discovery (opportunistic) and instead try to connect without first checking for a designation run the possible risk of being intercepted by an attacker hosting an Encrypted DNS Resolver on an IP address of an Unencrypted DNS Resolver where the attacker has failed to gain control of the Unencrypted DNS Resolver. The constraints on the use of Designated Resolvers specified here", "comments": "DISCUSS: CC NAME See my discuss on draft-ietf-add-ddr-11 for my generic ADD DNR/DDR concerns Resolver without some sort of validation Why not? What is the alternative? Using unencrypted DNS to the party that told you how and where to contact these (unvalidated) Designated Resolvers. Resolver that does not use a private or local IP address and can be verified using the mechanism described in Section 4.2, it MAY be used on different network connections so long as the subsequent connections over other networks can also be successfully verified using the mechanism described in Section 4.2. This seems to go against a lot of advise in the documents that networks must be treated independently. Also, the network wants you to use their designated resolver, so to say it is okay to use a totally different Designated Resolvers seems to violate the whole DNR/DDR architecture. This will also cause failure if there is split-DNS or internal-only data. Also, using \"public IP\" is not a good trustworthy method to prove that these IPs are truly the same Designated Resolver used by both networks. I can easily use an internal only zone with . IN A 8.8.8.8 and get a valid ACME'd certificate with SAN=. 
The user connecting to another network might now be switching from my private DNS on my stolen 8.8.8.8 IP to the real google DNS. address of the designating Unencrypted Resolver in a subjectAltName extension. How feasible is it to get a certificate with SAN=ip from one of the generally accepted root store CA's? Can you give me one example where I can get a certificate for with SAN=193.110.157.123 ? I do not think this is currently possible or easy. If I am right, that means all of Section 4.2 is wishful thinking and not realistic to get rolled out. (I know we can see the cert for 1.1.1.1, so it is possible, but I know of no ACME supported CA that issues these) discovered Designated Resolver. This creates a strange policy when compared to DNR where if you get a DHCP or RA for the Designated Resolver, which is also not validatable, that you do end up using it? And section 6.5 states DNR trumps DDR. address, clients MAY choose to opportunistically use the Designated Resolver even without this certificate check (Section 4.3). Why only when the IP is the same? Why not for when the IP is different? What's to lose, since the only fallback option is stick to using the unencrypted resolver? yields an IP address that was not presented in the Additional Answers section or ipv4hint or ipv6hint fields of the original SVCB query, the connection made to that IP address MUST pass the same TLS certificate checks before being allowed to replace a previously known and validated IP address for the same Designated Resolver name. How does the appearance of an (unsigned) Additional Answer entry convey any kind of real world trust/authentication? In other words, why should it not ALWAYS pass the same TLS certificate checks ? is to use unencrypted DNS, why not just use it anyway? use DDR to discover details about all supported encrypted DNS protocols. 
It's a little odd to mention this as the DNR draft really tries hard to say to only use the network's offered encrypted DNS servers, so this option is in conflict with the DNR draft. Unfortunately, all currently deployed software does not know this, so it will be very common that this happens. URL SHOULD treat URL as a locally served zone per [RFC6303] Why is this not a MUST ? queries being dropped either maliciously or unintentionally, clients can re-send their SVCB queries periodically. I don't see how this would improve security for the client. It also mixes up behaviour of proper DNS clients that have their own retransmit logic for if they get no answer. attack where an attacker can block connections to the encrypted DNS server, and recommends that clients prevent it by switching to SVCB- reliant behavior once SVCB resolution does succeed. For DDR, this means that once a client discovers a compatible Designated Resolver, it SHOULD NOT use unencrypted DNS until the SVCB record expires I wonder which attacker can block encrypted DNS connections but not spoof (or block!) an unsigned SVCB record to the local client? And even if that is the case, this would be a denial of service attack if the DNS client cannot fallback to unencrypted DNS. Spoofing an SVCB to a bogus IP with an SVCB TTL of a few hours would be a very cheap 1 packet attack to keep the DNS client down for hours. This seems dangerous to implement. unencrypted DNS MUST NOT provide differentiated behavior based on the HTTP path alone, since an attacker could modify the \"dohpath\" parameter. For example, if a DoH resolver provides provides a filtering service for one URI path, and a non-filtered service for another URI path, [...] It seems likely that this advise will get ignored a lot, eg by people who have limited public IPs to use to spin up a DoH server, or who have to pay-per-IP. This advise seems more appropriate for local private IP DoH servers. 
So I would change this MUST NOT to SHOULD NOT for that reason. Unencrypted Resolver, clients applying Verified Discovery (Section 4.2) MUST validate that the IP address of the Unencrypted Resolver is covered by the SubjectAlternativeName of the Designated Resolver's TLS certificate. How would one obtain such a certificate via ACME? Since on the IP of the unencrypted resolver, there would be no HTTP server running? Additionally, if the client notices a failed verification of the Designated Resolver's TLS certificate, wouldn't it fallback to the Unencrypted Resolver? But now it may not? So this becomes a denial of service then ? to cases where the Unencrypted Resolver and Designated Resolver have the same IP address. Based on earlier text, this should read \"the same non-public IP address\". you use \"unilateral probing\" (draft-ietf-dprive-unilateral-probing) to just connect to the DoT or DoQ ports of the Unencrypted Resolver and see if you get anything back. It might be unauthenticated TLS but better than port 53? Special-Use Domain Names (SUDN) registry established by [RFC6761]. Why not pick URL As .local is already handled to not be forwarded by DNS forwarders. It would also not require an addition to SUDN. Even if someone was already using URL, it would not interfere with that usage because that usage would not be using SVCB records. To avoid name lookup deadlock, Designated Resolvers SHOULD follow the guidance in Section 10 of [RFC8484] regarding the avoidance of DNS- based references that block the completion of the TLS handshake. I find that reference to list more the issues than solutions that can be followed. The solution to use another DoH server is not really appropriate in most cases The client's other interface likely resides on other networks where its private DoH server cannot resolve the name of another network's private DoH server. It is also not easy to get an IP based certificate (eg SAN=ip) that is accepted by general webpki root stores. 
And things like OCSP stapling doesn't help if the target is malicious - they just would omit the stapling that proves against its malicious use. The only real advise usable from RFC8484 is \"use the local unencrypted resolver to resolve the encrypted resolver\". Might as well say that and remove the 8484 reference. as the protocol is mentioned throughout the document as interacting with this protocol. when accessing web content on a dual-stack network: A, AAAA, and HTTPS queries. Discovering a Designated Resolver as part of one of these queries, without having to add yet another query, minimizes the total number of queries clients send. I don't understand this paragraph. Once clients can send SVCB queries for web content, they already have had to do all the DDR/DNR related queries, so I don't understand the argument of \"saving queries\" ? Also, it is adding queries for a specific target, eg URL, and this cannot be \"saved\" either? What am I misunderstanding here? provides provides -> provides\nWe should reference Security Considerations, because if you don't have any validation, an attacker can point you to any server they want (someone other than the server you were configured with by DHCP, etc).\nWe can probably just remove this, not using it in implementations right now.\nIt's already done by the major public resolvers, and there is an RFC to support this (RFC 8738). The IP address validation is clearly important and a matter of WG consensus. Hopefully, this will push adoption of RFC 8738 by more ACME services.\nWe can reference the DNR section. The MUST NOT should only apply to DDR, but doesn't invalidate other ways of using the resolver.\nIf they are different, it can be a random attacker (see security considerations). 
The client may also have other better options to fall back to.\nAgree, this should be simplified to say \"even if you connect over a different IP address for any reason, still validate the original IP address\"\nThere are definitely cases where the client can have directly selected their own resolver name, etc. We don't have to use the network resolver.\nCorrect, and that's why we need this document, and it was added to the IANA section.\nSure, we can make this a MUST.\nIt makes the attacker's job harder by requiring them to be persistent. Normal retransmission logic is fine too.\nThis is a direct outcome of the svcb-dns document. Spoofing SVCB shouldn't be enough to cause a DoS here, because we should continue to use the previously used encrypted DNS server while on this network (even if you get a bad SVCB packet).\nFor the security reasons, I think this needs to stay as a MUST NOT. However, if you differentiate based on hostname + path, you can still differentiate, it just isn't a differentiation that can be discovered by IP address queries.\nYes, you'd fall back to unencrypted resolution if you can't have a secure trusted connection for encrypted resolution.\nYes, but that part is SHOULD. Will fix.\nI think that's just opportunistic DoT / DoQ. That's already specified.\n.local is the SUDN for multicast DNS, but it is also squatted on by some internal networks. Following the patterns for URL, etc, .arpa is a much more appropriate name. This is what the group had consensus on years ago and has been deployed widely already.\nI think we can say \"as explained in RFC 8484, don't loop on the resolver name.\"\nSure, that's fine.\nYeah, this paragraph is old and can be removed.\nLooks good! Just silly editorial nits", "new_text": "Clients using Opportunistic Discovery (opportunistic) MUST be limited to cases where the Unencrypted DNS Resolver and Designated Resolver have the same IP address, which SHOULD be a private or local IP address. 
Clients which do not follow Opportunistic Discovery (opportunistic) and instead try to connect without first checking for a designation run the possible risk of being intercepted by an attacker hosting an Encrypted DNS Resolver on an IP address of an Unencrypted DNS Resolver where the attacker has failed to gain control of the Unencrypted DNS Resolver. The constraints on the use of Designated Resolvers specified here"}
{"id": "q-en-draft-ietf-add-svcb-dns-2c0ddfcea519d5c6a039686cd7111beee128cbbbcafe315840c584540b10ac85", "old_text": "This key is automatically mandatory if present. (See Section 7 of SVCB for the definition of \"automatically mandatory\".) 4.3. These SvcParamKeys from SVCB apply to the \"dns\" scheme without", "comments": "Closes URL\nAllowing the use of nonstandard port numbers in SVCB for DNS provides valuable operational flexibility, but it also creates a risk related to cross-protocol confusion attacks, in which a malicious server convinces a client to send TCP contents that can be misparsed as a different protocol by a non-TLS victim server. For example, a malicious DNS server could indicate an SNI or session ticket that can be misparsed as a command by a recipient server, and use DNS rebinding to change its own apparent IP address to the victim server. Clients would then send this server-controlled payload to the victim server. This is dangerous if the client is implicitly authorized to control the victim server (e.g. authorization by IP address). Note that only DoT and DoH (over TCP) are amenable to these attacks. QUIC Initial encryption ensures that the packet has no meaning to a recipient that is not QUIC-aware, and any QUIC-aware recipient should not be vulnerable to these attacks. We should consider ways to minimize the impact of these attacks. For example, we could require clients to restrict the value of the parameter to a small fixed list, or refer to the . Credit to NAME", "new_text": "This key is automatically mandatory if present. (See Section 7 of SVCB for the definition of \"automatically mandatory\".) Support for the \"port\" key can be unsafe if the client has implicit elevated access to some network service (e.g. a local service that is inaccessible to remote parties) and that service uses a TCP-based protocol other than TLS. 
A hostile DNS server might be able to manipulate this service by causing the client to send a specially crafted TLS SNI or session ticket that can be misparsed as a command or exploit. To avoid such attacks, clients SHOULD NOT support the \"port\" key unless one of the following conditions applies: The client is being used with a DNS server that it trusts not to attempt this attack. The client is being used in a context where implicit elevated access cannot apply. The client restricts the set of allowed TCP port values to exclude any ports where a confusion attack is likely to be possible (e.g. the \"bad ports\" list from the \"Port blocking\" section of FETCH). 4.3. These SvcParamKeys from SVCB apply to the \"dns\" scheme without"}
{"id": "q-en-draft-ietf-dnsop-glue-is-not-optional-e5b483ccd811a8d8faae54190017b6bf9524ad658abd646ce96bec4cd802ffb2", "old_text": "defined for the DNS. Note that this document only clarifies requirements of name server software implementations. It does not change any requirements on data placed in DNS zones or registries. 1.1.", "comments": "\u2026out presence of glue in zones.\nNo further discussion on list nor here so I will merge this.\nFrom the dnsop list, Ralf Weber suggests the draft could more clearly state that a name server might provide an \"incorrect\" response because it wasn't given necessary glue records in the zone data.", "new_text": "defined for the DNS. Note that this document only clarifies requirements of name server software implementations. It does not introduce or change any requirements on data placed in DNS zones or registries. In other words, this document only makes requirements on \"available glue records\" (i.e., those given in a zone), but does not make requirements regarding their presence in a zone. If some glue records are absent from a given zone, an authoritative name server may be unable to return a useful referral response for the corresponding domain. The IETF may want to consider a separate update to the requirements for including glue in zone data, beyond those given in RFC1034 and RFC1035. 1.1."}
{"id": "q-en-draft-ietf-dnsop-terminology-bis-45b23aec9752a2349576b21cf4ca0f5d817c2a4ad5d49bd1133604f0c08c32f1", "old_text": "5. This section defines the terms used for the systems that act as DNS clients, DNS servers, or both. For terminology specific to the public DNS root server system, see RSSAC026. That document defines terms such as \"root server\", \"root", "comments": "\u2026empt to define \"resolve\" and \"resolution\".\nThis issue concerns consistency and definition of some terms in the document. \"Name server\", \"nameserver\" and, perhaps, \"server\" seem to be used interchangeably. None of those terms have a formal definition in the document. The terms are also used in conjunction with various modifiers like \"authoritative server\", \"DNS server\", etc. While all those terms are likely pretty well-known, for the purposes of this document, in my opinion, it would be helpful to give a generic definition of \"server\", and how it is then combined with modifiers to form terminology for specific types of servers. Based on the definition of \"server\", it would be helpful to include a definition for \"name server\", \"nameserver\" or \"DNS server\" (pick one) and then use that term uniformly throughout the document.\nAdded a note about this in URL\nFixed in\nThese are common terms that are used in contexts that are not always associated with the resolver itself, so they should have stand-alone definitions.\nFirst attempt at definition in URL", "new_text": "5. This section defines the terms used for the systems that act as DNS clients, DNS servers, or both. In the RFCs, DNS servers are sometimes called \"name servers\", \"nameservers\", or just \"servers\". There is no formal definition of DNS server, but the RFCs generally assume that it is an Internet server that listens for queries and sends responses using the DNS protocol defined in RFC1035 and its successors. For terminology specific to the public DNS root server system, see RSSAC026. 
That document defines terms such as \"root server\", \"root"}
{"id": "q-en-draft-ietf-dnsop-terminology-bis-45b23aec9752a2349576b21cf4ca0f5d817c2a4ad5d49bd1133604f0c08c32f1", "old_text": "of resolver (some of which are defined below), and understanding the use of the term depends on understanding the context. A resolver that cannot perform all resolution itself. Stub resolvers generally depend on a recursive resolver to undertake the actual resolution function. Stub resolvers are discussed but", "comments": "\u2026empt to define \"resolve\" and \"resolution\".\nThis issue concerns consistency and definition of some terms in the document. \"Name server\", \"nameserver\" and, perhaps, \"server\" seem to be used interchangeably. None of those terms have a formal definition in the document. The terms are also used in conjunction with various modifiers like \"authoritative server\", \"DNS server\", etc. While all those terms are likely pretty well-known, for the purposes of this document, in my opinion, it would be helpful to give a generic definition of \"server\", and how it is then combined with modifiers to form terminology for specific types of servers. Based on the definition of \"server\", it would be helpful to include a definition for \"name server\", \"nameserver\" or \"DNS server\" (pick one) and then use that term uniformly throughout the document.\nAdded a note about this in URL\nFixed in\nThese are common terms that are used in contexts that are not always associated with the resolver itself, so they should have stand-alone definitions.\nFirst attempt at definition in URL", "new_text": "of resolver (some of which are defined below), and understanding the use of the term depends on understanding the context. A related term is \"resolve\", which is not formally defined in RFC1034 or RFC1035. An imputed definition might be \"asking a question that consists of a domain name, class, and type, and receiving some sort of answer\". 
Similarly, an imputed definition of \"resolution\" might be \"the answer received from resolving\". A resolver that cannot perform all resolution itself. Stub resolvers generally depend on a recursive resolver to undertake the actual resolution function. Stub resolvers are discussed but"}
{"id": "q-en-draft-ietf-dnssd-srp-b4ebdeb3c00c06745a21c9bbe8eb2d918811cf617374528ce210c7f4875cdb09", "old_text": "simplifications are available. Instead of being configured with (or discovering) the service registration domain, the (proposed) special- use domain name (see RFC6761) \"default.service.arpa\" is used. The details of how SRP server(s) are discovered will be specific to the constrained network, and therefore we do not suggest a specific mechanism here. SRP clients on constrained networks are expected to receive from the network a list of SRP servers with which to register. It is the responsibility of a Constrained-Node Network supporting SRP to provide one or more SRP server addresses. It is the responsibility of the SRP server supporting a Constrained-Node Network to handle the updates appropriately. In some network environments, updates may be accepted directly into a local \"default.service.arpa\" zone, which has only local visibility. In other network environments, updates for names ending in \"default.service.arpa\" may be rewritten internally to names with broader visibility. 2.1.3.", "comments": "This is part of the previous pull request, but I missed it because it was interleaved with some unrelated changes.", "new_text": "simplifications are available. Instead of being configured with (or discovering) the service registration domain, the (proposed) special- use domain name (see RFC6761) \"default.service.arpa\" is used. The details of how SRP registrar(s) are discovered will be specific to the constrained network, and therefore we do not suggest a specific mechanism here. SRP requestors on constrained networks are expected to receive from the network a list of SRP registrars with which to register. It is the responsibility of a Constrained-Node Network supporting SRP to provide one or more SRP registrar addresses. It is the responsibility of the SRP registrar supporting a Constrained-Node Network to handle the updates appropriately. 
In some network environments, updates may be accepted directly into a local \"default.service.arpa\" zone, which has only local visibility. In other network environments, updates for names ending in \"default.service.arpa\" may be rewritten internally to names with broader visibility. 2.1.3."}
{"id": "q-en-draft-ietf-dnssd-srp-19ecea136cf34e7f40598026ee66fdf7dff48601bfa02fd0ba7856ef8f5d5320", "old_text": "An instruction is a Host Description Instruction if, for the appropriate hostname, it contains 2.3.2. An SRP Update MUST include zero or more Service Discovery", "comments": "The set of constraints for the host description includes a constraint that A and AAAA records must be of sufficient scope. This is asking a bit much of the SRP server, which may or may not be able to make this determination. There may also be cases where an SRP registration with an address of less than global scope is useful. Consequently, this constraint is removed, and text added to talk about how servers should deal with addresses of less-than-global scope.", "new_text": "An instruction is a Host Description Instruction if, for the appropriate hostname, it contains A and/or AAAA records that are not of sufficient scope to be validly published in a DNS zone can be ignored by the SRP server, which could result in a host description effectively containing zero reachable addresses even when it contains one or more addresses. For example, if a link-scope address or IPv4 autoconfiguration address is provided by the SRP requestor, the SRP registrar could not publish this in a DNS zone. However, in some situations, the SRP registrar may make the records available through a mechanism such as an advertising proxy only on the specific link from which the SRP update originated; in such a situation, locally-scoped records are still valid. 2.3.2. An SRP Update MUST include zero or more Service Discovery"}
{"id": "q-en-draft-ietf-dnssd-srp-3c41bedb984657511d2abc95fcff28ab8f75d953f375d3aa53f58a20da9fe35c", "old_text": "Service Registration Protocol for DNS-Based Service Discovery draft-ietf-dnssd-srp-13 Abstract", "comments": "Update the draft number, date, and change draft-sekar-ul to draft-ietf-dnssd-update-lease.", "new_text": "Service Registration Protocol for DNS-Based Service Discovery draft-ietf-dnssd-srp-14 Abstract"}
{"id": "q-en-draft-ietf-dnssd-srp-3c41bedb984657511d2abc95fcff28ab8f75d953f375d3aa53f58a20da9fe35c", "old_text": "record MUST contain the same public key. The update is signed using SIG(0), using the private key that corresponds to the public key in the KEY record. The lifetimes of the records in the update is set using the EDNS(0) Update Lease option I-D.sekar-dns-ul. The KEY record in Service Description updates MAY be omitted for brevity; if it is omitted, the SRP registrar MUST behave as if the", "comments": "Update the draft number, date, and change draft-sekar-ul to draft-ietf-dnssd-update-lease.", "new_text": "record MUST contain the same public key. The update is signed using SIG(0), using the private key that corresponds to the public key in the KEY record. The lifetimes of the records in the update are set using the EDNS(0) Update Lease option I-D.ietf-dnssd-update-lease. The KEY record in Service Description updates MAY be omitted for brevity; if it is omitted, the SRP registrar MUST behave as if the"}
{"id": "q-en-draft-ietf-dnssd-srp-3c41bedb984657511d2abc95fcff28ab8f75d953f375d3aa53f58a20da9fe35c", "old_text": "Because the DNS-SD registration protocol is automatic, and not managed by humans, some additional bookkeeping is required. When an update is constructed by the SRP requestor, it MUST include an EDNS(0) Update Lease Option I-D.sekar-dns-ul. The Update Lease Option contains two lease times: the Lease Time and the Key Lease Time. These leases are promises, similar to RFC2131, from the SRP requestor that it will send a new update for the service registration before", "comments": "Update the draft number, date, and change draft-sekar-ul to draft-ietf-dnssd-update-lease.", "new_text": "Because the DNS-SD registration protocol is automatic, and not managed by humans, some additional bookkeeping is required. When an update is constructed by the SRP requestor, it MUST include an EDNS(0) Update Lease Option I-D.ietf-dnssd-update-lease. The Update Lease Option contains two lease times: the Lease Time and the Key Lease Time. These leases are promises, similar to RFC2131, from the SRP requestor that it will send a new update for the service registration before"}
{"id": "q-en-draft-ietf-doh-dns-over-https-d459369d833135bb016aebd0ea7f2ecd61637901689d02831c1b42ef69f06d0b", "old_text": "There are two primary use cases for this protocol. The primary one is to prevent on-path network devices from interfering with native DNS operations. This interference includes, but is not limited to, spoofing DNS responses, blocking DNS requests, and tracking. HTTP authentication and proxy friendliness are", "comments": "I think that we should specify behaviour for truncated responses with the TC bit more clearly. Right now the text is a little vague. There seem to be three options: mandate the generation of an HTTP error mandate the forwarding of the truncated response explicitly allow either forbid the use of truncated responses and recommend fallback to TCP if the DNS API server is just making DNS requests to another server I think that the draft current vacilates in the direction of 3, but I tend to like 1 or 4. Choosing 4 wouldn't be consistent with the dnsop DNS-over-HTTP draft using DOH though.\nis the right answer, so the text should just be stronger about it. If DOH requires or , that means that the DNS server code will need to be heavily reworked to capture these from going out, which seems pointless. DOH clients can act like normal stubs: either they do or don't know about truncation and TC.\nI agree - I don't think doh needs to behave differently here than any other datagram based client - i.e. its not a transport error. NAME can you do a pr to clarify?\nWill do.", "new_text": "There are two primary use cases for this protocol. The primary use case is to prevent on-path network devices from interfering with native DNS operations. This interference includes, but is not limited to, spoofing DNS responses, blocking DNS requests, and tracking. HTTP authentication and proxy friendliness are"}
{"id": "q-en-draft-ietf-doh-dns-over-https-d459369d833135bb016aebd0ea7f2ecd61637901689d02831c1b42ef69f06d0b", "old_text": "A secondary use case is web applications that want to access DNS information. Standardizing an HTTPS mechanism allows this to be done in a way consistent with the cross-origin resource sharing CORS security model of the web and also integrate the caching mechanisms of DNS with those of HTTP. These applications may be interested in using a different media type than traditional clients. [ This paragraph is to be removed when this document is published as an RFC ] Note that these use cases are different than those in a similar protocol described at I-D.ietf-dnsop-dns-wireformat-http. The use case for that protocol is proxying DNS queries over HTTP instead of over DNS itself. The use cases in this document all", "comments": "I think that we should specify behaviour for truncated responses with the TC bit more clearly. Right now the text is a little vague. There seem to be three options: mandate the generation of an HTTP error mandate the forwarding of the truncated response explicitly allow either forbid the use of truncated responses and recommend fallback to TCP if the DNS API server is just making DNS requests to another server I think that the draft current vacilates in the direction of 3, but I tend to like 1 or 4. Choosing 4 wouldn't be consistent with the dnsop DNS-over-HTTP draft using DOH though.\nis the right answer, so the text should just be stronger about it. If DOH requires or , that means that the DNS server code will need to be heavily reworked to capture these from going out, which seems pointless. DOH clients can act like normal stubs: either they do or don't know about truncation and TC.\nI agree - I don't think doh needs to behave differently here than any other datagram based client - i.e. its not a transport error. NAME can you do a pr to clarify?\nWill do.", "new_text": "A secondary use case is web applications that want to access DNS information. Standardizing an HTTPS mechanism allows this to be done in a way consistent with the cross-origin resource sharing security model of the web CORS and also integrate the caching mechanisms of DNS with those of HTTP. These applications may be interested in using a different media type than traditional clients. [[ This paragraph is to be removed when this document is published as an RFC ]] Note that these use cases are different than those in a similar protocol described at I-D.ietf-dnsop-dns-wireformat-http. The use case for that protocol is proxying DNS queries over HTTP instead of over DNS itself. The use cases in this document all"}
{"id": "q-en-draft-ietf-doh-dns-over-https-d459369d833135bb016aebd0ea7f2ecd61637901689d02831c1b42ef69f06d0b", "old_text": "defined. The protocol must use a secure transport that meets the requirements for modern https://. 4.1.", "comments": "I think that we should specify behaviour for truncated responses with the TC bit more clearly. Right now the text is a little vague. There seem to be three options: mandate the generation of an HTTP error mandate the forwarding of the truncated response explicitly allow either forbid the use of truncated responses and recommend fallback to TCP if the DNS API server is just making DNS requests to another server I think that the draft current vacilates in the direction of 3, but I tend to like 1 or 4. Choosing 4 wouldn't be consistent with the dnsop DNS-over-HTTP draft using DOH though.\nis the right answer, so the text should just be stronger about it. If DOH requires or , that means that the DNS server code will need to be heavily reworked to capture these from going out, which seems pointless. DOH clients can act like normal stubs: either they do or don't know about truncation and TC.\nI agree - I don't think doh needs to behave differently here than any other datagram based client - i.e. its not a transport error. NAME can you do a pr to clarify?\nWill do.", "new_text": "defined. The protocol must use a secure transport that meets the requirements for modern HTTPS. 4.1."}
{"id": "q-en-draft-ietf-doh-dns-over-https-d459369d833135bb016aebd0ea7f2ecd61637901689d02831c1b42ef69f06d0b", "old_text": "5.1. The media type is \"application/dns-udpwireformat\". The body is the DNS on-the-wire format is defined in RFC1035. The body MUST be encoded with base64url RFC4648. Padding characters for base64url MUST NOT be included. DNS API clients using the DNS wire format MAY have one or more EDNS(0) extensions RFC6891 in the request.", "comments": "I think that we should specify behaviour for truncated responses with the TC bit more clearly. Right now the text is a little vague. There seem to be three options: mandate the generation of an HTTP error mandate the forwarding of the truncated response explicitly allow either forbid the use of truncated responses and recommend fallback to TCP if the DNS API server is just making DNS requests to another server I think that the draft current vacilates in the direction of 3, but I tend to like 1 or 4. Choosing 4 wouldn't be consistent with the dnsop DNS-over-HTTP draft using DOH though.\nis the right answer, so the text should just be stronger about it. If DOH requires or , that means that the DNS server code will need to be heavily reworked to capture these from going out, which seems pointless. DOH clients can act like normal stubs: either they do or don't know about truncation and TC.\nI agree - I don't think doh needs to behave differently here than any other datagram based client - i.e. its not a transport error. NAME can you do a pr to clarify?\nWill do.", "new_text": "5.1. The media type is \"application/dns-udpwireformat\". The body is the DNS on-the-wire format is defined in RFC1035. When using the GET method, the body MUST be encoded with base64url RFC4648. Padding characters for base64url MUST NOT be included. When using the POST method, the body is not encoded. DNS API clients using the DNS wire format MAY have one or more EDNS(0) extensions RFC6891 in the request."}
{"id": "q-en-draft-ietf-doh-dns-over-https-d459369d833135bb016aebd0ea7f2ecd61637901689d02831c1b42ef69f06d0b", "old_text": "At the time this is published, the response types are works in progress. The only known response type is \"application/dns- udpwireformat\", but it is likely that at least one JSON-based response format will be defined in the future. The DNS response for \"application/dns-udpwireformat\" in dnswire MAY", "comments": "I think that we should specify behaviour for truncated responses with the TC bit more clearly. Right now the text is a little vague. There seem to be three options: mandate the generation of an HTTP error mandate the forwarding of the truncated response explicitly allow either forbid the use of truncated responses and recommend fallback to TCP if the DNS API server is just making DNS requests to another server I think that the draft current vacilates in the direction of 3, but I tend to like 1 or 4. Choosing 4 wouldn't be consistent with the dnsop DNS-over-HTTP draft using DOH though.\nis the right answer, so the text should just be stronger about it. If DOH requires or , that means that the DNS server code will need to be heavily reworked to capture these from going out, which seems pointless. DOH clients can act like normal stubs: either they do or don't know about truncation and TC.\nI agree - I don't think doh needs to behave differently here than any other datagram based client - i.e. its not a transport error. NAME can you do a pr to clarify?\nWill do.", "new_text": "At the time this is published, the response types are works in progress. The only known response type is \"application/dns- udpwireformat\", but it is possible that at least one JSON-based response format will be defined in the future. The DNS response for \"application/dns-udpwireformat\" in dnswire MAY"}
{"id": "q-en-draft-ietf-doh-dns-over-https-d459369d833135bb016aebd0ea7f2ecd61637901689d02831c1b42ef69f06d0b", "old_text": "7. In order to satisfy the security requirements of DNS over HTTPS, this protocol MUST use HTTP/2 RFC7540 or its successors. HTTP/2 enforces a modern TLS profile necessary for achieving the security requirements of this protocol. This protocol MUST be used with https scheme URI RFC7230. The messages in classic UDP based DNS RFC1035 are inherently unordered and have low overhead. A competitive HTTP transport needs to support reordering, priority, parallelism, and header compression. For this additional reason, this protocol MUST use HTTP/2 RFC7540 or its successors. 8.", "comments": "I think that we should specify behaviour for truncated responses with the TC bit more clearly. Right now the text is a little vague. There seem to be three options: mandate the generation of an HTTP error mandate the forwarding of the truncated response explicitly allow either forbid the use of truncated responses and recommend fallback to TCP if the DNS API server is just making DNS requests to another server I think that the draft current vacilates in the direction of 3, but I tend to like 1 or 4. Choosing 4 wouldn't be consistent with the dnsop DNS-over-HTTP draft using DOH though.\nis the right answer, so the text should just be stronger about it. If DOH requires or , that means that the DNS server code will need to be heavily reworked to capture these from going out, which seems pointless. DOH clients can act like normal stubs: either they do or don't know about truncation and TC.\nI agree - I don't think doh needs to behave differently here than any other datagram based client - i.e. its not a transport error. NAME can you do a pr to clarify?\nWill do.", "new_text": "7. This protocol MUST be used with https scheme URI RFC7230. This protocol MUST use HTTP/2 RFC7540 or its successors in order to satisfy the security requirements of DNS over HTTPS. Further, the messages in classic UDP based DNS RFC1035 are inherently unordered and have low overhead. A competitive HTTP transport needs to support reordering, priority, parallelism, and header compression, all of which are supported by HTTP/2 RFC7540 or its successors. 8."}
{"id": "q-en-draft-ietf-doh-dns-over-https-d459369d833135bb016aebd0ea7f2ecd61637901689d02831c1b42ef69f06d0b", "old_text": "9. Running DNS over https:// relies on the security of the underlying HTTP connection. By requiring at least RFC7540 levels of support for TLS this protocol expects to use current best practices for secure transport. Session level encryption has well known weaknesses with respect to", "comments": "I think that we should specify behaviour for truncated responses with the TC bit more clearly. Right now the text is a little vague. There seem to be three options: mandate the generation of an HTTP error mandate the forwarding of the truncated response explicitly allow either forbid the use of truncated responses and recommend fallback to TCP if the DNS API server is just making DNS requests to another server I think that the draft current vacilates in the direction of 3, but I tend to like 1 or 4. Choosing 4 wouldn't be consistent with the dnsop DNS-over-HTTP draft using DOH though.\nis the right answer, so the text should just be stronger about it. If DOH requires or , that means that the DNS server code will need to be heavily reworked to capture these from going out, which seems pointless. DOH clients can act like normal stubs: either they do or don't know about truncation and TC.\nI agree - I don't think doh needs to behave differently here than any other datagram based client - i.e. its not a transport error. NAME can you do a pr to clarify?\nWill do.", "new_text": "9. Running DNS over HTTPS relies on the security of the underlying HTTP connection. By requiring at least RFC7540 levels of support for TLS, this protocol expects to use current best practices for secure transport. Session level encryption has well known weaknesses with respect to"}
{"id": "q-en-draft-ietf-doh-dns-over-https-373d1c46d59457c7f4eed5cdb925ae26390299eb9763dbfd83040196a340ebe7", "old_text": "1. The Internet does not always provide end to end reachability for native DNS. On-path network devices may spoof DNS responses, block DNS requests, or just redirect DNS queries to different DNS servers that give less-than-honest answers. These are also sometimes delivered with poor performance or reduced feature sets. Over time, there have been many proposals for using HTTP and HTTPS as a substrate for DNS queries and responses. To date, none of those proposals have made it beyond early discussion, partially due to disagreement about what the appropriate formatting should be and partially because they did not follow HTTP best practices. This document defines a specific protocol for sending DNS RFC1035 queries and getting DNS responses over HTTP RFC7540 using https:// (and therefore TLS RFC5246 security for integrity and", "comments": "This text is great justification to include in a charter, but this is a protocol specification and it really doesn't add much.\nIt seemed like the WG wanted the document to be shorter and more specific, so this feels good.", "new_text": "1. This document defines a specific protocol for sending DNS RFC1035 queries and getting DNS responses over HTTP RFC7540 using https:// (and therefore TLS RFC5246 security for integrity and"}
{"id": "q-en-draft-ietf-doh-dns-over-https-e84cdd10af357f1a6788d78ac31823eaddc5ef82fc201340591f72ad51fbf43a", "old_text": "3. The protocol described here bases its design on the following protocol requirements:", "comments": "\u2026 but keep the requirement for future formats.\nAttempts to\nAs input to the process, this was extremely valuable, but now that this is in WGLC, we're done with it. Does the audience of the ultimate RFC need this information? The only thing that I can see that is worth salvaging is a statement to the effect that DNS64 isn't explicitly supported.\nYes, we still need it. We have not gone to IETF Last Call and, despite what some people think, the IETF gets to change a protocol significantly during IETF Last Call. But even after that, once it is an RFC, others will want to update the RFC. It is important to keep the requirements for them as well.\nActually, we probably don't need it in the RFC as long as the one future requirement (for formats) is still there. Please see PR", "new_text": "3. [[ RFC Editor: Please remove this entire section before publication. ]] The protocol described here bases its design on the following protocol requirements:"}
{"id": "q-en-draft-ietf-doh-dns-over-https-e84cdd10af357f1a6788d78ac31823eaddc5ef82fc201340591f72ad51fbf43a", "old_text": "In order to maximize interoperability, DNS API clients and DNS API servers MUST support the \"application/dns-message\" media type. Other media types MAY be used as defined by HTTP Content Negotiation (RFC7231 Section 3.4). 6.", "comments": "\u2026 but keep the requirement for future formats.\nAttempts to\nAs input to the process, this was extremely valuable, but now that this is in WGLC, we're done with it. Does the audience of the ultimate RFC need this information? The only thing that I can see that is worth salvaging is a statement to the effect that DNS64 isn't explicitly supported.\nYes, we still need it. We have not gone to IETF Last Call and, despite what some people think, the IETF gets to change a protocol significantly during IETF Last Call. But even after that, once it is an RFC, others will want to update the RFC. It is important to keep the requirements for them as well.\nActually, we probably don't need it in the RFC as long as the one future requirement (for formats) is still there. Please see PR", "new_text": "In order to maximize interoperability, DNS API clients and DNS API servers MUST support the \"application/dns-message\" media type. Other media types MAY be used as defined by HTTP Content Negotiation (RFC7231 Section 3.4). Those media types MUST be flexible enough to express every DNS query that would normally be sent in DNS over UDP (including queries and responses that use DNS extensions, but not those that require multiple responses). 6."}
{"id": "q-en-draft-ietf-doh-dns-over-https-c68589fdadcb6bedbe989e8003945452ca46cfcfa6cc6f5f3df8b4c0343bb298", "old_text": "are independent and fully compatible protocols, each solving different problems. The use of one does not diminish the need nor the usefulness of the other. It is the choice of a client to either perform full DNSSEC validation of answers or to treat answers from a DNS API server as DNSSEC-authenticated, and this choice may be affected by the response media type. caching describes the interaction of this protocol with HTTP caching. An adversary that can control the cache used by the client can affect", "comments": "\u2026raft-ietf-doh-dns-over-https/commit/3f74bd6e0760f96717ffd3e667a37e13b45839c4#commitcomment-28865666 and URL that this wording is much clearer.\nSara, making this pr was on my todo list - thanks for the time saver. Paul sent me an IM about it, so I'll ga and merge.", "new_text": "are independent and fully compatible protocols, each solving different problems. The use of one does not diminish the need nor the usefulness of the other. It is the choice of a client to either perform full DNSSEC validation of answers or to trust the DNS API server to do DNSSEC validation and inspect the AD (Authentic Data) bit in the returned message to determine whether an answer was authentic or not. As noted in http-response, different response media types will provide more or less information from a DNS response so this choice may be affected by the response media type. caching describes the interaction of this protocol with HTTP caching. An adversary that can control the cache used by the client can affect"}
{"id": "q-en-draft-ietf-doh-dns-over-https-c68589fdadcb6bedbe989e8003945452ca46cfcfa6cc6f5f3df8b4c0343bb298", "old_text": "security implications of HTTP caching for other protocols that use HTTP. In the absence of information about the authenticity of responses, such as DNSSEC, a DNS API server can give a client invalid data in responses. A client MUST NOT use arbitrary DNS API servers. Instead, a client MUST only use DNS API servers specified using mechanisms such as explicit configuration. This does not guarantee", "comments": "\u2026raft-ietf-doh-dns-over-https/commit/3f74bd6e0760f96717ffd3e667a37e13b45839c4#commitcomment-28865666 and URL that this wording is much clearer.\nSara, making this pr was on my todo list - thanks for the time saver. Paul sent me an IM about it, so I'll ga and merge.", "new_text": "security implications of HTTP caching for other protocols that use HTTP. In the absence of DNSSEC information about the authenticity of responses a DNS API server can give a client invalid data in responses. A client MUST NOT use arbitrary DNS API servers. Instead, a client MUST only use DNS API servers specified using mechanisms such as explicit configuration. This does not guarantee"}
{"id": "q-en-draft-ietf-doh-dns-over-https-fdb58a792cd80b531f12de84f6e8b8f1ea3408564a53c1182012f3927bab9ad0", "old_text": "configuration. This does not guarantee protection against invalid data but reduces the risk. A client can use DNS over HTTPS as one of multiple mechanisms to obtain DNS data. If a client of this protocol encounters an HTTP error after sending a DNS query, and then falls back to a different DNS retrieval mechanism, doing so can weaken the privacy and authenticity expected by the user of the client. 10. Local policy considerations and similar factors mean different DNS", "comments": "this paragraph is true.. but its a tad confusing and I don't think really describes anything germane to doh.\nNAME WFM to just remove this text", "new_text": "configuration. This does not guarantee protection against invalid data but reduces the risk. 10. Local policy considerations and similar factors mean different DNS"}
{"id": "q-en-draft-ietf-doh-dns-over-https-be29419b942129087a4e1ddc5436aaffe0e95df772b2e3b37b1c9c5899e02019", "old_text": "This protocol MUST be used with the https scheme URI RFC7230. 6.1. A DoH exchange can pass through a hierarchy of caches that include", "comments": "Is there a reason not to make a recommendation for the case of a DOH-only service? The current text says: Implementations of DoH clients and servers need to consider the benefit and privacy impact of all these features, and their deployment context, when deciding whether or not to enable them. Would you consider a recommendation like \"For DOH clients which do not intermingle DOH requests with other HTTP suppression of these headers and other potentially identifying headers is an appropriate data minimization strategy.\"?\nNAME That could be added, but it would need a lot more text because your proposed text only deals with one layer (HTTP) while the text above it also talks about fingerprinting with TLS, TCP, and IP (in reverse order). The WG can decide whether adding recommendations for just one layer is worthwhile or possibly misleading about the overall privacy.\nNAME It was only an example, and I would be happy to expand. That said, I think this is the most likely source of leakage: folks using a general purpose HTTP subsystem that carries information not needed for DOH requests. Actually recommending data minimization there, rather than saying \"it should be considered\" seems like a worthwhile thing to do in my personal opinion.\nAgreed with Ted (on the list) that this conversation should be on the list. Locking it here.\nI have unlocked the comments so that people can propose specific wording changes as diffs in the PR itself; most questions and suggestions should be done on the list.", "new_text": "This protocol MUST be used with the https scheme URI RFC7230. PrivacyConsiderations and Security discuss additional considerations for the integration with HTTP. 6.1. A DoH exchange can pass through a hierarchy of caches that include"}
{"id": "q-en-draft-ietf-doh-dns-over-https-be29419b942129087a4e1ddc5436aaffe0e95df772b2e3b37b1c9c5899e02019", "old_text": "7. The data payload for the application/dns-message media type is a single message of the DNS on-the-wire format defined in section 4.2.1 of RFC1035. The format was originally for DNS over UDP. Although RFC1035 says \"Messages carried by UDP are restricted to 512 bytes\", that was later updated by RFC6891. This media type restricts the maximum size of the DNS message to 65535 bytes. Note that the wire format used in this media type is different than the wire format used in RFC7858 (which uses the format defined in section 4.2.2 of RFC1035). DoH clients using this media type MAY have one or more EDNS options", "comments": "Is there a reason not to make a recommendation for the case of a DOH-only service? The current text says: Implementations of DoH clients and servers need to consider the benefit and privacy impact of all these features, and their deployment context, when deciding whether or not to enable them. Would you consider a recommendation like \"For DOH clients which do not intermingle DOH requests with other HTTP suppression of these headers and other potentially identifying headers is an appropriate data minimization strategy.\"?\nNAME That could be added, but it would need a lot more text because your proposed text only deals with one layer (HTTP) while the text above it also talks about fingerprinting with TLS, TCP, and IP (in reverse order). The WG can decide whether adding recommendations for just one layer is worthwhile or possibly misleading about the overall privacy.\nNAME It was only an example, and I would be happy to expand. That said, I think this is the most likely source of leakage: folks using a general purpose HTTP subsystem that carries information not needed for DOH requests. Actually recommending data minimization there, rather than saying \"it should be considered\" seems like a worthwhile thing to do in my personal opinion.\nAgreed with Ted (on the list) that this conversation should be on the list. Locking it here.\nI have unlocked the comments so that people can propose specific wording changes as diffs in the PR itself; most questions and suggestions should be done on the list.", "new_text": "7. The data payload for the application/dns-message media type is a single message of the DNS on-the-wire format defined in Section 4.2.1 of RFC1035. The format was originally for DNS over UDP. Although RFC1035 says \"Messages carried by UDP are restricted to 512 bytes\", that was later updated by RFC6891. This media type restricts the maximum size of the DNS message to 65535 bytes. Note that the wire format used in this media type is different than the wire format used in RFC7858 (which uses the format defined in Section 4.2.2 of RFC1035). DoH clients using this media type MAY have one or more EDNS options"}
{"id": "q-en-draft-ietf-doh-dns-over-https-be29419b942129087a4e1ddc5436aaffe0e95df772b2e3b37b1c9c5899e02019", "old_text": "9. Running DNS over HTTPS relies on the security of the underlying HTTP transport. This mitigates classic amplification attacks for UDP- based DNS. Implementations utilizing HTTP/2 benefit from the TLS", "comments": "Is there a reason not to make a recommendation for the case of a DOH-only service? The current text says: Implementations of DoH clients and servers need to consider the benefit and privacy impact of all these features, and their deployment context, when deciding whether or not to enable them. Would you consider a recommendation like \"For DOH clients which do not intermingle DOH requests with other HTTP suppression of these headers and other potentially identifying headers is an appropriate data minimization strategy.\"?\nNAME That could be added, but it would need a lot more text because your proposed text only deals with one layer (HTTP) while the text above it also talks about fingerprinting with TLS, TCP, and IP (in reverse order). The WG can decide whether adding recommendations for just one layer is worthwhile or possibly misleading about the overall privacy.\nNAME It was only an example, and I would be happy to expand. That said, I think this is the most likely source of leakage: folks using a general purpose HTTP subsystem that carries information not needed for DOH requests. Actually recommending data minimization there, rather than saying \"it should be considered\" seems like a worthwhile thing to do in my personal opinion.\nAgreed with Ted (on the list) that this conversation should be on the list. Locking it here.\nI have unlocked the comments so that people can propose specific wording changes as diffs in the PR itself; most questions and suggestions should be done on the list.", "new_text": "9. RFC7626 discusses DNS Privacy Considerations in both \"On the wire\" (Section 2.4), and \"In the server\" (Section 2.5) contexts. This is also a useful framing for DoH's privacy considerations. 9.1. DoH encrypts DNS traffic and requires authentication of the server. This mitigates both passive surveillance RFC7258 and active attacks that attempt to divert DNS traffic to rogue servers (RFC7626 Section 2.5.1). DNS over TLS RFC7858 provides similar protections, while direct UDP and TCP based transports are vulnerable to this class of attack. Additionally, the use of the HTTPS default port 443 and the ability to mix DoH traffic with other HTTPS traffic on the same connection can deter unprivileged on-path devices from interfering with DNS operations and make DNS traffic analysis more difficult. 9.2. A DoH implementation is built on IP, TCP, TLS, and HTTP. Each layer contains one or more common features that can be used to correlate queries to the same identity. DNS transports will generally carry the same privacy properties of the layers used to implement them. For example, the properties of IP, TCP, and TLS apply to DNS over TLS implementations. The privacy considerations of using the HTTPS layer in DoH are incremental to those of DNS over TLS. DoH is not known to introduce new concerns beyond those associated with HTTPS. At the IP level, the client address provides obvious correlation information. This can be mitigated by use of a NAT, proxy, VPN, or simple address rotation over time. It may be aggravated by use of a DNS server that can correlate real-time addressing information with other personal identifiers, such as when a DNS server and DHCP server are operated by the same entity. TCP-based solutions may seek performance through the use of TCP Fast Open RFC7413. The cookies used in TCP Fast Open allow servers to correlate TCP sessions together. TLS based implementations often achieve better handshake performance through the use of some form of session resumption mechanism such as session tickets RFC5077. Session resumption creates trivial mechanisms for a server to correlate TLS connections together. HTTP's feature set can also be used for identification and tracking in a number of different ways. For example, authentication request header fields explicitly identify profiles in use, and HTTP Cookies are designed as an explicit state tracking mechanism between the client and serving site and often are used as an authentication mechanism. Additionally, the User-Agent and Accept-Language request header fields often convey specific information about the client version or locale. This facilitates content negotiation and operational work- arounds for implementation bugs. Request header fields that control caching can expose state information about a subset of the client's history. Mixing DoH requests with other HTTP requests on the same connection also provides an opportunity for richer data correlation. The DoH protocol design allows applications to fully leverage all the features of the HTTP ecosystem, including features not enumerated here. Implementations of DoH clients and servers need to consider the benefit and privacy impact of these features, and their deployment context, when deciding whether or not to enable them. Implementations are advised to expose the minimal set of data needed to achieve the desired feature set. Determining whether or not a DoH implementation requires HTTP cookie RFC6265 support is particularly important because HTTP cookies are the primary state tracking mechanism in HTTP. HTTP Cookies SHOULD NOT be accepted by DOH clients unless they are explicitly required by a use case. 10. Running DNS over HTTPS relies on the security of the underlying HTTP transport. This mitigates classic amplification attacks for UDP- based DNS. Implementations utilizing HTTP/2 benefit from the TLS"}
{"id": "q-en-draft-ietf-doh-dns-over-https-be29419b942129087a4e1ddc5436aaffe0e95df772b2e3b37b1c9c5899e02019", "old_text": "This prohibition does not guarantee protection against invalid data, but it does reduce the risk. 10. Local policy considerations and similar factors mean different DNS servers may provide different results to the same query: for instance", "comments": "Is there a reason not to make a recommendation for the case of a DOH-only service? The current text says: Implementations of DoH clients and servers need to consider the benefit and privacy impact of all these features, and their deployment context, when deciding whether or not to enable them. Would you consider a recommendation like \"For DOH clients which do not intermingle DOH requests with other HTTP suppression of these headers and other potentially identifying headers is an appropriate data minimization strategy.\"?\nNAME That could be added, but it would need a lot more text because your proposed text only deals with one layer (HTTP) while the text above it also talks about fingerprinting with TLS, TCP, and IP (in reverse order). The WG can decide whether adding recommendations for just one layer is worthwhile or possibly misleading about the overall privacy.\nNAME It was only an example, and I would be happy to expand. That said, I think this is the most likely source of leakage: folks using a general purpose HTTP subsystem that carries information not needed for DOH requests. Actually recommending data minimization there, rather than saying \"it should be considered\" seems like a worthwhile thing to do in my personal opinion.\nAgreed with Ted (on the list) that this conversation should be on the list. 
Locking it here.\nI have unlocked the comments so that people can propose specific wording changes as diffs in the PR itself; most questions and suggestions should be done on the list.", "new_text": "This prohibition does not guarantee protection against invalid data, but it does reduce the risk. 11. Local policy considerations and similar factors mean different DNS servers may provide different results to the same query: for instance"}
{"id": "q-en-draft-ietf-doh-dns-over-https-21c0aff5795314314cfe7fd3055629adbaae37b0430d38066109eff761913203", "old_text": "to support reordering, parallelism, priority, and header compression to acheive similar performance. Those features were introduced to HTTP in HTTP/2 RFC7540. Earlier versions of HTTP are capable of conveying the semantic requirements of DOH but would result in very poor performance for many uses cases. 8.", "comments": "This text is rather subjective without data. Changing it to \"may\" still brings the point without needing a reference to a study proving the sentence true.\n(whoops; just realized I've been blowing my pull requests by chaining them off a single branch... my bad, and this will cause you some pain)", "new_text": "to support reordering, parallelism, priority, and header compression to achieve similar performance. Those features were introduced to HTTP in HTTP/2 RFC7540. Earlier versions of HTTP are capable of conveying the semantic requirements of DOH but may result in very poor performance for many use cases. 8."}
{"id": "q-en-draft-ietf-doh-dns-over-https-484e643d293fee6461bc2861186ef1d09c5dedd1527e26c443d9e73945943114", "old_text": "Stapling RFC6961 to provide the client with certificate revocation information that does not require contacting a third party. 11. Joe Hildebrand contributed lots of material for a different iteration", "comments": "this is a little longer than I had hoped given that bootstrap deadlock is not a new problem for dns introduced by doh.. but the https validaton angle is worth mentioning.\nIf DOH uses https under the hood, how does this https host name get resolved? I would have expected this to be mentioned in the RFC, though I didn't see any mention of it, besides OCSP Stapling. Is it assumed that DOH would need to be bootstrapped by traditional DNS? Or is it assumed that a DOH endpoint would have a TLS/SSL certificate for its ip address? (Apparently a certificate for an IP address is possible in some cases, though hard to get.) I assume DOH is not intended to be used by home NAT routers? (How would the router get a signed TLS certificate? Maybe a DHCP response could say: use this DOH url and trust this self-signed cert for its ip address (though a certificate might be too big for a DHCP response). Or, in cases where no local DNS services are needed, the DHCP response could maybe say: use this public DOH url, here's the ip address for that hostname)\nThe document doesn't describe how a HTTPS client authenticates HTTPS... that's on purpose in order to simply leverage HTTPS without surprises. as a practical matter a number of things can be done. One is TOFU as you suggest. Another is an IP based certificate which you also suggest. Another would be to lookup the name via another, perhaps less trusted resolver, knowing that HTTPS means the IP address isn't part of the security scheme anyhow - its just a route.. subject to blocking, but not to impersonation. 
Another would be to bootstrap the client with a name and an address instead of one or the other. but as a general matter I think this is one step beyond the scope of the document as its basically writing down the rules for HTTPS.\nThis issue semi-duplicates\nI think this should at least be called out as an operational concern. e.g. mentioning that you probably need an alternate resolver to find the DOH server, after which a paranoid implementation can re-query for the DOH's address as an integrity check.\nThis is the direction I have taken for . When opening a connection to the remote DOH server, if an ip was provided, it will just use that, if not, it will try to resolve using the local stub, but that may not work.... Just like an IP has always been provided to bootstrap which nameserver to use, here we need to also be provided a name so we can validate that the server we are talking to is the one we trust.", "new_text": "Stapling RFC6961 to provide the client with certificate revocation information that does not require contacting a third party. A DNS API client may face a similar bootstrapping problem when the HTTP request needs to resolve the hostname portion of the DNS URI. Just as the address of a traditional DNS nameserver cannot be originally determined from that same server, a DOH client cannot use its DOH server to initially resolve the server's host name into an address. Alternative strategies a client might employ include making the initial resolution part of the configuration, IP based URIs and corresponding IP based certificates for HTTPS, or resolving the DNS API Server's hostname via traditional DNS or another DOH server while still authenticating the resulting connection via HTTPS. 11. Joe Hildebrand contributed lots of material for a different iteration"}
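Once the server's hostname (or IP literal) has been obtained by one of the strategies above, queries travel over ordinary HTTPS. As a rough sketch of the GET wire format, assuming the base64url-encoded `dns` query parameter defined elsewhere in the DoH specification (the helper names here are ours, not the draft's):

```python
import base64
import struct

def dns_query(name: str, qtype: int = 1, qclass: int = 1) -> bytes:
    """Serialize a minimal DNS query message (ID fixed at 0, RD set)."""
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # one question
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.rstrip(".").split("."))
    return header + qname + b"\x00" + struct.pack("!HH", qtype, qclass)

def doh_get_url(endpoint: str, query: bytes) -> str:
    """Attach the query as the unpadded base64url 'dns' parameter."""
    encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"{endpoint}?dns={encoded}"

url = doh_get_url("https://dnsserver.example.net/dns-query",
                  dns_query("www.example.com"))
# An A-record query for www.example.com with message ID 0 encodes to
# dns=AAABAAABAAAAAAAAA3d3dwdleGFtcGxlA2NvbQAAAQAB
```

Using a fixed message ID of 0 keeps the encoded URL stable across identical queries, which is friendlier to HTTP caches than a random ID.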
{"id": "q-en-draft-ietf-emu-eap-tls13-a19d607bd7cf8aab270053ef0e641a360110e33c220bb1382358e74471763c13", "old_text": "Error alert message. In earlier versions of TLS, error alerts could be warnings or fatal. In TLS 1.3, error alerts are always fatal and the only alerts sent at warning level are \"close_notify\" and \"user_cancelled\", both of which indicate that the connection is not going to continue normally, see RFC8446. In TLS 1.3 RFC8446, error alerts are not mandatory to send after a", "comments": "Update the draft to use the same spelling as TLS RFC uses. Relates to issue\nWhile reading draft-ietf-emu-eap-tls13-15 I noticed that section now uses which is the named used in but not in RFC 8446 itself. I first thought this was a typo, before a further look showed that this change was mentioned in IETF 110 . Suggestion: make clear which parts need to be looked up from RFC 8446bis. In addition to exporter terminology, the meeting slides above also discuss about usercancelled behavior on slide 10. For clarity, any updates from RFC 8446bis should be marked clearly.\nIt would be best not to have normative references that block on RFC 8446bis. Perhaps we can just reference that it is renamed from \"exportermastersecret\"\nSounds good. That should make clear where it comes from and that there's no need to file errata about unknown term and/or RFC 8446bis dependency. Taking another look at my comment about usercancelled behavior on slide 10; to clarify what I was meaning: The slides hint that clarifications in 8446bis are needed by EAP-TLS too. If so, this looks like a reference to RFC 8446bis too. Finally, one nit: the alert name in RFC 8446 is \"usercanceled\", not \"user_cancelled\" as in draft -15.\nI made a PR with explaining that the document use the terminology from RFC8446bis. URL Do we need to say anything about usercancelled? RFC8446bis might do several clarifications and corrections. 
I think we should only mention things that are directly relevant for EAP-TLS 1.3 URL Current plan for RFC8446bis seems to be to ignore closenotify and I assume require close_notify after. This does not seem directly relevant for EAP-TLS 1.3. Or?\nclose issue?\nYes, please close this. I think it's now clear where exportersecret comes from. Also, thanks for the pointer to TLSv1.3 discussion. I agree usercanceled followed by close_notify discussion is not relevant for EAP-TLS 1.3. It's seems to be TLSv1.3 specification and implementation level detail as opposed to something that EAP-TLS 1.3 needs to directly react on.\nThis looks good to merge", "new_text": "Error alert message. In earlier versions of TLS, error alerts could be warnings or fatal. In TLS 1.3, error alerts are always fatal and the only alerts sent at warning level are \"close_notify\" and \"user_canceled\", both of which indicate that the connection is not going to continue normally, see RFC8446. In TLS 1.3 RFC8446, error alerts are not mandatory to send after a"}
{"id": "q-en-draft-ietf-emu-eap-tls13-adbad78888b0bb8e7fdb3cbbd943ef068cfa14bf17cd9b3d61874bb4b4d1eb4c", "old_text": "maintain backwards compatibility. However, this document does provide additional guidance on authentication, authorization, and resumption for EAP-TLS regardless of the underlying TLS version used. This document only describes differences compared to RFC5216. All message flow are example message flows specific to TLS 1.3 and do not apply to TLS 1.2. Since EAP-TLS couples the TLS handshake state machine with the EAP state machine it is possible that new versions of TLS will cause incompatibilities that introduce failures or security issues if they are not carefully integrated into the EAP-TLS protocol. Therefore, implementations MUST limit the maximum TLS version they use to 1.3, unless later versions are explicitly enabled by the administrator. This document specifies EAP-TLS 1.3 and does not specify how other TLS-based EAP methods use TLS 1.3. The specification for how other", "comments": "Hi! I performed a second AD review on draft-ietf-emu-eap-tls13. This time on -18. Thank you to the authors, the responsible co-chair, and the WG for all of the work to redesign the -13 protocol in response to the initial IESG review in December 2020. My review feedback consists of my own minimal additional comments and highlights of where previously stated IESG ballot items do not appear to be responded to or addressed. All items appear to be minor. ** Section 2.4. Previously, this text referenced TLS 1.3 or higher necessitating more ambiguous reference to version numbers. However, now the text only notes TLS 1.3. OLD When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP- TLS servers MUST comply with the compliance requirements (mandatory- to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) for the TLS version used. For TLS 1.3 the compliance requirements are defined in Section 9 of [RFC8446]. 
NEW When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP-TLS servers MUST comply with the compliance requirements (mandatory- to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) defined in Section 9 of [RFC8446]. ==[ COMMENT (Ben Kaduk) Section 2.1.4 The two paragraphs below replaces the corresponding paragraphs in Section 2.1.3 of [RFC5216] when EAP-TLS is used with TLS 1.3 or higher. The other paragraphs in Section 2.1.3 of [RFC5216] still apply with the exception that SessionID is deprecated. If the EAP-TLS peer authenticates successfully, the EAP-TLS server MUST send an EAP-Request packet with EAP-Type=EAP-TLS containing TLS records conforming to the version of TLS used. The message flow ends with the EAP-TLS server sending an EAP-Success message. If the EAP-TLS server authenticates successfully, the EAP-TLS peer MUST send an EAP-Response message with EAP-Type=EAP-TLS containing TLS records conforming to the version of TLS used. Just to walk through the various scenarios here; if the last message of the TLS handshake for a given TLS version is sent by the TLS client, that will go in an EAP-Response, so the server is then able to send EAP-Success directly. On the other hand, if the last TLS handshake message is sent by the server, that will have to go in an EAP-TLS EAP-Request, and so the peer will have to send an empty EAP-TLS response in order for the server to be able to wend EAP-Success? Do we need to have any text here to handle that empty EAP-TLS Eap-Request case? ==[ COMMENT(Alvaro Retana) Is the intention of the Appendix to provide additional formal updates to rfc5216? That is what it looks like to me, but there's no reference to it from the main text. Please either reference the Appendix when talking about some of the other updates (if appropriate) or specifically include the text somewhere more prominent. [Roman] As proposed above, please provide some forward reference. 
==[ COMMENT (\u00c9ric Vyncke) I find \"This section updates Section 2.1.1 of [RFC5216].\" a little ambiguous as it the 'updated section' is not identified clearly. I.e., as the sections in RFC 5216 are not too long, why not simply providing whole new sections ? [Roman] I'd propose a simple fix by using (approximate phrases such as) either: This section updates Section xxx of [RFC5216] by amending it with the following text. (for the cases where the intent is to keep the existing text in rfc5216 but add this new text) This section updates Section xxx of [RFC5216] by replacing it with the following text. (for the cases where the intent is to replace an entire section in rfc5216) ==[ COMMENT (\u00c9ric Vyncke) None of us are native English speaker, but \"e.g.\" as \"i.e.\" are usually followed by a comma while \"but\" has usually no comma before ;-) [Roman] A simple s/e.g./e.g.,/ and s/i.e./i.e.,/ would catch this. Regards, Roman\n==[ COMMENT (\u00c9ric Vyncke) None of us are native English speaker, but \"e.g.\" as \"i.e.\" are usually followed by a comma while \"but\" has usually no comma before ;-) [Roman] A simple s/e.g./e.g.,/ and s/i.e./i.e.,/ would catch this. Eric is correct that US english i.e. and e.g. is usually followed by a comma. I corrected this in d9f5028 I also run a word spell check on the document. Word does not suggest removing any comma after \"but\" but rather insert an additional comma after a but. I think this can be left to the RFC editor. The word spell check caught two other grammar errors (s after plural, no s after singular) RFC 8446 does not use capitalization on post-handshake as that is just a name for a set of messages. I corrected these things in\nI Made a new PR\nMade a commit with NEW When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP-TLS servers MUST comply with the compliance requirements (mandatory- to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) 
defined in Section 9 of [RFC8446].\nSection 2.1.4 Changed the text to: The message flow ends with a protected success indication from the EAP-TLS server, follwed by an EAP-Response packet of EAP-Type=EAP-TLS and no data from the EAP-TLS peer, follwed by EAP-Success from the server.\nI added a forward reference to the appendix in the introduction\nNAME looks like you have got everything covered? are there any comments from Roman which still need to be addressed? should we submit a new version and ask for IETF last call?\nI have tried to do everything. Problem with the updates thing. The replace or amend strategy does not really work.... [Roman] I'd propose a simple fix by using (approximate phrases such as) either: This section updates Section xxx of [RFC5216] by amending it with the following text. (for the cases where the intent is to keep the existing text in rfc5216 but add this new text) This section updates Section xxx of [RFC5216] by replacing it with the following text. (for the cases where the intent is to replace an entire section in rfc5216)\nNAME We should likely not do to much changes. Maybe we can write something like Amends with the following text. TLS 1.2 specific details in RFC 5216 are replaced. Requirement in the new takes precedence over requirements in the old text.\nThis review was addressed in earlier submissions. closing", "new_text": "maintain backwards compatibility. However, this document does provide additional guidance on authentication, authorization, and resumption for EAP-TLS regardless of the underlying TLS version used. This document only describes differences compared to RFC5216. When EAP-TLS is used with TLS 1.3, some references are updated as specified in updateref. All message flow are example message flows specific to TLS 1.3 and do not apply to TLS 1.2. 
Since EAP-TLS couples the TLS handshake state machine with the EAP state machine it is possible that new versions of TLS will cause incompatibilities that introduce failures or security issues if they are not carefully integrated into the EAP-TLS protocol. Therefore, implementations MUST limit the maximum TLS version they use to 1.3, unless later versions are explicitly enabled by the administrator. This document specifies EAP-TLS 1.3 and does not specify how other TLS-based EAP methods use TLS 1.3. The specification for how other"}
{"id": "q-en-draft-ietf-emu-eap-tls13-adbad78888b0bb8e7fdb3cbbd943ef068cfa14bf17cd9b3d61874bb4b4d1eb4c", "old_text": "If the EAP-TLS peer authenticates successfully, the EAP-TLS server MUST send an EAP-Request packet with EAP-Type=EAP-TLS containing TLS records conforming to the version of TLS used. The message flow ends with the EAP-TLS server sending an EAP-Success message. If the EAP-TLS server authenticates successfully, the EAP-TLS peer MUST send an EAP-Response message with EAP-Type=EAP-TLS containing", "comments": "Hi! I performed a second AD review on draft-ietf-emu-eap-tls13. This time on -18. Thank you to the authors, the responsible co-chair, and the WG for all of the work to redesign the -13 protocol in response to the initial IESG review in December 2020. My review feedback consists of my own minimal additional comments and highlights of where previously stated IESG ballot items do not appear to be responded to or addressed. All items appear to be minor. ** Section 2.4. Previously, this text referenced TLS 1.3 or higher necessitating more ambiguous reference to version numbers. However, now the text only notes TLS 1.3. OLD When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP- TLS servers MUST comply with the compliance requirements (mandatory- to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) for the TLS version used. For TLS 1.3 the compliance requirements are defined in Section 9 of [RFC8446]. NEW When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP-TLS servers MUST comply with the compliance requirements (mandatory- to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) defined in Section 9 of [RFC8446]. ==[ COMMENT (Ben Kaduk) Section 2.1.4 The two paragraphs below replaces the corresponding paragraphs in Section 2.1.3 of [RFC5216] when EAP-TLS is used with TLS 1.3 or higher. 
The other paragraphs in Section 2.1.3 of [RFC5216] still apply with the exception that SessionID is deprecated. If the EAP-TLS peer authenticates successfully, the EAP-TLS server MUST send an EAP-Request packet with EAP-Type=EAP-TLS containing TLS records conforming to the version of TLS used. The message flow ends with the EAP-TLS server sending an EAP-Success message. If the EAP-TLS server authenticates successfully, the EAP-TLS peer MUST send an EAP-Response message with EAP-Type=EAP-TLS containing TLS records conforming to the version of TLS used. Just to walk through the various scenarios here; if the last message of the TLS handshake for a given TLS version is sent by the TLS client, that will go in an EAP-Response, so the server is then able to send EAP-Success directly. On the other hand, if the last TLS handshake message is sent by the server, that will have to go in an EAP-TLS EAP-Request, and so the peer will have to send an empty EAP-TLS response in order for the server to be able to wend EAP-Success? Do we need to have any text here to handle that empty EAP-TLS Eap-Request case? ==[ COMMENT(Alvaro Retana) Is the intention of the Appendix to provide additional formal updates to rfc5216? That is what it looks like to me, but there's no reference to it from the main text. Please either reference the Appendix when talking about some of the other updates (if appropriate) or specifically include the text somewhere more prominent. [Roman] As proposed above, please provide some forward reference. ==[ COMMENT (\u00c9ric Vyncke) I find \"This section updates Section 2.1.1 of [RFC5216].\" a little ambiguous as it the 'updated section' is not identified clearly. I.e., as the sections in RFC 5216 are not too long, why not simply providing whole new sections ? [Roman] I'd propose a simple fix by using (approximate phrases such as) either: This section updates Section xxx of [RFC5216] by amending it with the following text. 
(for the cases where the intent is to keep the existing text in rfc5216 but add this new text) This section updates Section xxx of [RFC5216] by replacing it with the following text. (for the cases where the intent is to replace an entire section in rfc5216) ==[ COMMENT (\u00c9ric Vyncke) None of us are native English speaker, but \"e.g.\" as \"i.e.\" are usually followed by a comma while \"but\" has usually no comma before ;-) [Roman] A simple s/e.g./e.g.,/ and s/i.e./i.e.,/ would catch this. Regards, Roman\n==[ COMMENT (\u00c9ric Vyncke) None of us are native English speaker, but \"e.g.\" as \"i.e.\" are usually followed by a comma while \"but\" has usually no comma before ;-) [Roman] A simple s/e.g./e.g.,/ and s/i.e./i.e.,/ would catch this. Eric is correct that US english i.e. and e.g. is usually followed by a comma. I corrected this in d9f5028 I also run a word spell check on the document. Word does not suggest removing any comma after \"but\" but rather insert an additional comma after a but. I think this can be left to the RFC editor. The word spell check caught two other grammar errors (s after plural, no s after singular) RFC 8446 does not use capitalization on post-handshake as that is just a name for a set of messages. I corrected these things in\nI Made a new PR\nMade a commit with NEW When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP-TLS servers MUST comply with the compliance requirements (mandatory- to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) defined in Section 9 of [RFC8446].\nSection 2.1.4 Changed the text to: The message flow ends with a protected success indication from the EAP-TLS server, follwed by an EAP-Response packet of EAP-Type=EAP-TLS and no data from the EAP-TLS peer, follwed by EAP-Success from the server.\nI added a forward reference to the appendix in the introduction\nNAME looks like you have got everything covered? 
are there any comments from Roman which still need to be addressed? should we submit a new version and ask for IETF last call?\nI have tried to do everything. Problem with the updates thing. The replace or amend strategy does not really work.... [Roman] I'd propose a simple fix by using (approximate phrases such as) either: This section updates Section xxx of [RFC5216] by amending it with the following text. (for the cases where the intent is to keep the existing text in rfc5216 but add this new text) This section updates Section xxx of [RFC5216] by replacing it with the following text. (for the cases where the intent is to replace an entire section in rfc5216)\nNAME We should likely not do to much changes. Maybe we can write something like Amends with the following text. TLS 1.2 specific details in RFC 5216 are replaced. Requirement in the new takes precedence over requirements in the old text.\nThis review was addressed in earlier submissions. closing", "new_text": "If the EAP-TLS peer authenticates successfully, the EAP-TLS server MUST send an EAP-Request packet with EAP-Type=EAP-TLS containing TLS records conforming to the version of TLS used. The message flow ends with a protected success indication from the EAP-TLS server, followed by an EAP-Response packet of EAP-Type=EAP-TLS and no data from the EAP-TLS peer, followed by EAP-Success from the server. If the EAP-TLS server authenticates successfully, the EAP-TLS peer MUST send an EAP-Response message with EAP-Type=EAP-TLS containing"}
{"id": "q-en-draft-ietf-emu-eap-tls13-adbad78888b0bb8e7fdb3cbbd943ef068cfa14bf17cd9b3d61874bb4b4d1eb4c", "old_text": "When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP- TLS servers MUST comply with the compliance requirements (mandatory- to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) for the TLS version used. For TLS 1.3 the compliance requirements are defined in Section 9 of RFC8446. In EAP-TLS with TLS 1.3, only cipher suites with confidentiality SHALL be supported.", "comments": "Hi! I performed a second AD review on draft-ietf-emu-eap-tls13. This time on -18. Thank you to the authors, the responsible co-chair, and the WG for all of the work to redesign the -13 protocol in response to the initial IESG review in December 2020. My review feedback consists of my own minimal additional comments and highlights of where previously stated IESG ballot items do not appear to be responded to or addressed. All items appear to be minor. ** Section 2.4. Previously, this text referenced TLS 1.3 or higher necessitating more ambiguous reference to version numbers. However, now the text only notes TLS 1.3. OLD When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP- TLS servers MUST comply with the compliance requirements (mandatory- to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) for the TLS version used. For TLS 1.3 the compliance requirements are defined in Section 9 of [RFC8446]. NEW When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP-TLS servers MUST comply with the compliance requirements (mandatory- to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) defined in Section 9 of [RFC8446]. ==[ COMMENT (Ben Kaduk) Section 2.1.4 The two paragraphs below replaces the corresponding paragraphs in Section 2.1.3 of [RFC5216] when EAP-TLS is used with TLS 1.3 or higher. 
The other paragraphs in Section 2.1.3 of [RFC5216] still apply with the exception that SessionID is deprecated. If the EAP-TLS peer authenticates successfully, the EAP-TLS server MUST send an EAP-Request packet with EAP-Type=EAP-TLS containing TLS records conforming to the version of TLS used. The message flow ends with the EAP-TLS server sending an EAP-Success message. If the EAP-TLS server authenticates successfully, the EAP-TLS peer MUST send an EAP-Response message with EAP-Type=EAP-TLS containing TLS records conforming to the version of TLS used. Just to walk through the various scenarios here; if the last message of the TLS handshake for a given TLS version is sent by the TLS client, that will go in an EAP-Response, so the server is then able to send EAP-Success directly. On the other hand, if the last TLS handshake message is sent by the server, that will have to go in an EAP-TLS EAP-Request, and so the peer will have to send an empty EAP-TLS response in order for the server to be able to wend EAP-Success? Do we need to have any text here to handle that empty EAP-TLS Eap-Request case? ==[ COMMENT(Alvaro Retana) Is the intention of the Appendix to provide additional formal updates to rfc5216? That is what it looks like to me, but there's no reference to it from the main text. Please either reference the Appendix when talking about some of the other updates (if appropriate) or specifically include the text somewhere more prominent. [Roman] As proposed above, please provide some forward reference. ==[ COMMENT (\u00c9ric Vyncke) I find \"This section updates Section 2.1.1 of [RFC5216].\" a little ambiguous as it the 'updated section' is not identified clearly. I.e., as the sections in RFC 5216 are not too long, why not simply providing whole new sections ? [Roman] I'd propose a simple fix by using (approximate phrases such as) either: This section updates Section xxx of [RFC5216] by amending it with the following text. 
(for the cases where the intent is to keep the existing text in rfc5216 but add this new text) This section updates Section xxx of [RFC5216] by replacing it with the following text. (for the cases where the intent is to replace an entire section in rfc5216) ==[ COMMENT (\u00c9ric Vyncke) None of us are native English speaker, but \"e.g.\" as \"i.e.\" are usually followed by a comma while \"but\" has usually no comma before ;-) [Roman] A simple s/e.g./e.g.,/ and s/i.e./i.e.,/ would catch this. Regards, Roman\n==[ COMMENT (\u00c9ric Vyncke) None of us are native English speaker, but \"e.g.\" as \"i.e.\" are usually followed by a comma while \"but\" has usually no comma before ;-) [Roman] A simple s/e.g./e.g.,/ and s/i.e./i.e.,/ would catch this. Eric is correct that US english i.e. and e.g. is usually followed by a comma. I corrected this in d9f5028 I also run a word spell check on the document. Word does not suggest removing any comma after \"but\" but rather insert an additional comma after a but. I think this can be left to the RFC editor. The word spell check caught two other grammar errors (s after plural, no s after singular) RFC 8446 does not use capitalization on post-handshake as that is just a name for a set of messages. I corrected these things in\nI Made a new PR\nMade a commit with NEW When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP-TLS servers MUST comply with the compliance requirements (mandatory- to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) defined in Section 9 of [RFC8446].\nSection 2.1.4 Changed the text to: The message flow ends with a protected success indication from the EAP-TLS server, follwed by an EAP-Response packet of EAP-Type=EAP-TLS and no data from the EAP-TLS peer, follwed by EAP-Success from the server.\nI added a forward reference to the appendix in the introduction\nNAME looks like you have got everything covered? 
are there any comments from Roman which still need to be addressed? should we submit a new version and ask for IETF last call?\nI have tried to do everything. Problem with the updates thing. The replace or amend strategy does not really work.... [Roman] I'd propose a simple fix by using (approximate phrases such as) either: This section updates Section xxx of [RFC5216] by amending it with the following text. (for the cases where the intent is to keep the existing text in rfc5216 but add this new text) This section updates Section xxx of [RFC5216] by replacing it with the following text. (for the cases where the intent is to replace an entire section in rfc5216)\nNAME We should likely not do to much changes. Maybe we can write something like Amends with the following text. TLS 1.2 specific details in RFC 5216 are replaced. Requirement in the new takes precedence over requirements in the old text.\nThis review was addressed in earlier submissions. closing", "new_text": "When EAP-TLS is used with TLS version 1.3, the EAP-TLS peers and EAP- TLS servers MUST comply with the compliance requirements (mandatory- to-implement cipher suites, signature algorithms, key exchange algorithms, extensions, etc.) defined in Section 9 of RFC8446. In EAP-TLS with TLS 1.3, only cipher suites with confidentiality SHALL be supported."}
{"id": "q-en-draft-ietf-jsonpath-base-6f81aacdbe43416544af05b072f99c0f81552b8cb0b1532e582876168992b8d1", "old_text": "1.2. A frequently emphasized advantage of XML is the availability of powerful tools to analyse, transform and selectively extract data from XML documents. XPath is one of these tools. In 2007, the need for something solving the same class of problems for the emerging JSON community became apparent, specifically for: Finding data interactively and extracting them out of RFC8259 JSON values without special scripting. Specifying the relevant parts of the JSON data in a request by a client, so the server can reduce the amount of data in its response, minimizing bandwidth usage. So what does such a tool look like for JSON? When defining a JSONPath, how should expressions look? The XPath expression looks like or in popular programming languages such as JavaScript, Python and PHP, with a variable x holding the argument. Here we observe that such languages already have a fundamentally XPath-like feature built in. The JSONPath tool in question should: be naturally based on those language characteristics. cover only essential parts of XPath 1.0. be lightweight in code size and memory consumption. be runtime efficient. 1.3. JSONPath expressions always apply to a value in the same way as XPath expressions are used in combination with an XML document. Since a value is anonymous, JSONPath uses the abstract name \"$\" to refer to the root node of the argument. JSONPath expressions can use the or the for paths input to a JSONPath processor. [1] Where a JSONPath processor uses JSONPath expressions as output paths, these will always be converted to Output Paths which employ the more general . [2] Bracket notation is more general than dot notation and can serve as a canonical form when a JSONPath processor uses JSONPath expressions as output paths. JSONPath allows the wildcard symbol \"*\" for member names and array indices. 
It borrows the descendant operator \"..\" from E4X and the array slice syntax proposal \"[start:end:step]\" SLICE from ECMASCRIPT 4. JSONPath was originally designed to employ an for computing expressions. The present specification defines a simple expression language that is independent from any scripting language in use on the platform. JSONPath can use expressions, written in parentheses: \"()\", as an alternative to explicit names or indices as in: The symbol \"@\" is used for the current node. Filter expressions are supported via the syntax \"?()\" as in Here is a complete overview and a side by side comparison of the JSONPath syntax elements with their XPath counterparts. XPath has a lot more to offer (location paths in unabbreviated syntax, operators and functions) than listed here. Moreover there is a significant difference how the subscript operator works in Xpath and JSONPath: Square brackets in XPath expressions always operate on the resulting from the previous path fragment. Indices always start at 1. With JSONPath, square brackets operate on the or addressed by the previous path fragment. Array indices always start at 0. 2.", "comments": "Thanks for merging. I had a small fix to the caption of table 1 which is now .\n(This was originally raised in and now being split out) (Michael Kay) want to confuse the JSONPath spec by going into detailed comparisons at the risk of contradicting the normative text. (Glyn Normington) (Michael Kay) It seemed to be important in 2007, while arguing to have something like XPath for JSON. If nowadays the terminology used has changed significantly with XPath 2.0 and 3.0, we had better leave that comparison table 2 out. I am quite passionless here. At the IETF 110 meeting consensus was raised that inclusion of XPath in the Appendix section would be the most appropriate way to reference it.\nClosed in\nGreat improvement! Additions / Replacements of high value ... thanks", "new_text": "1.2. 
This document picks up Stefan Goessner's popular JSONPath specification dated 2007-02-21 JSONPath-orig and provides a normative definition for it. inspired-by-xpath provides a brief synopsis with XML's XPath XPath, which was the inspiration that led to the definition of JSONPath. JSONPath was intended as a light-weight companion to JSON implementations on platforms such as PHP and JavaScript, so instead of defining its own expression language like XPath did, JSONPath delegated this to the expression language of the platform. While the languages in which JSONPath is used do have significant commonalities, over time this caused non-portability of JSONPath expressions between the ensuing platform-specific dialects. The present specification intends to remove platform dependencies and serve as a common JSONPath specification that can be used across platforms. Obviously, this means that backwards compatibility could not always be achieved; a design principle of this specification is to go with a \"consensus\" between implementations even if it is rough, as long as that does not jeopardize the objective of obtaining a usable, stable JSON query language. 1.3. JSONPath expressions are applied to a JSON value, the argument. Within the JSONPath expression, the abstract name \"$\" is used to refer to the root node of the argument, i.e., to the argument as a whole. JSONPath expressions can use the or the to build paths that are input to a JSONPath processor. Bracket notation is more general than dot notation and can serve as a canonical form (for instance, when a JSONPath processor uses JSONPath expressions as output paths). JSONPath allows the wildcard symbol \"*\" to select any member of an object or any element of an array (wildcard). The descendant operator \"..\" selects the node and all its descendants (descendant-selector).
The array slice syntax \"[start:end:step]\" allows selecting a regular selection of elements from an array, giving a start position, an end position, and possibly a step value that moves the position from the start to the end (slice). JSONPath employs an expression language for computing values and making decisions. The present specification defines a simple expression language that is independent from any scripting language in use on the platform. JSONPath can use such expression language expressions, written in parentheses: \"()\", as an alternative to explicit names or indices as in [no-dot-length]: The symbol \"@\" is used for the current node, i.e., the node in the context of which the expression is evaluated. Filter expressions are supported via the syntax \"?()\" as in tbl-overview provides a quick overview of the JSONPath syntax elements. 2."}
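The notations surveyed in the record above (paths rooted at "$", dot notation, bracket notation, array indices) can be illustrated with a toy evaluator. This is purely a sketch: the function name `toy_select` and the regex-based tokenizer are illustrative assumptions, not the draft's normative grammar or semantics, and the subset covered is deliberately tiny (no wildcards, slices, descendants, or filters).

```python
import re

def toy_select(argument, path):
    """Toy evaluator for a tiny subset of JSONPath: the root "$",
    dot notation (.name), and bracket notation (['name'], [index]).
    Illustrative only -- not the normative semantics of the draft."""
    node = argument
    # Tokenize into (dot_name, quoted_name, index) triples; exactly
    # one field of each triple is non-empty for a given match.
    for dot_name, quoted_name, index in re.findall(
            r"\.(\w+)|\['([^']*)'\]|\[(\d+)\]", path):
        if dot_name:
            node = node[dot_name]        # dot-notation child
        elif quoted_name:
            node = node[quoted_name]     # bracket-notation child
        else:
            node = node[int(index)]      # array index (0-based)
    return node

doc = {"store": {"book": [{"title": "Moby Dick"}]}}
print(toy_select(doc, "$.store.book[0].title"))
```

Note how the dot-notation and bracket-notation forms of the same path select the same node, which is the sense in which bracket notation can serve as a canonical form.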
{"id": "q-en-draft-ietf-jsonpath-base-6f81aacdbe43416544af05b072f99c0f81552b8cb0b1532e582876168992b8d1", "old_text": "3.5.7.2. The \"descendant-selector\" is inspired by ECMAScript for XML (E4X). It selects the node and all its descendants. 3.5.8.", "comments": "Thanks for merging. I had a small fix to the caption of table 1 which is now .\n(This was original raised in and now being split out) (Michael Kay) want to confuse the JSONPath spec by going into detailed comparisons at the risk of contradicting the normative text. (Glyn Normington) (Michael Kay) It seemed to be important in 2007, while argumenting to have something like XPath for JSON. If nowadays the terminology used has changed significantly with XPath 2.0 and 3.0, we better leave that comparison table 2 out. I am quite passionless here. At the IETF 110 meeting consensus was raised that inclusion of XPath in the Appendix section would be the most appropriate way to reference to it.\nClosed in\nGreat improvement!Additions / Replacements of high value ... thanks", "new_text": "3.5.7.2. The \"descendant-selector\" selects the node and all its descendants. 3.5.8."}
{"id": "q-en-draft-ietf-jsonpath-base-6f81aacdbe43416544af05b072f99c0f81552b8cb0b1532e582876168992b8d1", "old_text": "isolating a single location within a document. Path is a query syntax that can also be used to pull multiple locations. ", "comments": "Thanks for merging. I had a small fix to the caption of table 1 which is now .\n(This was original raised in and now being split out) (Michael Kay) want to confuse the JSONPath spec by going into detailed comparisons at the risk of contradicting the normative text. (Glyn Normington) (Michael Kay) It seemed to be important in 2007, while argumenting to have something like XPath for JSON. If nowadays the terminology used has changed significantly with XPath 2.0 and 3.0, we better leave that comparison table 2 out. I am quite passionless here. At the IETF 110 meeting consensus was raised that inclusion of XPath in the Appendix section would be the most appropriate way to reference to it.\nClosed in\nGreat improvement!Additions / Replacements of high value ... thanks", "new_text": "isolating a single location within a document. Path is a query syntax that can also be used to pull multiple locations. [no-dot-length] We probably want to use an expression that does not use "}
{"id": "q-en-draft-ietf-jsonpath-base-b7e72ef6e4ea83f8bea7a9c34fbdacfd842ef2d70866e12e87d9af7047337fae", "old_text": "The examples in tbl-example use the expression mechanism to obtain the number of elements in an array, to test for the presence of a member in a object, and to perform numeric comparisons of member values with a constant. 3.", "comments": "Lots of niggly details re blank space. Style consistency, some missing places (assuming we stick with the liberal approach). Allowing \"in\" before or after a newline or tab. Adding some legend to the only remaining table with XPath content in the main document.\n[ ] define a production , replacing [ ] go through ABNF and make sure blank space is allowed where it needs to be (Do not use the term \"whitespace\" any more.) Task \"T1\" in the document URL\nand\nFrom the document: (whitespace is now generally called blank space, but that doesn't change the work to be done.)\nand\nHow much do we want to reference XPath within the spec? I understand that XPath is the inspiration (and in the tagline that NAME used in his blog post: \"XPath for JSON\"), but I wonder if much of this can be left out of the spec. The argument for this is that we're defining JSONPath. It shouldn't assume any understanding of XPath or any other technology. It's not the job of the specification to convince readers to use it. It's the job of the specification to define the technology. I'm perfectly happy moving this into an external \"Understanding JSONPath\" website at some point.\nI seem to remember that last time we discussed this we agreed to move the introductory comparison with XPath to an appendix, with the possible exception of the overview table (but that could be duplicated).\nThe XPath inclusion was moved into an appendix.\nDo we allow spaces (outside of string literals)? If so, where is whitespace allowed? I think whitespace in query expressions should be allowed. This would fall in to the context of the . 
Maybe inside the union/ selector: Are there any other places where whitespace could be considered useful?\nIt may be convenient to allow spaces between matchers - to allow well-readable multi-line queries for complex cases.\nLooks like the allows whitespace internally, so that (with leading and trailing whitespace) is valid.\nClosed in and\nWhat was the resolution here? Where did we end up allowing whitespace?", "new_text": "The examples in tbl-example use the expression mechanism to obtain the number of elements in an array, to test for the presence of a member in an object, and to perform numeric comparisons of member values with a constant. For further illustration, the table shows XPath expressions that would be used with a comparable XML document; see also xpath-overview. 3."}
{"id": "q-en-draft-ietf-jsonpath-base-b7e72ef6e4ea83f8bea7a9c34fbdacfd842ef2d70866e12e87d9af7047337fae", "old_text": "member name \"\"a\"\". The result is again a list of one node: \"[{\"b\":0},{\"b\":1},{\"c\":2}]\". Next, \"[*]\" selects from any input node -- either an array or an object -- all its elements or members. The result is a list of three nodes: \"{\"b\":0}\", \"{\"b\":1}\", and \"{\"c\":2}\". Finally, \".b\" selects from any input node of type object with a member name \"b\" and selects the node of the member value of the input", "comments": "Lots of niggly details re blank space. Style consistency, some missing places (assuming we stick with the liberal approach). Allowing \"in\" before or after a newline or tab. Adding some legend to the only remaining table with XPath content in the main document.\n[ ] define a production , replacing [ ] go through ABNF and make sure blank space is allowed where it needs to be (Do not use the term \"whitespace\" any more.) Task \"T1\" in the document URL\nand\nFrom the document: (whitespace is now generally called blank space, but that doesn't change the work to be done.)\nand\nHow much do we want to reference XPath within the spec? I understand that XPath is the inspiration (and in the tagline that NAME used in his blog post: \"XPath for JSON\"), but I wonder if much of this can be left out of the spec. The argument for this is that we're defining JSONPath. It shouldn't assume any understanding of XPath or any other technology. It's not the job of the specification to convince readers to use it. It's the job of the specification to define the technology. 
I'm perfectly happy moving this into an external \"Understanding JSONPath\" website at some point.\nI seem to remember that last time we discussed this we agreed to move the introductory comparison with XPath to an appendix, with the possible exception of the overview table (but that could be duplicated).\nThe XPath inclusion was moved into an appendix.\nDo we allow spaces (outside of string literals)? If so, where is whitespace allowed? I think whitespace in query expressions should be allowed. This would fall into the context of the . Maybe inside the union/ selector: Are there any other places where whitespace could be considered useful?\nIt may be convenient to allow spaces between matchers - to allow well-readable multi-line queries for complex cases.\nLooks like the allows whitespace internally, so that (with leading and trailing whitespace) is valid.\nClosed in and\nWhat was the resolution here? Where did we end up allowing whitespace?", "new_text": "member name \"\"a\"\". The result is again a list of one node: \"[{\"b\":0},{\"b\":1},{\"c\":2}]\". Next, \"[*]\" selects from an input node of type array all its elements (if the input node were of type object, it would select all its member values, but not the member names). The result is a list of three nodes: \"{\"b\":0}\", \"{\"b\":1}\", and \"{\"c\":2}\". Finally, \".b\" selects from any input node of type object with a member name \"b\" and selects the node of the member value of the input"}
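The stepwise nodelist evaluation walked through in the record above can be sketched as a pair of small functions. This is a toy model under stated assumptions: real implementations track node locations, not just values, and the function names here are illustrative, not from the draft.

```python
def select_member(nodelist, name):
    """Sketch of a member-name step (e.g. ".a"): from each object in
    the input nodelist, select the value of member `name`; any other
    node contributes nothing to the output nodelist."""
    return [node[name] for node in nodelist
            if isinstance(node, dict) and name in node]

def select_wildcard(nodelist):
    """Sketch of the "[*]" step: from each array select all elements;
    from each object select all member values (not the member names)."""
    out = []
    for node in nodelist:
        if isinstance(node, list):
            out.extend(node)
        elif isinstance(node, dict):
            out.extend(node.values())
    return out

# The example from the record: $.a[*].b applied to the argument.
argument = {"a": [{"b": 0}, {"b": 1}, {"c": 2}]}
step1 = select_member([argument], "a")   # .a
step2 = select_wildcard(step1)           # [*]
step3 = select_member(step2, "b")        # .b
print(step3)
```

Each step consumes the previous step's nodelist, which is exactly the pipeline structure the record's prose describes.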
{"id": "q-en-draft-ietf-jsonpath-base-7445bbe1e58fc447116e857df0fca5daaca10bf696811bd9e7024a71511bcf59", "old_text": "Regular expression tests can be applied to \"string\" values only. The value of the first operand (\"containable\") of a \"contain-expr\" is compared to every single element of the RHS \"container\". In case of a match a selection occurs. Containment tests -- like comparisons -- are restricted to primitive values. So even if a structured \"containable\" value is equal to a certain structured value in \"container\", no selection is done. The value of the second operand (\"container\") of a \"contain-expr\" needs to be resolved to an array. Otherwise nothing is selected. The following table lists filter expression operators in order of precedence from highest (binds most tightly) to lowest (binds least tightly).", "comments": "From the meeting at IETF 112 ([1]): Consensus: 'in' won't be used, but it will be easy to add later if there's real demand [1] URL\nGood eyes. Thanks!\nYes, we agreed to remove the 'in' operator as some kind of syntactic sugar. thanks\nThanks! I pushed a small commit removing an \"in\" example.", "new_text": "Regular expression tests can be applied to \"string\" values only. The following table lists filter expression operators in order of precedence from highest (binds most tightly) to lowest (binds least tightly)."}
{"id": "q-en-draft-ietf-jsonpath-base-964bf5709b0809a98952ded1db05215c49f733a24290c5797274f6aa7ea4a7e6", "old_text": "to the member name from any JSON object in its input nodelist. It selects no nodes from any other JSON value. Note that the \"dot-selector\" follows the philosophy of JSON strings and is allowed to contain bit sequences that cannot encode Unicode characters (a single unpaired UTF-16 surrogate, for example). The behaviour of an implementation is undefined for member names which do not encode Unicode characters. 3.5.3. 3.5.3.1.", "comments": "NAME It would be good to address NAME comments in this PR. I have made some suggestions, but feel free to pick them apart if you are keen to retain \"truthiness\" and \"falsiness\". BTW perhjaps I struggle with the latter term because of the existence of some .\nIndeed. It is not that easy to say here what needs to be said, so I'm still in thinking mode. Thanks for the pointer to English slang; of course I was completely unaware. We don't want to use the fooiness terms. Interestingly, the remaining grammar does not seem to have a place where JSON values are implicitly converted to Boolean; this is now all explicit unless I'm missing something. So maybe we don't need this table (and its legend) at all?\nYou're right! The two occurrences of are as follows. which simply tests for non-existence of a value(s) selected by a path and does not depend on the actual values themselves. which negates a boolean expression. Looking at the details of , there's no way it can result in a value which might need to be converted to boolean. So please can we scrap the table and kiss \"fooiness\" goodbye once and for all?\nI think we should first merge it and then delete it, in case we need to resurrect it later.\nwhat's about ... which can surely be converted to a negated form. But sometimes someone doesn't want to do that ... like in any programming language. Nevertheless I second to kick fooiness out. 
As a minimalist I am always a friend of kicking unneeded things out.\nComparisons already yield Boolean values, as does /.\nWell, then in fact I misunderstood Glyn's comment above ... I agree to remove the conversion table as superfluous.", "new_text": "to the member name from any JSON object in its input nodelist. It selects no nodes from any other JSON value. 3.5.3. 3.5.3.1."}
{"id": "q-en-draft-ietf-jsonpath-base-964bf5709b0809a98952ded1db05215c49f733a24290c5797274f6aa7ea4a7e6", "old_text": "of their element or member values then. Applied to other value types, it will select nothing. : The zero number/empty string exceptions are no longer true. Booleans work the same everywhere. Negation operator \"neg-op\" allows to test of values. Applying negation operator twice \"!!\" gives us of values. Some examples:", "comments": "NAME It would be good to address NAME comments in this PR. I have made some suggestions, but feel free to pick them apart if you are keen to retain \"truthiness\" and \"falsiness\". BTW perhjaps I struggle with the latter term because of the existence of some .\nIndeed. It is not that easy to say here what needs to be said, so I'm still in thinking mode. Thanks for the pointer to English slang; of course I was completely unaware. We don't want to use the fooiness terms. Interestingly, the remaining grammar does not seem to have a place where JSON values are implicitly converted to Boolean; this is now all explicit unless I'm missing something. So maybe we don't need this table (and its legend) at all?\nYou're right! The two occurrences of are as follows. which simply tests for non-existence of a value(s) selected by a path and does not depend on the actual values themselves. which negates a boolean expression. Looking at the details of , there's no way it can result in a value which might need to be converted to boolean. So please can we scrap the table and kiss \"fooiness\" goodbye once and for all?\nI think we should first merge it and then delete it, in case we need to resurrect it later.\nwhat's about ... which can surely be converted to a negated form. But sometimes someone doesn't want to do that ... like in any programming language. Nevertheless I second to kick fooiness out. 
As a minimalist I am always a friend of kicking unneeded things out.\nComparisons already yield Boolean values, as does /.\nWell, then in fact I misunderstood Glyn's comment above ... I agree to remove the conversion table as superfluous.", "new_text": "of their element or member values then. Applied to other value types, it will select nothing. DELETE-ME: (( We currently don't need this. But we commit a version in case we do need it later. The negation operator \"neg-op\" can be applied to any JSON value and returns \"true\" only when applied to \"false\" or \"null\": Consequently, applying the negation operator twice \"!!\" returns \"true\" for all values except \"false\" or \"null\". )) Some examples:"}
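The parked negation semantics quoted in the record above ("returns true only when applied to false or null") can be sketched in a few lines. Assumptions: JSON null is modelled as Python's `None`, and the function name `negate` is illustrative, not from the draft.

```python
def negate(value):
    """Sketch of the parked negation operator "!": per the quoted
    DELETE-ME text, it yields true only when applied to the JSON
    values false or null (None in Python's JSON mapping)."""
    return value is False or value is None
```

Applying it twice, as in "!!", then yields true for every value except false or null, matching the quoted consequence.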
{"id": "q-en-draft-ietf-jsonpath-base-152de77850745f04dc83574eaec9e7b2c3bfc6a5b7cdfa171fff0984c5967a84", "old_text": "D.draft-bormann-jsonpath-iregexp for performance issues in regular expression implementations.) Implementers need to be aware that good average performance is not sufficient as long as an attacker can choose to submit specially crafted JSONPath queries that trigger surprisingly high, possibly exponential, CPU usage. 5.2.", "comments": "Also add some general advice from URL See also . Ref: URL\nRecursion is not really the problem, stack depth is (or any other uncontrolled resource usage triggered by queries into deeply nested structures). Gr\u00fc\u00dfe, Carsten\nNAME Are you comfortable with this now, or would you perhaps like to suggest some other way to address the concerns of URL\nThank you.Looks good and informative to me.", "new_text": "D.draft-bormann-jsonpath-iregexp for performance issues in regular expression implementations.) Implementers need to be aware that good average performance is not sufficient as long as an attacker can choose to submit specially crafted JSONPath queries or arguments that trigger surprisingly high, possibly exponential, CPU usage or, for example via a naive recursive implementation of the descendant selector, stack overflow. Implementations need to have appropriate resource management to mitigate these attacks. 5.2."}
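The stack-overflow risk the record above attributes to naive recursive descendant implementations can be mitigated in several ways; one is to bound argument nesting before evaluation. The function name `within_depth_limit` and the default limit of 512 are illustrative assumptions, not mitigations prescribed by the draft.

```python
def within_depth_limit(value, limit=512):
    """One possible mitigation (an assumption, not from the draft):
    reject arguments nested more deeply than a configurable limit
    before evaluating a query, so a recursive evaluator cannot be
    driven into stack overflow. The check itself uses an explicit
    stack, so it cannot overflow either."""
    stack = [(value, 1)]
    while stack:
        node, depth = stack.pop()
        if depth > limit:
            return False
        if isinstance(node, list):
            children = node
        elif isinstance(node, dict):
            children = node.values()
        else:
            children = ()
        stack.extend((child, depth + 1) for child in children)
    return True
```

A real implementation would pair such a check with CPU and memory budgets, since nesting depth is only one of the resources an attacker can target.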
{"id": "q-en-draft-ietf-jsonpath-base-6d25b71a9afc260aca2ed973a8f4c7f6a63d68126d883fd2868c0c096fdaf90b", "old_text": "The semantics of regular expressions are as defined in I-D.draft- bormann-jsonpath-iregexp. 3.5.8.3. JSON document:", "comments": "See also . Ref: URL Fixes URL\nTo be honest I also considered a comment along the lines of \"couldn't we just say 'standard Boolean math'?\" But (another little-used math degree here) I found it pleasing and, on reflection, possibly beneficial for an implementor.\nAmong other things, it dawns on me that if I were implementing, I'd copy-paste all those lines into a unit test.\nThe proposed operator precedence in the draft is shown below. Precedence 5: (...); 4: !; 3: ==, !=, <, <=, >, >=, =~; 2: &&; 1: ||. It is assumed that these precedences are motivated by existing languages, but it is also noted that operator precedence is different in different languages such as JavaScript and Python. For example, logical not has low precedence in Python but high precedence in JavaScript and C. The less than/greater than operators have higher precedence than the equality operators in JavaScript and C, but the same precedence in Python. Existing JSONPath implementations that follow these languages for evaluating filter expressions will also have these differences. This issue, created by request, collects the comparable operator precedences for Python, JavaScript, Perl, C, and XPath 3.1, to illustrate where they're similar to the draft, and where they differ. Perl is included because, of these languages, it is the only one that has a operator. Note that in Perl, the precedence of the operator is higher than the comparison operators. One comment: the draft operator precedence table should include an additional column for associativity, which is needed to fully describe the operator. The unary logical not operator should be shown as \"right associative\", and the binary operators should be shown as \"left associative\".
Precedence Associativity: 7 n/a; 6 left to right; 5 right to left; 4 left to right; 3 left to right; 2 left to right; 1 left to right. Source: Precedence Associativity: 9 left; 8 left; 7 chained; 6 chain n/a; 5 left; 4 left; 3 right; 2 left; 1 left. Source: Remarks: Unary \"not\" is equivalent to \"!\" except for lower precedence. Binary \"and\" and \"or\" are equivalent to \"&&\" and \"||\" except for lower precedence. Precedence Associativity: 6 left to right; 5 right to left; 4 left to right; 3 left to right; 2 left to right; 1 left to right. Source: Precedence Syntax: 8 (); 7 x[index:index]; 6 x[index]; 5 x.attribute; 4 in, not in, is, is not, <, <=, >, >=, <>, !=, ==; 3 not x; 2 and; 1 or. Source: Precedence Associativity: 3 n/a; 2 either; 1 either. Source: Remarks: XPath 3.1 and XQuery don't have a logical not operator; there's a operator, but it's a binary operator that means something different. Instead, XPath 3.1 and XQuery provide the function , whose precedence rules follow the rules for functions.
(Perl being Perl has two logical nots, one that binds high and one that binds low, but they are otherwise equivalent.)\nLogical NOT is missing from the XPath 3.1 table.\nXPath 3.1 and XQuery don't have a logical not operator. Instead, they provide the function , whose precedence rules follow the rules for functions (not shown in my table.)\nIt may also be interesting to compare with the . In , like XPath 3.1, Boolean 'NOT' is a , not an operator.\nWe could at least document the associativity.\nAfter discussion, is there anything that needs to be changed in the draft aside from adding associativity specification?\nLGTM", "new_text": "The semantics of regular expressions are as defined in I-D.draft-bormann-jsonpath-iregexp. The logical AND, OR, and NOT operators have the normal semantics of Boolean algebra and consequently obey these laws (where \"P\", \"Q\", and \"R\" are any expressions with syntax \"logical-and-expr\", \"T\" is any true expression, such as \"1 == 1\", and \"F\" is any false expression, such as \"1 == 0\"): 3.5.8.3. JSON document:"}
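One commenter in the record above suggests copy-pasting the Boolean-algebra laws into a unit test. A small harness for doing exactly that, with Python's `and`/`or`/`not` standing in for the draft's `&&`, `||`, and `!` (the helper name `law_holds` is an illustrative assumption):

```python
from itertools import product

def law_holds(law, arity):
    """Exhaustively verify an n-variable Boolean identity over all
    2**arity truth assignments. `law` is a callable returning True
    when the identity holds for the given assignment."""
    return all(law(*values)
               for values in product((False, True), repeat=arity))

# De Morgan's laws, phrased as the record's P/Q placeholders:
print(law_holds(lambda p, q:
                (not (p and q)) == ((not p) or (not q)), 2))
```

Because filter expressions have only two truth values, exhaustive checking is cheap, which is what makes this unit-test approach attractive.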
{"id": "q-en-draft-ietf-jsonpath-base-40dda90b688357c64a7a9443227bde1e3b49b011615efcef51b4efb188489852", "old_text": "\"index-wild-selector\", \"filter-selector\", or \"list-selector\" acting on objects or arrays, or a \"slice-selector\" acting on arrays. Note that there is no \"bald\" descendant selector \"..\". 3.5.7.2. The \"descendant-selector\" selects certain descendants of a node: the \"..\" form (and the \"..[]\" form where \"\" is a \"quoted-member-name\") selects those descendants that are member", "comments": "This is probably a bit too wordy, but is a starting point for a sharper definition of the nodelist orders which can be produced by the descendant selector. Ref: URL\nI'm not sure of the context. Please could you point out particular lines. (The usual way to do this is to enter your review by clicking a \"+\" which appears when you hover the mouse pointer just to the left of a line in the \"Files changed\" tab of the PR. You can comment on a block of lines by clicking on the line number of the first line of the block and then shift-clicking on the line number of the last line of the block.) Meta comment: thanks for your review. I'm afraid it's non-binding. See on the mailing list.\nI created local comments as requested per NAME Well, let us keep that discussion in that list maybe? Actually, Stefan Hagen's kind approvals are not binding, so I will await your approval, Carsten, or someone else's with the appropriate privileges. (I could give Stefan those privileges, but that seems wrong!) In fact, I'm puzzled over who can approve PRs (in a binding way). The repository settings list four people with direct access to the repo (see the attached screenshot), but then there are cabo, timbray, and probably some other users with binding PR approval privileges. I'm not sure how we arrived at this set of people. Maybe someone will remember... 
On Tue, 12 Jul 2022 at 20:16, Glyn Normington wrote:", "new_text": "\"index-wild-selector\", \"filter-selector\", or \"list-selector\" acting on objects or arrays, or a \"slice-selector\" acting on arrays. Note that there is no \"bald\" descendant selector (\"..\" on its own). 3.5.7.2. A \"descendant-selector\" selects certain descendants of a node: the \"..\" form (and the \"..[]\" form where \"\" is a \"quoted-member-name\") selects those descendants that are member"}
{"id": "q-en-draft-ietf-jsonpath-base-40dda90b688357c64a7a9443227bde1e3b49b011615efcef51b4efb188489852", "old_text": "the \"..[*]\" and \"..*\" forms select all the descendants. The resultant nodelist is ordered as if: nodes are visited before their children, and nodes of an array are visited in array order. Children of an object may be visited in any order, since JSON objects are unordered. Implementations may visit descendants in any way providing the resultant nodelist could have been produced by visiting descendants as above. 3.5.7.3.", "comments": "This is probably a bit too wordy, but is a starting point for a sharper definition of the nodelist orders which can be produced by the descendant selector. Ref: URL\nI'm not sure of the context. Please could you point out particular lines. (The usual way to do this is to enter your review by clicking a \"+\" which appears when you hover the mouse pointer just to the left of a line in the \"Files changed\" tab of the PR. You can comment on a block of lines by clicking on the line number of the first line of the block and then shift-clicking on the line number of the last line of the block.) Meta comment: thanks for your review. I'm afraid it's non-binding. See on the mailing list.\nI created local comments as requested per NAME Well, let us keep that discussion in that list maybe? Actually, Stefan Hagen's kind approvals are not binding, so I will await your approval, Carsten, or someone else's with the appropriate privileges. (I could give Stefan those privileges, but that seems wrong!) In fact, I'm puzzled over who can approve PRs (in a binding way). The repository settings list four people with direct access to the repo (see the attached screenshot), but then there are cabo, timbray, and probably some other users with binding PR approval privileges. I'm not sure how we arrived at this set of people. Maybe someone will remember... 
On Tue, 12 Jul 2022 at 20:16, Glyn Normington wrote:", "new_text": "the \"..[*]\" and \"..*\" forms select all the descendants. An array-sequenced preorder of the descendants of a node is a sequence of all the descendants in which: nodes of any array appear in array order, nodes appear immediately before all their descendants. This definition does not stipulate the order in which the children of an object appear, since JSON objects are unordered. The resultant nodelist of a \"descendant-selector\" applied to a node must be a sub-sequence of an array-sequenced preorder of the descendants of the node. 3.5.7.3."}
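The ordering defined in the record above can be made concrete with a short sketch that produces one array-sequenced preorder. An assumption here: Python dicts preserve insertion order, so iterating over `values()` fixes one of the object-member orders the definition permits; any other member order would be equally valid.

```python
def array_sequenced_preorder(node):
    """Sketch of one array-sequenced preorder of a node's descendants:
    every node appears immediately before all of its descendants, and
    array elements keep array order. Illustrative, not normative."""
    order = []
    def walk(n):
        order.append(n)
        children = (n if isinstance(n, list)
                    else list(n.values()) if isinstance(n, dict)
                    else [])
        for child in children:
            walk(child)
    walk(node)
    return order[1:]  # descendants only; the node itself is excluded

print(array_sequenced_preorder([[1, 2], 3]))
```

A conforming descendant-selector result must then be a sub-sequence of such an order; for example, selecting only the numeric descendants of `[[1, 2], 3]` could yield `1, 2, 3` but not `3, 1, 2`.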
{"id": "q-en-draft-ietf-jsonpath-base-1f1defd1eaf85c4b367f555503ade1a15192c592c8446de5ef9fddd4ddbcdbe6", "old_text": "3.5.5.2.2. The comparison operators \"==\", \"<\", and \">\" are defined first and then these are used to define \"!=\", \"<=\", and \">=\". When a path resulting in an empty nodelist appears on either side of a comparison:", "comments": "Fixes URL\nThanks for your comments NAME and NAME I feel I should explain. The current draft defines the orderings and using the same text. For strings, the text talks about \"less than\" but not \"greater than\". So the current text implicitly assumes that for strings and , if and only if . Non-mathematicians would tend to say this is patently obvious. But this PR avoids that assumption by only defining using the same text and then defining in terms of . However, I take the point about the standalone definitions and the arbitrariness of defining one in terms of the other. So my question is: is the current draft text sufficiently precise? If not, I'll rework this PR to make it more precise. (When I started out doing that, it began to get quite wordy, so I changed tack.)\nDespite the fact, that Glyn's approach is mathematically clean, I would favour the symmetric approach by defining , ,. I am quite sure, that nobody will cry \"redundancy\" then.\nYes. Is that a problem? comes before ...\nActually, on the symmetry point, we were comfortable defining in terms of and that was asymmetric.\nIn what system of mathematics are operations ordered? This is completely arbitrary. I'm not convinced this needs to change. I prefer defining both and . And for symmetry, I'd change strings to match.\nApplying editorial discretion here. Merging.\nThe current spec defines string comparison using one of the operators , , , or in terms of when one string is less than another. That's fine for , but: for it effectively assumes is fixed (i.e. that for strings and if or ) for and it leaves quite a lot to the reader's imagination. 
Once we've settled on , the above should be addressed. ~We should also adopt a similar approach for ordered comparisons () of arrays if is adopted.~ (It wasn't.) (I labelled this issue as because I don't believe it will change the intended semantics in either case.)\nI propose to let URL land before addressing this issue.\nI personally prefer the previous version, with standalone definitions, but think this is a matter of editorial discretion.I agree with Tim. It's fine and correct to define in terms of other operators, but I feel and should both be equally defined. Defining one in terms of the other means you have to make an arbitrary choice of which is directly defined and which is indirectly defined.This PR is exactly the right way to do this.", "new_text": "3.5.5.2.2. The comparison operators \"==\" and \"<\" are defined first and then these are used to define \"!=\", \"<=\", \">\", and \">=\". When a path resulting in an empty nodelist appears on either side of a comparison:"}
{"id": "q-en-draft-ietf-jsonpath-base-1f1defd1eaf85c4b367f555503ade1a15192c592c8446de5ef9fddd4ddbcdbe6", "old_text": "the comparison is between two paths each of which result in an empty nodelist. a comparison using either of the operators \"<\" or \">\" yields false. When any path on either side of a comparison results in a nodelist consisting of a single node, each such path is replaced by the value", "comments": "Fixes URL\nThanks for your comments NAME and NAME I feel I should explain. The current draft defines the orderings and using the same text. For strings, the text talks about \"less than\" but not \"greater than\". So the current text implicitly assumes that for strings and , if and only if . Non-mathematicians would tend to say this is patently obvious. But this PR avoids that assumption by only defining using the same text and then defining in terms of . However, I take the point about the standalone definitions and the arbitrariness of defining one in terms of the other. So my question is: is the current draft text sufficiently precise? If not, I'll rework this PR to make it more precise. (When I started out doing that, it began to get quite wordy, so I changed tack.)\nDespite the fact, that Glyn's approach is mathematically clean, I would favour the symmetric approach by defining , ,. I am quite sure, that nobody will cry \"redundancy\" then.\nYes. Is that a problem? comes before ...\nActually, on the symmetry point, we were comfortable defining in terms of and that was asymmetric.\nIn what system of mathematics are operations ordered? This is completely arbitrary. I'm not convinced this needs to change. I prefer defining both and . And for symmetry, I'd change strings to match.\nApplying editorial discretion here. Merging.\nThe current spec defines string comparison using one of the operators , , , or in terms of when one string is less than another. That's fine for , but: for it effectively assumes is fixed (i.e. 
that for strings and if or ) for and it leaves quite a lot to the reader's imagination. Once we've settled on , the above should be addressed. ~We should also adopt a similar approach for ordered comparisons () of arrays if is adopted.~ (It wasn't.) (I labelled this issue as because I don't believe it will change the intended semantics in either case.)\nI propose to let URL land before addressing this issue.\nI personally prefer the previous version, with standalone definitions, but think this is a matter of editorial discretion.I agree with Tim. It's fine and correct to define in terms of other operators, but I feel and should both be equally defined. Defining one in terms of the other means you have to make an arbitrary choice of which is directly defined and which is indirectly defined.This PR is exactly the right way to do this.", "new_text": "the comparison is between two paths each of which result in an empty nodelist. a comparison using the operator \"<\" yields false. When any path on either side of a comparison results in a nodelist consisting of a single node, each such path is replaced by the value"}
{"id": "q-en-draft-ietf-jsonpath-base-1f1defd1eaf85c4b367f555503ade1a15192c592c8446de5ef9fddd4ddbcdbe6", "old_text": "for each of those names, the values associated with the name by the objects are equal. a comparison using either of the operators \"<\" or \">\" yields true if and only if the comparison is between values of the same type which are both numbers or both strings and which satisfy the comparison: numbers expected to interoperate as per RFC7493 MUST compare using the normal mathematical ordering; numbers not expected to", "comments": "Fixes URL\nThanks for your comments NAME and NAME I feel I should explain. The current draft defines the orderings and using the same text. For strings, the text talks about \"less than\" but not \"greater than\". So the current text implicitly assumes that for strings and , if and only if . Non-mathematicians would tend to say this is patently obvious. But this PR avoids that assumption by only defining using the same text and then defining in terms of . However, I take the point about the standalone definitions and the arbitrariness of defining one in terms of the other. So my question is: is the current draft text sufficiently precise? If not, I'll rework this PR to make it more precise. (When I started out doing that, it began to get quite wordy, so I changed tack.)\nDespite the fact, that Glyn's approach is mathematically clean, I would favour the symmetric approach by defining , ,. I am quite sure, that nobody will cry \"redundancy\" then.\nYes. Is that a problem? comes before ...\nActually, on the symmetry point, we were comfortable defining in terms of and that was asymmetric.\nIn what system of mathematics are operations ordered? This is completely arbitrary. I'm not convinced this needs to change. I prefer defining both and . And for symmetry, I'd change strings to match.\nApplying editorial discretion here. 
Merging.\nThe current spec defines string comparison using one of the operators , , , or in terms of when one string is less than another. That's fine for , but: for it effectively assumes is fixed (i.e. that for strings and if or ) for and it leaves quite a lot to the reader's imagination. Once we've settled on , the above should be addressed. ~We should also adopt a similar approach for ordered comparisons () of arrays if is adopted.~ (It wasn't.) (I labelled this issue as because I don't believe it will change the intended semantics in either case.)\nI propose to let URL land before addressing this issue.\nI personally prefer the previous version, with standalone definitions, but think this is a matter of editorial discretion.I agree with Tim. It's fine and correct to define in terms of other operators, but I feel and should both be equally defined. Defining one in terms of the other means you have to make an arbitrary choice of which is directly defined and which is indirectly defined.This PR is exactly the right way to do this.", "new_text": "for each of those names, the values associated with the name by the objects are equal. a comparison using the operator \"<\" yields true if and only if the comparison is between values of the same type which are both numbers or both strings and which satisfy the comparison: numbers expected to interoperate as per RFC7493 MUST compare using the normal mathematical ordering; numbers not expected to"}
{"id": "q-en-draft-ietf-jsonpath-base-1f1defd1eaf85c4b367f555503ade1a15192c592c8446de5ef9fddd4ddbcdbe6", "old_text": "first string compares less than the remainder of the second string. Note that comparisons using either of the operators \"<\" or \">\" yield false if either value being compared is an object, array, boolean, or \"null\". \"!=\", \"<=\" and \">=\" are defined in terms of the other comparison operators. For any \"a\" and \"b\": The comparison \"a != b\" yields true if and only if \"a == b\" yields false.", "comments": "Fixes URL\nThanks for your comments NAME and NAME I feel I should explain. The current draft defines the orderings and using the same text. For strings, the text talks about \"less than\" but not \"greater than\". So the current text implicitly assumes that for strings and , if and only if . Non-mathematicians would tend to say this is patently obvious. But this PR avoids that assumption by only defining using the same text and then defining in terms of . However, I take the point about the standalone definitions and the arbitrariness of defining one in terms of the other. So my question is: is the current draft text sufficiently precise? If not, I'll rework this PR to make it more precise. (When I started out doing that, it began to get quite wordy, so I changed tack.)\nDespite the fact, that Glyn's approach is mathematically clean, I would favour the symmetric approach by defining , ,. I am quite sure, that nobody will cry \"redundancy\" then.\nYes. Is that a problem? comes before ...\nActually, on the symmetry point, we were comfortable defining in terms of and that was asymmetric.\nIn what system of mathematics are operations ordered? This is completely arbitrary. I'm not convinced this needs to change. I prefer defining both and . And for symmetry, I'd change strings to match.\nApplying editorial discretion here. 
Merging.\nThe current spec defines string comparison using one of the operators , , , or in terms of when one string is less than another. That's fine for , but: for it effectively assumes is fixed (i.e. that for strings and if or ) for and it leaves quite a lot to the reader's imagination. Once we've settled on , the above should be addressed. ~We should also adopt a similar approach for ordered comparisons () of arrays if is adopted.~ (It wasn't.) (I labelled this issue as because I don't believe it will change the intended semantics in either case.)\nI propose to let URL land before addressing this issue.\nI personally prefer the previous version, with standalone definitions, but think this is a matter of editorial discretion.I agree with Tim. It's fine and correct to define in terms of other operators, but I feel and should both be equally defined. Defining one in terms of the other means you have to make an arbitrary choice of which is directly defined and which is indirectly defined.This PR is exactly the right way to do this.", "new_text": "first string compares less than the remainder of the second string. Note that comparisons using the operator \"<\" yield false if either value being compared is an object, array, boolean, or \"null\". \"!=\", \"<=\", \">\", and \">=\" are defined in terms of the other comparison operators. For any \"a\" and \"b\": The comparison \"a != b\" yields true if and only if \"a == b\" yields false."}
{"id": "q-en-draft-ietf-jsonpath-base-1f1defd1eaf85c4b367f555503ade1a15192c592c8446de5ef9fddd4ddbcdbe6", "old_text": "The comparison \"a <= b\" yields true if and only if \"a < b\" yields true or \"a == b\" yields true. The comparison \"a >= b\" yields true if and only if \"a > b\" yields true or \"a == b\" yields true. 3.5.5.2.3.", "comments": "Fixes URL\nThanks for your comments NAME and NAME I feel I should explain. The current draft defines the orderings and using the same text. For strings, the text talks about \"less than\" but not \"greater than\". So the current text implicitly assumes that for strings and , if and only if . Non-mathematicians would tend to say this is patently obvious. But this PR avoids that assumption by only defining using the same text and then defining in terms of . However, I take the point about the standalone definitions and the arbitrariness of defining one in terms of the other. So my question is: is the current draft text sufficiently precise? If not, I'll rework this PR to make it more precise. (When I started out doing that, it began to get quite wordy, so I changed tack.)\nDespite the fact, that Glyn's approach is mathematically clean, I would favour the symmetric approach by defining , ,. I am quite sure, that nobody will cry \"redundancy\" then.\nYes. Is that a problem? comes before ...\nActually, on the symmetry point, we were comfortable defining in terms of and that was asymmetric.\nIn what system of mathematics are operations ordered? This is completely arbitrary. I'm not convinced this needs to change. I prefer defining both and . And for symmetry, I'd change strings to match.\nApplying editorial discretion here. Merging.\nThe current spec defines string comparison using one of the operators , , , or in terms of when one string is less than another. That's fine for , but: for it effectively assumes is fixed (i.e. that for strings and if or ) for and it leaves quite a lot to the reader's imagination. 
Once we've settled on , the above should be addressed. ~We should also adopt a similar approach for ordered comparisons () of arrays if is adopted.~ (It wasn't.) (I labelled this issue as because I don't believe it will change the intended semantics in either case.)\nI propose to let URL land before addressing this issue.\nI personally prefer the previous version, with standalone definitions, but think this is a matter of editorial discretion.I agree with Tim. It's fine and correct to define in terms of other operators, but I feel and should both be equally defined. Defining one in terms of the other means you have to make an arbitrary choice of which is directly defined and which is indirectly defined.This PR is exactly the right way to do this.", "new_text": "The comparison \"a <= b\" yields true if and only if \"a < b\" yields true or \"a == b\" yields true. The comparison \"a > b\" yields true if and only if \"b < a\" yields true. The comparison \"a >= b\" yields true if and only if \"b < a\" yields true or \"a == b\" yields true. 3.5.5.2.3."}
{"id": "q-en-draft-ietf-jsonpath-base-8cc9c2a299d1690d49d2199dd938f65da53a872827b32f92a235149043972470", "old_text": "3.8. A Normalized Path is a JSONPath with restricted syntax that identifies a node by providing a query that results in exactly that node. For example, the JSONPath expression \"$.book[?(@.price<10)]\" could select two values with Normalized Paths \"$['book'][3]\" and \"$['book'][5]\". For a given JSON value, there is a one to one correspondence between the value's nodes and the Normalized Paths that identify these nodes. Note that there is precisely one Normalized Path that identifies each node. A JSONPath implementation may output Normalized Paths instead of, or in addition to, the values identified by these paths. Since bracket notation is more general than dot notation, it is used to construct Normalized Paths. Single quotes are used to delimit string member names. This reduces the number of characters that need escaping when Normalized Paths appear as strings (which are delimited with double quotes) in JSON texts. Certain characters are escaped, in one and only one way; all other characters are unescaped. Normalized Paths are Singular Paths. Not all Singular Paths are Normalized Paths: \"$[-3]\", for example, is a Singular Path, but not a Normalized Path. The Normalized Path equivalent to \"$[-3]\" would have an index equal to the array length minus \"3\". (The array length must be at least \"3\" if \"$[-3]\" is to identify a node.) Since there can only be one Normalized Path identifying a given node, the syntax stipulates which characters are escaped and which are not. So the definition of \"normal-hexchar\" is designed for hex escaping of characters which are not straightforwardly-printable, for example U+000B LINE TABULATION, but for which no standard JSON escape such as \"\\n\" is available. 3.8.1.", "comments": "I welcome this ... 
thanks.\nFrom the side line: Do we need to add description of behavior (with a qualifier) to motivate the truly welcome attributes of the ?\nA Normalized Path is just a particular kind of JSONPath query. Its only behaviour is to identify a node within the tree of nodes in a value. Or did you have some other behaviour in mind?\nOh, of course, but that formulation had never occurred to me, and it's useful. If it's not in the spec, it should be.\nI meant the first of the two sentences: Here we describe the behavior of an implementation (output as response to an unspecified request that triggers the former). I just wonder if this section is a good place to describe a part of a protocol, when the topics are a) the \"projection\" itself and b) potential benefits (all from a data perspective).\nWell, we describe the nature of what it is to implement JSONPath. Generally, in a protocol, you not only need to prescribe what is on the wire, but also what it means. That is only ever in the implementation. That doesn't mean that it describes the behavior of an implementation. E.g., in TCP, the byte stream that is transported by the protocol only ever happens in the implementation. Nothing on the wire reflects this byte stream. But describing that byte stream as the underlying model of TCP is rightly not criticized as \"describing the behavior of an implementation\".\nThe penny has just dropped: you are questioning the need for such a description rather than asking for more of the same! I agree with NAME response.\nI think an important aspect that this is missing is that normalized paths are contexted to the JSON data that was queried; whereas singular paths exist independently, as does any other path. For example, the path (used in the text) is singular and can rightfully exist on its own. However it can't be normalized without querying JSON data. Once queried, the resulting normalized paths can be different based on the data. 
yields a normalized path of for the value yields a normalized path of for the value .\nThis text isn't getting into implementation behavior at all. It's specifying output and allowing for options.\nWell, unforced ambiguous behavior then. Specifying output and allowing for (unrequested) options. For instance when the SomeTool processor outputs a number 42 and the options are that it represents any base in which the symbols are generally valid (per consensus) to me would rather raise questions than answer them. I would expect these complication (extra dimensions) less inside a format but more in a \"processor\" role or protocol description. The latter describing expected behavior within the dynamics of some interchange. I used the term without caution. Sorry for that. In my reading of the sentence we discuss about, the consumer of such a result has to look elsewhere to find out what the processor provides as result, right? That is our normal task as tool users. However, I think we better try to not bring in such ambiguities without noting the reason outside of our mandate to change. Maybe note, that existing processors (or however we like to call these providers of jsonpath responses) display mixed behaviors (what you call options). Makes sense? If not, sorry for the noise.\nWell, we do say: I wouldn't go so far to say that the Normalized Path is \"contexted\". You need the value to compute the Normalized Path from the query, but after that, it doesn't require context to apply it (well, they are meant to be applied to that value, but they are regular JSONPath queries). With IETF 115 raging and Glyn being offline for a couple of days, we'll probably not respond very fast the next few days; I think good points are being made here even it is not always clear how to modify the text further to fully reflect them.\nI believe the PR now makes this property of Normalized Paths clear. 
(I'd prefer not to labour the example.)\nNAME I have attempted to address your concern by making the following general statement (suggested by NAME Normalized Paths. A nodelist is defined as follows (in the Terminology section): From this, it is clear that an implementation of JSONPath may output a nodelist as a JSON array of strings, where the strings are Normalized Paths. So I have deleted the following statements, which seemed to be causing confusion: Normalized Paths instead of, or in addition to, the values identified by these paths. For example, the JSONPath expression could select two values and an implementation could output the Normalized Paths and . (I'm not sure whether this will be acceptable to the other reviewers, so I'm going to leave the PR open until they have had a chance to review the latest version.)\nNormalizing paths? What does that mean? There is no concept of \"normalizing\" arbitrary paths, it's pointless, there is no suggestion of that in Goessner's original article, or anywhere else that I'm aware of.\nNAME Not sure if you've seen the final form of this PR, but I'm going to merge it to close off the discussion threads. We can adjust the section later if necessary.\nURL begs the question of whether nodelists should have a normalized form. This would improve the interoperation, predictability, etc. of JSONPath implementations that output nodelists. Without this, JSONPath implementations which output nodelists would likely choose multiple different ways of representing nodelists. The proposal is for a Normalized Nodelist to consist of a possibly empty, whitespace separated list of Normalized Paths as in the following ABNF: Note: this proposal does not provide a normalized form for output from JSONPath implementations that output both nodelists and values.\ngood approach ... I would prefer a comma separated list, being able to be consumed as a JSON array without further modification ...\nohh ... 
and the URL as JSON strings, i.e.\nI don't think we want to do this. This is not a JSONPath query, but a structure built from that. I'd like to leave structure building to JSON entirely. We are not defining the syntax of JSON, and we should avoid falling into the restatement antipattern trap. So yes, a nodelist might look like , but not because we say so, but because that is JSON syntax. We would simply say nodelists are canonically represented as JSON arrays of strings, where the strings are normalized paths.\nyes ... that's even better and pragmatic.\nWhy do they exist? The spec says nothing on the subject. If we don't have something convincing to say, why not remove them? I know we discussed this before, but as I was reading this with fresh eyes I was thinking \"There's a lot of stuff here but why bother?\"\nThe normalized path was to be used in the output. I don't think there was any other use for them. We wanted to ensure that the output was consistent (primarily for testing/comparison purposes), so we devised the \"normalized path\" to ensure that paths were output in a known format. After the refactor, it basically just means that the path doesn't use the shorthand notations; all of the segments are bracketed.\nThe following sentence in the Normalized Paths section provides some motivation: but maybe we need to expand it along the lines of NAME comment above.\nYes, that's my recollection. Yes, Normalized Paths use only bracket notation, but bracket notation can also produce paths which are not Normalized, such as and .\nI repeat my basic question: What are these things for and what benefit do they offer, and to whom? I'm not sure what \"used in the output\" even means. It seems to me that more motivation is needed. I'd also be OK with just removing the whole section.\nHere goes... A Normalized Path plays the same role as a JSON Pointer in situations where JSONPath syntax is required instead of JSON Pointer. 
A Normalized Path identifies a node in a given JSON value. Given a JSONPath expression which identifies a particular node in a given JSON value, there is a unique Normalized Path which identifies the same node in the given JSON value. Software (applications, libraries, JSONPath implementations, etc.) which outputs Singular Paths identifying nodes in a JSON value can output the equivalent Normalized Paths instead. A benefit of doing this is that the behaviour of the software is then more predictable and more easily tested (because the tests need to accommodate fewer possibilities). If there are multiple implementations of the software, sharing a single test suite is made more tractable. Who does this benefit? Human or programmatic users of the software observe more predictable behaviour. People writing tests have less work to do. Sharing a test suite avoids people duplicating effort.\nNormalized paths are introduced in Stefan Goessners's , and are implemented in the two most important implementations, Goessner JavaScript JSONPath, and . I assume Stefan Goessner can speak to the original motivation, and if you want to know how they are used in practice, I suggest surveying users, perhaps starting on the Jayway site. Among other uses, they are helpful for diagnostics, for discovering what a perhaps non-obvious query resolves to in terms of the individual nodes selected. They have other uses for selecting specific nodes given a much larger query.\nI think the spec needs a statement with a MUST/SHOULD/MAY about supporting these things and a clear suggestion as to what useful purpose they're designed to serve. Otherwise I seriously suggest removing them because to be honest I can't think of a single scenario where I'd use them.\nI think they're intended not to be written but consumed. I agree in that I doubt anyone would write out except maybe in the context of a stackoverflow question or something. They'd just use . 
The point of this, though, is to be able to declare a preferred syntax that implementations should use to identify a known location in a document. It's just saying that we prefer over . (One advantage of using the bracketed syntax for normalization is that any location in a document can be written in this syntax, whereas there are restrictions on the dot syntax.) Again, I think this is most apparent in output when the implementation is returning paths to locations instead of / along with the values.\nAnother consideration is the spec's definition of nodelist as the output of applying a query: The following statement in the Normalized Paths section: steps into implementation detail. In the rest of the spec we studiously avoid prescribing implementation details. One way to reconcile these would be to put some non-normative text near the start of the spec which talks about implementation options for nodelist. This could then include language such as \"If a nodelist is represented as a list JSONPath expressions, these SHOULD be Normalized Paths.\". (But would it make sense to use \"SHOULD\" in non-normative text??)\nI don't think that's implementation detail. It's defining what the output may be. We can talk about this in another issue if it concerns you, though.\nI would say it's a statement about the representation of the abstract nodelist. A list of Normalized Paths is a string representation. Other possible representations are lists of JSON values in string form, a programming language list of nodes, etc. The choice of representation is an implementation decision. Feel free to raise a separate issue if you'd like to change the current text.\nNormalized Paths ... are a concrete string presentation of nodelists (JSON text). are already used in the spec within every example table (Result Paths). are meant as an additional / alternative output of a JSONPath query. enhance predictability and testing. 
might (will) be used as input for subsequent analysis / actions (removing of duplicates?). Think of piping in Unix. Especially with the last point standardizing them is essential.\nCould we include a bit of Stefan's argument somewhere in the introduction to the section?\nDone - see URL\nDefinitely the right direction. Thank you!", "new_text": "3.8. A Normalized Path is a canonical representation of the identity of a node in a value. Specifically, a Normalized Path is a JSONPath query with restricted syntax (defined below), e.g., \"$['book'][3]\", which when applied to the value results in a nodelist consisting of just the node identified by the Normalized Path. Note that a Normalized Path represents the identity of a node in a value. There is precisely one Normalized Path identifying any particular node in a value. A canonical representation of a nodelist is as a JSON array of strings, where the strings are Normalized Paths. Normalized Paths provide a predictable format that simplifies testing and post-processing of nodelists, e.g., to remove duplicate nodes. Normalized Paths are used in this document as result paths in examples. Normalized Paths use the canonical bracket notation, rather than dot notation. Single quotes are used to delimit string member names. This reduces the number of characters that need escaping when Normalized Paths appear in double quote delimited strings, e.g., in JSON texts. Certain characters are escaped, in one and only one way; all other characters are unescaped. Note: Normalized Paths are Singular Paths, but not all Singular Paths are Normalized Paths. For example, \"$[-3]\" is a Singular Path, but is not a Normalized Path. The Normalized Path equivalent to \"$[-3]\" would have an index equal to the array length minus \"3\". (The array length must be at least \"3\" if \"$[-3]\" is to identify a node.) Since there can only be one Normalized Path identifying a given node, the syntax stipulates which characters are escaped and which are not. 
So the definition of \"normal-hexchar\" is designed for hex escaping of characters which are not straightforwardly-printable, for example U+000B LINE TABULATION, but for which no standard JSON escape, such as \"\\n\", is available. 3.8.1."}
{"id": "q-en-draft-ietf-jsonpath-base-1bfbba7af0cf81fc29dde8d14df37513a5280d669c49578891cf6e8427240ec0", "old_text": "Security considerations for JSONPath can stem from attack vectors on JSONPath implementations, and the way JSONPath is used in security-relevant mechanisms.", "comments": "We wanted to add examples, but maybe we can merge this as is. Marking as ready for review.\nLet's get this in and polish it later.", "new_text": "Security considerations for JSONPath can stem from attack vectors on JSONPath implementations, attack vectors on how JSONPath queries are formed, and the way JSONPath is used in security-relevant mechanisms."}
{"id": "q-en-draft-ietf-jsonpath-base-1bfbba7af0cf81fc29dde8d14df37513a5280d669c49578891cf6e8427240ec0", "old_text": "4.2. Where JSONPath is used as a part of a security mechanism, attackers can attempt to provoke unexpected or unpredictable behavior, or take advantage of differences in behavior between JSONPath", "comments": "We wanted to add examples, but maybe we can merge this as is. Marking as ready for review.\nLet's get this in and polish it later.", "new_text": "4.2. JSONPath queries are often not static, but formed from variables that provide index values, member names, or values to compare with in a filter expression. These variables need to be translated into the form they take in a JSONPath query, e.g., by escaping string delimiters, or by only allowing specific constructs such as \".name\" to be formed when the given values allow that. Failure to perform these translations correctly can lead to unexpected failures, which in turn can cause Availability, Confidentiality, and Integrity breaches, in particular if an adversary has control over the values (e.g., by entering them into a Web form). The resulting class of attacks (e.g., SQL injections) is consistently found among the top causes of application security vulnerabilities and requires particular attention. 4.3. Where JSONPath is used as a part of a security mechanism, attackers can attempt to provoke unexpected or unpredictable behavior, or take advantage of differences in behavior between JSONPath"}
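The escaping step this record's new_text calls for can be illustrated with a hypothetical helper (the function names and the minimal escape set are assumptions for illustration; a production implementation must also handle control characters and validate member names):

```python
def escape_jsonpath_string(value: str) -> str:
    """Escape an untrusted string for embedding inside a single-quoted
    JSONPath string literal. Sketch: backslash and single quote only."""
    return value.replace("\\", "\\\\").replace("'", "\\'")

def name_filter(user_input: str) -> str:
    # Form a filter selector comparing @.name against an untrusted value.
    # Without escaping, a quote in user_input could terminate the string
    # literal and inject extra filter logic (the injection risk above).
    return "$[?@.name == '%s']" % escape_jsonpath_string(user_input)
```

With escaping applied, an input such as `x' || '` can no longer break out of the string literal to alter the filter expression.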
{"id": "q-en-draft-ietf-jsonpath-base-b36b7ff82a277f72d8a61702e2f64519f1b7ee7ab7ccd2223640afc52e3c8c06", "old_text": "filter-selectors enclosing the filter-selector that is directly enclosing the identifier). An existence expression may test the result of a function expression (see fnex). Parentheses MAY be used within \"boolean-expr\" for grouping.", "comments": "Add the necessary internal references.\nCurrently, the existence test description contradicts the intended use of OptionalBoolean functions. We need to be careful if we want to make a special case for OptionalBoolean functions and not open the door to any boolean value being treated similarly.\n2023-01-10 Interim: Fix text to properly handle OptionalBoolean. Make sure to indicate that an existence test from a path never gets to the actual value.\nCan this issue be elaborated? What is OptionalBoolean? How does this apply to functions? Examples would be great.\nBase 09 explains what OptionalBoolean is and uses it in the match and search functions. The current issue will also add some text to explain the way OptionalBoolean functions behave in filter expressions.\nE.g.,\nI propose a slightly more compact naming of ABNF of - and -segment, which should be more consistent between them two. is well known from LaTeX and means or there.\nabove is wrong (and so it is in the current spec). It needs to read ... to not allow triple dot ( :-)\nNAME Wouldn't dotdot-wildcard etc. be less cryptic?\nHere are some quibbles with the current ABNF (prior to PR being merged) which we can sort out after The Great Renaming:\nYes, yes. See. Fixed in I like the way this looks like now. Maybe I don't understand; can you elaborate? Fixed this (and following) in Removed in To be fixed in overall overhaul, next Yes! To be fixed in overall overhaul, next To be fixed in overall overhaul, next Fixed in Discuss this in its own issue, I'd say. Fixed in It certainly does not -- it is the subset that is actually needed for normalized paths. 
To stay normalized, we don't allow random other ones.\nOf course ... I'm quite dispassionate here.\nPlease see my proposal in\nWith merged, we still have the expression cleanup the separate issue (enhancement) whether should be allowed, to mean\nMy remaining points are covered by PRs , , and .\nShouldn't be named according to all other above ?\nYes, I agree. See URL\nNAME Are we ready to close this issue now the PRs have been merged or did you have more expression cleanup in mind?\nThe only thing that comes to my mind is standardizing on the column where the is -- this is currently rather chaotic. But that can be a new issue.\nIndentation of : number of lines such indented\n(And it doesn't need to be consistent, just a bit less chaotic.)\nLet's make the a new issue. (I haven't got a good feel for the optimal solution.)\nBefore we close this issue, URL slipped through the net in my earlier PRs.\nThen could resolve to either or . We would introduce an ambiguity, we already found with Json Pointer () ...\nOK, with the latest blank space realignments that are part of (5e2cc85), I think we are done with this.\nOk, let's refactor the ABNF later. ... or do we need two selectors here (Ch.2.61) child-longhand = \"[\" S selector 1(S \",\" S selector) S \"]\" given that '1' means 'at least one'.", "new_text": "filter-selectors enclosing the filter-selector that is directly enclosing the identifier). A test expression either tests the existence of a node designated by an embedded query (see extest) or tests the result of a function expression (see fnex). In the latter case, if the function expression is of type \"OptionalBoolean\" or one of its subtypes, it tests whether the result is \"true\"; if the function expression is of type \"OptionalNodes\" or one of its subtypes, it tests whether the result is different from \"Nothing\". Parentheses MAY be used within \"boolean-expr\" for grouping."}
{"id": "q-en-draft-ietf-jsonpath-base-b36b7ff82a277f72d8a61702e2f64519f1b7ee7ab7ccd2223640afc52e3c8c06", "old_text": "alternatively be absent (\"Nothing\"). \"OptionalNodes\" is an abstraction of a \"filter-path\" (which appears in an existence test or as a function argument). The abstract instances above can be obtained from the concrete representations in tbl-typerep.", "comments": "Add the necessary internal references.\nCurrently, the existence test description contradicts the intended use of OptionalBoolean functions. We need to be careful if we want to make a special case for OptionalBoolean functions and not open the door to any boolean value being treated similarly.\n2023-01-10 Interim: Fix text to properly handle OptionalBoolean. Make sure to indicate that an existence test from a path never gets to the actual value.\nCan this issue be elaborated? What is OptionalBoolean? How does this apply to functions? Examples would be great.\nBase 09 explains what OptionalBoolean is and uses it in the match and search functions. The current issue will also add some text to explain the way OptionalBoolean functions behave in filter expressions.\nE.g.,\nI propose a slightly more compact naming of ABNF of - and -segment, which should be more consistent between them two. is well known from LaTeX and means or there.\nabove is wrong (and so it is in the current spec). It needs to read ... to not allow triple dot ( :-)\nNAME Wouldn't dotdot-wildcard etc. be less cryptic?\nHere are some quibbles with the current ABNF (prior to PR being merged) which we can sort out after The Great Renaming:\nYes, yes. See. Fixed in I like the way this looks like now. Maybe I don't understand; can you elaborate? Fixed this (and following) in Removed in To be fixed in overall overhaul, next Yes! To be fixed in overall overhaul, next To be fixed in overall overhaul, next Fixed in Discuss this in its own issue, I'd say. 
Fixed in It certainly does not -- it is the subset that is actually needed for normalized paths. To stay normalized, we don't allow random other ones.\nOf course ... I'm quite dispassionate here.\nPlease see my proposal in\nWith merged, we still have the expression cleanup the separate issue (enhancement) whether should be allowed, to mean\nMy remaining points are covered by PRs , , and .\nShouldn't be named according to all other above ?\nYes, I agree. See URL\nNAME Are we ready to close this issue now the PRs have been merged or did you have more expression cleanup in mind?\nThe only thing that comes to my mind is standardizing on the column where the is -- this is currently rather chaotic. But that can be a new issue.\nIndentation of : number of lines such indented\n(And it doesn't need to be consistent, just a bit less chaotic.)\nLet's make the a new issue. (I haven't got a good feel for the optimal solution.)\nBefore we close this issue, URL slipped through the net in my earlier PRs.\nThen could resolve to either or . We would introduce an ambiguity, we already found with Json Pointer () ...\nOK, with the latest blank space realignments that are part of (5e2cc85), I think we are done with this.\nOk, let's refactor the ABNF later. ... or do we need two selectors here (Ch.2.61) child-longhand = \"[\" S selector 1(S \",\" S selector) S \"]\" given that '1' means 'at least one'.", "new_text": "alternatively be absent (\"Nothing\"). \"OptionalNodes\" is an abstraction of a \"filter-path\" (which appears in a test expression or as a function argument). The abstract instances above can be obtained from the concrete representations in tbl-typerep."}
{"id": "q-en-draft-ietf-jsonpath-base-b36b7ff82a277f72d8a61702e2f64519f1b7ee7ab7ccd2223640afc52e3c8c06", "old_text": "A function expression is correctly typed if all the following are true: If it occurs as a \"filter-path\" in an existence test, the function is defined to have result type \"OptionalNodes\" or one of its subtypes, or to have result type \"OptionalBoolean\" or one of its subtypes.", "comments": "Add the necessary internal references.\nCurrently, the existence test description contradicts the intended use of OptionalBoolean functions. We need to be careful if we want to make a special case for OptionalBoolean functions and not open the door to any boolean value being treated similarly.\n2023-01-10 Interim: Fix text to properly handle OptionalBoolean. Make sure to indicate that an existence test from a path never gets to the actual value.\nCan this issue be elaborated? What is OptionalBoolean? How does this apply to functions? Examples would be great.\nBase 09 explains what OptionalBoolean is and uses it in the match and search functions. The current issue will also add some text to explain the way OptionalBoolean functions behave in filter expressions.\nE.g.,\nI propose a slightly more compact naming of ABNF of - and -segment, which should be more consistent between them two. is well known from LaTeX and means or there.\nabove is wrong (and so it is in the current spec). It needs to read ... to not allow triple dot ( :-)\nNAME Wouldn't dotdot-wildcard etc. be less cryptic?\nHere are some quibbles with the current ABNF (prior to PR being merged) which we can sort out after The Great Renaming:\nYes, yes. See. Fixed in I like the way this looks like now. Maybe I don't understand; can you elaborate? Fixed this (and following) in Removed in To be fixed in overall overhaul, next Yes! To be fixed in overall overhaul, next To be fixed in overall overhaul, next Fixed in Discuss this in its own issue, I'd say. 
Fixed in It certainly does not -- it is the subset that is actually needed for normalized paths. To stay normalized, we don't allow random other ones.\nOf course ... I'm quite dispassionate here.\nPlease see my proposal in\nWith merged, we still have the expression cleanup the separate issue (enhancement) whether should be allowed, to mean\nMy remaining points are covered by PRs , , and .\nShouldn't be named according to all other above ?\nYes, I agree. See URL\nNAME Are we ready to close this issue now the PRs have been merged or did you have more expression cleanup in mind?\nThe only thing that comes to my mind is standardizing on the column where the is -- this is currently rather chaotic. But that can be a new issue.\nIndentation of : number of lines such indented\n(And it doesn't need to be consistent, just a bit less chaotic.)\nLet's make the a new issue. (I haven't got a good feel for the optimal solution.)\nBefore we close this issue, URL slipped through the net in my earlier PRs.\nThen could resolve to either or . We would introduce an ambiguity, we already found with Json Pointer () ...\nOK, with the latest blank space realignments that are part of (5e2cc85), I think we are done with this.\nOk, let's refactor the ABNF later. ... or do we need two selectors here (Ch.2.61) child-longhand = \"[\" S selector 1(S \",\" S selector) S \"]\" given that '1' means 'at least one'.", "new_text": "A function expression is correctly typed if all the following are true: If it occurs as a \"filter-path\" in a test expression, the function is defined to have result type \"OptionalNodes\" or one of its subtypes, or to have result type \"OptionalBoolean\" or one of its subtypes."}
{"id": "q-en-draft-ietf-jsonpath-base-d7533354f3ec078f7a7a8fa27ae59a4208efcc48bbce8b5d614d5bf63b31ef20", "old_text": "Comparison expressions are available for comparisons between primitive values (that is, numbers, strings, \"true\", \"false\", and \"null\"). These can be obtained via literal values; Singular Paths, each of which selects at most one node the value of which is then used; and function expressions (see fnex) of type \"ValueType\" or \"NodesType\" (see type-conv).", "comments": "XPath terminology is a bit weird, because XPath has a somewhat competing standard XQuery... XPath is based on DSSSL, which had a complex expression language for building paths in the XML trees that was based on the Scheme language. (Which is the experience that makes me hesitant just jumping headlong into another expression language...)\nLGTM with that change. Thanks!", "new_text": "Comparison expressions are available for comparisons between primitive values (that is, numbers, strings, \"true\", \"false\", and \"null\"). These can be obtained via literal values; Singular Queries, each of which selects at most one node the value of which is then used; and function expressions (see fnex) of type \"ValueType\" or \"NodesType\" (see type-conv)."}
{"id": "q-en-draft-ietf-jsonpath-base-d7533354f3ec078f7a7a8fa27ae59a4208efcc48bbce8b5d614d5bf63b31ef20", "old_text": "2.5.5.2.1. A path by itself in a Logical context is an existence test which yields true if the path selects at least one node and yields false if the path does not select any nodes. Existence tests differ from comparisons in that: they work with arbitrary relative or absolute paths (not just Singular Paths). they work with paths that select structured values. To examine the value of a node selected by a path, an explicit comparison is necessary. For example, to test whether the node selected by the path \"@.foo\" has the value \"null\", use \"@.foo == null\" (see null-semantics) rather than the negated existence test \"!@.foo\" (which yields false if \"@.foo\" selects a node, regardless of the node's value).", "comments": "XPath terminology is a bit weird, because XPath has a somewhat competing standard XQuery... XPath is based on DSSSL, which had a complex expression language for building paths in the XML trees that was based on the Scheme language. (Which is the experience that makes me hesitant just jumping headlong into another expression language...)\nLGTM with that change. Thanks!", "new_text": "2.5.5.2.1. A query by itself in a Logical context is an existence test which yields true if the query selects at least one node and yields false if the query does not select any nodes. Existence tests differ from comparisons in that: they work with arbitrary relative or absolute queries (not just Singular Queries). they work with queries that select structured values. To examine the value of a node selected by a query, an explicit comparison is necessary. For example, to test whether the node selected by the query \"@.foo\" has the value \"null\", use \"@.foo == null\" (see null-semantics) rather than the negated existence test \"!@.foo\" (which yields false if \"@.foo\" selects a node, regardless of the node's value)."}
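The distinction this record draws (an existence test never examines the node's value, so `!@.foo` is not the same as `@.foo == null`) can be sketched with a Python dict standing in for the current node `@`:

```python
def exists(node: dict, member: str) -> bool:
    # Existence test, like "@.foo": does the query select a node at all?
    # The node's value is never inspected.
    return member in node

def is_null(node: dict, member: str) -> bool:
    # Explicit comparison, like "@.foo == null": the node must exist
    # and its value must be JSON null (None in Python).
    return member in node and node[member] is None
```

For `node = {"foo": None}`, `exists` returns True (so the negated existence test `!@.foo` yields false, despite the null value), while `is_null` also returns True, matching the `@.foo == null` example in the text.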
{"id": "q-en-draft-ietf-jsonpath-base-d7533354f3ec078f7a7a8fa27ae59a4208efcc48bbce8b5d614d5bf63b31ef20", "old_text": "a comparison using the operator \"<\" yields false. When any path or function expression on either side of a comparison results in a nodelist consisting of a single node, that side is replaced by the value of its node and then:", "comments": "XPath terminology is a bit weird, because XPath has a somewhat competing standard XQuery... XPath is based on DSSSL, which had a complex expression language for building paths in the XML trees that was based on the Scheme language. (Which is the experience that makes me hesitant just jumping headlong into another expression language...)\nLGTM with that change. Thanks!", "new_text": "a comparison using the operator \"<\" yields false. When any query or function expression on either side of a comparison results in a nodelist consisting of a single node, that side is replaced by the value of its node and then:"}
{"id": "q-en-draft-ietf-jsonpath-base-d7533354f3ec078f7a7a8fa27ae59a4208efcc48bbce8b5d614d5bf63b31ef20", "old_text": "implementation level only -- they do not influence the result of the evaluation.) A function argument is a \"filter-path\" or a \"comparable\". According to filter-selector, a \"function-expr\" is valid as a \"filter-path\" or a \"comparable\". Any function expressions in a query must be well-formed (by conforming to the above ABNF) and well-typed, otherwise the JSONPath", "comments": "XPath terminology is a bit weird, because XPath has a somewhat competing standard XQuery... XPath is based on DSSSL, which had a complex expression language for building paths in the XML trees that was based on the Scheme language. (Which is the experience that makes me hesitant just jumping headlong into another expression language...)\nLGTM with that change. Thanks!", "new_text": "implementation level only -- they do not influence the result of the evaluation.) A function argument is a \"filter-query\" or a \"comparable\". According to filter-selector, a \"function-expr\" is valid as a \"filter-query\" or a \"comparable\". Any function expressions in a query must be well-formed (by conforming to the above ABNF) and well-typed, otherwise the JSONPath"}
{"id": "q-en-draft-ietf-jsonpath-base-d7533354f3ec078f7a7a8fa27ae59a4208efcc48bbce8b5d614d5bf63b31ef20", "old_text": "related to the JSON literals \"true\" and \"false\" and have no direct syntactical representation in JSONPath. \"NodesType\" is an abstraction of a \"filter-path\" (which appears in a test expression or as a function argument). Members of \"NodesType\" have no direct syntactical representation in JSONPath. The abstract instances above can be obtained from the concrete", "comments": "XPath terminology is a bit weird, because XPath has a somewhat competing standard XQuery... XPath is based on DSSSL, which had a complex expression language for building paths in the XML trees that was based on the Scheme language. (Which is the experience that makes me hesitant just jumping headlong into another expression language...)\nLGTM with that change. Thanks!", "new_text": "related to the JSON literals \"true\" and \"false\" and have no direct syntactical representation in JSONPath. \"NodesType\" is an abstraction of a \"filter-query\" (which appears in a test expression or as a function argument). Members of \"NodesType\" have no direct syntactical representation in JSONPath. The abstract instances above can be obtained from the concrete"}
{"id": "q-en-draft-ietf-jsonpath-base-d7533354f3ec078f7a7a8fa27ae59a4208efcc48bbce8b5d614d5bf63b31ef20", "old_text": "The argument is a literal primitive value and the defined type of the parameter is \"ValueType\". The argument is a Singular Path or \"filter-path\" (which includes Singular Paths), or a function expression with declared result type \"NodesType\" and the defined type of the parameter is \"NodesType\". Where the declared type of the parameter is not \"NodesType\", a conversion applies.", "comments": "XPath terminology is a bit weird, because XPath has a somewhat competing standard XQuery... XPath is based on DSSSL, which had a complex expression language for building paths in the XML trees that was based on the Scheme language. (Which is the experience that makes me hesitant just jumping headlong into another expression language...)\nLGTM with that change. Thanks!", "new_text": "The argument is a literal primitive value and the defined type of the parameter is \"ValueType\". The argument is a Singular Query or \"filter-query\" (which includes Singular Queries), or a function expression with declared result type \"NodesType\" and the defined type of the parameter is \"NodesType\". Where the declared type of the parameter is not \"NodesType\", a conversion applies."}
{"id": "q-en-draft-ietf-jsonpath-base-d7533354f3ec078f7a7a8fa27ae59a4208efcc48bbce8b5d614d5bf63b31ef20", "old_text": "filter expression: Its only argument is an instance of \"ValueType\" (possibly taken from a singular path as in the example above). The result also is an instance of \"ValueType\": an unsigned integer or \"Nothing\". If the argument value is a string, the result is the number of", "comments": "XPath terminology is a bit weird, because XPath has a somewhat competing standard XQuery... XPath is based on DSSSL, which had a complex expression language for building paths in the XML trees that was based on the Scheme language. (Which is the experience that makes me hesitant just jumping headlong into another expression language...)\nLGTM with that change. Thanks!", "new_text": "filter expression: Its only argument is an instance of \"ValueType\" (possibly taken from a singular query as in the example above). The result also is an instance of \"ValueType\": an unsigned integer or \"Nothing\". If the argument value is a string, the result is the number of"}
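The length() behavior this record describes can be sketched as follows, with `None` standing in for Nothing (an assumption of this sketch; Python's `len` on `str` counts code points, which coincides with Unicode scalar values for well-formed strings):

```python
def length(value):
    """Sketch of the length() function extension: character count for
    strings, element count for arrays, member count for objects,
    and Nothing (modelled as None) for any other argument."""
    if isinstance(value, (str, list, dict)):
        return len(value)
    return None  # Nothing: numbers, booleans, null have no length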
{"id": "q-en-draft-ietf-jsonpath-base-d7533354f3ec078f7a7a8fa27ae59a4208efcc48bbce8b5d614d5bf63b31ef20", "old_text": "processing in the filter expression: Its only argument is an instance of \"NodesType\" (possibly taken from a \"filter-path\" as in the example above). The result is an instance of \"ValueType\". If the argument contains a single node, the result is the value of", "comments": "XPath terminology is a bit weird, because XPath has a somewhat competing standard XQuery... XPath is based on DSSSL, which had a complex expression language for building paths in the XML trees that was based on the Scheme language. (Which is the experience that makes me hesitant just jumping headlong into another expression language...)\nLGTM with that change. Thanks!", "new_text": "processing in the filter expression: Its only argument is an instance of \"NodesType\" (possibly taken from a \"filter-query\" as in the example above). The result is an instance of \"ValueType\". If the argument contains a single node, the result is the value of"}
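The single-node rule this record describes for the value() extension can be sketched as follows (a sentinel object stands in for Nothing, an assumption of this sketch):

```python
NOTHING = object()  # stand-in for the special result Nothing

def value(nodes):
    """Sketch of the value() function extension: a nodelist containing
    exactly one node yields that node's value; an empty or multi-node
    nodelist yields Nothing."""
    if len(nodes) == 1:
        return nodes[0]
    return NOTHING
```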
{"id": "q-en-draft-ietf-jsonpath-base-d7533354f3ec078f7a7a8fa27ae59a4208efcc48bbce8b5d614d5bf63b31ef20", "old_text": "Certain characters are escaped, in one and only one way; all other characters are unescaped. Note: Normalized Paths are Singular Paths, but not all Singular Paths are Normalized Paths. For example, \"$[-3]\" is a Singular Path, but is not a Normalized Path. The Normalized Path equivalent to \"$[-3]\" would have an index equal to the array length minus \"3\". (The array length must be at least \"3\" if \"$[-3]\" is to identify a node.) Since there can only be one Normalized Path identifying a given node, the syntax stipulates which characters are escaped and which are not.", "comments": "XPath terminology is a bit weird, because XPath has a somewhat competing standard XQuery... XPath is based on DSSSL, which had a complex expression language for building paths in the XML trees that was based on the Scheme language. (Which is the experience that makes me hesitant just jumping headlong into another expression language...)\nLGTM with that change. Thanks!", "new_text": "Certain characters are escaped, in one and only one way; all other characters are unescaped. Note: Normalized Paths are Singular Queries, but not all Singular Queries are Normalized Paths. For example, \"$[-3]\" is a Singular Query, but is not a Normalized Path. The Normalized Path equivalent to \"$[-3]\" would have an index equal to the array length minus \"3\". (The array length must be at least \"3\" if \"$[-3]\" is to identify a node.) Since there can only be one Normalized Path identifying a given node, the syntax stipulates which characters are escaped and which are not."}
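The `$[-3]` example in this record amounts to a simple index normalization, sketched below (`None` here signals that the query selects no node):

```python
def normalize_index(i: int, array_length: int):
    """Map a possibly-negative array index from a singular query to the
    non-negative index used in a Normalized Path, or None when the
    index selects no node."""
    j = i if i >= 0 else array_length + i  # -3 becomes length - 3
    return j if 0 <= j < array_length else None
```

For an array of length 5, `normalize_index(-3, 5)` returns 2, so the Normalized Path equivalent of `$[-3]` is `$[2]`; for length 2 it returns None, matching the requirement that the array length be at least 3.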
{"id": "q-en-draft-ietf-jsonpath-base-429fcb03096bb7e30a4e48323f1444485d15ee01aa6cee54d1903df6620176ce", "old_text": "Comparison expressions are available for comparisons between primitive values (that is, numbers, strings, \"true\", \"false\", and \"null\"). These can be obtained via literal values; Singular Queries, each of which selects at most one node the value of which is then used; or function expressions (see fnex) of type \"ValueType\".", "comments": "Sometimes the spec uses \"Singular Query\" and sometimes \"singular query\". We should be consistent. (If we decide to use lower case, it's probably best to address this issue after PR has been merged, since that PR adds some usages of \"Singular Query\".) Also, there is one instance of \"Logical context\", which I think would be better uncapitalised (\"logical context\").\nAfter merging in my changes from main, it appears some Singular Queries still need fixing.", "new_text": "Comparison expressions are available for comparisons between primitive values (that is, numbers, strings, \"true\", \"false\", and \"null\"). These can be obtained via literal values; singular queries, each of which selects at most one node the value of which is then used; or function expressions (see fnex) of type \"ValueType\"."}
{"id": "q-en-draft-ietf-jsonpath-base-429fcb03096bb7e30a4e48323f1444485d15ee01aa6cee54d1903df6620176ce", "old_text": "Existence tests differ from comparisons in that: they work with arbitrary relative or absolute queries (not just Singular Queries). they work with queries that select structured values.", "comments": "Sometimes the spec uses \"Singular Query\" and sometimes \"singular query\". We should be consistent. (If we decide to use lower case, it's probably best to address this issue after PR has been merged, since that PR adds some usages of \"Singular Query\".) Also, there is one instance of \"Logical context\", which I think would be better uncapitalised (\"logical context\").\nAfter merging in my changes from main, it appears some Singular Queries still need fixing.", "new_text": "Existence tests differ from comparisons in that: they work with arbitrary relative or absolute queries (not just singular queries). they work with queries that select structured values."}
{"id": "q-en-draft-ietf-jsonpath-base-429fcb03096bb7e30a4e48323f1444485d15ee01aa6cee54d1903df6620176ce", "old_text": "Certain characters are escaped in Normalized Paths, in one and only one way; all other characters are unescaped. Note: Normalized Paths are Singular Queries, but not all Singular Queries are Normalized Paths. For example, \"$[-3]\" is a singular query, but is not a Normalized Path. The Normalized Path equivalent to \"$[-3]\" would have an index equal to the array length minus \"3\". (The array length must be at least \"3\" if \"$[-3]\" is to identify a", "comments": "Sometimes the spec uses \"Singular Query\" and sometimes \"singular query\". We should be consistent. (If we decide to use lower case, it's probably best to address this issue after PR has been merged, since that PR adds some usages of \"Singular Query\".) Also, there is one instance of \"Logical context\", which I think would be better uncapitalised (\"logical context\").\nAfter merging in my changes from main, it appears some Singular Queries still need fixing.", "new_text": "Certain characters are escaped in Normalized Paths, in one and only one way; all other characters are unescaped. Note: Normalized Paths are singular queries, but not all singular queries are Normalized Paths. For example, \"$[-3]\" is a singular query, but is not a Normalized Path. The Normalized Path equivalent to \"$[-3]\" would have an index equal to the array length minus \"3\". (The array length must be at least \"3\" if \"$[-3]\" is to identify a"}
{"id": "q-en-draft-ietf-jsonpath-base-b181ce1b26de08150f3698e73bd0868a555b466ca512cd919464250a15b7d456", "old_text": ", as described in fnex. A JSONPath implementation MUST raise an error for any query which is not well-formed and valid. The well-formedness and the validity of JSONPath queries are independent of the JSON value the query is applied to. No further errors relating to the well-formedness and the validity of a JSONPath query can be raised during application of", "comments": "Also, delete a term which is used only in the definition of the next term. Fixes URL\nI only found the two occurrences you suggested changing. If there are others, please let me know!\nSometimes the spec uses \"Singular Query\" and sometimes \"singular query\". We should be consistent. (If we decide to use lower case, it's probably best to address this issue after PR has been merged, since that PR adds some usages of \"Singular Query\".) Also, there is one instance of \"Logical context\", which I think would be better uncapitalised (\"logical context\").\nThanks.", "new_text": ", as described in fnex. A JSONPath implementation MUST raise an error for any query which is not well formed and valid. The well-formedness and the validity of JSONPath queries are independent of the JSON value the query is applied to. No further errors relating to the well-formedness and the validity of a JSONPath query can be raised during application of"}
{"id": "q-en-draft-ietf-jsonpath-base-b181ce1b26de08150f3698e73bd0868a555b466ca512cd919464250a15b7d456", "old_text": "the result is \"LogicalTrue\"; if the function's declared result type is \"NodesType\", it tests whether the result is non-empty. If the function's declared result type is \"ValueType\", its use in a test expression is not well-typed. Comparison expressions are available for comparisons between primitive values (that is, numbers, strings, \"true\", \"false\", and", "comments": "Also, delete a term which is used only in the definition of the next term. Fixes URL\nI only found the two occurrences you suggested changing. If there are others, please let me know!\nSometimes the spec uses \"Singular Query\" and sometimes \"singular query\". We should be consistent. (If we decide to use lower case, it's probably best to address this issue after PR has been merged, since that PR adds some usages of \"Singular Query\".) Also, there is one instance of \"Logical context\", which I think would be better uncapitalised (\"logical context\").\nThanks.", "new_text": "the result is \"LogicalTrue\"; if the function's declared result type is \"NodesType\", it tests whether the result is non-empty. If the function's declared result type is \"ValueType\", its use in a test expression is not well typed. Comparison expressions are available for comparisons between primitive values (that is, numbers, strings, \"true\", \"false\", and"}
{"id": "q-en-draft-ietf-jsonpath-base-b181ce1b26de08150f3698e73bd0868a555b466ca512cd919464250a15b7d456", "old_text": "2.5.5.2.1. A query by itself in a Logical context is an existence test which yields true if the query selects at least one node and yields false if the query does not select any nodes.", "comments": "Also, delete a term which is used only in the definition of the next term. Fixes URL\nI only found the two occurrences you suggested changing. If there are others, please let me know!\nSometimes the spec uses \"Singular Query\" and sometimes \"singular query\". We should be consistent. (If we decide to use lower case, it's probably best to address this issue after PR has been merged, since that PR adds some usages of \"Singular Query\".) Also, there is one instance of \"Logical context\", which I think would be better uncapitalised (\"logical context\").\nThanks.", "new_text": "2.5.5.2.1. A query by itself in a logical context is an existence test which yields true if the query selects at least one node and yields false if the query does not select any nodes."}
{"id": "q-en-draft-ietf-jsonpath-base-b181ce1b26de08150f3698e73bd0868a555b466ca512cd919464250a15b7d456", "old_text": "implementation level only -- they do not influence the result of the evaluation.) Any function expressions in a query must be well-formed (by conforming to the above ABNF) and well-typed, otherwise the JSONPath implementation MUST raise an error (see synsem-overview). To define which function expressions are well-typed, a type system is first introduced. 2.6.1.", "comments": "Also, delete a term which is used only in the definition of the next term. Fixes URL\nI only found the two occurrences you suggested changing. If there are others, please let me know!\nSometimes the spec uses \"Singular Query\" and sometimes \"singular query\". We should be consistent. (If we decide to use lower case, it's probably best to address this issue after PR has been merged, since that PR adds some usages of \"Singular Query\".) Also, there is one instance of \"Logical context\", which I think would be better uncapitalised (\"logical context\").\nThanks.", "new_text": "implementation level only -- they do not influence the result of the evaluation.) Any function expressions in a query must be well formed (by conforming to the above ABNF) and well typed, otherwise the JSONPath implementation MUST raise an error (see synsem-overview). To define which function expressions are well typed, a type system is first introduced. 2.6.1."}
{"id": "q-en-draft-ietf-jsonpath-base-b181ce1b26de08150f3698e73bd0868a555b466ca512cd919464250a15b7d456", "old_text": "If the argument is \"Nothing\" or contains multiple nodes, the result is \"Nothing\". Note: a Singular Query may be used anywhere where a ValueType is expected, so there is no need to use the \"value\" function extension with a Singular Query. 2.6.9.", "comments": "Also, delete a term which is used only in the definition of the next term. Fixes URL\nI only found the two occurrences you suggested changing. If there are others, please let me know!\nSometimes the spec uses \"Singular Query\" and sometimes \"singular query\". We should be consistent. (If we decide to use lower case, it's probably best to address this issue after PR has been merged, since that PR adds some usages of \"Singular Query\".) Also, there is one instance of \"Logical context\", which I think would be better uncapitalised (\"logical context\").\nThanks.", "new_text": "If the argument is \"Nothing\" or contains multiple nodes, the result is \"Nothing\". Note: a singular query may be used anywhere where a ValueType is expected, so there is no need to use the \"value\" function extension with a singular query. 2.6.9."}
{"id": "q-en-draft-ietf-jsonpath-base-b181ce1b26de08150f3698e73bd0868a555b466ca512cd919464250a15b7d456", "old_text": "one way; all other characters are unescaped. Note: Normalized Paths are Singular Queries, but not all Singular Queries are Normalized Paths. For example, \"$[-3]\" is a Singular Query, but is not a Normalized Path. The Normalized Path equivalent to \"$[-3]\" would have an index equal to the array length minus \"3\". (The array length must be at least \"3\" if \"$[-3]\" is to identify a node.)", "comments": "Also, delete a term which is used only in the definition of the next term. Fixes URL\nI only found the two occurrences you suggested changing. If there are others, please let me know!\nSometimes the spec uses \"Singular Query\" and sometimes \"singular query\". We should be consistent. (If we decide to use lower case, it's probably best to address this issue after PR has been merged, since that PR adds some usages of \"Singular Query\".) Also, there is one instance of \"Logical context\", which I think would be better uncapitalised (\"logical context\").\nThanks.", "new_text": "one way; all other characters are unescaped. Note: Normalized Paths are Singular Queries, but not all Singular Queries are Normalized Paths. For example, \"$[-3]\" is a singular query, but is not a Normalized Path. The Normalized Path equivalent to \"$[-3]\" would have an index equal to the array length minus \"3\". (The array length must be at least \"3\" if \"$[-3]\" is to identify a node.)"}
{"id": "q-en-draft-ietf-jsonpath-base-ec184d74fe999f23454700ef44c9eceaa3fe19ea85dbc156456d83c2b3c2579c", "old_text": "example, the Unicode PLACE OF INTEREST SIGN (U+2318) would be defined in ABNF as \"%x2318\". The terminology of RFC8259 applies except where clarified below. The terms \"Primitive\" and \"Structured\" are used to group different kinds of values as in RFC8259; JSON Objects and Arrays are structured, all", "comments": "Use parentheses as an indication that something is a function. Also, don't confuse the reader by alluding to potential future alternatives to value(). Reviewers: is this an improvement? It's a draft PR for now until we decide the change is worthwhile.\nI think we'll need to explain the () notation, which should be familiar to C programmers but not necessarily to people who grew up on other languages. I also think we have a few more examples that point to hypothetical functions, such as , , .\nI don't think there's a great point at which to explain that notation and, based on your observation, I'm going off the idea.\nNAME Let's see how pans out before taking the current PR any further. Once I remove the function parentheses, the two PRs are changing the same text.\nwould be the place.\nNAME As you suggested, I introduced this in 1.1, but near the start since it's notation rather than terminology. I did a regexp search to find and fix the remaining examples (your list was complete). In doing so, I found the strange markup and removed the backticks since there was no good reason for them.\nWe could do the fun() explanation and completeness search in a separate PR.", "new_text": "example, the Unicode PLACE OF INTEREST SIGN (U+2318) would be defined in ABNF as \"%x2318\". Functions are referred to using the function name followed by a pair of parentheses, as in \"fname()\". The terminology of RFC8259 applies except where clarified below. 
The terms \"Primitive\" and \"Structured\" are used to group different kinds of values as in RFC8259; JSON Objects and Arrays are structured, all"}
{"id": "q-en-draft-ietf-jsonpath-base-ec184d74fe999f23454700ef44c9eceaa3fe19ea85dbc156456d83c2b3c2579c", "old_text": "2.3.2.2. A \"wildcard\" selector selects the nodes of all children of an object or array. The order in which the children of an object appear in the resultant nodelist is not stipulated, since JSON objects are unordered. Children of an array appear in array order in the resultant nodelist. The \"wildcard\" selector selects nothing from a primitive JSON value (that is, a number, a string, \"true\", \"false\", or \"null\"). 2.3.2.3.", "comments": "Use parentheses as an indication that something is a function. Also, don't confuse the reader by alluding to potential future alternatives to value(). Reviewers: is this an improvement? It's a draft PR for now until we decide the change is worthwhile.\nI think we'll need to explain the () notation, which should be familiar to C programmers but not necessarily to people who grew up on other languages. I also think we have a few more examples that point to hypothetical functions, such as , , .\nI don't think there's a great point at which to explain that notation and, based on your observation, I'm going off the idea.\nNAME Let's see how pans out before taking the current PR any further. Once I remove the function parentheses, the two PRs are changing the same text.\nwould be the place.\nNAME As you suggested, I introduced this in 1.1, but near the start since it's notation rather than terminology. I did a regexp search to find and fix the remaining examples (your list was complete). In doing so, I found the strange markup and removed the backticks since there was no good reason for them.\nWe could do the fun() explanation and completeness search in a separate PR.", "new_text": "2.3.2.2. A wildcard selector selects the nodes of all children of an object or array. The order in which the children of an object appear in the resultant nodelist is not stipulated, since JSON objects are unordered. 
Children of an array appear in array order in the resultant nodelist. The wildcard selector selects nothing from a primitive JSON value (that is, a number, a string, \"true\", \"false\", or \"null\"). 2.3.2.3."}
{"id": "q-en-draft-ietf-jsonpath-base-ec184d74fe999f23454700ef44c9eceaa3fe19ea85dbc156456d83c2b3c2579c", "old_text": "Queries: The examples in tbl-wild show the \"wildcard\" selector in use by a child segment: The example above with the query \"$.o[*, *]\" shows that the wildcard selector may produce nodelists in distinct orders each time it", "comments": "Use parentheses as an indication that something is a function. Also, don't confuse the reader by alluding to potential future alternatives to value(). Reviewers: is this an improvement? It's a draft PR for now until we decide the change is worthwhile.\nI think we'll need to explain the () notation, which should be familiar to C programmers but not necessarily to people who grew up on other languages. I also think we have a few more examples that point to hypothetical functions, such as , , .\nI don't think there's a great point at which to explain that notation and, based on your observation, I'm going off the idea.\nNAME Let's see how pans out before taking the current PR any further. Once I remove the function parentheses, the two PRs are changing the same text.\nwould be the place.\nNAME As you suggested, I introduced this in 1.1, but near the start since it's notation rather than terminology. I did a regexp search to find and fix the remaining examples (your list was complete). In doing so, I found the strange markup and removed the backticks since there was no good reason for them.\nWe could do the fun() explanation and completeness search in a separate PR.", "new_text": "Queries: The examples in tbl-wild show the wildcard selector in use by a child segment: The example above with the query \"$.o[*, *]\" shows that the wildcard selector may produce nodelists in distinct orders each time it"}
{"id": "q-en-draft-ietf-jsonpath-base-ec184d74fe999f23454700ef44c9eceaa3fe19ea85dbc156456d83c2b3c2579c", "old_text": "If the nodelist is empty, the conversion result is \"LogicalFalse\". Extraction of a value from a nodelist can be performed in several ways, so an implicit conversion from \"NodesType\" to \"ValueType\" may be surprising and has therefore not been defined. A function expression with a declared type of \"NodesType\" can indirectly be used as an argument for a parameter of declared type \"ValueType\" by wrapping the expression in a call to a function extension such as \"value\" (see value). The well-typedness of function expressions can now be defined in terms of this type system.", "comments": "Use parentheses as an indication that something is a function. Also, don't confuse the reader by alluding to potential future alternatives to value(). Reviewers: is this an improvement? It's a draft PR for now until we decide the change is worthwhile.\nI think we'll need to explain the () notation, which should be familiar to C programmers but not necessarily to people who grew up on other languages. I also think we have a few more examples that point to hypothetical functions, such as , , .\nI don't think there's a great point at which to explain that notation and, based on your observation, I'm going off the idea.\nNAME Let's see how pans out before taking the current PR any further. Once I remove the function parentheses, the two PRs are changing the same text.\nwould be the place.\nNAME As you suggested, I introduced this in 1.1, but near the start since it's notation rather than terminology. I did a regexp search to find and fix the remaining examples (your list was complete). In doing so, I found the strange markup and removed the backticks since there was no good reason for them.\nWe could do the fun() explanation and completeness search in a separate PR.", "new_text": "If the nodelist is empty, the conversion result is \"LogicalFalse\". 
Notes: Extraction of a value from a nodelist can be performed in several ways, so an implicit conversion from \"NodesType\" to \"ValueType\" may be surprising and has therefore not been defined. A function expression with a declared type of \"NodesType\" can indirectly be used as an argument for a parameter of declared type \"ValueType\" by wrapping the expression in a call to a function extension, such as \"value()\" (see value), that takes a parameter of type \"NodesType\" and returns a result of type \"ValueType\". The well-typedness of function expressions can now be defined in terms of this type system."}
{"id": "q-en-draft-ietf-jsonpath-base-ec184d74fe999f23454700ef44c9eceaa3fe19ea85dbc156456d83c2b3c2579c", "old_text": "2.4.4. The \"length\" function extension provides a way to compute the length of a value and make that available for further processing in the filter expression: Its only argument is an instance of \"ValueType\" (possibly taken from a singular query, as in the example above). The result also is an", "comments": "Use parentheses as an indication that something is a function. Also, don't confuse the reader by alluding to potential future alternatives to value(). Reviewers: is this an improvement? It's a draft PR for now until we decide the change is worthwhile.\nI think we'll need to explain the () notation, which should be familiar to C programmers but not necessarily to people who grew up on other languages. I also think we have a few more examples that point to hypothetical functions, such as , , .\nI don't think there's a great point at which to explain that notation and, based on your observation, I'm going off the idea.\nNAME Let's see how pans out before taking the current PR any further. Once I remove the function parentheses, the two PRs are changing the same text.\nwould be the place.\nNAME As you suggested, I introduced this in 1.1, but near the start since it's notation rather than terminology. I did a regexp search to find and fix the remaining examples (your list was complete). In doing so, I found the strange markup and removed the backticks since there was no good reason for them.\nWe could do the fun() explanation and completeness search in a separate PR.", "new_text": "2.4.4. The \"length()\" function extension provides a way to compute the length of a value and make that available for further processing in the filter expression: Its only argument is an instance of \"ValueType\" (possibly taken from a singular query, as in the example above). The result also is an"}
{"id": "q-en-draft-ietf-jsonpath-base-ec184d74fe999f23454700ef44c9eceaa3fe19ea85dbc156456d83c2b3c2579c", "old_text": "2.4.5. The \"count\" function extension provides a way to obtain the number of nodes in a nodelist and make that available for further processing in the filter expression: Its only argument is a nodelist. The result is a value, an unsigned integer, that gives the number of nodes in the nodelist. Note that", "comments": "Use parentheses as an indication that something is a function. Also, don't confuse the reader by alluding to potential future alternatives to value(). Reviewers: is this an improvement? It's a draft PR for now until we decide the change is worthwhile.\nI think we'll need to explain the () notation, which should be familiar to C programmers but not necessarily to people who grew up on other languages. I also think we have a few more examples that point to hypothetical functions, such as , , .\nI don't think there's a great point at which to explain that notation and, based on your observation, I'm going off the idea.\nNAME Let's see how pans out before taking the current PR any further. Once I remove the function parentheses, the two PRs are changing the same text.\nwould be the place.\nNAME As you suggested, I introduced this in 1.1, but near the start since it's notation rather than terminology. I did a regexp search to find and fix the remaining examples (your list was complete). In doing so, I found the strange markup and removed the backticks since there was no good reason for them.\nWe could do the fun() explanation and completeness search in a separate PR.", "new_text": "2.4.5. The \"count()\" function extension provides a way to obtain the number of nodes in a nodelist and make that available for further processing in the filter expression: Its only argument is a nodelist. The result is a value, an unsigned integer, that gives the number of nodes in the nodelist. Note that"}
{"id": "q-en-draft-ietf-jsonpath-base-ec184d74fe999f23454700ef44c9eceaa3fe19ea85dbc156456d83c2b3c2579c", "old_text": "2.4.6. The \"match\" function extension provides a way to check whether (the entirety of, see search below) a given string matches a given regular expression, which is in I-D.draft-ietf-jsonpath-iregexp form.", "comments": "Use parentheses as an indication that something is a function. Also, don't confuse the reader by alluding to potential future alternatives to value(). Reviewers: is this an improvement? It's a draft PR for now until we decide the change is worthwhile.\nI think we'll need to explain the () notation, which should be familiar to C programmers but not necessarily to people who grew up on other languages. I also think we have a few more examples that point to hypothetical functions, such as , , .\nI don't think there's a great point at which to explain that notation and, based on your observation, I'm going off the idea.\nNAME Let's see how pans out before taking the current PR any further. Once I remove the function parentheses, the two PRs are changing the same text.\nwould be the place.\nNAME As you suggested, I introduced this in 1.1, but near the start since it's notation rather than terminology. I did a regexp search to find and fix the remaining examples (your list was complete). In doing so, I found the strange markup and removed the backticks since there was no good reason for them.\nWe could do the fun() explanation and completeness search in a separate PR.", "new_text": "2.4.6. The \"match()\" function extension provides a way to check whether (the entirety of, see search below) a given string matches a given regular expression, which is in I-D.draft-ietf-jsonpath-iregexp form."}
{"id": "q-en-draft-ietf-jsonpath-base-ec184d74fe999f23454700ef44c9eceaa3fe19ea85dbc156456d83c2b3c2579c", "old_text": "2.4.7. The \"search\" function extension provides a way to check whether a given string contains a substring that matches a given regular expression, which is in I-D.draft-ietf-jsonpath-iregexp form.", "comments": "Use parentheses as an indication that something is a function. Also, don't confuse the reader by alluding to potential future alternatives to value(). Reviewers: is this an improvement? It's a draft PR for now until we decide the change is worthwhile.\nI think we'll need to explain the () notation, which should be familiar to C programmers but not necessarily to people who grew up on other languages. I also think we have a few more examples that point to hypothetical functions, such as , , .\nI don't think there's a great point at which to explain that notation and, based on your observation, I'm going off the idea.\nNAME Let's see how pans out before taking the current PR any further. Once I remove the function parentheses, the two PRs are changing the same text.\nwould be the place.\nNAME As you suggested, I introduced this in 1.1, but near the start since it's notation rather than terminology. I did a regexp search to find and fix the remaining examples (your list was complete). In doing so, I found the strange markup and removed the backticks since there was no good reason for them.\nWe could do the fun() explanation and completeness search in a separate PR.", "new_text": "2.4.7. The \"search()\" function extension provides a way to check whether a given string contains a substring that matches a given regular expression, which is in I-D.draft-ietf-jsonpath-iregexp form."}
{"id": "q-en-draft-ietf-jsonpath-base-ec184d74fe999f23454700ef44c9eceaa3fe19ea85dbc156456d83c2b3c2579c", "old_text": "2.4.8. The \"value\" function extension provides a way to convert an instance of \"NodesType\" to a value and make that available for further processing in the filter expression: Its only argument is an instance of \"NodesType\" (possibly taken from a \"filter-query\", as in the example above). The result is an", "comments": "Use parentheses as an indication that something is a function. Also, don't confuse the reader by alluding to potential future alternatives to value(). Reviewers: is this an improvement? It's a draft PR for now until we decide the change is worthwhile.\nI think we'll need to explain the () notation, which should be familiar to C programmers but not necessarily to people who grew up on other languages. I also think we have a few more examples that point to hypothetical functions, such as , , .\nI don't think there's a great point at which to explain that notation and, based on your observation, I'm going off the idea.\nNAME Let's see how pans out before taking the current PR any further. Once I remove the function parentheses, the two PRs are changing the same text.\nwould be the place.\nNAME As you suggested, I introduced this in 1.1, but near the start since it's notation rather than terminology. I did a regexp search to find and fix the remaining examples (your list was complete). In doing so, I found the strange markup and removed the backticks since there was no good reason for them.\nWe could do the fun() explanation and completeness search in a separate PR.", "new_text": "2.4.8. The \"value()\" function extension provides a way to convert an instance of \"NodesType\" to a value and make that available for further processing in the filter expression: Its only argument is an instance of \"NodesType\" (possibly taken from a \"filter-query\", as in the example above). The result is an"}
{"id": "q-en-draft-ietf-jsonpath-base-ec184d74fe999f23454700ef44c9eceaa3fe19ea85dbc156456d83c2b3c2579c", "old_text": "result is \"Nothing\". Note: a singular query may be used anywhere where a ValueType is expected, so there is no need to use the \"value\" function extension with a singular query. 2.4.9.", "comments": "Use parentheses as an indication that something is a function. Also, don't confuse the reader by alluding to potential future alternatives to value(). Reviewers: is this an improvement? It's a draft PR for now until we decide the change is worthwhile.\nI think we'll need to explain the () notation, which should be familiar to C programmers but not necessarily to people who grew up on other languages. I also think we have a few more examples that point to hypothetical functions, such as , , .\nI don't think there's a great point at which to explain that notation and, based on your observation, I'm going off the idea.\nNAME Let's see how pans out before taking the current PR any further. Once I remove the function parentheses, the two PRs are changing the same text.\nwould be the place.\nNAME As you suggested, I introduced this in 1.1, but near the start since it's notation rather than terminology. I did a regexp search to find and fix the remaining examples (your list was complete). In doing so, I found the strange markup and removed the backticks since there was no good reason for them.\nWe could do the fun() explanation and completeness search in a separate PR.", "new_text": "result is \"Nothing\". Note: a singular query may be used anywhere where a ValueType is expected, so there is no need to use the \"value()\" function extension with a singular query. 2.4.9."}
{"id": "q-en-draft-ietf-masque-connect-ip-30df03b90d10a22ea9629753a36c08f400a0120ece06eabbe8a71865d729f326", "old_text": "If an endpoint receives an ADDRESS_REQUEST capsule that contains zero Requested Addresses, it MUST abort the IP proxying request stream. 4.7.3. The ROUTE_ADVERTISEMENT capsule (see iana-types for the value of the", "comments": "I'm going to guess that order is not significant. But imagine that an endpoint would only honor a request for N addresses but the requester asks for N+Y. The endpoint has to pick a subset somehow.\nWhat do you mean by \"is the list ... significant\"? I don't understand the comment about subsetting either --- if you ask for N+Y addresses and get back only N, you have N usable addresses (ranges)\nHow does the other end decide the which N to pick? Without declaring if the request order is significant, then there can be no expectations of behaviour and we might find interop problems if assumptions don't align.\nIf you request, say, 4 specific addresses, and the other side can satisfy 3 of the requests because they're still available, it will include those 3. It's implementation-defined it if wants to provide a 4th address that is not the one you requested, or NAK it. However at no point in time was order significant --- you asked for all of these addresses, not any of these addresses.\nFriendly amendment: it may include those 3. It could also choose to only assign one, or a different address entirely if the implementation was so inclined.\nYou are pedantically correct :). I was trying to give a specific worked example.\nThis design seems reasonable. It might help to state clearly somewhere that the value of the request ID has no significance other than as a unique handle, and that the ordering of and in a single capsule has no significance. 
This is in contrast to the ROUTE_ADVERTISEMENT where the ordering of things is very significant.\nThat's a good suggestion Lucas.\nMy intention when writing this was for the order in the lists to not carry semantics, and similarly for the request ID to only exist for unicity. We should spell that out clearly. One thing we could do however, is saying that request IDs in ADDRESS_REQUEST MUST be increasing, to make duplicate detection easier on the receiver. If we do it, it adds a constraint on the sender and means that the receiver will check, while if we don't add the constraint then receivers won't bother to check. Any thoughts on this point?\nI have no strong feeling about the potential to add such rules.\nNAME can you propose text here?", "new_text": "If an endpoint receives an ADDRESS_REQUEST capsule that contains zero Requested Addresses, it MUST abort the IP proxying request stream. Note that the ordering of Requested Addresses does not carry any semantics. Similarly, the Request ID is only meant as a unique identifier, it does not convey any priority or importance. 4.7.3. The ROUTE_ADVERTISEMENT capsule (see iana-types for the value of the"}
{"id": "q-en-draft-ietf-masque-connect-ip-7db94b9c97cc68e364e99fa6f426462ec55c733a5d327a68377d6f43a7e5eb6b", "old_text": "meet these requirements, it MUST abort the IP proxying request stream. 5. The mechanism for proxying IP in HTTP defined in this document allows", "comments": "Say for example that A and B have the same IP Version and they have the same or overlapping Start IP Address / End IP Address ranges. A's IP Protocol is zero (the wildcard) whereas B's is non-zero. That causes a weird kind of overlap even though all the requirements are followed. Is it worth disallowing in the spec?\nEarlier in this section, the version is defined as such: I think then this is OK, since zeros aren't allowed in route advertisements. NAME that sound OK?\nThe IP Verson cannot be zero in the ROUTEADVERTISEMENT, but the IP Protocol can: The following ROUTEADVERTISEMENT follows the requirements (the IP Protocol of A is lower than the IP Protocol of B), yet there is an overlap:\nOh, I see, I misread. I do think that it makes sense to put the wildcard first \u2014 so 0 shows up before others. And then the more specific protocol rules can potentially have different ranges. I still think this is acceptable to leave as-is.", "new_text": "meet these requirements, it MUST abort the IP proxying request stream. Since setting the IP protocol to zero indicates all protocols are allowed, the requirements above make it possible for two routes to overlap when one has IP protocol set to zero and the other set to non-zero. Endpoints MUST not send a ROUTE_ADVERTISEMENT capsule with routes that overlap in such a way. Validating this requirement is OPTIONAL, but if an endpoint detects the violation, it MUST abort the IP proxying request stream. 5. The mechanism for proxying IP in HTTP defined in this document allows"}
{"id": "q-en-draft-ietf-masque-connect-ip-8033d028d57615439370d3d4428d6f2e949e5d8b7906984234b288d394d49534", "old_text": "of the destinations included in the scope, then the IP proxy can immediately fail the request. 4.7. This document defines multiple new capsule types that allow endpoints", "comments": "From Vinny Parla on the :\nI don't think we need to make normative changes to the protocol, but adding a note for implementers that they might want to treat extension headers differently that other IP protocols sounds like a good change. Some additional review comments from an internal review that I wanted to share with you. Section 4.6 has a minor issue as the IANA registry mixes IPv6 extension header with layer-4 headers. It should also specify the use of RFC 5952 for representing IPv6 addresses. However, this can be ignored for now. Section 4.7.1 unsure whether \"IP Version\" field is useful as it can be derived from \"IP Address\" length/notation, also not critical but perhaps this would reduce unnecessary information. There is also a minimum 68-byte MTU for IPv4 ;-) - not sure that is important, but wanted to share. The recursive DNS server info (found in RA and DHCP), is not present. Was this an intentional decision or perhaps something that was not considered for some reason? There is no discussion of link-local addresses / ULA (I would expect both end of the 'tunnel' to have link-local addresses), or link-local multicast (e.g., ff02::1). Or perhaps the IP proxy send an IPv6 RA ? or should be prevented from sending one ? Should IP proxy reply to ff02::2 (all link-local routers), ... -Vinny", "new_text": "of the destinations included in the scope, then the IP proxy can immediately fail the request. Note that IP protocol numbers represent both upper layers (as defined in IPv6, examples include TCP and UDP) and IPv6 extension headers (as defined in IPv6, examples include Fragment and Options headers). 
IP proxies MAY reject requests to scope to protocol numbers that are used for extension headers. Upon receiving packets, implementations that support scoping by IP protocol number MUST walk the chain of extensions to find the matching IP protocol number. 4.7. This document defines multiple new capsule types that allow endpoints"}
{"id": "q-en-draft-ietf-masque-connect-ip-8033d028d57615439370d3d4428d6f2e949e5d8b7906984234b288d394d49534", "old_text": "invoking packet included in the ICMP packet and only forward the ICMP packet to the client whose scoping matches the invoking packet. 11. 11.1.", "comments": "From Vinny Parla on the :\nI don't think we need to make normative changes to the protocol, but adding a note for implementers that they might want to treat extension headers differently that other IP protocols sounds like a good change. Some additional review comments from an internal review that I wanted to share with you. Section 4.6 has a minor issue as the IANA registry mixes IPv6 extension header with layer-4 headers. It should also specify the use of RFC 5952 for representing IPv6 addresses. However, this can be ignored for now. Section 4.7.1 unsure whether \"IP Version\" field is useful as it can be derived from \"IP Address\" length/notation, also not critical but perhaps this would reduce unnecessary information. There is also a minimum 68-byte MTU for IPv4 ;-) - not sure that is important, but wanted to share. The recursive DNS server info (found in RA and DHCP), is not present. Was this an intentional decision or perhaps something that was not considered for some reason? There is no discussion of link-local addresses / ULA (I would expect both end of the 'tunnel' to have link-local addresses), or link-local multicast (e.g., ff02::1). Or perhaps the IP proxy send an IPv6 RA ? or should be prevented from sending one ? Should IP proxy reply to ff02::2 (all link-local routers), ... -Vinny", "new_text": "invoking packet included in the ICMP packet and only forward the ICMP packet to the client whose scoping matches the invoking packet. Since there are known risks with some IPv6 extension headers (e.g., ROUTING-HDR), implementers need to follow the latest guidance regarding handling of IPv6 extension headers. 11. 11.1."}
{"id": "q-en-draft-ietf-masque-connect-ip-029102dbb7d81d23e9d64a1d130ba8d051fb0d5a67795f23b5d1c5ed8c602f2d", "old_text": "ADDRESS_REQUEST capsules, endpoints MAY send ADDRESS_ASSIGN capsules unprompted. 4.7.2. The ADDRESS_REQUEST capsule (see iana-types for the value of the", "comments": "From Vinny Parla on the :\nConnect IP follows the same . We should probably spell that out somewhere. Similarly, we should remind implementers that they may need to treat link-local differently to avoid forwarding it, and that they may need to ensure they properly handle link-local multicast. Using ULAs is deployment-specific so I don't think anything needs to be said here: ULA isn't different from other IPv6 address blocks and we decided early on that addressing schemes are out of scope for this document Regarding RAs, I don't think anything needs to be said in this document since some deployments might want them while others might not Some additional review comments from an internal review that I wanted to share with you. Section 4.6 has a minor issue as the IANA registry mixes IPv6 extension header with layer-4 headers. It should also specify the use of RFC 5952 for representing IPv6 addresses. However, this can be ignored for now. Section 4.7.1 unsure whether \"IP Version\" field is useful as it can be derived from \"IP Address\" length/notation, also not critical but perhaps this would reduce unnecessary information. There is also a minimum 68-byte MTU for IPv4 ;-) - not sure that is important, but wanted to share. The recursive DNS server info (found in RA and DHCP), is not present. Was this an intentional decision or perhaps something that was not considered for some reason? There is no discussion of link-local addresses / ULA (I would expect both end of the 'tunnel' to have link-local addresses), or link-local multicast (e.g., ff02::1). Or perhaps the IP proxy send an IPv6 RA ? or should be prevented from sending one ? 
Should IP proxy reply to ff02::2 (all link-local routers), ... -Vinny", "new_text": "ADDRESS_REQUEST capsules, endpoints MAY send ADDRESS_ASSIGN capsules unprompted. Note that the IP forwarding tunnels described in this document are not fully featured \"interfaces\" in the IPv6 addressing architecture sense IPv6-ADDR. In particular, they do not necessarily have IPv6 link-local addresses. Additionally, IPv6 stateless autoconfiguration or router advertisement messages are not used in such interfaces, and neither is neighbor discovery. 4.7.2. The ADDRESS_REQUEST capsule (see iana-types for the value of the"}
{"id": "q-en-draft-ietf-masque-connect-ip-029102dbb7d81d23e9d64a1d130ba8d051fb0d5a67795f23b5d1c5ed8c602f2d", "old_text": "Datagram. This prevents infinite loops in the presence of routing loops, and matches the choices in IPsec IPSEC. IPv6 requires that every link have an MTU of at least 1280 bytes IPv6. Since IP proxying in HTTP conveys IP packets in HTTP Datagrams and those can in turn be sent in QUIC DATAGRAM frames which cannot be", "comments": "From Vinny Parla on the :\nConnect IP follows the same . We should probably spell that out somewhere. Similarly, we should remind implementers that they may need to treat link-local differently to avoid forwarding it, and that they may need to ensure they properly handle link-local multicast. Using ULAs is deployment-specific so I don't think anything needs to be said here: ULA isn't different from other IPv6 address blocks and we decided early on that addressing schemes are out of scope for this document Regarding RAs, I don't think anything needs to be said in this document since some deployments might want them while others might not Some additional review comments from an internal review that I wanted to share with you. Section 4.6 has a minor issue as the IANA registry mixes IPv6 extension header with layer-4 headers. It should also specify the use of RFC 5952 for representing IPv6 addresses. However, this can be ignored for now. Section 4.7.1 unsure whether \"IP Version\" field is useful as it can be derived from \"IP Address\" length/notation, also not critical but perhaps this would reduce unnecessary information. There is also a minimum 68-byte MTU for IPv4 ;-) - not sure that is important, but wanted to share. The recursive DNS server info (found in RA and DHCP), is not present. Was this an intentional decision or perhaps something that was not considered for some reason? 
There is no discussion of link-local addresses / ULA (I would expect both end of the 'tunnel' to have link-local addresses), or link-local multicast (e.g., ff02::1). Or perhaps the IP proxy send an IPv6 RA ? or should be prevented from sending one ? Should IP proxy reply to ff02::2 (all link-local routers), ... -Vinny", "new_text": "Datagram. This prevents infinite loops in the presence of routing loops, and matches the choices in IPsec IPSEC. Implementers need to ensure that they do not forward any link-local traffic onto a different interface than the one it was received on. IP proxies also need to properly reply to packets destined to link-local multicast addresses. IPv6 requires that every link have an MTU of at least 1280 bytes IPv6. Since IP proxying in HTTP conveys IP packets in HTTP Datagrams and those can in turn be sent in QUIC DATAGRAM frames which cannot be"}
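The link-local handling guidance in the record above can be illustrated with a small Python check using the standard `ipaddress` module (the helper name is hypothetical); it flags addresses a proxy must not forward onto another interface:

```python
# Sketch: classify addresses an IP proxy must treat specially --
# link-local unicast (fe80::/10) must not be forwarded onto another
# interface, and link-local multicast (ff02::/16) needs local handling.
import ipaddress

LINK_LOCAL_UNICAST = ipaddress.ip_network("fe80::/10")
LINK_LOCAL_MULTICAST = ipaddress.ip_network("ff02::/16")
IPV4_LINK_LOCAL = ipaddress.ip_network("169.254.0.0/16")

def must_not_forward(dst: str) -> bool:
    """True if the destination is link-local and must stay on-link."""
    addr = ipaddress.ip_address(dst)
    if addr.version == 6:
        return addr in LINK_LOCAL_UNICAST or addr in LINK_LOCAL_MULTICAST
    return addr in IPV4_LINK_LOCAL
```

Packets matching this check would be handled locally (or dropped) rather than routed through the tunnel.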
{"id": "q-en-draft-ietf-masque-connect-ip-17548e51f49352ffd95214fd1b9d80f09f15bf21f3e9950a019968def7c6e790", "old_text": "4.2. 4.2.1. The ADDRESS_ASSIGN capsule allows an endpoint to inform its peer that", "comments": "Good point, added. Note that I've also added an important note about how ROUTEADVERTISEMENT and ROUTEWITHDRAWAL coexist, please take a look NAME and NAME .\nAs discussed in , turning a sequence of ROUTEADVERTISEMENT and ROUTEWITHDRAWAL capsules into a functional routing table isn't as simple as initially expected. This is because two separate capsules can describe prefixes that are distinct but overlap. We should simplify the design or better explain the caveats.\nI think the capsules description needs to be explicit if multiple capsules are allowed. I think there are different considerations depending on what type of capsules are used. The authors have already discussed and concluded that multiple routeadvertisement will be needed to allow to declare reachability of multiple prefix. I don''t know if multiple addressassign capsules are allowed. Allowing multiple allows multiple address ranges to be declared. However, that results that we also get the question if one need to withdraw source address prefixes..\nYes, multiple capsules of each type are allowed. It does seem useful to clarify this, and what it means to receive multiple addresses and multiple routes (which are additive). NAME can you help write this?\nAgreed, multiple capsules of all currently defined capsule types are allowed. For example, multiple ADDRESS_ASSIGN are needed to support a dual-stack tunnel. I'll write a PR.", "new_text": "4.2. This document defines multiple new capsule types that allow endpoints to exchange IP configuration information. Both endpoints MAY send any number of these new capsules. 4.2.1. The ADDRESS_ASSIGN capsule allows an endpoint to inform its peer that"}
{"id": "q-en-draft-ietf-masque-connect-ip-17548e51f49352ffd95214fd1b9d80f09f15bf21f3e9950a019968def7c6e790", "old_text": "endpoint is allowed to send packets from any source address that falls within the prefix. 4.2.2. The ADDRESS_REQUEST capsule allows an endpoint to request assignment", "comments": "Good point, added. Note that I've also added an important note about how ROUTEADVERTISEMENT and ROUTEWITHDRAWAL coexist, please take a look NAME and NAME .\nAs discussed in , turning a sequence of ROUTEADVERTISEMENT and ROUTEWITHDRAWAL capsules into a functional routing table isn't as simple as initially expected. This is because two separate capsules can describe prefixes that are distinct but overlap. We should simplify the design or better explain the caveats.\nI think the capsules description needs to be explicit if multiple capsules are allowed. I think there are different considerations depending on what type of capsules are used. The authors have already discussed and concluded that multiple routeadvertisement will be needed to allow to declare reachability of multiple prefix. I don''t know if multiple addressassign capsules are allowed. Allowing multiple allows multiple address ranges to be declared. However, that results that we also get the question if one need to withdraw source address prefixes..\nYes, multiple capsules of each type are allowed. It does seem useful to clarify this, and what it means to receive multiple addresses and multiple routes (which are additive). NAME can you help write this?\nAgreed, multiple capsules of all currently defined capsule types are allowed. For example, multiple ADDRESS_ASSIGN are needed to support a dual-stack tunnel. I'll write a PR.", "new_text": "endpoint is allowed to send packets from any source address that falls within the prefix. If an endpoint receives multiple ADDRESS_ASSIGN capsules, all of the assigned addresses or prefixes can be used. 
For example, multiple ADDRESS_ASSIGN capsules are necessary to assign both IPv4 and IPv6 addresses. 4.2.2. The ADDRESS_REQUEST capsule allows an endpoint to request assignment"}
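As a rough illustration of the source-address rule described above, a receiver could validate outgoing packets against its accumulated ADDRESS_ASSIGN prefixes like this (Python sketch; the helper name and prefix representation are assumptions):

```python
# Sketch: ADDRESS_ASSIGN capsules are additive, so the set of permitted
# source addresses is the union of all assigned prefixes. A /32 or /128
# prefix permits exactly one source address.
import ipaddress

def source_allowed(assigned_prefixes, src: str) -> bool:
    """assigned_prefixes: iterable like ['192.0.2.11/32', '2001:db8::/64']."""
    addr = ipaddress.ip_address(src)
    # 'in' returns False on an IPv4/IPv6 version mismatch, so mixed
    # address families in the list are handled transparently.
    return any(addr in ipaddress.ip_network(p) for p in assigned_prefixes)
```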
{"id": "q-en-draft-ietf-masque-connect-ip-17548e51f49352ffd95214fd1b9d80f09f15bf21f3e9950a019968def7c6e790", "old_text": "Upon receiving the ROUTE_ADVERTISEMENT capsule, an endpoint MAY start routing IP packets in that prefix to its peer. 4.2.4. The ROUTE_WITHDRAWAL capsule allows an endpoint to communicate to its", "comments": "Good point, added. Note that I've also added an important note about how ROUTEADVERTISEMENT and ROUTEWITHDRAWAL coexist, please take a look NAME and NAME .\nAs discussed in , turning a sequence of ROUTEADVERTISEMENT and ROUTEWITHDRAWAL capsules into a functional routing table isn't as simple as initially expected. This is because two separate capsules can describe prefixes that are distinct but overlap. We should simplify the design or better explain the caveats.\nI think the capsules description needs to be explicit if multiple capsules are allowed. I think there are different considerations depending on what type of capsules are used. The authors have already discussed and concluded that multiple routeadvertisement will be needed to allow to declare reachability of multiple prefix. I don''t know if multiple addressassign capsules are allowed. Allowing multiple allows multiple address ranges to be declared. However, that results that we also get the question if one need to withdraw source address prefixes..\nYes, multiple capsules of each type are allowed. It does seem useful to clarify this, and what it means to receive multiple addresses and multiple routes (which are additive). NAME can you help write this?\nAgreed, multiple capsules of all currently defined capsule types are allowed. For example, multiple ADDRESS_ASSIGN are needed to support a dual-stack tunnel. I'll write a PR.", "new_text": "Upon receiving the ROUTE_ADVERTISEMENT capsule, an endpoint MAY start routing IP packets in that prefix to its peer. If an endpoint receives multiple ROUTE_ADVERTISEMENT capsules, all of the advertised routes can be used. 
For example, multiple ROUTE_ADVERTISEMENT capsules are necessary to provide routing to both IPv4 and IPv6 hosts. Routes are removed using ROUTE_WITHDRAWAL capsules. 4.2.4. The ROUTE_WITHDRAWAL capsule allows an endpoint to communicate to its"}
{"id": "q-en-draft-ietf-masque-connect-ip-17548e51f49352ffd95214fd1b9d80f09f15bf21f3e9950a019968def7c6e790", "old_text": "endpoint that receives packets for routes it has rejected MUST NOT treat that as an error. 5. IP packets are sent using HTTP Datagrams I-D.ietf-masque-h3-datagram.", "comments": "Good point, added. Note that I've also added an important note about how ROUTEADVERTISEMENT and ROUTEWITHDRAWAL coexist, please take a look NAME and NAME .\nAs discussed in , turning a sequence of ROUTEADVERTISEMENT and ROUTEWITHDRAWAL capsules into a functional routing table isn't as simple as initially expected. This is because two separate capsules can describe prefixes that are distinct but overlap. We should simplify the design or better explain the caveats.\nI think the capsules description needs to be explicit if multiple capsules are allowed. I think there are different considerations depending on what type of capsules are used. The authors have already discussed and concluded that multiple routeadvertisement will be needed to allow to declare reachability of multiple prefix. I don''t know if multiple addressassign capsules are allowed. Allowing multiple allows multiple address ranges to be declared. However, that results that we also get the question if one need to withdraw source address prefixes..\nYes, multiple capsules of each type are allowed. It does seem useful to clarify this, and what it means to receive multiple addresses and multiple routes (which are additive). NAME can you help write this?\nAgreed, multiple capsules of all currently defined capsule types are allowed. For example, multiple ADDRESS_ASSIGN are needed to support a dual-stack tunnel. I'll write a PR.", "new_text": "endpoint that receives packets for routes it has rejected MUST NOT treat that as an error. 
ROUTE_ADVERTISEMENT and ROUTE_WITHDRAWAL capsules are applied in order of receipt: if a prefix is covered by multiple received ROUTE_ADVERTISEMENT and/or ROUTE_WITHDRAWAL capsules, only the last received capsule applies as it supersedes prior ROUTE_ADVERTISEMENT and ROUTE_WITHDRAWAL capsules for this prefix. 5. IP packets are sent using HTTP Datagrams I-D.ietf-masque-h3-datagram."}
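The order-of-receipt rule above can be sketched as a most-recent-match lookup (illustrative Python; the capsule representation is an assumption, not a wire format):

```python
# Sketch: ROUTE_ADVERTISEMENT / ROUTE_WITHDRAWAL capsules are applied in
# order of receipt -- the most recently received capsule covering a
# destination supersedes earlier ones for that prefix.
import ipaddress

def is_routable(capsules, dst: str) -> bool:
    """capsules: list of ('advertise'|'withdraw', 'prefix/len') in receipt order."""
    addr = ipaddress.ip_address(dst)
    for action, prefix in reversed(capsules):
        if addr in ipaddress.ip_network(prefix):
            return action == "advertise"
    return False  # never advertised

capsules = [
    ("advertise", "2001:db8::/32"),
    ("withdraw", "2001:db8:1::/48"),  # later capsule carves out a sub-prefix
]
```

Scanning from the newest capsule backwards means the first covering entry found is the one that applies, matching the supersession rule described above.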
{"id": "q-en-draft-ietf-masque-connect-ip-434ad78b41a20b70f204e64524297b4b114898bdb9c99d00941cbbb54b767dcd", "old_text": "tunnel. Along with a successful response, the proxy can send capsules to assign addresses and routes to the client (capsules). The client can also assign addresses and routes to the proxy for network-to-network routing. 4.1.", "comments": "This PR is one potential way to It simplifies route exchanges by going with NAME 's suggestion of sending the entire routing table each time something changes. It could be somewhat inefficient in some cases, but it becomes easier to reason about than what's currently in the draft when overlapping prefixes are involved.\nThis is interesting, but I'd prefer to wait to do this until after we post a -00 version. NAME is that OK with you?\nI'd prefer we figure this out before -00, because this would be a pretty significant non-compatible change and I'd like to implement -00\nNAME you nearly convinced me in the call that we rather want to announce ranges separately but I forgot why. Can you outline your reasoning again?\nNAME my argument was that we shouldn't have two different ways to set routes, one that gives a table and one that gives a single route. If we go to only have a table, then it's fine.\nI know saw the discussion in and support this approach. Also if we agree on this approach, I also think we should merge it before we submit -00\nNAME I think we're ready to merge this PR\nAs discussed in , turning a sequence of ROUTEADVERTISEMENT and ROUTEWITHDRAWAL capsules into a functional routing table isn't as simple as initially expected. This is because two separate capsules can describe prefixes that are distinct but overlap. We should simplify the design or better explain the caveats.\nI think this looks good enough to include in the -00.", "new_text": "tunnel. Along with a successful response, the proxy can send capsules to assign addresses and advertise routes to the client (capsules). 
The client can also assign addresses and advertise routes to the proxy for network-to-network routing. 4.1."}
{"id": "q-en-draft-ietf-masque-connect-ip-434ad78b41a20b70f204e64524297b4b114898bdb9c99d00941cbbb54b767dcd", "old_text": "4.2.1. The ADDRESS_ASSIGN capsule allows an endpoint to inform its peer that it has assigned an IP address to it. It allows assigning a prefix which can contain multiple addresses. This capsule uses a Capsule Type of 0xfff100. Its value uses the following format: IP Version of this address assignment. MUST be either 4 or 6.", "comments": "This PR is one potential way to It simplifies route exchanges by going with NAME 's suggestion of sending the entire routing table each time something changes. It could be somewhat inefficient in some cases, but it becomes easier to reason about than what's currently in the draft when overlapping prefixes are involved.\nThis is interesting, but I'd prefer to wait to do this until after we post a -00 version. NAME is that OK with you?\nI'd prefer we figure this out before -00, because this would be a pretty significant non-compatible change and I'd like to implement -00\nNAME you nearly convinced me in the call that we rather want to announce ranges separately but I forgot why. Can you outline your reasoning again?\nNAME my argument was that we shouldn't have two different ways to set routes, one that gives a table and one that gives a single route. If we go to only have a table, then it's fine.\nI know saw the discussion in and support this approach. Also if we agree on this approach, I also think we should merge it before we submit -00\nNAME I think we're ready to merge this PR\nAs discussed in , turning a sequence of ROUTEADVERTISEMENT and ROUTEWITHDRAWAL capsules into a functional routing table isn't as simple as initially expected. This is because two separate capsules can describe prefixes that are distinct but overlap. We should simplify the design or better explain the caveats.\nI think this looks good enough to include in the -00.", "new_text": "4.2.1. 
The ADDRESS_ASSIGN capsule (see iana-types for the value of the capsule type) allows an endpoint to inform its peer that it has assigned an IP address or prefix to it. The ADDRESS_ASSIGN capsule allows assigning a prefix which can contain multiple addresses. Any of these addresses can be used as the source address on IP packets originated by the receiver of this capsule. IP Version of this address assignment. MUST be either 4 or 6."}
{"id": "q-en-draft-ietf-masque-connect-ip-434ad78b41a20b70f204e64524297b4b114898bdb9c99d00941cbbb54b767dcd", "old_text": "The number of bits in the IP Address that are used to define the prefix that is being assigned. This MUST be less than or equal to the length of the IP Address field, in bits. If the prefix length is equal to the length of the IP Address, the endpoint is only allowed to send packets from a single source address. If the prefix length is less than the length of the IP address, the endpoint is allowed to send packets from any source address that falls within the prefix. If an endpoint receives multiple ADDRESS_ASSIGN capsules, all of the assigned addresses or prefixes can be used. For example, multiple", "comments": "This PR is one potential way to It simplifies route exchanges by going with NAME 's suggestion of sending the entire routing table each time something changes. It could be somewhat inefficient in some cases, but it becomes easier to reason about than what's currently in the draft when overlapping prefixes are involved.\nThis is interesting, but I'd prefer to wait to do this until after we post a -00 version. NAME is that OK with you?\nI'd prefer we figure this out before -00, because this would be a pretty significant non-compatible change and I'd like to implement -00\nNAME you nearly convinced me in the call that we rather want to announce ranges separately but I forgot why. Can you outline your reasoning again?\nNAME my argument was that we shouldn't have two different ways to set routes, one that gives a table and one that gives a single route. If we go to only have a table, then it's fine.\nI know saw the discussion in and support this approach. Also if we agree on this approach, I also think we should merge it before we submit -00\nNAME I think we're ready to merge this PR\nAs discussed in , turning a sequence of ROUTEADVERTISEMENT and ROUTEWITHDRAWAL capsules into a functional routing table isn't as simple as initially expected. 
This is because two separate capsules can describe prefixes that are distinct but overlap. We should simplify the design or better explain the caveats.\nI think this looks good enough to include in the -00.", "new_text": "The number of bits in the IP Address that are used to define the prefix that is being assigned. This MUST be less than or equal to the length of the IP Address field, in bits. If the prefix length is equal to the length of the IP Address, the receiver of this capsule is only allowed to send packets from a single source address. If the prefix length is less than the length of the IP address, the receiver of this capsule is allowed to send packets from any source address that falls within the prefix. If an endpoint receives multiple ADDRESS_ASSIGN capsules, all of the assigned addresses or prefixes can be used. For example, multiple"}
{"id": "q-en-draft-ietf-masque-connect-ip-434ad78b41a20b70f204e64524297b4b114898bdb9c99d00941cbbb54b767dcd", "old_text": "4.2.2. The ADDRESS_REQUEST capsule allows an endpoint to request assignment of an IP address from its peer. This capsule is not required for simple client/proxy communication where the client only expects to receive one address from the proxy. The capsule allows the endpoint to optionally indicate a preference for which address it would get assigned. This capsule uses a Capsule Type of 0xfff101. Its value uses the following format: IP Version of this address request. MUST be either 4 or 6.", "comments": "This PR is one potential way to It simplifies route exchanges by going with NAME 's suggestion of sending the entire routing table each time something changes. It could be somewhat inefficient in some cases, but it becomes easier to reason about than what's currently in the draft when overlapping prefixes are involved.\nThis is interesting, but I'd prefer to wait to do this until after we post a -00 version. NAME is that OK with you?\nI'd prefer we figure this out before -00, because this would be a pretty significant non-compatible change and I'd like to implement -00\nNAME you nearly convinced me in the call that we rather want to announce ranges separately but I forgot why. Can you outline your reasoning again?\nNAME my argument was that we shouldn't have two different ways to set routes, one that gives a table and one that gives a single route. If we go to only have a table, then it's fine.\nI know saw the discussion in and support this approach. Also if we agree on this approach, I also think we should merge it before we submit -00\nNAME I think we're ready to merge this PR\nAs discussed in , turning a sequence of ROUTEADVERTISEMENT and ROUTEWITHDRAWAL capsules into a functional routing table isn't as simple as initially expected. This is because two separate capsules can describe prefixes that are distinct but overlap. 
We should simplify the design or better explain the caveats.\nI think this looks good enough to include in the -00.", "new_text": "4.2.2. The ADDRESS_REQUEST capsule (see iana-types for the value of the capsule type) allows an endpoint to request assignment of an IP address from its peer. This capsule is not required for simple client/proxy communication where the client only expects to receive one address from the proxy. The capsule allows the endpoint to optionally indicate a preference for which address it would get assigned. IP Version of this address request. MUST be either 4 or 6."}
{"id": "q-en-draft-ietf-masque-connect-ip-434ad78b41a20b70f204e64524297b4b114898bdb9c99d00941cbbb54b767dcd", "old_text": "4.2.3. The ROUTE_ADVERTISEMENT capsule allows an endpoint to communicate to its peer that it is willing to route traffic to a given prefix. This indicates that the sender has an existing route to the prefix, and notifies its peer that if the receiver of the ROUTE_ADVERTISEMENT capsule sends IP packets for this prefix in HTTP Datagrams, the sender of the capsule will forward them along its preexisting route. This capsule uses a Capsule Type of 0xfff102. Its value uses the following format: IP Version of this route advertisement. MUST be either 4 or 6. IP address of the advertised route. If the IP Version field has value 4, the IP Address field SHALL have a length of 32 bits. If the IP Version field has value 6, the IP Address field SHALL have a length of 128 bits. The number of bits in the IP Address that are used to define the prefix of the advertised route. This MUST be lesser or equal to the length of the IP Address field, in bits. If the prefix length is equal to the length of the IP Address, this route only allows sending packets to a single destination address. If the prefix length is less than the length of the IP address, this route allows sending packets to any destination address that falls within the prefix. The Internet Protocol Number for traffic that can be sent to this prefix. If the value is 0, all protocols are allowed. Upon receiving the ROUTE_ADVERTISEMENT capsule, an endpoint MAY start routing IP packets in that prefix to its peer. If an endpoint receives multiple ROUTE_ADVERTISEMENT capsules, all of the advertised routes can be used. For example, multiple ROUTE_ADVERTISEMENT capsules are necessary to provide routing to both IPv4 and IPv6 hosts. Routes are removed using ROUTE_WITHDRAWAL capsules. 4.2.4. 
The ROUTE_WITHDRAWAL capsule allows an endpoint to communicate to its peer that it is not willing to route traffic to a given prefix. This capsule uses a Capsule Type of 0xfff103. Its value uses the following format: IP Version of this route withdrawal. MUST be either 4 or 6. IP address of the withdrawn route. If the IP Version field has value 4, the IP Address field SHALL have a length of 32 bits. If the IP Version field has value 6, the IP Address field SHALL have a length of 128 bits. The number of bits in the IP Address that are used to define the prefix of the withdrawn route. This MUST be lesser or equal to the length of the IP Address field, in bits. The Internet Protocol Number for traffic for this route. If the value is 0, all protocols are withdrawn for this prefix. Upon receiving the ROUTE_WITHDRAWAL capsule, an endpoint MUST stop routing IP packets in that prefix to its peer. Note that this capsule can be reordered with DATAGRAM frames, and therefore an endpoint that receives packets for routes it has rejected MUST NOT treat that as an error. ROUTE_ADVERTISEMENT and ROUTE_WITHDRAWAL capsules are applied in order of receipt: if a prefix is covered by multiple received ROUTE_ADVERTISEMENT and/or ROUTE_WITHDRAWAL capsules, only the last received capsule applies as it supersedes prior ROUTE_ADVERTISEMENT and ROUTE_WITHDRAWAL capsules for this prefix. 5.", "comments": "This PR is one potential way to It simplifies route exchanges by going with NAME 's suggestion of sending the entire routing table each time something changes. It could be somewhat inefficient in some cases, but it becomes easier to reason about than what's currently in the draft when overlapping prefixes are involved.\nThis is interesting, but I'd prefer to wait to do this until after we post a -00 version. 
NAME is that OK with you?\nI'd prefer we figure this out before -00, because this would be a pretty significant non-compatible change and I'd like to implement -00\nNAME you nearly convinced me in the call that we rather want to announce ranges separately but I forgot why. Can you outline your reasoning again?\nNAME my argument was that we shouldn't have two different ways to set routes, one that gives a table and one that gives a single route. If we go to only have a table, then it's fine.\nI know saw the discussion in and support this approach. Also if we agree on this approach, I also think we should merge it before we submit -00\nNAME I think we're ready to merge this PR\nAs discussed in , turning a sequence of ROUTEADVERTISEMENT and ROUTEWITHDRAWAL capsules into a functional routing table isn't as simple as initially expected. This is because two separate capsules can describe prefixes that are distinct but overlap. We should simplify the design or better explain the caveats.\nI think this looks good enough to include in the -00.", "new_text": "4.2.3. The ROUTE_ADVERTISEMENT capsule (see iana-types for the value of the capsule type) allows an endpoint to communicate to its peer that it is willing to route traffic to a set of IP address ranges. This indicates that the sender has an existing route to each address range, and notifies its peer that if the receiver of the ROUTE_ADVERTISEMENT capsule sends IP packets for one of these ranges in HTTP Datagrams, the sender of the capsule will forward them along its preexisting route. Any address which is in one of the address ranges can be used as the destination address on IP packets originated by the receiver of this capsule. The ROUTE_ADVERTISEMENT capsule contains a sequence of IP Address Ranges. IP Version of this range. MUST be either 4 or 6. Inclusive start and end IP address of the advertised range. If the IP Version field has value 4, these fields SHALL have a length of 32 bits. 
If the IP Version field has value 6, these fields SHALL have a length of 128 bits. The Start IP Address MUST be strictly less than the End IP Address. The Internet Protocol Number for traffic that can be sent to this range. If the value is 0, all protocols are allowed. Upon receiving the ROUTE_ADVERTISEMENT capsule, an endpoint MAY start routing IP packets in these ranges to its peer. Each ROUTE_ADVERTISEMENT contains the full list of address ranges. If multiple ROUTE_ADVERTISEMENT capsules are sent in one direction, each ROUTE_ADVERTISEMENT capsule supersedes prior ones. In other words, if a given address range was present in a prior capsule but the most recently received ROUTE_ADVERTISEMENT capsule does not contain it, the receiver will consider that range withdrawn. If multiple ranges using the same IP protocol were to overlap, some routing table implementations might reject them. To prevent overlap, the ranges are ordered; this places the burden on the sender and makes verification by the receiver much simpler. If an IP Address Range A precedes an IP address range B in the same ROUTE_ADVERTISEMENT capsule, they MUST follow these requirements: IP Version of A MUST be less than or equal to IP Version of B. If the IP Version of A and B are equal, the IP Protocol of A MUST be less than or equal to IP Protocol of B. If the IP Version and IP Protocol of A and B are both equal, the End IP Address of A MUST be strictly less than the Start IP Address of B. If an endpoint receives a ROUTE_ADVERTISEMENT capsule that does not meet these requirements, it MUST abort the stream. 5."}
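The receiver-side verification of the range-ordering requirements quoted above might look like this (Python sketch; the tuple representation of a decoded range is an assumption):

```python
# Sketch: verify the ordering requirements on IP Address Ranges in a
# ROUTE_ADVERTISEMENT capsule -- sorted by IP version, then protocol,
# and non-overlapping within the same (version, protocol) pair.
import ipaddress

def ranges_valid(ranges) -> bool:
    """ranges: list of (version, protocol, start_addr, end_addr) in capsule order."""
    prev = None
    for version, proto, start, end in ranges:
        start = ipaddress.ip_address(start)
        end = ipaddress.ip_address(end)
        # Addresses must match the declared version, and start < end.
        if start.version != version or end.version != version or not start < end:
            return False
        if prev is not None:
            p_ver, p_proto, p_end = prev
            if (version, proto) < (p_ver, p_proto):
                return False  # out of order by version/protocol
            if (version, proto) == (p_ver, p_proto) and not p_end < start:
                return False  # overlapping or unordered addresses
        prev = (version, proto, end)
    return True
```

A receiver that finds `ranges_valid` returning false would abort the stream, as the requirements above mandate.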
{"id": "q-en-draft-ietf-masque-connect-ip-434ad78b41a20b70f204e64524297b4b114898bdb9c99d00941cbbb54b767dcd", "old_text": "or split-tunnel. In this case, the client does not specify any scope in its request. The server assigns the client an IPv6 address prefix to the client (2001:db8::/64) and a full-tunnel route of all IPv6 addresses (::/0). The client can then send to any IPv6 host using a source address in its assigned prefix. A setup for a split-tunnel VPN (the case where the client can only access a specific set of private subnets) is quite similar. In this case, the advertised route is restricted to 2001:db8::/32, rather than ::/0. 6.2.", "comments": "This PR is one potential way to It simplifies route exchanges by going with NAME 's suggestion of sending the entire routing table each time something changes. It could be somewhat inefficient in some cases, but it becomes easier to reason about than what's currently in the draft when overlapping prefixes are involved.\nThis is interesting, but I'd prefer to wait to do this until after we post a -00 version. NAME is that OK with you?\nI'd prefer we figure this out before -00, because this would be a pretty significant non-compatible change and I'd like to implement -00\nNAME you nearly convinced me in the call that we rather want to announce ranges separately but I forgot why. Can you outline your reasoning again?\nNAME my argument was that we shouldn't have two different ways to set routes, one that gives a table and one that gives a single route. If we go to only have a table, then it's fine.\nI know saw the discussion in and support this approach. Also if we agree on this approach, I also think we should merge it before we submit -00\nNAME I think we're ready to merge this PR\nAs discussed in , turning a sequence of ROUTEADVERTISEMENT and ROUTEWITHDRAWAL capsules into a functional routing table isn't as simple as initially expected. 
This is because two separate capsules can describe prefixes that are distinct but overlap. We should simplify the design or better explain the caveats.\nI think this looks good enough to include in the -00.", "new_text": "or split-tunnel. In this case, the client does not specify any scope in its request. The server assigns the client an IPv4 address to the client (192.0.2.11) and a full-tunnel route of all IPv4 addresses (0.0.0.0/0). The client can then send to any IPv4 host using a source address in its assigned prefix. A setup for a split-tunnel VPN (the case where the client can only access a specific set of private subnets) is quite similar. In this case, the advertised route is restricted to 192.0.2.0/24, rather than 0.0.0.0/0. 6.2."}
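The snapshot semantics agreed on in the discussion above ("each capsule supersedes prior ones") make the receiver trivial, since no add/withdraw delta bookkeeping is needed. A minimal sketch (illustrative, not from the draft):

```python
# Receiver-side state under ROUTE_ADVERTISEMENT snapshot semantics: the
# most recent capsule is authoritative, so any range absent from it is
# implicitly withdrawn.
class RouteState:
    def __init__(self):
        self.routes = []  # current set of advertised ranges

    def on_route_advertisement(self, ranges):
        # Replace, never merge: this is what avoids the overlapping-prefix
        # problems that separate advertise/withdraw capsules ran into.
        self.routes = list(ranges)
```

For example, a capsule advertising two ranges followed by one advertising a single range leaves only that single range in effect.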
{"id": "q-en-draft-ietf-masque-connect-ip-f49bcbb154904b7c8d769576163869b007de480a771581a16292bed42adf5772", "old_text": "Unlike CONNECT-UDP requests, which require specifying a target host, CONNECT-IP requests can allow endpoints to send arbitrary IP packets to any host. The client can choose to restrict a given request to a specific host or IP protocol by adding parameters to its request. When the server knows that a request is scoped to a target host or protocol, it can leverage this information to optimize its resource allocation; for example, the server can assign the same public IP address to two CONNECT-IP requests that are scoped to different hosts and/or different protocols. CONNECT-IP uses URI template variables (client-config) to determine the scope of the request for packet proxying. All variables defined", "comments": "Instead of just an address / hostname, could we ask to open a tunnel to a prefix?\nTommy, can you please clarify the reasoning why this functionality is useful? Specifying the target as a subnet would, to my understanding, compared to a general IP tunneling address, just restrict the client so that it will never try to send IP packets to any IP addresses outside of this subnet. And thus potentially enable serving multiple masque clients using restricted targets to share the source IP address. Is that the intention? Or is the intention here to have the masque server potentially redirect to another instance based on the target in the request?\nThis came up in a discussion\u2014if we let you ask to connect to a specific hostname or address instead of all addresses (as we do today), we wondered if there was any need to connect to a subnet of addresses. This would be for connecting to an IPv6 /64, which is unique enough for a device, and doesn't require assigning an IP address exclusively to the client that is issuing the request.\nI'm supportive of adding this if we can come up with an encoding that's not a pain to implement. 
For example slash with prefix length seems fine to me - the slash would be percent-encoded just like the IPv6 address colons are.\nIn relation to my just raised issue on source address validation if we add this functionality, I would think that the proxy would need to do destination or source address validation depending on direction of traffic to enforce the signalled aspect.", "new_text": "Unlike CONNECT-UDP requests, which require specifying a target host, CONNECT-IP requests can allow endpoints to send arbitrary IP packets to any host. The client can choose to restrict a given request to a specific prefix or IP protocol by adding parameters to its request. When the server knows that a request is scoped to a target prefix or protocol, it can leverage this information to optimize its resource allocation; for example, the server can assign the same public IP address to two CONNECT-IP requests that are scoped to different prefixes and/or different protocols. CONNECT-IP uses URI template variables (client-config) to determine the scope of the request for packet proxying. All variables defined"}
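The encoding question raised in the comments ("the slash would be percent-encoded just like the IPv6 address colons are") is cheap to verify. A sketch, assuming a hypothetical URI template (the template string below is illustrative, not taken from the draft):

```python
# Sketch of carrying a scoping prefix in a URI template variable: the '/'
# before the prefix length is percent-encoded just like the IPv6 colons.
from urllib.parse import quote

TEMPLATE = "https://proxy.example/.well-known/masque/ip/{target}/{ipproto}/"

def expand(target, ipproto):
    # safe="" forces ':' and '/' to be percent-encoded as well
    return TEMPLATE.format(target=quote(target, safe=""), ipproto=ipproto)

url = expand("2001:db8::/32", 0)
```

Here "2001:db8::/32" expands to "2001%3Adb8%3A%3A%2F32" inside the URI, so the prefix length survives template expansion unambiguously.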
{"id": "q-en-draft-ietf-masque-connect-ip-15aac0ad9ca8c6fe1921c82bc6a9c5e1b6c2aa94a7646f82f4929261d0b38cc4", "old_text": "an HTTP Datagram. This prevents infinite loops in the presence of routing loops, and matches the choices in IPsec IPSEC. Endpoints MAY implement additional filtering policies on the IP packets they forward.", "comments": "I think there are two options here, 1) close/deny the connect request, and 2) use streams instead. I think we should consider both options, especially given streams are just there to use.\nIsn't this a dup of ? That's fixed with . I guess the text could be generalized to talk about more than just IPv6 MTU issues.\nNo, this aspect of what you do when the required MTU is not present is not resolved by that PR. I would note that closing gives the endpoints one experience that is likely easier to handle in implementation and does avoid the issue with reordering and the varying packet treatment that moving to streams would result in.\nI would suggest adding a sentence saying that if an endpoint realizes that the IPv6 tunnel MTU goes lower than 1280, then it MUST close the stream.\nIETF 114: Note that if something happens and your MTU gets smaller, you may need to destroy that link. Potentially make this option a SHOULD do PMTUD, otherwise you could end up in a situation where you're not a compliant IPv6 tunnel. Proposal:\nEven if PR was an improvement to the description of the IPv6 minimal link MTU issue, the text still leaves an open issue in that it doesn't make clear which of the endpoints is responsible for HTTP intermediaries, or if it is truly both, which has another implication. So a Connect-IP endpoint will be aware of the HTTP datagram supported MTU and whether its HTTP/3 connection can support sending at least 1280 bytes of IPv6 packets without fragmentation. 
However, if there are two or more HTTP intermediaries in between the client and the server then there are HTTP intermediaries that might have an HTTP/3 connection which has less than 1280 bytes MTU for HTTP datagram payloads in-between the client and server. Thus arises the issue of how to detect this. PR states: if HTTP intermediaries are in use, CONNECT-IP endpoints can enter in an out of band agreement with the intermediaries to ensure that endpoints and intermediaries pad QUIC INITIAL packets. From my perspective the big question is how a client or a server knows that there are intermediaries, or the most important aspect, that there are no unknown HTTP intermediaries. To my understanding both the client and the server can potentially bring in HTTP intermediaries unknown to the other party. Thus resulting in an HTTP intermediary-to-intermediary connection where neither endpoint knows if the requirement is fulfilled. Thus my interpretation is that the only way to be sure that it works is to actually execute on the third bullet in the PR: CONNECT-IP endpoints can also send ICMPv6 echo requests with 1232 bytes of data to ascertain the link MTU and tear down the tunnel if they do not receive a response. Unless endpoints have an out of band means of guaranteeing that one of the two previous techniques is sufficient, they MUST use this method. I would also note that this only verifies from the acting endpoint towards its peer, not the reverse path. For that to be verified the peer needs to send ICMP probes the other direction. Thus, my conclusion is that Connect-IP still doesn't ensure that the minimal MTU is fulfilled.\nIsn't the quick-fix to require that the ICMP reply also be 1232 bytes in size?\nNormally I would say no; as you want to ensure that the reply makes it back, it shouldn't be padded. Also if the response is anything other than an echo reply you also should not pad. However, I think the implication is the same either way. 
Both the masque client and server need to send ICMP or modify their ICMP processing. Given that I would think having each endpoint doing independent processing of its forward path might actually be simpler and enables using a standard off-the-shelf ICMP implementation and not have to modify it.\nOur implementation has userspace ICMP request/response handling for echo request/response; it wasn't too hard to write. I don't propose modifying anything, but the structures were ~50 lines of code to set up. I agree we shouldn't be modifying responses we did not generate.\nI think the existing text already is generic to \"endpoint\", not client or server: So this implies that both sides need to send the PING if they haven't guaranteed the MTU. This seems like it is already handled?\nIf both endpoints are required to send ping it would be resolved. However, that is not what is written in the draft currently. What is written is that there are various techniques, and if an endpoint chooses something other than verifying its forward path using ping, it becomes unclear who is actually responsible for any HTTP intermediaries. Which results in an unclear requirement and very likely failure of the MTU verification resulting in an MTU black hole.\nWell, it does say that endpoints MUST use this method unless they already have other coordination. I could see some clarification text to say that the guarantee for the other methods must include knowing there isn't an intermediary \u2014 which is actually pretty easy to guarantee in many cases.\nThat is what I intended when writing , but perhaps I wasn't clear. Can one of you propose a PR that would clarify this?\nAs per , the IPv6 minimum link MTU is 1280 bytes. However, states that it is possible to have a functional QUIC connection on a path that only allows 1200 bytes of UDP payloads. Such a path would only allow HTTP Datagrams with payloads up to 1180 bytes. 
If we only send IPv6 packets over QUIC DATAGRAM frames, such a setup would result in violating the IPv6 minimum link MTU. We should describe this problem and how to react to it. Possible solutions include failing the CONNECT-IP request, using the DATAGRAM capsule, or different solutions altogether.\nThis is only a problem for the end-2-end QUIC handshake packets, right? I was assuming that you either have a QUIC tunnel that has a larger MTU in order to use datagrams or you have to use stream-based transmission for the e2e handshake and then need a way to switch to datagram later. In any case we need to say more in the draft!\nFrom the most recent session, I think we may have consensus to address this by noting that CONNECT-IP over H3 requires a minimum link MTU of X bytes on every HTTP hop for IPv6 compliance, leaving support for smaller links to an extension.\nI'm not sure the consensus was clear, but I think it's upon us to make a proposal and see if it flies =)\nThat sounds like the best path forward.\nI propose a simple solution that should be enough for most use-cases: if any endpoint detects that the path MTU is too low, it MUST close the tunnel if the tunnel had IPv6 addresses or routes on it. Endpoints are not required to measure the path MTU. Endpoints that wish to use IPv6 SHOULD ensure that their path MTU is sufficient. For example, that can be guaranteed by padding QUIC initial packets to 1320 bytes.\nThat seems reasonable to me as a proposal!\nSo how does the endpoint determine that the Path MTU is too low? Consider a scenario that looks like this: C - I1 - I2 - S, where it is the I1-I2 path that can't support 1280-byte datagrams. The I nodes are HTTP intermediaries. 
The client will not know that the datagram MTU size is too low on these nodes; to my understanding the datagram payloads will simply be dropped by the I nodes in either direction.\nIn that scenario, if the endpoint wants to check, it can send an ICMPv6 echo request with 1232 bytes of data - if it receives an ICMPv6 echo reply that means that the path supports the minimum IPv6 MTU.\nNAME then we are making it mandatory to send ICMPv6 echo to the external address assigned for the client, and for the server sending one to the client, this to determine that a minimum of 1280 bytes is working over MASQUE. Because to me there is no way the MASQUE endpoint can determine in all cases if there are HTTP intermediaries or not. Else we have to put requirements on the HTTP intermediaries to understand MASQUE, which to my understanding was not desired. And maybe using ICMPv6 is the easiest method with the lowest implementation burden. The issue I see with this aspect is that many applications when establishing a Connect-IP path might not want to wait for the ICMPv6 response before sending other traffic, so we have a situation where there is a small, but non-zero, risk that these packets go into a black hole, and then the ICMP response comes back and the MASQUE Connect-IP stream will be closed due to the response to indicate the failure. I think this is all manageable and not different from failures that can occur for other reasons.\nI don't see why this should be mandatory. When I plug in an Ethernet cable into my computer, I don't know whether the IP stack on the next router spuriously drops packets above 1000 bytes long, and my OS has no requirement to measure the path MTU.\nBut for said router, dropping IPv6 packets below 1280 would be a protocol violation. The issue here is the interaction with the HTTP intermediaries and what is under the control of the Connect-IP function. So Connect-IP basically provides an IPv6-capable link. 
Per the IPv6 RFC, that requires us to support 1280 at least. What has been the rough consensus so far is that we are not defining a fragmentation and reassembly mechanism, or rather how to ensure that one uses QUIC streams when the transport is HTTP/3 for below-1280 IPv6 packets, and instead reject the service. Thus we are in the position of having to answer how one knows when to reject the service. The alternatives I see are: 1) Force the Connect-IP endpoints to ICMP ping each other to verify that the path supports 1280 IPv6 packets 2) Have the HTTP intermediaries be Connect-IP aware to the level that they can reject the HTTP request if the path does not support 1280 packets. Where 2) has the downside that we still likely have some black holes in cases where the HTTP intermediaries are not Connect-IP aware and ignore the requirements. Where 1) has the downside that the endpoint's request can't be rejected during the request/response sequence and instead has to be closed with an error message indicating the path's failure to support the packet.\nI'm opposed to requiring ICMP pings because the majority of use-cases will not have this problem. How about we add text like this: CONNECT-IP endpoints that enable IPv6 connectivity in the tunnel MUST ensure that the tunnel MTU is at least 1280 bytes to meet the IPv6 minimum required MTU. This can be accomplished multiple different ways, including: Endpoints padding QUIC initial packets when it is known that there are no intermediaries present Endpoints sending ICMPv6 echo requests with 1232 bytes of data an out of band agreement between one of the endpoints and any involved intermediaries to have them pad QUIC initial packets\nSo I understand your desire to not add complexity anywhere for a likely seldom-arising case. However, I do fear that not addressing this with a clear assignment of whose responsibility it is to avoid the issue will result in no one doing anything about it. 
Because an endpoint that uses HTTP/3 can verify its first hop. However, the complexities arise when you have HTTP intermediaries in the mix.\nI get that, but since the complexity is added by intermediaries we just need to make it clear that the intermediary needs to not be silly\nOne part is to detect whether the link is capable of 1232 bytes; however, if not, we also need to provide more guidance on what to do. I think there are two options: either close the connection or use QUIC streams instead of datagrams.", "new_text": "an HTTP Datagram. This prevents infinite loops in the presence of routing loops, and matches the choices in IPsec IPSEC. IPv6 requires that every link have an MTU of at least 1280 bytes IPv6. Since CONNECT-IP conveys IP packets in HTTP Datagrams and those can in turn be sent in QUIC DATAGRAM frames which cannot be fragmented DGRAM, the MTU of a CONNECT-IP link can be limited by the MTU of the QUIC connection that CONNECT-IP is operating over. This can lead to situations where the IPv6 minimum link MTU is violated. CONNECT-IP endpoints that support IPv6 MUST ensure that the CONNECT-IP tunnel link MTU is at least 1280 (i.e., that they can send HTTP Datagrams with payloads of at least 1280 bytes). This can be accomplished using various techniques: if HTTP intermediaries are not in use, both CONNECT-IP endpoints can pad the QUIC INITIAL packets of the underlying QUIC connection that CONNECT-IP is running over. if HTTP intermediaries are in use, CONNECT-IP endpoints can enter in an out of band agreement with the intermediaries to ensure that endpoints and intermediaries pad QUIC INITIAL packets. CONNECT-IP endpoints can also send ICMPv6 echo requests with 1232 bytes of data to ascertain the link MTU and tear down the tunnel if they do not receive a response. Unless endpoints have an out of band means of guaranteeing that one of the two previous techniques is sufficient, they MUST use this method. 
Endpoints MAY implement additional filtering policies on the IP packets they forward."}
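The 1232-byte echo-request payload in the text above is exactly what fills the IPv6 minimum link MTU; a quick arithmetic check (illustrative only):

```python
# A 1232-byte ICMPv6 Echo Request payload produces a minimum-MTU-sized
# IPv6 packet: 40-byte fixed IPv6 header + 8-byte ICMPv6 echo header.
IPV6_MIN_MTU = 1280
IPV6_HEADER_LEN = 40
ICMPV6_ECHO_HEADER_LEN = 8  # type, code, checksum, identifier, sequence

probe_data_len = IPV6_MIN_MTU - IPV6_HEADER_LEN - ICMPV6_ECHO_HEADER_LEN
assert probe_data_len == 1232
```

Receiving a reply to such a probe therefore demonstrates that the forward path can carry a full 1280-byte IPv6 packet.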
{"id": "q-en-draft-ietf-masque-connect-ip-c2a08b997896e4baae5cc9483d0193174157ab9a79bee53bb6ed0065eb2ef8e5", "old_text": "have a route for the destination address, or if it is configured to reject a destination prefix by policy, or if the MTU of the outgoing link is lower than the size of the packet to be forwarded. In such scenarios, CONNECT-IP endpoints SHOULD use ICMP ICMP to signal the forwarding error to its peer. 8.", "comments": "If one is using IPPROTO to select a particular protocol to be tunneled, one may still receive ICMP in the context of this request. To my understanding there are two reasons for this. a) ICMP generated by the Connect-IP peer endpoint b) ICMP sent back by a node beyond the Connect-IP that is directly related to forward traffic sent by this endpoint and which the tunnel endpoint maps back to the Connect-IP request. I think we could add a clarification of this in , or we could do it later.\nTotally agree. I added fb1578626de4d3073ac31f5cb89c06efae2ebc51 to address this as part of .\nBased on the discussion in issue we recommend using ICMP as the error signal. ICMP works for an MTU issue or destination unreachable; however, there is no appropriate message type if the IP header of the packet provided by the client contains an IP that wasn't assigned to the client. So we need a different kind of error handling for this case?\nI think ICMP works for this, I'd send See . Would that work?\nI guess that could do it. Should we be more specific in the draft? Also, we probably would need to be more explicit in the draft that you should request a scope for your CONNECT request that allows you to receive ICMP messages.\nCan you write a PR for this please?\nIETF 114: Recommend including ICMP if providing routes to the peer, and clients should always be prepared to receive ICMP. 
Also add note that ICMP may have other source/destination addresses so as to not surprise endpoints.\nAlso look at RFC 4301 for some potentially helpful text.\nWhere is the ICMPv4 detail?", "new_text": "have a route for the destination address, or if it is configured to reject a destination prefix by policy, or if the MTU of the outgoing link is lower than the size of the packet to be forwarded. In such scenarios, CONNECT-IP endpoints SHOULD use ICMP ICMP ICMPV6 to signal the forwarding error to its peer. Endpoints are free to select the most appropriate ICMP errors to send. Some examples that are relevant for CONNECT-IP include: For invalid source addresses, send Destination Unreachable ICMPV6 with code 5, \"Source address failed ingress/egress policy\". For unroutable destination addresses, send Destination Unreachable ICMPV6 with a code 0, \"No route to destination\", or code 1, \"Communication with destination administratively prohibited\". For packets that cannot fit within the MTU of the outgoing link, send Packet Too Big ICMPV6. In order to receive these errors, endpoints need to be prepared to receive ICMP packets. If an endpoint sends ROUTE_ADVERTISEMENT capsules, its routes SHOULD include an allowance for receiving ICMP messages. If an endpoint does not send ROUTE_ADVERTISEMENT capsules, such as a client opening an IP flow through a proxy, it SHOULD process proxied ICMP packets from its peer in order to receive these errors. Note that ICMP messages can originate from a source address different from that of the CONNECT-IP peer. 8."}
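The Destination Unreachable variants listed in the new text take only a few lines of byte packing to form. A hand-rolled sketch (assumption: the ICMPv6 checksum over the IPv6 pseudo-header is computed elsewhere before the message is sent):

```python
# Sketch of forming the ICMPv6 Destination Unreachable message bytes
# described above: type 1, a code from RFC 4443, a zero checksum
# placeholder, 4 unused bytes, then as much of the offending packet as fits.
import struct

ICMPV6_DEST_UNREACH = 1
CODE_NO_ROUTE = 0           # "No route to destination"
CODE_ADMIN_PROHIBITED = 1   # "Communication ... administratively prohibited"
CODE_SRC_POLICY_FAILED = 5  # "Source address failed ingress/egress policy"

def dest_unreachable(code, invoking_packet, max_len=1232):
    # type (1B), code (1B), checksum (2B, placeholder), unused (4B)
    header = struct.pack("!BBHI", ICMPV6_DEST_UNREACH, code, 0, 0)
    return header + invoking_packet[: max_len - len(header)]
```

For example, a CONNECT-IP endpoint rejecting a packet with an unassigned source address would send code 5 and include the start of the rejected packet.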
{"id": "q-en-draft-ietf-masque-connect-udp-0d1c7ef449fdf80fd58ccea70e8c27e1efdb593c4003811f9f933081cf08a367", "old_text": "There are significant risks in allowing arbitrary clients to establish a tunnel to arbitrary targets, as that could allow bad actors to send traffic and have it attributed to the proxy. Proxies that support UDP proxying SHOULD restrict its use to authenticated users. Because the CONNECT method creates a TCP connection to the target,", "comments": "People shout at me for doing this, so I'm paying it forward. Perhaps you could move the SHOULD requirement to a statement in Section 3, so that it covers all forms of the CONNECT-UDP request.", "new_text": "There are significant risks in allowing arbitrary clients to establish a tunnel to arbitrary targets, as that could allow bad actors to send traffic and have it attributed to the proxy. Proxies that support UDP proxying ought to restrict its use to authenticated users. Because the CONNECT method creates a TCP connection to the target,"}
{"id": "q-en-draft-ietf-masque-connect-udp-2e8029568d4b12a9562df6bd593d218dd2a8136ebb86712c7005328b8e318e03", "old_text": "contains the unmodified payload of a UDP packet (referred to as \"data octets\" in UDP). Clients MAY optimistically start sending proxied UDP packets before receiving the response to its UDP proxying request. However, implementors should note that such proxied packets may not be processed by the proxy if it responds to the request with a failure, or if the proxied packets are received by the proxy before the request. By virtue of the definition of the UDP header UDP, it is not possible to encode UDP payloads longer than 65527 bytes. Therefore, endpoints", "comments": "In Section 5, in this sentence, the term \"proxied UDP packets\" is used for the only time and might not be clear: How about just saying \"UDP packets in HTTP datagrams\"?\n+1 good improvement", "new_text": "contains the unmodified payload of a UDP packet (referred to as \"data octets\" in UDP). Clients MAY optimistically start sending UDP packets in HTTP Datagrams before receiving the response to its UDP proxying request. However, implementors should note that such proxied packets may not be processed by the proxy if it responds to the request with a failure, or if the proxied packets are received by the proxy before the request. By virtue of the definition of the UDP header UDP, it is not possible to encode UDP payloads longer than 65527 bytes. Therefore, endpoints"}
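The 65527-byte ceiling mentioned in the record above follows directly from the UDP header; a quick check of the arithmetic (illustrative only):

```python
# The 16-bit UDP Length field counts the 8-byte header plus the data
# octets, so the largest encodable payload is 65535 - 8 bytes.
UDP_LENGTH_FIELD_MAX = 0xFFFF  # 65535
UDP_HEADER_LEN = 8

max_udp_payload = UDP_LENGTH_FIELD_MAX - UDP_HEADER_LEN
assert max_udp_payload == 65527
```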
{"id": "q-en-draft-ietf-masque-connect-udp-60799f0db06f2b323d1cdcf58f1d21bc54ef39b218fdcddab7b9c5e21f2c12d3", "old_text": "could send more data to an unwilling target than a CONNECT proxy. However, in practice denial of service attacks target open TCP ports so the TCP SYN-ACK does not offer much protection in real scenarios. The security considerations described in HTTP-DGRAM also apply here.", "comments": "Thx!\nShould we maybe add a recommendation to the sec considerations that proxies may limit the amount of data they send as long as they don't see any reply? I guess initially you would limit to 10 packets and then following RFC8085 limit to one datagram per 3 seconds...? Not sure if that really is a good idea but I wanted to bring it up for discussion...\nI thought about that when I first wrote the existing text, and I came to the conclusion that a limit here wouldn't protect against real attacks. If someone is attacking a closed UDP port, they're not accomplishing anything. If someone attacks an open UDP port (for example 443 because they want to attack someone's QUIC endpoint), then the target will reply with some sort of UDP packet (for example a QUIC RETRY) so this protection will disable itself. Does that make sense?\nThe challenge I see with any advice is that the proxy is not an application using UDP; it's just passing opaque data. So it will really struggle to rationalise at all about the various UDP use cases that clients have for the proxy. Perhaps rather than focus on advice, focus on how CONNECT UDP proxy considerations differ from CONNECT TCP proxies. 
That may well cover, at the same time, considerations like how long should a proxy hold open requests (tunnels) that don't exchange any packets in one or both directions i.e., a DoS attack on the proxy itself via a slow loris kind of mechanism\nThat's discussed in the Proxy Handling section, it references BEHAVE for how long sockets should be kept open\nNAME would it be worth adding your reasoning (which makes sense to me) to the draft somewhere?\nfair point, I'd read that and forgotten about it, maybe worth a back reference from the security considerations. For instance, HTTP/2 has some special callouts to CONNECT in its security considerations URL edit: I'm not claiming the same considerations apply here, just that server resource usage as a byproduct of proxying might be different than other request loads\nThat makes sense, I've written up to address these comments.\nThank you", "new_text": "could send more data to an unwilling target than a CONNECT proxy. However, in practice denial of service attacks target open TCP ports so the TCP SYN-ACK does not offer much protection in real scenarios. While a proxy could potentially limit the number of UDP packets it is willing to forward until it has observed a response from the target, that is unlikely to provide any protection against denial of service attacks because such attacks target open UDP ports where the protocol running over UDP would respond, and that would be interpreted as willingness to accept UDP by the proxy. UDP sockets for UDP proxying have a different lifetime than TCP sockets for CONNECT, therefore implementors would be well served to follow the advice in handling if they base their UDP proxying implementation on a preexisting implementation of CONNECT. The security considerations described in HTTP-DGRAM also apply here."}
{"id": "q-en-draft-ietf-masque-connect-udp-432c9b179cc86fd521e7595f72623d62c1402a270120ccb51f1f8fb9608a58f8", "old_text": "Therefore it needs to respond to the request without waiting for a packet from the target. However, if the target_host is a DNS name, the UDP proxy MUST perform DNS resolution before replying to the HTTP request. If errors occur during this process (for example, a DNS resolution failure), the UDP proxy MUST fail the request and SHOULD send details using an appropriate \"Proxy-Status\" header field PROXY-STATUS. UDP proxies can use connected UDP sockets if their operating system supports them, as that allows the UDP proxy to rely on the kernel to", "comments": "I also want to point out an issue due to this stance. Because we are not explicit and do not discuss the expectations and realities of different IP versions, we have interesting failure cases due to this text from Section 3.1: \u201cHowever, if the target_host is a DNS name, the UDP proxy MUST perform DNS resolution before replying to the HTTP request.\u201d Nowhere in the following paragraphs in this section does it discuss the slight issue that a DNS resolution can return only an AAAA record and the MASQUE server may only support IPv4. What is the error response? Should this be a proxy-status response? I would note there is currently no response for \u201cIP version not supported\u201d.\nThese concerns apply equally to TCP proxying via HTTP. We should strive to provide parity to how CONNECT treats these matters. HTTP semantics don't say anything\n+1 to Lucas' meta-comment about striving for parity with CONNECT There is no such thing as an \"unsupported IP version\" DNS failure. Let's take an example where the client wants to CONNECT-UDP via the UDP proxy to the target . This particular host is configured such that querying AAAA will get a valid address, and querying A will yield a \"no answer no error\" DNS result. Let's assume that in this example the UDP proxy is IPv4-only. 
In that scenario, the UDP proxy will only query A (because it doesn't know what AAAA is) and it will get \"no result no answer\" so it will send\nNAME in light of the previous comments, can you clarify what you think is missing from the document please?\nFirst of all, I think that just answering \"let's do like CONNECT\" is a poor answer. There are benefits to alignment, but also one should ask if one is preventing useful functionality. In this case I think there is a relevant difference between what could be answered depending on how one implements stuff. Your suggested road says that for an IPv4-only proxy with a DNS name that has an AAAA record but no A record, the answer is: a) There is no answer For a node that would query both, the answer could actually be: b) There is a host, but I can't reach it as I don't support the right IP version, try another proxy. Sure, limiting ourselves to a) is easy, but is it the most useful for the protocol?\nDisagree it's a poor answer. An HTTP proxy that implements CONNECT for UDP is likely going to implement CONNECT for TCP. How to treat name resolution for both of these has to be consistent. The Proxy-Status header field is the appropriate way to signal such failures. Any text more complicated than what we have now should be pursued in a separate document IMO. Not added at this late stage. It would be straightforward to form up a document like \"DNS considerations for HTTP proxying\" that is additive to the specifications we have.\nI agree: Proxy-Status is the right place for this discussion, and CONNECT-UDP already recommends Proxy-Status. Also, as a technical matter, this is not how dual-stack connectivity works. As noted above, an IPv4-only UDP client will never issue an AAAA query for a destination hostname.\nI agree with what Lucas and Ben said above. Let's move this discussion to an extension draft.\nI agree that this discussion probably deserves to be in an extension draft. 
As it stands today, a DNS failure as described in this thread can just as easily result in a destination-unreachable / connection-closed error for the CONNECT-UDP stream, and an extension to provide detailed information as to why seems like the natural path forward.\nProxy status is absolutely the right thing to do here. The proxy-status is what we are already doing in production for CONNECT and CONNECT-UDP. The current text seems entirely appropriate for this: The only thing I could see saying further is to mention the error parameter, like this: I do not think we need any extension draft for this. This is very straightforward, and doesn't need to say much.\nThanks Tommy, I think that's a useful clarification. I wrote up PR based on your proposal to make this clearer\nOk, I am fine with this resolution.\nThank you for confirming, Magnus!", "new_text": "Therefore it needs to respond to the request without waiting for a packet from the target. However, if the target_host is a DNS name, the UDP proxy MUST perform DNS resolution before replying to the HTTP request. If errors occur during this process, the UDP proxy MUST fail the request and SHOULD send details using an appropriate \"Proxy-Status\" header field PROXY-STATUS (for example, if DNS resolution returns an error, the proxy can use the dns_error Proxy Error Type from PROXY-STATUS). UDP proxies can use connected UDP sockets if their operating system supports them, as that allows the UDP proxy to rely on the kernel to"}
{"id": "q-en-draft-ietf-masque-connect-udp-17a9895709ff7bd541e1ec5ab3726ce5dbe472f49114295c4c726bac217cc80e", "old_text": "servers that support UDP proxying ought to restrict its use to authenticated users. UDP proxies share many similarities to TCP CONNECT proxies when considering them as infrastructure for abuse to enable denial of service attacks. Both can obfuscate the attacker's source address", "comments": "This was prompted by .\nLGTM. Nice and tight. URL Erik Kline has entered the following ballot position for draft-ietf-masque-connect-udp-14: No Objection When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about how to handle DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: CC NAME Should the 2xx discussion here include additional reference to the extended commentary in masque-h3-datagram#section-3.2? I really wish there could be more said about tunnel security. I was tempted to try to formulate some DISCUSS on the matter, but I cannot seem to see where any more extensive text was ever written for the CONNECT method. Nevertheless, there are plenty of concerns that could be mentioned, like what should a proxy do if the target_host is one of its own IP addresses, or a link-local address, and so on. I'm not entirely convinced that it suffices to claim that because the tunnel is not an IP-in-IP tunnel certain considerations don't apply. Anything and everything can be tunneled over UDP, especially if RFC 8086 GRE-in-UDP is what the payloads carry (or ESP, of course). It seems like most of the points raised in RFC 6169 apply; perhaps worth a mention. 
But as I said, I think all these concerns apply equally to CONNECT (in principle, anyway), and the text here is commensurate with the text for its TCP cousin (that I could find, anyway).", "new_text": "servers that support UDP proxying ought to restrict its use to authenticated users. There exist software and network deployments that perform access control checks based on the source IP address of incoming requests. For example, some software allows unauthenticated configuration changes if they originated from 127.0.0.1. Such software could be running on the same host as the UDP proxy, or in the same broadcast domain. Proxied UDP traffic would then be received with a source IP address belonging to the UDP proxy. If this source address is used for access control, UDP proxying clients could use the UDP proxy to escalate their access privileges beyond those they might otherwise have. This could lead to unauthorized access by UDP proxying clients unless the UDP proxy disallows UDP proxying requests to vulnerable targets, such as localhost names and addresses, link-local addresses, the UDP proxy's own addresses, multicast and broadcast addresses. UDP proxies can use the destination_ip_prohibited Proxy Error Type from PROXY-STATUS when rejecting such requests. UDP proxies share many similarities to TCP CONNECT proxies when considering them as infrastructure for abuse to enable denial of service attacks. Both can obfuscate the attacker's source address"}
{"id": "q-en-draft-ietf-masque-connect-udp-17a9895709ff7bd541e1ec5ab3726ce5dbe472f49114295c4c726bac217cc80e", "old_text": "could also cause issues for valid traffic. The security considerations described in HTTP-DGRAM also apply here. 8.", "comments": "This was prompted by .\nLGTM. Nice and tight. URL Erik Kline has entered the following ballot position for draft-ietf-masque-connect-udp-14: No Objection When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about how to handle DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: CC NAME Should the 2xx discussion here include additional reference to the extended commentary in masque-h3-datagram#section-3.2? I really wish there could be more said about tunnel security. I was tempted to try to formulate some DISCUSS on the matter, but I cannot seem to see where any more extensive text was ever written for the CONNECT method. Nevertheless, there are plenty of concerns that could be mentioned, like what should a proxy do if the target_host is one of its own IP addresses, or a link-local address, and so on. I'm not entirely convinced that it suffices to claim that because the tunnel is not an IP-in-IP tunnel certain considerations don't apply. Anything and everything can be tunneled over UDP, especially if RFC 8086 GRE-in-UDP is what the payloads carry (or ESP, of course). It seems like most of the points raised in RFC 6169 apply; perhaps worth a mention. But as I said, I think all these concerns apply equally to CONNECT (in principle, anyway), and the text here is commensurate with the text for its TCP cousin (that I could find, anyway).", "new_text": "could also cause issues for valid traffic. The security considerations described in HTTP-DGRAM also apply here. 
Since it is possible to tunnel IP packets over UDP, the guidance in TUNNEL-SECURITY can apply. 8."}
{"id": "q-en-draft-ietf-masque-connect-udp-b8e27560cb26f1ac638096636cfdbe665920c03bde3f02c7e37819776d534494", "old_text": "3. This document defines the \"masque-udp\" HTTP Upgrade Token. \"masque-udp\" uses the Capsule Protocol as defined in HTTP-DGRAM. A \"masque-udp\" request requests that the recipient establish a tunnel over a single HTTP stream to the destination target server identified by the \"target_host\" and \"target_port\" variables of the URI template (see client-config). If the request is successful, the proxy commits to converting received HTTP Datagrams into UDP packets and vice versa until the tunnel is closed. Tunnels are commonly used to create an end-to-end virtual connection, which can then be secured using QUIC QUIC or another protocol running over UDP. When sending its UDP proxying request, the client SHALL perform URI template expansion to determine the path and query of its request.", "comments": "Should the upgrade token be \"connect-udp\" or \"masque-udp\" ? (issue based on by NAME\nI vote for or something along those lines. \"MASQUE\" is good as a working group name, but I don't think it's helpful in the protocol to help people understand what the variant of CONNECT does.", "new_text": "3. This document defines the \"connect-udp\" HTTP Upgrade Token. \"connect-udp\" uses the Capsule Protocol as defined in HTTP-DGRAM. A \"connect-udp\" request requests that the recipient establish a tunnel over a single HTTP stream to the destination target server identified by the \"target_host\" and \"target_port\" variables of the URI template (see client-config). If the request is successful, the proxy commits to converting received HTTP Datagrams into UDP packets and vice versa until the tunnel is closed. Tunnels are commonly used to create an end-to-end virtual connection, which can then be secured using QUIC QUIC or another protocol running over UDP. 
When sending its UDP proxying request, the client SHALL perform URI template expansion to determine the path and query of its request."}
{"id": "q-en-draft-ietf-masque-connect-udp-b8e27560cb26f1ac638096636cfdbe665920c03bde3f02c7e37819776d534494", "old_text": "\"Upgrade\". the request SHALL include a single \"Upgrade\" header with value \"masque-udp\". For example, if the client is configured with URI template \"https://proxy.example.org/{target_host}/{target_port}/\" and wishes", "comments": "Should the upgrade token be \"connect-udp\" or \"masque-udp\" ? (issue based on by NAME\nI vote for or something along those lines. \"MASQUE\" is good as a working group name, but I don't think it's helpful in the protocol to help people understand what the variant of CONNECT does.", "new_text": "\"Upgrade\". the request SHALL include a single \"Upgrade\" header with value \"connect-udp\". For example, if the client is configured with URI template \"https://proxy.example.org/{target_host}/{target_port}/\" and wishes"}
{"id": "q-en-draft-ietf-masque-connect-udp-b8e27560cb26f1ac638096636cfdbe665920c03bde3f02c7e37819776d534494", "old_text": "\"Upgrade\". the response SHALL include a single \"Upgrade\" header with value \"masque-udp\". the response SHALL NOT include any Transfer-Encoding or Content-Length header fields.", "comments": "Should the upgrade token be \"connect-udp\" or \"masque-udp\" ? (issue based on by NAME\nI vote for or something along those lines. \"MASQUE\" is good as a working group name, but I don't think it's helpful in the protocol to help people understand what the variant of CONNECT does.", "new_text": "\"Upgrade\". the response SHALL include a single \"Upgrade\" header with value \"connect-udp\". the response SHALL NOT include any Transfer-Encoding or Content-Length header fields."}
{"id": "q-en-draft-ietf-masque-connect-udp-b8e27560cb26f1ac638096636cfdbe665920c03bde3f02c7e37819776d534494", "old_text": "The \":method\" pseudo-header field SHALL be \"CONNECT\". The \":protocol\" pseudo-header field SHALL be \"masque-udp\". The \":authority\" pseudo-header field SHALL contain the authority of the proxy.", "comments": "Should the upgrade token be \"connect-udp\" or \"masque-udp\" ? (issue based on by NAME\nI vote for or something along those lines. \"MASQUE\" is good as a working group name, but I don't think it's helpful in the protocol to help people understand what the variant of CONNECT does.", "new_text": "The \":method\" pseudo-header field SHALL be \"CONNECT\". The \":protocol\" pseudo-header field SHALL be \"connect-udp\". The \":authority\" pseudo-header field SHALL contain the authority of the proxy."}
{"id": "q-en-draft-ietf-masque-connect-udp-b8e27560cb26f1ac638096636cfdbe665920c03bde3f02c7e37819776d534494", "old_text": "7.1. This document will request IANA to register \"masque-udp\" in the HTTP Upgrade Token Registry maintained at <>. masque-udp Proxying of UDP Payloads.", "comments": "Should the upgrade token be \"connect-udp\" or \"masque-udp\" ? (issue based on by NAME\nI vote for or something along those lines. \"MASQUE\" is good as a working group name, but I don't think it's helpful in the protocol to help people understand what the variant of CONNECT does.", "new_text": "7.1. This document will request IANA to register \"connect-udp\" in the HTTP Upgrade Token Registry maintained at <>. connect-udp Proxying of UDP Payloads."}
{"id": "q-en-draft-ietf-masque-connect-udp-f1fa4c1702f6775f5e2278ea58b723eca37f0af9f7a5c2430f7c02922b34b506", "old_text": "to configure the proxy host and the proxy port. Client implementations of this specification that are constrained by such limitations MUST use the default template which is defined as: \"https://$PROXY_HOST:$PROXY_PORT/{target_host}/{target_port}/\" where $PROXY_HOST and $PROXY_PORT are the configured host and port of the proxy respectively. Proxy deployments SHOULD use the default template to facilitate interoperability with such clients. 3.", "comments": "The default template's purpose is to enable compatibility with origin-only proxy configuration schemes. This is a worthwhile goal, but the default template is not a suitable solution: It violates RFC 8615's definition of .well-known/ as the location for metadata about an origin. A MASQUE service is not metadata. It is not representable as a .well-known \"URI suffix\" as required by the .well-known registry (unless we register all (2^128 + 2^32)2^16 distinct paths). It conflicts with better discovery mechanisms, like the one in the MASQUE development process. It is not compatible with the use of HTTPS records. (In the targeted legacy context, the client has no other DNS server that can be used when a proxy is in use.) This is somewhat ironic, as HTTPS records allow bootstrapping QUIC connections, and proxying for QUIC is the primary motivation for CONNECT-UDP. We should defer this problem to a subsequent draft, and provide a cohesive proxy setup flow, presumably with a standardized configuration object that could be served from .well-known/ and extended to cover CONNECT-IP, DoH and other relevant capabilities.\nFundamentally I disagree with your assessment that this solution isn't good and should be removed. This serves a real purpose. 
Let's go into the issues individually: Those points are not an argument to remove the default template, they are an argument to revert PR and switch back to a non-well-known prefix. Let's set those aside for now. I don't see a conflict here: the default template doesn't preclude using other templates that have been discovered through other means. Can you clarify what the conflict is? Can you clarify what the incompatibility is here? I don't understand what the relationship between the default template and HTTPS records is. Clients are welcome to query their own HTTPS records before using CONNECT or CONNECT-UDP. The way that the client picks its template has no impact on its DNS settings.\n... This is called \"squatting on the HTTP namespace\", and it would violate the in RFC 7320. It's also messy to implement, as it would require mixing MASQUE-specific \"routing\" rules with the user's application-specific routing rules. It doesn't prevent better discovery mechanisms, but it doesn't encourage them either. From the server side, there would be no way to \"opt out\" of the default path, due to the potential for clients that have implemented this specification but not a later, better discovery mechanism. Servers that would prefer a different path would still have to operate the default path for backwards-compatibility. The relationship is that the default template cannot express the existence of a DNS server affiliated with the MASQUE server. No, they are not, as this would violate the privacy guarantees implicit in the legacy proxy settings UI that this feature is trying to preserve.\nYou're conflating two types of proxies here: traditional enterprise proxies that speak CONNECT today and can be updated to speak CONNECT-UDP privacy proxies designed to protect a user's privacy from the network Folks won't ever be deploying both of these use cases on the same proxy. Your concerns are valid but they only apply to (2). (2) would never use the default template. 
The default template only applies to (1).\nClient software generally can't tell why a proxy is being used. It just knows that an HTTP CONNECT proxy has been configured with a particular protocol, domain, and port. Its behavior has to be correct for any existing use case. I don't see why privacy proxies wouldn't use the default template. Privacy proxies that are configured via the existing Secure Web Proxy configuration mechanisms (e.g. browser extensions, system settings) would have no other way to use MASQUE. Also, I think this distinction is false. Enterprises are free to use HTTP CONNECT proxies to conceal certain information from the network (e.g. using an employees-only browser extension), and enterprises are often opinionated about which DNS server ought to be used (not controllable via a browser extension).\nFor completeness, here's how I think this ought to look: URL\nYour discovery draft sounds great - however the default template doesn't get in the way of that. I don't think it's reasonable to remove a useful feature because we might have a more complete solution later when the useful feature doesn't get in the way of the future complete solution.\nShould the default URI template have a /.well-known prefix? For example, ? (issue based on by NAME\nWhile you are at it..., how about something simpler? would put the host and port in query parameters AND it would have smaller, shorter names.\nHaving implemented this, placing the variables in the path is easier to parse on the proxy than placing them in query parameters\nFrom the discussion in the room at IETF 113, it appeared folks were OK with . I'm going to pick that to close this issue, and we can open another issue during WGLC if we really want to bikeshed this further\nIt sounds like you're writing your own URL parser? 
Any reasonable URL library will handle this for you.\nI don't think placing these things in path components by default is reasonable.", "new_text": "to configure the proxy host and the proxy port. Client implementations of this specification that are constrained by such limitations MUST use the default template which is defined as: \"https://$PROXY_HOST:$PROXY_PORT/.well-known/masque/udp/{target_host}/{target_port}/\" where $PROXY_HOST and $PROXY_PORT are the configured host and port of the proxy respectively. Proxy deployments SHOULD use the default template to facilitate interoperability with such clients. 3."
{"id": "q-en-draft-ietf-masque-connect-udp-f1fa4c1702f6775f5e2278ea58b723eca37f0af9f7a5c2430f7c02922b34b506", "old_text": "\"connect-udp\". For example, if the client is configured with URI template \"https://proxy.example.org/{target_host}/{target_port}/\" and wishes to open a UDP proxying tunnel to target 192.0.2.42:443, it could send the following request: 3.3.", "comments": "The default template's purpose is to enable compatibility with origin-only proxy configuration schemes. This is a worthwhile goal, but the default template is not a suitable solution: It violates RFC 8615's definition of .well-known/ as the location for metadata about an origin. A MASQUE service is not metadata. It is not representable as a .well-known \"URI suffix\" as required by the .well-known registry (unless we register all (2^128 + 2^32)2^16 distinct paths). It conflicts with better discovery mechanisms, like the one in the MASQUE development process. It is not compatible with the use of HTTPS records. (In the targeted legacy context, the client has no other DNS server that can be used when a proxy is in use.) This is somewhat ironic, as HTTPS records allow bootstrapping QUIC connections, and proxying for QUIC is the primary motivation for CONNECT-UDP. We should defer this problem to a subsequent draft, and provide a cohesive proxy setup flow, presumably with a standardized configuration object that could be served from .well-known/ and extended to cover CONNECT-IP, DoH and other relevant capabilities.\nFundamentally I disagree with your assessment that this solution isn't good and should be removed. This serves a real purpose. Let's go into the issues individually: Those points are not an argument to remove the default template, they are an argument to revert PR and switch back to a non-well-known prefix. Let's set those aside for now. I don't see a conflict here: the default template doesn't preclude using other templates that have been discovered through other means. 
Can you clarify what the conflict is? Can you clarify what the incompatibility is here? I don't understand what the relationship between the default template and HTTPS records is. Clients are welcome to query their own HTTPS records before using CONNECT or CONNECT-UDP. The way that the client picks its template has no impact on its DNS settings.\n... This is called \"squatting on the HTTP namespace\", and it would violate the in RFC 7320. It's also messy to implement, as it would require mixing MASQUE-specific \"routing\" rules with the user's application-specific routing rules. It doesn't prevent better discovery mechanisms, but it doesn't encourage them either. From the server side, there would be no way to \"opt out\" of the default path, due to the potential for clients that have implemented this specification but not a later, better discovery mechanism. Servers that would prefer a different path would still have to operate the default path for backwards-compatibility. The relationship is that the default template cannot express the existence of a DNS server affiliated with the MASQUE server. No, they are not, as this would violate the privacy guarantees implicit in the legacy proxy settings UI that this feature is trying to preserve.\nYou're conflating two types of proxies here: traditional enterprise proxies that speak CONNECT today and can be updated to speak CONNECT-UDP privacy proxies designed to protect a user's privacy from the network Folks won't ever be deploying both of these use cases on the same proxy. Your concerns are valid but they only apply to (2). (2) would never use the default template. The default template only applies to (1).\nClient software generally can't tell why a proxy is being used. It just knows that an HTTP CONNECT proxy has been configured with a particular protocol, domain, and port. Its behavior has to be correct for any existing use case. I don't see why privacy proxies wouldn't use the default template. 
Privacy proxies that are configured via the existing Secure Web Proxy configuration mechanisms (e.g. browser extensions, system settings) would have no other way to use MASQUE. Also, I think this distinction is false. Enterprises are free to use HTTP CONNECT proxies to conceal certain information from the network (e.g. using an employees-only browser extension), and enterprises are often opinionated about which DNS server ought to be used (not controllable via a browser extension).\nFor completeness, here's how I think this ought to look: URL\nYour discovery draft sounds great - however the default template doesn't get in the way of that. I don't think it's reasonable to remove a useful feature because we might have a more complete solution later when the useful feature doesn't get in the way of the future complete solution.\nShould the default URI template have a /.well-known prefix? For example, ? (issue based on by NAME\nWhile you are at it..., how about something simpler? would put the host and port in query parameters AND it would have smaller, shorter names.\nHaving implemented this, placing the variables in the path is easier to parse on the proxy than placing them in query parameters\nFrom the discussion in the room at IETF 113, it appeared folks were OK with . I'm going to pick that to close this issue, and we can open another issue during WGLC if we really want to bikeshed this further\nIt sounds like you're writing your own URL parser? Any reasonable URL library will handle this for you.\nI don't think placing these things in path components by default is reasonable.", "new_text": "\"connect-udp\". For example, if the client is configured with URI template \"https://proxy.example.org/.well-known/masque/udp/{target_host}/{target_port}/\" and wishes to open a UDP proxying tunnel to target 192.0.2.42:443, it could send the following request: 3.3."}
{"id": "q-en-draft-ietf-masque-connect-udp-f1fa4c1702f6775f5e2278ea58b723eca37f0af9f7a5c2430f7c02922b34b506", "old_text": "This document will request IANA to register \"connect-udp\" in the HTTP Upgrade Token Registry maintained at <>. ", "comments": "The default template's purpose is to enable compatibility with origin-only proxy configuration schemes. This is a worthwhile goal, but the default template is not a suitable solution: It violates RFC 8615's definition of .well-known/ as the location for metadata about an origin. A MASQUE service is not metadata. It is not representable as a .well-known \"URI suffix\" as required by the .well-known registry (unless we register all (2^128 + 2^32)2^16 distinct paths). It conflicts with better discovery mechanisms, like the one in the MASQUE development process. It is not compatible with the use of HTTPS records. (In the targeted legacy context, the client has no other DNS server that can be used when a proxy is in use.) This is somewhat ironic, as HTTPS records allow bootstrapping QUIC connections, and proxying for QUIC is the primary motivation for CONNECT-UDP. We should defer this problem to a subsequent draft, and provide a cohesive proxy setup flow, presumably with a standardized configuration object that could be served from .well-known/ and extended to cover CONNECT-IP, DoH and other relevant capabilities.\nFundamentally I disagree with your assessment that this solution isn't good and should be removed. This serves a real purpose. Let's go into the issues individually: Those points are not an argument to remove the default template, they are an argument to revert PR and switch back to a non-well-known prefix. Let's set those aside for now. I don't see a conflict here: the default template doesn't preclude using other templates that have been discovered through other means. Can you clarify what the conflict is? Can you clarify what the incompatibility is here? 
I don't understand what the relationship between the default template and HTTPS records is. Clients are welcome to query their own HTTPS records before using CONNECT or CONNECT-UDP. The way that the client picks its template has no impact on its DNS settings.\n... This is called \"squatting on the HTTP namespace\", and it would violate the in RFC 7320. It's also messy to implement, as it would require mixing MASQUE-specific \"routing\" rules with the user's application-specific routing rules. It doesn't prevent better discovery mechanisms, but it doesn't encourage them either. From the server side, there would be no way to \"opt out\" of the default path, due to the potential for clients that have implemented this specification but not a later, better discovery mechanism. Servers that would prefer a different path would still have to operate the default path for backwards-compatibility. The relationship is that the default template cannot express the existence of a DNS server affiliated with the MASQUE server. No, they are not, as this would violate the privacy guarantees implicit in the legacy proxy settings UI that this feature is trying to preserve.\nYou're conflating two types of proxies here: traditional enterprise proxies that speak CONNECT today and can be updated to speak CONNECT-UDP privacy proxies designed to protect a user's privacy from the network Folks won't ever be deploying both of these use cases on the same proxy. Your concerns are valid but they only apply to (2). (2) would never use the default template. The default template only applies to (1).\nClient software generally can't tell why a proxy is being used. It just knows that an HTTP CONNECT proxy has been configured with a particular protocol, domain, and port. Its behavior has to be correct for any existing use case. I don't see why privacy proxies wouldn't use the default template. Privacy proxies that are configured via the existing Secure Web Proxy configuration mechanisms (e.g. 
browser extensions, system settings) would have no other way to use MASQUE. Also, I think this distinction is false. Enterprises are free to use HTTP CONNECT proxies to conceal certain information from the network (e.g. using an employees-only browser extension), and enterprises are often opinionated about which DNS server ought to be used (not controllable via a browser extension).\nFor completeness, here's how I think this ought to look: URL\nYour discovery draft sounds great - however the default template doesn't get in the way of that. I don't think it's reasonable to remove a useful feature because we might have a more complete solution later when the useful feature doesn't get in the way of the future complete solution.\nShould the default URI template have a /.well-known prefix? For example, ? (issue based on by NAME\nWhile you are at it..., how about something simpler? would put the host and port in query parameters AND it would have smaller, shorter names.\nHaving implemented this, placing the variables in the path is easier to parse on the proxy than placing them in query parameters\nFrom the discussion in the room at IETF 113, it appeared folks were OK with . I'm going to pick that to close this issue, and we can open another issue during WGLC if we really want to bikeshed this further\nIt sounds like you're writing your own URL parser? Any reasonable URL library will handle this for you.\nI don't think placing these things in path components by default is reasonable.", "new_text": "This document will request IANA to register \"connect-udp\" in the HTTP Upgrade Token Registry maintained at <>. 8.2. This document will request IANA to register \"masque/udp\" in the Well-Known URIs Registry maintained at <>. "}
{"id": "q-en-draft-ietf-masque-h3-datagram-84e22876b3374b92f7ee591218a325a0841cde9d80aadd58c24a9ef7534ea512", "old_text": "directions. In HTTP/1.x, the data stream consists of all bytes on the connection that follow the blank line that concludes either the request header section, or the 2xx (Successful) response header section. In HTTP/2 and HTTP/3, the data stream of a given HTTP request consists of all bytes sent in DATA frames with the corresponding stream ID. The concept of a data stream is particularly relevant for methods such as CONNECT where there is no HTTP message content after the headers. Note that use of the Capsule Protocol is not required to use HTTP Datagrams. If a new HTTP Upgrade Token is only defined over", "comments": "In Section 2: When running over HTTP/1, requests are strictly serialized in the connection, therefore the first layer of demultiplexing is not needed. Is there any actual possibility to run more than a single request with HTTP datagram support over a transport connection? When one runs an HTTP request that enables datagram I don't see how it would be possible to close that request in any other way than closing the underlying transport connection. Or is there a way that I am missing that would enable one to close this request, and then issue another pipelined HTTP request that also uses HTTP datagrams? Is there a need to clarify this, or make it clear that only a single HTTP request with datagram can be sent?\nIn section 2, we've yet to mention anything about how the rest of the spec operates. So the comment in isolation is correct when talking about the general concept of multiplexing. I see your point but I'm not sure saying anything more here will help. Section 4.1 provides the details that implementers need to do the right thing.\nOkay, but let's include Section 4.1 also into this. 
I assume you are referring to this sentence: In HTTP/1.x, the data stream consists of all bytes on the connection that follow the blank line that concludes either the request header section, or the 2xx (Successful) response header section. But, per URL these body length rules will be those that could govern a closure of an HTTP request enabling datagram. Would it be realistic to actually make it clear that if one enables HTTP Datagrams, transport close is the only way, or is that just creating additional rules making implementation even more complex? Otherwise, do we need additional analysis of whether there is a possibility that any of these rules could actually result in closing an HTTP request?\nThere is no HTTP content on upgrade requests, we should link to the definition of Upgrade URL and make things slightly more clear in Section 4.1.\nOk, I guess it becomes a question of which method for enabling datagram one uses, if this is an upgrade or a new method. And I would note that the Connect method appears to have some interesting exceptions compared to other methods. Which I think makes this into a requirement on the using method/upgrade that uses HTTP datagrams to actually specify this aspect.\nI've written up to clarify that pipelining is not supported.", "new_text": "directions. In HTTP/1.x, the data stream consists of all bytes on the connection that follow the blank line that concludes either the request header section, or the 2xx (Successful) response header section. (Note that only a single HTTP request starting the capsule protocol can be sent on HTTP/1.x connections.) In HTTP/2 and HTTP/3, the data stream of a given HTTP request consists of all bytes sent in DATA frames with the corresponding stream ID. The concept of a data stream is particularly relevant for methods such as CONNECT where there is no HTTP message content after the headers. Note that use of the Capsule Protocol is not required to use HTTP Datagrams. If a new HTTP Upgrade Token is only defined over
{"id": "q-en-draft-ietf-masque-h3-datagram-73d18062665e17e45f31526d4ffd24943b771444a9715b10793b423611982fde", "old_text": "versions. format defines how QUIC DATAGRAM frames are used with HTTP/3. setting defines an HTTP/3 setting that endpoints can use to advertise support of the frame. capsule introduces the Capsule Protocol and the \"data stream\" concept. Data streams are initiated using special-purpose HTTP", "comments": "When reviewing the latest editors' copy, the document structure leaps out as a bit odd by jumping from section 3 to section 5. That indicates to me these concepts are linked and also independent of the capsule protocol. So here I propose a minor restructure by moving the H3_DATAGRAM setting stuff to be a subsection of the HTTP/3 datagram format. This makes all the top-layer sections loosely coupled, which is nicer in my opinion.", "new_text": "versions. format defines how QUIC DATAGRAM frames are used with HTTP/3. setting defines an HTTP/3 setting that endpoints can use to advertise support of the frame. capsule introduces the Capsule Protocol and the \"data stream\" concept. Data streams are initiated using special-purpose HTTP"}
{"id": "q-en-draft-ietf-masque-h3-datagram-73d18062665e17e45f31526d4ffd24943b771444a9715b10793b423611982fde", "old_text": "if they define semantics for such HTTP Datagrams and negotiate mutual support. 4. This specification introduces the Capsule Protocol. The Capsule", "comments": "When reviewing the latest editors' copy, the document structure leaps out as a bit odd by jumping from section 3 to section 5. That indicates to me these concepts are linked and also independent of the capsule protocol. So here I propose a minor restructure by moving the H3_DATAGRAM setting stuff to be a subsection of the HTTP/3 datagram format. This makes all the top-layer sections loosely coupled, which is nicer in my opinion.", "new_text": "if they define semantics for such HTTP Datagrams and negotiate mutual support. 3.1. Implementations of HTTP/3 that support HTTP Datagrams can indicate that to their peer by sending the H3_DATAGRAM SETTINGS parameter with a value of 1. The value of the H3_DATAGRAM SETTINGS parameter MUST be either 0 or 1. A value of 0 indicates that HTTP Datagrams are not supported. An endpoint that receives the H3_DATAGRAM SETTINGS parameter with a value that is neither 0 or 1 MUST terminate the connection with error H3_SETTINGS_ERROR. Endpoints MUST NOT send QUIC DATAGRAM frames until they have both sent and received the H3_DATAGRAM SETTINGS parameter with a value of 1. When clients use 0-RTT, they MAY store the value of the server's H3_DATAGRAM SETTINGS parameter. Doing so allows the client to send QUIC DATAGRAM frames in 0-RTT packets. When servers decide to accept 0-RTT data, they MUST send a H3_DATAGRAM SETTINGS parameter greater than or equal to the value they sent to the client in the connection where they sent them the NewSessionTicket message. 
If a client stores the value of the H3_DATAGRAM SETTINGS parameter with their 0-RTT state, they MUST validate that the new value of the H3_DATAGRAM SETTINGS parameter sent by the server in the handshake is greater than or equal to the stored value; if not, the client MUST terminate the connection with error H3_SETTINGS_ERROR. In all cases, the maximum permitted value of the H3_DATAGRAM SETTINGS parameter is 1. It is RECOMMENDED that implementations that support receiving HTTP Datagrams using QUIC always send the H3_DATAGRAM SETTINGS parameter with a value of 1, even if the application does not intend to use HTTP Datagrams. This helps to avoid \"sticking out\"; see security. 3.1.1. [[RFC editor: please remove this section before publication.]] Some revisions of this draft specification use a different value (the Identifier field of a Setting in the HTTP/3 SETTINGS frame) for the H3_DATAGRAM Settings Parameter. This allows new draft revisions to make incompatible changes. Multiple draft versions MAY be supported by either endpoint in a connection. Such endpoints MUST send multiple values for H3_DATAGRAM. Once an endpoint has sent and received SETTINGS, it MUST compute the intersection of the values it has sent and received, and then it MUST select and use the most recent draft version from the intersection set. This ensures that both endpoints negotiate the same draft version. 4. This specification introduces the Capsule Protocol. The Capsule"}
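The H3_DATAGRAM SETTINGS rules quoted in the record above reduce to a small amount of validation logic. Below is a hedged Python sketch: the function names and the `ProtocolError` class are illustrative assumptions, and the `H3_SETTINGS_ERROR` code point is taken from RFC 9114 (the draft revisions under discussion may have used a different value).

```python
from typing import Optional


class ProtocolError(Exception):
    """Connection error carrying an HTTP/3 error code (illustrative)."""

    def __init__(self, code: int, reason: str):
        super().__init__(f"{code:#x}: {reason}")
        self.code = code


# H3_SETTINGS_ERROR code point from RFC 9114; an assumption here.
H3_SETTINGS_ERROR = 0x0109


def h3_datagram_supported(received: int, stored_0rtt: Optional[int] = None) -> bool:
    """Apply the validation rules for a received H3_DATAGRAM SETTINGS value."""
    if received not in (0, 1):
        # A value that is neither 0 nor 1 is a connection error.
        raise ProtocolError(H3_SETTINGS_ERROR, "H3_DATAGRAM must be 0 or 1")
    if stored_0rtt is not None and received < stored_0rtt:
        # The handshake value must be >= the value the client stored
        # alongside its 0-RTT state.
        raise ProtocolError(H3_SETTINGS_ERROR, "H3_DATAGRAM regressed across 0-RTT")
    return received == 1


def may_send_quic_datagrams(sent: int, received: int) -> bool:
    # QUIC DATAGRAM frames require having both sent and received the
    # setting with a value of 1.
    return sent == 1 and received == 1
```

The 0-RTT check only rejects a *decrease*, matching the "greater than or equal" wording; an upgrade from 0 to 1 across resumption is allowed.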
{"id": "q-en-draft-ietf-masque-h3-datagram-73d18062665e17e45f31526d4ffd24943b771444a9715b10793b423611982fde", "old_text": "5. Implementations of HTTP/3 that support HTTP Datagrams can indicate that to their peer by sending the H3_DATAGRAM SETTINGS parameter with a value of 1. The value of the H3_DATAGRAM SETTINGS parameter MUST be either 0 or 1. A value of 0 indicates that HTTP Datagrams are not supported. An endpoint that receives the H3_DATAGRAM SETTINGS parameter with a value that is neither 0 or 1 MUST terminate the connection with error H3_SETTINGS_ERROR. Endpoints MUST NOT send QUIC DATAGRAM frames until they have both sent and received the H3_DATAGRAM SETTINGS parameter with a value of 1. When clients use 0-RTT, they MAY store the value of the server's H3_DATAGRAM SETTINGS parameter. Doing so allows the client to send QUIC DATAGRAM frames in 0-RTT packets. When servers decide to accept 0-RTT data, they MUST send a H3_DATAGRAM SETTINGS parameter greater than or equal to the value they sent to the client in the connection where they sent them the NewSessionTicket message. If a client stores the value of the H3_DATAGRAM SETTINGS parameter with their 0-RTT state, they MUST validate that the new value of the H3_DATAGRAM SETTINGS parameter sent by the server in the handshake is greater than or equal to the stored value; if not, the client MUST terminate the connection with error H3_SETTINGS_ERROR. In all cases, the maximum permitted value of the H3_DATAGRAM SETTINGS parameter is 1. It is RECOMMENDED that implementations that support receiving HTTP Datagrams using QUIC always send the H3_DATAGRAM SETTINGS parameter with a value of 1, even if the application does not intend to use HTTP Datagrams. This helps to avoid \"sticking out\"; see security. 5.1. 
[[RFC editor: please remove this section before publication.]] Some revisions of this draft specification use a different value (the Identifier field of a Setting in the HTTP/3 SETTINGS frame) for the H3_DATAGRAM Settings Parameter. This allows new draft revisions to make incompatible changes. Multiple draft versions MAY be supported by either endpoint in a connection. Such endpoints MUST send multiple values for H3_DATAGRAM. Once an endpoint has sent and received SETTINGS, it MUST compute the intersection of the values it has sent and received, and then it MUST select and use the most recent draft version from the intersection set. This ensures that both endpoints negotiate the same draft version. 6. Data streams (see capsule-protocol) can be prioritized using any means suited to stream or request prioritization. For example, see Section 11 of PRIORITY.", "comments": "When reviewing the latest editors copy, the document structure leaps out at a bit odd by jumping from section 3 to section 5. That indicates to me these concepts are linked and also independent of the capsule protocol. So here I propose a minor restructure by moving the H3_DATAGRAM setting stuff to be a subsection of the HTTP/3 datagram format. This makes all the top-layer sections loosely coupled, which is nicer in my opinion.", "new_text": "5. Data streams (see capsule-protocol) can be prioritized using any means suited to stream or request prioritization. For example, see Section 11 of PRIORITY."}
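The draft-version negotiation described in this record — advertise one H3_DATAGRAM setting per supported draft revision, intersect the sent and received sets, pick the most recent member — can be sketched as follows. The identifier values in the test are placeholders, not real SETTINGS code points.

```python
def select_draft_version(sent, received, oldest_to_newest):
    """Return the most recent draft identifier both endpoints advertised.

    `oldest_to_newest` orders identifiers by draft recency; the newest
    member of the intersection wins. Returns None if there is no
    version in common.
    """
    common = set(sent) & set(received)
    if not common:
        return None  # no shared revision; HTTP Datagram support stays off
    return max(common, key=oldest_to_newest.index)
```

Because both endpoints run the same deterministic rule over the same two sets, they necessarily arrive at the same draft version, which is the property the record's MUSTs are after.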
{"id": "q-en-draft-ietf-masque-h3-datagram-73d18062665e17e45f31526d4ffd24943b771444a9715b10793b423611982fde", "old_text": "define signaling to allow endpoints to communicate their prioritization preferences. 7. Since transmitting HTTP Datagrams using QUIC DATAGRAM frames requires sending an HTTP/3 Settings parameter, it \"sticks out\". In other", "comments": "When reviewing the latest editors copy, the document structure leaps out at a bit odd by jumping from section 3 to section 5. That indicates to me these concepts are linked and also independent of the capsule protocol. So here I propose a minor restructure by moving the H3_DATAGRAM setting stuff to be a subsection of the HTTP/3 datagram format. This makes all the top-layer sections loosely coupled, which is nicer in my opinion.", "new_text": "define signaling to allow endpoints to communicate their prioritization preferences. 6. Since transmitting HTTP Datagrams using QUIC DATAGRAM frames requires sending an HTTP/3 Settings parameter, it \"sticks out\". In other"}
{"id": "q-en-draft-ietf-masque-h3-datagram-73d18062665e17e45f31526d4ffd24943b771444a9715b10793b423611982fde", "old_text": "Tokens, it is not accessible from Web Platform APIs (such as those commonly accessed via JavaScript in web browsers). 8. 8.1. This document will request IANA to register the following entry in the \"HTTP/3 Settings\" registry:", "comments": "When reviewing the latest editors copy, the document structure leaps out at a bit odd by jumping from section 3 to section 5. That indicates to me these concepts are linked and also independent of the capsule protocol. So here I propose a minor restructure by moving the H3_DATAGRAM setting stuff to be a subsection of the HTTP/3 datagram format. This makes all the top-layer sections loosely coupled, which is nicer in my opinion.", "new_text": "Tokens, it is not accessible from Web Platform APIs (such as those commonly accessed via JavaScript in web browsers). 7. 7.1. This document will request IANA to register the following entry in the \"HTTP/3 Settings\" registry:"}
{"id": "q-en-draft-ietf-masque-h3-datagram-73d18062665e17e45f31526d4ffd24943b771444a9715b10793b423611982fde", "old_text": "HTTP_WG; HTTP working group; ietf-http-wg@w3.org 8.2. This document will request IANA to register the following entry in the \"HTTP/3 Error Codes\" registry:", "comments": "When reviewing the latest editors copy, the document structure leaps out at a bit odd by jumping from section 3 to section 5. That indicates to me these concepts are linked and also independent of the capsule protocol. So here I propose a minor restructure by moving the H3_DATAGRAM setting stuff to be a subsection of the HTTP/3 datagram format. This makes all the top-layer sections loosely coupled, which is nicer in my opinion.", "new_text": "HTTP_WG; HTTP working group; ietf-http-wg@w3.org 7.2. This document will request IANA to register the following entry in the \"HTTP/3 Error Codes\" registry:"}
{"id": "q-en-draft-ietf-masque-h3-datagram-73d18062665e17e45f31526d4ffd24943b771444a9715b10793b423611982fde", "old_text": "HTTP_WG; HTTP working group; ietf-http-wg@w3.org 8.3. This document will request IANA to register the following entry in the \"HTTP Field Name\" registry:", "comments": "When reviewing the latest editors copy, the document structure leaps out at a bit odd by jumping from section 3 to section 5. That indicates to me these concepts are linked and also independent of the capsule protocol. So here I propose a minor restructure by moving the H3_DATAGRAM setting stuff to be a subsection of the HTTP/3 datagram format. This makes all the top-layer sections loosely coupled, which is nicer in my opinion.", "new_text": "HTTP_WG; HTTP working group; ietf-http-wg@w3.org 7.3. This document will request IANA to register the following entry in the \"HTTP Field Name\" registry:"}
{"id": "q-en-draft-ietf-masque-h3-datagram-73d18062665e17e45f31526d4ffd24943b771444a9715b10793b423611982fde", "old_text": "None 8.4. This document establishes a registry for HTTP capsule type codes. The \"HTTP Capsule Types\" registry governs a 62-bit space.", "comments": "When reviewing the latest editors copy, the document structure leaps out at a bit odd by jumping from section 3 to section 5. That indicates to me these concepts are linked and also independent of the capsule protocol. So here I propose a minor restructure by moving the H3_DATAGRAM setting stuff to be a subsection of the HTTP/3 datagram format. This makes all the top-layer sections loosely coupled, which is nicer in my opinion.", "new_text": "None 7.4. This document establishes a registry for HTTP capsule type codes. The \"HTTP Capsule Types\" registry governs a 62-bit space."}
{"id": "q-en-draft-ietf-masque-h3-datagram-9377fb75b6b8dc453f586db481c5672b53409545de26f60f703db48a8e5f557d", "old_text": "connection error of type H3_DATAGRAM_ERROR. HTTP/3 Datagrams MUST NOT be sent unless the corresponding stream's send side is open. Once the receive side of a stream is closed, incoming datagrams for this stream are no longer expected so related state can be released. State MAY be kept for a short time to account for reordering. Once the state is released, the received associated datagrams MUST be silently dropped. If an HTTP/3 datagram is received and its Quarter Stream ID maps to a stream that has not yet been created, the receiver SHALL either drop", "comments": "The draft currently states: However in PR , NAME points out that this \"state\" is not well defined. I agree, we should fix it. Thinking about this more, I don't think we need the concept of related state. We can instead just say that received datagrams MUST be silently dropped if the stream receive side is closed.\nI agree with removing the concept of state is better. I don't want to be overly nitty and the proposed sentence is better but there also nothing like \"the stream receive side\", as there can be multiple streams (open and closed), that's why I proposed Or if we don't want state, then it would be:\nThat's not quite right: there cannot be multiple streams, as datagrams are strongly associated with a client-initiated bidirectional stream. I'd phrase the paragraph as: open. If a datagram is received after the receive side of the corresponding stream is closed, the received datagrams MUST be silently dropped.\nThat works. Thanks!\nThis answer is necessary because QUIC doesn't guarantee ordering, but is this necessarily true indefinitely? But can I regard a datagram that arrives an hour after the stream closes as an error?\nIf you keep track of QUIC packet numbers, you can see that the peer closed the stream before it sent the datagram which means they violated a MUST. 
You're welcome to error if you detect that, but I don't think we should mandate this level of tracking.\nI'm not sure that that is a good strategy given how some deployments manage their packet numbers. I'm talking about maybe running a really long timer.\nThat introduces a way for an attacker to cause connection closure by delaying a specific packet. What problem are you trying to solve?\nGood point. I thought that we did similar in a few places in QUIC. As you say, it's possible to enforce, but tricky. The concern I'd have is that this effectively forbids enforcement, but maybe that's OK.", "new_text": "connection error of type H3_DATAGRAM_ERROR. HTTP/3 Datagrams MUST NOT be sent unless the corresponding stream's send side is open. If a datagram is received after the corresponding stream's receive side is closed, the received datagrams MUST be silently dropped. If an HTTP/3 datagram is received and its Quarter Stream ID maps to a stream that has not yet been created, the receiver SHALL either drop"}
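The dispatch and silent-drop behavior settled in this record might look like the sketch below. The Quarter Stream ID mapping (client-initiated bidirectional stream IDs are divisible by 4 and are sent divided by 4) is from the draft; the `streams` dictionary shape is a hypothetical stand-in for real connection state.

```python
def quarter_stream_id(stream_id: int) -> int:
    # Requests run on client-initiated bidirectional streams, whose
    # IDs are divisible by 4; the datagram carries stream_id / 4.
    if stream_id % 4 != 0:
        raise ValueError("not a client-initiated bidirectional stream")
    return stream_id // 4


def on_datagram(qsid: int, payload: bytes, streams: dict) -> None:
    stream = streams.get(qsid * 4)
    if stream is None:
        return  # stream not yet created: drop (or buffer briefly)
    if stream["recv_closed"]:
        return  # receive side closed: MUST be silently dropped
    stream["datagrams"].append(payload)
```

Note that, per the discussion above, a late datagram is simply dropped rather than treated as a connection error, since enforcing send-before-close ordering would require packet-number tracking and would hand an attacker a way to kill the connection by delaying a packet.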
{"id": "q-en-draft-ietf-masque-h3-datagram-4fcf79eab3d9a5234fc02d442a9bde8861e6afbf6aa8855739e6adf1b46c711b", "old_text": "An intermediary can reencode HTTP Datagrams as it forwards them. In other words, an intermediary MAY send a DATAGRAM capsule to forward an HTTP Datagram which was received in a QUIC DATAGRAM frame, and vice versa. Note that while DATAGRAM capsules that are sent on a stream are reliably delivered in order, intermediaries can reencode DATAGRAM", "comments": "End of section 3.5 () says: First of all this paragraph seems to be in the wrong section because it's more generally about the capsule protocol and not the datagram capsule. However, it's also confusing. I believe if you negotiate http datagram with the H3DATAGRAM SETTINGS parameter, you always need to also support capsules even if QUIC datagrams are supported, as the other end (e.g. an intermediary) could send you a capsule anyway. Or is this text meant to say that new extensions that use the http datagram but have a new settings parameter for negotiation may not need to support the capsule protocol? I think it would be good to clarify both: what this sentence/paragraph means and what is required if you negotiate http datagrams using the H3DATAGRAM SETTINGS parameter.\nI want to extend on this issue a bit more. A lot of the implementation requirements on different nodes are implied rather than being explicit. 
\"Secondly, I will note that the above text combined with what is written in Section 3.4: Intermediaries MAY use this header field to allow processing of HTTP Datagrams for unknown HTTP Upgrade Tokens; note that this is only possible for HTTP Upgrade or Extended CONNECT.\" If an intermediary is doing this unaware re-encoding and the HTTP extension is an extended connect version it can result in interoperability failure when a Datagram capsule arrives at the endpoint that only supports HTTP/3 Datagrams over QUIC Datagram.\nIf we build off the notion that H3DATAGRAM setting communicates a willingness to receive the HTTP/3 datagrams (in QUIC datagrams), they still require some request to bootstrap the association, otherwise those datagrams would be ignored. In theory, I could define a new method called \"ALWAYSUNRELIABLE\" that can be used to bootstrap a request stream and it be defined to never use a capsule protocol. If a server sends me data on that stream, I could ignore it or throw an error. I don't need a new setting to caveat this behaviour.\nNAME but then you can't have \"blind\" re-encoding of HTTP/3 Datagram to Datagram capsules, I think it is that simple. So we need to choose if HTTP Intermediaries can re-encode when they don't know the Upgrade Token or if they can't and must be aware and thus enable cases where some have different behaviors. See\nThere is no such thing as \"blind re-encoding\". Intermediaries can only perform re-encoding when they know that the capsule protocol is in use. Intermediaries can learn that by knowing the HTTP Upgrade Token, or by watching for the Capsule-Protocol header. But I think we can make that clearer by spelling out in the \"DATAGRAM Capsule\" section that intermediaries MUST NOT re-encode to capsules unless they know for sure that the capsule protocol is in use.\nPR did not address the editorial point that this paragraph is in the wrong section.\nI don't agree with that editorial point. 
The paragraph is about the interaction between HTTP datagrams and the capsule protocol, so having it in the DATAGRAM capsule section makes sense to me.\nIt's about the interaction between http datagrams (section 2) and the capsule protocol (section 3). There is nothing related to datagram capsules (section 3.5) in this paragraph. I guess you could just add a new subsection 3.6 if you prefer to have it at the end.\nLGTM", "new_text": "An intermediary can reencode HTTP Datagrams as it forwards them. In other words, an intermediary MAY send a DATAGRAM capsule to forward an HTTP Datagram that was received in a QUIC DATAGRAM frame, and vice versa. Intermediaries MUST NOT perform this reencoding unless they have identified the use of the Capsule Protocol on the corresponding request stream; see capsule-protocol. Note that while DATAGRAM capsules that are sent on a stream are reliably delivered in order, intermediaries can reencode DATAGRAM"}
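The reencoding this record ends up permitting — forwarding an HTTP Datagram payload as a DATAGRAM capsule on the request stream — amounts to prefixing the payload with Capsule Type and Length varints. A hedged sketch follows; the `DATAGRAM_CAPSULE_TYPE` value of 0x00 matches the published RFC 9297, while the drafts under discussion used provisional code points.

```python
DATAGRAM_CAPSULE_TYPE = 0x00  # assumption: RFC 9297 code point


def encode_varint(v: int) -> bytes:
    # QUIC variable-length integer (RFC 9000, Section 16): the two
    # high bits of the first byte select a 1-, 2-, 4-, or 8-byte form.
    for bits, prefix, size in ((6, 0x00, 1), (14, 0x40, 2),
                               (30, 0x80, 4), (62, 0xC0, 8)):
        if v < (1 << bits):
            return (v | (prefix << (8 * (size - 1)))).to_bytes(size, "big")
    raise ValueError("value exceeds 62 bits")


def datagram_capsule(http_datagram_payload: bytes) -> bytes:
    # Only legal once the intermediary has identified that the Capsule
    # Protocol is in use on the corresponding request stream.
    return (encode_varint(DATAGRAM_CAPSULE_TYPE)
            + encode_varint(len(http_datagram_payload))
            + http_datagram_payload)
```

The payload itself passes through opaque, which is what preserves end-to-end extensibility across intermediaries.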
{"id": "q-en-draft-ietf-masque-h3-datagram-40eae41d74d2051e475083309a2a95791fcd23ce97c8dde18646a43a1d74a532", "old_text": "Tokens, it is not accessible from Web Platform APIs (such as those commonly accessed via JavaScript in web browsers). 5. 5.1.", "comments": "Written in response to\nFrom in reference to the text in this PR: Issues ietf-wg-masque/draft-ietf-masque-connect-udp-listen (+0/-2/0) 2 issues closed: Improve document title URL [editorial] Should \"connect-udp-listen\" be registered as \"Connect-UDP-Listen\"? URL ietf-wg-masque/draft-ietf-masque-quic-proxy (+0/-0/2) 1 issues received 2 new comments: Virtual connection IDs in multi-hop settings (2 by DavidSchinazi, mirjak) URL Pull requests ietf-wg-masque/draft-ietf-masque-connect-udp-listen (+2/-2/1) 2 pull requests submitted: Update Listener to bind (by asingh-g) URL update connect-udp-listen to use H1 style (by asingh-g) URL 1 pull requests received 1 new comments: Compress away IP and Port using Context IDs (1 by asingh-g) URL 2 pull requests merged: Update Listener to bind ( URL update connect-udp-listen to use H1 style, URL ietf-wg-masque/draft-ietf-masque-quic-proxy (+0/-0/3) 1 pull requests received 3 new comments: Proxy-Chosen Virtual Client Connection ID (3 by DavidSchinazi, ehaydenr) URL Repositories tracked by this digest: URL URL URL URL URL URL URL\nRoman Danyliw has entered the following ballot position for draft-ietf-masque-h3-datagram-10: No Objection When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about how to handle DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Thank you to David Mandelberg for the SECDIR review. ** Section 4. 
Consider adding the following clarification: NEW HTTP Datagrams and the Capsule Protocol are building blocks for HTTP extensions to define new behaviors or features and do not constitute an independent protocol. Any extension adopting them will need to perform an appropriate security analysis which considers the impact of these features in the context of a complete protocol.", "new_text": "Tokens, it is not accessible from Web Platform APIs (such as those commonly accessed via JavaScript in web browsers). Definitions of new HTTP Upgrade Tokens that use the Capsule Protocol need to perform a security analysis that considers the impact of HTTP Datagrams and Capsules in the context of their protocol. 5. 5.1."}
{"id": "q-en-draft-ietf-masque-h3-datagram-0f240ea7e78ea93b681426458668d2e355f32d72e9eb6c22302dbdfdfc75e156", "old_text": "QUIC DATAGRAM frames do not provide a means to demultiplex application contexts. This document describes how to use QUIC DATAGRAM frames when the application protocol running over QUIC is HTTP/3. It defines logical flows identified by a non-negative integer that are present at the start of the DATAGRAM frame payload. Flows are associated with HTTP messages using the Datagram-Flow-Id header field, allowing endpoints to match unreliable DATAGRAMS frames to the HTTP messages that they are related to. Discussion of this work is encouraged to happen on the MASQUE IETF mailing list (masque@ietf.org [1]) or on the GitHub repository which", "comments": "This PR aims to take the design discussions we had at the last interim, and propose a potential solution. This PR should in theory contain enough information to allow folks to implement this new design. That will allow us to perform interop testing and confirm whether the design is sound.\nI'm confused by this pull request (probably because I was out of the loop for a while). My understanding, at least when I first read the proposal a month ago was that we would separate multiplexing streams and multiplexing contexts into different layers, so that methods that need context would get it (CONNECT-UDP), and features that don't (WebTransport/extended CONNECT) could use stream-associated datagrams directly.\nNAME you're referring to which this PR isn't trying to address. That'll happen in a subsequent PR.\nIn , capsules, contexts and context IDs are explicitly defined as end-to-end and there are requirements for intermediaries to not modify them, and to not even parse them. The motivation for this was to ensure that we retain the ability to deploy extensions end-to-end without having to modify intermediaries. 
NAME mentioned that these requirements might be too strict, as he would like to have his intermediary be able to parse the context ID, for logging purposes.\nPart of the comment from the PR is also about consistency of interface for generic HTTP libraries that can be used to build either endpoints or intermediaries. If the context ID is always present, it's overly prescriptive to say that an intermediary can't parse it, and such a requirement couldn't be enforced anyways. If context IDs are negotiated on/off, whether this is possible depends on the negotiation mechanism.\nI think it's fine for a component that has bytes passing through it to look at those bytes. The important thing to do is set expectations on what that means for the protocol behaviour we are defining. We don't expect intermediaries to do anything with a context ID; they should not enforce the protocol semantic rules (odd, even, reuse etc). H2 and H3 somewhat solve this problem by decoupling frame parsing from frame payload interpretation. E.g. if the form of a request or response is invalid, an intermediary shouldn't pass it on. DATAGRAM is different because it isn't defined as an actual HTTP/3 frame with frame headers and frame payload. Something that might work is to formalise that quarter stream ID is part of the envelope that intermediaries have to be able to handle the syntax of, but context ID is part of content and separate.\nUnless you encrypt capsules end-to-end, I don't think any text in the spec can in practice prevent intermediaries from inspecting and modifying capsules. You can send something like GREASE capsules, though.\nIn , we introduce the CLOSEDATAGRAMCONTEXT capsule which allows endpoints to close a context. That message currently only contains the context ID to close. NAME suggests that we may want to add more information there. Should we differentiate between \"context close\" and \"context registration rejected\"? Should we add an error code? 
In particular, do folks have use cases for these features?\nMy first concern is with infinite loops. If the context was \"garbage collected\" by the recipient, the sender can simply re-create it if they still want it. However, if it was closed due to incompatibility (i.e. rejected) or hitting a resource limit (e.g. max # of contexts), reopening it in response will produce a tight infinite loop. This could maybe be avoided by some kind of heuristic, but an explicit indication of what's going on seems better.\nThis issue assumes that we decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs. We have multiple options here: Make context ID mandatory Applications that don't need it waste a byte per datagram Negotiate the presence of context IDs using an HTTP header Context IDs would still be mandatory to implement on servers because the client might send that header Have the method determine whether context IDs are present or not This would prevent extensibility on those methods Create a REGISTERNOCONTEXT message This assumes we use the Message design from We add a register message that means that this stream does not use context IDs This allows avoiding the byte overhead without sacrificing extensibility\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. 
I think having the context ID be the default behavior and the header to opt out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option \"Negotiate the presence of context IDs using an HTTP header\" is the best personally. In terms of: \"Context IDs would still be mandatory to implement on servers because the client might send that header\", I assume server in this context is the final application server, not a proxy? Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say \"proxy\"? If so, then an intermediary need not implement anything based on the payload of an H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second (third, fourth, fifth) layer of frame header. I think it's fine for us to state as much in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. 
I don't see a point in using explicit negotiation, since in the two cases I can think of off the top of my head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach(3) is best is those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nDuring the 2021-04 MASQUE Interim, we discussed multiple options for registering context IDs. Without going into the specifics of the encoding, there are two broad classes of solutions: \"Header\" design Once-at-setup registration This happens during the HTTP request/response Simplest solution is to use an HTTP header such as (encoding TBD) Example use-case: CONNECT-UDP without extensions \"Message\" design Mid-stream registration This can happen at any point during the lifetime of the request stream This would involve some sort of \"register\" message (encoding TBD, see separate issue) Example use-case: CONNECT-IP on-the-fly compression (this can't be done at tunnel setup time because the bits to be compressed aren't always known at that time) It's possible to implement once-at-setup registration using the Message design, but it isn't possible to implement mid-stream registration. Therefore I think we should go with the Message design. It would also be possible to implement both but I don't think that provides much value.\nI'd like to hear some more from the WebTransport use case on the issue. My understanding is that the Requests are used to create WebTransport sessions and that creation of datagram flows(contexts) could be a lot more ad-hoc / server generated after the request phase, when compared to the CONNECT-UDP case. 
Cc NAME\nPersonally, I'd vote for either \"message\" or \"message + header\" (a la priorities), but not just \"header\". Mid-stream annotation of context/flow/etc is quite useful.\nI prefer \"Message\" only, since the dynamic use case seems like a fairly compelling use of sub-flows(contexts?). I'm not sure there are many cases when one allocates all sub-flows up front AND needs to use a header to do that in a flexible way. If an application needs multiple sub-flows, but they're fixed in number/format(ie: data and control), I don't think there's a need for a header or a \"Message\", since the application can consider that part of its internal format.\nDuring the 2021-04 MASQUE Interim, we discussed having the ability to accept or reject registration of context IDs. An important motivation for this is the fact that context ID registrations take up memory (for example it could be a compression context which contains the data that is elided) and therefore we need to prevent endpoints from arbitrarily making their peer allocate memory. While we could flow control context ID registration, a much simpler solution would be to have a way for endpoints to close/shutdown a registration.\nIIUC there's an opposing view that the constraints can be negotiated a priori. Either statically (such as defining an extension which states that there can only be N contexts of a given type) or dynamically using something like an HTTP header. Failure to meet these constraints could lead to an HTTP/3 stream or connection error rather than a flow rejection.\nNAME I've filed to discuss whether to register/negotiate at request start or mid-stream.\nI think that if there is the ability to register mid-stream (), we need the ability to close/de-register/reject mid-stream as well. If we don't have mid-stream changes, then no need to negotiate. 
However, someone trying to register something needs to be able to know whether or not the thing they registered will be accepted.\nI agree that notification of the registration status is particularly important if we allow mid-stream registration.\nIn draft-ietf-masque-h3-datagram-00, flow IDs are bidirectional. During the 2021-04 MASQUE Interim, we discussed the possibility of making them unidirectional. Here are the differences: Bidirectional single shared namespace for both endpoints (even for client, odd for server) it's easier to refer to peer's IDs, which makes negotiation easier Unidirectional one namespace per direction every bidirectional use needs to associate both directions somehow I'm personally leaning towards keeping them bidirectional. Unlike QUIC streams, these do not have a state machine or flow control credits, so there is no cost to only using a single direction of a bidirectional flow for unidirectional use-cases.\nThere's another option where flows are unidirectional but they share a namespace (client even/server odd). I'm undecided. And I also don't know how important it is to choose. The 2 layer design means that DATAGRAMs always have a bidirectional tie via the stream ID. On balance, I think bidirectional would be good enough today but I would be receptive to extension use cases that make a strong point of unidirectional.\nBidirectional is my personal preference, but either way could work ultimately. It seems to me that any extension that wants unidirectional can just only use this context/flow/etc ID for one direction if it wants.\nI think bidirectional flows are likely overcomplicated and unnecessary. For example, if either peer can \"close\" a flow, this requires a much more complex state machine. In my experience, half-closed states are hard to get right. Closing unidirectional flows seems much easier to reason about. Many, perhaps most, of our use cases for flows are really unidirectional. 
For example, DSCP marking likely only makes sense from client to server, while timestamps for congestion control tuning likely only make sense from server to client. QUIC connection IDs are different in each direction, so QUIC header compression may be easiest to understand with unidirectional flows.\nBidirectional doesn't imply half-close states. I'm thinking of a bidirectional flow that can only be closed atomically as a whole. I don't think that's true. The default extension-less CONNECT-UDP is bidirectional. So is ECN, and so would a timestamp extension.\nI would prefer bidirectional without support for half-close. I feel that having unidirectional flow/context IDs adds overhead we don't need; peers now need to track those associations and it's unclear what the benefits of having them separated out are.\nThis issue is conflating consent (unilateral/bilateral) with scope (unidirectional/bidirectional). IIUC there are two questions here: 1) Can I send a flow-id forever once declared, or can the endpoint tell me to stop without killing the whole stream (consent)? 2) Can the receiver of a flow-id assignment use that same flow-id when sending datagrams, or could this mean something completely different? I think the answer to (1) is \"yes, it can tell me to stop\" (i.e. it is negotiated while allowing speculative sending). For (2), I lean toward bidirectional, but not strongly. There are plenty of codepoints, so it seems wasteful for each endpoint to define the same semantic explicitly.\nNAME the intent of this issue wasn't to conflate those two things. This issue is about scope. For consent, see issue .\nConceptually, I like unidirectional better. I tell the peer that when I send with context-ID=X, it has a given semantic. For bidirectional the sender is saying both that and that if the peer sends on that ID the original sender will interpret with the same semantic. That might not even make sense. 
Is it possible to associate datagrams with a unidirectional stream (push?). If so, does allowing for bidirectional context imply that even though the receiver cannot send frames on the stream, it can send datagrams? Every negotiation does run the risk of loss/reordering resulting in a drop or buffering event at the receiver. The appeal of bidirectional is that it removes half of this risk in the cases where the bidirectionality is used.\nTo second NAME I would also prefer bidirectional without support for half-close. Given you can always not send something in one direction, and there's none of the overhead of streams(ie: flow control, etc) I can't see any benefits of unidirectional over bidirectional in this case. Associating h3-datagrams with a server push seems like unnecessary complexity to me, but you're right that it should be clarified.\nDuring the 2021-04 MASQUE Interim, we discussed a design proposal from NAME to replace the one-layer flow ID design with a two-layer design. This design replaces the flow ID with two numbers: a stream ID and a context ID. The contents of the QUIC DATAGRAM frame would now look like: While the flow ID in the draft was per-hop, the proposal has better separation: the stream ID is per-hop, it maps to an HTTP request the context ID is end-to-end, it maps to context information inside that request Intermediaries now only look at stream IDs and can be blissfully ignorant of context IDs. In the room at the 2021-04 MASQUE Interim, multiple participants spoke in support of this design and no one raised objections. This issue exists to ensure that we have consensus on this change - please speak up if you disagree.\nI am fine with the 2-layer design for CONNECT-UDP, but I am not sure if the second identifier should be in H3 DATAGRAM or should be specific to CONNECT-UDP. 
As I indicated to the list, webtrans and httpbis might weigh in on this with broader consideration of other use cases.\nHi NAME the topic of the optionality of context IDs is discussed in issue .\nI think this is a good design, because it minimizes the core functionality of the draft, but avoids every application that needs multiple dynamically created sub-flows from re-inventing a different mechanism. Nit: I would not call this a 'Frame', since it's not an HTTP(TLV) frame.\nI think this is a good change. Thanks! There are some things that I think we should still discuss, but we shouldn't hold up this change on those details.", "new_text": "QUIC DATAGRAM frames do not provide a means to demultiplex application contexts. This document describes how to use QUIC DATAGRAM frames when the application protocol running over QUIC is HTTP/3. It associates datagrams with client-initiated bidirectional streams and defines an optional additional demultiplexing layer. Discussion of this work is encouraged to happen on the MASQUE IETF mailing list (masque@ietf.org [1]) or on the GitHub repository which"}
{"id": "q-en-draft-ietf-masque-h3-datagram-0f240ea7e78ea93b681426458668d2e355f32d72e9eb6c22302dbdfdfc75e156", "old_text": "However, QUIC DATAGRAM frames do not provide a means to demultiplex application contexts. This document describes how to use QUIC DATAGRAM frames when the application protocol running over QUIC is HTTP/3 H3. It defines logical flows identified by a non-negative integer that are present at the start of the DATAGRAM frame payload. Flows are associated with HTTP messages using the Datagram-Flow-Id header field, allowing endpoints to match unreliable DATAGRAMS frames to the HTTP messages that they are related to. This design mimics the use of Stream Types in HTTP/3, which provide a demultiplexing identifier at the start of each unidirectional stream. Discussion of this work is encouraged to happen on the MASQUE IETF mailing list (masque@ietf.org [2]) or on the GitHub repository which", "comments": "This PR aims to take the design discussions we had at the last interim, and propose a potential solution. This PR should in theory contain enough information to allow folks to implement this new design. That will allow us to perform interop testing and confirm whether the design is sound.\nI'm confused by this pull request (probably because I was out of the loop for a while). My understanding, at least when I first read the proposal a month ago was that we would separate multiplexing streams and multiplexing contexts into different layers, so that methods that need context would get it (CONNECT-UDP), and features that don't (WebTransport/extended CONNECT) could use stream-associated datagrams directly.\nNAME you're referring to which this PR isn't trying to address. That'll happen in a subsequent PR.\nIn , capsules, contexts and context IDs are explicitly defined as end-to-end and there are requirements for intermediaries to not modify them, and to not even parse them. 
The motivation for this was to ensure that we retain the ability to deploy extensions end-to-end without having to modify intermediaries. NAME mentioned that these requirements might be too strict, as he would like to have his intermediary be able to parse the context ID, for logging purposes.\nPart of the comment from the PR is also about consistency of interface for generic HTTP libraries that can be used to build either endpoints or intermediaries. If the context ID is always present, it's overly prescriptive to say that an intermediary can't parse it, and such a requirement couldn't be enforced anyways. If context IDs are negotiated on/off, whether this is possible depends on the negotiation mechanism.\nI think its fine for a component that has bytes passing through it to look at those bytes. The important thing to do is set expectations on what that means for the protocol behaviour we are defining. We don't expect intermediaries to do anything with a context ID; they should not enforce the protocol semantic rules (odd, even, reuse etc). H2 and H3 somewhat solve this problem by decoupling frame parsing from frame payload interpretation. E.g. if the form of a request or response is invalid, an intermediary shouldn't pass it on. DATAGRAM is different because it isn't defined as an actual HTTP/3 frame with frame headers and frame payload. Something that might work is to formalise that quarter stream ID is part of the envelope that intermediaries have to be able to handle the syntax of, but context ID is part of content and separate.\nUnless you encrypt capsules end-to-end, I don't think any text in the spec can in practice prevent intermediaries from inspecting and modifying capsules. You can send something like GREASE capsules, though.\nIn , we introduce the CLOSEDATAGRAMCONTEXT capsule which allows endpoints to close a context. That message currently only contains the context ID to close. NAME suggests that we may want to add more information there. 
Should we differentiate between \"context close\" and \"context registration rejected\"? Should we add an error code? In particular, do folks have use cases for these features?\nMy first concern is with infinite loops. If the context was \"garbage collected\" by the recipient, the sender can simply re-create it if they still want it. However, if it was closed due to incompatibility (i.e. rejected) or hitting a resource limit (e.g. max # of contexts), reopening it in response will produce a tight infinite loop. This could maybe be avoided by some kind of heuristic, but an explicit indication of what's going on seems better.\nThis issue assumes that we decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs. We have multiple options here: Make context ID mandatory Applications that don't need it waste a byte per datagram Negotiate the presence of context IDs using an HTTP header Context IDs would still be mandatory to implement on servers because the client might send that header Have the method determine whether context IDs are present or not This would prevent extensibility on those methods Create a REGISTERNOCONTEXT message This assumes we use the Message design from We add a register message that means that this stream does not use context IDs This allows avoiding the byte overhead without sacrificing extensibility\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. 
I think having the context ID be the default behavior and the header is to opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option \"Negotiate the presence of context IDs using an HTTP header\" is the best personally. In terms of: \"Context IDs would still be mandatory to implement on servers because the client might send that header\", I assume server in this context is the final application server, not a proxy? Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say \"proxy\"? If so, then an intermediary need not implement anything based on the payload of an H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second (third, fourth, fifth) layer of frame header. I think its fine for us to state as much in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. 
I don't see a point in using explicit negotiation, since in the two cases I can think of off the top of my head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach(3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nDuring the 2021-04 MASQUE Interim, we discussed multiple options for registering context IDs. Without going into the specifics of the encoding, there are two broad classes of solutions: \"Header\" design Once-at-setup registration This happens during the HTTP request/response Simplest solution is to use an HTTP header such as (encoding TBD) Example use-case: CONNECT-UDP without extensions \"Message\" design Mid-stream registration This can happen at any point during the lifetime of the request stream This would involve some sort of \"register\" message (encoding TBD, see separate issue) Example use-case: CONNECT-IP on-the-fly compression (this can't be done at tunnel setup time because the bits to be compressed aren't always known at that time) It's possible to implement once-at-setup registration using the Message design, but it isn't possible to implement mid-stream registration. Therefore I think we should go with the Message design. It would also be possible to implement both but I don't think that provides much value.\nI'd like to hear some more from the WebTransport use case on the issue. My understanding is that the Requests are used to create WebTransport sessions and that creation of datagram flows(contexts) could be a lot more ad-hoc / server generated after the request phase, when compared to the CONNECT-UDP case. 
Cc NAME\nPersonally, I'd vote for either \"message\" or \"message + header\" (a la priorities), but not just \"header\". Mid-stream annotation of context/flow/etc is quite useful.\nI prefer \"Message\" only, since the dynamic use case seems like a fairly compelling use of sub-flows(contexts?). I'm not sure there are many cases when one allocates all sub-flows up front AND needs to use a header to do that in a flexible way. If an application needs multiple sub-flows, but they're fixed in number/format(ie: data and control), I don't think there's a need for a header or a \"Message\", since the application can consider that part of its internal format.\nDuring the 2021-04 MASQUE Interim, we discussed having the ability to accept or reject registration of context IDs. An important motivation for this is the fact that context ID registrations take up memory (for example it could be a compression context which contains the data that is elided) and therefore we need to prevent endpoints from arbitrarily making their peer allocate memory. While we could flow control context ID registration, a much simpler solution would be to have a way for endpoints to close/shutdown a registration.\nIIUC there's an opposing view that the constraints can be negotiated a priori. Either statically (such as defining an extension which states that there can only be N contexts of a given type) or dynamically using something like an HTTP header. Failure to meet these constraints could lead to an HTTP/3 stream or connection error rather than a flow rejection.\nNAME I've filed to discuss whether to register/negotiate at request start or mid-stream.\nI think that if there is the ability to register mid-stream (), we need the ability to close/de-register/reject mid-stream as well. If we don't have mid-stream changes, then no need to negotiate. 
However, someone trying to register something needs to be able to know whether or not the thing they registered will be accepted.\nI agree that notification of the registration status is particularly important if we allow mid-stream registration.\nIn draft-ietf-masque-h3-datagram-00, flow IDs are bidirectional. During the 2021-04 MASQUE Interim, we discussed the possibility of making them unidirectional. Here are the differences: Bidirectional single shared namespace for both endpoints (even for client, odd for server) it's easier to refer to peer's IDs, which makes negotiation easier Unidirectional one namespace per direction every bidirectional use needs to associate both directions somehow I'm personally leaning towards keeping them bidirectional. Unlike QUIC streams, these do not have a state machine or flow control credits, so there is no cost to only using a single direction of a bidirectional flow for unidirectional use-cases.\nThere's another option where flows are unidirectional but they share a namespace (client even/server odd). I'm undecided. And I also don't know how important it is to choose. The 2 layer design means that DATAGRAMs always have a bidirectional tie via the stream ID. On balance, I think bidirectional would be good enough today but I would be receptive to extension use cases that make a strong point of unidirectional.\nBidirectional is my personal preference, but either way could work ultimately. It seems to me that any extension that wants unidirectional can just only use this context/flow/etc ID for one direction if it wants.\nI think bidirectional flows are likely overcomplicated and unnecessary. For example, if either peer can \"close\" a flow, this requires a much more complex state machine. In my experience, half-closed states are hard to get right. Closing unidirectional flows seems much easier to reason about. Many, perhaps most, of our use cases for flows are really unidirectional. 
For example, DSCP marking likely only makes sense from client to server, while timestamps for congestion control tuning likely only make sense from server to client. QUIC connection IDs are different in each direction, so QUIC header compression may be easiest to understand with unidirectional flows.\nBidirectional doesn't imply half-close states. I'm thinking of a bidirectional flow that can only be closed atomically as a whole. I don't think that's true. The default extension-less CONNECT-UDP is bidirectional. So is ECN, and so would a timestamp extension.\nI would prefer bidirectional without support for half-close. I feel that having unidirectional flow/context IDs adds overhead we don't need; peers now need to track those associations and it's unclear what the benefits of having them separated out are.\nThis issue is conflating consent (unilateral/bilateral) with scope (unidirectional/bidirectional). IIUC there are two questions here: 1) Can I send a flow-id forever once declared, or can the endpoint tell me to stop without killing the whole stream (consent)? 2) Can the receiver of a flow-id assignment use that same flow-id when sending datagrams, or could this mean something completely different? I think the answer to (1) is \"yes, it can tell me to stop\" (i.e. it is negotiated while allowing speculative sending). For (2), I lean toward bidirectional, but not strongly. There are plenty of codepoints, so it seems wasteful for each endpoint to define the same semantic explicitly.\nNAME the intent of this issue wasn't to conflate those two things. This issue is about scope. For consent, see issue .\nConceptually, I like unidirectional better. I tell the peer that when I send with context-ID=X, it has a given semantic. For bidirectional the sender is saying both that and that if the peer sends on that ID the original sender will interpret with the same semantic. That might not even make sense. 
Is it possible to associate datagrams with a unidirectional stream (push?). If so, does allowing for bidirectional context imply that even though the receiver cannot send frames on the stream, it can send datagrams? Every negotiation does run the risk of loss/reordering resulting in a drop or buffering event at the receiver. The appeal of bidirectional is that it removes half of this risk in the cases where the bidirectionality is used.\nTo second NAME I would also prefer bidirectional without support for half-close. Given you can always not send something in one direction, and there's none of the overhead of streams(ie: flow control, etc) I can't see any benefits of unidirectional over bidirectional in this case. Associating h3-datagrams with a server push seems like unnecessary complexity to me, but you're right that it should be clarified.\nDuring the 2021-04 MASQUE Interim, we discussed a design proposal from NAME to replace the one-layer flow ID design with a two-layer design. This design replaces the flow ID with two numbers: a stream ID and a context ID. The contents of the QUIC DATAGRAM frame would now look like: While the flow ID in the draft was per-hop, the proposal has better separation: the stream ID is per-hop, it maps to an HTTP request the context ID is end-to-end, it maps to context information inside that request Intermediaries now only look at stream IDs and can be blissfully ignorant of context IDs. In the room at the 2021-04 MASQUE Interim, multiple participants spoke in support of this design and no one raised objections. This issue exists to ensure that we have consensus on this change - please speak up if you disagree.\nI am fine with the 2-layer design for CONNECT-UDP, but I am not sure if the second identifier should be in H3 DATAGRAM or should be specific to CONNECT-UDP. 
As I indicated to the list, webtrans and httpbis might weigh in on this with broader consideration of other use cases.\nHi NAME the topic of the optionality of context IDs is discussed in issue .\nI think this is a good design, because it minimizes the core functionality of the draft, but avoids every application that needs multiple dynamically created sub-flows from re-inventing a different mechanism. Nit: I would not call this a 'Frame', since it's not an HTTP(TLV) frame.\nI think this is a good change. Thanks! There are some things that I think we should still discuss, but we shouldn't hold up this change on those details.", "new_text": "However, QUIC DATAGRAM frames do not provide a means to demultiplex application contexts. This document describes how to use QUIC DATAGRAM frames when the application protocol running over QUIC is HTTP/3 H3. It associates datagrams with client-initiated bidirectional streams and defines an optional additional demultiplexing layer. Discussion of this work is encouraged to happen on the MASQUE IETF mailing list (masque@ietf.org [2]) or on the GitHub repository which"}
{"id": "q-en-draft-ietf-masque-h3-datagram-0f240ea7e78ea93b681426458668d2e355f32d72e9eb6c22302dbdfdfc75e156", "old_text": "2. Flows are bidirectional exchanges of datagrams within a single QUIC connection. These are conceptually similar to streams in the sense that they allow multiplexing of application data. Flows are identified within a connection by a numeric value, referred to as the flow ID. A flow ID is a 62-bit integer (0 to 2^62-1). Flows lack any of the ordering or reliability guarantees of streams. Beyond this, a sender SHOULD ensure that DATAGRAM frames within a single flow are transmitted in order relative to one another. 3. Implementations of HTTP/3 that support the DATAGRAM extension MUST provide a flow ID allocation service. That service will allow applications co-located with HTTP/3 to request a unique flow ID that they can subsequently use for their own purposes. The HTTP/3 implementation will then parse the flow ID of incoming DATAGRAM frames and use it to deliver the frame to the appropriate application context. Even-numbered flow IDs are client-initiated, while odd-numbered flow IDs are server-initiated. This means that an HTTP/3 client implementation of the flow ID allocation service MUST only provide even-numbered IDs, while a server implementation MUST only provide odd-numbered IDs. Note that, once allocated, any flow ID can be used by both client and server - only allocation carries separate namespaces to avoid requiring synchronization. 4. When used with HTTP/3, the Datagram Data field of QUIC DATAGRAM frames uses the following format (using the notation from the \"Notational Conventions\" section of QUIC): A variable-length integer indicating the Flow ID of the datagram (see datagram-flows). The payload of the datagram, whose semantics are defined by individual applications. Note that this field can be empty. 
Endpoints MUST treat receipt of a DATAGRAM frame whose payload is too short to parse the flow ID as an HTTP/3 connection error of type H3_GENERAL_PROTOCOL_ERROR. 5.", "comments": "This PR aims to take the design discussions we had at the last interim, and propose a potential solution. This PR should in theory contain enough information to allow folks to implement this new design. That will allow us to perform interop testing and confirm whether the design is sound.\nI'm confused by this pull request (probably because I was out of the loop for a while). My understanding, at least when I first read the proposal a month ago was that we would separate multiplexing streams and multiplexing contexts into different layers, so that methods that need context would get it (CONNECT-UDP), and features that don't (WebTransport/extended CONNECT) could use stream-associated datagrams directly.\nNAME you're referring to which this PR isn't trying to address. That'll happen in a subsequent PR.\nIn , capsules, contexts and context IDs are explicitly defined as end-to-end and there are requirements for intermediaries to not modify them, and to not even parse them. The motivation for this was to ensure that we retain the ability to deploy extensions end-to-end without having to modify intermediaries. NAME mentioned that these requirements might be too strict, as he would like to have his intermediary be able to parse the context ID, for logging purposes.\nPart of the comment from the PR is also about consistency of interface for generic HTTP libraries that can be used to build either endpoints or intermediaries. If the context ID is always present, it's overly prescriptive to say that an intermediary can't parse it, and such a requirement couldn't be enforced anyways. If context IDs are negotiated on/off, whether this is possible depends on the negotiation mechanism.\nI think its fine for a component that has bytes passing through it to look at those bytes. 
The important thing to do is set expectations on what that means for the protocol behaviour we are defining. We don't expect intermediaries to do anything with a context ID; they should not enforce the protocol semantic rules (odd, even, reuse etc). H2 and H3 somewhat solve this problem by decoupling frame parsing from frame payload interpretation. E.g. if the form of a request or response is invalid, an intermediary shouldn't pass it on. DATAGRAM is different because it isn't defined as an actual HTTP/3 frame with frame headers and frame payload. Something that might work is to formalise that quarter stream ID is part of the envelope that intermediaries have to be able to handle the syntax of, but context ID is part of content and separate.\nUnless you encrypt capsules end-to-end, I don't think any text in the spec can in practice prevent intermediaries from inspecting and modifying capsules. You can send something like GREASE capsules, though.\nIn , we introduce the CLOSEDATAGRAMCONTEXT capsule which allows endpoints to close a context. That message currently only contains the context ID to close. NAME suggests that we may want to add more information there. Should we differentiate between \"context close\" and \"context registration rejected\"? Should we add an error code? In particular, do folks have use cases for these features?\nMy first concern is with infinite loops. If the context was \"garbage collected\" by the recipient, the sender can simply re-create it if they still want it. However, if it was closed due to incompatibility (i.e. rejected) or hitting a resource limit (e.g. max # of contexts), reopening it in response will produce a tight infinite loop. This could maybe be avoided by some kind of heuristic, but an explicit indication of what's going on seems better.\nThis issue assumes that we decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs. 
We have multiple options here: Make context ID mandatory Applications that don't need it waste a byte per datagram Negotiate the presence of context IDs using an HTTP header Context IDs would still be mandatory to implement on servers because the client might send that header Have the method determine whether context IDs are present or not This would prevent extensibility on those methods Create a REGISTERNOCONTEXT message This assumes we use the Message design from We add a register message that means that this stream does not use context IDs This allows avoiding the byte overhead without sacrificing extensibility\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. I think having the context ID be the default behavior and the header is to opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option \"Negotiate the presence of context IDs using an HTTP header\" is the best personally. In terms of: \"Context IDs would still be mandatory to implement on servers because the client might send that header\", I assume server in this context is the final application server, not a proxy? 
Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say \"proxy\"? If so, then an intermediary need not implement anything based on the payload of an H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second (third, fourth, fifth) layer of frame header. I think its fine for us to state as much in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. I don't see a point in using explicit negotiation, since in the two cases I can think of off the top of my head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach(3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nDuring the 2021-04 MASQUE Interim, we discussed multiple options for registering context IDs. 
Without going into the specifics of the encoding, there are two broad classes of solutions:
- "Header" design: once-at-setup registration. This happens during the HTTP request/response. The simplest solution is to use an HTTP header such as (encoding TBD). Example use-case: CONNECT-UDP without extensions.
- "Message" design: mid-stream registration. This can happen at any point during the lifetime of the request stream. This would involve some sort of "register" message (encoding TBD, see separate issue). Example use-case: CONNECT-IP on-the-fly compression (this can't be done at tunnel setup time because the bits to be compressed aren't always known at that time).
It's possible to implement once-at-setup registration using the Message design, but it isn't possible to implement mid-stream registration using the Header design. Therefore I think we should go with the Message design. It would also be possible to implement both, but I don't think that provides much value.\nI'd like to hear some more from the WebTransport use case on the issue. My understanding is that the Requests are used to create WebTransport sessions and that creation of datagram flows (contexts) could be a lot more ad-hoc / server-generated after the request phase, when compared to the CONNECT-UDP case. Cc NAME\nPersonally, I'd vote for either "message" or "message + header" (a la priorities), but not just "header". Mid-stream annotation of context/flow/etc. is quite useful.\nI prefer "Message" only, since the dynamic use case seems like a fairly compelling use of sub-flows (contexts?). I'm not sure there are many cases when one allocates all sub-flows up front AND needs to use a header to do that in a flexible way.
If an application needs multiple sub-flows, but they're fixed in number/format (i.e. data and control), I don't think there's a need for a header or a "Message", since the application can consider that part of its internal format.\nDuring the 2021-04 MASQUE Interim, we discussed having the ability to accept or reject registration of context IDs. An important motivation for this is the fact that context ID registrations take up memory (for example, it could be a compression context which contains the data that is elided) and therefore we need to prevent endpoints from arbitrarily making their peer allocate memory. While we could flow-control context ID registration, a much simpler solution would be to have a way for endpoints to close/shut down a registration.\nIIUC there's an opposing view that the constraints can be negotiated a priori, either statically (such as defining an extension which states that there can only be N contexts of a given type) or dynamically using something like an HTTP header.
Here are the differences:
- Bidirectional: a single shared namespace for both endpoints (even for client, odd for server); it's easier to refer to the peer's IDs, which makes negotiation easier.
- Unidirectional: one namespace per direction; every bidirectional use needs to associate both directions somehow.
I'm personally leaning towards keeping them bidirectional. Unlike QUIC streams, these do not have a state machine or flow control credits, so there is no cost to only using a single direction of a bidirectional flow for unidirectional use-cases.\nThere's another option where flows are unidirectional but they share a namespace (client even/server odd). I'm undecided. And I also don't know how important it is to choose. The 2-layer design means that DATAGRAMs always have a bidirectional tie via the stream ID. On balance, I think bidirectional would be good enough today but I would be receptive to extension use cases that make a strong point of unidirectional.\nBidirectional is my personal preference, but either way could work ultimately.
I don't think that's true. The default extension-less CONNECT-UDP is bidirectional. So is ECN, and so would be a timestamp extension.\nI would prefer bidirectional without support for half-close. I feel that having unidirectional flow/context IDs adds overhead we don't need; peers now need to track those associations and it's unclear what the benefits of having them separated out are.\nThis issue is conflating consent (unilateral/bilateral) with scope (unidirectional/bidirectional). IIUC there are two questions here: 1) Can I send a flow-id forever once declared, or can the endpoint tell me to stop without killing the whole stream (consent)? 2) Can the receiver of a flow-id assignment use that same flow-id when sending datagrams, or could this mean something completely different? I think the answer to (1) is "yes, it can tell me to stop" (i.e. it is negotiated while allowing speculative sending). For (2), I lean toward bidirectional, but not strongly. There are plenty of codepoints, so it seems wasteful for each endpoint to define the same semantic explicitly.\nNAME the intent of this issue wasn't to conflate those two things. This issue is about scope. For consent, see issue .\nConceptually, I like unidirectional better. I tell the peer that when I send with context-ID=X, it has a given semantic. For bidirectional, the sender is saying both that and that if the peer sends on that ID, the original sender will interpret it with the same semantic. That might not even make sense. Is it possible to associate datagrams with a unidirectional stream (push?). If so, does allowing for bidirectional context imply that even though the receiver cannot send frames on the stream, it can send datagrams? Every negotiation does run the risk of loss/reordering resulting in a drop or buffering event at the receiver.
The appeal of bidirectional is that it removes half of this risk in the cases where the bidirectionality is used.\nTo second NAME I would also prefer bidirectional without support for half-close. Given you can always not send something in one direction, and there's none of the overhead of streams (i.e. flow control, etc.), I can't see any benefits of unidirectional over bidirectional in this case. Associating h3-datagrams with a server push seems like unnecessary complexity to me, but you're right that it should be clarified.\nDuring the 2021-04 MASQUE Interim, we discussed a design proposal from NAME to replace the one-layer flow ID design with a two-layer design. This design replaces the flow ID with two numbers: a stream ID and a context ID. The contents of the QUIC DATAGRAM frame would now carry a stream ID, then a context ID, then the payload. While the flow ID in the draft was per-hop, the proposal has better separation: the stream ID is per-hop and maps to an HTTP request; the context ID is end-to-end and maps to context information inside that request. Intermediaries now only look at stream IDs and can be blissfully ignorant of context IDs. In the room at the 2021-04 MASQUE Interim, multiple participants spoke in support of this design and no one raised objections. This issue exists to ensure that we have consensus on this change - please speak up if you disagree.\nI am fine with the 2-layer design for CONNECT-UDP, but I am not sure if the second identifier should be in H3 DATAGRAM or should be specific to CONNECT-UDP. As I indicated to the list, webtrans and httpbis might weigh in on this with broader consideration of other use cases.\nHi NAME the topic of the optionality of context IDs is discussed in issue .\nI think this is a good design, because it minimizes the core functionality of the draft, but avoids every application that needs multiple dynamically created sub-flows from re-inventing a different mechanism.
Nit: I would not call this a 'Frame', since it's not an HTTP(TLV) frame.\nI think this is a good change. Thanks! There are some things that I think we should still discuss, but we shouldn't hold up this change on those details.", "new_text": "2. In order to allow multiple exchanges of datagrams to coexist on a given QUIC connection, HTTP datagrams contain two layers of multiplexing. First, the QUIC DATAGRAM frame payload starts with an encoded stream identifier that associates the datagram with a given QUIC stream. Second, datagrams carry a context identifier (see datagram-contexts) that allows multiplexing multiple datagram contexts related to a given HTTP request. Conceptually, the first layer of multiplexing is per-hop, while the second is end-to-end. 2.1. Within the scope of a given HTTP request, contexts provide an additional demultiplexing layer. Contexts determine the encoding of datagrams, and can be used to implicitly convey metadata. For example, contexts can be used for compression to elide some parts of the datagram: the context identifier then maps to a compression context that the receiver can use to reconstruct the elided data. Contexts are identified within the scope of a given request by a numeric value, referred to as the context ID. A context ID is a 62-bit integer (0 to 2^62-1). While stream IDs are a per-hop concept, context IDs are an end-to-end concept. In other words, if a datagram travels through one or more intermediaries on its way from client to server, the stream ID will most likely change from hop to hop, but the context ID will remain the same. Context IDs are opaque to intermediaries. 2.2. Implementations of HTTP/3 that support the DATAGRAM extension MUST provide a context ID allocation service. That service will allow applications co-located with HTTP/3 to request a unique context ID that they can subsequently use for their own purposes. 
The HTTP/3 implementation will then parse the context ID of incoming DATAGRAM frames and use it to deliver the frame to the appropriate application context. Even-numbered context IDs are client-initiated, while odd-numbered context IDs are server-initiated. This means that an HTTP/3 client implementation of the context ID allocation service MUST only provide even-numbered IDs, while a server implementation MUST only provide odd-numbered IDs. Note that, once allocated, any context ID can be used by both client and server - only allocation carries separate namespaces to avoid requiring synchronization. Additionally, note that the context ID namespace is tied to a given HTTP request: it is possible for the same numeric context ID to be used simultaneously in distinct requests. 3. When used with HTTP/3, the Datagram Data field of QUIC DATAGRAM frames uses the following format (using the notation from the "Notational Conventions" section of QUIC): A variable-length integer that contains the value of the client-initiated bidirectional stream that this datagram is associated with, divided by four. (The division by four stems from the fact that HTTP requests are sent on client-initiated bidirectional streams, and those have stream IDs that are divisible by four.) A variable-length integer indicating the context ID of the datagram (see datagram-contexts). The payload of the datagram, whose semantics are defined by individual applications. Note that this field can be empty. Intermediaries parse the Quarter Stream ID field in order to associate the QUIC DATAGRAM frame with a stream. If an intermediary receives a QUIC DATAGRAM frame whose payload is too short to allow parsing the Quarter Stream ID field, the intermediary MUST treat it as an HTTP/3 connection error of type H3_GENERAL_PROTOCOL_ERROR. Intermediaries MUST ignore any HTTP/3 Datagram fields after the Quarter Stream ID.
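The framing described above can be sketched compactly in Python. This is an illustrative sketch, not normative code: the function names are mine, and the varint codec follows QUIC (RFC 9000).

```python
# Illustrative sketch of the HTTP/3 Datagram framing described above.
# The varint codec follows QUIC (RFC 9000); names here are not normative.

def encode_varint(v: int) -> bytes:
    # QUIC variable-length integer: top two bits encode the length (1/2/4/8).
    if v < 0x40:
        return v.to_bytes(1, "big")
    if v < 0x4000:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 0x40000000:
        return (v | 0x80000000).to_bytes(4, "big")
    if v < 0x4000000000000000:
        return (v | 0xC000000000000000).to_bytes(8, "big")
    raise ValueError("value exceeds 2^62-1")

def decode_varint(buf: bytes, off: int = 0) -> tuple[int, int]:
    length = 1 << (buf[off] >> 6)
    raw = int.from_bytes(buf[off:off + length], "big")
    return raw & ((1 << (8 * length - 2)) - 1), off + length

def encode_h3_datagram(stream_id: int, context_id: int, payload: bytes) -> bytes:
    # HTTP requests live on client-initiated bidirectional streams,
    # whose IDs are divisible by four; hence the "Quarter Stream ID".
    assert stream_id % 4 == 0
    return encode_varint(stream_id // 4) + encode_varint(context_id) + payload

def decode_h3_datagram(data: bytes) -> tuple[int, int, bytes]:
    quarter, off = decode_varint(data)
    context_id, off = decode_varint(data, off)
    return quarter * 4, context_id, data[off:]
```

For example, a datagram on stream 4 with context ID 0 is framed as `b"\x01\x00"` followed by the payload.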
Endpoints parse both the Quarter Stream ID field and the Context ID field in order to associate the QUIC DATAGRAM frame with a stream and context within that stream. If an endpoint receives a QUIC DATAGRAM frame whose payload is too short to allow parsing the Quarter Stream ID field, the endpoint MUST treat it as an HTTP/3 connection error of type H3_GENERAL_PROTOCOL_ERROR. If an endpoint receives a QUIC DATAGRAM frame whose payload is long enough to allow parsing the Quarter Stream ID field but too short to allow parsing the Context ID field, the endpoint MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. If a DATAGRAM frame is received and its Quarter Stream ID maps to a stream that has already been closed, the receiver MUST silently drop that frame. If a DATAGRAM frame is received and its Quarter Stream ID maps to a stream that has not yet been created, the receiver SHALL either drop that frame silently or buffer it temporarily while awaiting the creation of the corresponding stream. 4. CAPSULE allows reliably sending request-related information end-to-end, even in the presence of HTTP intermediaries. CAPSULE is an HTTP/3 Frame (as opposed to a QUIC frame) which SHALL only be sent on client-initiated bidirectional streams. Intermediaries MUST forward all received CAPSULE frames in their unmodified entirety on the same stream where they would forward DATA frames. Intermediaries MUST NOT send any CAPSULE frames other than the ones they are forwarding. This specification of CAPSULE currently uses HTTP/3 frame type 0xffcab5. If this document is approved, a lower number will be requested from IANA. The Type and Length fields follow the definition of HTTP/3 frames from H3. The payload consists of: The type of this capsule. Data whose semantics depend on the Capsule Type. Endpoints which receive a Capsule with an unknown Capsule Type MUST silently drop that Capsule.
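A receiver's capsule handling can be sketched as follows. This is a hypothetical sketch, not code from the draft; it assumes the Capsule Type is encoded as a QUIC variable-length integer and that the Type/Length envelope has already been consumed by the generic HTTP/3 frame layer.

```python
# Hypothetical sketch of CAPSULE payload handling (names are mine).

REGISTER_DATAGRAM_CONTEXT, CLOSE_DATAGRAM_CONTEXT, DATAGRAM = 0x00, 0x01, 0x02
KNOWN_CAPSULE_TYPES = {REGISTER_DATAGRAM_CONTEXT, CLOSE_DATAGRAM_CONTEXT, DATAGRAM}

def read_varint(buf: bytes, off: int = 0) -> tuple[int, int]:
    # QUIC varint (RFC 9000): top two bits select a 1/2/4/8-byte encoding.
    n = 1 << (buf[off] >> 6)
    v = int.from_bytes(buf[off:off + n], "big") & ((1 << (8 * n - 2)) - 1)
    return v, off + n

def handle_capsule(capsule_payload: bytes, is_intermediary: bool):
    if is_intermediary:
        # Intermediaries forward capsules unmodified, known or not.
        return capsule_payload
    capsule_type, off = read_varint(capsule_payload)
    if capsule_type not in KNOWN_CAPSULE_TYPES:
        return None  # endpoints silently drop unknown capsule types
    return capsule_type, capsule_payload[off:]  # Capsule Data, per-type handling
```

The intermediary branch never inspects the Capsule Type, matching the forwarding requirement above.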
Intermediaries MUST forward Capsules, even if they do not know the Capsule Type or cannot parse the Capsule Data. 4.1. The REGISTER_DATAGRAM_CONTEXT capsule (type=0x00) allows an endpoint to inform its peer of the encoding and semantics of datagrams associated with a given context ID. Its Capsule Data field consists of: The context ID to register. A string of comma-separated key-value pairs to enable extensibility. Keys are registered with IANA, see iana-keys. The ABNF for the Extension String field is as follows (using syntax from Section 3.2.6 of RFC7230): Note that these registrations are unilateral and bidirectional: the sender of the frame unilaterally defines the semantics it will apply to the datagrams it sends and receives using this context ID. Once a context ID is registered, it can be used in both directions. Endpoints MUST NOT send DATAGRAM frames using a Context ID until they have either sent or received a REGISTER_DATAGRAM_CONTEXT Capsule with the same Context ID. However, due to reordering, an endpoint that receives a DATAGRAM frame with an unknown Context ID MUST NOT treat it as an error; it SHALL instead drop the DATAGRAM frame silently or buffer it temporarily while awaiting the corresponding REGISTER_DATAGRAM_CONTEXT Capsule. Endpoints MUST NOT register the same Context ID twice on the same stream. This also applies to Context IDs that have been closed using a CLOSE_DATAGRAM_CONTEXT capsule. Clients MUST NOT register server-initiated Context IDs and servers MUST NOT register client-initiated Context IDs. If an endpoint receives a REGISTER_DATAGRAM_CONTEXT capsule that violates one or more of these requirements, the endpoint MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. 4.2. The CLOSE_DATAGRAM_CONTEXT capsule (type=0x01) allows an endpoint to inform its peer that it will no longer send or parse received datagrams associated with a given context ID.
Its Capsule Data field consists of: The context ID to close. A string of comma-separated key-value pairs to enable extensibility, see the definition of the same field in register-capsule for details. Note that this close is unilateral and bidirectional: the sender of the frame unilaterally informs its peer of the closure. Endpoints can use CLOSE_DATAGRAM_CONTEXT capsules to close a context that was initially registered by either themselves or their peer. Endpoints MAY use the CLOSE_DATAGRAM_CONTEXT capsule to immediately reject a context that was just registered using a REGISTER_DATAGRAM_CONTEXT capsule if they find its Extension String to be unacceptable. After an endpoint has either sent or received a CLOSE_DATAGRAM_CONTEXT frame, it MUST NOT send any DATAGRAM frames with that Context ID. However, due to reordering, an endpoint that receives a DATAGRAM frame with a closed Context ID MUST NOT treat it as an error; it SHALL instead drop the DATAGRAM frame silently. Endpoints MUST NOT close a Context ID that was not previously registered. Endpoints MUST NOT close a Context ID that has already been closed. If an endpoint receives a CLOSE_DATAGRAM_CONTEXT capsule that violates one or more of these requirements, the endpoint MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. 4.3. The DATAGRAM capsule (type=0x02) allows an endpoint to send a datagram frame over an HTTP stream. This is particularly useful when using a version of HTTP that does not support QUIC DATAGRAM frames. Its Capsule Data field consists of: A variable-length integer indicating the context ID of the datagram (see datagram-contexts). The payload of the datagram, whose semantics are defined by individual applications. Note that this field can be empty. Datagrams sent using the DATAGRAM Capsule have the exact same semantics as datagrams sent in QUIC DATAGRAM frames. 5.
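The registration and close rules in sections 4.1 and 4.2 amount to a small piece of per-request state. A hedged sketch of that bookkeeping, with class and method names that are mine rather than the draft's:

```python
# Hedged sketch of per-request context bookkeeping implementing the rules
# above: even IDs are client-allocated, odd IDs server-allocated; duplicate
# registration (including of closed IDs) is a stream error; datagrams for
# unknown or closed contexts are dropped (or buffered), never an error.

class ProtocolError(Exception):
    """Would map to a stream error of type H3_GENERAL_PROTOCOL_ERROR."""

class ContextRegistry:
    def __init__(self, is_client: bool):
        self.is_client = is_client
        self.next_local = 0 if is_client else 1  # even/odd allocation namespaces
        self.open: set[int] = set()
        self.closed: set[int] = set()

    def allocate(self) -> int:
        # Local allocation; a REGISTER_DATAGRAM_CONTEXT capsule would be sent.
        cid, self.next_local = self.next_local, self.next_local + 2
        self.open.add(cid)
        return cid

    def on_register(self, cid: int) -> None:
        # Peer registration: the peer may only register from its own namespace.
        peer_parity = 1 if self.is_client else 0
        if cid % 2 != peer_parity:
            raise ProtocolError("peer registered an ID outside its namespace")
        if cid in self.open or cid in self.closed:
            raise ProtocolError("context ID registered twice")
        self.open.add(cid)

    def on_close(self, cid: int) -> None:
        if cid not in self.open:
            raise ProtocolError("close of an unregistered or closed context")
        self.open.remove(cid)
        self.closed.add(cid)

    def accept_datagram(self, cid: int) -> bool:
        return cid in self.open  # otherwise: drop silently (or buffer briefly)
```

Keeping closed IDs in a separate set is what lets re-registration of a closed context be detected as an error, per section 4.1.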
{"id": "q-en-draft-ietf-masque-h3-datagram-0f240ea7e78ea93b681426458668d2e355f32d72e9eb6c22302dbdfdfc75e156", "old_text": "6. "Datagram-Flow-Id" is a List Structured Field STRUCT-FIELD, whose members MUST all be Items of type Integer. Integers MUST be non-negative. Its ABNF is: The "Datagram-Flow-Id" header field is used to associate one or more datagram flows with an HTTP message. As a simple example using a single flow, the definition of an HTTP method could instruct the client to use its flow ID allocation service to allocate a new flow ID, and then the client will add the "Datagram-Flow-Id" header field to its request to communicate that value to the server. In this example, the resulting header field could look like: List members are flow ID elements, which can be named or unnamed. One element in the list is allowed to be unnamed, but all other elements MUST carry a name. The name of an element is encoded in the key of the first parameter of that element (parameters are defined in Section 3.1.2 of STRUCT-FIELD). Each name MUST NOT appear more than once in the list. The value of the first parameter of each named element (whose corresponding key conveys the element name) MUST be of type Boolean and equal to true. The value of the first parameter of the unnamed element MUST NOT be of type Boolean. The ordering of the list does not carry any semantics. For example, an HTTP method that wishes to use four datagram flows for the lifetime of its request stream could look like this: In this example, 42 is the unnamed flow, 44 represents the name "foo", 46 represents "bar", and 48 represents "baz".
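The list encoding described here can be read with a simplified parser. The sketch below is non-normative and mine: it handles only integer items with at most one bare-key parameter (boolean true), not the full Structured Fields grammar of RFC 8941.

```python
# Simplified, non-normative reader for a "Datagram-Flow-Id" list such as
# "42, 44;foo, 46;bar, 48;baz": at most one unnamed flow, unique names,
# and flow IDs capped at the structured-field integer maximum of 10^15-1.

MAX_SF_INTEGER = 10**15 - 1

def parse_datagram_flow_id(value: str) -> dict:
    flows: dict = {}  # maps name (None for the unnamed element) -> flow ID
    for member in value.split(","):
        item, *params = (p.strip() for p in member.split(";"))
        flow_id = int(item)
        if not 0 <= flow_id <= MAX_SF_INTEGER:
            raise ValueError("flow ID out of range; the message is malformed")
        name = params[0] if params else None  # bare key == boolean true
        if name in flows:
            raise ValueError("duplicate name" if name else "two unnamed flows")
        flows[name] = flow_id
    return flows
```

A production implementation should use a real RFC 8941 parser, which also covers quoted strings, decimals, and multi-parameter items that this sketch ignores.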
Note that, since the list ordering does not carry semantics, this example can be equivalently encoded as: Even if a sender attempts to communicate the meaning of a flow before it uses it in an HTTP/3 datagram, it is possible that its peer will receive an HTTP/3 datagram with a flow ID that it does not know, as it has not yet received the corresponding "Datagram-Flow-Id" header field. (For example, this could happen if the QUIC STREAM frame that contains the "Datagram-Flow-Id" header field is reordered and arrives after the DATAGRAM frame.) Endpoints MUST NOT treat that scenario as an error; they MUST either silently discard the datagram or buffer it until they receive the "Datagram-Flow-Id" header field. Distinct HTTP requests MAY refer to the same flow in their respective "Datagram-Flow-Id" header fields. Note that integer structured fields can only encode values up to 10^15-1; therefore, the maximum possible value of an element of the "Datagram-Flow-Id" header field is lower than the theoretical maximum value of a flow ID, which is 2^62-1 due to the QUIC variable-length integer encoding. If the flow allocation service of an endpoint runs out of flow ID values lower than 10^15-1, the endpoint MUST fail the flow ID allocation. An HTTP message that carries a "Datagram-Flow-Id" header field with a flow ID value above 10^15-1 is malformed (see Section 8.1.2.6 of H2). 7. HTTP/3 DATAGRAM flows are specific to a given HTTP/3 connection. However, in some cases, an HTTP request may travel across multiple HTTP connections if there are HTTP intermediaries involved; see Section 2.3 of RFC7230. If an intermediary has sent the H3_DATAGRAM SETTINGS parameter with a value of 1 on its client-facing connection, it MUST inspect all HTTP requests from that connection and check for the presence of the "Datagram-Flow-Id" header field.
If the HTTP method of the request is not supported by the intermediary, it MUST remove the "Datagram-Flow-Id" header field before forwarding the request. If the intermediary supports the method, it MUST either remove the header field or adhere to the requirements leveraged by that method on intermediaries. If an intermediary has sent the H3_DATAGRAM SETTINGS parameter with a value of 1 on its server-facing connection, it MUST inspect all HTTP responses from that connection and check for the presence of the "Datagram-Flow-Id" header field. If the HTTP method of the request is not supported by the intermediary, it MUST remove the "Datagram-Flow-Id" header field before forwarding the response. If the intermediary supports the method, it MUST either remove the header field or adhere to the requirements leveraged by that method on intermediaries. If an intermediary processes distinct HTTP requests that refer to the same flow ID in their respective "Datagram-Flow-Id" header fields, it MUST ensure that those requests are routed to the same backend. 8. Since this feature requires sending an HTTP/3 Settings parameter, it "sticks out". In other words, probing clients can learn whether a", "comments": "This PR aims to take the design discussions we had at the last interim, and propose a potential solution. This PR should in theory contain enough information to allow folks to implement this new design. That will allow us to perform interop testing and confirm whether the design is sound.\nI'm confused by this pull request (probably because I was out of the loop for a while). My understanding, at least when I first read the proposal a month ago, was that we would separate multiplexing streams and multiplexing contexts into different layers, so that methods that need context would get it (CONNECT-UDP), and features that don't (WebTransport/extended CONNECT) could use stream-associated datagrams directly.\nNAME you're referring to which this PR isn't trying to address.
That'll happen in a subsequent PR.\nIn , capsules, contexts and context IDs are explicitly defined as end-to-end and there are requirements for intermediaries to not modify them, and to not even parse them. The motivation for this was to ensure that we retain the ability to deploy extensions end-to-end without having to modify intermediaries. NAME mentioned that these requirements might be too strict, as he would like to have his intermediary be able to parse the context ID, for logging purposes.\nPart of the comment from the PR is also about consistency of interface for generic HTTP libraries that can be used to build either endpoints or intermediaries. If the context ID is always present, it's overly prescriptive to say that an intermediary can't parse it, and such a requirement couldn't be enforced anyways. If context IDs are negotiated on/off, whether this is possible depends on the negotiation mechanism.\nI think it's fine for a component that has bytes passing through it to look at those bytes. The important thing to do is set expectations on what that means for the protocol behaviour we are defining. We don't expect intermediaries to do anything with a context ID; they should not enforce the protocol semantic rules (odd, even, reuse etc). H2 and H3 somewhat solve this problem by decoupling frame parsing from frame payload interpretation. E.g. if the form of a request or response is invalid, an intermediary shouldn't pass it on. DATAGRAM is different because it isn't defined as an actual HTTP/3 frame with frame headers and frame payload. Something that might work is to formalise that the quarter stream ID is part of the envelope that intermediaries have to be able to handle the syntax of, but the context ID is part of content and separate.\nUnless you encrypt capsules end-to-end, I don't think any text in the spec can in practice prevent intermediaries from inspecting and modifying capsules.
You can send something like GREASE capsules, though.\nIn , we introduce the CLOSE_DATAGRAM_CONTEXT capsule which allows endpoints to close a context. That message currently only contains the context ID to close. NAME suggests that we may want to add more information there. Should we differentiate between "context close" and "context registration rejected"? Should we add an error code? In particular, do folks have use cases for these features?\nMy first concern is with infinite loops. If the context was "garbage collected" by the recipient, the sender can simply re-create it if they still want it. However, if it was closed due to incompatibility (i.e. rejected) or hitting a resource limit (e.g. max # of contexts), reopening it in response will produce a tight infinite loop. This could maybe be avoided by some kind of heuristic, but an explicit indication of what's going on seems better.
It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. I think having the context ID be the default behavior and the header is to opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option \"Negotiate the presence of context IDs using an HTTP header\" is the best personally. In terms of: \"Context IDs would still be mandatory to implement on servers because the client might send that header\", I assume server in this context is the final application server, not a proxy? Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say \"proxy\"? If so, then an intermediary need not implement anything based on the payload of than H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second ((third, fourth, fifth) layer of frame header. I think its fine for us to state as much in this in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. 
And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. I don't see a point in using explicit negotiation, since in the two cases I can think of the top of the head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach(3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nDuring the 2021-04 MASQUE Interim, we discussed multiple options for registering context IDs. Without going into the specific of the encoding, there are two broad classes of solutions: \"Header\" design Once-at-setup registration This happens during the HTTP request/response Simplest solution is to use an HTTP header such as (encoding TBD) Example use-case: CONNECT-UDP without extensions \"Message\" design Mid-stream registration This can happen at any point during the lifetime of the request stream This would involve some sort of \"register\" message (encoding TBD, see separate issue) Example use-case: CONNECT-IP on-the-fly compression (this can't be done at tunnel setup time because the bits top be compressed aren't always known at that time) It's possible to implement once-at-setup registration using the Message design, but it isn't possible to implement mid-stream registration. Therefore I think we should go with the Message design. 
It would also be possible to implement both but I don't think that provides much value.\nI'd like to here some more from the WebTransport use case on the issue. My understanding is that the Requests are used to create WebTransport sessions and that creation of datagram flows(contexts) could be a lot more ad-hoc / server generated after the request phase, when compared to the CONNECT-UDP case. Cc NAME\nPersonally, I'd vote for either \"message\" or \"message + header\" (a la priorities), but not just \"header\". Mid-stream annotation of context/flow/etc is quite useful.\nI prefer \"Message\" only, since the dynamic use case seems like a fairly compelling use of sub-flows(contexts?). I'm not sure there are many cases when one allocates all sub-flows up front AND needs to use a header to do that in a flexible way. If an application needs multiple sub-flows, but they're fixed in number/format(ie: data and control), I don't think there's a need for a header or a \"Message\", since the application can consider that part of its internal format.\nDuring the 2021-04 MASQUE Interim, we discussed having the ability to accept or reject registration of context IDs. An important motivation for this is the fact that context ID registrations take up memory (for example it could be a compression context which contains the data that is elided) and therefore we need to prevent endpoints from arbitrarily making their peer allocate memory. While we could flow control context ID registration, a much simpler solution would be to have a way for endpoints to close/shutdown a registration.\nIIUC there's an opposing view that the constraints can be negotiated apiori. Either statically (such as defining an extension which states that there can only be N contenxts of a given type) or dynamically using something like a HTTP header. 
Failure to meet these constraints could lead to an HTTP/3 stream or connection error rather than a flow rejection.\nNAME I've filed to discuss whether to register/negotiate at request start or mid-stream.\nI think that if there is the ability to register mid-stream (), we need the ability to close/de-register/reject mid-stream as well. If we don't have mid-stream changes, then no need to negotiate. However, someone trying to register something needs to be able to know whether or not the thing they registered will be accepted.\nI agree that notification of the registration status is particularly important if we allow mid-stream registration.\nIn draft-ietf-masque-h3-datagram-00, flow IDs are bidirectional. During the 2021-04 MASQUE Interim, we discussed the possibility of making them unidirectional. Here are the differences: Bidirectional single shared namespace for both endpoints (even for client, odd for server) it's easier to refer to peer's IDs, which makes negotiation easier Unidirectional one namespace per direction every bidirectional use needs to associate both directions somehow I'm personally leaning towards keeping them bidirectional. Unlike QUIC streams, these do not have a state machine or flow control credits, so there is no cost to only using a single direction of a bidirectional flow for unidirectional use-cases.\nThere's another option where flows are unidirectional but they share a namespace (client even/server odd). I'm undecided. And I also don't know how important it is to choose. The 2-layer design means that DATAGRAMs always have a bidirectional tie via the stream ID. On balance, I think bidirectional would be good enough today but I would be receptive to extension use cases that make a strong point of unidirectional.\nBidirectional is my personal preference, but either way could work ultimately. 
It seems to me that any extension that wants unidirectional can just use this context/flow/etc ID for only one direction if it wants.\nI think bidirectional flows are likely overcomplicated and unnecessary. For example, if either peer can "close" a flow, this requires a much more complex state machine. In my experience, half-closed states are hard to get right. Closing unidirectional flows seems much easier to reason about. Many, perhaps most, of our use cases for flows are really unidirectional. For example, DSCP marking likely only makes sense from client to server, while timestamps for congestion control tuning likely only make sense from server to client. QUIC connection IDs are different in each direction, so QUIC header compression may be easiest to understand with unidirectional flows.\nBidirectional doesn't imply half-close states. I'm thinking of a bidirectional flow that can only be closed atomically as a whole. I don't think that's true. The default extension-less CONNECT-UDP is bidirectional. So is ECN, and so would a timestamp extension.\nI would prefer bidirectional without support for half-close. I feel that having unidirectional flow/context IDs adds overhead we don't need; peers now need to track those associations and it's unclear what the benefits of having them separated out are.\nThis issue is conflating consent (unilateral/bilateral) with scope (unidirectional/bidirectional). IIUC there are two questions here: 1) Can I send a flow-id forever once declared, or can the endpoint tell me to stop without killing the whole stream (consent)? 2) Can the receiver of a flow-id assignment use that same flow-id when sending datagrams, or could this mean something completely different? I think the answer to (1) is "yes, it can tell me to stop" (i.e. it is negotiated while allowing speculative sending). For (2), I lean toward bidirectional, but not strongly. 
There are plenty of codepoints, so it seems wasteful for each endpoint to define the same semantic explicitly.\nNAME the intent of this issue wasn't to conflate those two things. This issue is about scope. For consent, see issue .\nConceptually, I like unidirectional better. I tell the peer that when I send with context-ID=X, it has a given semantic. For bidirectional the sender is saying both that and that if the peer sends on that ID the original sender will interpret with the same semantic. That might not even make sense. Is it possible to associate datagrams with a unidirectional stream (push?). If so, does allowing for bidirectional context imply that even though the receiver cannot send frames on the stream, it can send datagrams? Every negotiation does run the risk of loss/reordering resulting in a drop or buffering event at the receiver. The appeal of bidirectional is that it removes half of this risk in the cases where the bidirectionality is used.\nTo second NAME I would also prefer bidirectional without support for half-close. Given you can always not send something in one direction, and there's none of the overhead of streams(ie: flow control, etc) I can't see any benefits of unidirectional over bidirectional in this case. Associating h3-datagrams with a server push seems like unnecessary complexity to me, but you're right that it should be clarified.\nDuring the 2021-04 MASQUE Interim, we discussed a design proposal from NAME to replace the one-layer flow ID design with a two-layer design. This design replaces the flow ID with two numbers: a stream ID and a context ID. The contents of the QUIC DATAGRAM frame would now look like: While the flow ID in the draft was per-hop, the proposal has better separation: the stream ID is per-hop, it maps to an HTTP request the context ID is end-to-end, it maps to context information inside that request Intermediaries now only look at stream IDs and can be blissfully ignorant of context IDs. 
In the room at the 2021-04 MASQUE Interim, multiple participants spoke in support of this design and no one raised objections. This issue exists to ensure that we have consensus on this change - please speak up if you disagree.\nI am fine with the 2-layer design for CONNECT-UDP, but I am not sure if the second identifier should be in H3 DATAGRAM or should be specific to CONNECT-UDP. As I indicated to the list, webtrans and httpbis might weigh in on this with broader consideration of other use cases.\nHi NAME the topic of the optionality of context IDs is discussed in issue .\nI think this is a good design, because it minimizes the core functionality of the draft, but keeps every application that needs multiple dynamically created sub-flows from re-inventing a different mechanism. Nit: I would not call this a 'Frame', since it's not an HTTP(TLV) frame.\nI think this is a good change. Thanks! There are some things that I think we should still discuss, but we shouldn't hold up this change on those details.", "new_text": "6. We can provide DATAGRAM support in HTTP/2 by defining the CAPSULE frame in HTTP/2. We can provide DATAGRAM support in HTTP/1.x by defining its data stream format as a sequence of length-value capsules. TODO: Refactor this document into \"HTTP Datagrams\" with definitions for HTTP/1.x, HTTP/2, and HTTP/3. 7. Since this feature requires sending an HTTP/3 Settings parameter, it \"sticks out\". In other words, probing clients can learn whether a"}
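The two-layer DATAGRAM layout discussed in the thread above (per-hop stream ID, then end-to-end context ID, then payload) can be sketched in a few lines. This is a hedged Python illustration, assuming both identifiers are QUIC variable-length integers per RFC 9000 Section 16; the function names are illustrative, not from the draft:

```python
def encode_varint(v: int) -> bytes:
    """Encode v as a QUIC variable-length integer (RFC 9000, Section 16)."""
    if v < 2**6:
        return v.to_bytes(1, "big")
    if v < 2**14:
        return (v | (1 << 14)).to_bytes(2, "big")
    if v < 2**30:
        return (v | (2 << 30)).to_bytes(4, "big")
    if v < 2**62:
        return (v | (3 << 62)).to_bytes(8, "big")
    raise ValueError("value does not fit in a varint")

def decode_varint(buf: bytes, pos: int = 0) -> tuple[int, int]:
    """Return (value, next_pos) for the varint starting at buf[pos]."""
    length = 1 << (buf[pos] >> 6)          # 2-bit prefix selects 1/2/4/8 bytes
    value = int.from_bytes(buf[pos:pos + length], "big")
    value &= (1 << (8 * length - 2)) - 1   # mask off the length prefix
    return value, pos + length

def encode_h3_datagram(stream_id: int, context_id: int, payload: bytes) -> bytes:
    """Two-layer layout: per-hop stream ID first, end-to-end context ID second."""
    return encode_varint(stream_id) + encode_varint(context_id) + payload
```

An intermediary in this design would only decode (and possibly rewrite) the first varint before forwarding; the context ID and payload pass through opaquely.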
{"id": "q-en-draft-ietf-masque-h3-datagram-0f240ea7e78ea93b681426458668d2e355f32d72e9eb6c22302dbdfdfc75e156", "old_text": "the fact that there are applications using HTTP/3 datagrams enabled on this endpoint. 9. 9.1. This document will request IANA to register the following entry in the \"HTTP/3 Settings\" registry: 9.2. This document will request IANA to register the \"Datagram-Flow-Id\" header field in the \"Permanent Message Header Field Names\" registry maintained at <>. 9.3. This document will request IANA to create an \"HTTP Datagram Flow Parameters\" registry. Registrations in this registry MUST include the following fields: The key of a parameter that is associated with a datagram flow list member (see header). Keys MUST be valid structured field parameter keys (see Section 3.1.2 of STRUCT-FIELD). A brief description of the parameter semantics, which MAY be a summary if a specification reference is provided. This field MUST be either Yes or No. Yes indicates that this parameter is the name of a named element (see header). No indicates that it is a parameter that is not a name. An optional reference to a specification for the parameter. This field MAY be empty.", "comments": "This PR aims to take the design discussions we had at the last interim, and propose a potential solution. This PR should in theory contain enough information to allow folks to implement this new design. That will allow us to perform interop testing and confirm whether the design is sound.\nI'm confused by this pull request (probably because I was out of the loop for a while). My understanding, at least when I first read the proposal a month ago was that we would separate multiplexing streams and multiplexing contexts into different layers, so that methods that need context would get it (CONNECT-UDP), and features that don't (WebTransport/extended CONNECT) could use stream-associated datagrams directly.\nNAME you're referring to which this PR isn't trying to address. 
That'll happen in a subsequent PR.\nIn , capsules, contexts and context IDs are explicitly defined as end-to-end and there are requirements for intermediaries to not modify them, and to not even parse them. The motivation for this was to ensure that we retain the ability to deploy extensions end-to-end without having to modify intermediaries. NAME mentioned that these requirements might be too strict, as he would like to have his intermediary be able to parse the context ID, for logging purposes.\nPart of the comment from the PR is also about consistency of interface for generic HTTP libraries that can be used to build either endpoints or intermediaries. If the context ID is always present, it's overly prescriptive to say that an intermediary can't parse it, and such a requirement couldn't be enforced anyways. If context IDs are negotiated on/off, whether this is possible depends on the negotiation mechanism.\nI think it's fine for a component that has bytes passing through it to look at those bytes. The important thing to do is set expectations on what that means for the protocol behaviour we are defining. We don't expect intermediaries to do anything with a context ID; they should not enforce the protocol semantic rules (odd, even, reuse etc). H2 and H3 somewhat solve this problem by decoupling frame parsing from frame payload interpretation. E.g. if the form of a request or response is invalid, an intermediary shouldn't pass it on. DATAGRAM is different because it isn't defined as an actual HTTP/3 frame with frame headers and frame payload. Something that might work is to formalise that quarter stream ID is part of the envelope that intermediaries have to be able to handle the syntax of, but context ID is part of content and separate.\nUnless you encrypt capsules end-to-end, I don't think any text in the spec can in practice prevent intermediaries from inspecting and modifying capsules. 
You can send something like GREASE capsules, though.\nIn , we introduce the CLOSEDATAGRAMCONTEXT capsule which allows endpoints to close a context. That message currently only contains the context ID to close. NAME suggests that we may want to add more information there. Should we differentiate between "context close" and "context registration rejected"? Should we add an error code? In particular, do folks have use cases for these features?\nMy first concern is with infinite loops. If the context was "garbage collected" by the recipient, the sender can simply re-create it if they still want it. However, if it was closed due to incompatibility (i.e. rejected) or hitting a resource limit (e.g. max # of contexts), reopening it in response will produce a tight infinite loop. This could maybe be avoided by some kind of heuristic, but an explicit indication of what's going on seems better.\nThis issue assumes that we decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs. We have multiple options here: Make context ID mandatory Applications that don't need it waste a byte per datagram Negotiate the presence of context IDs using an HTTP header Context IDs would still be mandatory to implement on servers because the client might send that header Have the method determine whether context IDs are present or not This would prevent extensibility on those methods Create a REGISTERNOCONTEXT message This assumes we use the Message design from We add a register message that means that this stream does not use context IDs This allows avoiding the byte overhead without sacrificing extensibility\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. 
It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. I think having the context ID be the default behavior and the header be the opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option "Negotiate the presence of context IDs using an HTTP header" is the best personally.\nIn terms of: "Context IDs would still be mandatory to implement on servers because the client might send that header", I assume server in this context is the final application server, not a proxy? Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say "proxy"? If so, then an intermediary need not implement anything based on the payload of an H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second (third, fourth, fifth) layer of frame header. I think it's fine for us to state as much in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. 
And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. I don't see a point in using explicit negotiation, since in the two cases I can think of off the top of my head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach (3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nDuring the 2021-04 MASQUE Interim, we discussed multiple options for registering context IDs. Without going into the specifics of the encoding, there are two broad classes of solutions: "Header" design Once-at-setup registration This happens during the HTTP request/response Simplest solution is to use an HTTP header such as (encoding TBD) Example use-case: CONNECT-UDP without extensions "Message" design Mid-stream registration This can happen at any point during the lifetime of the request stream This would involve some sort of "register" message (encoding TBD, see separate issue) Example use-case: CONNECT-IP on-the-fly compression (this can't be done at tunnel setup time because the bits to be compressed aren't always known at that time) It's possible to implement once-at-setup registration using the Message design, but it isn't possible to implement mid-stream registration using the Header design. Therefore I think we should go with the Message design. 
It would also be possible to implement both but I don't think that provides much value.\nI'd like to hear some more from the WebTransport use case on the issue. My understanding is that the Requests are used to create WebTransport sessions and that creation of datagram flows(contexts) could be a lot more ad-hoc / server generated after the request phase, when compared to the CONNECT-UDP case. Cc NAME\nPersonally, I'd vote for either "message" or "message + header" (a la priorities), but not just "header". Mid-stream annotation of context/flow/etc is quite useful.\nI prefer "Message" only, since the dynamic use case seems like a fairly compelling use of sub-flows(contexts?). I'm not sure there are many cases when one allocates all sub-flows up front AND needs to use a header to do that in a flexible way. If an application needs multiple sub-flows, but they're fixed in number/format(ie: data and control), I don't think there's a need for a header or a "Message", since the application can consider that part of its internal format.\nDuring the 2021-04 MASQUE Interim, we discussed having the ability to accept or reject registration of context IDs. An important motivation for this is the fact that context ID registrations take up memory (for example it could be a compression context which contains the data that is elided) and therefore we need to prevent endpoints from arbitrarily making their peer allocate memory. While we could flow control context ID registration, a much simpler solution would be to have a way for endpoints to close/shutdown a registration.\nIIUC there's an opposing view that the constraints can be negotiated a priori. Either statically (such as defining an extension which states that there can only be N contexts of a given type) or dynamically using something like an HTTP header. 
Failure to meet these constraints could lead to an HTTP/3 stream or connection error rather than a flow rejection.\nNAME I've filed to discuss whether to register/negotiate at request start or mid-stream.\nI think that if there is the ability to register mid-stream (), we need the ability to close/de-register/reject mid-stream as well. If we don't have mid-stream changes, then no need to negotiate. However, someone trying to register something needs to be able to know whether or not the thing they registered will be accepted.\nI agree that notification of the registration status is particularly important if we allow mid-stream registration.\nIn draft-ietf-masque-h3-datagram-00, flow IDs are bidirectional. During the 2021-04 MASQUE Interim, we discussed the possibility of making them unidirectional. Here are the differences: Bidirectional single shared namespace for both endpoints (even for client, odd for server) it's easier to refer to peer's IDs, which makes negotiation easier Unidirectional one namespace per direction every bidirectional use needs to associate both directions somehow I'm personally leaning towards keeping them bidirectional. Unlike QUIC streams, these do not have a state machine or flow control credits, so there is no cost to only using a single direction of a bidirectional flow for unidirectional use-cases.\nThere's another option where flows are unidirectional but they share a namespace (client even/server odd). I'm undecided. And I also don't know how important it is to choose. The 2-layer design means that DATAGRAMs always have a bidirectional tie via the stream ID. On balance, I think bidirectional would be good enough today but I would be receptive to extension use cases that make a strong point of unidirectional.\nBidirectional is my personal preference, but either way could work ultimately. 
It seems to me that any extension that wants unidirectional can just use this context/flow/etc ID for only one direction if it wants.\nI think bidirectional flows are likely overcomplicated and unnecessary. For example, if either peer can "close" a flow, this requires a much more complex state machine. In my experience, half-closed states are hard to get right. Closing unidirectional flows seems much easier to reason about. Many, perhaps most, of our use cases for flows are really unidirectional. For example, DSCP marking likely only makes sense from client to server, while timestamps for congestion control tuning likely only make sense from server to client. QUIC connection IDs are different in each direction, so QUIC header compression may be easiest to understand with unidirectional flows.\nBidirectional doesn't imply half-close states. I'm thinking of a bidirectional flow that can only be closed atomically as a whole. I don't think that's true. The default extension-less CONNECT-UDP is bidirectional. So is ECN, and so would a timestamp extension.\nI would prefer bidirectional without support for half-close. I feel that having unidirectional flow/context IDs adds overhead we don't need; peers now need to track those associations and it's unclear what the benefits of having them separated out are.\nThis issue is conflating consent (unilateral/bilateral) with scope (unidirectional/bidirectional). IIUC there are two questions here: 1) Can I send a flow-id forever once declared, or can the endpoint tell me to stop without killing the whole stream (consent)? 2) Can the receiver of a flow-id assignment use that same flow-id when sending datagrams, or could this mean something completely different? I think the answer to (1) is "yes, it can tell me to stop" (i.e. it is negotiated while allowing speculative sending). For (2), I lean toward bidirectional, but not strongly. 
There are plenty of codepoints, so it seems wasteful for each endpoint to define the same semantic explicitly.\nNAME the intent of this issue wasn't to conflate those two things. This issue is about scope. For consent, see issue .\nConceptually, I like unidirectional better. I tell the peer that when I send with context-ID=X, it has a given semantic. For bidirectional the sender is saying both that and that if the peer sends on that ID the original sender will interpret with the same semantic. That might not even make sense. Is it possible to associate datagrams with a unidirectional stream (push?). If so, does allowing for bidirectional context imply that even though the receiver cannot send frames on the stream, it can send datagrams? Every negotiation does run the risk of loss/reordering resulting in a drop or buffering event at the receiver. The appeal of bidirectional is that it removes half of this risk in the cases where the bidirectionality is used.\nTo second NAME I would also prefer bidirectional without support for half-close. Given you can always not send something in one direction, and there's none of the overhead of streams(ie: flow control, etc) I can't see any benefits of unidirectional over bidirectional in this case. Associating h3-datagrams with a server push seems like unnecessary complexity to me, but you're right that it should be clarified.\nDuring the 2021-04 MASQUE Interim, we discussed a design proposal from NAME to replace the one-layer flow ID design with a two-layer design. This design replaces the flow ID with two numbers: a stream ID and a context ID. The contents of the QUIC DATAGRAM frame would now look like: While the flow ID in the draft was per-hop, the proposal has better separation: the stream ID is per-hop, it maps to an HTTP request the context ID is end-to-end, it maps to context information inside that request Intermediaries now only look at stream IDs and can be blissfully ignorant of context IDs. 
In the room at the 2021-04 MASQUE Interim, multiple participants spoke in support of this design and no one raised objections. This issue exists to ensure that we have consensus on this change - please speak up if you disagree.\nI am fine with the 2-layer design for CONNECT-UDP, but I am not sure if the second identifier should be in H3 DATAGRAM or should be specific to CONNECT-UDP. As I indicated to the list, webtrans and httpbis might weigh in on this with broader consideration of other use cases.\nHi NAME the topic of the optionality of context IDs is discussed in issue .\nI think this is a good design, because it minimizes the core functionality of the draft, but keeps every application that needs multiple dynamically created sub-flows from re-inventing a different mechanism. Nit: I would not call this a 'Frame', since it's not an HTTP(TLV) frame.\nI think this is a good change. Thanks! There are some things that I think we should still discuss, but we shouldn't hold up this change on those details.", "new_text": "the fact that there are applications using HTTP/3 datagrams enabled on this endpoint. 8. 8.1. This document will request IANA to register the following entry in the \"HTTP/3 Frames\" registry: 8.2. This document will request IANA to register the following entry in the \"HTTP/3 Settings\" registry: 8.3. This document establishes a registry for HTTP capsule type codes. The \"HTTP Capsule Types\" registry governs a 62-bit space. Registrations in this registry MUST include the following fields: Type: A name or label for the capsule type. The value of the Capsule Type field (see capsule-frame) is a 62-bit integer. An optional reference to a specification for the type. This field MAY be empty. Registrations follow the \"First Come First Served\" policy (see Section 4.4 of IANA-POLICY) where two registrations MUST NOT have the same Type. This registry initially contains the following entries: 8.4. 
REGISTER_DATAGRAM_CONTEXT capsules carry key-value pairs, see register-capsule. This document will request IANA to create an \"HTTP Datagram Context Extension Keys\" registry. Registrations in this registry MUST include the following fields: The key (see register-capsule). Keys MUST be valid tokens as defined in Section 3.2.6 of RFC7230. A brief description of the key semantics, which MAY be a summary if a specification reference is provided. An optional reference to a specification for the parameter. This field MAY be empty."}
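The capsule sequence that the registry text above assigns types for can be consumed with a simple type-length-value loop. This is a hedged sketch, assuming each capsule is a QUIC-varint Capsule Type from the 62-bit space, a varint length, and that many value bytes; `parse_capsules` is an illustrative name, not from the draft:

```python
def decode_varint(buf: bytes, pos: int = 0) -> tuple[int, int]:
    """Return (value, next_pos) for the QUIC varint starting at buf[pos]."""
    length = 1 << (buf[pos] >> 6)  # 2-bit prefix selects 1/2/4/8 bytes
    value = int.from_bytes(buf[pos:pos + length], "big") & ((1 << (8 * length - 2)) - 1)
    return value, pos + length

def parse_capsules(stream: bytes):
    """Yield (capsule_type, value) pairs from a buffered capsule sequence."""
    pos = 0
    while pos < len(stream):
        ctype, pos = decode_varint(stream, pos)   # Capsule Type (62-bit space)
        length, pos = decode_varint(stream, pos)  # Capsule Length
        value, pos = stream[pos:pos + length], pos + length
        yield ctype, value
```

A receiver built this way can skip unknown capsule types by length alone, which is what lets extension capsules traverse endpoints that do not understand them.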
{"id": "q-en-draft-ietf-masque-h3-datagram-0f240ea7e78ea93b681426458668d2e355f32d72e9eb6c22302dbdfdfc75e156", "old_text": "Section 4.4 of IANA-POLICY) where two registrations MUST NOT have the same Key. This registry is initially empty. 10. References 10.1. URIs [1] mailto:masque@ietf.org", "comments": "This PR aims to take the design discussions we had at the last interim, and propose a potential solution. This PR should in theory contain enough information to allow folks to implement this new design. That will allow us to perform interop testing and confirm whether the design is sound.\nI'm confused by this pull request (probably because I was out of the loop for a while). My understanding, at least when I first read the proposal a month ago was that we would separate multiplexing streams and multiplexing contexts into different layers, so that methods that need context would get it (CONNECT-UDP), and features that don't (WebTransport/extended CONNECT) could use stream-associated datagrams directly.\nNAME you're referring to which this PR isn't trying to address. That'll happen in a subsequent PR.\nIn , capsules, contexts and context IDs are explicitly defined as end-to-end and there are requirements for intermediaries to not modify them, and to not even parse them. The motivation for this was to ensure that we retain the ability to deploy extensions end-to-end without having to modify intermediaries. NAME mentioned that these requirements might be too strict, as he would like to have his intermediary be able to parse the context ID, for logging purposes.\nPart of the comment from the PR is also about consistency of interface for generic HTTP libraries that can be used to build either endpoints or intermediaries. If the context ID is always present, it's overly prescriptive to say that an intermediary can't parse it, and such a requirement couldn't be enforced anyways. 
If context IDs are negotiated on/off, whether this is possible depends on the negotiation mechanism.\nI think it's fine for a component that has bytes passing through it to look at those bytes. The important thing to do is set expectations on what that means for the protocol behaviour we are defining. We don't expect intermediaries to do anything with a context ID; they should not enforce the protocol semantic rules (odd, even, reuse etc). H2 and H3 somewhat solve this problem by decoupling frame parsing from frame payload interpretation. E.g. if the form of a request or response is invalid, an intermediary shouldn't pass it on. DATAGRAM is different because it isn't defined as an actual HTTP/3 frame with frame headers and frame payload. Something that might work is to formalise that quarter stream ID is part of the envelope that intermediaries have to be able to handle the syntax of, but context ID is part of content and separate.\nUnless you encrypt capsules end-to-end, I don't think any text in the spec can in practice prevent intermediaries from inspecting and modifying capsules. You can send something like GREASE capsules, though.\nIn , we introduce the CLOSEDATAGRAMCONTEXT capsule which allows endpoints to close a context. That message currently only contains the context ID to close. NAME suggests that we may want to add more information there. Should we differentiate between "context close" and "context registration rejected"? Should we add an error code? In particular, do folks have use cases for these features?\nMy first concern is with infinite loops. If the context was "garbage collected" by the recipient, the sender can simply re-create it if they still want it. However, if it was closed due to incompatibility (i.e. rejected) or hitting a resource limit (e.g. max # of contexts), reopening it in response will produce a tight infinite loop. 
This could maybe be avoided by some kind of heuristic, but an explicit indication of what's going on seems better.\nThis issue assumes that we decide to go with the two-layer design described in . Given that design, some applications might not need the multiplexing provided by context IDs. We have multiple options here: Make context ID mandatory Applications that don't need it waste a byte per datagram Negotiate the presence of context IDs using an HTTP header Context IDs would still be mandatory to implement on servers because the client might send that header Have the method determine whether context IDs are present or not This would prevent extensibility on those methods Create a REGISTERNOCONTEXT message This assumes we use the Message design from We add a register message that means that this stream does not use context IDs This allows avoiding the byte overhead without sacrificing extensibility\nMaking the context ID mandatory is simple, although slightly wasteful. For the variations where it is optional, I prefer the one that uses a header. It could work with a variant of where we do Header + Message, and the Header determines whether or not you have contexts, and whether or not they can be dynamic.\nWhat are the benefits of message over header to make it optional? To me it seems like header is sufficient, because the context IDs are end-to-end, and should not affect intermediaries, making the header a natural place to put it. I think having the context ID be the default behavior and the header be the opt-out is the right decision if we were to allow removing it.\nThis is a bit of a bike-shed, but why would it default to having context-IDs instead of not having them?\nI think the second option "Negotiate the presence of context IDs using an HTTP header" is the best personally. 
In terms of: "Context IDs would still be mandatory to implement on servers because the client might send that header", I assume server in this context is the final application server, not a proxy? Even so, I'd argue it's applications which decide if they need this functionality, so if a server only implemented one application and it didn't use Context-IDs, then I don't see why it would need to implement it.\nIan, in your comment, are you talking about an intermediary when you say "proxy"? If so, then an intermediary need not implement anything based on the payload of an H3 DATAGRAM, it only has to pass the data along. That may be worth a clarification.\nHaving context ID optional has a nice future-proof property - other uses of datagrams in HTTP/3 can decide if they want the second layer or not. I wouldn't even state it as application needing multiplexing or not. Other applications might find it very useful to include a second (third, fourth, fifth) layer of frame header. I think it's fine for us to state as much in the spec, while also defining an explicit second layer for multiplexing.\nYes, by proxy I meant intermediary. And I agree about what intermediaries need to do, I just wanted to ensure I understood what server meant when David used it above. Yup, whenever all this gets written up, we can make it clear that intermediaries can ignore much of this.\nThe method defines the semantics of datagrams for a particular request, thus I think the third option makes most sense. I don't see a point in using explicit negotiation, since in the two cases I can think of off the top of my head, the answer is always clear from the method (CONNECT-IP seems to always want context IDs, WebTransport does not ever want context IDs). 
If some method starts out without context, but wants to add context later, it can add its own negotiation.\nOne set of use cases for which the method approach (3) is best are those where there are multiple context IDs but they're predefined by the method and don't need negotiation. My strawman is WebRTC.\nDuring the 2021-04 MASQUE Interim, we discussed multiple options for registering context IDs. Without going into the specifics of the encoding, there are two broad classes of solutions: "Header" design Once-at-setup registration This happens during the HTTP request/response Simplest solution is to use an HTTP header such as (encoding TBD) Example use-case: CONNECT-UDP without extensions "Message" design Mid-stream registration This can happen at any point during the lifetime of the request stream This would involve some sort of "register" message (encoding TBD, see separate issue) Example use-case: CONNECT-IP on-the-fly compression (this can't be done at tunnel setup time because the bits to be compressed aren't always known at that time) It's possible to implement once-at-setup registration using the Message design, but it isn't possible to implement mid-stream registration using the Header design. Therefore I think we should go with the Message design. It would also be possible to implement both but I don't think that provides much value.\nI'd like to hear some more from the WebTransport use case on the issue. My understanding is that the Requests are used to create WebTransport sessions and that creation of datagram flows (contexts) could be a lot more ad-hoc / server generated after the request phase, when compared to the CONNECT-UDP case. Cc NAME\nPersonally, I'd vote for either "message" or "message + header" (a la priorities), but not just "header". Mid-stream annotation of context/flow/etc is quite useful.\nI prefer "Message" only, since the dynamic use case seems like a fairly compelling use of sub-flows (contexts?). 
I'm not sure there are many cases when one allocates all sub-flows up front AND needs to use a header to do that in a flexible way. If an application needs multiple sub-flows, but they're fixed in number/format (i.e. data and control), I don't think there's a need for a header or a "Message", since the application can consider that part of its internal format.\nDuring the 2021-04 MASQUE Interim, we discussed having the ability to accept or reject registration of context IDs. An important motivation for this is the fact that context ID registrations take up memory (for example it could be a compression context which contains the data that is elided) and therefore we need to prevent endpoints from arbitrarily making their peer allocate memory. While we could flow control context ID registration, a much simpler solution would be to have a way for endpoints to close/shutdown a registration.\nIIUC there's an opposing view that the constraints can be negotiated a priori. Either statically (such as defining an extension which states that there can only be N contexts of a given type) or dynamically using something like an HTTP header. Failure to meet these constraints could lead to an HTTP/3 stream or connection error rather than a flow rejection.\nNAME I've filed to discuss whether to register/negotiate at request start or mid-stream.\nI think that if there is the ability to register mid-stream (), we need the ability to close/de-register/reject mid-stream as well. If we don't have mid-stream changes, then no need to negotiate. However, someone trying to register something needs to be able to know whether or not the thing they registered will be accepted.\nI agree that notification of the registration status is particularly important if we allow mid-stream registration.\nIn draft-ietf-masque-h3-datagram-00, flow IDs are bidirectional. During the 2021-04 MASQUE Interim, we discussed the possibility of making them unidirectional. 
Here are the differences: Bidirectional single shared namespace for both endpoints (even for client, odd for server) it's easier to refer to the peer's IDs, which makes negotiation easier Unidirectional one namespace per direction every bidirectional use needs to associate both directions somehow I'm personally leaning towards keeping them bidirectional. Unlike QUIC streams, these do not have a state machine or flow control credits, so there is no cost to only using a single direction of a bidirectional flow for unidirectional use-cases.\nThere's another option where flows are unidirectional but they share a namespace (client even/server odd). I'm undecided. And I also don't know how important it is to choose. The 2 layer design means that DATAGRAMs always have a bidirectional tie via the stream ID. On balance, I think bidirectional would be good enough today but I would be receptive to extension use cases that make a strong point of unidirectional.\nBidirectional is my personal preference, but either way could work ultimately. It seems to me that any extension that wants unidirectional can just only use this context/flow/etc ID for one direction if it wants.\nI think bidirectional flows are likely overcomplicated and unnecessary. For example, if either peer can "close" a flow, this requires a much more complex state machine. In my experience, half-closed states are hard to get right. Closing unidirectional flows seems much easier to reason about. Many, perhaps most, of our use cases for flows are really unidirectional. For example, DSCP marking likely only makes sense from client to server, while timestamps for congestion control tuning likely only make sense from server to client. QUIC connection IDs are different in each direction, so QUIC header compression may be easiest to understand with unidirectional flows.\nBidirectional doesn't imply half-close states. I'm thinking of a bidirectional flow that can only be closed atomically as a whole. 
I don't think that's true. The default extension-less CONNECT-UDP is bidirectional. So is ECN, and so would a timestamp extension be.\nI would prefer bidirectional without support for half-close. I feel that having unidirectional flow/context IDs adds overhead we don't need; peers now need to track those associations and it's unclear what the benefits of having them separated out are.\nThis issue is conflating consent (unilateral/bilateral) with scope (unidirectional/bidirectional). IIUC there are two questions here: 1) Can I send a flow-id forever once declared, or can the endpoint tell me to stop without killing the whole stream (consent)? 2) Can the receiver of a flow-id assignment use that same flow-id when sending datagrams, or could this mean something completely different? I think the answer to (1) is "yes, it can tell me to stop" (i.e. it is negotiated while allowing speculative sending). For (2), I lean toward bidirectional, but not strongly. There are plenty of codepoints, so it seems wasteful for each endpoint to define the same semantic explicitly.\nNAME the intent of this issue wasn't to conflate those two things. This issue is about scope. For consent, see issue .\nConceptually, I like unidirectional better. I tell the peer that when I send with context-ID=X, it has a given semantic. For bidirectional the sender is saying both that and that if the peer sends on that ID the original sender will interpret it with the same semantic. That might not even make sense. Is it possible to associate datagrams with a unidirectional stream (push?). If so, does allowing for bidirectional context imply that even though the receiver cannot send frames on the stream, it can send datagrams? Every negotiation does run the risk of loss/reordering resulting in a drop or buffering event at the receiver. 
The appeal of bidirectional is that it removes half of this risk in the cases where the bidirectionality is used.\nTo second NAME I would also prefer bidirectional without support for half-close. Given you can always not send something in one direction, and there's none of the overhead of streams (i.e. flow control, etc.) I can't see any benefits of unidirectional over bidirectional in this case. Associating h3-datagrams with a server push seems like unnecessary complexity to me, but you're right that it should be clarified.\nDuring the 2021-04 MASQUE Interim, we discussed a design proposal from NAME to replace the one-layer flow ID design with a two-layer design. This design replaces the flow ID with two numbers: a stream ID and a context ID. The contents of the QUIC DATAGRAM frame would now look like: While the flow ID in the draft was per-hop, the proposal has better separation: the stream ID is per-hop, it maps to an HTTP request the context ID is end-to-end, it maps to context information inside that request Intermediaries now only look at stream IDs and can be blissfully ignorant of context IDs. In the room at the 2021-04 MASQUE Interim, multiple participants spoke in support of this design and no one raised objections. This issue exists to ensure that we have consensus on this change - please speak up if you disagree.\nI am fine with the 2-layer design for CONNECT-UDP, but I am not sure if the second identifier should be in H3 DATAGRAM or should be specific to CONNECT-UDP. As I indicated to the list, webtrans and httpbis might weigh in on this with broader consideration of other use cases.\nHi NAME the topic of the optionality of context IDs is discussed in issue .\nI think this is a good design, because it minimizes the core functionality of the draft, but keeps every application that needs multiple dynamically created sub-flows from re-inventing a different mechanism. 
Nit: I would not call this a 'Frame', since it's not an HTTP(TLV) frame.\nI think this is a good change. Thanks! There are some things that I think we should still discuss, but we shouldn't hold up this change on those details.", "new_text": "Section 4.4 of IANA-POLICY) where two registrations MUST NOT have the same Key. This registry is initially empty. 9. References 9.1. URIs [1] mailto:masque@ietf.org"}
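The two-layer design debated in the record above can be sketched concretely: the QUIC DATAGRAM frame payload carries a per-hop Quarter Stream ID followed by an end-to-end Context ID, both QUIC variable-length integers (RFC 9000, Section 16). This is an illustrative sketch based on the discussion, not normative draft text; `encode_h3_datagram` is a hypothetical helper name.

```python
# Sketch of the two-layer HTTP/3 Datagram payload discussed above.
# encode_varint follows RFC 9000 Section 16; all names are illustrative.

def encode_varint(value: int) -> bytes:
    """Encode an integer as a QUIC variable-length integer."""
    if value < 0x40:
        return value.to_bytes(1, "big")
    if value < 0x4000:
        return (value | 0x4000).to_bytes(2, "big")
    if value < 0x40000000:
        return (value | 0x80000000).to_bytes(4, "big")
    if value < 0x4000000000000000:
        return (value | 0xC000000000000000).to_bytes(8, "big")
    raise ValueError("value too large for a varint")

def encode_h3_datagram(request_stream_id: int, context_id: int,
                       payload: bytes) -> bytes:
    # The per-hop Quarter Stream ID is the client-initiated bidirectional
    # request stream ID divided by four; the Context ID is end-to-end.
    return (encode_varint(request_stream_id // 4)
            + encode_varint(context_id)
            + payload)
```

The intermediary-friendliness follows directly from this layout: a proxy only needs to read the first varint to route the datagram, and can leave the Context ID and payload untouched.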
{"id": "q-en-draft-ietf-masque-h3-datagram-caa4d1ec37e10b968165adb90a21294cb24e107ce3e6931ff5c7b082860ae808", "old_text": "CAPSULE is an HTTP/3 Frame (as opposed to a QUIC frame) which SHALL only be sent in client-initiated bidirectional streams. Intermediaries MUST forward all received CAPSULE frames in their unmodified entirety on the same stream where it would forward DATA frames. Intermediaries MUST NOT send any CAPSULE frames other than the ones it is forwarding. Intermediaries respect the order of CAPSULE frames: if an intermediary receives two CAPSULE frames in a given order, it MUST forward them in the same order. This specification of CAPSULE currently uses HTTP/3 frame type 0xffcab5. If this document is approved, a lower number will be", "comments": "NAME points out that in order to correctly forward DATAGRAMs between transports where the QUIC DATAGRAM frame isn't always available, intermediaries need the ability to parse some capsules.\nCAPSULE was easier to understand when it was fully opaque. When should I choose to define a semi-opaque CAPSULE vs defining a new frame? Aren't both constructs effectively hop-by-hop?\nNAME you're the one who asked for this :-) The benefit of a semi-opaque capsule is that it allows defining new capsules without having to update every single intermediary in the chain, because intermediaries forward all unknown capsules.", "new_text": "CAPSULE is an HTTP/3 Frame (as opposed to a QUIC frame) which SHALL only be sent in client-initiated bidirectional streams. Intermediaries forward received CAPSULE frames on the same stream where it would forward DATA frames. Each Capsule Type determines whether it is opaque or transparent to intermediaries: opaque capsules are forwarded unmodified while transparent ones can be parsed, added, or removed by intermediaries. This specification of CAPSULE currently uses HTTP/3 frame type 0xffcab5. If this document is approved, a lower number will be"}
{"id": "q-en-draft-ietf-masque-h3-datagram-caa4d1ec37e10b968165adb90a21294cb24e107ce3e6931ff5c7b082860ae808", "old_text": "Data whose semantics depends on the Capsule Type. Endpoints which receive a Capsule with an unknown Capsule Type MUST silently drop that Capsule. Intermediaries MUST forward Capsules, even if they do not know the Capsule Type or cannot parse the Capsule Data. 4.1.", "comments": "NAME points out that in order to correctly forward DATAGRAMs between transports where the QUIC DATAGRAM frame isn't always available, intermediaries need the ability to parse some capsules.\nCAPSULE was easier to understand when it was fully opaque. When should I choose to define a semi-opaque CAPSULE vs defining a new frame? Aren't both constructs effectively hop-by-hop?\nNAME you're the one who asked for this :-) The benefit of a semi-opaque capsule is that it allows defining new capsules without having to update every single intermediary in the chain, because intermediaries forward all unknown capsules.", "new_text": "Data whose semantics depends on the Capsule Type. Unless otherwise specified, all Capsule Types are defined as opaque to intermediaries. Intermediaries MUST forward all received opaque CAPSULE frames in their unmodified entirety. Intermediaries MUST NOT send any opaque CAPSULE frames other than the ones it is forwarding. All Capsule Types defined in this document are opaque, with the exception of the DATAGRAM Capsule, see datagram-capsule. Definitions of new Capsule Types MAY specify that the newly introduced type is transparent. Intermediaries MUST treat unknown Capsule Types as opaque. Intermediaries respect the order of opaque CAPSULE frames: if an intermediary receives two opaque CAPSULE frames in a given order, it MUST forward them in the same order. Endpoints which receive a Capsule with an unknown Capsule Type MUST silently drop that Capsule. 
Receipt of a CAPSULE HTTP/3 Frame on a stream that is not a client- initiated bidirectional stream MUST be treated as a connection error of type H3_FRAME_UNEXPECTED. 4.1."}
{"id": "q-en-draft-ietf-masque-h3-datagram-caa4d1ec37e10b968165adb90a21294cb24e107ce3e6931ff5c7b082860ae808", "old_text": "how to process them from format also apply to HTTP/3 datagrams sent and received using the DATAGRAM capsule. 5. In order to facilitate extensibility of contexts, the", "comments": "NAME points out that in order to correctly forward DATAGRAMs between transports where the QUIC DATAGRAM frame isn't always available, intermediaries need the ability to parse some capsules.\nCAPSULE was easier to understand when it was fully opaque. When should I choose to define a semi-opaque CAPSULE vs defining a new frame? Aren't both constructs effectively hop-by-hop?\nNAME you're the one who asked for this :-) The benefit of a semi-opaque capsule is that it allows defining new capsules without having to update every single intermediary in the chain, because intermediaries forward all unknown capsules.", "new_text": "how to process them from format also apply to HTTP/3 datagrams sent and received using the DATAGRAM capsule. The DATAGRAM Capsule is transparent to intermediaries, meaning that intermediaries MAY parse it and send DATAGRAM Capsules that they did not receive. This allows an intermediary to reencode HTTP/3 Datagrams as it forwards them: in other words, an intermediary MAY send a DATAGRAM Capsule to forward an HTTP/3 Datagram which was received in a QUIC DATAGRAM frame, and vice versa. Note that while DATAGRAM capsules are sent on a stream, intermediaries can reencode HTTP/3 datagrams into QUIC DATAGRAM frames over the next hop, and those could be dropped. Because of this, applications have to always consider HTTP/3 datagrams to be unreliable, even if they were initially sent in a capsule. 5. In order to facilitate extensibility of contexts, the"}
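The reencoding described in the record above can be sketched as follows, assuming (as in the draft era under discussion) that the QUIC DATAGRAM frame payload begins with a Quarter Stream ID varint which the DATAGRAM capsule, being carried on the request stream itself, does not repeat. Helper names are illustrative, not from the draft.

```python
# Hedged sketch of an intermediary translating HTTP/3 Datagrams between
# QUIC DATAGRAM frame payloads and DATAGRAM capsule values.

def decode_varint(buf: bytes, offset: int = 0) -> tuple[int, int]:
    """Decode a QUIC varint (RFC 9000 s.16); return (value, new_offset)."""
    length = 1 << (buf[offset] >> 6)
    raw = int.from_bytes(buf[offset:offset + length], "big")
    return raw & ((1 << (8 * length - 2)) - 1), offset + length

def encode_varint(value: int) -> bytes:
    """Encode a QUIC varint, choosing the smallest of the four sizes."""
    for bits, prefix, size in ((6, 0x00, 1), (14, 0x4000, 2),
                               (30, 0x80000000, 4),
                               (62, 0xC000000000000000, 8)):
        if value < (1 << bits):
            return (value | prefix).to_bytes(size, "big")
    raise ValueError("value too large for a varint")

def frame_to_capsule_value(frame_payload: bytes) -> tuple[int, bytes]:
    """Strip the Quarter Stream ID from a QUIC DATAGRAM frame payload,
    returning (request stream ID, bytes for a DATAGRAM capsule value)."""
    quarter, offset = decode_varint(frame_payload)
    return quarter * 4, frame_payload[offset:]

def capsule_value_to_frame(stream_id: int, capsule_value: bytes) -> bytes:
    """Prefix the Quarter Stream ID to forward as a QUIC DATAGRAM frame."""
    return encode_varint(stream_id // 4) + capsule_value
```

As the record notes, a datagram reencoded into a QUIC DATAGRAM frame on the next hop can still be dropped, so applications must treat HTTP/3 datagrams as unreliable even when they originated in a capsule.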
{"id": "q-en-draft-ietf-masque-h3-datagram-c5d4242c4ee65517c79c7abd6daad09c280c95a42a229bbb2f36fe9028e6c171", "old_text": "4. CAPSULE allows reliably sending request-related information end-to- end, even in the presence of HTTP intermediaries. CAPSULE is an HTTP/3 Frame (as opposed to a QUIC frame) which SHALL only be sent in client-initiated bidirectional streams. Intermediaries forward received CAPSULE frames on the same stream where it would forward DATA frames. Each Capsule Type determines whether it is opaque or transparent to intermediaries: opaque capsules are forwarded unmodified while transparent ones can be parsed, added, or removed by intermediaries. This specification of CAPSULE currently uses HTTP/3 frame type 0xffcab5. If this document is approved, a lower number will be requested from IANA. The Type and Length fields follows the definition of HTTP/3 frames from H3. The payload consists of: The type of this capsule. Data whose semantics depends on the Capsule Type. Unless otherwise specified, all Capsule Types are defined as opaque to intermediaries. Intermediaries MUST forward all received opaque", "comments": "Had offline conversation with Lucas, conclusion was that this is good enough to merge to allow interop - we'll have plenty of time to figure out details in subsequent issues and PRs.", "new_text": "4. This specification introduces the Capsule Protocol. The Capsule Protocol is a sequence of type-length-value tuples that allows endpoints to reliably communicate request-related information end-to- end, even in the presence of HTTP intermediaries. 4.1. This specification defines the \"data stream\" of an HTTP request as the bidirectional stream of bytes that follow the headers in both directions. In HTTP/1.x, the data stream consists of all bytes on the connection that follow the blank line that concludes either the request header section, or the 2xx (Successful) response header section. 
In HTTP/2 and HTTP/3, the data stream of a given HTTP request consists of all bytes sent in DATA frames with the corresponding stream ID. The concept of a data stream is particularly relevant for methods such as CONNECT where there is no HTTP message content after the headers. Definitions of new HTTP Methods or of new HTTP Upgrade Tokens can state that their data stream uses the Capsule Protocol. If they do so, that means that the contents of their data stream uses the following format (using the notation from the \"Notational Conventions\" section of QUIC): A variable-length integer indicating the Type of the capsule. Endpoints that receive a capsule with an unknown Capsule Type MUST silently skip over that capsule. The length of the Capsule Value field following this field, encoded as a variable-length integer. Note that this field can have a value of zero. The payload of this capsule. Its semantics are determined by the value of the Capsule Type field. 4.2. If the definition of an HTTP Method or HTTP Upgrade Token states that it uses the capsule protocol, its implementations MUST follow the following requirements: A server MUST NOT send any Transfer-Encoding or Content-Length header fields in a 2xx (Successful) response. If a client receives a Content-Length or Transfer-Encoding header fields in a successful response, it MUST treat that response as malformed. A request message does not have content. A successful response message does not have content. Responses are not cacheable. 4.3. Intermediaries MUST operate in one of the two following modes: In this mode, the intermediary forwards the data stream between two associated streams without any modification of the data stream. In this mode, the intermediary terminates the data stream and parses all Capsule Type and Capsule Length fields it receives. 
Each Capsule Type determines whether it is opaque or transparent to intermediaries in participant mode: opaque capsules are forwarded unmodified while transparent ones can be parsed, added, or removed by intermediaries. Intermediaries MAY modify the contents of the Capsule Data field of transparent capsule types. Unless otherwise specified, all Capsule Types are defined as opaque to intermediaries. Intermediaries MUST forward all received opaque"}
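The Capsule Protocol data stream defined in the record above is a sequence of (Capsule Type, Capsule Length, Capsule Value) tuples whose Type and Length are QUIC varints. A minimal reader sketch under those assumptions (`iter_capsules` is a hypothetical helper, not an API from any implementation):

```python
# Minimal sketch of reading a Capsule Protocol data stream.

def decode_varint(buf: bytes, offset: int) -> tuple[int, int]:
    """Decode a QUIC variable-length integer (RFC 9000, Section 16)."""
    length = 1 << (buf[offset] >> 6)
    raw = int.from_bytes(buf[offset:offset + length], "big")
    return raw & ((1 << (8 * length - 2)) - 1), offset + length

def iter_capsules(stream: bytes):
    """Yield (capsule_type, capsule_value) pairs from a data stream.
    Because every Value is bounded by its Length field, a receiver can
    silently skip capsules with unknown types, as the text requires."""
    offset = 0
    while offset < len(stream):
        ctype, offset = decode_varint(stream, offset)
        clen, offset = decode_varint(stream, offset)
        yield ctype, stream[offset:offset + clen]
        offset += clen
```

The self-delimiting TLV shape is what lets participant-mode intermediaries parse Type and Length for every capsule while forwarding opaque capsule types unmodified.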
{"id": "q-en-draft-ietf-masque-h3-datagram-c5d4242c4ee65517c79c7abd6daad09c280c95a42a229bbb2f36fe9028e6c171", "old_text": "Endpoints which receive a Capsule with an unknown Capsule Type MUST silently drop that Capsule. Receipt of a CAPSULE HTTP/3 Frame on a stream that is not a client- initiated bidirectional stream MUST be treated as a connection error of type H3_FRAME_UNEXPECTED. CAPSULE frames MUST NOT be sent on a stream before at least one HEADERS frame has been sent on that stream. This removes the need to buffer capsules when the endpoint needs information from headers to determine how to react to the capsule. If a CAPSULE frame is received on a stream before a HEADERS frame, the receiver MUST treat this as a connection error of type H3_FRAME_UNEXPECTED. 4.1. The REGISTER_DATAGRAM_CONTEXT capsule (see iana-types for the value of the capsule type) allows an endpoint to inform its peer of the", "comments": "Had offline conversation with Lucas, conclusion was that this is good enough to merge to allow interop - we'll have plenty of time to figure out details in subsequent issues and PRs.", "new_text": "Endpoints which receive a Capsule with an unknown Capsule Type MUST silently drop that Capsule. 4.4. 4.4.1. The REGISTER_DATAGRAM_CONTEXT capsule (see iana-types for the value of the capsule type) allows an endpoint to inform its peer of the"}
{"id": "q-en-draft-ietf-masque-h3-datagram-c5d4242c4ee65517c79c7abd6daad09c280c95a42a229bbb2f36fe9028e6c171", "old_text": "client MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. 4.2. The REGISTER_DATAGRAM_NO_CONTEXT capsule (see iana-types for the value of the capsule type) allows a client to inform the server that", "comments": "Had offline conversation with Lucas, conclusion was that this is good enough to merge to allow interop - we'll have plenty of time to figure out details in subsequent issues and PRs.", "new_text": "client MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. 4.4.2. The REGISTER_DATAGRAM_NO_CONTEXT capsule (see iana-types for the value of the capsule type) allows a client to inform the server that"}
{"id": "q-en-draft-ietf-masque-h3-datagram-c5d4242c4ee65517c79c7abd6daad09c280c95a42a229bbb2f36fe9028e6c171", "old_text": "of contexts, and they MAY do so in a way which is opaque to intermediaries. 4.3. The CLOSE_DATAGRAM_CONTEXT capsule (see iana-types for the value of the capsule type) allows an endpoint to inform its peer that it will", "comments": "Had offline conversation with Lucas, conclusion was that this is good enough to merge to allow interop - we'll have plenty of time to figure out details in subsequent issues and PRs.", "new_text": "of contexts, and they MAY do so in a way which is opaque to intermediaries. 4.4.3. The CLOSE_DATAGRAM_CONTEXT capsule (see iana-types for the value of the capsule type) allows an endpoint to inform its peer that it will"}
{"id": "q-en-draft-ietf-masque-h3-datagram-c5d4242c4ee65517c79c7abd6daad09c280c95a42a229bbb2f36fe9028e6c171", "old_text": "extension, the endpoint MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. 4.4. The DATAGRAM capsule (see iana-types for the value of the capsule type) allows an endpoint to send a datagram frame over an HTTP", "comments": "Had offline conversation with Lucas, conclusion was that this is good enough to merge to allow interop - we'll have plenty of time to figure out details in subsequent issues and PRs.", "new_text": "extension, the endpoint MUST abruptly terminate the corresponding stream with a stream error of type H3_GENERAL_PROTOCOL_ERROR. 4.4.4. The DATAGRAM capsule (see iana-types for the value of the capsule type) allows an endpoint to send a datagram frame over an HTTP"}
{"id": "q-en-draft-ietf-masque-h3-datagram-c5d4242c4ee65517c79c7abd6daad09c280c95a42a229bbb2f36fe9028e6c171", "old_text": "8. We can provide DATAGRAM support in HTTP/2 by defining the CAPSULE frame in HTTP/2. We can provide DATAGRAM support in HTTP/1.x by defining its data stream format to a sequence of length-value capsules. TODO: Refactor this document and add definitions for HTTP/1.x and HTTP/2. 9. Since this feature requires sending an HTTP/3 Settings parameter, it \"sticks out\". In other words, probing clients can learn whether a server supports this feature. Implementations that support this", "comments": "Had offline conversation with Lucas, conclusion was that this is good enough to merge to allow interop - we'll have plenty of time to figure out details in subsequent issues and PRs.", "new_text": "8. Since this feature requires sending an HTTP/3 Settings parameter, it \"sticks out\". In other words, probing clients can learn whether a server supports this feature. Implementations that support this"}
{"id": "q-en-draft-ietf-masque-h3-datagram-c5d4242c4ee65517c79c7abd6daad09c280c95a42a229bbb2f36fe9028e6c171", "old_text": "the fact that there are applications using HTTP/3 datagrams enabled on this endpoint. 10. 10.1. This document will request IANA to register the following entry in the \"HTTP/3 Frames\" registry: 10.2. This document will request IANA to register the following entry in the \"HTTP/3 Settings\" registry: 10.3. This document establishes a registry for HTTP capsule type codes. The \"HTTP Capsule Types\" registry governs a 62-bit space.", "comments": "Had offline conversation with Lucas, conclusion was that this is good enough to merge to allow interop - we'll have plenty of time to figure out details in subsequent issues and PRs.", "new_text": "the fact that there are applications using HTTP/3 datagrams enabled on this endpoint. 9. 9.1. This document will request IANA to register the following entry in the \"HTTP/3 Settings\" registry: 9.2. This document establishes a registry for HTTP capsule type codes. The \"HTTP Capsule Types\" registry governs a 62-bit space."}
{"id": "q-en-draft-ietf-masque-h3-datagram-c5d4242c4ee65517c79c7abd6daad09c280c95a42a229bbb2f36fe9028e6c171", "old_text": "A name or label for the capsule type. The value of the Capsule Type field (see capsule-frame) is a 62bit integer. An optional reference to a specification for the type. This field MAY be empty.", "comments": "Had offline conversation with Lucas, conclusion was that this is good enough to merge to allow interop - we'll have plenty of time to figure out details in subsequent issues and PRs.", "new_text": "A name or label for the capsule type. The value of the Capsule Type field (see capsule-protocol) is a 62bit integer. An optional reference to a specification for the type. This field MAY be empty."}
{"id": "q-en-draft-ietf-masque-h3-datagram-c5d4242c4ee65517c79c7abd6daad09c280c95a42a229bbb2f36fe9028e6c171", "old_text": "arbitrary values. These values MUST NOT be assigned by IANA and MUST NOT appear in the listing of assigned values. 10.4. This document establishes a registry for HTTP datagram context extension type codes. The \"HTTP Context Extension Types\" registry", "comments": "Had offline conversation with Lucas, conclusion was that this is good enough to merge to allow interop - we'll have plenty of time to figure out details in subsequent issues and PRs.", "new_text": "arbitrary values. These values MUST NOT be assigned by IANA and MUST NOT appear in the listing of assigned values. 9.3. This document establishes a registry for HTTP datagram context extension type codes. The \"HTTP Context Extension Types\" registry"}
{"id": "q-en-draft-ietf-masque-h3-datagram-c5d4242c4ee65517c79c7abd6daad09c280c95a42a229bbb2f36fe9028e6c171", "old_text": "assigned by IANA and MUST NOT appear in the listing of assigned values. 10.5. This document establishes a registry for HTTP context extension type codes. The \"HTTP Context Close Codes\" registry governs a 62-bit", "comments": "Had offline conversation with Lucas, conclusion was that this is good enough to merge to allow interop - we'll have plenty of time to figure out details in subsequent issues and PRs.", "new_text": "assigned by IANA and MUST NOT appear in the listing of assigned values. 9.4. This document establishes a registry for HTTP context extension type codes. The \"HTTP Context Close Codes\" registry governs a 62-bit"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-3f831b38617c92f4702dca03b19f1ba572e09dd0e5c7d2e9a1e97d70a2af3472", "old_text": "5.6. In addition to measurements media players use to guide their segment- by-segment adaptive streaming requests, streaming media providers may also rely on measurements collected from media players to provide analytics that can be used for decisions such as whether the adaptive encoding bitrates in use are the best ones to provide to media players, or whether current media content caching is providing the best experience for viewers. To that effect, the Consumer Technology Association (CTA) who owns the Web Application Video Ecosystem (WAVE) project has published two important specifications. 5.6.1. CTA-2066 specifies a set of media player events, properties, quality of experience (QoE) metrics and associated terminology for", "comments": "LGTM\nFrom NAME The very first sentence is mostly unparseable due to its length (7 lines). Please consider splitting it.\nI added this, from NAME review: This section and the next one should be a list with 2 bullets rather than 2 sections to make the text more readable.", "new_text": "5.6. Media players use measurements to guide their segment-by-segment adaptive streaming requests, but may also provide measurements to streaming media providers. In turn, providers may base analytics on these measurements, to guide decisions such as whether adaptive encoding bitrates in use are the best ones to provide to media players, or whether current media content caching is providing the best experience for viewers. To that effect, the Consumer Technology Association (CTA) who owns the Web Application Video Ecosystem (WAVE) project has published two important specifications. CTA-2066: Streaming Quality of Experience Events, Properties and Metrics CTA-2066 specifies a set of media player events, properties, quality of experience (QoE) metrics and associated terminology for"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-3f831b38617c92f4702dca03b19f1ba572e09dd0e5c7d2e9a1e97d70a2af3472", "old_text": "defining a common terminology as well as how each metric should be computed for consistent reporting. 5.6.2. Many assume that the CDNs have a holistic view into the health and performance of the streaming clients. However, this is not the case.", "comments": "LGTM\nFrom NAME The very first sentence is mostly unparseable due to its length (7 lines). Please consider splitting it.\nI added this, from NAME review: This section and the next one should be a list with 2 bullets rather than 2 sections to make the text more readable.", "new_text": "defining a common terminology as well as how each metric should be computed for consistent reporting. CTA-5004: Common Media Client Data (CMCD) Many assume that the CDNs have a holistic view into the health and performance of the streaming clients. However, this is not the case."}
{"id": "q-en-draft-ietf-mops-streaming-opcons-6f6fa08840d6bed8b6d9ef7446c10a3ef40eb24ad906af69a2ddffa2d7b024a3", "old_text": "different algorithms such as loss-based CUBIC RFC8312, delay-based COPA or BBR, or even something completely different. We do have experience with deploying new congestion controllers without melting the Internet (CUBIC is one example), but the point mentioned in tcp-behavior about TCP being implemented in operating system kernels is also different with QUIC. Although QUIC can be implemented in operating system kernels, one of the design goals when this work was chartered was \"QUIC is expected to support rapid, distributed development and testing of features\", and to meet this expectation, many implementers have chosen to implement QUIC in user space, outside the operating system kernel, and to even distribute QUIC libraries with their own applications. The decision to deploy a new version of QUIC is relatively uncontrolled, compared to other widely used transport protocols, and this can include new transport behaviors that appear without much notice except to the QUIC endpoints. At IETF 105, Christian Huitema and Brian Trammell presented a talk on \"Congestion Defense in Depth\" CDiD, that explored potential concerns about new QUIC congestion controllers being broadly deployed without the testing and instrumentation that current major content providers routinely include. The sense of the room at IETF 105 was that the current major content providers understood what is at stake when they deploy new congestion controllers, but this presentation, and the related discussion in TSVAREA minutes from IETF 105 (tsvarea-105) are still worth a look for new and rapidly growing content providers. 
It is worth considering that if TCP-based HTTP traffic and UDP-based HTTP/3 traffic are allowed to enter operator networks on roughly", "comments": "From NAME Please consider using the same wording for the title as for 6.2 and 6.1 (nit) Who is \"we\" in \" We do have experience\" ? While I share some concerns about deploying a new transport in user space as opposed to more stable and controlled kernel space, I wonder whether this paragraph is relevant in this document. Does it bring any value ? If no, then please remove it.\nWe'll look at the date for CUBIC being enabled by default in the Linux kernel as \"experience\". We'll delete the paragraph as suggested.\nwas fixed elsewhere.\nLGTM. I'm almost sad to lose the ref to \"Congestion Defense in Depth\", but I guess it makes good sense, that's not a consensus position.", "new_text": "different algorithms such as loss-based CUBIC RFC8312, delay-based COPA or BBR, or even something completely different. The Internet community does have experience with deploying new congestion controllers without melting the Internet. As noted in RFC8312, both the CUBIC congestion controller and its predecessor BIC have significantly different behavior from Reno-style congestion controllers such as TCP NewReno RFC6582, but both CUBIC and BIC were added to the Linux kernel in order to allow experimentation and analysis, each was in turn selected as the default TCP congestion controller in Linux, and both were deployed globally. The point mentioned in tcp-behavior about TCP congestion controllers being implemented in operating system kernels is different with QUIC. 
Although QUIC can be implemented in operating system kernels, one of the design goals when this work was chartered was \"QUIC is expected to support rapid, distributed development and testing of features\", and to meet this expectation, many implementers have chosen to implement QUIC in user space, outside the operating system kernel, and to even distribute QUIC libraries with their own applications. It is worth noting that streaming operators using HTTP/3, carried over QUIC, can expect more frequent deployment of new congestion controller behavior than has been the case with HTTP/1 and HTTP/2, carried over TCP. It is worth considering that if TCP-based HTTP traffic and UDP-based HTTP/3 traffic are allowed to enter operator networks on roughly"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-47941fc5f599ded465ee51781d15d391b501f493163c75db224bc85dbb77feb6", "old_text": "to predict usage and provision bandwidth, caching, and other mechanisms to meet the needs of users. In some cases (sec-predict), this is relatively routine, but in other cases, it is more difficult (sec-unpredict, sec-extreme). And as with other parts of the ecosystem, new technology brings new challenges. For example, with the emergence of ultra-low-latency", "comments": "Fixed a few typos, otherwise ready to merge.\nMerging on Ali's review.\nFrom NAME (URL) (In Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely) Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. 
Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). 
RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. 
Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced.793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414.Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. 
I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "to predict usage and provision bandwidth, caching, and other mechanisms to meet the needs of users. In some cases (sec-predict), this is relatively routine, but in other cases, it is more difficult (sec-unpredict). And as with other parts of the ecosystem, new technology brings new challenges. For example, with the emergence of ultra-low-latency"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-47941fc5f599ded465ee51781d15d391b501f493163c75db224bc85dbb77feb6", "old_text": "still being transmitted to the cache, and while the cache does not yet know the size of the object. Some of the popular caching systems were designed around cache footprint and had deeply ingrained assumptions about knowing the size of objects being stored, so the change in design requirements in long-established systems caused some errors in production. Incidents occurred where a transmission error in the connection from the upstream source to the cache could result in the cache holding a truncated segment and transmitting it to the end user's device. In this case, players rendering the stream often had the video freeze until the player was reset. In some cases the truncated object was even cached that way and served later to other players, causing continued stalls at the same spot in the video for all players playing the segment delivered from that cache node. 3.5.", "comments": "Fixed a few typos, otherwise ready to merge.\nMerging on Ali's review.\nFrom NAME (URL) (In Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely) Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. 
These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. 
Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. 
The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. 
Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced.793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414.Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "still being transmitted to the cache, and while the cache does not yet know the size of the object. Some of the popular caching systems were designed around cache footprint and had deeply ingrained assumptions about knowing the size of objects that are being stored, so the change in design requirements in long-established systems caused some errors in production. Incidents occurred where a transmission error in the connection from the upstream source to the cache could result in the cache holding a truncated segment and transmitting it to the end user's device. In this case, players rendering the stream often had the video freeze until the player was reset. In some cases the truncated object was even cached that way and served later to other players as well, causing continued stalls at the same spot in the video for all players playing the segment delivered from that cache node. 3.5."}
{"id": "q-en-draft-ietf-mops-streaming-opcons-47941fc5f599ded465ee51781d15d391b501f493163c75db224bc85dbb77feb6", "old_text": "3.6. Although TCP/IP has been used with a number of widely used applications that have symmetric bandwidth requirements (similar bandwidth requirements in each direction between endpoints), many widely-used Internet applications operate in client-server roles, with asymmetric bandwidth requirements. A common example might be an HTTP GET operation, where a client sends a relatively small HTTP GET request for a resource to an HTTP server and often receives a significantly larger response carrying the requested resource. When HTTP is commonly used to stream movie-length videos, the ratio between response size and request size can become arbitrarily large. For this reason, operators may pay more attention to downstream bandwidth utilization when planning and managing capacity. In addition, operators have been able to deploy access networks for end users using underlying technologies that are inherently asymmetric, favoring downstream bandwidth (e.g., ADSL, cellular technologies, most IEEE 802.11 variants), assuming that users will need less upstream bandwidth than downstream bandwidth. This strategy usually works, except when it fails because application bandwidth usage patterns have changed in ways that were not predicted. One example of this type of change was when peer-to-peer file-sharing applications gained popularity in the early 2000s. To take one well-documented case (RFC5594), the BitTorrent application created \"swarms\" of hosts, uploading and downloading files to each other, rather than communicating with a server. BitTorrent favored peers who uploaded as much as they downloaded, so new BitTorrent users had an incentive to significantly increase their upstream bandwidth utilization. 
The combination of the large volume of \"torrents\" and the peer-to-peer characteristic of swarm transfers meant that end user hosts were suddenly uploading higher volumes of traffic to more destinations than was the case before BitTorrent. This caused at least one large Internet service provider (ISP) to attempt to \"throttle\" these transfers to mitigate the load these hosts placed on their network. These efforts were met by increased use of encryption in BitTorrent, and complaints to regulators calling for regulatory action. The BitTorrent case study is just one example. However, the example is included here to make it clear that unpredicted and unpredictable massive traffic spikes may not be the result of natural disasters, but they can still have significant impacts. Especially as end users increasingly use video-based social networking applications, it will be helpful for access network providers to watch for increasing numbers of end users uploading significant amounts of content. 3.7. The causes of unpredictable usage described in sec-unpredict were more or less the result of human choices. However, we were reminded during a post-IETF 107 meeting that humans are not always in control, and forces of nature can cause enormous fluctuations in traffic patterns. In his talk, Sanjay Mishra Mishra reported that after the COVID-19 pandemic broke out in early 2020, Comcast's streaming and web video consumption rose by 38%, with their reported peak traffic up 32% overall between March 1 to March 30, AT&T reported a 28% jump in core network traffic (single day in April, as compared to pre stay-at-home daily average traffic), with video accounting for nearly half of all mobile network traffic, while social networking and web browsing remained the highest percentage (almost a quarter each) of overall mobility traffic, and Verizon reported similar trends with video traffic up 36% over an average day (pre COVID-19). 
We note that other operators saw similar spikes during this period. Craig Labovitz Labovitz reported Weekday peak traffic increases over 45%-50% from pre-lockdown levels, A 30% increase in upstream traffic over their pre-pandemic levels, and A steady increase in the overall volume of DDoS traffic, with amounts exceeding the pre-pandemic levels by 40%. (He attributed this increase to the significant rise in gaming-related DDoS attacks (LabovitzDDoS), as gaming usage also increased.) Subsequently, the Internet Architecture Board (IAB) held a COVID-19 Network Impacts Workshop IABcovid in November 2020. Given a larger number of reports and more time to reflect, the following observations from the draft workshop report are worth considering. Participants describing different types of networks reported different kinds of impacts, but all types of networks saw impacts.", "comments": "Fixed a few typos, otherwise ready to merge.\nMerging on Ali's review.\nFrom NAME (URL) (In Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely) Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. 
When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. 
The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. 
Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced. 793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis.
Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414. Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "3.6. It is also possible for usage profiles to change significantly and suddenly. These changes are more difficult to plan for, but at a minimum, recognizing that sudden changes are happening is critical. Two examples are instructive. 3.6.1. In the first example, described in \"Report from the IETF Workshop on Peer-to-Peer (P2P) Infrastructure, May 28, 2008\" (RFC5594), when the BitTorrent filesharing application came into widespread use in 2005, sudden and unexpected growth in peer-to-peer traffic led to complaints from ISP customers about the performance of delay-sensitive traffic (VoIP and gaming). These performance issues resulted from at least two causes: Many access networks for end users used underlying technologies that are inherently asymmetric, favoring downstream bandwidth (e.g. ADSL, cellular technologies, most IEEE 802.11 variants), assuming that most users will need more downstream bandwidth than upstream bandwidth. This is a good assumption for client-server applications such as streaming video or software downloads, but BitTorrent rewarded peers that uploaded as much as they downloaded, so BitTorrent users had much more symmetric usage profiles which interacted badly with these asymmetric access network technologies. 
BitTorrent also used distributed hash tables to organize peers into a ring topology, where each peer knew its \"next peer\" and \"previous peer\". There was no connection between the application-level ring topology and the lower-level network topology, so a peer's \"next peer\" might be anywhere on the reachable Internet. Traffic models that expected most communication to take place with a relatively small number of servers were unable to cope with peer-to-peer traffic that was much less predictable. Especially as end users increase use of video-based social networking applications, it will be helpful for access network providers to watch for increasing numbers of end users uploading significant amounts of content. 3.6.2. Early in 2020, the COVID-19 pandemic and resulting quarantines and shutdowns led to significant changes in traffic patterns, due to a large number of people who suddenly started working and attending school remotely and using more interactive applications (video conferencing, in addition to streaming media). Subsequently, the Internet Architecture Board (IAB) held a COVID-19 Network Impacts Workshop IABcovid in November 2020. The following observations from the workshop report are worth considering. Participants describing different types of networks reported different kinds of impacts, but all types of networks saw impacts."}
{"id": "q-en-draft-ietf-mops-streaming-opcons-47941fc5f599ded465ee51781d15d391b501f493163c75db224bc85dbb77feb6", "old_text": "VPNs from residential users as they stopped commuting and switched to work-at-home. 4. Streaming media latency refers to the \"glass-to-glass\" time duration,", "comments": "Fixed a few typos, otherwise ready to merge.\nMerging on Ali's review.\nFrom NAME (URL) (In Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles) I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end.", "new_text": "VPNs from residential users as they stopped commuting and switched to work-at-home. 
Again, it will be helpful for streaming operators to monitor traffic as described in measure-coll, watching for sudden changes in performance. 4. Streaming media latency refers to the \"glass-to-glass\" time duration,"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-ae7aac4852094283576ec1bfa141dd2791b5e4667b8bcafb9644a1bc7008ad2f", "old_text": "Most applications operating over IP networks and requiring latency this low use the Real-time Transport Protocol (RTP) RFC3550 or WebRTC RFC8825, which uses RTP for the media transport as well as several other protocols necessary for safe operation in browsers. Worth noting is that many applications for ultra low-latency delivery do not need to scale to more than a few users at a time, which", "comments": "NAME - several things here. we probably don't need all of the descriptive text - the idea is that we'd say \"assume roughly this protocol stack, and here's what's not included\" if distinguishing between \"transport protocol\" and \"media transport protocol\" makes sense, that probably needs to be reflected elsewhere in the document (like, a lot of elsewheres) If this approach works, it could either be in the transport protocol section, or in the one you're thinking of for\nI don't think there will be a lot of places. We need to pay attention to only the places where we are explicitly referring to the media transport protocol. The rest can stay the same IMO.\nNAME - I couldn't reply to this inline, but I looked through the document (searching for \"transport protocol\", and you're right. I found one place in Section 4.1, and changed the description of RTP as a transport protocol to RTP as a media transport protocol. NAME - Section 4.1 is before where my explanation is now - if this PR makes sense to you, the explanation probably DOES belong much earlier in the doc, perhaps in .\nI'm merging this, based on Ali's review.\nWe want a paragraph to say something about scope along the lines of: This document is for endpoints delivering streaming media traffic. This can include intermediaries like CDNs as well as content owners and end users, plus their device and player providers. 
We think this can help address a few issues from last call/iesg review, including: (more to be linked as we notice them)\nConsider adding a subsection in the introduction about things that are out of scope, and (for extra credit), why they are out of scope.\nfrom NAME 's review (URL) Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.)\nNot sure if this is really about scope clarification? I think this section is intending to talk about usage as part of a transport protocol, but tunnels are sort of more like a path overlay issue or something (might affect mtu, extra hops with extra things that can go wrong, etc.).\nNAME - I'm looking more closely at this issue, and I think text that I put in another (individual) draft may be useful here as well. I'll add it in this section (making the obvious substitutions), and I'll see what NAME and NAME think (after the holiday weekend), and then check back with you. If it works for people, the result will likely be clearer to many readers.", "new_text": "Most applications operating over IP networks and requiring latency this low use the Real-time Transport Protocol (RTP) RFC3550 or WebRTC RFC8825, which uses RTP as its Media Transport Protocol, along with several other protocols necessary for safe operation in browsers. Worth noting is that many applications for ultra low-latency delivery do not need to scale to more than a few users at a time, which"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-ae7aac4852094283576ec1bfa141dd2791b5e4667b8bcafb9644a1bc7008ad2f", "old_text": "6.1. For most of the history of the Internet, we have trusted UDP-based applications to limit their impact on other users. One of the strategies used was to use UDP for simple query-response application", "comments": "NAME - several things here. we probably don't need all of the descriptive text - the idea is that we'd say \"assume roughly this protocol stack, and here's what's not included\" if distinguishing between \"transport protocol\" and \"media transport protocol\" makes sense, that probably needs to be reflected elsewhere in the document (like, a lot of elsewheres) If this approach works, it could either be in the transport protocol section, or in the one you're thinking of for\nI don't think there will be a lot of places. We need to pay attention to only the places where we are explicitly referring to the media transport protocol. The rest can stay the same IMO.\nNAME - I couldn't reply to this inline, but I looked through the document (searching for \"transport protocol\", and you're right. I found one place in Section 4.1, and changed the description of RTP as a transport protocol to RTP as a media transport protocol. NAME - Section 4.1 is before where my explanation is now - if this PR makes sense to you, the explanation probably DOES belong much earlier in the doc, perhaps in .\nI'm merging this, based on Ali's review.\nWe want a paragraph to say something about scope along the lines of: This document is for endpoints delivering streaming media traffic. This can include intermediaries like CDNs as well as content owners and end users, plus their device and player providers. 
We think this can help address a few issues from last call/iesg review, including: (more to be linked as we notice them)\nConsider adding a subsection in the introduction about things that are out of scope, and (for extra credit), why they are out of scope.\nfrom NAME 's review (URL) Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.)\nNot sure if this is really about scope clarification? I think this section is intending to talk about usage as part of a transport protocol, but tunnels are sort of more like a path overlay issue or something (might affect mtu, extra hops with extra things that can go wrong, etc.).\nNAME - I'm looking more closely at this issue, and I think text that I put in another (individual) draft may be useful here as well. I'll add it in this section (making the obvious substitutions), and I'll see what NAME and NAME think (after the holiday weekend), and then check back with you. If it works for people, the result will likely be clearer to many readers. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. 
The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. 
Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. 
Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced.793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414.Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. 
QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "6.1. Within sec-trans, the term \"Media Transport Protocol\" is used to describe the protocol of interest. This is easier to understand if the reader assumes a protocol stack that looks something like this: where \"Media Format\" would be something like an RTP payload format RFC2736 or ISOBMFF ISOBMFF, \"Media Transport Protocol\" would be something like RTP or HTTP, and \"Transport Protocol\" would be something like TCP or UDP. Not all possible streaming media applications follow this model, but for the ones that do, it seems useful to have names for the protocol layers between Media Format and Transport Layer. It is worth noting explicitly that the Media Transport Protocol and Transport Protocol layers might each include more than one protocol. For example, a Media Transport Protocol might run over HTTP, or over WebTransport, which in turn runs over HTTP. It is worth noting explicitly that more complex network protocol stacks are certainly possible - for instance, packets with this protocol stack may be carried in a tunnel, or in a VPN. If these environments are present, streaming media operators may need to analyze their effects on applications as well. 6.2. For most of the history of the Internet, we have trusted UDP-based applications to limit their impact on other users. One of the strategies used was to use UDP for simple query-response application
{"id": "q-en-draft-ietf-mops-streaming-opcons-ae7aac4852094283576ec1bfa141dd2791b5e4667b8bcafb9644a1bc7008ad2f", "old_text": "encounter \"tripped circuit breakers\", with resulting user-visible impacts. 6.2. For most of the history of the Internet, we have trusted TCP to limit the impact of applications that sent a significant number of packets,", "comments": "NAME - several things here. we probably don't need all of the descriptive text - the idea is that we'd say \"assume roughly this protocol stack, and here's what's not included\" if distinguishing between \"transport protocol\" and \"media transport protocol\" makes sense, that probably needs to be reflected elsewhere in the document (like, a lot of elsewheres) If this approach works, it could either be in the transport protocol section, or in the one you're thinking of for\nI don't think there will be a lot of places. We need to pay attention to only the places where we are explicitly referring to the media transport protocol. The rest can stay the same IMO.\nNAME - I couldn't reply to this inline, but I looked through the document (searching for \"transport protocol\", and you're right. I found one place in Section 4.1, and changed the description of RTP as a transport protocol to RTP as a media transport protocol. NAME - Section 4.1 is before where my explanation is now - if this PR makes sense to you, the explanation probably DOES belong much earlier in the doc, perhaps in .\nI'm merging this, based on Ali's review.\nWe want a paragraph to say something about scope along the lines of: This document is for endpoints delivering streaming media traffic. This can include intermediaries like CDNs as well as content owners and end users, plus their device and player providers. 
We think this can help address a few issues from last call/iesg review, including: (more to be linked as we notice them)\nConsider adding a subsection in the introduction about things that are out of scope, and (for extra credit), why they are out of scope.\nfrom NAME 's review (URL) Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.)\nNot sure if this is really about scope clarification? I think this section is intending to talk about usage as part of a transport protocol, but tunnels are sort of more like a path overlay issue or something (might affect mtu, extra hops with extra things that can go wrong, etc.).\nNAME - I'm looking more closely at this issue, and I think text that I put in another (individual) draft may be useful here as well. I'll add it in this section (making the obvious substitutions), and I'll see what NAME and NAME think (after the holiday weekend), and then check back with you. If it works for people, the result will likely be clearer to many readers. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. 
The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. 
Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. 
Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced.793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414.Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. 
QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "encounter \"tripped circuit breakers\", with resulting user-visible impacts. 6.3. For most of the history of the Internet, we have trusted TCP to limit the impact of applications that sent a significant number of packets,"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-ae7aac4852094283576ec1bfa141dd2791b5e4667b8bcafb9644a1bc7008ad2f", "old_text": "behaviors, and that those behaviors would remain relatively stable in the short term. 6.3. The QUIC protocol, developed from a proprietary protocol into an IETF standards-track protocol RFC9000, turns many of the statements made", "comments": "", "new_text": "behaviors, and that those behaviors would remain relatively stable in the short term. 6.4. The QUIC protocol, developed from a proprietary protocol into an IETF standards-track protocol RFC9000, turns many of the statements made"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-ac9793005c29888dce20611da12126d71fc04b94beb51365bbe36a0852caf438", "old_text": "Various HTTP versions have been used for media delivery. HTTP/1.0, HTTP/1.1 and HTTP/2 are carried over TCP, and TCP's transport behavior is described in tcp-behavior. HTTP/3 is carried over QUIC, and QUIC's transport behavior is described in quic-behavior. Unreliable media delivery using RTP and other UDP-based protocols is also discussed in ultralow, udp-behavior, and hop-by-hop-encrypt, but it is difficult to give general guidance for these applications. For instance, when packet loss occurs, the most appropriate response may depend on the type of codec being used. 3.", "comments": "\u2026transport protocols Also tighten up description of the assumed protocol stack in (what is now) Section 6.1. Note that the UDP and TCP sections have been flipped in order, to improve section flow and clarity.\nNAME - If you have a moment before our call on Friday to take a look at this PR, I'd appreciate it. I think it's important to to get that right, in order to address both the INTDIR and TSVART reviews.\nFrom NAME As a bit of transport commentary, I don't find the setup of Section 6 compelling. While UDP is a transport in a technical sense, for the purpose of media, it is almost always a layer upon which the protocol doing the congestion control work runs. To that end, QUIC is just another more standardized case of this, just like SCTP over UDP. Rather than talking about \"UDP's behavior\" vs \"TCP's behavior\", I suggest talking about how applications over UDP for media behave vs applications over TCP for media behave.\nNAME - I need to look at this more closely. 
I understand your point about UDP, but (because UDP is not a real transport layer, while TCP and QUIC are), I suspect (without looking closely yet) I'm going to be making sure that I explain this more clearly, without trying to explain the behavior(s) of applications running over TCP and QUIC. As an aside, I'm trying to make this clearer in drafts I'm targeting forwards Media Over QUIC/MOQ, so I actually DO believe you ... Please await results.\nA +1 from NAME 's review (URL): Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review.\nAfter looking at this issue. I agree with both NAME and NAME that the emphasis should be on the interaction between (see new section describing this) media transport protocols and the transport services provided by underlying transport protocols. I'm moving the formerly-UDP section below the formerly-TCP section, because it's easier to describe what UDP is NOT doing after describing what TCP IS doing. I'm also moving one paragraph into the formerly-TCP section from Section 6, because it is specific to ABR strategies that assume a reliable transport. More news to come, when I create a PR for this issue ...\nWe think is now mergeable.", "new_text": "Various HTTP versions have been used for media delivery. HTTP/1.0, HTTP/1.1 and HTTP/2 are carried over TCP, and TCP's transport behavior is described in reliable-behavior. HTTP/3 is carried over QUIC, and QUIC's transport behavior is described in quic-behavior. Unreliable media delivery using RTP and other UDP-based protocols is also discussed in ultralow, unreliable-behavior, and hop-by-hop-encrypt, but it is difficult to give general guidance for these applications. For instance, when packet loss occurs, the most appropriate response may depend on the type of codec being used. 3."}
{"id": "q-en-draft-ietf-mops-streaming-opcons-ac9793005c29888dce20611da12126d71fc04b94beb51365bbe36a0852caf438", "old_text": "elapsed time between the acknowledgements for specific TCP segments from a TCP receiver since TCP octet sequence numbers and acknowledgements for those sequence numbers are carried in the clear, even if the TCP payload itself is encrypted. See tcp-behavior for more information. As transport protocols evolve to encrypt their transport header fields, one side effect of increasing encryption is the kind of passive monitoring, or even \"performance enhancement\" (RFC3135) that was possible with the older transport protocols (UDP, described in udp-behavior and TCP, described in tcp-behavior) is no longer possible with newer transport protocols such as QUIC (described in quic-behavior). The IETF has specified a \"latency spin bit\" mechanism in Section 17.4 of RFC9000 to allow passive latency monitoring from observation points on the network path throughout the duration of a connection, but currently chartered work in the IETF is focusing on endpoint monitoring and reporting, rather than on passive", "comments": "\u2026transport protocols Also tighten up description of the assumed protocol stack in (what is now) Section 6.1. Note that the UDP and TCP sections have been flipped in order, to improve section flow and clarity.\nNAME - If you have a moment before our call on Friday to take a look at this PR, I'd appreciate it. I think it's important to to get that right, in order to address both the INTDIR and TSVART reviews.\nFrom NAME As a bit of transport commentary, I don't find the setup of Section 6 compelling. While UDP is a transport in a technical sense, for the purpose of media, it is almost always a layer upon which the protocol doing the congestion control work runs. To that end, QUIC is just another more standardized case of this, just like SCTP over UDP. 
Rather than talking about \"UDP's behavior\" vs \"TCP's behavior\", I suggest talking about how applications over UDP for media behave vs applications over TCP for media behave.\nNAME - I need to look at this more closely. I understand your point about UDP, but (because UDP is not a real transport layer, while TCP and QUIC are), I suspect (without looking closely yet) I'm going to be making sure that I explain this more clearly, without trying to explain the behavior(s) of applications running over TCP and QUIC. As an aside, I'm trying to make this clearer in drafts I'm targeting forwards Media Over QUIC/MOQ, so I actually DO believe you ... Please await results.\nA +1 from NAME 's review (URL): Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review.\nAfter looking at this issue. I agree with both NAME and NAME that the emphasis should be on the interaction between (see new section describing this) media transport protocols and the transport services provided by underlying transport protocols. I'm moving the formerly-UDP section below the formerly-TCP section, because it's easier to describe what UDP is NOT doing after describing what TCP IS doing. I'm also moving one paragraph into the formerly-TCP section from Section 6, because it is specific to ABR strategies that assume a reliable transport. More news to come, when I create a PR for this issue ...\nWe think is now mergrable. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. 
These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. 
Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. 
The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. 
Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced.793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414.Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "elapsed time between the acknowledgements for specific TCP segments from a TCP receiver since TCP octet sequence numbers and acknowledgements for those sequence numbers are carried in the clear, even if the TCP payload itself is encrypted. See reliable-behavior for more information. As transport protocols evolve to encrypt their transport header fields, one side effect of increasing encryption is the kind of passive monitoring, or even \"performance enhancement\" (RFC3135) that was possible with the older transport protocols (UDP, described in unreliable-behavior and TCP, described in reliable-behavior) is no longer possible with newer transport protocols such as QUIC (described in quic-behavior). The IETF has specified a \"latency spin bit\" mechanism in Section 17.4 of RFC9000 to allow passive latency monitoring from observation points on the network path throughout the duration of a connection, but currently chartered work in the IETF is focusing on endpoint monitoring and reporting, rather than on passive"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-ac9793005c29888dce20611da12126d71fc04b94beb51365bbe36a0852caf438", "old_text": "6. Because networking resources are shared between users, a good place to start our discussion is how contention between users, and mechanisms to resolve that contention in ways that are \"fair\" between users, impact streaming media users. These topics are closely tied to transport protocol behaviors. As noted in sec-abr, ABR response strategies such as HLS RFC8216 or DASH MPEG-DASH are attempting to respond to changing path characteristics, and underlying transport protocols are also attempting to respond to changing path characteristics. For most of the history of the Internet, these transport protocols, described in udp-behavior and tcp-behavior, have had relatively consistent behaviors that have changed slowly, if at all, over time. Newly standardized transport protocols like QUIC RFC9000 can behave differently from existing transport protocols, and these behaviors may evolve over time more rapidly than currently-used transport protocols. For this reason, we have included a description of how the path characteristics that streaming media providers may see are likely to evolve over time. 6.1. Within sec-trans, the term \"Media Transport Protocol\" is used to describe to describe the protocol of interest. This is easier to understand if the reader assumes a protocol stack that looks something like this: where \"Media Format\" would be something like an RTP payload format RFC2736 or ISOBMFF ISOBMFF, \"Media Transport Protocol\" would be something like RTP or HTTP, and \"Transport Protocol\" would be something like TCP or UDP. 
Not all possible streaming media applications follow this model, but for the ones that do, it seems useful to have names for the protocol layers between Media Format and Transport Layer It is worth noting explicitly that the Media Transport Protocol and Transport Protocol layers might each include more than one protocol. For example, a Media Transport Protocol might run over HTTP, or over WebTransport, which in turn runs over HTTP. It is worth noting explicitly that more complex network protocol stacks are certainly possible - for instance, packets with this protocol stack may be carried in a tunnel, or in a VPN. If these environments are present, streaming media operators may need to analyze their effects on applications as well. 6.2. Because UDP does not provide any feedback mechanism to senders to", "comments": "…transport protocols Also tighten up description of the assumed protocol stack in (what is now) Section 6.1. Note that the UDP and TCP sections have been flipped in order, to improve section flow and clarity.", "new_text": "6. Within this document, the term "Media Transport Protocol" is used to describe any protocol that carries media metadata and media in its payload, and the term "Transport Protocol" describes any protocol that carries a Media Transport Protocol, or another Transport Protocol, in its payload. This is easier to understand if the reader assumes a protocol stack that looks something like this: where "Media" would be something like the output of a codec, or other media source such as closed-captioning, "Media Format" would be something like an RTP payload format RFC2736 or an ISOBMFF ISOBMFF profile, "Media Transport Protocol" would be something like RTP or DASH MPEG-DASH, and "Transport Protocol" would be a protocol that provides appropriate transport services, as described in Section 5 of RFC8095. Not all possible streaming media applications follow this model, but for the ones that do, it seems useful to distinguish between the protocol layer that is aware it is transporting media, and underlying protocol layers that are not aware. As described in the RFC8095 Abstract, the IETF has standardized a number of protocols that provide transport services.
Although these protocols, taken in total, provide a wide variety of transport services, sec-trans will distinguish between two extremes: Transport protocols used to provide reliable, in-order media delivery to an endpoint, typically providing flow control and congestion control (reliable-behavior) and Transport protocols used to provide unreliable, unordered media delivery to an endpoint, without flow control or congestion control (unreliable-behavior). Because newly standardized transport protocols such as QUIC RFC9000 can evolve their transport behavior more rapidly than currently-used transport protocols, we have included a description of how the path characteristics that streaming media providers may see are likely to evolve in quic-behavior. It is worth noting explicitly that the Transport Protocol layer might include more than one protocol. For example, a specific Media Transport Protocol might run over HTTP, or over WebTransport, which in turn runs over HTTP. It is worth noting explicitly that more complex network protocol stacks are certainly possible - for instance, when packets with this protocol stack are carried in a tunnel or in a VPN, the entire packet would likely appear in the payload of other protocols. If these environments are present, streaming media operators may need to analyze their effects on applications as well. 6.1. The HLS RFC8216 and DASH MPEG-DASH media transport protocols are typically carried over HTTP, and HTTP has used TCP as its only standardized transport protocol until HTTP/3 RFC9114. These media transport protocols use ABR response strategies as described in sec-abr to respond to changing path characteristics, and underlying transport protocols are also attempting to respond to changing path characteristics.
The past success of the largely TCP-based Internet is evidence that the various flow control and congestion control mechanisms TCP has used to achieve equilibrium quickly, at a point where TCP senders do not interfere with other TCP senders for sustained periods of time (RFC5681), have been largely successful. The Internet has continued to work even when the specific TCP mechanisms used to reach equilibrium changed over time (RFC7414). Because TCP provided a common tool to avoid contention, even when significant TCP-based applications like FTP were largely replaced by other significant TCP-based applications like HTTP, the transport behavior remained safe for the Internet. Modern TCP implementations (I-D.ietf-tcpm-rfc793bis) continue to probe for available bandwidth, and "back off" when a network path is saturated, but may also work to avoid growing queues along network paths, which can prevent older TCP senders from detecting quickly when a network path is becoming saturated. Congestion control mechanisms such as COPA COPA18 and BBR I-D.cardwell-iccrg-bbr-congestion-control make these decisions based on measured path delays, assuming that if the measured path delay is increasing, the sender is injecting packets onto the network path faster than the network can forward them (or the receiver can accept them) so the sender should adjust its sending rate accordingly. Although common TCP behavior has changed significantly since the days of Jacobson-Karels and RFC2001, even adding new congestion controllers such as CUBIC RFC8312, the common practice of implementing TCP as part of an operating system kernel has acted to limit how quickly TCP behavior can change. Even with the widespread use of automated operating system update installation on many end-user systems, streaming media providers could have a reasonable expectation that they could understand TCP transport protocol behaviors, and that those behaviors would remain relatively stable in the short term. 6.2.
Because UDP does not provide any feedback mechanism to senders to"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-ac9793005c29888dce20611da12126d71fc04b94beb51365bbe36a0852caf438", "old_text": "feedback mechanisms, UDP-based applications that send and receive substantial amounts of information are expected to provide their own feedback mechanisms, and to respond to the feedback the application receives. This expectation is most recently codified in Best Current Practice RFC8085. In contrast to adaptive segmented delivery over a reliable transport as described in adapt-deliver, some applications deliver streaming", "comments": "\u2026transport protocols Also tighten up description of the assumed protocol stack in (what is now) Section 6.1. Note that the UDP and TCP sections have been flipped in order, to improve section flow and clarity.\nNAME - If you have a moment before our call on Friday to take a look at this PR, I'd appreciate it. I think it's important to to get that right, in order to address both the INTDIR and TSVART reviews.\nFrom NAME As a bit of transport commentary, I don't find the setup of Section 6 compelling. While UDP is a transport in a technical sense, for the purpose of media, it is almost always a layer upon which the protocol doing the congestion control work runs. To that end, QUIC is just another more standardized case of this, just like SCTP over UDP. Rather than talking about \"UDP's behavior\" vs \"TCP's behavior\", I suggest talking about how applications over UDP for media behave vs applications over TCP for media behave.\nNAME - I need to look at this more closely. I understand your point about UDP, but (because UDP is not a real transport layer, while TCP and QUIC are), I suspect (without looking closely yet) I'm going to be making sure that I explain this more clearly, without trying to explain the behavior(s) of applications running over TCP and QUIC. As an aside, I'm trying to make this clearer in drafts I'm targeting forwards Media Over QUIC/MOQ, so I actually DO believe you ... 
Please await results.\nA +1 from NAME 's review (URL): Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review.\nAfter looking at this issue. I agree with both NAME and NAME that the emphasis should be on the interaction between (see new section describing this) media transport protocols and the transport services provided by underlying transport protocols. I'm moving the formerly-UDP section below the formerly-TCP section, because it's easier to describe what UDP is NOT doing after describing what TCP IS doing. I'm also moving one paragraph into the formerly-TCP section from Section 6, because it is specific to ABR strategies that assume a reliable transport. More news to come, when I create a PR for this issue ...\nWe think is now mergrable. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. 
Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. 
Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. 
Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced. 793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414. Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. 
I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "feedback mechanisms, UDP-based applications that send and receive substantial amounts of information are expected to provide their own feedback mechanisms, and to respond to the feedback the application receives. This expectation is most recently codified as a Best Current Practice RFC8085. In contrast to adaptive segmented delivery over a reliable transport as described in adapt-deliver, some applications deliver streaming"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-ac9793005c29888dce20611da12126d71fc04b94beb51365bbe36a0852caf438", "old_text": "6.3. The past success of the largely TCP-based Internet is evidence that the mechanisms TCP used to achieve equilibrium quickly, at a point where TCP senders do not interfere with other TCP senders for sustained periods of time (RFC5681), have been largely successful. The Internet has continued to work even when the specific TCP mechanisms used to reach equilibrium changed over time (RFC7414). Because TCP provides a common tool to avoid contention, even when significant TCP-based applications like FTP were largely replaced by other significant TCP-based applications like HTTP, the transport behavior remained safe for the Internet. Modern TCP implementations (I-D.ietf-tcpm-rfc793bis) continue to probe for available bandwidth, and \"back off\" when a network path is saturated, but may also work to avoid growing queues along network paths, which prevent TCP senders from detecting quickly when a network path becoming saturated. Congestion control mechanisms such as COPA COPA18 and BBR I-D.cardwell-iccrg-bbr-congestion-control make these decisions based on measured path delays, assuming that if the measured path delay is increasing, the sender is injecting packets onto the network path faster than the network can forward them (or the receiver can accept them) so the sender should adjust its sending rate accordingly. Although common TCP behavior has changed significantly since the days of Jacobson-Karels and RFC2001, even adding new congestion controllers such as CUBIC RFC8312, the common practice of implementing TCP as part of an operating system kernel has acted to limit how quickly TCP behavior can change. 
Even with the widespread use of automated operating system update installation on many end- user systems, streaming media providers could have a reasonable expectation that they could understand TCP transport protocol behaviors, and that those behaviors would remain relatively stable in the short term. 6.4. The QUIC protocol, developed from a proprietary protocol into an IETF standards-track protocol RFC9000, turns many of the statements made in udp-behavior and tcp-behavior on their heads. Although QUIC provides an alternative to the TCP and UDP transport protocols, QUIC is itself encapsulated in UDP. As noted elsewhere in", "comments": "\u2026transport protocols Also tighten up description of the assumed protocol stack in (what is now) Section 6.1. Note that the UDP and TCP sections have been flipped in order, to improve section flow and clarity.\nNAME - If you have a moment before our call on Friday to take a look at this PR, I'd appreciate it. I think it's important to get that right, in order to address both the INTDIR and TSVART reviews.\nFrom NAME As a bit of transport commentary, I don't find the setup of Section 6 compelling. While UDP is a transport in a technical sense, for the purpose of media, it is almost always a layer upon which the protocol doing the congestion control work runs. To that end, QUIC is just another more standardized case of this, just like SCTP over UDP. Rather than talking about \"UDP's behavior\" vs \"TCP's behavior\", I suggest talking about how applications over UDP for media behave vs applications over TCP for media behave.\nNAME - I need to look at this more closely. I understand your point about UDP, but (because UDP is not a real transport layer, while TCP and QUIC are), I suspect (without looking closely yet) I'm going to be making sure that I explain this more clearly, without trying to explain the behavior(s) of applications running over TCP and QUIC. 
As an aside, I'm trying to make this clearer in drafts I'm targeting towards Media Over QUIC/MOQ, so I actually DO believe you ... Please await results.\nA +1 from NAME 's review (URL): Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review.\nAfter looking at this issue, I agree with both NAME and NAME that the emphasis should be on the interaction between (see new section describing this) media transport protocols and the transport services provided by underlying transport protocols. I'm moving the formerly-UDP section below the formerly-TCP section, because it's easier to describe what UDP is NOT doing after describing what TCP IS doing. I'm also moving one paragraph into the formerly-TCP section from Section 6, because it is specific to ABR strategies that assume a reliable transport. More news to come, when I create a PR for this issue ...\nWe think is now mergeable. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. 
Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). 
RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. 
Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced. 793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414. Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. 
I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "6.3. The QUIC protocol, developed from a proprietary protocol into an IETF standards-track protocol RFC9000, turns many of the statements made in reliable-behavior and unreliable-behavior on their heads. Although QUIC provides an alternative to the TCP and UDP transport protocols, QUIC is itself encapsulated in UDP. As noted elsewhere in"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-ac9793005c29888dce20611da12126d71fc04b94beb51365bbe36a0852caf438", "old_text": "While QUIC is designed as a general-purpose transport protocol, and can carry different application-layer protocols, the current standardized mapping is for HTTP/3 I-D.ietf-quic-http, which describes how QUIC transport features are used for HTTP. The convention is for HTTP/3 to run over UDP port 443 Port443 but this is not a strict requirement. When HTTP/3 is encapsulated in QUIC, which is then encapsulated in UDP, streaming operators (and network operators) might see UDP traffic patterns that are similar to HTTP(S) over TCP. Since earlier versions of HTTP(S) rely on TCP, UDP ports may be blocked for any port numbers that are not commonly used, such as UDP 53 for DNS. Even when UDP ports are not blocked and HTTP/3 can flow, streaming operators (and network operators) may severely rate-limit this traffic because they do not expect to see legitimate high-bandwidth traffic such as streaming media over the UDP ports that HTTP/3 is using. As noted in hol-blocking, because TCP provides a reliable, in-order delivery service for applications, any packet loss for a TCP", "comments": "\u2026transport protocols Also tighten up description of the assumed protocol stack in (what is now) Section 6.1. Note that the UDP and TCP sections have been flipped in order, to improve section flow and clarity.\nNAME - If you have a moment before our call on Friday to take a look at this PR, I'd appreciate it. I think it's important to get that right, in order to address both the INTDIR and TSVART reviews.\nFrom NAME As a bit of transport commentary, I don't find the setup of Section 6 compelling. While UDP is a transport in a technical sense, for the purpose of media, it is almost always a layer upon which the protocol doing the congestion control work runs. To that end, QUIC is just another more standardized case of this, just like SCTP over UDP. 
Rather than talking about \"UDP's behavior\" vs \"TCP's behavior\", I suggest talking about how applications over UDP for media behave vs applications over TCP for media behave.\nNAME - I need to look at this more closely. I understand your point about UDP, but (because UDP is not a real transport layer, while TCP and QUIC are), I suspect (without looking closely yet) I'm going to be making sure that I explain this more clearly, without trying to explain the behavior(s) of applications running over TCP and QUIC. As an aside, I'm trying to make this clearer in drafts I'm targeting towards Media Over QUIC/MOQ, so I actually DO believe you ... Please await results.\nA +1 from NAME 's review (URL): Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review.\nAfter looking at this issue, I agree with both NAME and NAME that the emphasis should be on the interaction between (see new section describing this) media transport protocols and the transport services provided by underlying transport protocols. I'm moving the formerly-UDP section below the formerly-TCP section, because it's easier to describe what UDP is NOT doing after describing what TCP IS doing. I'm also moving one paragraph into the formerly-TCP section from Section 6, because it is specific to ABR strategies that assume a reliable transport. More news to come, when I create a PR for this issue ...\nWe think is now mergeable. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. 
These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. 
Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. 
The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. 
Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced. 793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414. Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "While QUIC is designed as a general-purpose transport protocol, and can carry different application-layer protocols, the current standardized mapping is for HTTP/3 RFC9114, which describes how QUIC transport services are used for HTTP. The convention is for HTTP/3 to run over UDP port 443 Port443 but this is not a strict requirement. When HTTP/3 is encapsulated in QUIC, which is then encapsulated in UDP, streaming operators (and network operators) might see UDP traffic patterns that are similar to HTTP(S) over TCP. UDP ports may be blocked for any port numbers that are not commonly used, such as UDP 53 for DNS. Even when UDP ports are not blocked and QUIC packets can flow, streaming operators (and network operators) may severely rate-limit this traffic because they do not expect to see legitimate high-bandwidth traffic such as streaming media over the UDP ports that HTTP/3 is using. As noted in hol-blocking, because TCP provides a reliable, in-order delivery service for applications, any packet loss for a TCP"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-ac9793005c29888dce20611da12126d71fc04b94beb51365bbe36a0852caf438", "old_text": "connection's congestion controller, providing some transport-level congestion avoidance measures, which UDP does not. As noted in tcp-behavior, there is an increasing interest in congestion control algorithms that respond to delay measurements, instead of responding to packet loss. These algorithms may deliver an improved user experience, but in some cases, have not responded to", "comments": "\u2026transport protocols Also tighten up description of the assumed protocol stack in (what is now) Section 6.1. Note that the UDP and TCP sections have been flipped in order, to improve section flow and clarity.\nNAME - If you have a moment before our call on Friday to take a look at this PR, I'd appreciate it. I think it's important to get that right, in order to address both the INTDIR and TSVART reviews.\nFrom NAME As a bit of transport commentary, I don't find the setup of Section 6 compelling. While UDP is a transport in a technical sense, for the purpose of media, it is almost always a layer upon which the protocol doing the congestion control work runs. To that end, QUIC is just another more standardized case of this, just like SCTP over UDP. Rather than talking about \"UDP's behavior\" vs \"TCP's behavior\", I suggest talking about how applications over UDP for media behave vs applications over TCP for media behave.\nNAME - I need to look at this more closely. I understand your point about UDP, but (because UDP is not a real transport layer, while TCP and QUIC are), I suspect (without looking closely yet) I'm going to be making sure that I explain this more clearly, without trying to explain the behavior(s) of applications running over TCP and QUIC. As an aside, I'm trying to make this clearer in drafts I'm targeting towards Media Over QUIC/MOQ, so I actually DO believe you ... 
Please await results.\nA +1 from NAME 's review (URL): Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review.\nAfter looking at this issue, I agree with both NAME and NAME that the emphasis should be on the interaction between (see new section describing this) media transport protocols and the transport services provided by underlying transport protocols. I'm moving the formerly-UDP section below the formerly-TCP section, because it's easier to describe what UDP is NOT doing after describing what TCP IS doing. I'm also moving one paragraph into the formerly-TCP section from Section 6, because it is specific to ABR strategies that assume a reliable transport. More news to come, when I create a PR for this issue ...\nWe think is now mergeable. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. 
Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. 
Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. 
Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced. 793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414. Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. 
I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "connection's congestion controller, providing some transport-level congestion avoidance measures, which UDP does not. As noted in reliable-behavior, there is an increasing interest in congestion control algorithms that respond to delay measurements, instead of responding to packet loss. These algorithms may deliver an improved user experience, but in some cases, have not responded to"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-ac9793005c29888dce20611da12126d71fc04b94beb51365bbe36a0852caf438", "old_text": "both were then selected as the default TCP congestion controllers in Linux, and both were deployed globally. The point mentioned in tcp-behavior about TCP congestion controllers being implemented in operating system kernels is different with QUIC. Although QUIC can be implemented in operating system kernels, one of the design goals when this work was chartered was \"QUIC is expected to support rapid, distributed development and testing of features,\" and to meet this expectation, many implementers have chosen to implement QUIC in user space, outside the operating system kernel, and to even distribute QUIC libraries with their own applications. It is worth noting that streaming operators using HTTP/3, carried over QUIC, can expect more frequent deployment of new congestion controller behavior than has been the case with HTTP/1 and HTTP/2, carried over TCP. It is worth considering that if TCP-based HTTP traffic and UDP-based HTTP/3 traffic are allowed to enter operator networks on roughly", "comments": "\u2026transport protocols Also tighten up description of the assumed protocol stack in (what is now) Section 6.1. Note that the UDP and TCP sections have been flipped in order, to improve section flow and clarity.\nNAME - If you have a moment before our call on Friday to take a look at this PR, I'd appreciate it. I think it's important to get that right, in order to address both the INTDIR and TSVART reviews.\nFrom NAME As a bit of transport commentary, I don't find the setup of Section 6 compelling. While UDP is a transport in a technical sense, for the purpose of media, it is almost always a layer upon which the protocol doing the congestion control work runs. To that end, QUIC is just another more standardized case of this, just like SCTP over UDP. 
Rather than talking about \"UDP's behavior\" vs \"TCP's behavior\", I suggest talking about how applications over UDP for media behave vs applications over TCP for media behave.\nNAME - I need to look at this more closely. I understand your point about UDP, but (because UDP is not a real transport layer, while TCP and QUIC are), I suspect (without looking closely yet) I'm going to be making sure that I explain this more clearly, without trying to explain the behavior(s) of applications running over TCP and QUIC. As an aside, I'm trying to make this clearer in drafts I'm targeting forwards Media Over QUIC/MOQ, so I actually DO believe you ... Please await results.\nA +1 from NAME 's review (URL): Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review.\nAfter looking at this issue, I agree with both NAME and NAME that the emphasis should be on the interaction between (see new section describing this) media transport protocols and the transport services provided by underlying transport protocols. I'm moving the formerly-UDP section below the formerly-TCP section, because it's easier to describe what UDP is NOT doing after describing what TCP IS doing. I'm also moving one paragraph into the formerly-TCP section from Section 6, because it is specific to ABR strategies that assume a reliable transport. More news to come, when I create a PR for this issue ...\nWe think is now mergeable. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. 
These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. 
Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. 
The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. 
Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced. 793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414. Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "both were then selected as the default TCP congestion controllers in Linux, and both were deployed globally. The point mentioned in reliable-behavior about TCP congestion controllers being implemented in operating system kernels is different with QUIC. Although QUIC can be implemented in operating system kernels, one of the design goals when this work was chartered was \"QUIC is expected to support rapid, distributed development and testing of features,\" and to meet this expectation, many implementers have chosen to implement QUIC in user space, outside the operating system kernel, and to even distribute QUIC libraries with their own applications. It is worth noting that streaming operators using HTTP/3, carried over QUIC, can expect more frequent deployment of new congestion controller behavior than has been the case with HTTP/1 and HTTP/2, carried over TCP. It is worth considering that if TCP-based HTTP traffic and UDP-based HTTP/3 traffic are allowed to enter operator networks on roughly"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-ac9793005c29888dce20611da12126d71fc04b94beb51365bbe36a0852caf438", "old_text": "certainly as examining decrypted traffic. Because HTTPS has historically layered HTTP on top of TLS, which is in turn layered on top of TCP, intermediaries do have access to unencrypted TCP-level transport information, such as retransmissions, and some carriers exploited this information in attempts to improve transport-layer performance RFC3135. The most recent standardized version of HTTPS, HTTP/3 I-D.ietf-quic-http, uses the QUIC protocol RFC9000 as its transport layer. QUIC relies on the TLS 1.3 initial handshake RFC8446 only for key exchange RFC9001, and encrypts almost all transport parameters itself except for a few invariant header fields. In the QUIC short header, the only transport-level parameter which is sent \"in the clear\" is the Destination Connection ID RFC8999, and even in the QUIC long header, the only transport-level parameters sent \"in the clear\" are the Version, Destination Connection ID, and Source Connection ID. For these reasons, HTTP/3 is significantly more \"opaque\" than HTTPS with HTTP/1 or HTTP/2.", "comments": "\u2026transport protocols Also tighten up description of the assumed protocol stack in (what is now) Section 6.1. Note that the UDP and TCP sections have been flipped in order, to improve section flow and clarity.\nNAME - If you have a moment before our call on Friday to take a look at this PR, I'd appreciate it. I think it's important to get that right, in order to address both the INTDIR and TSVART reviews.\nFrom NAME As a bit of transport commentary, I don't find the setup of Section 6 compelling. While UDP is a transport in a technical sense, for the purpose of media, it is almost always a layer upon which the protocol doing the congestion control work runs. To that end, QUIC is just another more standardized case of this, just like SCTP over UDP. 
Rather than talking about \"UDP's behavior\" vs \"TCP's behavior\", I suggest talking about how applications over UDP for media behave vs applications over TCP for media behave.\nNAME - I need to look at this more closely. I understand your point about UDP, but (because UDP is not a real transport layer, while TCP and QUIC are), I suspect (without looking closely yet) I'm going to be making sure that I explain this more clearly, without trying to explain the behavior(s) of applications running over TCP and QUIC. As an aside, I'm trying to make this clearer in drafts I'm targeting forwards Media Over QUIC/MOQ, so I actually DO believe you ... Please await results.\nA +1 from NAME 's review (URL): Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review.\nAfter looking at this issue, I agree with both NAME and NAME that the emphasis should be on the interaction between (see new section describing this) media transport protocols and the transport services provided by underlying transport protocols. I'm moving the formerly-UDP section below the formerly-TCP section, because it's easier to describe what UDP is NOT doing after describing what TCP IS doing. I'm also moving one paragraph into the formerly-TCP section from Section 6, because it is specific to ABR strategies that assume a reliable transport. More news to come, when I create a PR for this issue ...\nWe think is now mergeable. Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. 
These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. 
Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. 
The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. 
Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced. 793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414. Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "certainly as examining decrypted traffic. Because HTTPS has historically layered HTTP on top of TLS, which is in turn layered on top of TCP, intermediaries have historically had access to unencrypted TCP-level transport information, such as retransmissions, and some carriers exploited this information in attempts to improve transport-layer performance RFC3135. The most recent standardized version of HTTPS, HTTP/3 RFC9114, uses the QUIC protocol RFC9000 as its transport layer. QUIC relies on the TLS 1.3 initial handshake RFC8446 only for key exchange RFC9001, and encrypts almost all transport parameters itself except for a few invariant header fields. In the QUIC short header, the only transport-level parameter which is sent \"in the clear\" is the Destination Connection ID RFC8999, and even in the QUIC long header, the only transport-level parameters sent \"in the clear\" are the Version, Destination Connection ID, and Source Connection ID. For these reasons, HTTP/3 is significantly more \"opaque\" than HTTPS with HTTP/1 or HTTP/2."}
{"id": "q-en-draft-ietf-mops-streaming-opcons-252d796f6af8ff404534dba318b92520acd314811f98260d06a21aad660d157b", "old_text": "4.1. (To the editor: check this repository URL after the draft is adopted. The working group may create its own repository) This document is in the Github repository at https://github.com/GrumpyOldTroll/ietf-mops-drafts. Readers are welcome to open issues and send pull requests for this document. Substantial discussion of this document should take place on the MOPS working group mailing list (mops@ietf.org). 4.2. Presentations: IETF 105 BOF: https://www.youtube.com/watch?v=4G3YBVmn9Eo&t=47m21s IETF 106 Mops meeting: https://www.youtube.com/ watch?v=4_k340xT2jM&t=7m23s List archive search for doc mentions: https://mailarchive.ietf.org/arch/browse/mops/?q=draft-ietf-mops- streaming-opcons 5. This document requires no actions from IANA.", "comments": "I actually used the URL in the draft to clone a copy, so I think this is good to go.", "new_text": "4.1. This document is in the Github repository at: https://github.com/ietf-wg-mops/draft-ietf-mops-streaming-opcons Readers are welcome to open issues and send pull requests for this document. Substantial discussion of this document should take place on the MOPS working group mailing list (mops@ietf.org). Join: https://www.ietf.org/mailman/listinfo/mops Search: https://mailarchive.ietf.org/arch/browse/mops/ 4.2. Presentations: IETF 105 BOF: https://www.youtube.com/watch?v=4G3YBVmn9Eo&t=47m21s IETF 106 meeting: https://www.youtube.com/ watch?v=4_k340xT2jM&t=7m23s 5. This document requires no actions from IANA."}
{"id": "q-en-draft-ietf-mops-streaming-opcons-f1c9c5fa2cdfb521b36b9c9d7d337b69a50f8b4eeb54ea7b877882cf66ea9b26", "old_text": "Internet Transport Protocols\" describes the impact of increased encryption of transport headers in general terms. 7.2. Although the IETF has put considerable emphasis on end-to-end", "comments": "Discussion from editor conference call: NAME will update this PR with NAME comments\nFrom NAME (URL) Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section (Seems closely related to URL )\nThe purpose of such tunneling is to circumvent geoblocks and access content from other regions - not to protect the media content. Encrypting the content is done by the content owner for its protection.\nSo, looking at , I'm seeing this text: I think this means that the effect is that the \"transport-level information is opaque to intermediaries\" effect described in the document applies, with the additional consideration that intermediaries can't see the IP addresses used inside the tunnel, either. I think the same applies to any IPsec-encrypted tunnel. Do we need to say that? And, if so - can we describe this without using product names and descriptions which may age badly in an RFC?\nisn't as easy to read, but I think this is also the rough equivalent of IPsec, from a media operator perspective - all of the transport information and the IP addresses used inside the tunnel are obscured.\nJust for the record, definitely worth checking as it is related to this discussion: URL Reviewer: Michael Scharf Review result: Ready with Issues This document has been reviewed as part of the transport area review team's ongoing effort to review key IETF documents. 
These comments were written primarily for the transport area directors, but are copied to the document's authors and WG to allow them to address any issues raised and also to the IETF discussion list for information. When done at the time of IETF Last Call, the authors should consider this review as part of the last-call comments they receive. Please always CC EMAIL if you reply to or forward this review. This informational document surveys network and transport protocol issues that affect quality of experience for streaming applications, in particular video. The overview covers many topics across different technologies. Readers may only be familiar with a subset of these topics and could therefore learn quite a bit from this kind of tutorial. Nonetheless, it remains somewhat unclear what the actual objective of some of the text is. For some topics, quite a bit of technical background is provided, but the discussion is not really comprehensive. In many sections, the document neither derives technical challenges, nor system/protocol requirements, nor practical usage guidance. It is just a kind of tutorial. Quite a bit of text also presents ongoing IETF work, and an RFC with this scope may thus get outdated soon. Section 6.2 deals with topics owned by TCPM. In similar past documents, I have asked for a TCPM presentation prior to an IETF last call in order to ensure that owners of running code are in the loop. I believe this strategy has worked well in the past. Having said this, as TSV-ART reviewer I don't strongly disagree with a publication. All these issues may just be OK for an informational RFC. Some more specific comments: Section 3.2.1. Recognizing Changes from an Expected Baseline This section apparently assumes relatively static path properties, e.g., fixed network connectivity. It would be more useful to analyze the impact of Wifi and 4G/5G networks in one place. 
Some of this follows later in sections 3.2.1 and Yet, the split between these three sections is unclear. Having a discussion about today's Internet path characteristics and the dynamics at one place could be more useful. The text could also better distinguish between what matters to endpoints and considerations relevant only for middleboxes (such as passive monitors). Section 3.3 Path Requirements The statement \"to find the bandwidth requirements for a router on the delivery path\" may assume that the bottleneck on the path will be routers. Yet, on a path the bottlenecks could also be a link layer device (e.g., an Ethernet Switch, a Wifi access point, etc.). RFC 6077 (specifically Section 3.1.3) also explains that issue. A better wording may be \"to find the bandwidth requirements for a delivery path\". Section 3.6. Unpredictable Usage Profiles / Section 3.7. Extremely Unpredictable Usage Profiles I am not fully convinced by the distinction between \"unpredictable\" and \"extremely unpredictable\". To me, these two sections could be merged and maybe also be shortened. More specifically, Section 3.7 lists a lot of statistics from the past. What is IMHO a bit missing in Section 3.7 are actual operational considerations. For instance, are there any lessons learnt for the future? Note that Section 3.6 has a reasonable conclusion at the end. Section 4. Latency Considerations and Section 4.1. Ultra Low-Latency I am surprised by the definition \"ultra low-latency (less than 1 second)\", as well as some of the other numbers. For other real-time communication use cases, \"ultra low-latency\" would probably imply a latency requirement of the order of one millisecond. For instance, isochronous traffic for motion control in industrial networks may require a latency of 1 ms, and such \"ultra-low latency\" requirements are discussed elsewhere, e.g., in the DetNet WG. 
The terminology in this section should be better explained to readers dealing with networked systems with much harder latency requirements. And a reference should be added for these definitions in the context of video streaming. Section 5.5.2 Head-of-Line Blocking If Head-of-Line Blocking is indeed a relevant operational problem, it would be useful to add a corresponding reference (e.g., with measurements). Section 5.5.3. Wide and Rapid Variation in Path Capacity The statement \"As many end devices have moved to wireless connectivity for the final hop (Wi-Fi, 5G, or LTE), new problems in bandwidth detction have emerged from radio interference and signal strength effects.\" could be moved to earlier parts of the document. Quite a bit of new computers apparently only have Wifi connectivity and no Ethernet port, i.e., Wifi may just be their default. Also, wireless may not only be used on the last hop, for instance in meshed setups. This may make the problem even harder. Section 6. Evolution of Transport Protocols and Transport Protocol Behaviors I really wonder whether UDP vs. TCP vs. QUIC is actually the relevant distinction. What may actually matter are the transport services provided by the protocol stack (e.g., RFC 8095). I fully agree to a related comment in the INTDIR review. Section 6.1. UDP and Its Behavior UDP is also used for encrypted tunnels (OpenVPN, Wireguard, etc.). Does encrypted tunneling really have no operational impact on streaming apps other than circuit breakers? (I am thinking about 4K streaming over OpenVPN and the like.) Section 6.2. TCP and Its Behavior This text should IMHO be presented and be discussed in TCPM. Personally, I am not convinced that this document is a good place for discussing the long history of TCP congestion control. If the objective of the document is to provide a tutorial, IMHO it would be more useful to briefly explain the state-of-the-art in year 2022. 
Some further specific comments on TCP congestion control: It is surprising that RFC 5681 is not referenced.793bis has normative statements regarding congestion control, and most stacks are compatible to what is written in 793bis. Several operating systems use CUBIC by default and support standard Reno. Another general good reference for TCP standards is RFC 7414.Does ECN, DCTCP, etc. really not matter at all, for instance, if a data center hosts video playout servers? Section 6.3. QUIC and Its Behavior As already noted, the discussion about head-of-line-blocking would really benefit from backing by a reference. Section 7. Streaming Encrypted Media As far as I know, streaming users sometimes use encrypted tunnels such as OpenVPN or WireGuard (or IPsec) to access video content. I may miss something, but it is unclear to me how that fits into the categories presented in Section", "new_text": "Internet Transport Protocols\" describes the impact of increased encryption of transport headers in general terms. It is also worth noting that considerations for heavily-encrypted transport protocols also come into play when streaming media is carried over IP-level VPNs and tunnels, with the additional consideration that an intermediary that does not possess credentials allowing decryption will not have visibility to the source and destination IP addresses of the packets being carried inside the tunnel. 7.2. Although the IETF has put considerable emphasis on end-to-end"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-d2797a38ced29b1128e95418a408b6619a0cba6da7e8a35dfe9aa8d8178cfef2", "old_text": "On-demand media streaming refers to the playback of pre-recorded media based on a user's action. In some cases, on-demand media is produced as a by-product of a live media production, using the same segments as the live event, but freezing the manifest after the live event has finished. In other cases, on-demand media is constructed out of pre-recorded assets with no streaming necessarily involved during the production of the on-demand content. On-demand media generally is not subject to latency concerns, but other timing-related considerations can still be as important or even", "comments": "looks good.\nFrom NAME\nNAME and NAME - I'm not sure who wrote this text, except I don't think it was me. >And as with other parts of the ecosystem, new technology brings new challenges. For example, with the emergence of ultra-low-latency streaming, responses have to start streaming to the end user while still being transmitted to the cache, and while the cache does not yet know the size of the object. Some of the popular caching systems were designed around cache footprint and had deeply ingrained assumptions about knowing the size of objects that are being stored, so the change in design requirements in long-established systems caused some errors in production. Incidents occurred where a transmission error in the connection from the upstream source to the cache could result in the cache holding a truncated segment and transmitting it to the end user's device. In this case, players rendering the stream often had the video freeze until the player was reset. In some cases the truncated object was even cached that way and served later to other players as well, causing continued stalls at the same spot in the video for all players playing the segment delivered from that cache node. 
NAME was curious if there's a reasonable reference that we could provide for this.\nYeah, that was me. There was not a public reference for this, alas, just a word of mouth anecdote.\nNAME - I agree with you about your comment on Section 3.6. It happens that I got comments that Section 3.6 and 3.7 didn't need to be nearly as detailed as they were, so they were heavily rewritten. The text where \"throttled\" was used is now in Section 3.6.1, and the emphasis is now on the cause of unexpected traffic profile changes, rather than the ISP response to the unexpected traffic profile changes in this specific incident, so the text that included \"throttled\" has been removed.\nNAME - I agree with your comment on Section 4.4, that some description or reference should be provided for the word \"manifest\". The next occurrence of \"manifest\" is in Section 5.2, which does provide a description. I don't think that's quite right - I'd say So I'm changing this definition, and moving it to the first occurrence of \"manifest\" in Section 4.4. Thanks for helping us fix problems you didn't even tell us about. :upsidedownface:\nlgtm", "new_text": "On-demand media streaming refers to the playback of pre-recorded media based on a user's action. In some cases, on-demand media is produced as a by-product of a live media production, using the same segments as the live event, but freezing the manifest that describes the media available from the media server after the live event has finished. In other cases, on-demand media is constructed out of pre-recorded assets with no streaming necessarily involved during the production of the on-demand content. On-demand media generally is not subject to latency concerns, but other timing-related considerations can still be as important or even"}
{"id": "q-en-draft-ietf-mops-streaming-opcons-d2797a38ced29b1128e95418a408b6619a0cba6da7e8a35dfe9aa8d8178cfef2", "old_text": "Media servers can provide media streams at various bitrates because the media has been encoded at various bitrates. This is a so-called \"ladder\" of bitrates that can be offered to media players as part of the manifest that describes the media being requested by the media player, so that the media player can select among the available bitrate choices. The media server may also choose to alter which bitrates are made", "comments": "looks good.\nFrom NAME\nNAME and NAME - I'm not sure who wrote this text, except I don't think it was me. >And as with other parts of the ecosystem, new technology brings new challenges. For example, with the emergence of ultra-low-latency streaming, responses have to start streaming to the end user while still being transmitted to the cache, and while the cache does not yet know the size of the object. Some of the popular caching systems were designed around cache footprint and had deeply ingrained assumptions about knowing the size of objects that are being stored, so the change in design requirements in long-established systems caused some errors in production. Incidents occurred where a transmission error in the connection from the upstream source to the cache could result in the cache holding a truncated segment and transmitting it to the end user's device. In this case, players rendering the stream often had the video freeze until the player was reset. In some cases the truncated object was even cached that way and served later to other players as well, causing continued stalls at the same spot in the video for all players playing the segment delivered from that cache node. NAME was curious if there's a reasonable reference that we could provide for this.\nYeah, that was me. 
There was not a public reference for this, alas, just a word of mouth anecdote.\nNAME - I agree with you about your comment on Section 3.6. It happens that I got comments that Section 3.6 and 3.7 didn't need to be nearly as detailed as they were, so they were heavily rewritten. The text where \"throttled\" was used is now in Section 3.6.1, and the emphasis is now on the cause of unexpected traffic profile changes, rather than the ISP response to the unexpected traffic profile changes in this specific incident, so the text that included \"throttled\" has been removed.\nNAME - I agree with your comment on Section 4.4, that some description or reference should be provided for the word \"manifest\". The next occurrence of \"manifest\" is in Section 5.2, which does provide a description. I don't think that's quite right - I'd say So I'm changing this definition, and moving it to the first occurrence of \"manifest\" in Section 4.4. Thanks for helping us fix problems you didn't even tell us about. :upsidedownface:\nlgtm", "new_text": "Media servers can provide media streams at various bitrates because the media has been encoded at various bitrates. This is a so-called \"ladder\" of bitrates that can be offered to media players as part of the manifest, so that the media player can select among the available bitrate choices. The media server may also choose to alter which bitrates are made"}
{"id": "q-en-draft-ietf-mptcp-rfc6824bis-0fa51282ede46a8dd9b38fbebaa26c2dc20645d4539b6cbb064e6643c87d3aed", "old_text": "receiver of the TCP packet (which can be either host). The initial SYN, containing just the MP_CAPABLE header, is used to define the version of MPTCP beign requested, as well as exchanging flags to negotiate connection features, described later. This option is used to declare the 64-bit keys that the end hosts", "comments": "Adding Olivier's proposed text about the Fast-Close (cfr., URL) and integrated a shorter version of URL Hello, Sorry for being late with this, but the beginning of the fall semester is always too busy for me. During the summer, there was a discussion on the FASTCLOSE option that was triggered by Francois. The current text of RFC6824bis uses the FASTCLOSE option inside ACK packets, but deployment experience has shown that this results in additional complexity since the ACK must be retransmitted to ensure its reliable delivery. Here is a proposed update for section 3.5 that allows a sender to either send FASTCLOSE inside ACK or RST while a receiver must be ready to process both. Changes are marked with Fast Close Regular TCP has the means of sending a reset (RST) signal to abruptly close a connection. With MPTCP, a reguler RST only has the scope of the subflow and will only close the concerned subflow but not affect the remaining subflows. MPTCP's connection will stay alive at the data level, in order to permit break-before-make handover between subflows. It is therefore necessary to provide an MPTCP-level \"reset\" to allow the abrupt closure of the whole MPTCP connection, and this is the MPFASTCLOSE option. MPFASTCLOSE is used to indicate to the peer that the connection will be abruptly closed and no data will be accepted anymore. The reasons for triggering an MPFASTCLOSE are implementation specific. Regular TCP does not allow sending a RST while the connection is in a synchronized state [RFC0793]. 
Nevertheless, implementations allow the sending of a RST in this state, if, for example, the operating system is running out of resources. In these cases, MPTCP should send the MPFASTCLOSE. This option is illustrated in Figure 14. 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +---------------+---------------+-------+-----------------------+ +---------------+---------------+-------+-----------------------+ +---------------------------------------------------------------+ Figure 14: Fast Close (MPFASTCLOSE) Option If Host A wants to force the closure of an MPTCP connection, it has two different options : Option A (ACK) : Host A sends an ACK containing the MPFASTCLOSE option on one subflow, containing the key of Host B as declared in the initial connection handshake. On all the other subflows, Host A sends a regular TCP RST to close these subflows, and tears them down. Host A now enters FASTCLOSEWAIT state. Option R (RST) : Host A sends a RST containing the MPFASTCLOSE option on all subflows, containing the key of Host B as declared in the initial connection handshake. Host A can tear the subflows and the connection down immediately. If a host receives a packet with a valid MPFASTCLOSE option, it shall process it as follows : o Upon receipt of an ACK with MPFASTCLOSE, containing the valid key, Host B answers on the same subflow with a TCP RST and tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). o As soon as Host A has received the TCP RST on the remaining subflow, it can close this subflow and tear down the whole connection (transition from FASTCLOSEWAIT to CLOSED states). If Host A receives an MPFASTCLOSE instead of a TCP RST, both hosts attempted fast closure simultaneously. Host A should reply with a TCP RST and tear down the connection. 
o If Host A does not receive a TCP RST in reply to its MPFASTCLOSE after one retransmission timeout (RTO) (the RTO of the subflow where the MPTCPRST has been sent), it SHOULD retransmit the MPFASTCLOSE. The number of retransmissions SHOULD be limited to avoid this connection from being retained for a long time, but this limit is implementation specific. A RECOMMENDED number is 3. If no TCP RST is received in response, Host A SHOULD send a TCP RST with the MPFASTCLOSE option itself when it releases state in order to clear any remaining state at middleboxes. o Upon receipt of a RST with MPFASTCLOSE, containing the valid key, Host B tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). The burden of the MP_FASTCLOSE is on the host that decides to close the connection (typically the server, but not necessarily). The text above allows it to opt for either ACK or RST without adding complexity to the other side. Another point that might be discussed in RFC6824bis is how a host should react when there are all the subflows associates to a Multipath TCP connection are in the CLOSED state, but the connection is still alive. At this point, it is impossible to send or receive data over the connection. It should attempt to reestablish one subflow to keep the connection alive. Olivier", "new_text": "receiver of the TCP packet (which can be either host). The initial SYN, containing just the MP_CAPABLE header, is used to define the version of MPTCP being requested, as well as exchanging flags to negotiate connection features, described later. This option is used to declare the 64-bit keys that the end hosts"}
{"id": "q-en-draft-ietf-mptcp-rfc6824bis-0fa51282ede46a8dd9b38fbebaa26c2dc20645d4539b6cbb064e6643c87d3aed", "old_text": "3.5. Regular TCP has the means of sending a reset (RST) signal to abruptly close a connection. With MPTCP, the RST only has the scope of the subflow and will only close the concerned subflow but not affect the remaining subflows. MPTCP's connection will stay alive at the data level, in order to permit break-before-make handover between subflows. It is therefore necessary to provide an MPTCP-level \"reset\" to allow the abrupt closure of the whole MPTCP connection, and this is the MP_FASTCLOSE option.", "comments": "Adding Olivier's proposed text about the Fast-Close (cfr., URL) and integrated a shorter version of URL Hello, Sorry for being late with this, but the beginning of the fall semester is always too busy for me. During the summer, there was a discussion on the FASTCLOSE option that was triggered by Francois. The current text of RFC6824bis uses the FASTCLOSE option inside ACK packets, but deployment experience has shown that this results in additional complexity since the ACK must be retransmitted to ensure its reliable delivery. Here is a proposed update for section 3.5 that allows a sender to either send FASTCLOSE inside ACK or RST while a receiver must be ready to process both. Changes are marked with Fast Close Regular TCP has the means of sending a reset (RST) signal to abruptly close a connection. With MPTCP, a reguler RST only has the scope of the subflow and will only close the concerned subflow but not affect the remaining subflows. MPTCP's connection will stay alive at the data level, in order to permit break-before-make handover between subflows. It is therefore necessary to provide an MPTCP-level \"reset\" to allow the abrupt closure of the whole MPTCP connection, and this is the MPFASTCLOSE option. MPFASTCLOSE is used to indicate to the peer that the connection will be abruptly closed and no data will be accepted anymore. 
The reasons for triggering an MPFASTCLOSE are implementation specific. Regular TCP does not allow sending a RST while the connection is in a synchronized state [RFC0793]. Nevertheless, implementations allow the sending of a RST in this state, if, for example, the operating system is running out of resources. In these cases, MPTCP should send the MPFASTCLOSE. This option is illustrated in Figure 14. 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +---------------+---------------+-------+-----------------------+ +---------------+---------------+-------+-----------------------+ +---------------------------------------------------------------+ Figure 14: Fast Close (MPFASTCLOSE) Option If Host A wants to force the closure of an MPTCP connection, it has two different options : Option A (ACK) : Host A sends an ACK containing the MPFASTCLOSE option on one subflow, containing the key of Host B as declared in the initial connection handshake. On all the other subflows, Host A sends a regular TCP RST to close these subflows, and tears them down. Host A now enters FASTCLOSEWAIT state. Option R (RST) : Host A sends a RST containing the MPFASTCLOSE option on all subflows, containing the key of Host B as declared in the initial connection handshake. Host A can tear the subflows and the connection down immediately. If a host receives a packet with a valid MPFASTCLOSE option, it shall process it as follows : o Upon receipt of an ACK with MPFASTCLOSE, containing the valid key, Host B answers on the same subflow with a TCP RST and tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). o As soon as Host A has received the TCP RST on the remaining subflow, it can close this subflow and tear down the whole connection (transition from FASTCLOSEWAIT to CLOSED states). If Host A receives an MPFASTCLOSE instead of a TCP RST, both hosts attempted fast closure simultaneously. 
Host A should reply with a TCP RST and tear down the connection. o If Host A does not receive a TCP RST in reply to its MPFASTCLOSE after one retransmission timeout (RTO) (the RTO of the subflow where the MPTCPRST has been sent), it SHOULD retransmit the MPFASTCLOSE. The number of retransmissions SHOULD be limited to avoid this connection from being retained for a long time, but this limit is implementation specific. A RECOMMENDED number is 3. If no TCP RST is received in response, Host A SHOULD send a TCP RST with the MPFASTCLOSE option itself when it releases state in order to clear any remaining state at middleboxes. o Upon receipt of a RST with MPFASTCLOSE, containing the valid key, Host B tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). The burden of the MP_FASTCLOSE is on the host that decides to close the connection (typically the server, but not necessarily). The text above allows it to opt for either ACK or RST without adding complexity to the other side. Another point that might be discussed in RFC6824bis is how a host should react when there are all the subflows associates to a Multipath TCP connection are in the CLOSED state, but the connection is still alive. At this point, it is impossible to send or receive data over the connection. It should attempt to reestablish one subflow to keep the connection alive. Olivier", "new_text": "3.5. Regular TCP has the means of sending a reset (RST) signal to abruptly close a connection. With MPTCP, a regular RST only has the scope of the subflow and will only close the concerned subflow but not affect the remaining subflows. MPTCP's connection will stay alive at the data level, in order to permit break-before-make handover between subflows. It is therefore necessary to provide an MPTCP-level \"reset\" to allow the abrupt closure of the whole MPTCP connection, and this is the MP_FASTCLOSE option."}
{"id": "q-en-draft-ietf-mptcp-rfc6824bis-0fa51282ede46a8dd9b38fbebaa26c2dc20645d4539b6cbb064e6643c87d3aed", "old_text": "is running out of resources. In these cases, MPTCP should send the MP_FASTCLOSE. This option is illustrated in tcpm_fastclose. If Host A wants to force the closure of an MPTCP connection, the MPTCP Fast Close procedure is as follows: Host A sends an ACK containing the MP_FASTCLOSE option on one subflow, containing the key of Host B as declared in the initial connection handshake. On all the other subflows, Host A sends a regular TCP RST to close these subflows, and tears them down. Host A now enters FASTCLOSE_WAIT state. Upon receipt of an MP_FASTCLOSE, containing the valid key, Host B answers on the same subflow with a TCP RST and tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). As soon as Host A has received the TCP RST on the remaining subflow, it can close this subflow and tear down the whole", "comments": "Adding Olivier's proposed text about the Fast-Close (cfr., URL) and integrated a shorter version of URL Hello, Sorry for being late with this, but the beginning of the fall semester is always too busy for me. During the summer, there was a discussion on the FASTCLOSE option that was triggered by Francois. The current text of RFC6824bis uses the FASTCLOSE option inside ACK packets, but deployment experience has shown that this results in additional complexity since the ACK must be retransmitted to ensure its reliable delivery. Here is a proposed update for section 3.5 that allows a sender to either send FASTCLOSE inside ACK or RST while a receiver must be ready to process both. Changes are marked with Fast Close Regular TCP has the means of sending a reset (RST) signal to abruptly close a connection. With MPTCP, a reguler RST only has the scope of the subflow and will only close the concerned subflow but not affect the remaining subflows. 
MPTCP's connection will stay alive at the data level, in order to permit break-before-make handover between subflows. It is therefore necessary to provide an MPTCP-level \"reset\" to allow the abrupt closure of the whole MPTCP connection, and this is the MPFASTCLOSE option. MPFASTCLOSE is used to indicate to the peer that the connection will be abruptly closed and no data will be accepted anymore. The reasons for triggering an MPFASTCLOSE are implementation specific. Regular TCP does not allow sending a RST while the connection is in a synchronized state [RFC0793]. Nevertheless, implementations allow the sending of a RST in this state, if, for example, the operating system is running out of resources. In these cases, MPTCP should send the MPFASTCLOSE. This option is illustrated in Figure 14. 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +---------------+---------------+-------+-----------------------+ +---------------+---------------+-------+-----------------------+ +---------------------------------------------------------------+ Figure 14: Fast Close (MPFASTCLOSE) Option If Host A wants to force the closure of an MPTCP connection, it has two different options : Option A (ACK) : Host A sends an ACK containing the MPFASTCLOSE option on one subflow, containing the key of Host B as declared in the initial connection handshake. On all the other subflows, Host A sends a regular TCP RST to close these subflows, and tears them down. Host A now enters FASTCLOSEWAIT state. Option R (RST) : Host A sends a RST containing the MPFASTCLOSE option on all subflows, containing the key of Host B as declared in the initial connection handshake. Host A can tear the subflows and the connection down immediately. If a host receives a packet with a valid MPFASTCLOSE option, it shall process it as follows : o Upon receipt of an ACK with MPFASTCLOSE, containing the valid key, Host B answers on the same subflow with a TCP RST and tears down all subflows. 
Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). o As soon as Host A has received the TCP RST on the remaining subflow, it can close this subflow and tear down the whole connection (transition from FASTCLOSEWAIT to CLOSED states). If Host A receives an MPFASTCLOSE instead of a TCP RST, both hosts attempted fast closure simultaneously. Host A should reply with a TCP RST and tear down the connection. o If Host A does not receive a TCP RST in reply to its MPFASTCLOSE after one retransmission timeout (RTO) (the RTO of the subflow where the MPTCPRST has been sent), it SHOULD retransmit the MPFASTCLOSE. The number of retransmissions SHOULD be limited to avoid this connection from being retained for a long time, but this limit is implementation specific. A RECOMMENDED number is 3. If no TCP RST is received in response, Host A SHOULD send a TCP RST with the MPFASTCLOSE option itself when it releases state in order to clear any remaining state at middleboxes. o Upon receipt of a RST with MPFASTCLOSE, containing the valid key, Host B tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). The burden of the MP_FASTCLOSE is on the host that decides to close the connection (typically the server, but not necessarily). The text above allows it to opt for either ACK or RST without adding complexity to the other side. Another point that might be discussed in RFC6824bis is how a host should react when there are all the subflows associates to a Multipath TCP connection are in the CLOSED state, but the connection is still alive. At this point, it is impossible to send or receive data over the connection. It should attempt to reestablish one subflow to keep the connection alive. Olivier", "new_text": "is running out of resources. In these cases, MPTCP should send the MP_FASTCLOSE. This option is illustrated in tcpm_fastclose. 
If Host A wants to force the closure of an MPTCP connection, it has two different options: Option A (ACK): Host A sends an ACK containing the MP_FASTCLOSE option on one subflow, containing the key of Host B as declared in the initial connection handshake. On all the other subflows, Host A sends a regular TCP RST to close these subflows, and tears them down. Host A now enters FASTCLOSE_WAIT state. Option R (RST): Host A sends a RST containing the MP_FASTCLOSE option on all subflows, containing the key of Host B as declared in the initial connection handshake. Host A can tear the subflows and the connection down immediately. If a host receives a packet with a valid MP_FASTCLOSE option, it shall process it as follows: Upon receipt of an ACK with MP_FASTCLOSE, containing the valid key, Host B answers on the same subflow with a TCP RST and tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). As soon as Host A has received the TCP RST on the remaining subflow, it can close this subflow and tear down the whole"}
{"id": "q-en-draft-ietf-mptcp-rfc6824bis-0fa51282ede46a8dd9b38fbebaa26c2dc20645d4539b6cbb064e6643c87d3aed", "old_text": "If Host A does not receive a TCP RST in reply to its MP_FASTCLOSE after one retransmission timeout (RTO) (the RTO of the subflow where the MPTCP_RST has been sent), it SHOULD retransmit the MP_FASTCLOSE. The number of retransmissions SHOULD be limited to avoid this connection from being retained for a long time, but this limit is implementation specific. A RECOMMENDED number is 3. If no TCP RST is received in response, Host A SHOULD send a TCP RST itself when it releases state in order to clear any remaining state at middleboxes. 3.6.", "comments": "Adding Olivier's proposed text about the Fast-Close (cfr., URL) and integrated a shorter version of URL Hello, Sorry for being late with this, but the beginning of the fall semester is always too busy for me. During the summer, there was a discussion on the FASTCLOSE option that was triggered by Francois. The current text of RFC6824bis uses the FASTCLOSE option inside ACK packets, but deployment experience has shown that this results in additional complexity since the ACK must be retransmitted to ensure its reliable delivery. Here is a proposed update for section 3.5 that allows a sender to either send FASTCLOSE inside ACK or RST while a receiver must be ready to process both. Changes are marked with Fast Close Regular TCP has the means of sending a reset (RST) signal to abruptly close a connection. With MPTCP, a reguler RST only has the scope of the subflow and will only close the concerned subflow but not affect the remaining subflows. MPTCP's connection will stay alive at the data level, in order to permit break-before-make handover between subflows. It is therefore necessary to provide an MPTCP-level \"reset\" to allow the abrupt closure of the whole MPTCP connection, and this is the MPFASTCLOSE option. 
MPFASTCLOSE is used to indicate to the peer that the connection will be abruptly closed and no data will be accepted anymore. The reasons for triggering an MPFASTCLOSE are implementation specific. Regular TCP does not allow sending a RST while the connection is in a synchronized state [RFC0793]. Nevertheless, implementations allow the sending of a RST in this state, if, for example, the operating system is running out of resources. In these cases, MPTCP should send the MPFASTCLOSE. This option is illustrated in Figure 14. 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +---------------+---------------+-------+-----------------------+ +---------------+---------------+-------+-----------------------+ +---------------------------------------------------------------+ Figure 14: Fast Close (MPFASTCLOSE) Option If Host A wants to force the closure of an MPTCP connection, it has two different options : Option A (ACK) : Host A sends an ACK containing the MPFASTCLOSE option on one subflow, containing the key of Host B as declared in the initial connection handshake. On all the other subflows, Host A sends a regular TCP RST to close these subflows, and tears them down. Host A now enters FASTCLOSEWAIT state. Option R (RST) : Host A sends a RST containing the MPFASTCLOSE option on all subflows, containing the key of Host B as declared in the initial connection handshake. Host A can tear the subflows and the connection down immediately. If a host receives a packet with a valid MPFASTCLOSE option, it shall process it as follows : o Upon receipt of an ACK with MPFASTCLOSE, containing the valid key, Host B answers on the same subflow with a TCP RST and tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). o As soon as Host A has received the TCP RST on the remaining subflow, it can close this subflow and tear down the whole connection (transition from FASTCLOSEWAIT to CLOSED states). 
If Host A receives an MPFASTCLOSE instead of a TCP RST, both hosts attempted fast closure simultaneously. Host A should reply with a TCP RST and tear down the connection. o If Host A does not receive a TCP RST in reply to its MPFASTCLOSE after one retransmission timeout (RTO) (the RTO of the subflow where the MPTCPRST has been sent), it SHOULD retransmit the MPFASTCLOSE. The number of retransmissions SHOULD be limited to avoid this connection from being retained for a long time, but this limit is implementation specific. A RECOMMENDED number is 3. If no TCP RST is received in response, Host A SHOULD send a TCP RST with the MPFASTCLOSE option itself when it releases state in order to clear any remaining state at middleboxes. o Upon receipt of a RST with MPFASTCLOSE, containing the valid key, Host B tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). The burden of the MP_FASTCLOSE is on the host that decides to close the connection (typically the server, but not necessarily). The text above allows it to opt for either ACK or RST without adding complexity to the other side. Another point that might be discussed in RFC6824bis is how a host should react when there are all the subflows associates to a Multipath TCP connection are in the CLOSED state, but the connection is still alive. At this point, it is impossible to send or receive data over the connection. It should attempt to reestablish one subflow to keep the connection alive. Olivier", "new_text": "If Host A does not receive a TCP RST in reply to its MP_FASTCLOSE after one retransmission timeout (RTO) (the RTO of the subflow where the MP_FASTCLOSE has been sent), it SHOULD retransmit the MP_FASTCLOSE. The number of retransmissions SHOULD be limited to avoid this connection from being retained for a long time, but this limit is implementation specific. A RECOMMENDED number is 3. 
If no TCP RST is received in response, Host A SHOULD send a TCP RST with the MP_FASTCLOSE option itself when it releases state in order to clear any remaining state at middleboxes. Upon receipt of a RST with MP_FASTCLOSE, containing the valid key, Host B tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). 3.6."}
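The retransmission discipline in the fast-close text above (retransmit MP_FASTCLOSE on each RTO, up to a RECOMMENDED limit of 3, then release state and emit a TCP RST carrying MP_FASTCLOSE) can be sketched as a small state helper. This is an illustrative model only; the class and method names are invented for the sketch and are not part of any real MPTCP stack.

```python
# Illustrative sketch of the MP_FASTCLOSE retransmission policy:
# retransmit on each RTO, up to a recommended limit of 3, then give up
# and emit a TCP RST (with MP_FASTCLOSE) to clear middlebox state.
# All names here are hypothetical.

RECOMMENDED_RETRANSMIT_LIMIT = 3

class FastCloseSender:
    def __init__(self, limit=RECOMMENDED_RETRANSMIT_LIMIT):
        self.limit = limit
        self.retransmits = 0
        self.actions = ["MP_FASTCLOSE"]   # initial transmission

    def on_rto(self):
        """Called when the RTO fires without a TCP RST having arrived."""
        if self.retransmits < self.limit:
            self.retransmits += 1
            self.actions.append("MP_FASTCLOSE")   # retransmit
            return "FASTCLOSE_WAIT"
        # Limit reached: release state and send a RST with MP_FASTCLOSE.
        self.actions.append("RST+MP_FASTCLOSE")
        return "CLOSED"

    def on_rst_received(self):
        """Peer answered with a TCP RST: tear the connection down."""
        return "CLOSED"
```

Simulating four consecutive RTOs shows three retransmissions in FASTCLOSE_WAIT followed by the fallback RST and transition to CLOSED.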
{"id": "q-en-draft-ietf-mptcp-rfc6824bis-0fa51282ede46a8dd9b38fbebaa26c2dc20645d4539b6cbb064e6643c87d3aed", "old_text": "performing subflows or subflows that regularly fail during use, in order to temporarily choose not to use these paths. 4. In order to support multipath operation, the semantics of some TCP", "comments": "Adding Olivier's proposed text about the Fast-Close (cfr., URL) and integrated a shorter version of URL Hello, Sorry for being late with this, but the beginning of the fall semester is always too busy for me. During the summer, there was a discussion on the FASTCLOSE option that was triggered by Francois. The current text of RFC6824bis uses the FASTCLOSE option inside ACK packets, but deployment experience has shown that this results in additional complexity since the ACK must be retransmitted to ensure its reliable delivery. Here is a proposed update for section 3.5 that allows a sender to either send FASTCLOSE inside ACK or RST while a receiver must be ready to process both. Changes are marked with Fast Close Regular TCP has the means of sending a reset (RST) signal to abruptly close a connection. With MPTCP, a reguler RST only has the scope of the subflow and will only close the concerned subflow but not affect the remaining subflows. MPTCP's connection will stay alive at the data level, in order to permit break-before-make handover between subflows. It is therefore necessary to provide an MPTCP-level \"reset\" to allow the abrupt closure of the whole MPTCP connection, and this is the MPFASTCLOSE option. MPFASTCLOSE is used to indicate to the peer that the connection will be abruptly closed and no data will be accepted anymore. The reasons for triggering an MPFASTCLOSE are implementation specific. Regular TCP does not allow sending a RST while the connection is in a synchronized state [RFC0793]. Nevertheless, implementations allow the sending of a RST in this state, if, for example, the operating system is running out of resources. 
In these cases, MPTCP should send the MPFASTCLOSE. This option is illustrated in Figure 14. 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +---------------+---------------+-------+-----------------------+ +---------------+---------------+-------+-----------------------+ +---------------------------------------------------------------+ Figure 14: Fast Close (MPFASTCLOSE) Option If Host A wants to force the closure of an MPTCP connection, it has two different options : Option A (ACK) : Host A sends an ACK containing the MPFASTCLOSE option on one subflow, containing the key of Host B as declared in the initial connection handshake. On all the other subflows, Host A sends a regular TCP RST to close these subflows, and tears them down. Host A now enters FASTCLOSEWAIT state. Option R (RST) : Host A sends a RST containing the MPFASTCLOSE option on all subflows, containing the key of Host B as declared in the initial connection handshake. Host A can tear the subflows and the connection down immediately. If a host receives a packet with a valid MPFASTCLOSE option, it shall process it as follows : o Upon receipt of an ACK with MPFASTCLOSE, containing the valid key, Host B answers on the same subflow with a TCP RST and tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). o As soon as Host A has received the TCP RST on the remaining subflow, it can close this subflow and tear down the whole connection (transition from FASTCLOSEWAIT to CLOSED states). If Host A receives an MPFASTCLOSE instead of a TCP RST, both hosts attempted fast closure simultaneously. Host A should reply with a TCP RST and tear down the connection. o If Host A does not receive a TCP RST in reply to its MPFASTCLOSE after one retransmission timeout (RTO) (the RTO of the subflow where the MPTCPRST has been sent), it SHOULD retransmit the MPFASTCLOSE. 
The number of retransmissions SHOULD be limited to avoid this connection from being retained for a long time, but this limit is implementation specific. A RECOMMENDED number is 3. If no TCP RST is received in response, Host A SHOULD send a TCP RST with the MPFASTCLOSE option itself when it releases state in order to clear any remaining state at middleboxes. o Upon receipt of a RST with MPFASTCLOSE, containing the valid key, Host B tears down all subflows. Host B can now close the whole MPTCP connection (it transitions directly to CLOSED state). The burden of the MP_FASTCLOSE is on the host that decides to close the connection (typically the server, but not necessarily). The text above allows it to opt for either ACK or RST without adding complexity to the other side. Another point that might be discussed in RFC6824bis is how a host should react when there are all the subflows associates to a Multipath TCP connection are in the CLOSED state, but the connection is still alive. At this point, it is impossible to send or receive data over the connection. It should attempt to reestablish one subflow to keep the connection alive. Olivier", "new_text": "performing subflows or subflows that regularly fail during use, in order to temporarily choose not to use these paths. 3.11. TCP Fast Open, described in RFC7413, has been introduced with the objective of gaining one RTT before transmitting data. This is considered a valuable gain as very short connections are very common, especially for HTTP request/response schemes. It achieves this by sending the SYN-segment together with data and allowing the server to reply immediately with data after the SYN/ACK. RFC7413 secures this mechanism, by using a new TCP option that includes a cookie which is negotiated in a preceding connection. When using TCP Fast Open in conjunction with MPTCP, there are two key points to take into account, detailed hereafter. 3.11.1. 
When a TFO client first connects to a server, it cannot immediately include data in the SYN for security reasons RFC7413. Instead, it requests a cookie that will be used in subsequent connections. This is done with the TCP cookie request/response options, of 2 bytes and 6-18 bytes respectively (depending on the chosen cookie length). TFO and MPTCP can be combined provided that the total length of their options does not exceed the maximum 40 bytes possible in TCP: In the SYN: MPTCP uses a 4-byte long MP_CAPABLE option. The MPTCP and TFO options sum up to 6 bytes. With typical TCP-options using up to 19 bytes in the SYN (24 bytes if options are padded at a word boundary), there is enough space to combine the MP_CAPABLE with the TFO Cookie Request. In the SYN+ACK: MPTCP uses a 12-byte long MP_CAPABLE option, but now TFO can be as long as 18 bytes. Since the maximum option length may be exceeded, it is up to the server to solve this by using a shorter cookie. As an example, if we consider that 19 bytes are used for classical TCP options, the maximum possible cookie length would be 7 bytes. Note that the same limitation applies to subsequent connections, for the SYN packet (because the client then echoes back the cookie to the server). Finally, if the security impact of reducing the cookie size is not deemed acceptable, the server can reduce the amount of other TCP-options by omitting the TCP timestamps (as outlined in app_options). 3.11.2. MPTCP uses, in the TCP establishment phase, a key exchange that is used to generate the Initial Data Sequence Numbers (IDSNs). In particular, the SYN with MP_CAPABLE occupies the first octet of the data sequence space. With TFO, one way to handle the data sent together with the SYN would be to consider an implicit DSS mapping that covers that SYN segment (since there is not enough space in the SYN to include a DSS option). 
The problem with that approach is that if a middlebox modifies the TFO data, this will not be noticed by MPTCP because of the absence of a DSS-checksum. For example, a TCP (but not MPTCP)-aware middlebox could insert bytes at the beginning of the stream and adapt the TCP checksum and sequence numbers accordingly. With an implicit mapping, this would give the client and the server different views of the DSS-mapping, with no way to detect this inconsistency as the DSS checksum is not present. To solve this, the TFO data should not be considered part of the Data Sequence Number space: the SYN with MP_CAPABLE still occupies the first octet of data sequence space, but then the first non-TFO data byte occupies the second octet. This guarantees that, if the use of DSS-checksum is negotiated, all data in the data sequence number space is checksummed. We also note that this does not entail a loss of functionality, because TFO-data is always sent when only one path is active. 3.11.3. The following shows a few examples of possible TFO+MPTCP establishment scenarios. Before a client can send data together with the SYN, it must request a cookie from the server, as shown in Figure fig_tfocookie. This is done by simply combining the TFO and MPTCP options. Once this is done, the received cookie can be used for TFO, as shown in Figure fig_tfodata. In this example, the client first sends 20 bytes in the SYN. The server immediately replies with 100 bytes following the SYN-ACK upon which the client replies with 20 more bytes. Note that the last segment in the figure has a TCP sequence number of 21, while the DSS subflow sequence number is 1 (because the TFO data is not part of the data sequence number space, as explained in Section tfodata). In Figure fig_tfofallback, the server does not support TFO. 
The client detects that no state is created in the server (as no data is acked), and now sends the MP_CAPABLE in the third ACK, in order for the server to build its MPTCP context at the end of the establishment. Now, the TFO data, retransmitted, becomes part of the data sequence mapping because it is effectively sent (in fact re-sent) after the establishment. It is also possible that the server acknowledges only part of the TFO data, as illustrated in Figure fig_tfopartial. The client will simply retransmit the missing data together with a DSS-mapping. 4. In order to support multipath operation, the semantics of some TCP"}
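The option-space arithmetic behind the "7 bytes" cookie figure in the TFO text above can be checked in a few lines. The 19-byte figure for classical TCP options and the 2-byte TFO kind/length overhead are the assumptions stated in that text; the function name is invented for the sketch.

```python
# Option-space arithmetic for combining TFO with MPTCP in the SYN+ACK:
# TCP options are limited to 40 bytes; MP_CAPABLE takes 12 bytes there,
# and the TFO option adds 2 bytes of kind/length overhead on top of the
# cookie itself.

TCP_OPTION_SPACE = 40
MP_CAPABLE_SYNACK = 12
TFO_OPTION_OVERHEAD = 2      # kind + length octets

def max_tfo_cookie(other_options: int) -> int:
    """Largest TFO cookie that still fits, given `other_options` bytes
    already consumed by classical TCP options."""
    return (TCP_OPTION_SPACE - other_options
            - MP_CAPABLE_SYNACK - TFO_OPTION_OVERHEAD)
```

With the text's assumption of 19 bytes of classical options this gives 40 - 19 - 12 - 2 = 7 bytes of cookie, matching the example; padding options to a word boundary (24 bytes) would shrink it further.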
{"id": "q-en-draft-ietf-ppm-dap-3dbd11bcdb0f60c4d14798aae39fdcd30d9715e6a9c39b4be659c93deceb900a", "old_text": "ppm-urn-space). Clients SHOULD display the \"detail\" field of all errors. The \"instance\" value MUST be the endpoint to which the request was targeted. The problem document MUST also include a \"taskid\" member which contains the associated PPM task ID, encoded with base64 using the standard alphabet RFC4648 (this value is always known, see task-configuration). In the remainder of this document, we use the tokens in the table above to refer to error types, rather than the full URNs. For", "comments": "All protocol request messages contain a field, which allows aggregators to service multiple tasks from a single set of endpoints. The exception is : since it is an HTTP GET request, there is no body, and thus no place for the client to indicate the desired to the aggregator. We don't want to introduce a request body to , because then it would have to be a POST instead of a GET, and that would make it harder for implementations to cache HPKE config values. So instead we have clients encode the task ID in RFC 4648 Base16 (a.k.a. hexadecimal) and provide it as a query parameter. See issue\nMy goal here is to enable our aggregator implementations in the interop target to use one HPKE config per task. The alternative to this would be for us to use aggregator endpoints that somehow incorporate the task ID, as I described , which is already permitted by the spec as written.\nThe Base64 alphabet includes , and , which means you have to URL encode (a.k.a. percent encode) Base64 strings before you can use them in a URL, which is a chore and yields less readable URLs (not that 64 character hex strings is especially user-friendly). An alternative would be to use RFC 4648's , omitting the padding characters as suggested in since task IDs are of a known length.\nGotcha. I think the main thing is to be consistent. 
I would either: Change this PR to use base64url; or Tack on a commit that makes all encodings of the taskID (except those that appear in the body, of course) the same (i.e., hexadecimal). I think (2.) would be my preference.\nI agree with the consistency argument. I'm leaning towards using everywhere because it's more compact than Base16/hex, and also I think implementations are less likely to be surprised by case requirements in base64url (i.e., it's obvious that base64 is case sensitive, but you have to read RFC 4648 attentively to notice that it mandates uppercase hex strings).\nBase64Url sounds good to me. I took a quick look and it appears that most standard base64 libraries also support the URL encoding.\nI will clean this up and merge it once I have implemented it in Janus and convinced myself it's sound.\nSGTM, but why not base64 encode the taskID for consistency with URL", "new_text": "ppm-urn-space). Clients SHOULD display the \"detail\" field of all errors. The \"instance\" value MUST be the endpoint to which the request was targeted. The problem document MUST also include a \"taskid\" member which contains the associated PPM task ID, encoded in Base 64 using the URL and filename safe alphabet with no padding as defined in sections 5 and 3.2 of RFC4648 (this value is always known, see task-configuration). In the remainder of this document, we use the tokens in the table above to refer to error types, rather than the full URNs. For"}
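The "taskid" encoding the record above settles on, Base 64 with the URL and filename safe alphabet and no padding (RFC 4648 sections 5 and 3.2), maps directly onto Python's standard library. A minimal sketch; the helper names are invented, and the 32-byte task ID in the usage note is an assumption for the example.

```python
import base64

def encode_task_id(task_id: bytes) -> str:
    """Base 64 with the URL/filename-safe alphabet (RFC 4648 section 5),
    with padding stripped (section 3.2), since task IDs have a known
    length."""
    return base64.urlsafe_b64encode(task_id).rstrip(b"=").decode("ascii")

def decode_task_id(text: str) -> bytes:
    """Re-add the stripped padding before decoding; the stdlib decoder
    expects correctly padded input."""
    padded = text + "=" * (-len(text) % 4)
    return base64.urlsafe_b64decode(padded)
```

A 32-byte task ID encodes to 43 characters, none of which need percent-encoding when placed in a URL or a problem-document member.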
{"id": "q-en-draft-ietf-ppm-dap-3dbd11bcdb0f60c4d14798aae39fdcd30d9715e6a9c39b4be659c93deceb900a", "old_text": "MUST be in the same order. The leader's endpoint MUST be the first in the list. The order of the \"encrypted_input_shares\" in a \"Report\" (see uploading-reports) MUST be the same as the order in which aggregators appear in this list. \"max_batch_lifetime\": The maximum number of times a batch of reports may be used in collect requests.", "comments": "All protocol request messages contain a field, which allows aggregators to service multiple tasks from a single set of endpoints. The exception is : since it is an HTTP GET request, there is no body, and thus no place for the client to indicate the desired to the aggregator. We don't want to introduce a request body to , because then it would have to be a POST instead of a GET, and that would make it harder for implementations to cache HPKE config values. So instead we have clients encode the task ID in RFC 4648 Base16 (a.k.a. hexadecimal) and provide it as a query parameter. See issue\nMy goal here is to enable our aggregator implementations in the interop target to use one HPKE config per task. The alternative to this would be for us to use aggregator endpoints that somehow incorporate the task ID, as I described , which is already permitted by the spec as written.\nThe Base64 alphabet includes , and , which means you have to URL encode (a.k.a. percent encode) Base64 strings before you can use them in a URL, which is a chore and yields less readable URLs (not that 64 character hex strings is especially user-friendly). An alternative would be to use RFC 4648's , omitting the padding characters as suggested in since task IDs are of a known length.\nGotcha. I think the main thing is to be consistent. I would either: Change this PR to use base64url; or Tack on a commit that makes all encodings of the taskID (except those that appear in the body, of course) the same (i.e., hexadecimal). I think (2.) 
would be my preference.\nI agree with the consistency argument. I'm leaning towards using everywhere because it's more compact than Base16/hex, and also I think implementations are less likely to be surprised by case requirements in base64url (i.e., it's obvious that base64 is case sensitive, but you have to read RFC 4648 attentively to notice that it mandates uppercase hex strings).\nBase64Url sounds good to me. I took a quick look and it appears that most standard base64 libraries also support the URL encoding.\nI will clean this up and merge it once I have implemented it in Janus and convinced myself it's sound.\nSGTM, but why not base64 encode the taskID for consistency with URL", "new_text": "MUST be in the same order. The leader's endpoint MUST be the first in the list. The order of the \"encrypted_input_shares\" in a \"Report\" (see uploading-reports) MUST be the same as the order in which aggregators appear in this list. Aggregators MAY use the same endpoint for multiple tasks. \"max_batch_lifetime\": The maximum number of times a batch of reports may be used in collect requests."}
{"id": "q-en-draft-ietf-ppm-dap-3dbd11bcdb0f60c4d14798aae39fdcd30d9715e6a9c39b4be659c93deceb900a", "old_text": "Before the client can upload its report to the leader, it must know the public key of each of the aggregators. These are retrieved from each aggregator by sending a request to \"[aggregator]/hpke_config\", where \"[aggregator]\" is the aggregator's endpoint URL, obtained from the task parameters. The aggregator responds to well-formed requests with status 200 and an \"HpkeConfig\" value: [OPEN ISSUE: Decide whether to expand the width of the id, or support multiple cipher suites (a la OHTTP/ECH).]", "comments": "All protocol request messages contain a field, which allows aggregators to service multiple tasks from a single set of endpoints. The exception is : since it is an HTTP GET request, there is no body, and thus no place for the client to indicate the desired to the aggregator. We don't want to introduce a request body to , because then it would have to be a POST instead of a GET, and that would make it harder for implementations to cache HPKE config values. So instead we have clients encode the task ID in RFC 4648 Base16 (a.k.a. hexadecimal) and provide it as a query parameter. See issue\nMy goal here is to enable our aggregator implementations in the interop target to use one HPKE config per task. The alternative to this would be for us to use aggregator endpoints that somehow incorporate the task ID, as I described , which is already permitted by the spec as written.\nThe Base64 alphabet includes , and , which means you have to URL encode (a.k.a. percent encode) Base64 strings before you can use them in a URL, which is a chore and yields less readable URLs (not that 64 character hex strings is especially user-friendly). An alternative would be to use RFC 4648's , omitting the padding characters as suggested in since task IDs are of a known length.\nGotcha. I think the main thing is to be consistent. 
I would either: Change this PR to use base64url; or Tack on a commit that makes all encodings of the taskID (except those that appear in the body, of course) the same (i.e., hexadecimal). I think (2.) would be my preference.\nI agree with the consistency argument. I'm leaning towards using everywhere because it's more compact than Base16/hex, and also I think implementations are less likely to be surprised by case requirements in base64url (i.e., it's obvious that base64 is case sensitive, but you have to read RFC 4648 attentively to notice that it mandates uppercase hex strings).\nBase64Url sounds good to me. I took a quick look and it appears that most standard base64 libraries also support the URL encoding.\nI will clean this up and merge it once I have implemented it in Janus and convinced myself it's sound.\nSGTM, but why not base64 encode the taskID for consistency with URL", "new_text": "Before the client can upload its report to the leader, it must know the public key of each of the aggregators. These are retrieved from each aggregator by sending a request to \"[aggregator]/hpke_config?task_id=[task-id]\", where \"[aggregator]\" is the aggregator's endpoint URL, obtained from the task parameters, and \"[task-id]\" is the task ID obtained from the task parameters, encoded in Base 64 with URL and filename safe alphabet with no padding, as specified in sections 5 and 3.2 of RFC4648. If the aggregator does not recognize the task ID, then it responds with HTTP error status 404 Not Found and an error of type \"unrecognizedTask\". The aggregator responds to well-formed requests with status 200 and an \"HpkeConfig\" value: [TODO: Allow aggregators to return HTTP 403 Forbidden in deployments that use authentication to avoid leaking information about which tasks exist.] [OPEN ISSUE: Decide whether to expand the width of the id, or support multiple cipher suites (a la OHTTP/ECH).]"}
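Putting the pieces of the record above together, the HPKE config request URL can be formed as below. This is a sketch under the record's conventions (unpadded base64url task ID as a query parameter); the function name and the example endpoint are placeholders, not part of any real deployment.

```python
import base64

def hpke_config_url(aggregator_endpoint: str, task_id: bytes) -> str:
    """Build "[aggregator]/hpke_config?task_id=[task-id]" with the task
    ID in unpadded base64url, per the upload-flow text above."""
    encoded = base64.urlsafe_b64encode(task_id).rstrip(b"=").decode("ascii")
    return f"{aggregator_endpoint.rstrip('/')}/hpke_config?task_id={encoded}"
```

Because the base64url alphabet (A-Z, a-z, 0-9, "-", "_") contains no characters that are reserved in a URI query component, the encoded ID needs no further percent-encoding.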
{"id": "q-en-draft-ietf-ppm-dap-ba6fe7cb322f785ae38eeb10f49836860c8d9f6b92997e557a8553f75932bf07", "old_text": "Initialize VDAF preparation and initial outputs as described in input-share-prep. [[OPEN ISSUE: consider moving the helper nonce check into #early- input-share-validation]] Once the helper has processed each valid report share in", "comments": "Justify text after bullets consistenly Make sure all lines break at 80 characters\nLGTM, modulo conflicts. It'd be nice if we had a linter that blocked PRs until their formatting was right, since changes like this taint git history.", "new_text": "Initialize VDAF preparation and initial outputs as described in input-share-prep. [[OPEN ISSUE: consider moving the helper nonce check into early- input-share-validation]] Once the helper has processed each valid report share in"}
{"id": "q-en-draft-ietf-ppm-dap-78b8d166ec82e028aca31c9ef93e73acf6e585f0e77830eda2d549677924e99c", "old_text": "The last input comprises the randomness consumed by the sharding algorithm. The sharding randomness is a random byte string of length specified by the VDAF. The Client MUST generate this using a cryptographicallly secure random number generator. The Client then wraps each input share in the following structure:", "comments": "As of draft-irtf-cfrg-vdaf-05 (not yet published), the random coins used for sharding are an explicit parameter of . Add this parameter and explasin how it is generated.\nNAME I believe this is the only API change we need to account for, correct?", "new_text": "The last input comprises the randomness consumed by the sharding algorithm. The sharding randomness is a random byte string of length specified by the VDAF. The Client MUST generate this using a cryptographically secure random number generator. The Client then wraps each input share in the following structure:"}
{"id": "q-en-draft-ietf-ppm-dap-d63ddb0d0f4ffde1f874c218be357219786e0bfcfc3cd307efd08c805c8691fb", "old_text": "Each report share has a corresponding task ID, report metadata (report ID and, timestamp), the public share sent to each Aggregator, and the recipient's encrypted input share. Let \"task_id\", \"metadata\", \"public_share\", and \"encrypted_input_share\" denote these values, respectively. Given these values, an aggregator decrypts the input share as follows. First, it constructs an \"InputShareAad\" message from \"task_id\", \"metadata\", and \"public_share\". Let this be denoted by \"input_share_aad\". Then, the aggregator looks up the HPKE config and corresponding secret key indicated by \"encrypted_input_share.config_id\" and attempts decryption of the payload with the following procedure: where \"sk\" is the HPKE secret key, and \"server_role\" is the role of the aggregator (\"0x02\" for the leader and \"0x03\" for the helper).", "comments": "All references to \"metadata\" are now to \"report_metadata\". References to \"CollectReq\" are now \"CollectionReq\". References to \"CollectResp\" are now \"Collection\".\nThe current version of the draft renames field \"metadata\" to \"report_metadata\" in the Report structure, but does not do the same renaming in the ReportShare and InputShareAad structures. I think this naming should be consistent, though I don't have a preference between the alternatives.\nThanks for pointing this out! Can you send prepare a PR?\nFixed in\nThanks a bunch NAME", "new_text": "Each report share has a corresponding task ID, report metadata (report ID and, timestamp), the public share sent to each Aggregator, and the recipient's encrypted input share. Let \"task_id\", \"report_metadata\", \"public_share\", and \"encrypted_input_share\" denote these values, respectively. Given these values, an aggregator decrypts the input share as follows. 
First, it constructs an \"InputShareAad\" message from \"task_id\", \"report_metadata\", and \"public_share\". Let this be denoted by \"input_share_aad\". Then, the aggregator looks up the HPKE config and corresponding secret key indicated by \"encrypted_input_share.config_id\" and attempts decryption of the payload with the following procedure: where \"sk\" is the HPKE secret key, and \"server_role\" is the role of the aggregator (\"0x02\" for the leader and \"0x03\" for the helper)."}
{"id": "q-en-draft-ietf-ppm-dap-d63ddb0d0f4ffde1f874c218be357219786e0bfcfc3cd307efd08c805c8691fb", "old_text": "Implementation note: The Leader considers a batch to be collected once it has completed a collection job for a CollectReq message from the Collector; the Helper considers a batch to be collected once it has responded to an AggregateShareReq message from the Leader. A batches is determined by query (query) conveyed in these messages. Queries must satisfy the criteria covered in batch-validation.", "comments": "All references to \"metadata\" are now to \"report_metadata\". References to \"CollectReq\" are now \"CollectionReq\". References to \"CollectResp\" are now \"Collection\".\nThe current version of the draft renames field \"metadata\" to \"report_metadata\" in the Report structure, but does not do the same renaming in the ReportShare and InputShareAad structures. I think this naming should be consistent, though I don't have a preference between the alternatives.\nThanks for pointing this out! Can you send prepare a PR?\nFixed in\nThanks a bunch NAME", "new_text": "Implementation note: The Leader considers a batch to be collected once it has completed a collection job for a CollectionReq message from the Collector; the Helper considers a batch to be collected once it has responded to an AggregateShareReq message from the Leader. A batches is determined by query (query) conveyed in these messages. Queries must satisfy the criteria covered in batch-validation."}
{"id": "q-en-draft-ietf-ppm-dap-d63ddb0d0f4ffde1f874c218be357219786e0bfcfc3cd307efd08c805c8691fb", "old_text": "the hash values with a bitwise-XOR operation. Then, for each Aggregator endpoint \"{aggregator}\" in the parameters associated with \"CollectReq.task_id\" (see collect-flow) except its own, the Leader sends a POST request to \"{aggregator}/tasks/{task- id}/aggregate_shares\" with the following message:", "comments": "All references to \"metadata\" are now to \"report_metadata\". References to \"CollectReq\" are now \"CollectionReq\". References to \"CollectResp\" are now \"Collection\".\nThe current version of the draft renames field \"metadata\" to \"report_metadata\" in the Report structure, but does not do the same renaming in the ReportShare and InputShareAad structures. I think this naming should be consistent, though I don't have a preference between the alternatives.\nThanks for pointing this out! Can you send prepare a PR?\nFixed in\nThanks a bunch NAME", "new_text": "the hash values with a bitwise-XOR operation. Then, for each Aggregator endpoint \"{aggregator}\" in the parameters associated with \"CollectionReq.task_id\" (see collect-flow) except its own, the Leader sends a POST request to \"{aggregator}/tasks/{task- id}/aggregate_shares\" with the following message:"}
{"id": "q-en-draft-ietf-ppm-dap-d63ddb0d0f4ffde1f874c218be357219786e0bfcfc3cd307efd08c805c8691fb", "old_text": "4.5.6. Before an Aggregator responds to a CollectReq or AggregateShareReq, it must first check that the request does not violate the parameters associated with the DAP task. It does so as described here. First the Aggregator checks that the batch respects any \"boundaries\" determined by the query type. These are described in the subsections", "comments": "All references to \"metadata\" are now to \"report_metadata\". References to \"CollectReq\" are now \"CollectionReq\". References to \"CollectResp\" are now \"Collection\".\nThe current version of the draft renames field \"metadata\" to \"report_metadata\" in the Report structure, but does not do the same renaming in the ReportShare and InputShareAad structures. I think this naming should be consistent, though I don't have a preference between the alternatives.\nThanks for pointing this out! Can you send prepare a PR?\nFixed in\nThanks a bunch NAME", "new_text": "4.5.6. Before an Aggregator responds to a CollectionReq or AggregateShareReq, it must first check that the request does not violate the parameters associated with the DAP task. It does so as described here. First the Aggregator checks that the batch respects any \"boundaries\" determined by the query type. These are described in the subsections"}
{"id": "q-en-draft-ietf-ppm-dap-d63ddb0d0f4ffde1f874c218be357219786e0bfcfc3cd307efd08c805c8691fb", "old_text": "batch IDs. Thus the Aggregator needs to check that the query is associated with a known batch ID: For a CollectReq containing a query of type \"by_batch_id\", the Leader checks that the provided batch ID corresponds to a batch ID it returned in a previous CollectResp for the task. For an AggregateShareReq, the Helper checks that the batch ID provided by the Leader corresponds to a batch ID used in a", "comments": "All references to \"metadata\" are now to \"report_metadata\". References to \"CollectReq\" are now \"CollectionReq\". References to \"CollectResp\" are now \"Collection\".\nThe current version of the draft renames field \"metadata\" to \"report_metadata\" in the Report structure, but does not do the same renaming in the ReportShare and InputShareAad structures. I think this naming should be consistent, though I don't have a preference between the alternatives.\nThanks for pointing this out! Can you send prepare a PR?\nFixed in\nThanks a bunch NAME", "new_text": "batch IDs. Thus the Aggregator needs to check that the query is associated with a known batch ID: For a CollectionReq containing a query of type \"by_batch_id\", the Leader checks that the provided batch ID corresponds to a batch ID it returned in a previous Collection for the task. For an AggregateShareReq, the Helper checks that the batch ID provided by the Leader corresponds to a batch ID used in a"}
{"id": "q-en-draft-ietf-ppm-dap-d63ddb0d0f4ffde1f874c218be357219786e0bfcfc3cd307efd08c805c8691fb", "old_text": "AggregateShare collect-flow: \"application/dap-aggregate-share\" CollectReq collect-flow: \"application/dap-collect-req\" Collection collect-flow: \"application/dap-collection\"", "comments": "All references to \"metadata\" are now to \"report_metadata\". References to \"CollectReq\" are now \"CollectionReq\". References to \"CollectResp\" are now \"Collection\".\nThe current version of the draft renames field \"metadata\" to \"report_metadata\" in the Report structure, but does not do the same renaming in the ReportShare and InputShareAad structures. I think this naming should be consistent, though I don't have a preference between the alternatives.\nThanks for pointing this out! Can you send prepare a PR?\nFixed in\nThanks a bunch NAME", "new_text": "AggregateShare collect-flow: \"application/dap-aggregate-share\" CollectionReq collect-flow: \"application/dap-collect-req\" Collection collect-flow: \"application/dap-collection\""}
{"id": "q-en-draft-ietf-ppm-dap-9dac498d4501415d9e8596627a45c3d454556df410d276e542240de9dd1787b1", "old_text": "malformed requests are handled as described in pa-error-common- aborts.) It first looks for the PA parameters \"PAParam\" for which \"PAAggregateReq.task_id\" is equal to the task id derived from \"PAParam\". Next, it looks up the HPKE config and corresponding secret key associated with \"PAAggregateReq.helper_hpke_config_id\". If not found, then it aborts and alerts the leader with \"unrecognized key config\". [NOTE: In this situation, the leader has no choice but to abort. This falls into the class of error scenarios that are addressable by running with multiple helpers.] [TODO: Don't require all sub-requests to pertain to the same HPKE config.] The response consists of the helper's updated state and a sequence of _sub-responses_, where the i-th sub-response corresponds to the sub-", "comments": "Update the aggregate request text to reflect recent protocol changes. Relax the requirement that all sub-requests use the same key config. Do URL, which was previously merged into the wrong branch.\nNAME NAME please have a look when you can.\nPer , the first step in the upload protocol is a POST to the leader server which returns information about helpers. In some yet to be specified cases, the response could include a challenge to be incorporated into reports. However I suspect that in most cases, the response will be static and so an idempotent, cacheable GET would be a better fit, especially since it would save clients a roundtrip to the helper on the hot path of submitting reports, and would make it possible for helpers to implement uploadstart by putting some static content in a CDN. HTTP GET requests don't have bodies, so we would have to figure out how to encode all the data in the current into the URI, which could be done as either path fragments in the URI (i.e., ) or as query parameters (i.e., ). 
We would of course need to make sure that don't preclude proof systems like the one alleged to exist in section 5.2 of \"BBCp19\".\nAlso, because the current protocol uses a POST for uploadstart, the implication is that for every report they submit, clients need to hit and then . If we make idempotent and cacheable, then that request can be skipped in the majority of cases. I suppose the significance of this depends on how we expect clients to use PDA: are they going to be continuously streaming hundreds or thousands of reports into the leader as events of interest occur (in which case I think eliminating requests is interesting) or are they expected to accumulate several reports locally before submitting them all in one or a few requests?\nI agree this would be a good change, if we can swing it. It's notable that Prio v2 and HH don't require a random challenge from the leader. But as discussed, this is useful for other proof systems one might want to implement for Prio. For example, see the data type implemented here: URL See below for an explanation. There might be other ways to implement the challenge. One possibility is to derive the challenge from the TLS key schedule (or from whatever state the client and leader share for the secure channel). NAME and NAME have both pointed out that adding this dependency on the underlying transport is problematic. Another possibility might be to use some external source of randomness that the leader trusts. comes to mind, though it is somewhat problematic because clients would effectively use the same randomness for some period of time. The security model of BBCp19 would need to be extended to account for this. (I believe this could be done.) Some other solution I'm missing? _ This data type is used to compute the mean and variance of each integer in a sequence of integers. To do so, the client encodes each integer of its input vector as a pair , where is the representation of as a bit vector and is equal to . 
To prove validity of its input, the proof needs to do two things: verify that (a) is indeed a bit vector, and (b) the integer encoded by is the square root of . In order to check these things efficiently, the proof system uses a random challenge generated by the leader. At a very high level, the client wants to prove that the following boolean conjunction holds: is 1 or 0 AND is 1 or 0 AND ... AND encodes the square root of . In Prio, we need to represent this conjunction as an arithmetic circuit, which the random challenge helps us do. In a bit more detail now: Let's rewrite this expression as w[0] AND w[1] AND ... AND w[s]. As an arithmetic circuit, this expression is actually , where if and only if w[i] is true. If the input is valid, then the expression should evaluate to . But because we're talking about field elements, a malicious client may try to play games like setting and so that the sum is still , but the statement is false. To detect this, we actually compute the following expression: , where is a randomly chosen field element. If the statement is true, then the expression evaluates to . If the statement is false, then with high probability, the expression will evaluate to something other than . (This idea comes from Theorem 5.3)\nThank you for the detailed explanalation NAME Having thought about it more, I suspect an HTTP GET wouldn't be the right way to support proof systems that need a challenge. Assuming that a unique challenge is needed for every report[^1], then intuitively the leader would have to maintain a mapping of reports to issued challenges so that the leader and helper(s) can later use the challenge value when evaluating the client's proof. But now that the leader has to maintain state per-report, I think the upload protocol is missing a notion of a report identifier. 
The contains a , but IIUC the will be the same across many reports[^2] (e.g., if a browser is sending daily reports on how often users click a button, they use the same every time they submit). So needs to include a report ID, which must be echoed back to the leader in so that the leader can look up the challenge for the report. [^1] Is this true? Or would it suffice for each to use the same challenge across multiple reports? [^2] I notice that the design doc lacks a strong definition of a report and how it relates to tasks, inputs, measurements: because GET should be \"safe\" in the sense of not changing any server state, and it seems like the server would have to maintain a mapping of reports to issued challenges so that the leader and helper can later use the challenge value when evaluating the client's proof.\nContinuing on the assumption that causes a state change on the leader, POST is then the appropriate HTTP method to use. , though it's a little unusual. There may be some big drawback to cacheable POST responses, but I think that might let us support proof systems that require live challenges while also eliminating the majority of requests when the proof system is such that the will change very rarely.\nThe requirement for the leader to keep state across upload start and upload finish may be avoidable. Suppose the leader has a long-lived PRF key K. It would could choose a random nonce N, compute R = PRF(K, N) as the challenge, and hand (N, R) to the client in its upload start response. The client would then use R to generate the proof and send N to the leader in its upload finish request. The leader could then re-derive R. NAME and I kicked around the idea of having the client generate a report ID. I think this may end up being useful, but we weren't sure if it was better to have the leader choose it or the client. To conform to the security model of BBCp19, each report would need its own challenge. 
It may be possible to relax this, but this would be risky.\nI like this a lot! I'm a big fan of deterministic derivation schemes like this -- besides reintroducing the possibility of an idempotent GET, eliminating storage requirements for challenges makes it much easier to scale up leaders. If the report IDs are big enough (UUIDs?) we could even use them as . I would say the leader, because if you let the client do it, then the leader would have to check for things like collisions with existing report IDs or other malicious ID choices. If the leader chooses it, then all it has to do is generate a few bytes of randomness. I agree.\nNAME we'd appreciate your feedback on this issue because it involves security considerations for ZKP systems on secret-shared data [1]. I'll quickly sum up the problem so that you don't need to worry about the conversation above. The problem at hand regards systems like [1, Theorem 5.3] that require interaction between the prover and verifier. Specifically, we're considering a special case of Theorem 5.3 in which the verifier sends the prover a random \"challenge\" that is used to generate and verify a (non-interactive) FLPCP (call it ): We refer to the prover's first and second messages as \"upload start\" and \"upload finish\" respectively. Each message corresponds to an HTTP request made by a web client to a web server. Ideally, the verifier would be able to handle these requests statelessly, i.e., without having to store . Consider the following change. Suppose the verifier has a long-term PRF key . In response to an upload start request, it chooses a unique \"upload id\" --- maybe this is a 16-byte integer chosen at random --- and computes and sends to the prover. In the upload finish request, the prover sends so that the verifier can re-derive instead of needing to remember it: The (potential) problem with this variant is that a malicious client is able to replay a challenge as many times as it wants. 
This behavior isn't accounted for in Definition 3.9, so when you try to \"compile\" this system into a ZKP on secret-shared data as described in Section 6.2 and consider its security with respect to Definition 6.4 (Setting I), you can no longer appeal to Theorem 6.6 to reason about its security. Our question is therefore: What impact do you expect this change to have on the soundness of the system? Can you think of an attack that exploits the fact that challenges can be replayed? My expectation is that the problem is merely theoretical, i.e., we ought to be able to close the gap with an appropriate refinement of Definition of 3.9 and reproving Theorem 5.3. [1] URL\nThanks for looping me in. In the discussion below, I am assuming that the client sends its data vector (called \"x\" in the paper [1]) at the same time as it sends its proof to the verifier. Even if the client sends its data vector in the first message, it seems that the client can reuse an (R,N) pair from upload request 1 to later submit upload request 2, in which case the client sees (R, N) before it chooses its data vector and proof. So I think we can assume that the client's data vector and proof can depend on the verifier's random value R. (I'll call this value \"r\" to match the paper.) For soundness: the client/prover must commit to its data vector before it learns the verifier's random value r. If the prover can choose its data value and proof in a way that depends on the verifier's randomness r, the soundness guarantee no longer holds. There is also an efficient attack. So I suspect that the stateless-verifier optimization is unsound. The attack works as follows: given r, the verifier finds a non-satisfying vector x such that sumi ri C(Ai(x)) = 0, again using the notation from Theorem 5.3 in the paper [1]. For the circuits C used in most applications, it will be easy to find such a vector x. Then, the prover constructs a proof that x that sumi ri C(Ai(x)) = 0. 
The proof is valid, so the verifier will accept always. To remove interaction, the Fiat-Shamir-like technique in Section 6.2.3 is probably the best way to go. We only sketched the idea in the paper (without formal analysis or proof) but if it's going to be important for your application, I'm happy to sanity-check the protocol you come up with. Another idea\u2014the details of which I have not checked carefully\u2014would be to modify your protocol as follows: The prover sends its data vector x to verifiers, split into k shares x1, ..., xk\u2014one for each of the k verifiers. Each verifier holds a PRF key ki computes ri = PRF(ki, xi) and they return r = r1 + ... + rk to the prover. The verifiers store no state. The prover computes the proof pi and sends (x1, pi1), ..., (xk, pik) to the verifiers\u2014again, one for each of the k verifiers. Each verifier computes ri = PRF(ki, xi), they jointly compute r = r1 + ... + r_k, and check the proof using randomness r. The Fiat-Shamir-like approach seems slightly cleaner to me, since it only requires a single prover-to-verifier message. Either way, please let me know if I misunderstood your question or if anything else here is unclear. [1] URL\nThanks for the nice explanation, Henry. It's perfectly clear and answers my question. It also points to something that I, for one, missed the importance of: the need for the prover to \"commit\" to the input shares before generating the r-dependent proof. I appreciate you pointing this out. The only requirement for making upload start not idempotent is so that the randomness r can be sent to the client. It would be nice to do away with this requirement. I think the way forward is using the Fiat-Shamir technique. NAME I'll get cracking on this and let you know when I have something for you to look at.\nNAME I believe the change is something like this. Let be a hash function with range . Suppose there are verifiers. The prover proceeds as follows: Splits its inputs into shares . 
For each , do: Choose blinding factor from at random. Let . Set . Derive field element from (e.g., seed a PRNG with and map the output to a field element by rejection sampling on chunks of the output). Generate the proof using and . Split into shares . For each send to the -th verifier.\nYes, this looks right to me!\nAs of URL, the upload start request no longer exists. When we're ready to specify Prio, we'll want to include the Fiat-Shamir approach for protocols that use joint randomness. I've noted this and am closing the issue.", "new_text": "malformed requests are handled as described in pa-error-common- aborts.) It first looks for the PA parameters \"PAParam\" for which \"PAAggregateReq.task_id\" is equal to the task id derived from \"PAParam\". The response consists of the helper's updated state and a sequence of _sub-responses_, where the i-th sub-response corresponds to the sub-"}
{"id": "q-en-draft-ietf-ppm-dap-9dac498d4501415d9e8596627a45c3d454556df410d276e542240de9dd1787b1", "old_text": "to the PA protocol: The helper handles each sub-request \"PAAggregateSubReq\" as follows. It first checks that checks that \"PAAggregateSubReq.proto == PAParam.proto\". If not, it aborts and alerts the leader with \"incorrect protocol for sub-request\". Otherwise, it computes the HPKE context as where \"sk\" is its HPKE key and computes the body of the \"PAAggregateSubResp\". It then updates its state according to the PA protocol. After processing all of the sub-requests, the helper encrypts its updated state and constructs its response to the aggregate request. 3.4.2.1.1.", "comments": "Update the aggregate request text to reflect recent protocol changes. Relax the requirement that all sub-requests use the same key config. Do URL, which was previously merged into the wrong branch.\nNAME NAME please have a look when you can.\nPer , the first step in the upload protocol is a POST to the leader server which returns information about helpers. In some yet to be specified cases, the response could include a challenge to be incorporated into reports. However I suspect that in most cases, the response will be static and so an idempotent, cacheable GET would be a better fit, especially since it would save clients a roundtrip to the helper on the hot path of submitting reports, and would make it possible for helpers to implement uploadstart by putting some static content in a CDN. HTTP GET requests don't have bodies, so we would have to figure out how to encode all the data in the current into the URI, which could be done as either path fragments in the URI (i.e., ) or as query parameters (i.e., ). 
We would of course need to make sure that don't preclude proof systems like the one alleged to exist in section 5.2 of \"BBCp19\".\nAlso, because the current protocol uses a POST for uploadstart, the implication is that for every report they submit, clients need to hit and then . If we make idempotent and cacheable, then that request can be skipped in the majority of cases. I suppose the significance of this depends on how we expect clients to use PDA: are they going to be continuously streaming hundreds or thousands of reports into the leader as events of interest occur (in which case I think eliminating requests is interesting) or are they expected to accumulate several reports locally before submitting them all in one or a few requests?\nI agree this would be a good change, if we can swing it. It's notable that Prio v2 and HH don't require a random challenge from the leader. But as discussed, this is useful for other proof systems one might want to implement for Prio. For example, see the data type implemented here: URL See below for an explanation. There might be other ways to implement the challenge. One possibility is to derive the challenge from the TLS key schedule (or from whatever state the client and leader share for the secure channel). NAME and NAME have both pointed out that adding this dependency on the underlying transport is problematic. Another possibility might be to use some external source of randomness that the leader trusts. comes to mind, though it is somewhat problematic because clients would effectively use the same randomness for some period of time. The security model of BBCp19 would need to be extended to account for this. (I believe this could be done.) Some other solution I'm missing? _ This data type is used to compute the mean and variance of each integer in a sequence of integers. To do so, the client encodes each integer of its input vector as a pair , where is the representation of as a bit vector and is equal to . 
To prove validity of its input, the proof needs to do two things: verify that (a) is indeed a bit vector, and (b) the integer encoded by is the square root of . In order to check these things efficiently, the proof system uses a random challenge generated by the leader. At a very high level, the client wants to prove that the following boolean conjunction holds: is 1 or 0 AND is 1 or 0 AND ... AND encodes the square root of . In Prio, we need to represent this conjunction as an arithmetic circuit, which the random challenge helps us do. In a bit more detail now: Let's rewrite this expression as w[0] AND w[1] AND ... AND w[s]. As an arithmetic circuit, this expression is actually , where if and only if w[i] is true. If the input is valid, then the expression should evaluate to . But because we're talking about field elements, a malicious client may try to play games like setting and so that the sum is still , but the statement is false. To detect this, we actually compute the following expression: , where is a randomly chosen field element. If the statement is true, then the expression evaluates to . If the statement is false, then with high probability, the expression will evaluate to something other than . (This idea comes from Theorem 5.3)\nThank you for the detailed explanalation NAME Having thought about it more, I suspect an HTTP GET wouldn't be the right way to support proof systems that need a challenge. Assuming that a unique challenge is needed for every report[^1], then intuitively the leader would have to maintain a mapping of reports to issued challenges so that the leader and helper(s) can later use the challenge value when evaluating the client's proof. But now that the leader has to maintain state per-report, I think the upload protocol is missing a notion of a report identifier. 
The contains a , but IIUC the will be the same across many reports[^2] (e.g., if a browser is sending daily reports on how often users click a button, they use the same every time they submit). So needs to include a report ID, which must be echoed back to the leader in so that the leader can look up the challenge for the report. [^1] Is this true? Or would it suffice for each to use the same challenge across multiple reports? [^2] I notice that the design doc lacks a strong definition of a report and how it relates to tasks, inputs, measurements: because GET should be \"safe\" in the sense of not changing any server state, and it seems like the server would have to maintain a mapping of reports to issued challenges so that the leader and helper can later use the challenge value when evaluating the client's proof.\nContinuing on the assumption that causes a state change on the leader, POST is then the appropriate HTTP method to use. , though it's a little unusual. There may be some big drawback to cacheable POST responses, but I think that might let us support proof systems that require live challenges while also eliminating the majority of requests when the proof system is such that the will change very rarely.\nThe requirement for the leader to keep state across upload start and upload finish may be avoidable. Suppose the leader has a long-lived PRF key K. It would could choose a random nonce N, compute R = PRF(K, N) as the challenge, and hand (N, R) to the client in its upload start response. The client would then use R to generate the proof and send N to the leader in its upload finish request. The leader could then re-derive R. NAME and I kicked around the idea of having the client generate a report ID. I think this may end up being useful, but we weren't sure if it was better to have the leader choose it or the client. To conform to the security model of BBCp19, each report would need its own challenge. 
It may be possible to relax this, but this would be risky.\nI like this a lot! I'm a big fan of deterministic derivation schemes like this -- besides reintroducing the possibility of an idempotent GET, eliminating storage requirements for challenges makes it much easier to scale up leaders. If the report IDs are big enough (UUIDs?) we could even use them as . I would say the leader, because if you let the client do it, then the leader would have to check for things like collisions with existing report IDs or other malicious ID choices. If the leader chooses it, then all it has to do is generate a few bytes of randomness. I agree.\nNAME we'd appreciate your feedback on this issue because it involves security considerations for ZKP systems on secret-shared data [1]. I'll quickly sum up the problem so that you don't need to worry about the conversation above. The problem at hand regards systems like [1, Theorem 5.3] that require interaction between the prover and verifier. Specifically, we're considering a special case of Theorem 5.3 in which the verifier sends the prover a random \"challenge\" that is used to generate and verify a (non-interactive) FLPCP (call it ): We refer to the prover's first and second messages as \"upload start\" and \"upload finish\" respectively. Each message corresponds to an HTTP request made by a web client to a web server. Ideally, the verifier would be able to handle these requests statelessly, i.e., without having to store . Consider the following change. Suppose the verifier has a long-term PRF key . In response to an upload start request, it chooses a unique \"upload id\" --- maybe this is a 16-byte integer chosen at random --- and computes and sends to the prover. In the upload finish request, the prover sends so that the verifier can re-derive instead of needing to remember it: The (potential) problem with this variant is that a malicious client is able to replay a challenge as many times as it wants. 
This behavior isn't accounted for in Definition 3.9, so when you try to \"compile\" this system into a ZKP on secret-shared data as described in Section 6.2 and consider its security with respect to Definition 6.4 (Setting I), you can no longer appeal to Theorem 6.6 to reason about its security. Our question is therefore: What impact do you expect this change to have on the soundness of the system? Can you think of an attack that exploits the fact that challenges can be replayed? My expectation is that the problem is merely theoretical, i.e., we ought to be able to close the gap with an appropriate refinement of Definition of 3.9 and reproving Theorem 5.3. [1] URL\nThanks for looping me in. In the discussion below, I am assuming that the client sends its data vector (called \"x\" in the paper [1]) at the same time as it sends its proof to the verifier. Even if the client sends its data vector in the first message, it seems that the client can reuse an (R,N) pair from upload request 1 to later submit upload request 2, in which case the client sees (R, N) before it chooses its data vector and proof. So I think we can assume that the client's data vector and proof can depend on the verifier's random value R. (I'll call this value \"r\" to match the paper.) For soundness: the client/prover must commit to its data vector before it learns the verifier's random value r. If the prover can choose its data value and proof in a way that depends on the verifier's randomness r, the soundness guarantee no longer holds. There is also an efficient attack. So I suspect that the stateless-verifier optimization is unsound. The attack works as follows: given r, the verifier finds a non-satisfying vector x such that sumi ri C(Ai(x)) = 0, again using the notation from Theorem 5.3 in the paper [1]. For the circuits C used in most applications, it will be easy to find such a vector x. Then, the prover constructs a proof that x that sumi ri C(Ai(x)) = 0. 
The proof is valid, so the verifier will accept always. To remove interaction, the Fiat-Shamir-like technique in Section 6.2.3 is probably the best way to go. We only sketched the idea in the paper (without formal analysis or proof) but if it's going to be important for your application, I'm happy to sanity-check the protocol you come up with. Another idea\u2014the details of which I have not checked carefully\u2014would be to modify your protocol as follows: The prover sends its data vector x to verifiers, split into k shares x1, ..., xk\u2014one for each of the k verifiers. Each verifier holds a PRF key ki computes ri = PRF(ki, xi) and they return r = r1 + ... + rk to the prover. The verifiers store no state. The prover computes the proof pi and sends (x1, pi1), ..., (xk, pik) to the verifiers\u2014again, one for each of the k verifiers. Each verifier computes ri = PRF(ki, xi), they jointly compute r = r1 + ... + r_k, and check the proof using randomness r. The Fiat-Shamir-like approach seems slightly cleaner to me, since it only requires a single prover-to-verifier message. Either way, please let me know if I misunderstood your question or if anything else here is unclear. [1] URL\nThanks for the nice explanation, Henry. It's perfectly clear and answers my question. It also points to something that I, for one, missed the importance of: the need for the prover to \"commit\" to the input shares before generating the r-dependent proof. I appreciate you pointing this out. The only requirement for making upload start not idempotent is so that the randomness r can be sent to the client. It would be nice to do away with this requirement. I think the way forward is using the Fiat-Shamir technique. NAME I'll get cracking on this and let you know when I have something for you to look at.\nNAME I believe the change is something like this. Let be a hash function with range . Suppose there are verifiers. The prover proceeds as follows: Splits its inputs into shares . 
For each , do: Choose blinding factor from at random. Let . Set . Derive field element from (e.g., seed a PRNG with and map the output to a field element by rejection sampling on chunks of the output). Generate the proof using and . Split into shares . For each send to the -th verifier.\nYes, this looks right to me!\nAs of URL, the upload start request no longer exists. When we're ready to specify Prio, we'll want to include the Fiat-Shamir approach for protocols that use joint randomness. I've noted this and am closing the issue.", "new_text": "to the PA protocol: The helper handles each sub-request \"PAAggregateSubReq\" as follows. It first looks up the HPKE config and corresponding secret key associated with \"helper_share.config_id\". If not found, then the sub-response consists of an \"unrecognized config\" alert. [TODO: We'll want to be more precise about what this means. See issue#57.] Next, it attempts to decrypt the payload with the following procedure: where \"sk\" is the HPKE secret key. If decryption fails, then the sub-response consists of a \"decryption error\" alert. [See issue#57.] Otherwise, the helper handles the request for its plaintext input share \"input_share\" and updates its state as specified by the PA protocol. After processing all of the sub-requests, the helper encrypts its updated state and constructs its response to the aggregate request. 3.4.2.1.1."}
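The commit-then-hash pattern sketched in the thread (blind each input share, hash the blinded shares, derive the joint randomness r from the combined digest) can be illustrated in Python. This is a minimal sketch, not the eventual specification: the field modulus, the use of SHA-256, and the function names `hash_to_field` and `derive_joint_randomness` are all illustrative assumptions.

```python
import hashlib
import os

# Illustrative field modulus only; a real deployment would use the
# field specified by the Prio circuit in question.
MODULUS = 2**61 - 1

def hash_to_field(seed: bytes) -> int:
    """Map a seed to a field element by rejection sampling over
    successive SHA-256 outputs (counter mode), as the sketch suggests."""
    counter = 0
    while True:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        candidate = int.from_bytes(digest[:8], "big")
        if candidate < MODULUS:
            return candidate
        counter += 1

def derive_joint_randomness(shares: list) -> tuple:
    """Fiat-Shamir-style derivation: commit to each input share with a
    random blinding factor, then hash the commitments to derive r."""
    blinds = [os.urandom(32) for _ in shares]
    commitments = [hashlib.sha256(b + x).digest() for b, x in zip(blinds, shares)]
    r = hash_to_field(b"".join(commitments))
    return r, blinds
```

Because r is derived from commitments to the shares, the prover cannot choose its input after seeing r, which is exactly the soundness property the interactive protocol loses when challenges can be replayed.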
{"id": "q-en-draft-ietf-ppm-dap-3a313dfd6e549ce8c8559ab7ee7c261e1ea21f6df378ce27f3773f9bd1e68329", "old_text": "structure, which specify the protocol used to verify and aggregate the clients' measurements: \"leader_url\": The leader's endpoint URL. \"helper_url\": The helper's endpoint URL.", "comments": "I added a clarification that the nonce is random. IMO the question of who randomly generates the nonce goes with the question of how participants agree upon PDAParams, which is to say it's outside the protocol.\nOn the 2021/7/7 design call we decided that the ID should be derived deterministically from the serialized PAParam by, for example, hashing it. This solves a few problems simultaneously. This settles the question of whether there should be 1:1 correspondence between a task ID and the parameters used for that task. NAME brought up an interesting question, which is how to make the task ID unique when two different data collection tasks use the same parameters. Thoughts?\nThe consensus here is to do SHA-256 over the PDAParams structure and use that as the 32 byte PDATaskID. However, hashing a protocol message requires us to decide on a canonical representation of a protocol message, which I think forces us to decide how to encode messages on the wire. I filed for that question. I'll proceed with a PR for this issue on the assumption that we'll pick JSON.\nAs I mentioned there, the assumption would be that we're adopting TLS' binary encoding. If so, in TLS notation, you would write as the hash of , which itself is a serialized data structure.\nAll we require is that each PAParam have some unique value, right? A random nonce would probably suffice. r.e. the encoding, yes, let's stick with TLS's wire format as NAME suggests.\nYes, my thinking was that we would just stick 16 bytes of random data in (a UUID's worth). I agree that we should stick with TLS binary encoding.\nApproved, with the following suggestion: Let's leave a [TODO: ...] 
comment under the definition of for clarifying how it should be chosen.", "new_text": "structure, which specify the protocol used to verify and aggregate the clients' measurements: \"nonce\": A unique sequence of bytes used to ensure that two otherwise identical \"PDAParam\" instances will have distinct \"PDATaskID\"s. It is RECOMMENDED that this be set to a random 16-byte string derived from a cryptographically secure pseudorandom number generator. \"leader_url\": The leader's endpoint URL. \"helper_url\": The helper's endpoint URL."}
{"id": "q-en-draft-ietf-ppm-dap-3a313dfd6e549ce8c8559ab7ee7c261e1ea21f6df378ce27f3773f9bd1e68329", "old_text": "\"proto\": The PDA protocol, e.g., Prio or Hits. The rest of the structure contains the protocol specific parameters. Each task has a unique _task id_ derived from the PDA parameters: The task id is derived using the following procedure. [TODO: Specify derivation of the task ID.] 3.1.2.", "comments": "I added a clarification that the nonce is random. IMO the question of who randomly generates the nonce goes with the question of how participants agree upon PDAParams, which is to say it's outside the protocol.\nOn the 2021/7/7 design call we decided that the ID should be derived deterministically from the serialized PAParam by, for example, hashing it. This solves a few problems simultaneously. This settles the question of whether there should be 1:1 correspondence between a task ID and the parameters used for that task. NAME brought up an interesting question, which is how to make the task ID unique when two different data collection tasks use the same parameters. Thoughts?\nThe consensus here is to do SHA-256 over the PDAParams structure and use that as the 32 byte PDATaskID. However, hashing a protocol message requires us to decide on a canonical representation of a protocol message, which I think forces us to decide how to encode messages on the wire. I filed for that question. I'll proceed with a PR for this issue on the assumption that we'll pick JSON.\nAs I mentioned there, the assumption would be that we're adopting TLS' binary encoding. If so, in TLS notation, you would write as the hash of , which itself is a serialized data structure.\nAll we require is that each PAParam have some unique value, right? A random nonce would probably suffice. r.e. the encoding, yes, let's stick with TLS's wire format as NAME suggests.\nYes, my thinking was that we would just stick 16 bytes of random data in (a UUID's worth). 
I agree that we should stick with TLS binary encoding.\nApproved, with the following suggestion: Let's leave a [TODO: ...] comment under the definition of for clarifying how it should be chosen.", "new_text": "\"proto\": The PDA protocol, e.g., Prio or Hits. The rest of the structure contains the protocol specific parameters. Each task has a unique _task ID_ derived from the PDA parameters: The task ID of a \"PDAParam\" is derived using the following procedure: Where \"SHA-256\" is as specified in FIPS180-4. 3.1.2."}
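The consensus reached in the thread (SHA-256 over the serialized PDAParam, with a random 16-byte nonce making otherwise-identical parameter sets yield distinct task IDs) can be sketched as follows. The `serialize_param` helper is a hypothetical stand-in for the TLS presentation-language wire encoding the thread settled on; only the hashing step reflects the agreed derivation.

```python
import hashlib
import os

def pda_task_id(serialized_param: bytes) -> bytes:
    """PDATaskID = SHA-256 over the serialized PDAParam structure,
    per the thread's consensus; yields a 32-byte identifier."""
    return hashlib.sha256(serialized_param).digest()

def serialize_param(nonce: bytes, leader_url: str, helper_url: str) -> bytes:
    """Hypothetical stand-in for the TLS binary encoding of PDAParam.
    The 16-byte random nonce is what distinguishes two tasks that
    otherwise share identical parameters."""
    assert len(nonce) == 16
    return nonce + leader_url.encode() + b"\x00" + helper_url.encode()
```

With this derivation the task ID is deterministic given the parameters, so any party holding the PDAParam can recompute and check it, while the nonce keeps repeated data-collection tasks distinct.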
{"id": "q-en-draft-ietf-ppm-dap-3a313dfd6e549ce8c8559ab7ee7c261e1ea21f6df378ce27f3773f9bd1e68329", "old_text": "The helper handles well-formed requests as follows. (As usual, malformed requests are handled as described in pa-error-common- aborts.) It first looks for the PDA parameters \"PDAParam\" for which \"PDAAggregateReq.task_id\" is equal to the task id derived from \"PDAParam\". The response consists of the helper's updated state and a sequence of", "comments": "I added a clarification that the nonce is random. IMO the question of who randomly generates the nonce goes with the question of how participants agree upon PDAParams, which is to say it's outside the protocol.\nOn the 2021/7/7 design call we decided that the ID should be derived deterministically from the serialized PAParam by, for example, hashing it. This solves a few problems simultaneously. This settles the question of whether there should be 1:1 correspondence between a task ID and the parameters used for that task. NAME brought up an interesting question, which is how to make the task ID unique when two different data collection tasks use the same parameters. Thoughts?\nThe consensus here is to do SHA-256 over the PDAParams structure and use that as the 32 byte PDATaskID. However, hashing a protocol message requires us to decide on a canonical representation of a protocol message, which I think forces us to decide how to encode messages on the wire. I filed for that question. I'll proceed with a PR for this issue on the assumption that we'll pick JSON.\nAs I mentioned there, the assumption would be that we're adopting TLS' binary encoding. If so, in TLS notation, you would write as the hash of , which itself is a serialized data structure.\nAll we require is that each PAParam have some unique value, right? A random nonce would probably suffice. r.e. 
the encoding, yes, let's stick with TLS's wire format as NAME suggests.\nYes, my thinking was that we would just stick 16 bytes of random data in (a UUID's worth). I agree that we should stick with TLS binary encoding.\nApproved, with the following suggestion: Let's leave a [TODO: ...] comment under the definition of for clarifying how it should be chosen.", "new_text": "The helper handles well-formed requests as follows. (As usual, malformed requests are handled as described in pa-error-common- aborts.) It first looks for the PDA parameters \"PDAParam\" for which \"PDAAggregateReq.task_id\" is equal to the task ID derived from \"PDAParam\". The response consists of the helper's updated state and a sequence of"}
{"id": "q-en-draft-ietf-ppm-dap-3a313dfd6e549ce8c8559ab7ee7c261e1ea21f6df378ce27f3773f9bd1e68329", "old_text": "Each POST request to an aggregator contains a \"PDATaskID\". If the aggregator does not recognize the task, i.e., it can't find a \"PDAParam\" for which the derived task id matches the \"PDATaskID\", then it aborts and alerts the peer with \"unrecognized task\". 4.", "comments": "I added a clarification that the nonce is random. IMO the question of who randomly generates the nonce goes with the question of how participants agree upon PDAParams, which is to say it's outside the protocol.\nOn the 2021/7/7 design call we decided that the ID should be derived deterministically from the serialized PAParam by, for example, hashing it. This solves a few problems simultaneously. This settles the question of whether there should be 1:1 correspondence between a task ID and the parameters used for that task. NAME brought up an interesting question, which is how to make the task ID unique when two different data collection tasks use the same parameters. Thoughts?\nThe consensus here is to do SHA-256 over the PDAParams structure and use that as the 32 byte PDATaskID. However, hashing a protocol message requires us to decide on a canonical representation of a protocol message, which I think forces us to decide how to encode messages on the wire. I filed for that question. I'll proceed with a PR for this issue on the assumption that we'll pick JSON.\nAs I mentioned there, the assumption would be that we're adopting TLS' binary encoding. If so, in TLS notation, you would write as the hash of , which itself is a serialized data structure.\nAll we require is that each PAParam have some unique value, right? A random nonce would probably suffice. r.e. the encoding, yes, let's stick with TLS's wire format as NAME suggests.\nYes, my thinking was that we would just stick 16 bytes of random data in (a UUID's worth). 
I agree that we should stick with TLS binary encoding.\nApproved, with the following suggestion: Let's leave a [TODO: ...] comment under the definition of for clarifying how it should be chosen.", "new_text": "Each POST request to an aggregator contains a \"PDATaskID\". If the aggregator does not recognize the task, i.e., it can't find a \"PDAParam\" for which the derived task ID matches the \"PDATaskID\", then it aborts and alerts the peer with \"unrecognized task\". 4."}
{"id": "q-en-draft-ietf-rats-reference-interaction-models-ffee840fb1de94fa6dc2be1f75abcb5b141519628d6b4db3b3f666796198b8ab", "old_text": "2. This document uses the following set of terms, roles, and concepts as defined in I-D.ietf-rats-architecture: Attester, Verifier, Relying Party, Conceptual Message, Evidence, Endorsement, Attestation Result, Appraisal Policy, Attesting Environment, Target Environment A PKIX Certificate is an X.509v3 format certificate as specified by RFC5280.", "comments": "\u2026 in diagram. Also fixed \"randomized\" (was British English before). Signed-off-by: Michael Eckel\nAn handle mustbe created/generated before it can be distributed. Adding a generateHandle() call would also strengthen understanding by readers.\nBased on URL In the sequence diagram, some elements in the bracket are not defined in the \u201cInformation Elements\u201d section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined?\nI totally agree. These elements should either be put into Information elements (preferred), or at least be mentioned in the text. Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. 
Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. 
DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? 
Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? 
Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. 
In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? 
Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan", "new_text": "2. This document uses the following set of terms, roles, and concepts as defined in I-D.ietf-rats-architecture: Attester, Verifier, Relying Party, Conceptual Message, Evidence, Endorsement, Attestation Result, Appraisal Policy, Attesting Environment, Target Environment A PKIX Certificate is an X.509v3 format certificate as specified by RFC5280."}
{"id": "q-en-draft-ietf-rats-reference-interaction-models-ffee840fb1de94fa6dc2be1f75abcb5b141519628d6b4db3b3f666796198b8ab", "old_text": "interaction models in general, the following set of prerequisites MUST be in place to support the implementation of interaction models: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. An Attester Identity MAY be a unique identity, it MAY be included in a zero-knowledge proof (ZKP), or it MAY be part of a group signature, or it MAY be a randomised DAA credential DAA. Attestation Evidence MUST be authentic. In order to provide proofs of authenticity, Attestation Evidence SHOULD be cryptographically associated with an identity document (e.g. an PKIX certificate or trusted key material, or a randomised DAA credential DAA), or SHOULD include a correct and unambiguous and stable reference to an accessible identity document.", "comments": "\u2026 in diagram. Also fixed \"randomized\" (was British English before). Signed-off-by: Michael Eckel\nAn handle mustbe created/generated before it can be distributed. Adding a generateHandle() call would also strengthen understanding by readers.\nBased on URL In the sequence diagram, some elements in the bracket are not defined in the \u201cInformation Elements\u201d section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined?\nI totally agree. These elements should either be put into Information elements (preferred), or at least be mentioned in the text. Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. 
Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. 
Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? 
The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? 
For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. 
Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. 
An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan", "new_text": "interaction models in general, the following set of prerequisites MUST be in place to support the implementation of interaction models: A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity, used as proof of identity. The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. An Attester Identity MAY be a unique identity, MAY be included in a zero-knowledge proof (ZKP), MAY be part of a group signature, or it MAY be a randomized DAA credential DAA. Attestation Evidence MUST be authentic. In order to provide proofs of authenticity, Attestation Evidence SHOULD be cryptographically associated with an identity document (e.g. an PKIX certificate or trusted key material, or a randomized DAA credential DAA), or SHOULD include a correct and unambiguous and stable reference to an accessible identity document."}
{"id": "q-en-draft-ietf-rats-reference-interaction-models-ffee840fb1de94fa6dc2be1f75abcb5b141519628d6b4db3b3f666796198b8ab", "old_text": "without accompanying evidence about its validity - used as proof of identity. The Attester is issued with a credential by the Endorser that is randomised and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. _mandatory_ A statement representing an identifier list that MUST be", "comments": "\u2026 in diagram. Also fixed \"randomized\" (was British English before). Signed-off-by: Michael Eckel\nAn handle mustbe created/generated before it can be distributed. Adding a generateHandle() call would also strengthen understanding by readers.\nBased on URL In the sequence diagram, some elements in the bracket are not defined in the \u201cInformation Elements\u201d section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined?\nI totally agree. These elements should either be put into Information elements (preferred), or at least be mentioned in the text. Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. 
Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. 
DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? 
Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? 
Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. 
In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? 
Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan", "new_text": "without accompanying evidence about its validity - used as proof of identity. _mandatory_ A statement representing an identifier list that MUST be"}
{"id": "q-en-draft-ietf-rats-reference-interaction-models-ffee840fb1de94fa6dc2be1f75abcb5b141519628d6b4db3b3f666796198b8ab", "old_text": "Claims are assertions that represent characteristics of an Attester's Target Environment. Claims are part Conceptual Message and are, for example, used to appraise the integrity of Attesters via a Verifiers. The other information elements in this section can be expressed as Claims in any type of Conceptional Messages. _mandatory_ Reference Values as defined in I-D.ietf-rats-architecture. This", "comments": "\u2026 in diagram. Also fixed \"randomized\" (was British English before). Signed-off-by: Michael Eckel\nAn handle mustbe created/generated before it can be distributed. Adding a generateHandle() call would also strengthen understanding by readers.\nBased on URL In the sequence diagram, some elements in the bracket are not defined in the \u201cInformation Elements\u201d section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined?\nI totally agree. These elements should either be put into Information elements (preferred), or at least be mentioned in the text. Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. 
So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. 
Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. 
This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. 
But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. 
Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? 
Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan", "new_text": "Claims are assertions that represent characteristics of an Attester's Target Environment. Claims are part of a Conceptual Message and are, for example, used to appraise the integrity of Attesters via Verifiers. The other information elements in this section can be expressed as Claims in any type of Conceptional Messages. _optional_ Event Logs accompany Claims by providing event trails of security- critical events in a system. The primary purpose of Event Logs is to support Claim reproducibility by providing information on how Claims originated. _mandatory_ Reference Values as defined in I-D.ietf-rats-architecture. This"}
{"id": "q-en-draft-ietf-rats-reference-interaction-models-ffee840fb1de94fa6dc2be1f75abcb5b141519628d6b4db3b3f666796198b8ab", "old_text": "_mandatory_ A set of Claims that consists of a list of Authentication Secret IDs that each identifies an Authentication Secret in a single Attesting Environment, the Attester Identity, Claims, and a", "comments": "\u2026 in diagram. Also fixed \"randomized\" (was British English before). Signed-off-by: Michael Eckel\nAn handle mustbe created/generated before it can be distributed. Adding a generateHandle() call would also strengthen understanding by readers.\nBased on URL In the sequence diagram, some elements in the bracket are not defined in the \u201cInformation Elements\u201d section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined?\nI totally agree. These elements should either be put into Information elements (preferred), or at least be mentioned in the text. Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. 
DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. 
Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? 
Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? 
Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. 
In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? 
Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan", "new_text": "_mandatory_ Collected Claims represent a (sub-)set of Claims created by an Attester. Collected Claims are gathered based on the Claims selected in the Claim Selection. If a Verifier does not provide a Claim Selection, then all available Claims on the Attester are part of the Collected Claims. _mandatory_ A set of Claims that consists of a list of Authentication Secret IDs that each identifies an Authentication Secret in a single Attesting Environment, the Attester Identity, Claims, and a"}
{"id": "q-en-draft-ietf-rats-reference-interaction-models-ffee840fb1de94fa6dc2be1f75abcb5b141519628d6b4db3b3f666796198b8ab", "old_text": "The Attester boots up and thereby produces claims about its boot state and its operational state. Event Logs accompany the produced claims by providing an event trail of security-critical events of a system. Claims are produced by all attesting Environments of an Attester system.", "comments": "\u2026 in diagram. Also fixed \"randomized\" (was British English before). Signed-off-by: Michael Eckel\nAn handle mustbe created/generated before it can be distributed. Adding a generateHandle() call would also strengthen understanding by readers.\nBased on URL In the sequence diagram, some elements in the bracket are not defined in the \u201cInformation Elements\u201d section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined?\nI totally agree. These elements should either be put into Information elements (preferred), or at least be mentioned in the text. Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. 
Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. 
DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? 
Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? 
Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. 
In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? 
Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan", "new_text": "The Attester boots up and thereby produces claims about its boot state and its operational state. Event Logs accompany the produced claims by providing an event trail of security-critical events in a system. Claims are produced by all attesting Environments of an Attester system."}
{"id": "q-en-draft-ietf-rats-reference-interaction-models-28a32388a09221ec70a2aa018f71814db30fe1f01843a6b2d593cd56fc1f75ad", "old_text": "interaction models in general, the following set of prerequisites MUST be in place to support the implementation of interaction models: A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity, used as proof of identity. The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. An Attester Identity MAY be a unique identity, MAY be included in a zero-knowledge proof (ZKP), MAY be part of a group signature, or it MAY be a randomized DAA credential DAA. Attestation Evidence MUST be authentic. In order to provide proofs of authenticity, Attestation Evidence SHOULD be cryptographically associated with an identity document (e.g. an PKIX certificate or trusted key material, or a randomized DAA credential DAA), or SHOULD include a correct and unambiguous and stable reference to an accessible identity document. An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Evidence MUST include an indicator about its freshness that can be understood by a Verifier. 
Analogously, interaction models MUST", "comments": "Signed-off-by: Michael Eckel\nBased on URL Previous section says \u201cAttester Identity\u201d is about \u201ca distinguishable Attesting Environment\u201d, and here says it\u2019s about \u201ca distinguishable Attester\u201d.\nHow to understand \u201cwithout accompanying evidence about its validity\u201d?\nI think it means that there is no certificate conveyed along with it which proves its validity.\nI think this is related to and should be fixed.\nBased on URL The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity?\nThe TPM\u2019s AIK certificate is one kind of Attester Identity, right?\n\"The TPM\u2019s AIK certificate is one kind of Attester Identity, right?\": Correct.\n\"The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity?\" I see the point, too, and agree. So, perhaps it should be renamed to \"Attesting Environment identity\"!? The Attester Identity then should be something different. It may be the TPM AK/AIK of one of the Attesting Environments, but not necessarily.\nAs discussed, polish can happen later Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). 
Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. 
Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. 
This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. 
But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. 
Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? 
Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan", "new_text": "interaction models in general, the following set of prerequisites MUST be in place to support the implementation of interaction models: An Authentication Secret MUST be available exclusively to an Attesting Environment of an Attester. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. A statement about a distinguishable Attester made by an Endorser. The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. An Attester Identity MAY be an Authentication Secret which is available exclusively to one of the Attesting Environments of an Attester. It MAY be a unique identity, MAY be included in a zero-knowledge proof (ZKP), MAY be part of a group signature, or it MAY be a randomized DAA credential DAA. Attestation Evidence MUST be authentic. In order to provide proofs of authenticity, Attestation Evidence SHOULD be cryptographically associated with an identity document (e.g., a PKIX certificate or trusted key material, or a randomized DAA credential DAA), or SHOULD include a correct, unambiguous and stable reference to an accessible identity document. 
Evidence MUST include an indicator about its freshness that can be understood by a Verifier. Analogously, interaction models MUST"}
{"id": "q-en-draft-ietf-rats-reference-interaction-models-28a32388a09221ec70a2aa018f71814db30fe1f01843a6b2d593cd56fc1f75ad", "old_text": "_mandatory_ A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. _mandatory_ A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Claims included in Evidence.", "comments": "Signed-off-by: Michael Eckel\nBased on URL Previous section says \u201cAttester Identity\u201d is about \u201ca distinguishable Attesting Environment\u201d, and here says it\u2019s about \u201ca distinguishable Attester\u201d.\nHow to understand \u201cwithout accompanying evidence about its validity\u201d?\nI think it means that there is no certificate conveyed along with it which proves its validity.\nI think this is related to and should be fixed.\nBased on URL The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity?\nThe TPM\u2019s AIK certificate is one kind of Attester Identity, right?\n\"The TPM\u2019s AIK certificate is one kind of Attester Identity, right?\": Correct.\n\"The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity?\" I see the point, too, and agree. So, perhaps it should be renamed to \"Attesting Environment identity\"!? The Attester Identity then should be something different. It may be the TPM AK/AIK of one of the Attesting Environments, but not necessarily.\nAs discussed, polish can happen later Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. 
Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? 
Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. 
Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? 
Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? 
Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. 
Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan", "new_text": "_mandatory_ A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Claims included in Evidence."}
{"id": "q-en-draft-ietf-rats-reference-interaction-models-28a32388a09221ec70a2aa018f71814db30fe1f01843a6b2d593cd56fc1f75ad", "old_text": "Evidence as well as all accompanying Event Logs back to the Verifier. While it is crucial that Claims, the Handle, and the Attester Identity information MUST be cryptographically bound to the signature of Evidence, they MAY be presented obfuscated, encrypted, or cryptographically blinded. For further reference see section security-and-privacy-considerations. As soon as the Verifier receives the signed Evidence and Event Logs, it appraises the Evidence. For this purpose, it validates the signature, the Attester Identity, and the Handle, and then appraises the Claims. Appraisal procedures are application-specific and can be conducted via comparison of the Claims with corresponding Reference", "comments": "Signed-off-by: Michael Eckel\nBased on URL Previous section says \u201cAttester Identity\u201d is about \u201ca distinguishable Attesting Environment\u201d, and here says it\u2019s about \u201ca distinguishable Attester\u201d.\nHow to understand \u201cwithout accompanying evidence about its validity\u201d?\nI think it means that there is no certificate conveyed along with it which proves its validity.\nI think this is related to and should be fixed.\nBased on URL The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity?\nThe TPM\u2019s AIK certificate is one kind of Attester Identity, right?\n\"The TPM\u2019s AIK certificate is one kind of Attester Identity, right?\": Correct.\n\"The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity?\" I see the point, too, and agree. So, perhaps it should be renamed to \"Attesting Environment identity\"!? The Attester Identity then should be something different. 
It may be the TPM AK/AIK of one of the Attesting Environments, but not necessarily.\nAs discussed, polish can happen later Hi Henk, I've reviewed the reference interaction models I-D recently, I hope the comments below can help improve the draft before the IETF110 draft submission cut-off. Section 3 Disambiguation Comment > This section is talking about the disambiguation of terminology, so I suggest making it a sub-section of Section 2 Terminology. Examples of these types of co-located environments include: a Trusted Execution Environment (TEE), Baseboard Management Controllers (BMCs), as well as other physical or logical protected/isolated/shielded Computing Environments (e.g. embedded Secure Elements (eSE) or Trusted Platform Modules (TPM)). Comment > About \"these types of co-located environments\", The previous sentences are about the Verifier and Attester don't need to be remote. So \"these\" is weird here, because here is about the attesting environment and target environment. Section 5 Direct Anonymous Attestation Comment > I think it's better to move this section to the bottom of the draft. DAA doesn't introduce a new information elements, and only augments the scope/definition of Attester Identity and Authentication Secret IDs, describing it after the introduction of all the 3 basic interaction models would be better. Putting DAA in the middle makes me feel the basic interaction models rely upon DAA, but actually it's not. This document extends the duties of the Endorser role as defined by the RATS architecture with respect to the provision of these Attester Identity documents to Attesters. The existing duties of the Endorser role and the duties of a DAA Issuer are quite similar as illustrated in the following subsections. Comment > Without DAA, I think the Endorser also needs to provision the Attester Identity to Attesters. 
And as I understand, the DAA Issuer is a supply chain entity before the Attester being shipped, so it is the Endorser when DAA is used, right? If yes, then in the next sentence, the comparison between Endorser and DAA Issuer doesn't make sense to me. Section 5.1 Endorsers Comment > Does this section have difference with the definition of Endorser in architecture draft? If not, is it necessary to keep this section? Comment > This section only describes the Layered Attestation, but the scope of Endorsement is more than what's described here. Endorsement indicates the Attester's various capabilities such as Claims collection and Evidence signing. For other situations than Layered Attestation, the Endorser and Endorsement are also needed. So why only mention Layered Attestation here? I don't see any special relationship between DAA and Layered Attestation. Section 5.2 Endorsers for Direct Anonymous Attestation In order to enable the use of DAA, an Endorser role takes on the duties of a DAA Issuer in addition to its already defined duties. DAA Issuers offer zero-knowledge proofs based on public key certificates used for a group of Attesters [DAA]. Effectively, these certificates share the semantics of Endorsements, with the following exceptions: Comment > In the first sentence, I suggest saying that \"a DAA Issuer takes on the role of an Endorser\". o The associated private keys are used by the DAA Issuer to provide an Attester with a credential that it can use to convince the Verifier that its Evidence is valid. To keep their anonymity the Attester randomizes this credential each time that it is used. Comment > How to understand \"Evidence is valid\"? Does it mean the Evidence is sent from an authentic Attester and not tampered during the conveyance? Or does it mean the Evidence comes from a RoT and is trustable (although the Evidence can diverge from the Reference Values)? 
Comment > When saying \"Attester randomizes this credential\", how many credentials does an Attester have? 1) Can a DAA Issuer have multiple key pairs and use them for one Attester? 2) Can a DAA Issuer use one key pair to generate multiple credentials for one Attester? o A credential is conveyed from an Endorser to an Attester in combination with the conveyance of the public key certificates from Endorser to Verifier. Comment > Is there another way to convey the public key certificate, for example, can the public key certificates be conveyed from Endorser to Attester first and then from Attester to Verifier? The zero-knowledge proofs required cannot be created by an Attester alone - like the Endorsements of RoTs - and have to be created by a trustable third entity - like an Endorser. Due to that semantic overlap, the Endorser role is augmented via the definition of DAA duties as defined below. This augmentation enables the Endorser to convey trustable third party statements both to Verifier roles and Attester roles. Comment > For \"the definition of DAA duties as defined below\", what is the definition of DAA duties? Comment > In the last sentence, the Endorsement with its original definition is the third party statement for Verifier and Attester, isn't it? So, \"augmentation\" isn't the correct expression. Section 6 Normative Prerequisites Attester Identity: The provenance of Evidence with respect to a distinguishable Attesting Environment MUST be correct and unambiguous. Comment > The Attester Identity is to identify which Attester the Evidence comes from. But if the Attester has multiple Attesting Environments, what should be the Attester Identity? Comment > The TPM's AIK certificate is one kind of Attester Identity, right? Attestation Evidence Authenticity: Attestation Evidence MUST be correct and authentic. Comment > Is it appropriate to say \"correct Evidence\"? If saying so, I think it means that the Evidence satisfies the Attestation Policy for Evidence. 
Authentication Secret: An Authentication Secret MUST be available exclusively to an Attester's Attesting Environment. The Attester MUST protect Claims with that Authentication Secret, thereby proving the authenticity of the Claims included in Evidence. The Authentication Secret MUST be established before RATS can take place. Comment > Does the Authentication Secret represent the identity of the Attesting Environment? Comment > How to understand the Authentication Secret, and is it necessary all the time? For example, in our implementation, during the router booting up, the BIOS measures the BootLoader and records the measured hash value into the TPM, and the BootLoader measures the OS Kernel and records the measured hash value into the TPM as well. From the Layered Attestation perspective, I think (BIOS + TPM) is the Attesting Environment and (BootLoader + TPM) is also the Attesting Environment. But the Claims (measured hash values) aren't protected separately, and they finally becomes the Evidence (TPM Quote) and it's only protected by the TPM's AIK. So, what is the Authentication Secret? Section 7 Generic Information Elements Attester Identity ('attesterIdentity'): mandatory A statement about a distinguishable Attester made by an Endorser without accompanying evidence about its validity - used as proof of identity. Comment > Previous section says \"Attester Identity\" is about \"a distinguishable Attesting Environment\", and here says it's about \"a distinguishable Attester\". Comment > How to understand \"without accompanying evidence about its validity\"? Attester Identity ('attesterIdentity'): mandatory In DAA, the Attester's identity is not revealed to the verifier. The Attester is issued with a credential by the Endorser that is randomized and then used to anonymously confirm the validity of their evidence. The evidence is verified using the Endorser's public key. Comment > I think here means the DAA credential represents the Attester Identity. 
Comment > There is ambiguity of \"that is randomized\", does it mean randomized Endorser or randomized credential? Comment > For \"confirm the validity of their evidence\", what does \"their\" refer to? And what does \"the validity of evidence\" mean? Authentication Secret IDs ('authSecID'): mandatory A statement representing an identifier list that MUST be associated with corresponding Authentication Secrets used to protect Evidence. Comment > Previous section says \"Authentication Secret\" is used to protect Claims, but here says it's used to protect Evidence. Comment > As I understand, if Authentication Secret represents the identity of Attesting Environment, then it's not mandatory, at least in our implementation. Authentication Secret IDs ('authSecID'): mandatory In DAA, Authentication Secret IDs are represented by the Endorser (DAA issuer)'s public key that MUST be used to create DAA credentials for the corresponding Authentication Secrets used to protect Evidence. In DAA, an Authentication Secret ID does not identify a unique Attesting Environment but associated with a group of Attesting Environments. This is because an Attesting Environment should not be distinguishable and the DAA credential which represents the Attesting Environment is randomised each time it used. Comment > In my understanding, here says that the DAA credential identities the Attesting Environment. Compared with the description in the \"Attester Identity\" part, what does the DAA credential represent actually? Reference Claims ('refClaims') mandatory Reference Claims are components of Reference Values as defined in [I-D.ietf-rats-architecture]. [Editor's Note: Definition might become obsolete, if replaced by Reference Values. Is there a difference between Claims and Values here? Analogously, why is not named Reference Claims in the RATS arch?] Comment > I suggest using Reference Values to keep consistent with the RATS arch. 
Reference Claims are used to appraise the Claims received from an Attester via appraisal by direct comparison. Comment > \"Direct comparison\" isn't the correct expression, the comparison can have other rules. Claim Selection ('claimSelection'): optional A statement that represents a (sub-)set of Claims that can be created by an Attester. Claim Selections can act as filters that can specify the exact set of Claims to be included in Evidence. An Attester MAY decide whether or not to provide all Claims as requested via a Claim Selection. Comment > The \"all Claims\" may be ambiguous, I'd like to double check, does it refer to all Claims that the Attester can create or refer to all Claims requested in the Claim Selection? Section 8.1 Challenge/Response Remote Attestation Comment > In the sequence diagram, some elements in the bracket are not defined in the \"Information Elements\" section, such as targetEnvironment, collectedClaims, eventLog. Should these elements be defined? Section 9 Additional Application-Specific Requirements Depending on the use cases covered, there can be additional requirements. An exemplary subset is illustrated in this section. Comment > Here starts to talk about \"additional requirements\", but I wonder there is no other places in this draft talking about requirement, so what are the basic requirements? Regards & Thanks! Wei Pan", "new_text": "Evidence as well as all accompanying Event Logs back to the Verifier. While it is crucial that Claims, the Handle, and the Attester Identity information (i.e., the Authentication Secret) MUST be cryptographically bound to the signature of Evidence, they MAY be presented obfuscated, encrypted, or cryptographically blinded. For further reference see section security-and-privacy-considerations. As soon as the Verifier receives the Evidence and the Event Logs, it appraises the Evidence. For this purpose, it validates the signature, the Attester Identity, and the Handle, and then appraises the Claims. 
Appraisal procedures are application-specific and can be conducted via comparison of the Claims with corresponding Reference"}
{"id": "q-en-draft-ietf-sacm-coswid-00de5703d4fa49c0bae69cf28e3de9f36acfa05ad3ee259c2aed84a41b75ee4d", "old_text": "stored or transported, as compared to the less efficient text-based form of the original vocabulary. The root of the CDDL specification provided by this document is the rule \"coswid\" (as defined in tagged):", "comments": "Addressed the following comments: [Per IETF 110] Add text on the order of native CDDL approach for handling version is used. If a future version of SWID is published which alters the semantics of exists fields, a future version of CoSWID SHOULD (MUST?) use a different tag to represent it. [daw: A new paragraph was added to section 2 detailing how index values are used to support revisions and extensions.] [Per IETF 110] see -16. Use ISO SWID reference for the IANA registry and if desired, used [SEMVER] as an informal reference in the text [daw: This has been done in section 4.1.] [daw: The FIXME markers have been replaced with actual text.] [daw: Section 7 now details the construction of a signed CoSWID. A SHOULD sign requirement was also added to section 7.] [Per IETF 110] We discussed adding language on the order of \"This document specifies a data format and an associated integrity mechanism agnostic of a deployment. Application which choose to use CoSWID will need to specify explicit guidance which security properties are needed and under which circumstances.\" > ** [-15] Section 6. Per \"When an authoritative tag is signed, the software [daw: Authenticating the signer is the software provider will require a match between a CoSWID entity and the signer. We are working out a means to do this.] 2 [daw: Not sure which tags this applies to? Signing? All of section 2?] [daw: done.] [daw: This confusing text has been removed.] [daw: done.] [daw: We added a note about this.] [daw: Done.] [daw: Added signature validation as a way.] 
Regards, Roman", "new_text": "stored or transported, as compared to the less efficient text-based form of the original vocabulary. Through use of CDDL-based integer labels, CoSWID allows for future expansion in subsequent revisions of this specification and through extensions (see model-extension). New constructs can be associated with a new integer index. A deprecated construct can be replaced by a new construct with a new integer index. An implementation can use these integer indexes to identify the construct to parse. The CoSWID Items registry, defined in iana-coswid-items, is used to ensure that new constructs are assigned a unique index value on a first-come, first-served basis. This approach avoids the need to have an explicit CoSWID version. The root of the CDDL specification provided by this document is the rule \"coswid\" (as defined in tagged):"}
{"id": "q-en-draft-ietf-sacm-coswid-00de5703d4fa49c0bae69cf28e3de9f36acfa05ad3ee259c2aed84a41b75ee4d", "old_text": "integrity measurements for files. The ISO-19770-2:2015 XML schema uses XML DSIG to support cryptographic signatures. Signing CoSWID tags follows the procedues defined in CBOR Object Signing and Encryption RFC8152. A CoSWID tg MUST be wrapped in a COSE Single Signer Data Object (COSE_Sign1) that contains a single signature and MUST be signed by the tag creator. The following CDDL specification defines a restrictive subset of COSE header parameters", "comments": "Addressed the following comments: [Per IETF 110] Add text on the order of native CDDL approach for handling version is used. If a future version of SWID is published which alters the semantics of exists fields, a future version of CoSWID SHOULD (MUST?) use a different tag to represent it. [daw: A new paragraph was added to section 2 detailing how index values are used to support revisions and extensions.] [Per IETF 110] see -16. Use ISO SWID reference for the IANA registry and if desired, used [SEMVER] as an informal reference in the text [daw: This has been done in section 4.1.] [daw: The FIXME markers have been replaced with actual text.] [daw: Section 7 now details the construction of a signed CoSWID. A SHOULD sign requirement was also added to section 7.] [Per IETF 110] We discussed adding language on the order of \"This document specifies a data format and an associated integrity mechanism agnostic of a deployment. Application which choose to use CoSWID will need to specify explicit guidance which security properties are needed and under which circumstances.\" > ** [-15] Section 6. Per \"When an authoritative tag is signed, the software [daw: Authenticating the signer is the software provider will require a match between a CoSWID entity and the signer. We are working out a means to do this.] 2 [daw: Not sure which tags this applies to? Signing? All of section 2?] [daw: done.] 
[daw: This confusing text has been removed.] [daw: done.] [daw: We added a note about this.] [daw: Done.] [daw: Added signature validation as a way.] Regards, Roman", "new_text": "integrity measurements for files. The ISO-19770-2:2015 XML schema uses XML DSIG to support cryptographic signatures. Signing CoSWID tags follows the procedures defined in CBOR Object Signing and Encryption RFC8152. A CoSWID tag MUST be wrapped in a COSE Single Signer Data Object (COSE_Sign1) that contains a single signature and MUST be signed by the tag creator. The following CDDL specification defines a restrictive subset of COSE header parameters"}
{"id": "q-en-draft-ietf-sacm-coswid-00de5703d4fa49c0bae69cf28e3de9f36acfa05ad3ee259c2aed84a41b75ee4d", "old_text": "provide a signature on a signature allowing for a proof that a signature existed at a given time (i.e., a timestamp). 8. This specification allows for tagged and untagged CBOR data items", "comments": "Addressed the following comments: [Per IETF 110] Add text on the order of native CDDL approach for handling version is used. If a future version of SWID is published which alters the semantics of exists fields, a future version of CoSWID SHOULD (MUST?) use a different tag to represent it. [daw: A new paragraph was added to section 2 detailing how index values are used to support revisions and extensions.] [Per IETF 110] see -16. Use ISO SWID reference for the IANA registry and if desired, used [SEMVER] as an informal reference in the text [daw: This has been done in section 4.1.] [daw: The FIXME markers have been replaced with actual text.] [daw: Section 7 now details the construction of a signed CoSWID. A SHOULD sign requirement was also added to section 7.] [Per IETF 110] We discussed adding language on the order of \"This document specifies a data format and an associated integrity mechanism agnostic of a deployment. Application which choose to use CoSWID will need to specify explicit guidance which security properties are needed and under which circumstances.\" > ** [-15] Section 6. Per \"When an authoritative tag is signed, the software [daw: Authenticating the signer is the software provider will require a match between a CoSWID entity and the signer. We are working out a means to do this.] 2 [daw: Not sure which tags this applies to? Signing? All of section 2?] [daw: done.] [daw: This confusing text has been removed.] [daw: done.] [daw: We added a note about this.] [daw: Done.] [daw: Added signature validation as a way.] 
Regards, Roman", "new_text": "provide a signature on a signature allowing for a proof that a signature existed at a given time (i.e., a timestamp). A CoSWID SHOULD be signed, using the above mechanism, to protect the integrity of the CoSWID tag. See the security considerations (in sec-sec) for more information on why a signed CoSWID is valuable in most cases. 8. This specification allows for tagged and untagged CBOR data items"}
{"id": "q-en-draft-ietf-sacm-coswid-00de5703d4fa49c0bae69cf28e3de9f36acfa05ad3ee259c2aed84a41b75ee4d", "old_text": "A tag is considered \"authoritative\" if the CoSWID tag was created by the software provider. An authoritative CoSWID tag contains information about a software component provided by the maintainer of the software component, who is expected to be an expert in their own software. Thus, authoritative CoSWID tags can be trusted to represent authoritative information about the software component. A signed CoSWID tag (see coswid-cose) whose signature has been validated can be relied upon to be unchanged since it was signed. By contrast, the data contained in unsigned tags cannot be trusted to be unmodified. When an authoritative tag is signed, the software provider can be authenticated as the originator of the signature. A trustworthy association between the signature and the originator of the signature can be established via trust anchors. A certification path between a trust anchor and a certificate including a pub-key enabling the validation of a tag signature can realize the assessment of trustworthiness of an authoritative tag. Having a signed authoritative CoSWID tag can be useful when the information in the tag needs to be trusted, such as when the tag is being used to convey reference integrity measurements for software components. CoSWID tags are intended to contain public information about software components and, as such, the contents of a CoSWID tag does not need", "comments": "Addressed the following comments: [Per IETF 110] Add text on the order of native CDDL approach for handling version is used. If a future version of SWID is published which alters the semantics of exists fields, a future version of CoSWID SHOULD (MUST?) use a different tag to represent it. [daw: A new paragraph was added to section 2 detailing how index values are used to support revisions and extensions.] [Per IETF 110] see -16. 
Use ISO SWID reference for the IANA registry and if desired, used [SEMVER] as an informal reference in the text [daw: This has been done in section 4.1.] [daw: The FIXME markers have been replaced with actual text.] [daw: Section 7 now details the construction of a signed CoSWID. A SHOULD sign requirement was also added to section 7.] [Per IETF 110] We discussed adding language on the order of \"This document specifies a data format and an associated integrity mechanism agnostic of a deployment. Application which choose to use CoSWID will need to specify explicit guidance which security properties are needed and under which circumstances.\" > ** [-15] Section 6. Per \"When an authoritative tag is signed, the software [daw: Authenticating the signer is the software provider will require a match between a CoSWID entity and the signer. We are working out a means to do this.] 2 [daw: Not sure which tags this applies to? Signing? All of section 2?] [daw: done.] [daw: This confusing text has been removed.] [daw: done.] [daw: We added a note about this.] [daw: Done.] [daw: Added signature validation as a way.] Regards, Roman", "new_text": "A tag is considered \"authoritative\" if the CoSWID tag was created by the software provider. An authoritative CoSWID tag contains information about a software component provided by the supplier of the software component, who is expected to be an expert in their own software. Thus, authoritative CoSWID tags can represent authoritative information about the software component. The degree to which this information can be trusted depends on the tag's chain of custody and the ability to verify a signature provided by the supplier if present in the CoSWID tag. The provisioning and validation of CoSWID tags are handled by local policy and are outside the scope of this document. A signed CoSWID tag (see coswid-cose) whose signature has been validated can be relied upon to be unchanged since it was signed. 
By contrast, the data contained in unsigned tags can be altered by any user or process with write-access to the tag. To support signature validation, there is a need to associate the right key with the software provider or party originating the signature. This operation is application specific and needs to be addressed by the application or a user of the application; a specific approach is out of scope for this document. When an authoritative tag is signed, the originator of the signature can be verified. A trustworthy association between the signature and the originator of the signature can be established via trust anchors. A certification path between a trust anchor and a certificate including a public key enabling the validation of a tag signature can realize the assessment of trustworthiness of an authoritative tag. Verifying that the software provider is the signer is a different matter. This requires an association between the signature and the tag's entity item corresponding to the software provider. No mechanism is defined in this draft to make this association; therefore, this association will need to be handled by local policy. CoSWID tags are intended to contain public information about software components and, as such, the contents of a CoSWID tag does not need"}
{"id": "q-en-draft-ietf-sacm-coswid-00de5703d4fa49c0bae69cf28e3de9f36acfa05ad3ee259c2aed84a41b75ee4d", "old_text": "do this, CoSWID tags can be created by any party and the CoSWID tags collected from an endpoint could contain a mixture of vendor and non- vendor created tags. For this reason, a CoSWID tag might contain potentially malicious content. Input sanitization and loop detection are two ways that implementations can address this concern. 10.", "comments": "Addressed the following comments: [Per IETF 110] Add text on the order of native CDDL approach for handling version is used. If a future version of SWID is published which alters the semantics of exists fields, a future version of CoSWID SHOULD (MUST?) use a different tag to represent it. [daw: A new paragraph was added to section 2 detailing how index values are used to support revisions and extensions.] [Per IETF 110] see -16. Use ISO SWID reference for the IANA registry and if desired, used [SEMVER] as an informal reference in the text [daw: This has been done in section 4.1.] [daw: The FIXME markers have been replaced with actual text.] [daw: Section 7 now details the construction of a signed CoSWID. A SHOULD sign requirement was also added to section 7.] [Per IETF 110] We discussed adding language on the order of \"This document specifies a data format and an associated integrity mechanism agnostic of a deployment. Application which choose to use CoSWID will need to specify explicit guidance which security properties are needed and under which circumstances.\" > ** [-15] Section 6. Per \"When an authoritative tag is signed, the software [daw: Authenticating the signer is the software provider will require a match between a CoSWID entity and the signer. We are working out a means to do this.] 2 [daw: Not sure which tags this applies to? Signing? All of section 2?] [daw: done.] [daw: This confusing text has been removed.] [daw: done.] [daw: We added a note about this.] [daw: Done.] 
[daw: Added signature validation as a way.] Regards, Roman", "new_text": "do this, CoSWID tags can be created by any party and the CoSWID tags collected from an endpoint could contain a mixture of vendor and non- vendor created tags. For this reason, a CoSWID tag might contain potentially malicious content. Input sanitization, loop detection, and signature verification are ways that implementations can address this concern. 10."}
{"id": "q-en-draft-ietf-sacm-coswid-00de5703d4fa49c0bae69cf28e3de9f36acfa05ad3ee259c2aed84a41b75ee4d", "old_text": "Aligned excerpt examples in I-D text with full CDDL Fixed titels where title was referring to group instead of map Added missig date in SEMVER Fixed root cardinality for file and directory, etc.", "comments": "Addressed the following comments: [Per IETF 110] Add text on the order of native CDDL approach for handling version is used. If a future version of SWID is published which alters the semantics of exists fields, a future version of CoSWID SHOULD (MUST?) use a different tag to represent it. [daw: A new paragraph was added to section 2 detailing how index values are used to support revisions and extensions.] [Per IETF 110] see -16. Use ISO SWID reference for the IANA registry and if desired, used [SEMVER] as an informal reference in the text [daw: This has been done in section 4.1.] [daw: The FIXME markers have been replaced with actual text.] [daw: Section 7 now details the construction of a signed CoSWID. A SHOULD sign requirement was also added to section 7.] [Per IETF 110] We discussed adding language on the order of \"This document specifies a data format and an associated integrity mechanism agnostic of a deployment. Application which choose to use CoSWID will need to specify explicit guidance which security properties are needed and under which circumstances.\" > ** [-15] Section 6. Per \"When an authoritative tag is signed, the software [daw: Authenticating the signer is the software provider will require a match between a CoSWID entity and the signer. We are working out a means to do this.] 2 [daw: Not sure which tags this applies to? Signing? All of section 2?] [daw: done.] [daw: This confusing text has been removed.] [daw: done.] [daw: We added a note about this.] [daw: Done.] [daw: Added signature validation as a way.] 
Regards, Roman", "new_text": "Aligned excerpt examples in I-D text with full CDDL Fixed titles where title was referring to group instead of map Added missing date in SEMVER Fixed root cardinality for file and directory, etc."}
{"id": "q-en-draft-ietf-taps-transport-security-8b7bd23d2f1f220ac3d1575f01f447e60be280a6abc8b14c0d8745c01e17165a", "old_text": "This interaction is often limited to signaling between the record layer and the handshake layer. ESP Mobility Events (ME): The record protocol can be signaled that it is being migrated to another transport or interface due to", "comments": "Completes\nWe collapsed IKEv2 and ESP into IPsec. Should we remove ESP from feature lists, e.g., the list of protocols that support PSK import?\nAgreed, we should make sure this is consistent. One of our features, Key Expiration, currently only mentions ESP though. Is this actually an IPsec feature now, do we remove the feature, or is there another solution?", "new_text": "This interaction is often limited to signaling between the record layer and the handshake layer. IPsec Mobility Events (ME): The record protocol can be signaled that it is being migrated to another transport or interface due to"}
{"id": "q-en-draft-ietf-taps-transport-security-88bfd1414e102a1507ed024c37bf3f3b783358fdba6fafe951a01377c0c7d26b", "old_text": "4. There exists a common set of features shared across the transport protocols surveyed in this document. The mandatory features should be provided by any transport security protocol, while the optional features are extensions that a subset of the protocols provide. For clarity, we also distinguish between handshake and record features. 4.1. 4.1.1. Forward-secure segment encryption and authentication: Transit data must be protected with an authenticated encryption algorithm. Private key interface or injection: Authentication based on public key signatures is commonplace for many transport security", "comments": "\u2026/security and security/transport dependencies.\nNAME can you please take a look before Wednesday?\nYep, I will review Monday morning.\nI think this is fine. We can do further tweaks in subsequent PRs.", "new_text": "4. There exists a common set of features shared across the transport protocols surveyed in this document. Mandatory features constitute a baseline of functionality that an application may assume for any TAPS implementation. Optional features by contrast may vary from implementation to implementation, and so an application cannot simply assume they are available. Applications learn of and use optional features by querying for their presence and support. Optional features may not be implemented, or may be disabled if their presence impacts transport services or if a necessary transport service is unavailable. 4.1. Segment encryption and authentication: Transit data must be protected with an authenticated encryption algorithm. Forward-secure key establishment: Negotiated keying material must come from an authenticated, forward-secure key exchange protocol. Private key interface or injection: Authentication based on public key signatures is commonplace for many transport security"}
{"id": "q-en-draft-ietf-taps-transport-security-88bfd1414e102a1507ed024c37bf3f3b783358fdba6fafe951a01377c0c7d26b", "old_text": "connection must be authenticated before any data is sent to said party. Source validation: Source validation must be provided to mitigate server-targeted DoS attacks. This can be done with puzzles or cookies. 4.1.2. Pre-shared key support: A record protocol must be able to use a pre-shared key established out-of-band to encrypt individual messages, packets, or datagrams. 4.2. 4.2.1. Mutual authentication: Transport security protocols must allow each endpoint to authenticate the other if required by the application. Application-layer feature negotiation: The type of application using a transport security protocol often requires features configured at the connection establishment layer, e.g., ALPN", "comments": "\u2026/security and security/transport dependencies.\nNAME can you please take a look before Wednesday?\nYep, I will review Monday morning.\nI think this is fine. We can do further tweaks in subsequent PRs.", "new_text": "connection must be authenticated before any data is sent to said party. Pre-shared key support: A security protocol must be able to use a pre-shared key established out-of-band or from a prior session to encrypt individual messages, packets, or datagrams. 4.2. Mutual authentication: Transport security protocols must allow each endpoint to authenticate the other if required by the application. Transport dependency: None. Application dependency: Mutual authentication required for application support. Connection mobility: Sessions should not be bound to a network connection (or 5-tuple). This allows cryptographic key material and other state information to be reused in the event of a connection change. Examples of this include a NAT rebinding that occurs without a client's knowledge. Transport dependency: Connections are unreliable or can change due to unpredictable network events, e.g., NAT re-bindings. 
Application dependency: None. Source validation: Source validation must be provided to mitigate server-targeted DoS attacks. This can be done with puzzles or cookies. Transport dependency: Packets may arrive as datagrams instead of streams from unauthenticated sources. Application dependency: None. Application-layer feature negotiation: The type of application using a transport security protocol often requires features configured at the connection establishment layer, e.g., ALPN"}
{"id": "q-en-draft-ietf-taps-transport-security-88bfd1414e102a1507ed024c37bf3f3b783358fdba6fafe951a01377c0c7d26b", "old_text": "mechanism to allow for such application-specific features and options to be configured or otherwise negotiated. Configuration extensions: The protocol negotiation should be extensible with addition of new configuration options. Session caching and management: Sessions should be cacheable to enable reuse and amortize the cost of performing session establishment handshakes. 4.2.2. Connection mobility: Sessions should not be bound to a network connection (or 5-tuple). This allows cryptographic key material and other state information to be reused in the event of a connection change. Examples of this include a NAT rebinding that occurs without a client's knowledge. 5. This section describes the interface surface exposed by the security protocols described above, with each interface. Note that not all protocols support each interface. 5.1.", "comments": "\u2026/security and security/transport dependencies.\nNAME can you please take a look before Wednesday?\nYep, I will review Monday morning.\nI think this is fine. We can do further tweaks in subsequent PRs.", "new_text": "mechanism to allow for such application-specific features and options to be configured or otherwise negotiated. Transport dependency: None. Application dependency: Specification of application-layer features or functionality. Configuration extensions: The protocol negotiation should be extensible with addition of new configuration options. Transport dependency: None. Application dependency: Specification of application-specific extensions. Session caching and management: Sessions should be cacheable to enable reuse and amortize the cost of performing session establishment handshakes. Transport dependency: None. Application dependency: None. 5. This section describes the interface surface exposed by the security protocols described above. 
Note that not all protocols support each interface. We partition these interfaces into pre-connection (configuration), connection, and post-connection interfaces. 5.1."}
{"id": "q-en-draft-ietf-taps-transport-security-88bfd1414e102a1507ed024c37bf3f3b783358fdba6fafe951a01377c0c7d26b", "old_text": "key exchange, signatures, and ciphersuites. Protocols: TLS, DTLS, QUIC + TLS, MinimalT, tcpcrypt, IKEv2, SRTP Session Cache The application provides the ability to save and retrieve session state (such as tickets, keying material, and server parameters) that may be used to resume the security session. Protocols: TLS, DTLS, QUIC + TLS, MinimalT Authentication Delegation", "comments": "\u2026/security and security/transport dependencies.\nNAME can you please take a look before Wednesday?\nYep, I will review Monday morning.\nI think this is fine. We can do further tweaks in subsequent PRs.", "new_text": "key exchange, signatures, and ciphersuites. Protocols: TLS, DTLS, QUIC + TLS, MinimalT, tcpcrypt, IKEv2, SRTP Session Cache Management The application provides the ability to save and retrieve session state (such as tickets, keying material, and server parameters) that may be used to resume the security session. Protocols: TLS, DTLS, QUIC + TLS, MinimalT Authentication Delegation"}
{"id": "q-en-draft-ietf-taps-transport-security-88bfd1414e102a1507ed024c37bf3f3b783358fdba6fafe951a01377c0c7d26b", "old_text": "provide authentication, using EAP for example. Protocols: IKEv2, SRTP 5.2. Handshake interfaces are the points of interaction between a handshake protocol and the application, record protocol, and transport once the handshake is active. Send Handshake Messages The handshake protocol needs to be able to send messages over a transport to the remote peer to establish trust and to negotiate keys. Protocols: All (TLS, DTLS, QUIC + TLS, MinimalT, CurveCP, IKEv2, WireGuard, SRTP (DTLS)) Receive Handshake Messages The handshake protocol needs to be able to receive messages from the remote peer over a transport to establish trust and to negotiate keys. Protocols: All (TLS, DTLS, QUIC + TLS, MinimalT, CurveCP, IKEv2, WireGuard, SRTP (DTLS)) Identity Validation During a handshake, the security protocol will conduct identity", "comments": "\u2026/security and security/transport dependencies.\nNAME can you please take a look before Wednesday?\nYep, I will review Monday morning.\nI think this is fine. We can do further tweaks in subsequent PRs.", "new_text": "provide authentication, using EAP for example. Protocols: IKEv2, SRTP Pre-Shared Key Import Either the handshake protocol or the application directly can supply pre-shared keys for the record protocol to use for encryption/decryption and authentication. If the application can supply keys directly, this is considered explicit import; if the handshake protocol traditionally provides the keys directly, it is considered direct import; if the keys can only be shared by the handshake, they are considered non-importable. Explicit import: QUIC, ESP Direct import: TLS, DTLS, MinimalT, tcpcrypt, WireGuard Non-importable: CurveCP 5.2. Identity Validation During a handshake, the security protocol will conduct identity"}
{"id": "q-en-draft-ietf-taps-transport-security-88bfd1414e102a1507ed024c37bf3f3b783358fdba6fafe951a01377c0c7d26b", "old_text": "involves sending a cookie exchange to avoid DoS attacks. Protocols: QUIC + TLS, DTLS, WireGuard Key Update The handshake protocol may be instructed to update its keying material, either by the application directly or by the record protocol sending a key expiration event. Protocols: TLS, DTLS, QUIC + TLS, MinimalT, tcpcrypt, IKEv2 Pre-Shared Key Export The handshake protocol will generate one or more keys to be used for record encryption/decryption and authentication. These may be explicitly exportable to the application, traditionally limited to direct export to the record protocol, or inherently non-exportable because the keys must be used directly in conjunction with the record protocol. Explict export: TLS (for QUIC), tcpcrypt, IKEv2, DTLS (for SRTP)", "comments": "\u2026/security and security/transport dependencies.\nNAME can you please take a look before Wednesday?\nYep, I will review Monday morning.\nI think this is fine. We can do further tweaks in subsequent PRs.", "new_text": "involves sending a cookie exchange to avoid DoS attacks. Protocols: QUIC + TLS, DTLS, WireGuard 5.3. Connection Termination The security protocol may be instructed to tear down its connection and session information. This is needed by some protocols to prevent application data truncation attacks. Protocols: TLS, DTLS, QUIC + TLS, MinimalT, tcpcrypt, IKEv2 Key Update The handshake protocol may be instructed to update its keying material, either by the application directly or by the record protocol sending a key expiration event. Protocols: TLS, DTLS, QUIC + TLS, MinimalT, tcpcrypt, IKEv2 Pre-Shared Key Export The handshake protocol will generate one or more keys to be used for record encryption/decryption and authentication. 
These may be explicitly exportable to the application, traditionally limited to direct export to the record protocol, or inherently non-exportable because the keys must be used directly in conjunction with the record protocol. Explicit export: TLS (for QUIC), tcpcrypt, IKEv2, DTLS (for SRTP)"}
{"id": "q-en-draft-ietf-taps-transport-security-88bfd1414e102a1507ed024c37bf3f3b783358fdba6fafe951a01377c0c7d26b", "old_text": "Non-exportable: CurveCP 5.3. Record interfaces are the points of interaction between a record protocol and the application, handshake protocol, and transport once in use. Pre-Shared Key Import Either the handshake protocol or the application directly can supply pre-shared keys for the record protocol use for encryption/ decryption and authentication. If the application can supply keys directly, this is considered explicit import; if the handshake protocol traditionally provides the keys directly, it is considered direct import; if the keys can only be shared by the handshake, they are considered non-importable. Explict import: QUIC, ESP Direct import: TLS, DTLS, MinimalT, tcpcrypt, WireGuard Non-importable: CurveCP Encrypt application data The application can send data to the record protocol to encrypt it into a format that can be sent on the underlying transport. The encryption step may require that the application data is treated as a stream or as datagrams, and that the transport to send the encrypted records present a stream or datagram interface. Stream-to-Stream Protocols: TLS, tcpcrypt Datagram-to-Datagram Protocols: DTLS, ESP, SRTP, WireGuard Stream-to-Datagram Protocols: QUIC ((Editor's Note: This depends on the interface QUIC exposes to applications.)) Decrypt application data The application can receive data from its transport to be decrypted using record protocol. The decryption step may require that the incoming transport data is presented as a stream or as datagrams, and that the resulting application data is a stream or datagrams. 
Stream-to-Stream Protocols: TLS, tcpcrypt Datagram-to-Datagram Protocols: DTLS, ESP, SRTP, WireGuard Datagram-to-Stream Protocols: QUIC ((Editor's Note: This depends on the interface QUIC exposes to applications.)) Key Expiration The record protocol can signal that its keys are expiring due to reaching a time-based deadline, or a use-based deadline (number of", "comments": "\u2026/security and security/transport dependencies.\nNAME can you please take a look before Wednesday?\nYep, I will review Monday morning.\nI think this is fine. We can do further tweaks in subsequent PRs.", "new_text": "Non-exportable: CurveCP Key Expiration The record protocol can signal that its keys are expiring due to reaching a time-based deadline, or a use-based deadline (number of"}
{"id": "q-en-draft-ietf-tls-ctls-e437072bb69488a9223d17f3a7d2876ce15a9de47b7d001167f41638fc8449b9", "old_text": "more on this, as well as https://mailarchive.ietf.org/arch/msg/tls/ TugB5ddJu3nYg7chcyeIyUqWSbA.]] 2.1.1. To be compatible with the specializations described in this section,", "comments": "This PR looks good to me.\ncTLS enumerates keys in the template, but does not explain how templates can be extended. Suggestion: Clients must reject the whole template if it contains any key they do not understand. The \u201coptional\u201d key holds a map of optional extensions that are safe to ignore.\nThanks for addressing your own issue with a PR. Creating an IANA registry for the keys used in the configuration data makes sense. I would, however, like to point out (and maybe we need to make this explicit) that the configuration information is defined for convenience but is not a mandatory-to-implement functionality for a stack.\nSure, some use cases can pre-arrange, or even hardcode, a specific configuration through a nonstandard channel. Some use cases (where the client and server are developed separately) obviously cannot.", "new_text": "more on this, as well as https://mailarchive.ietf.org/arch/msg/tls/ TugB5ddJu3nYg7chcyeIyUqWSbA.]] contains keys that are not required to be understood by the client. Server operators MUST NOT place a key in this section unless the server is able to determine whether the key is in use based on the client data it receives. A key MUST NOT appear in both the main template and the optional section. 2.1.1. To be compatible with the specializations described in this section,"}
{"id": "q-en-draft-ietf-tls-ctls-e437072bb69488a9223d17f3a7d2876ce15a9de47b7d001167f41638fc8449b9", "old_text": "6. This document requests that a code point be allocated from the \"TLS ContentType registry. This value must be in the range 0-31 (inclusive). The row to be added in the registry has the following", "comments": "This PR looks good to me.\ncTLS enumerates keys in the template, but does not explain how templates can be extended. Suggestion: Clients must reject the whole template if it contains any key they do not understand. The \u201coptional\u201d key holds a map of optional extensions that are safe to ignore.\nThanks for addressing your own issue with a PR. Creating an IANA registry for the keys used in the configuration data makes sense. I would, however, like to point out (and maybe we need to make this explicit) that the configuration information is defined for convenience but is not a mandatory-to-implement functionality for a stack.\nSure, some use cases can pre-arrange, or even hardcode, a specific configuration through a nonstandard channel. Some use cases (where the client and server are developed separately) obviously cannot.", "new_text": "6. 6.1. This document requests that a code point be allocated from the \"TLS ContentType registry. This value must be in the range 0-31 (inclusive). The row to be added in the registry has the following"}
{"id": "q-en-draft-ietf-tls-ctls-e437072bb69488a9223d17f3a7d2876ce15a9de47b7d001167f41638fc8449b9", "old_text": "[[OPEN ISSUE: Should we require standards action for all profile IDs that would fit in 2 octets.]] ", "comments": "This PR looks good to me.\ncTLS enumerates keys in the template, but does not explain how templates can be extended. Suggestion: Clients must reject the whole template if it contains any key they do not understand. The \u201coptional\u201d key holds a map of optional extensions that are safe to ignore.\nThanks for addressing your own issue with a PR. Creating an IANA registry for the keys used in the configuration data makes sense. I would, however, like to point out (and maybe we need to make this explicit) that the configuration information is defined for convenience but is not a mandatory-to-implement functionality for a stack.\nSure, some use cases can pre-arrange, or even hardcode, a specific configuration through a nonstandard channel. Some use cases (where the client and server are developed separately) obviously cannot.", "new_text": "[[OPEN ISSUE: Should we require standards action for all profile IDs that would fit in 2 octets.]] 6.2. This document requests that IANA open a new registry entitled \"cTLS Template Keys\", on the Transport Layer Security (TLS) Parameters page, with a \"Specification Required\" registration policy and the following initial contents: "}
{"id": "q-en-draft-ietf-tls-ctls-eaded40a0c0cd00de67ef008b831621cff59808741a70639f04f087b03d3e7ea", "old_text": "https://mailarchive.ietf.org/arch/msg/tls/ TugB5ddJu3nYg7chcyeIyUqWSbA. contains keys that are not required to be understood by the client. Server operators MUST NOT place a key in this section unless the server is able to determine whether the key is in use", "comments": "FYI, this PR now moves out of the ClientHello back into (now called ). This avoids confusing questions about how to demultiplex profiles that use different modes.\nLGTM.\nThis would allow recipients to detect the end of a handshake message from the CTLSPlaintext header alone, allowing the recipient to reconstruct the entire handshake message before parsing it. Currently, a streaming parser is required in order to detect the end. Suggested implementation: Senders set contenttype = ctlshandshake_end for a message\u2019s final fragment.\nI can see why you want that, but it's not generally a TLS property, and I'm reluctant to change things.\nAFAIK, TLS does not require a streaming ClientHello parser. The recipient can read (uint24) and then buffer the remainder before parsing. cTLS, as currently specified, does not have . This means that the recipient has to cross all the way down into the extension parsing code before it finds out that the ClientHello is still incomplete. Given the complex template-driven handshake parsing code, I think making a streaming parser required is ill-advised. Given the desire for compactness, I think a flag of some kind on the final handshake message record seems like a reasonable solution, replacing the uint24 length. It would also simplify Pseudorandom cTLS. However, if that's not appealing, you could simply reintroduce .\nNAME proposed a solution on slide 5 : I'm fine with that, so long as it doesn't end up requiring recipients to do something heroic to reconstruct the ClientHello. 
Thus, \"framing = none\" should imply that messages cannot be fragmented at all (max size limited to 2^14-1). I'm not sure I see the utility of the explicit distinction between \"length\" and \"full\". Surely \"length\" is not usable with DTLS due to reordering, and \"full\" is redundant in TLS/TCP. Perhaps we only need a single bit to enable/disable framing, and length-vs-full can be implicit from the transport.\nI could live with this. Objections?\nSolved by\nThe current text doesn\u2019t mention this, although it seems relatively clear how it\u2019s expected to work (using the DTLS Handshake struct).", "new_text": "https://mailarchive.ietf.org/arch/msg/tls/ TugB5ddJu3nYg7chcyeIyUqWSbA. If true, handshake messages MUST be conveyed inside a \"Handshake\" (RFC8446, Section 4) struct on stream transports, or a \"DTLSHandshake\" (RFC9147, Section 5.2) struct on datagram transports, and MAY be broken into multiple records as in TLS and DTLS. Otherwise, each handshake message is conveyed in a \"CTLSHandshake\" or \"CTLSDatagramHandshake\" struct (ctlshandshake), which MUST be the payload of a single record. contains keys that are not required to be understood by the client. Server operators MUST NOT place a key in this section unless the server is able to determine whether the key is in use"}
{"id": "q-en-draft-ietf-tls-ctls-eaded40a0c0cd00de67ef008b831621cff59808741a70639f04f087b03d3e7ea", "old_text": "value to distinguish cTLS plaintext records from encrypted records, TLS/DTLS records, and other protocols using the same 5-tuple. Encrypted records use DTLS 1.3 RFC9147 record framing, comprising a configuration octet followed by optional connection ID, sequence number, and length fields. The encryption process and additional", "comments": "FYI, this PR now moves out of the ClientHello back into (now called ). This avoids confusing questions about how to demultiplex profiles that use different modes.\nLGTM.\nThis would allow recipients to detect the end of a handshake message from the CTLSPlaintext header alone, allowing the recipient to reconstruct the entire handshake message before parsing it. Currently, a streaming parser is required in order to detect the end. Suggested implementation: Senders set contenttype = ctlshandshake_end for a message\u2019s final fragment.\nI can see why you want that, but it's not generally a TLS property, and I'm reluctant to change things.\nAFAIK, TLS does not require a streaming ClientHello parser. The recipient can read (uint24) and then buffer the remainder before parsing. cTLS, as currently specified, does not have . This means that the recipient has to cross all the way down into the extension parsing code before it finds out that the ClientHello is still incomplete. Given the complex template-driven handshake parsing code, I think making a streaming parser required is ill-advised. Given the desire for compactness, I think a flag of some kind on the final handshake message record seems like a reasonable solution, replacing the uint24 length. It would also simplify Pseudorandom cTLS. However, if that's not appealing, you could simply reintroduce .\nNAME proposed a solution on slide 5 : I'm fine with that, so long as it doesn't end up requiring recipients to do something heroic to reconstruct the ClientHello. 
Thus, \"framing = none\" should imply that messages cannot be fragmented at all (max size limited to 2^14-1). I'm not sure I see the utility of the explicit distinction between \"length\" and \"full\". Surely \"length\" is not usable with DTLS due to reordering, and \"full\" is redundant in TLS/TCP. Perhaps we only need a single bit to enable/disable framing, and length-vs-full can be implicit from the transport.\nI could live with this. Objections?\nSolved by\nThe current text doesn\u2019t mention this, although it seems relatively clear how it\u2019s expected to work (using the DTLS Handshake struct).", "new_text": "value to distinguish cTLS plaintext records from encrypted records, TLS/DTLS records, and other protocols using the same 5-tuple. The \"profile_id\" field MUST identify the profile that is in use. A zero-length ID corresponds to the cTLS default protocol. The server's reply does not include the \"profile_id\", because the server must be using the same profile indicated by the client. Encrypted records use DTLS 1.3 RFC9147 record framing, comprising a configuration octet followed by optional connection ID, sequence number, and length fields. The encryption process and additional"}
{"id": "q-en-draft-ietf-tls-ctls-eaded40a0c0cd00de67ef008b831621cff59808741a70639f04f087b03d3e7ea", "old_text": "2.3. The cTLS handshake framing is same as the TLS 1.3 handshake framing, except for two changes: The length field is omitted. The HelloRetryRequest message is a true handshake message instead of a specialization of ServerHello. 3. In general, we retain the basic structure of each individual TLS handshake message. However, the following handshake messages have been modified for space reduction and cleaned up to remove pre-TLS 1.3 baggage. 3.1. The cTLS ClientHello is defined as follows. The \"profile_id\" field MUST identify the profile that is in use. A zero-length ID corresponds to the cTLS default protocol. 3.2. We redefine ServerHello in the following way. 3.3. The HelloRetryRequest has the following format. The HelloRetryRequest is the same as the ServerHello above but without the unnecessary sentinel Random value.", "comments": "FYI, this PR now moves out of the ClientHello back into (now called ). This avoids confusing questions about how to demultiplex profiles that use different modes.\nLGTM.\nThis would allow recipients to detect the end of a handshake message from the CTLSPlaintext header alone, allowing the recipient to reconstruct the entire handshake message before parsing it. Currently, a streaming parser is required in order to detect the end. Suggested implementation: Senders set contenttype = ctlshandshake_end for a message\u2019s final fragment.\nI can see why you want that, but it's not generally a TLS property, and I'm reluctant to change things.\nAFAIK, TLS does not require a streaming ClientHello parser. The recipient can read (uint24) and then buffer the remainder before parsing. cTLS, as currently specified, does not have . This means that the recipient has to cross all the way down into the extension parsing code before it finds out that the ClientHello is still incomplete. 
Given the complex template-driven handshake parsing code, I think making a streaming parser required is ill-advised. Given the desire for compactness, I think a flag of some kind on the final handshake message record seems like a reasonable solution, replacing the uint24 length. It would also simplify Pseudorandom cTLS. However, if that's not appealing, you could simply reintroduce .\nNAME proposed a solution on slide 5 : I'm fine with that, so long as it doesn't end up requiring recipients to do something heroic to reconstruct the ClientHello. Thus, \"framing = none\" should imply that messages cannot be fragmented at all (max size limited to 2^14-1). I'm not sure I see the utility of the explicit distinction between \"length\" and \"full\". Surely \"length\" is not usable with DTLS due to reordering, and \"full\" is redundant in TLS/TCP. Perhaps we only need a single bit to enable/disable framing, and length-vs-full can be implicit from the transport.\nI could live with this. Objections?\nSolved by\nThe current text doesn\u2019t mention this, although it seems relatively clear how it\u2019s expected to work (using the DTLS Handshake struct).", "new_text": "2.3. When \"template.handshakeFraming\" is not \"true\", cTLS uses a custom handshake framing that saves space by relying on the record layer for message lengths. (This saves 3 bytes per message compared to TLS, or 9 bytes compared to DTLS.) This compact framing is defined by the \"CTLSHandshake\" and \"CTLSDatagramHandshake\" structs. Any handshake type registered in the IANA TLS HandshakeType Registry can be conveyed in a \"CTLS[Datagram]Handshake\", but not all messages are actually allowed on a given connection. This definition shows the messages types supported in \"CTLSHandshake\" as of TLS 1.3 and DTLS 1.3, but any future message types are also permitted. 
Each \"CTLSHandshake\" or \"CTLSDatagramHandshake\" MUST be conveyed as a single \"CTLSClientPlaintext.fragment\", \"CTLSServerPlaintext.fragment\", or \"CTLSCiphertext.encrypted_record\", and is therefore limited to a maximum length of \"2^16-1\" or less. When operating over UDP, large \"CTLSDatagramHandshake\" messages will also require the use of IP fragmentation, which is sometimes undesirable. Operators can avoid these concerns by setting \"template.handshakeFraming = true\". 3. In general, we retain the basic structure of each individual TLS or DTLS handshake message. However, the following handshake messages have been modified for space reduction and cleaned up to remove pre- TLS 1.3 baggage. 3.1. The cTLS ClientHello is defined as follows. 3.2. We redefine ServerHello in the following way. 3.3. In cTLS, the HelloRetryRequest message is a true handshake message instead of a specialization of ServerHello. The HelloRetryRequest has the following format. The HelloRetryRequest is the same as the ServerHello above but without the unnecessary sentinel Random value."}
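The record above quotes a 3-byte saving per handshake message versus TLS and 9 bytes versus DTLS for the compact cTLS framing. As an illustrative aside (not part of the dataset or the draft), a minimal sketch of where those bytes come from, assuming the `CTLSDatagramHandshake` keeps a 2-byte message_seq and that the record layer carries the length:

```python
# Sketch of handshake-header overhead, per the byte counts quoted in the
# record above. Helper names are hypothetical, not a real TLS API.

def tls_handshake_frame(msg_type: int, body: bytes) -> bytes:
    # TLS 1.3 (RFC 8446): 1-byte msg_type + 3-byte uint24 length + body.
    return bytes([msg_type]) + len(body).to_bytes(3, "big") + body

def dtls_handshake_frame(msg_type: int, message_seq: int, body: bytes) -> bytes:
    # DTLS 1.3 (RFC 9147): TLS header plus 2-byte message_seq,
    # 3-byte fragment_offset, and 3-byte fragment_length (12 bytes total).
    header = bytes([msg_type]) + len(body).to_bytes(3, "big")
    header += message_seq.to_bytes(2, "big")
    header += (0).to_bytes(3, "big")        # fragment_offset
    header += len(body).to_bytes(3, "big")  # fragment_length
    return header + body

def ctls_handshake_frame(msg_type: int, body: bytes) -> bytes:
    # Compact cTLS framing: type only; the record layer supplies the length.
    return bytes([msg_type]) + body

def ctls_datagram_handshake_frame(msg_type: int, message_seq: int,
                                  body: bytes) -> bytes:
    # Datagram variant keeps message_seq for reordering (assumed 2 bytes).
    return bytes([msg_type]) + message_seq.to_bytes(2, "big") + body
```

Comparing header sizes (4 vs 1, and 12 vs 3) reproduces the 3- and 9-byte savings the record cites.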
{"id": "q-en-draft-ietf-tls-ctls-eaded40a0c0cd00de67ef008b831621cff59808741a70639f04f087b03d3e7ea", "old_text": "6.3. This document requests that IANA open a new registry entitled \"Well- known cTLS Profile IDs\", on the Transport Layer Security (TLS) Parameters page, with the following columns:", "comments": "FYI, this PR now moves out of the ClientHello back into (now called ). This avoids confusing questions about how to demultiplex profiles that use different modes.\nLGTM.\nThis would allow recipients to detect the end of a handshake message from the CTLSPlaintext header alone, allowing the recipient to reconstruct the entire handshake message before parsing it. Currently, a streaming parser is required in order to detect the end. Suggested implementation: Senders set contenttype = ctlshandshake_end for a message\u2019s final fragment.\nI can see why you want that, but it's not generally a TLS property, and I'm reluctant to change things.\nAFAIK, TLS does not require a streaming ClientHello parser. The recipient can read (uint24) and then buffer the remainder before parsing. cTLS, as currently specified, does not have . This means that the recipient has to cross all the way down into the extension parsing code before it finds out that the ClientHello is still incomplete. Given the complex template-driven handshake parsing code, I think making a streaming parser required is ill-advised. Given the desire for compactness, I think a flag of some kind on the final handshake message record seems like a reasonable solution, replacing the uint24 length. It would also simplify Pseudorandom cTLS. However, if that's not appealing, you could simply reintroduce .\nNAME proposed a solution on slide 5 : I'm fine with that, so long as it doesn't end up requiring recipients to do something heroic to reconstruct the ClientHello. Thus, \"framing = none\" should imply that messages cannot be fragmented at all (max size limited to 2^14-1). 
I'm not sure I see the utility of the explicit distinction between \"length\" and \"full\". Surely \"length\" is not usable with DTLS due to reordering, and \"full\" is redundant in TLS/TCP. Perhaps we only need a single bit to enable/disable framing, and length-vs-full can be implicit from the transport.\nI could live with this. Objections?\nSolved by\nThe current text doesn\u2019t mention this, although it seems relatively clear how it\u2019s expected to work (using the DTLS Handshake struct).", "new_text": "6.3. This document requests that IANA change the name of entry 6 in the TLS HandshakeType Registry from \"hello_retry_request_RESERVED\" to \"hello_retry_request\", and set its Reference field to this document. 6.4. This document requests that IANA open a new registry entitled \"Well- known cTLS Profile IDs\", on the Transport Layer Security (TLS) Parameters page, with the following columns:"}
{"id": "q-en-draft-ietf-tls-ctls-3e2ec919e0f3cb8af70fedb6b77f152336383ff2dc50e5163529d7c8a48cb11d", "old_text": "Omitting the fields and handshake messages required for preserving backwards-compatibility with earlier TLS versions. More compact encodings, for example point compression. A template-based specialization mechanism that allows pre- populating information at both endpoints without the need for negotiation. Alternative cryptographic techniques, such as semi-static Diffie- Hellman. OPEN ISSUE: Semi-static and point compression are never mentioned again. For the common (EC)DHE handshake with pre-established certificates, Stream cTLS achieves an overhead of 53 bytes over the minimum", "comments": "We have decided to moved the compressed elliptic curve representations to an external draft since the issue is equally applicable to regular TLS/DTLS and does not need to be cTLS specific.\nNAME It looks like you need to run .\nIt seems that we have consensus to move compressed representations into new IANA codepoints, rather than try to reuse the existing codepoints with a template-driven encoding. That would make the question of compressed representations orthogonal to cTLS, so this topic can be removed from the draft.\nFixed by", "new_text": "Omitting the fields and handshake messages required for preserving backwards-compatibility with earlier TLS versions. More compact encodings. A template-based specialization mechanism that allows pre- populating information at both endpoints without the need for negotiation. Alternative cryptographic techniques, such as nonce truncation. For the common (EC)DHE handshake with pre-established certificates, Stream cTLS achieves an overhead of 53 bytes over the minimum"}
{"id": "q-en-draft-ietf-tls-esni-b5986ce06daa28fc22b1bf16670dc8d38eff0d825583c7e4d8c41b32d5782e4c", "old_text": "\"encrypted_server_name\" extension. If the client does not retry in either scenario, it MUST report an error to the calling application. 5.1.3. When the server cannot decrypt or does not process the", "comments": "NAME NAME please have a look.\nWe also need to specify server-side HRR behavior, right? Specifically, the server MUST recheck the second ClientHello's ESNI extension before sending the server certificate, otherwise putting the key share into the AD field doesn't really mean anything. The attacker would be able to do a cut-and-paste thing. (We could also incorporate the nonce into the key schedule, but that upsets the document's split mode use case.)\nYep! Thanks for bringing that up. I'll prepare text.\nThank you for the text. LGTM.", "new_text": "\"encrypted_server_name\" extension. If the client does not retry in either scenario, it MUST report an error to the calling application. If the server sends a HelloRetryRequest in response to the ClientHello and the client can send a second updated ClientHello per the rules in RFC8446, the \"encrypted_server_name\" extension values which do not depend on the (possibly updated) ClientHello.KeyShareClientHello, i.e., ClientEncryptedSNI.suite, ClientEncryptedSNI.key_share, and ClientEncryptedSNI.record_digest, MUST NOT change across ClientHello messages. Moreover, ClientESNIInner.nonce and ClientESNIInner.realSNI MUST NOT change across ClientHello messages. Informally, the values of all unencrypted extension information, as well as the inner extension plaintext, must be consistent between the first and second ClientHello messages. 5.1.3. When the server cannot decrypt or does not process the"}
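The HelloRetryRequest rule in the record above reduces to a simple invariant: every ESNI value that does not depend on the (possibly updated) key share must be byte-identical across the two ClientHellos. A minimal sketch of that check, using hypothetical field names (the dataset quotes the struct members, not any concrete API):

```python
# Sketch of the ESNI/HRR consistency rule quoted above. Field names mirror
# ClientEncryptedSNI.suite/key_share/record_digest and
# ClientESNIInner.nonce/realSNI, but the dict representation is assumed.

STABLE_ESNI_FIELDS = ("suite", "key_share", "record_digest", "nonce", "real_sni")

def esni_consistent_across_hrr(first_ch: dict, second_ch: dict) -> bool:
    """True iff every ESNI value that must survive HelloRetryRequest is unchanged."""
    return all(first_ch.get(f) == second_ch.get(f) for f in STABLE_ESNI_FIELDS)
```

A server applying the rule would recheck the second ClientHello with this predicate before proceeding, rejecting the handshake on a mismatch.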
{"id": "q-en-draft-ietf-tls-esni-e36437bf98bccecb0708af5c552ff3260a86aab034cadebc6d0a27a272af0275", "old_text": "existing registry for Alerts (defined in RFC8446), with the \"DTLS-OK\" column being set to \"Y\". 8.3. IANA is requested to create an entry, ESNI(0xff9f), in the existing registry for Resource Record (RR) TYPEs (defined in RFC6895) with \"Meaning\" column value being set to \"Encrypted SNI\". ", "comments": "Because this document now delegates the way the keys are advertised from the RRType ESNI to HTTPSSVC shouldn't the RRType Considerations be removed from this document.", "new_text": "existing registry for Alerts (defined in RFC8446), with the \"DTLS-OK\" column being set to \"Y\". 9. When operating in Split Mode, backend servers will not have access to PaddedServerNameList.sni or ClientESNIInner.nonce without access to the ESNI keys or a way to decrypt ClientEncryptedSNI.encrypted_sni. One way to address this for a single connection, at the cost of having communication not be unmodified TLS 1.3, is as follows. Assume there is a shared (symmetric) key between the client-facing server and the backend server and use it to AEAD-encrypt Z and send the encrypted blob at the beginning of the connection before the ClientHello. The backend server can then decrypt ESNI to recover the true SNI and nonce. 10. Alternative approaches to encrypted SNI may be implemented at the TLS or application layer. In this section we describe several alternatives and discuss drawbacks in comparison to the design in this document. 10.1. 10.1.1. In this variant, TLS Client Hellos are tunneled within early data payloads belonging to outer TLS connections established with the client-facing server. This requires clients to have established a previous session --- and obtained PSKs --- with the server. The client-facing server decrypts early data payloads to uncover Client Hellos destined for the backend server, and forwards them onwards as necessary. 
Afterwards, all records to and from backend servers are forwarded by the client-facing server - unmodified. This avoids double encryption of TLS records. Problems with this approach are: (1) servers may not always be able to distinguish inner Client Hellos from legitimate application data, (2) nested 0-RTT data may not function correctly, (3) 0-RTT data may not be supported - especially under DoS - leading to availability concerns, and (4) clients must bootstrap tunnels (sessions), costing an additional round trip and potentially revealing the SNI during the initial connection. In contrast, encrypted SNI protects the SNI in a distinct Client Hello extension and neither abuses early data nor requires a bootstrapping connection. 10.1.2. In this variant, client-facing and backend servers coordinate to produce \"combined tickets\" that are consumable by both. Clients offer combined tickets to client-facing servers. The latter parse them to determine the correct backend server to which the Client Hello should be forwarded. This approach is problematic due to non- trivial coordination between client-facing and backend servers for ticket construction and consumption. Moreover, it requires a bootstrapping step similar to that of the previous variant. In contrast, encrypted SNI requires no such coordination. 10.2. 10.2.1. In this variant, clients request secondary certificates with CERTIFICATE_REQUEST HTTP/2 frames after TLS connection completion. In response, servers supply certificates via TLS exported authenticators I-D.ietf-tls-exported-authenticator in CERTIFICATE frames. Clients use a generic SNI for the underlying client-facing server TLS connection. Problems with this approach include: (1) one additional round trip before peer authentication, (2) non-trivial application-layer dependencies and interaction, and (3) obtaining the generic SNI to bootstrap the connection. 
In contrast, encrypted SNI induces no additional round trip and operates below the application layer. 11. The design described here only provides encryption for the SNI, but not for other extensions, such as ALPN. Another potential design would be to encrypt all of the extensions using the same basic structure as we use here for ESNI. That design has the following advantages: It protects all the extensions from ordinary eavesdroppers. If the encrypted block has its own KeyShare, it does not necessarily require the client to use a single KeyShare, because the client's share is bound to the SNI by the AEAD (analysis needed). It also has the following disadvantages: The client-facing server can still see the other extensions. By contrast we could introduce another EncryptedExtensions block that was encrypted to the backend server and not the client-facing server. It requires a mechanism for the client-facing server to provide the extension-encryption key to the backend server (as in communicating-sni) and thus cannot be used with an unmodified backend server. A conformant middlebox will strip every extension, which might result in a ClientHello which is just unacceptable to the server (more analysis needed). "}
{"id": "q-en-draft-ietf-tls-esni-5ef4585428f30afe26bf73e263abdb3f98a3ba493c11ed63b893cabee2e0f331", "old_text": "encrypt the ClientHello. See hpke-map for information on how a cipher suite maps to corresponding HPKE algorithm identifiers. the largest name the server expects to support. If the server supports arbitrary wildcard names, it SHOULD set this value to 256. Clients SHOULD reject ESNIConfig as invalid if maximum_name_length is greater than 256. A list of extensions that the client can take into consideration when generating a Client Hello message. The purpose of the field", "comments": "I've tried to capture what I think is a reasonable way to handle padding now we've changed from ESNI->ECHO. If we could do something like this, that'd be better than what's in -06. Note that I'd be even happier if we entirely eliminated ECHOConfig.minimuminnerlength - I don't think it actually adds any value and it represents yet another way to get a configuration wrong.\nThanks for the fixes. On the general point, I think that if we keep the flexibility for the client to vary the inner/outer CH however it chooses, then we will also have to depend on clients to figure out how to pad well for whatever they do. If we constrain the inner/outer variance a lot, then we could do better. It also strikes me that the minimuminnerlength is sort of in tension with the idea of compression - if a server has to set the minimuminnerlength to 300ish to handle clients that don't compress, then clients that do compress either don't get the benefit or have to ignore the minimuminnerlength.\nI don't think this PR works. The server does not have enough information to report a useful value, and the value the server reports is not useful to maintain good anonymity sets. The server does not know that the client supports some large extension or key share type (or lots of cipher suites, ALPN protocols, etc). are especially fun. 
It also doesn't know which extensions the client will compress from the outer ClientHello and which it will reencode. Conversely, the value the server sets isn't useful. If the client implementation has a smaller baseline compression and extensions, we waste bytes without much benefit. Worse, if the client implementation has a larger baseline compression and extensions, this PR regresses anonymity. The ClientHello will then exceed and we don't hide the lengths of the sensitive bits. Stepping back, we want to hide the distribution of ClientHelloInner lengths within some anonymity set. Most of the contributors to that length come from local client configuration: how many ciphers you support, what ALPN protocols, etc. The client is the one that knows, e.g., it sometimes asks servers for protocols A and B and sometimes for A and C. It largely has enough information to pad itself. The main exception is the server name. Realistically we need to trim anonymity sets down to colocated services, and the client doesn't know that distribution of names. Thus, we want servers to report the value in ECHOConfig. Maybe we'll need more such statistics later, which is what extensions are for. It's annoying this is a bit complicated, but we've signing up to encrypt far more things now. I should note this presumes an anonymity set of similarly-configured clients. I think, realistically, that's all we can say rigorous things about. It's nice to reduce visible artifacts of configuration, which is why we additionally round up to multiples of 16, but that's something we can burn into the protocol without ECHOConfig support.\nHiya, Sorry, not seeing that last. The PR just says (or was meant to say) to pad to some multiple of 16 in that case which I agree isn't much guidance, but does allow doing something that works. (As well as allowing things that don't work well;-) I agree that all this really needs to be driven by the client. Well, yes and no. 
I don't believe a server config is useful for that, (as I've argued before and still do;-) ISTM the max server name value will almost always be set to 254. May as well tell the client to pad as do that IMO. Not quite sure if I agree with that last para or not TBH! S.\nWell, if we were satisfied just padding names to a multiple of 16, we wouldn't need any of this. We'd just say to pad to a multiple 16 and move on. :-) In the current name-focused spelling, the client is always able to apply a smarter padding decision based on the server's knowledge of the name distribution. With this PR, clients whose local configuration exceeds or is near lose this and hit the baseline multiple of 16 behavior. Clients whose local configuration is decently below do hide the name distribution, but using many more bytes than they would with the current text. We have some metrics in Chrome on the distribution of DNS names. It's nowhere near 254. But, regardless, if you don't believe a server config is even useful for the name and agree with me that other extensions ought to be client-driven, I'm confused, why this PR? This PR seems to go in the opposite direction. Hehe. Which part?\nHIya, Yes. That's IMO the downside of having any server config. (To answer your question below, I prefer not having any server config at all but proposed this as I think it's better than draft-06.) Having also looked at some passive DNS stuff, I fully agree. But ISTM for many scenarios the entity that generates the ECHOConfig won't know the distribution of names in use (e.g. whoever configures OpenSSL won't, same with Apache etc.) so it's likely they'll have to go for whatever is the max (i.e. 254). Same is true for anyone with a wild-card server cert in play somewhere. See above. I'm trying for either a modest improvement over draft-06 or for us all to arrive at a real improvement by ditching the server config entirely:-) I like the last sentence. 
I think I dislike the 1st:-) Cheers, S.\nIt's not a downside of the draft-06 formulation, is it? The client learns something targeted to the name and can pad accordingly. Even with a wildcard server cert, the server should still know the distribution. For instance, isn't going to see anything terribly long most of the time. (Looks like the registration limit is 39 characters.) For reference, this is how long a 254-character hostname is: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa In order to enjoy wide adoption, ECHO can't be too much of a waste of resources. I think it thus makes sense to target a different maximum here. I guess my view is that, because of this issue above, this is a regression over the draft-06 formulation, even though it looks more general.\nHiya, Draft-06 assumes that no other inner CH content has the property that server guidance will help. My claim is that that guidance is useless in practice, and therefore unnecessary. I think the server config'd length in the PR is a little less useless:-) Some servers will. Some will not. Many (most?) server ECHOConfig instances will make use of default settings. Those'll pick 254 because adding a new VirtualHost to an apache instance (or similar) won't be co-ordinated with making a new ECHOConfig in general. And even it it were, (but it won't!) we get a TTL's worth of mismatch. If we don't provide a way for clients to handle the case where the server config is too short, again servers will publish 254. I am more interested in random smaller web sites/hosters myself. I think ECHO should work for those as well as we can make it, (albeit with smaller anonymity sets), and this tie between web server config and ECHOConfig is just a bad plan for such. AFAIK, all current deployments do just that. 
To be fair, that's largely driven by CF's draft-02 deployment, but I think that backs up my argument that if there's a max, that'll almost always be chosen, which implies it's a useless field that leads to the inefficiency you note. Then it can't be the max:-) The 254 value matches the max for servername IIRC. Ok. We disagree. But what do you think the client should do? Draft-06 is IMO plainly incorrect in that it says to pad in a way that'd expose other lengths should those be sensitive. If your answer were pad the SNI to at least ECHOConfig.maximumname_length and then make the overall a multiple of 16, with as many added 16 octets blocks as the client chooses, then a) that's != draft-06 and b) is quite close to this PR and c) just failing if the actual SNI in the inner CH is longer than the config setting seems quite wrong to me and d) hardcodes the assumption that SNI is the only thing where the server may know better. Cheers, S.\nIf we come up with other places where server guidance is useful, that can be addressed with extensions. But, gotcha, I suppose that is where we disagree. I think server guidance on the overall ClientHello size is entirely useless, while server guidance on the name is may be useful. Agreed that it should also work for smaller things. I don't follow how this PR helps in that regard. It seems they still need to come up with a random value or a default, but now it's even less clear how to set it well. Er, yes, sorry I meant to say target a different public length. That is, 254 is so far from the maximum domain name in practice (As it should be! Hard length limits ought to be comfortably away from the actual requirement.), so padding up to it is wasteful. I think it would be better to pad up to a tighter value and, if we exceed it, accept that we only have the fallback multiple of 16 padding. (Or maybe a different strategy.) 
It does result in smaller anonymity sets in edge cases, but I think that's a reasonable default tradeoff given other desires like broader deployment and not wasting too much of users' data. I'm thinking something along the lines of: For each field, determine how much it makes sense to pad given what kinds of things it sends. If it varies ALPN, maybe round up to the largest of those. Most of this can be determined without server help. For fields where server help is useful, like the name, apply that help. Right now it's simply a function. I'm also open to other strategies. Sum all that padding together. In order to generally reduce entropy across different kinds of clients, maybe apply some additional overall padding, like rounding up to a multiple of 16, which cuts down the state space by 16x and doesn't cost much. (Or maybe we should round up to the nearest ? That avoids higher resolution on longer values while bounding the fraction of bytes wasted by this step.) That said, it's worth keeping in mind that clients already vary quite a bit in the cleartext portions of the ClientHello. There's a bit of tension between wanting to look like the other clients and wanting to ship new security improvements. This is different from this PR because this PR asks the server to report a value including things it doesn't know anything about (how large the client-controlled parameters are). I don't quite follow (c). It seems this PR and the above proposal have roughly the same failure modes when something is too long. The difference is that 's failure mode is only tripped on particular long names (something the server knows the distribution of), while this PR's equivalent failure mode is based on factors outside the server's control, yet the server is responsible for the value.\nClarification: this is for each field where the client doesn't reference the outer ClientHello.
If the client wishes to do that, it means the client doesn't consider that field secret so there isn't anything to pad.\nI like NAME suggested algorithm, as an ideal padding function requires input from both the client and server. (Ideally, servers would send padding recommendations for each extension that might vary specifically for them, such as the server name. They don't have insight into anything else, so clients need to make padding decisions based on local configuration.) NAME what do you think?\nCould live with it, but still prefer no server config. I'm looking a bit at some numbers and will try make a concrete suggestion later today. If I don't get that done, it seems fair to accept NAME algo and move on. Cheers, S.\nIf we can get away with no server config, I'm also entirely happy with that. Fewer moving parts is always great. We certainly could do that by saying names are padded up to 254, but I think that's too wasteful. But if we're happy with some global set of buckets that makes everyone happy (be it multiples of 16, some funny exponential thing), great!\n(trimming violently and responding slowly:-) So your steps 1-6 is a better scheme than draft-06. I still think step 2 would be better as something like: ECHOConfig contains maxnamelen as before but we RECOMMEND it be zero and a) if the name to be padded is longer than maxnamelen, then names will be padded as follows: P=32-(len(name)%32)+headsortails*32, or b) if the name to be padded is shorter than maxnamelen then we just do as the server asked and pad to that length. That (hopefully:-) means a default of \"pad out to the next multiple of 32, then toss a coin and add another 32 if the coin landed as heads.\" If we landed somewhere there, I'd be happy to make this PR say that or to make a new one or whatever. 
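The formula proposed above can be made concrete. A minimal Python sketch, assuming a byte-counted name and a fair coin; the helper name and the use of `secrets` for the coin toss are illustrative, not from the draft:

```python
import secrets

def name_padding_length(name: str, maxnamelen: int = 0) -> int:
    """Padding length P for the inner server_name, per the proposal above:
    a) name longer than maxnamelen (or maxnamelen == 0, the suggested
       RECOMMENDED default): P = 32 - (len(name) % 32) + heads_or_tails * 32,
       i.e. pad out to the next multiple of 32 and, on a coin toss, add
       one extra 32-octet block;
    b) name fits within maxnamelen: pad the name up to maxnamelen,
       as the server asked."""
    n = len(name)
    if 0 < maxnamelen and n <= maxnamelen:
        return maxnamelen - n
    heads_or_tails = secrets.randbelow(2)  # the coin toss
    return 32 - (n % 32) + heads_or_tails * 32
```

With maxnamelen = 0, a 30-octet name ends up 32 or 64 octets in total, so an observer only sees one of a few 32-octet buckets rather than the exact name length.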
According to some data I have, the CDFs for names and for these padded lengths would be as follows:
Len, CDF, Padded-CDF
32, 81.28, 40.64
64, 97.87, 89.57
96, 99.31, 98.59
128, 99.87, 99.59
That means that 81.28% of names are <32 octets long and 40.64% of name+namepadding will be 32 octets long. 97.87% of names are <64 octets and 89.57% of name+namepadding are 64 octets long. I think we should RECOMMEND setting maxnamelen to zero unless an ESNI key generator has a specific reason to set it to a non-zero value. ISTM the use-case for non-zero values is where the anonymity set has only a few very long names that would still stand out via length differences with ECHO. I'm basically fine that we default to not serving that use-case well, as I don't believe it's real. (But I could be wrong;-) Cheers, S.\nCould the sender always pad their packets to at least 1200 bytes or something, using an encrypted extension in the ClientHello? I don't understand the focus on server input to a padding function that would ideally fill out one packet.\nNAME what's the value in adding zero or one 32B blocks after rounding? Why not just stick with what was done, say, in RFC8467? (We could make it read something like, \"clients may add zero or one blocks.\" Either way, I'm more or less fine with the suggested change. Would you mind updating this PR to match?)\nPer above, clients could use help from servers in choosing appropriate padding lengths. (Only servers know their name anonymity set.)\nHiya, Say if the anonymity set has lots of names <32 length (~81% of names in the data I have) but a small number between 32 and 64 (~16% in my data). Then this should disguise the use of the longer names at the expense of sending more octets. Would be v. happy to know if that made sense to others. Cheers, S.\nA friendly amendment below (but one I think is important)... I think that last really needs to be \"Only servers can know their name anonymity set.
But many servers will not know that, either due to wildcard certs or because names/vhosts are added and removed all the time independent of whatever is in the DNS.\" Cheers, S.\nIt makes sense, I'm just not sure it's worth the cost. That said, this will always be an imperfect solution, so I'm happy either way.\nYep. Me neither. It is cheaper than padding to 254 octets for everyone though;-) Partly, I'm not that sympathetic towards anyone who wants to hide a crazily-long server_name. OTOH, I guess there may be a few who can't easily change names so we probably ought try do something for 'em. Cheers, S.\nAgree that only servers could know their name anonymity set. But what is the cost of always padding out the ClientHello so it approaches the MTU? Just trying to understand why the spec attempts to economize on the number of bytes in the ClientHello packet.\nAs far as I know, there's no reason other than folks might not want to always \"waste\" these bytes.\nPadding the ClientHello up to an MTU isn't free. See the numbers here on what a 400-byte increase costs. URL Likewise, also from that post and a , post-quantum key shares will likely exceed a packet anyway. I think our padding scheme should be designed with that expectation in mind (e.g., the mechanism fails this because a PQ key share in the inner ClientHello will blow past the built-in server assumptions on client behavior). Canary builds of Chrome already have code for such a thing behind a flag. Edit: Added a link to the later experiment, which I'd forgotten was a separate post.\nIt's not quite clear what's going on in those results. For example, the mobile results are close to zero. Without seeing confidence intervals, and the distributions of the full packet sizes, I don't think this experiment settles this issue (although it could, with more detail). Also, is there a theory on /why/ 400 bytes might add latency?
If we're going to design the padding mechanism with PQ key shares in mind, but not conventional ones, that should be in the document.\nNAME would know the details of that experiment, but I don't think the idea that sending more bytes takes more time would be terribly surprising! :-P I don't think anyone's suggested designing without conventional key shares in mind. They are what's deployed today, after all. However, we shouldn't design only for a small unpadded ClientHello. PQ key shares will need larger ones. Session tickets also go in the ClientHello, and it's common for TLS servers to stick client certificates inside tickets, so you can already get large ones today.\nLet's assume the results for \"SI\" in the first link actually do show a significant delta. The question is whether per-packet costs dominate or per-byte costs do, and whether the experiment sometimes increased the number of packets (e.g. by sometimes bumping the ultimate ClientHello size above 1280). See Section 5.1 in URL for device performance metrics on bytes vs packets. Looking at the ClientHello traffic on my computer, I can see that Safari is sending ClientHello messages that are about 500-600 bytes. My copy of Chrome seems to be sending CurveCECPQ2 (16696?) shares, and its ClientHello messages are fragmented.\nNAME will you be able to update this as above?\nwould the spec allow me to put an arbitrarily-sized extension in the encrypted part of the ClientHello? if so, I don't care about the padding schemes.\nYes, of course! The padding policies described here are guidance at best.\nI've updated the PR to try reflect the discussion above. Please take a peek and see if it seems useful now.\nThis looks like the right idea. I'll review this again as well, but it's probably better to wait for NAME suggestions to be addressed.\nThanks for all the changes Chris. I think I've actioned all those as requested.
If not, I tried, but failed:-)\nNAME do you need any help moving this forward?\nDoing other stuff today but feel free to take the text and then do more edits. Or I can look at it tomorrow. Either's fine by me. S.\nThanks for taking the time. None of my comments are strongly-held opinions (meaning they only need to be addressed at all if others agree that the issues are important).\nStrong +1 to NAME -- thank you for the work here, NAME\nNAME can you please resolve conflicts? NAME NAME can you please review?\nFair point. The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. And I'd guess that only a few names in that set would be popular/common, so we could be giving away the game whenever an ECHO is 32 octets longer than the usual. The random padding could mitigate that passive attack but yes would be vulnerable to an active reset based counter. I'd also be fine with just living with the issue in which case we might want to note that using ECHO with uncommonly long names is a bad idea.\nLet's just drop the random addition. Servers don't (can't) do a padding check anyway, so clients can always choose to make this longer if desired.\nNAME can you update this PR and resolve the conflicts?\n\"Fair point. The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. \" Which is precisely why we want the server to provide the longest name, so the client can pad to it.\nAs discussed on today's call, let's do the following: Resolve merge conflicts. Drop random padding for the SNI. And add text which says that if the client's name is larger than maxnamelength, it rounds up to the nearest 32B or 64B boundary. 
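The resolution from the call above can be sketched in a few lines. A hypothetical helper, assuming the 32B boundary is chosen (64B was the other option discussed) and that max_name_length comes from the server's ECHOConfig; the function name is illustrative:

```python
def padded_name_length(name_len: int, max_name_length: int, boundary: int = 32) -> int:
    """Total length of the padded name per the call's resolution:
    no random addition; a name within max_name_length is padded up to
    exactly that value, and a longer name is rounded up to the next
    boundary (32B here; 64B was the alternative discussed)."""
    if name_len <= max_name_length:
        return max_name_length
    return -(-name_len // boundary) * boundary  # ceiling to the boundary
```

Being deterministic, this avoids the retry-probing concern raised against random padding: an attacker forging TCP RSTs sees the same length on every retry.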
NAME do you need, or would you like, any help with this?\nI think that this approach is generally better, but it runs afoul of divergence in client configuration that could dominate the calculation. For instance, a client that sends two key shares will have very different padding requirements to one that sends only one. For that to work, we need to assume that implementation-/deployment-specific variation of the inner CH is largely eliminated as a result of the compression scheme we adopt. That also assumes that these variations in client behaviour are not privacy-sensitive. I think that we should be directly acknowledging those assumptions if that is indeed the case. If there is a significant variation in size due to client-specific measures, then we're in a bit of an awkward position. That might render any similarly simple scheme ineffective. For instance, a client that later supports a set of ALPN identifiers that could vary in length depending on context cannot use this signal directly. It has to apply its own logic about the padding of that extension. That a client is doing this won't be known to a server that deploys this today. In that case, the advice from the server (which knows most about the anonymity set across which it is hoping to spread the resulting CH), isn't going to be forced through an additional filter at the client. The results are unlikely to be good without a great deal of care.I am generally fine with this, with the exception of the random padding, which I think should be omitted. It's hard to analyze and the value is unclear. In particular, many clients will retry if they receive TCP errors, and so the attacker can learn information about the true minimum value by forging TCP RSTs and looking at the new CH.", "new_text": "encrypt the ClientHello. See hpke-map for information on how a cipher suite maps to corresponding HPKE algorithm identifiers. The largest name the server expects to support, if known. 
If this value is not known it can be set to zero, in which case clients SHOULD use the inner ClientHello padding scheme described below. That could happen if wildcard names are in use, or if names can be added or removed from the anonymity set during the lifetime of a particular resource record value. A list of extensions that the client can take into consideration when generating a Client Hello message. The purpose of the field"}
{"id": "q-en-draft-ietf-tls-esni-5ef4585428f30afe26bf73e263abdb3f98a3ba493c11ed63b893cabee2e0f331", "old_text": "an \"echo_nonce\" extension TLS padding Padding SHOULD be P = L - D bytes, where L = ECHOConfig.maximum_name_length, rounded up to the nearest multiple of 16 D = len(dns_name), where dns_name is the DNS name in the ClientHelloInner \"server_name\" extension When offering an encrypted ClientHello, the client MUST NOT offer to resume any non-ECHO PSKs. It additionally MUST NOT offer to resume", "comments": "I've tried to capture what I think is a reasonable way to handle padding now we've changed from ESNI->ECHO. If we could do something like this, that'd be better than what's in -06. Note that I'd be even happier if we entirely eliminated ECHOConfig.minimuminnerlength - I don't think it actually adds any value and it represents yet another way to get a configuration wrong.\nThanks for the fixes. On the general point, I think that if we keep the flexibility for the client to vary the inner/outer CH however it chooses, then we will also have to depend on clients to figure out how to pad well for whatever they do. If we constrain the inner/outer variance a lot, then we could do better. It also strikes me that the minimuminnerlength is sort of in tension with the idea of compression - if a server has to set the minimuminnerlength to 300ish to handle clients that don't compress, then clients that do compress either don't get the benefit or have to ignore the minimuminnerlength.\nI don't think this PR works. The server does not have enough information to report a useful value, and the value the server reports is not useful to maintain good anonymity sets. The server does not know that the client supports some large extension or key share type (or lots of cipher suites, ALPN protocols, etc). are especially fun. It also doesn't know which extensions the client will compress from the outer ClientHello and which it will reencode. 
Conversely, the value the server sets isn't useful. If the client implementation has a smaller baseline compression and extensions, we waste bytes without much benefit. Worse, if the client implementation has a larger baseline compression and extensions, this PR regresses anonymity. The ClientHello will then exceed and we don't hide the lengths of the sensitive bits. Stepping back, we want to hide the distribution of ClientHelloInner lengths within some anonymity set. Most of the contributors to that length come from local client configuration: how many ciphers you support, what ALPN protocols, etc. The client is the one that knows, e.g., it sometimes asks servers for protocols A and B and sometimes for A and C. It largely has enough information to pad itself. The main exception is the server name. Realistically we need to trim anonymity sets down to colocated services, and the client doesn't know that distribution of names. Thus, we want servers to report the value in ECHOConfig. Maybe we'll need more such statistics later, which is what extensions are for. It's annoying this is a bit complicated, but we're signing up to encrypt far more things now. I should note this presumes an anonymity set of similarly-configured clients. I think, realistically, that's all we can say rigorous things about. It's nice to reduce visible artifacts of configuration, which is why we additionally round up to multiples of 16, but that's something we can burn into the protocol without ECHOConfig support.\nHiya, Sorry, not seeing that last. The PR just says (or was meant to say) to pad to some multiple of 16 in that case which I agree isn't much guidance, but does allow doing something that works. (As well as allowing things that don't work well;-) I agree that all this really needs to be driven by the client. Well, yes and no. I don't believe a server config is useful for that, (as I've argued before and still do;-) ISTM the max server name value will almost always be set to 254.
May as well tell the client to pad as do that IMO. Not quite sure if I agree with that last para or not TBH! S.\nWell, if we were satisfied just padding names to a multiple of 16, we wouldn't need any of this. We'd just say to pad to a multiple of 16 and move on. :-) In the current name-focused spelling, the client is always able to apply a smarter padding decision based on the server's knowledge of the name distribution. With this PR, clients whose local configuration exceeds or is near lose this and hit the baseline multiple of 16 behavior. Clients whose local configuration is decently below do hide the name distribution, but using many more bytes than they would with the current text. We have some metrics in Chrome on the distribution of DNS names. It's nowhere near 254. But, regardless, if you don't believe a server config is even useful for the name and agree with me that other extensions ought to be client-driven, I'm confused, why this PR? This PR seems to go in the opposite direction. Hehe. Which part?\nHiya, Yes. That's IMO the downside of having any server config. (To answer your question below, I prefer not having any server config at all but proposed this as I think it's better than draft-06.) Having also looked at some passive DNS stuff, I fully agree. But ISTM for many scenarios the entity that generates the ECHOConfig won't know the distribution of names in use (e.g. whoever configures OpenSSL won't, same with Apache etc.) so it's likely they'll have to go for whatever is the max (i.e. 254). Same is true for anyone with a wild-card server cert in play somewhere. See above. I'm trying for either a modest improvement over draft-06 or for us all to arrive at a real improvement by ditching the server config entirely:-) I like the last sentence. I think I dislike the 1st:-) Cheers, S.\nIt's not a downside of the draft-06 formulation, is it? The client learns something targeted to the name and can pad accordingly.
Even with a wildcard server cert, the server should still know the distribution. For instance, isn't going to see anything terribly long most of the time. (Looks like the registration limit is 39 characters.) For reference, this is how long a 254-character hostname is: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa In order to enjoy wide adoption, ECHO can't be too much of a waste of resources. I think it thus makes sense to target a different maximum here. I guess my view is that, because of this issue above, this is a regression over the draft-06 formulation, even though it looks more general.\nHiya, Draft-06 assumes that no other inner CH content has the property that server guidance will help. My claim is that that guidance is useless in practice, and therefore unnecessary. I think the server config'd length in the PR is a little less useless:-) Some servers will. Some will not. Many (most?) server ECHOConfig instances will make use of default settings. Those'll pick 254 because adding a new VirtualHost to an apache instance (or similar) won't be co-ordinated with making a new ECHOConfig in general. And even it it were, (but it won't!) we get a TTL's worth of mismatch. If we don't provide a way for clients to handle the case where the server config is too short, again servers will publish 254. I am more interested in random smaller web sites/hosters myself. I think ECHO should work for those as well as we can make it, (albeit with smaller anonymity sets), and this tie between web server config and ECHOConfig is just a bad plan for such. AFAIK, all current deployments do just that. 
To be fair, that's largely driven by CF's draft-02 deployment, but I think that backs up my argument that if there's a max, that'll almost always be chosen, which implies it's a useless field that leads to the inefficiency you note. Then it can't be the max:-) The 254 value matches the max for servername IIRC. Ok. We disagree. But what do you think the client should do? Draft-06 is IMO plainly incorrect in that it says to pad in a way that'd expose other lengths should those be sensitive. If your answer were pad the SNI to at least ECHOConfig.maximumname_length and then make the overall a multiple of 16, with as many added 16-octet blocks as the client chooses, then a) that's != draft-06 and b) is quite close to this PR and c) just failing if the actual SNI in the inner CH is longer than the config setting seems quite wrong to me and d) hardcodes the assumption that SNI is the only thing where the server may know better. Cheers, S.\nIf we come up with other places where server guidance is useful, that can be addressed with extensions. But, gotcha, I suppose that is where we disagree. I think server guidance on the overall ClientHello size is entirely useless, while server guidance on the name may be useful. Agreed that it should also work for smaller things. I don't follow how this PR helps in that regard. It seems they still need to come up with a random value or a default, but now it's even less clear how to set it well. Er, yes, sorry I meant to say target a different public length. That is, 254 is so far from the maximum domain name in practice (As it should be! Hard length limits ought to be comfortably away from the actual requirement.), so padding up to it is wasteful. I think it would be better to pad up to a tighter value and, if we exceed it, accept that we only have the fallback multiple of 16 padding. (Or maybe a different strategy.)
It does result in smaller anonymity sets in edge cases, but I think that's a reasonable default tradeoff given other desires like broader deployment and not wasting too much of users' data. I'm thinking something along the lines of: For each field, determine how much it makes sense to pad given what kinds of things it sends. If it varies ALPN, maybe round up to the largest of those. Most of this can be determined without server help. For fields where server help is useful, like the name, apply that help. Right now it's simply a function. I'm also open to other strategies. Sum all that padding together. In order to generally reduce entropy across different kinds of clients, maybe apply some additional overall padding, like rounding up to a multiple of 16, which cuts down the state space by 16x and doesn't cost much. (Or maybe we should round up to the nearest ? That avoids higher resolution on longer values while bounding the fraction of bytes wasted by this step.) That said, it's worth keeping in mind that clients already vary quite a bit in the cleartext portions of the ClientHello. There's a bit of tension between wanting to look like the other clients and wanting to ship new security improvements. This is different from this PR because this PR asks the server to report a value including things it doesn't know anything about (how large the client-controlled parameters are). I don't quite follow (c). It seems this PR and the above proposal have roughly the same failure modes when something is too long. The difference is that 's failure mode is only tripped on particular long names (something the server knows the distribution of), while this PR's equivalent failure mode is based on factors outside the server's control, yet the server is responsible for the value.\nClarification: this is for each field where the client doesn't reference the outer ClientHello.
If the client wishes to do that, it means the client doesn't consider that field secret so there isn't anything to pad.\nI like NAME suggested algorithm, as an ideal padding function requires input from both the client and server. (Ideally, servers would send padding recommendations for each extension that might vary specifically for them, such as the server name. They don't have insight into anything else, so clients need to make padding decisions based on local configuration.) NAME what do you think?\nCould live with it, but still prefer no server config. I'm looking a bit at some numbers and will try make a concrete suggestion later today. If I don't get that done, it seems fair to accept NAME algo and move on. Cheers, S.\nIf we can get away with no server config, I'm also entirely happy with that. Fewer moving parts is always great. We certainly could do that by saying names are padded up to 254, but I think that's too wasteful. But if we're happy with some global set of buckets that makes everyone happy (be it multiples of 16, some funny exponential thing), great!\n(trimming violently and responding slowly:-) So your steps 1-6 is a better scheme than draft-06. I still think step 2 would be better as something like: ECHOConfig contains maxnamelen as before but we RECOMMEND it be zero and a) if the name to be padded is longer than maxnamelen, then names will be padded as follows: P=32-(len(name)%32)+headsortails*32, or b) if the name to be padded is shorter than maxnamelen then we just do as the server asked and pad to that length. That (hopefully:-) means a default of \"pad out to the next multiple of 32, then toss a coin and add another 32 if the coin landed as heads.\" If we landed somewhere there, I'd be happy to make this PR say that or to make a new one or whatever. 
According to some data I have, the CDFs for names and for these padded lengths would be as follows:
Len, CDF, Padded-CDF
32, 81.28, 40.64
64, 97.87, 89.57
96, 99.31, 98.59
128, 99.87, 99.59
That means that 81.28% of names are <32 octets long and 40.64% of name+namepadding will be 32 octets long. 97.87% of names are <64 octets and 89.57% of name+namepadding are 64 octets long. I think we should RECOMMEND setting maxnamelen to zero unless an ESNI key generator has a specific reason to set it to a non-zero value. ISTM the use-case for non-zero values is where the anonymity set has only a few very long names that would still stand out via length differences with ECHO. I'm basically fine that we default to not serving that use-case well, as I don't believe it's real. (But I could be wrong;-) Cheers, S.\nCould the sender always pad their packets to at least 1200 bytes or something, using an encrypted extension in the ClientHello? I don't understand the focus on server input to a padding function that would ideally fill out one packet.\nNAME what's the value in adding zero or one 32B blocks after rounding? Why not just stick with what was done, say, in RFC8467? (We could make it read something like, \"clients may add zero or one blocks.\" Either way, I'm more or less fine with the suggested change. Would you mind updating this PR to match?)\nPer above, clients could use help from servers in choosing appropriate padding lengths. (Only servers know their name anonymity set.)\nHiya, Say if the anonymity set has lots of names <32 length (~81% of names in the data I have) but a small number between 32 and 64 (~16% in my data). Then this should disguise the use of the longer names at the expense of sending more octets. Would be v. happy to know if that made sense to others. Cheers, S.\nA friendly amendment below (but one I think is important)... I think that last really needs to be \"Only servers can know their name anonymity set.
But many servers will not know that, either due to wildcard certs or because names/vhosts are added and removed all the time independent of whatever is in the DNS.\" Cheers, S.\nIt makes sense, I'm just not sure it's worth the cost. That said, this will always be an imperfect solution, so I'm happy either way.\nYep. Me neither. It is cheaper than padding to 254 octets for everyone though;-) Partly, I'm not that sympathetic towards anyone who wants to hide a crazily-long server_name. OTOH, I guess there may be a few who can't easily change names so we probably ought try do something for 'em. Cheers, S.\nAgree that only servers could know their name anonymity set. But what is the cost of always padding out the ClientHello so it approaches the MTU? Just trying to understand why the spec attempts to economize on the number of bytes in the ClientHello packet.\nAs far as I know, there's no reason other than folks might not want to always \"waste\" these bytes.\nPadding the ClientHello up to an MTU isn't free. See the numbers here on what a 400-byte increase costs. URL Likewise, also from that post and a , post-quantum key shares will likely exceed a packet anyway. I think our padding scheme should be designed with that expectation in mind (e.g., the mechanism fails this because a PQ key share in the inner ClientHello will blow past the built-in server assumptions on client behavior). Canary builds of Chrome already have code for such a thing behind a flag. Edit: Added a link to the later experiment, which I'd forgotten was a separate post.\nIt's not quite clear what's going on in those results. For example, the mobile results are close to zero. Without seeing confidence intervals, and the distributions of the full packet sizes, I don't think this experiment settles this issue (although it could, with more detail). Also, is there a theory on /why/ 400 bytes might add latency?
If we're going to design the padding mechanism with PQ key shares in mind, but not conventional ones, that should be in the document.\nNAME would know the details of that experiment, but I don't think the idea that sending more bytes takes more time would be terribly surprising! :-P I don't think anyone's suggested designing without conventional key shares in mind. They are what's deployed today, after all. However, we shouldn't design only for a small unpadded ClientHello. PQ key shares will need larger ones. Session tickets also go in the ClientHello, and it's common for TLS servers to stick client certificates inside tickets, so you can already get large ones today.\nLet's assume the results for \"SI\" in the first link actually do show a significant delta. The question is whether per-packet costs dominate or per-byte costs do, and whether the experiment sometimes increased the number of packets (e.g. by sometimes bumping the ultimate ClientHello size above 1280). See Section 5.1 in URL for device performance metrics on bytes vs packets. Looking at the ClientHello traffic on my computer, I can see that Safari is sending ClientHello messages that are about 500-600 bytes. My copy of Chrome seems to be sending CurveCECPQ2 (16696?) shares, and its ClientHello messages are fragmented.\nNAME will you be able to update this as above?\nwould the spec allow me to put an arbitrarily-sized extension in the encrypted part of the ClientHello? if so, I don't care about the padding schemes.\nYes, of course! The padding policies described here are guidance at best.\nI've updated the PR to try reflect the discussion above. Please take a peek and see if it seems useful now.\nThis looks like the right idea. I'll review this again as well, but it's probably better to wait for NAME suggestions to be addressed.\nThanks for all the changes Chris. I think I've actioned all those as requested.
If not, I tried, but failed:-)\nNAME do you need any help moving this forward?\nDoing other stuff today but feel free to take the text and then do more edits. Or I can look at it tomorrow. Either's fine by me. S.\nThanks for taking the time. None of my comments are strongly-held opinions (meaning they only need to be addressed at all if others agree that the issues are important).\nStrong +1 to NAME -- thank you for the work here, NAME\nNAME can you please resolve conflicts? NAME NAME can you please review?\nFair point. The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. And I'd guess that only a few names in that set would be popular/common, so we could be giving away the game whenever an ECHO is 32 octets longer than the usual. The random padding could mitigate that passive attack but yes would be vulnerable to an active reset based counter. I'd also be fine with just living with the issue in which case we might want to note that using ECHO with uncommonly long names is a bad idea.\nLet's just drop the random addition. Servers don't (can't) do a padding check anyway, so clients can always choose to make this longer if desired.\nNAME can you update this PR and resolve the conflicts?\n\"Fair point. The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. \" Which is precisely why we want the server to provide the longest name, so the client can pad to it.\nAs discussed on today's call, let's do the following: Resolve merge conflicts. Drop random padding for the SNI. And add text which says that if the client's name is larger than maxnamelength, it rounds up to the nearest 32B or 64B boundary. 
NAME do you need, or would you like, any help with this?\nI think that this approach is generally better, but it runs afoul of divergence in client configuration that could dominate the calculation. For instance, a client that sends two key shares will have very different padding requirements to one that sends only one. For that to work, we need to assume that implementation-/deployment-specific variation of the inner CH is largely eliminated as a result of the compression scheme we adopt. That also assumes that these variations in client behaviour are not privacy-sensitive. I think that we should be directly acknowledging those assumptions if that is indeed the case. If there is a significant variation in size due to client-specific measures, then we're in a bit of an awkward position. That might render any similarly simple scheme ineffective. For instance, a client that later supports a set of ALPN identifiers that could vary in length depending on context cannot use this signal directly. It has to apply its own logic about the padding of that extension. That a client is doing this won't be known to a server that deploys this today. In that case, the advice from the server (which knows most about the anonymity set across which it is hoping to spread the resulting CH), isn't going to be forced through an additional filter at the client. The results are unlikely to be good without a great deal of care.I am generally fine with this, with the exception of the random padding, which I think should be omitted. It's hard to analyze and the value is unclear. In particular, many clients will retry if they receive TCP errors, and so the attacker can learn information about the true minimum value by forging TCP RSTs and looking at the new CH.", "new_text": "an \"echo_nonce\" extension TLS padding RFC7685 (see section padding) When offering an encrypted ClientHello, the client MUST NOT offer to resume any non-ECHO PSKs. It additionally MUST NOT offer to resume"}
{"id": "q-en-draft-ietf-tls-esni-5ef4585428f30afe26bf73e263abdb3f98a3ba493c11ed63b893cabee2e0f331", "old_text": "7.2. As described in server-behavior, the server MAY either accept ECHO and use ClientHelloInner or reject it and use ClientHelloOuter. However, there is no indication in ServerHello of which one the server has done and the client must therefore use trial decryption in order to determine this. 7.2.1. If the server used ClientHelloInner, the client proceeds with the connection as usual, authenticating the connection for the origin server. 7.2.2. If the server used ClientHelloOuter, the client proceeds with the handshake, authenticating for ECHOConfig.public_name as described in", "comments": "I've tried to capture what I think is a reasonable way to handle padding now we've changed from ESNI->ECHO. If we could do something like this, that'd be better than what's in -06. Note that I'd be even happier if we entirely eliminated ECHOConfig.minimuminnerlength - I don't think it actually adds any value and it represents yet another way to get a configuration wrong.\nThanks for the fixes. On the general point, I think that if we keep the flexibility for the client to vary the inner/outer CH however it chooses, then we will also have to depend on clients to figure out how to pad well for whatever they do. If we constrain the inner/outer variance a lot, then we could do better. It also strikes me that the minimuminnerlength is sort of in tension with the idea of compression - if a server has to set the minimuminnerlength to 300ish to handle clients that don't compress, then clients that do compress either don't get the benefit or have to ignore the minimuminnerlength.\nI don't think this PR works. The server does not have enough information to report a useful value, and the value the server reports is not useful to maintain good anonymity sets. 
The server does not know that the client supports some large extension or key share type (or lots of cipher suites, ALPN protocols, etc). are especially fun. It also doesn't know which extensions the client will compress from the outer ClientHello and which it will reencode. Conversely, the value the server sets isn't useful. If the client implementation has a smaller baseline compression and extensions, we waste bytes without much benefit. Worse, if the client implementation has a larger baseline compression and extensions, this PR regresses anonymity. The ClientHello will then exceed and we don't hide the lengths of the sensitive bits. Stepping back, we want to hide the distribution of ClientHelloInner lengths within some anonymity set. Most of the contributors to that length come from local client configuration: how many ciphers you support, what ALPN protocols, etc. The client is the one that knows, e.g., it sometimes asks servers for protocols A and B and sometimes for A and C. It largely has enough information to pad itself. The main exception is the server name. Realistically we need to trim anonymity sets down to colocated services, and the client doesn't know that distribution of names. Thus, we want servers to report the value in ECHOConfig. Maybe we'll need more such statistics later, which is what extensions are for. It's annoying this is a bit complicated, but we're signing up to encrypt far more things now. I should note this presumes an anonymity set of similarly-configured clients. I think, realistically, that's all we can say rigorous things about. It's nice to reduce visible artifacts of configuration, which is why we additionally round up to multiples of 16, but that's something we can burn into the protocol without ECHOConfig support.\nHiya, Sorry, not seeing that last. The PR just says (or was meant to say) to pad to some multiple of 16 in that case which I agree isn't much guidance, but does allow doing something that works. 
(As well as allowing things that don't work well;-) I agree that all this really needs to be driven by the client. Well, yes and no. I don't believe a server config is useful for that, (as I've argued before and still do;-) ISTM the max server name value will almost always be set to 254. May as well tell the client to pad as do that IMO. Not quite sure if I agree with that last para or not TBH! S.\nWell, if we were satisfied just padding names to a multiple of 16, we wouldn't need any of this. We'd just say to pad to a multiple 16 and move on. :-) In the current name-focused spelling, the client is always able to apply a smarter padding decision based on the server's knowledge of the name distribution. With this PR, clients whose local configuration exceeds or is near lose this and hit the baseline multiple of 16 behavior. Clients whose local configuration is decently below do hide the name distribution, but using many more bytes than they would with the current text. We have some metrics in Chrome on the distribution of DNS names. It's nowhere near 254. But, regardless, if you don't believe a server config is even useful for the name and agree with me that other extensions ought to be client-driven, I'm confused, why this PR? This PR seems to go in the opposite direction. Hehe. Which part?\nHIya, Yes. That's IMO the downside of having any server config. (To answer your question below, I prefer not having any server config at all but proposed this as I think it's better than draft-06.) Having also looked at some passive DNS stuff, I fully agree. But ISTM for many scenarios the entity that generates the ECHOConfig won't know the distribution of names in use (e.g. whoever configures OpenSSL won't, same with Apache etc.) so it's likely they'll have to go for whatever is the max (i.e. 254). Same is true for anyone with a wild-card server cert in play somewhere. See above. 
I'm trying for either a modest improvement over draft-06 or for us all to arrive at a real improvement by ditching the server config entirely:-) I like the last sentence. I think I dislike the 1st:-) Cheers, S.\nIt's not a downside of the draft-06 formulation, is it? The client learns something targeted to the name and can pad accordingly. Even with a wildcard server cert, the server should still know the distribution. For instance, isn't going to see anything terribly long most of the time. (Looks like the registration limit is 39 characters.) For reference, this is how long a 254-character hostname is: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa In order to enjoy wide adoption, ECHO can't be too much of a waste of resources. I think it thus makes sense to target a different maximum here. I guess my view is that, because of this issue above, this is a regression over the draft-06 formulation, even though it looks more general.\nHiya, Draft-06 assumes that no other inner CH content has the property that server guidance will help. My claim is that that guidance is useless in practice, and therefore unnecessary. I think the server config'd length in the PR is a little less useless:-) Some servers will. Some will not. Many (most?) server ECHOConfig instances will make use of default settings. Those'll pick 254 because adding a new VirtualHost to an apache instance (or similar) won't be co-ordinated with making a new ECHOConfig in general. And even it it were, (but it won't!) we get a TTL's worth of mismatch. If we don't provide a way for clients to handle the case where the server config is too short, again servers will publish 254. I am more interested in random smaller web sites/hosters myself. 
I think ECHO should work for those as well as we can make it, (albeit with smaller anonymity sets), and this tie between web server config and ECHOConfig is just a bad plan for such. AFAIK, all current deployments do just that. To be fair, that's largely driven by CF's draft-02 deployment, but I think that backs up my argument that if there's a max, that'll almost always be chosen, which implies it's a useless field that leads to the inefficiency you note. Then it can't be the max:-) The 254 value matches the max for servername IIRC. Ok. We disagree. But what do you think the client should do? Draft-06 is IMO plainly incorrect in that it says to pad in a way that'd expose other lengths should those be sensitive. If your answer were pad the SNI to at least ECHOConfig.maximumname_length and then make the overall a multiple of 16, with as many added 16-octet blocks as the client chooses, then a) that's != draft-06 and b) is quite close to this PR and c) just failing if the actual SNI in the inner CH is longer than the config setting seems quite wrong to me and d) hardcodes the assumption that SNI is the only thing where the server may know better. Cheers, S.\nIf we come up with other places where server guidance is useful, that can be addressed with extensions. But, gotcha, I suppose that is where we disagree. I think server guidance on the overall ClientHello size is entirely useless, while server guidance on the name may be useful. Agreed that it should also work for smaller things. I don't follow how this PR helps in that regard. It seems they still need to come up with a random value or a default, but now it's even less clear how to set it well. Er, yes, sorry I meant to say target a different public length. That is, 254 is so far from the maximum domain name in practice (As it should be! Hard length limits ought to be comfortably away from the actual requirement.), so padding up to it is wasteful. 
I think it would be better to pad up to a tighter value and, if we exceed it, accept that we only have the fallback multiple of 16 padding. (Or maybe a different strategy.) It does result in smaller anonymity sets in edge cases, but I think that is a reasonable default tradeoff given other desires like broader deployment and not wasting too much of users' data. I'm thinking something along the lines of: For each field, determine how much it makes sense to pad given what kinds of things it sends. If it varies ALPN, maybe round up to the largest of those. Most of this can be determined without server help. For fields where server help is useful, like the name, apply that help. Right now it's simply a function. I'm also open to other strategies. Sum all that padding together. In order to generally reduce entropy across different kinds of clients, maybe apply some additional overall padding, like rounding up to a multiple of 16, which cuts down the state space by 16x and doesn't cost much. (Or maybe we should round up to the nearest ? That avoids higher resolution on longer values while bounding the fraction of bytes wasted by this step.) That said, it's worth keeping in mind that clients already vary quite a bit in the cleartext portions of the ClientHello. There's a bit of tension between wanting to look like the other clients and wanting to ship new security improvements. This is different from this PR because this PR asks the server to report a value including things it doesn't know anything about (how large the client-controlled parameters are). I don't quite follow (c). It seems this PR and the above proposal have roughly the same failure modes when something is too long. 
The difference is that 's failure mode is only tripped on particular long names (something the server knows the distribution of), while this PR's equivalent failure mode is based on factors outside the server control, yet the server is responsible for the value.\nClarification: this is for each field where the client doesn't reference the outer ClientHello. If the client wishes to do that, it means the client doesn't consider that field secret so there isn't anything to pad.\nI like NAME suggested algorithm, as an ideal padding function requires input from both the client and server. (Ideally, servers would send padding recommendations for each extension that might vary specifically for them, such as the server name. They don't have insight into anything else, so clients need to make padding decisions based on local configuration.) NAME what do you think?\nCould live with it, but still prefer no server config. I'm looking a bit at some numbers and will try make a concrete suggestion later today. If I don't get that done, it seems fair to accept NAME algo and move on. Cheers, S.\nIf we can get away with no server config, I'm also entirely happy with that. Fewer moving parts is always great. We certainly could do that by saying names are padded up to 254, but I think that's too wasteful. But if we're happy with some global set of buckets that makes everyone happy (be it multiples of 16, some funny exponential thing), great!\n(trimming violently and responding slowly:-) So your steps 1-6 is a better scheme than draft-06. I still think step 2 would be better as something like: ECHOConfig contains maxnamelen as before but we RECOMMEND it be zero and a) if the name to be padded is longer than maxnamelen, then names will be padded as follows: P=32-(len(name)%32)+headsortails*32, or b) if the name to be padded is shorter than maxnamelen then we just do as the server asked and pad to that length. 
That (hopefully:-) means a default of \"pad out to the next multiple of 32, then toss a coin and add another 32 if the coin landed as heads.\" If we landed somewhere there, I'd be happy to make this PR say that or to make a new one or whatever. According to some data I have the CDFs for names and for these padded lengths would be as follows: Len, CDF, Padded-CDF 32, 81.28, 40.64 64, 97.87, 89.57 96, 99.31, 98.59 128, 99.87, 99.59 That means that 81.28% of names are <32 octets long and 40.64% of name+namepadding will be 32 octets long. 97.87% of names are <64 octets and 89.57% of name+namepadding are 64 octets long. I think we should RECOMMEND setting maxnamelen to zero unless an ESNI key generator has a specific reason to set it to a non-zero value. ISTM the use-case for non-zero values is where the anonymity set has only a few very long names that with length differences of ECHO. I'm basically fine that we default to not serving that use-case well by default, as I don't believe it's real. (But I could be wrong;-) Cheers, S.\nCould the sender always pad their packets to at least 1200 bytes or something, using an encrypted extension in the ClientHello? I don't understand the focus on server input to a padding function that would ideally fill out one packet.\nNAME what's the value in adding zero or one 32B blocks after rounding? Why not just stick with what was done, say, in RFC8467? (We could make it read something like, \"clients may add zero or one blocks.\" Either way, I'm more or less fine with the suggested change. Would you mind updating this PR to match?)\nPer above, clients could use help from servers in choosing appropriate padding lengths. (Only servers know their name anonymity set.)\nHiya, Say if the anonymity set has lots of names <32 length (~81% of names in the data I have) but a small number between 32 and 64 (~16% in my data). Then this should disguise the use of the longer names at the expense of sending more octets. Would be v. 
happy to know if that made sense to others. Cheers, S.\nA friendly amendment below (but one I think is important)... I think that last really needs to be \"Only servers can know their name anonymity set. But many servers will not know that, either due to wildcard certs or because names/vhosts are added and removed all the time independent of whatever is in the DNS.\" Cheers, S.\nIt makes sense, I'm just not not sure it's worth the cost. That said, this will always be an imperfect solution, so I'm happy either way.\nYep. Me neither. It is cheaper than padding to 254 octets for everyone though;-) Partly, I'm not that sympathetic towards anyone who wants to hide a crazily-long server_name. OTOH, I guess there may be a few who can't easily change names so we probably ought try do something for 'em. Cheers, S.\nAgree that only servers could know their name anonymity set. But what is the cost of always padding out the ClientHello so it approaches the MTU? Just trying to understand why the spec attempts to economize on the number of bytes in the ClientHello packet.\nAs far as I know, there's no reason other than folks might not want to always \"waste\" these bytes.\nPadding the ClientHello up to an MTU isn't free. See the numbers here on what a 400-byte increase costs. URL Likewise, also from that post and a , post-quantum key shares will likely exceed a packet anyway. I think our padding scheme should be designed with that expectation in mind (e.g., the mechanism fails this because a PQ key share in the inner ClientHello will blow past the built-in server assumptions on client behavior). Canary builds of Chrome already have code for such a thing behind a flag. Edit: Added a link to the later experiment, which I'd forgotten was a separate post.\nIt's not quite clear what's going on in those results. For example, the mobile results are close to zero. 
Without seeing confidence intervals, and the distributions of the full packet sizes, I don't think this experiment settles this issue (although it could, with more detail). Also, is there a theory on /why/ 400 bytes might add latency? If we're going to design the padding mechanism with PQ key shares in mind, but not conventional ones, that should be in the document.\nNAME would know the details of that experiment, but I don't think the idea that sending more bytes takes more time would be terribly surprising! :-P I don't think anyone's suggested designing without conventional key shares in mind. They are what's deployed today, after all. However, we shouldn't design only for a small unpadded ClientHello. PQ key shares will need larger ones. Session tickets also go in the ClientHello, and it's common for TLS servers to stick client certificates inside tickets, so you can already get large ones today already.\nLet's assume the results for \"SI\" in the first link actually do show a significant delta. The question is whether per-packet costs dominate or per-byte costs do, and whether the experiment sometimes increased the number of packets (e.g. by sometimes bumping the ultimate ClientHello size above 1280). See Section 5.1 in URL for device performance metrics on bytes vs packets. Looking at the ClientHello traffic on my computer, I can see that Safari is sending ClientHello messages that are about 500-600 bytes. My copy of Chrome seems to be sending CurveCECPQ2 (16696?) shares, and its ClientHello messages are fragmented.\nNAME will you be able to update this as above?\nwould the spec allow me to put an arbitrarily-sized extension in the encrypted part of the ClientHello? if so, I don't care about the padding schemes.\nYes, of course! The padding policies described here are guidance at best.\nI've updated the PR to try reflect the discussion above. Please take a peek and see if it seems useful now.\nThis looks like the right idea. 
I'll review this again as well, but it's probably better to wait for NAME suggestions to be addressed.\nThanks for all the changes Chris. I think I've actioned all those as requested. If not, I tried, but failed:-)\nNAME do you need any help moving this forward?\nDoing other stuff today but feel free to take the text and then do more edits. Or I can look at it tomorrow. Either's fine by me. S.\nThanks for taking the time. None of my comments are strongly-held opinions (meaning they only need to be addressed at all if others agree that the issues are important).\nStrong +1 to NAME -- thank you for the work here, NAME\nNAME can you please resolve conflicts? NAME NAME can you please review?\nFair point. The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. And I'd guess that only a few names in that set would be popular/common, so we could be giving away the game whenever an ECHO is 32 octets longer than the usual. The random padding could mitigate that passive attack but yes would be vulnerable to an active reset based counter. I'd also be fine with just living with the issue in which case we might want to note that using ECHO with uncommonly long names is a bad idea.\nLet's just drop the random addition. Servers don't (can't) do a padding check anyway, so clients can always choose to make this longer if desired.\nNAME can you update this PR and resolve the conflicts?\n\"Fair point. The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. \" Which is precisely why we want the server to provide the longest name, so the client can pad to it.\nAs discussed on today's call, let's do the following: Resolve merge conflicts. Drop random padding for the SNI. 
And add text which says that if the client's name is larger than maxnamelength, it rounds up to the nearest 32B or 64B boundary. NAME do you need, or would you like, any help with this?\nI think that this approach is generally better, but it runs afoul of divergence in client configuration that could dominate the calculation. For instance, a client that sends two key shares will have very different padding requirements to one that sends only one. For that to work, we need to assume that implementation-/deployment-specific variation of the inner CH is largely eliminated as a result of the compression scheme we adopt. That also assumes that these variations in client behaviour are not privacy-sensitive. I think that we should be directly acknowledging those assumptions if that is indeed the case. If there is a significant variation in size due to client-specific measures, then we're in a bit of an awkward position. That might render any similarly simple scheme ineffective. For instance, a client that later supports a set of ALPN identifiers that could vary in length depending on context cannot use this signal directly. It has to apply its own logic about the padding of that extension. That a client is doing this won't be known to a server that deploys this today. In that case, the advice from the server (which knows most about the anonymity set across which it is hoping to spread the resulting CH), isn't going to be forced through an additional filter at the client. The results are unlikely to be good without a great deal of care.I am generally fine with this, with the exception of the random padding, which I think should be omitted. It's hard to analyze and the value is unclear. In particular, many clients will retry if they receive TCP errors, and so the attacker can learn information about the true minimum value by forging TCP RSTs and looking at the new CH.", "new_text": "7.2. Future extensions could reveal sensitive information through their length. 
Consequently, padding should be flexible and support arbitrary extension changes. For each field in the inner ClientHello, clients need to determine how much to pad given the semantics of that field. For example, if a client can propose various ALPN values, it could add padding to round up to the longest of those. The target padding length of most ClientHello extensions can be determined without server help. However, the \"server_name\" extension could benefit from server input (ECHOConfig.maximum_name_length). Clients SHOULD compute the padding for this extension as follows: If ECHOConfig.maximum_name_length is longer than the actual server_name then clients SHOULD add padding to make up that difference. Otherwise, if ECHOConfig.maximum_name_length is zero or less than the length of the actual server_name then round the server_name up to a multiple of 32 octets and randomly add another 32 octets 50% of the time and then include that amount of additional padding. The amount of padding applied to the ClientHello is then computed as the sum of all per-extension padding values, rounded up to the nearest multiple of 32. (This additional rounding step aims to hide variance across different client implementations.) In addition to padding ClientHelloInner, clients and servers will also need to pad all other handshake messages that have sensitive-length fields. For example, if a client proposes ALPN values in ClientHelloInner, the server-selected value will be returned in an EncryptedExtension, so that handshake message also needs to be padded using TLS record layer padding. 7.3. As described in server-behavior, the server MAY either accept ECHO and use ClientHelloInner or reject it and use ClientHelloOuter. However, there is no indication in ServerHello of which one the server has done and the client must therefore use trial decryption in order to determine this. 7.3.1. 
If the server used ClientHelloInner, the client proceeds with the connection as usual, authenticating the connection for the origin server. 7.3.2. If the server used ClientHelloOuter, the client proceeds with the handshake, authenticating for ECHOConfig.public_name as described in"}
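The padding computation in the record's new_text above can be sketched as follows. This is a minimal illustration only, not text from the draft: the function names are hypothetical, and Python's secrets module stands in for the client's 50% coin toss.

```python
import secrets

def server_name_padding(name_len: int, maximum_name_length: int) -> int:
    """Octets of padding for the inner "server_name" extension, per the
    rule quoted above (names and signature are illustrative)."""
    if maximum_name_length > name_len:
        # Server guidance covers this name: pad out to the advertised maximum.
        return maximum_name_length - name_len
    # maximum_name_length is zero or too small: round the name length up to
    # a multiple of 32 octets, then add a further 32 octets 50% of the time.
    rounded = -(-name_len // 32) * 32      # ceiling to a multiple of 32
    coin = 32 * secrets.randbelow(2)       # 0 or 32, equiprobable
    return (rounded - name_len) + coin

def total_clienthello_padding(per_extension: list) -> int:
    """Sum the per-extension padding values and round the total up to the
    nearest multiple of 32, as the final rounding step describes."""
    total = sum(per_extension)
    return -(-total // 32) * 32
```

For example, a 20-octet name with ECHOConfig.maximum_name_length of 64 receives 44 octets of padding, while with a value of zero it receives either 12 or 44 octets, each half of the time.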
{"id": "q-en-draft-ietf-tls-esni-5ef4585428f30afe26bf73e263abdb3f98a3ba493c11ed63b893cabee2e0f331", "old_text": "\"encrypted_server_name\" extension. If the client does not retry in either scenario, it MUST report an error to the calling application. 7.2.2.1. When the server cannot decrypt or does not process the \"encrypted_server_name\" extension, it continues with the handshake", "comments": "I've tried to capture what I think is a reasonable way to handle padding now we've changed from ESNI->ECHO. If we could do something like this, that'd be better than what's in -06. Note that I'd be even happier if we entirely eliminated ECHOConfig.minimuminnerlength - I don't think it actually adds any value and it represents yet another way to get a configuration wrong.\nThanks for the fixes. On the general point, I think that if we keep the flexibility for the client to vary the inner/outer CH however it chooses, then we will also have to depend on clients to figure out how to pad well for whatever they do. If we constrain the inner/outer variance a lot, then we could do better. It also strikes me that the minimuminnerlength is sort of in tension with the idea of compression - if a server has to set the minimuminnerlength to 300ish to handle clients that don't compress, then clients that do compress either don't get the benefit or have to ignore the minimuminnerlength.\nI don't think this PR works. The server does not have enough information to report a useful value, and the value the server reports is not useful to maintain good anonymity sets. The server does not know that the client supports some large extension or key share type (or lots of cipher suites, ALPN protocols, etc). are especially fun. It also doesn't know which extensions the client will compress from the outer ClientHello and which it will reencode. Conversely, the value the server sets isn't useful. 
If the client implementation has a smaller baseline compression and extensions, we waste bytes without much benefit. Worse, if the client implementation has a larger baseline compression and extensions, this PR regresses anonymity. The ClientHello will then exceed and we don't hide the lengths of the sensitive bits. Stepping back, we want to hide the distribution of ClientHelloInner lengths within some anonymity set. Most of the contributors to that length come from local client configuration: how many ciphers you support, what ALPN protocols, etc. The client is the one that knows, e.g., it sometimes asks servers for protocols A and B and sometimes for A and C. It largely has enough information to pad itself. The main exception is the server name. Realistically we need to trim anonymity sets down to colocated services, and the client doesn't know that distribution of names. Thus, we want servers to report the value in ECHOConfig. Maybe we'll need more such statistics later, which is what extensions are for. It's annoying this is a bit complicated, but we've signing up to encrypt far more things now. I should note this presumes an anonymity set of similarly-configured clients. I think, realistically, that's all we can say rigorous things about. It's nice to reduce visible artifacts of configuration, which is why we additionally round up to multiples of 16, but that's something we can burn into the protocol without ECHOConfig support.\nHiya, Sorry, not seeing that last. The PR just says (or was meant to say) to pad to some multiple of 16 in that case which I agree isn't much guidance, but does allow doing something that works. (As well as allowing things that don't work well;-) I agree that all this really needs to be driven by the client. Well, yes and no. I don't believe a server config is useful for that, (as I've argued before and still do;-) ISTM the max server name value will almost always be set to 254. May as well tell the client to pad as do that IMO. 
Not quite sure if I agree with that last para or not TBH! S.\nWell, if we were satisfied just padding names to a multiple of 16, we wouldn't need any of this. We'd just say to pad to a multiple of 16 and move on. :-) In the current name-focused spelling, the client is always able to apply a smarter padding decision based on the server's knowledge of the name distribution. With this PR, clients whose local configuration exceeds or is near lose this and hit the baseline multiple of 16 behavior. Clients whose local configuration is decently below do hide the name distribution, but using many more bytes than they would with the current text. We have some metrics in Chrome on the distribution of DNS names. It's nowhere near 254. But, regardless, if you don't believe a server config is even useful for the name and agree with me that other extensions ought to be client-driven, I'm confused, why this PR? This PR seems to go in the opposite direction. Hehe. Which part?\nHiya, Yes. That's IMO the downside of having any server config. (To answer your question below, I prefer not having any server config at all but proposed this as I think it's better than draft-06.) Having also looked at some passive DNS stuff, I fully agree. But ISTM for many scenarios the entity that generates the ECHOConfig won't know the distribution of names in use (e.g. whoever configures OpenSSL won't, same with Apache etc.) so it's likely they'll have to go for whatever is the max (i.e. 254). Same is true for anyone with a wild-card server cert in play somewhere. See above. I'm trying for either a modest improvement over draft-06 or for us all to arrive at a real improvement by ditching the server config entirely:-) I like the last sentence. I think I dislike the 1st:-) Cheers, S.\nIt's not a downside of the draft-06 formulation, is it? The client learns something targeted to the name and can pad accordingly. Even with a wildcard server cert, the server should still know the distribution. 
For instance, isn't going to see anything terribly long most of the time. (Looks like the registration limit is 39 characters.) For reference, this is how long a 254-character hostname is: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa In order to enjoy wide adoption, ECHO can't be too much of a waste of resources. I think it thus makes sense to target a different maximum here. I guess my view is that, because of this issue above, this is a regression over the draft-06 formulation, even though it looks more general.\nHiya, Draft-06 assumes that no other inner CH content has the property that server guidance will help. My claim is that that guidance is useless in practice, and therefore unnecessary. I think the server config'd length in the PR is a little less useless:-) Some servers will. Some will not. Many (most?) server ECHOConfig instances will make use of default settings. Those'll pick 254 because adding a new VirtualHost to an apache instance (or similar) won't be co-ordinated with making a new ECHOConfig in general. And even it it were, (but it won't!) we get a TTL's worth of mismatch. If we don't provide a way for clients to handle the case where the server config is too short, again servers will publish 254. I am more interested in random smaller web sites/hosters myself. I think ECHO should work for those as well as we can make it, (albeit with smaller anonymity sets), and this tie between web server config and ECHOConfig is just a bad plan for such. AFAIK, all current deployments do just that. To be fair, that's largely driven by CF's draft-02 deployment, but I think that backs up my argument that if there's a max, that'll almost always be chosen, which implies it's a useless field that leads to the inefficiency you note. 
Then it can't be the max:-) The 254 value matches the max for servername IIRC. Ok. We disagree. But what do you think the client should do? Draft-06 is IMO plainly incorrect in that it says to pad in a way that'd expose other lengths should those be sensitive. If your answer were pad the SNI to at least ECHOConfig.maximumname_length and then make the overall a multiple of 16, with as many added 16 octets blocks as the client chooses, then a) that's != draft-06 and b) is quite close to this PR and c) just failing if the actual SNI in the inner CH is longer than the config setting seems quite wrong to me and d) hardcodes the assumption that SNI is the only thing where the server may know better. Cheers, S.\nIf we come up with other places where server guidance is useful, that can be addressed with extensions. But, gotcha, I suppose that is where we disagree. I think server guidance on the overall ClientHello size is entirely useless, while server guidance on the name may be useful. Agreed that it should also work for smaller things. I don't follow how this PR helps in that regard. It seems they still need to come up with a random value or a default, but now it's even less clear how to set it well. Er, yes, sorry I meant to say target a different public length. That is, 254 is so far from the maximum domain name in practice (As it should be! Hard length limits ought to be comfortably away from the actual requirement.), so padding up to it is wasteful. I think it would be better to pad up to a tighter value and, if we exceed it, accept that we only have the fallback multiple of 16 padding. (Or maybe a different strategy.) It does result in smaller anonymity sets in edge cases, but I think that's a reasonable default tradeoff given other desires like broader deployment and not wasting too much of users' data. I'm thinking something along the lines of: For each field, determine how much it makes sense to pad given what kinds of things it sends. 
If it varies ALPN, maybe round up to the largest of those. Most of this can be determined without server help. For fields where server help is useful, like the name, apply that help. Right now it's simply a function. I'm also open to other strategies. Sum all that padding together. In order to generally reduce entropy across different kinds of clients, maybe apply some additional overall padding, like rounding up to a multiple of 16, which cuts down the state space by 16x and doesn't cost much. (Or maybe we should round up to the nearest ? That avoids higher resolution on longer values while bounding the fraction of bytes wasted by this step.) That said, it's worth keeping in mind that clients already vary quite a bit in the cleartext portions of the ClientHello. There's a bit of tension between wanting to look like the other clients and wanting to ship new security improvements. This is different from this PR because this PR asks the server to report a value including things it doesn't know anything about (how large the client-controlled parameters are). I don't quite follow (c). It seems this PR and the above proposal have roughly the same failure modes when something is too long. The difference is that 's failure mode is only tripped on particular long names (something the server knows the distribution of), while this PR's equivalent failure mode is based on factors outside the server control, yet the server is responsible for the value.
They don't have insight into anything else, so clients need to make padding decisions based on local configuration.) NAME what do you think?\nCould live with it, but still prefer no server config. I'm looking a bit at some numbers and will try make a concrete suggestion later today. If I don't get that done, it seems fair to accept NAME algo and move on. Cheers, S.\nIf we can get away with no server config, I'm also entirely happy with that. Fewer moving parts is always great. We certainly could do that by saying names are padded up to 254, but I think that's too wasteful. But if we're happy with some global set of buckets that makes everyone happy (be it multiples of 16, some funny exponential thing), great!\n(trimming violently and responding slowly:-) So your steps 1-6 is a better scheme than draft-06. I still think step 2 would be better as something like: ECHOConfig contains maxnamelen as before but we RECOMMEND it be zero and a) if the name to be padded is longer than maxnamelen, then names will be padded as follows: P=32-(len(name)%32)+headsortails*32, or b) if the name to be padded is shorter than maxnamelen then we just do as the server asked and pad to that length. That (hopefully:-) means a default of \"pad out to the next multiple of 32, then toss a coin and add another 32 if the coin landed as heads.\" If we landed somewhere there, I'd be happy to make this PR say that or to make a new one or whatever. According to some data I have the CDFs for names and for these padded lengths would be as follows: Len, CDF, Padded-CDF 32, 81.28, 40.64 64, 97.87, 89.57 96, 99.31, 98.59 128, 99.87, 99.59 That means that 81.28% of names are <32 octets long and 40.64% of name+namepadding will be 32 octets long. 97.87% of names are <64 octets and 89.57% of name+namepadding are 64 octets long. I think we should RECOMMEND setting maxnamelen to zero unless an ESNI key generator has a specific reason to set it to a non-zero value. 
ISTM the use-case for non-zero values is where the anonymity set has only a few very long names that would stand out via the length differences of ECHO. I'm basically fine that we default to not serving that use-case well by default, as I don't believe it's real. (But I could be wrong;-) Cheers, S.\nCould the sender always pad their packets to at least 1200 bytes or something, using an encrypted extension in the ClientHello? I don't understand the focus on server input to a padding function that would ideally fill out one packet.\nNAME what's the value in adding zero or one 32B blocks after rounding? Why not just stick with what was done, say, in RFC8467? (We could make it read something like, \"clients may add zero or one blocks.\" Either way, I'm more or less fine with the suggested change. Would you mind updating this PR to match?)\nPer above, clients could use help from servers in choosing appropriate padding lengths. (Only servers know their name anonymity set.)\nHiya, Say if the anonymity set has lots of names <32 length (~81% of names in the data I have) but a small number between 32 and 64 (~16% in my data). Then this should disguise the use of the longer names at the expense of sending more octets. Would be v. happy to know if that made sense to others. Cheers, S.\nA friendly amendment below (but one I think is important)... I think that last really needs to be \"Only servers can know their name anonymity set. But many servers will not know that, either due to wildcard certs or because names/vhosts are added and removed all the time independent of whatever is in the DNS.\" Cheers, S.\nIt makes sense, I'm just not sure it's worth the cost. That said, this will always be an imperfect solution, so I'm happy either way.\nYep. Me neither. It is cheaper than padding to 254 octets for everyone though;-) Partly, I'm not that sympathetic towards anyone who wants to hide a crazily-long server_name. 
OTOH, I guess there may be a few who can't easily change names so we probably ought try do something for 'em. Cheers, S.\nAgree that only servers could know their name anonymity set. But what is the cost of always padding out the ClientHello so it approaches the MTU? Just trying to understand why the spec attempts to economize on the number of bytes in the ClientHello packet.\nAs far as I know, there's no reason other than folks might not want to always \"waste\" these bytes.\nPadding the ClientHello up to an MTU isn't free. See the numbers here on what a 400-byte increase costs. URL Likewise, also from that post and a , post-quantum key shares will likely exceed a packet anyway. I think our padding scheme should be designed with that expectation in mind (e.g., the mechanism fails this because a PQ key share in the inner ClientHello will blow past the built-in server assumptions on client behavior). Canary builds of Chrome already have code for such a thing behind a flag. Edit: Added a link to the later experiment, which I'd forgotten was a separate post.\nIt's not quite clear what's going on in those results. For example, the mobile results are close to zero. Without seeing confidence intervals, and the distributions of the full packet sizes, I don't think this experiment settles this issue (although it could, with more detail). Also, is there a theory on /why/ 400 bytes might add latency? If we're going to design the padding mechanism with PQ key shares in mind, but not conventional ones, that should be in the document.\nNAME would know the details of that experiment, but I don't think the idea that sending more bytes takes more time would be terribly surprising! :-P I don't think anyone's suggested designing without conventional key shares in mind. They are what's deployed today, after all. However, we shouldn't design only for a small unpadded ClientHello. PQ key shares will need larger ones. 
Session tickets also go in the ClientHello, and it's common for TLS servers to stick client certificates inside tickets, so you can already get large ones today.\nLet's assume the results for \"SI\" in the first link actually do show a significant delta. The question is whether per-packet costs dominate or per-byte costs do, and whether the experiment sometimes increased the number of packets (e.g. by sometimes bumping the ultimate ClientHello size above 1280). See Section 5.1 in URL for device performance metrics on bytes vs packets. Looking at the ClientHello traffic on my computer, I can see that Safari is sending ClientHello messages that are about 500-600 bytes. My copy of Chrome seems to be sending CurveCECPQ2 (16696?) shares, and its ClientHello messages are fragmented.\nNAME will you be able to update this as above?\nwould the spec allow me to put an arbitrarily-sized extension in the encrypted part of the ClientHello? if so, I don't care about the padding schemes.\nYes, of course! The padding policies described here are guidance at best.\nI've updated the PR to try reflect the discussion above. Please take a peek and see if it seems useful now.\nThis looks like the right idea. I'll review this again as well, but it's probably better to wait for NAME suggestions to be addressed.\nThanks for all the changes Chris. I think I've actioned all those as requested. If not, I tried, but failed:-)\nNAME do you need any help moving this forward?\nDoing other stuff today but feel free to take the text and then do more edits. Or I can look at it tomorrow. Either's fine by me. S.\nThanks for taking the time. None of my comments are strongly-held opinions (meaning they only need to be addressed at all if others agree that the issues are important).\nStrong +1 to NAME -- thank you for the work here, NAME\nNAME can you please resolve conflicts? NAME NAME can you please review?\nFair point. 
The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. And I'd guess that only a few names in that set would be popular/common, so we could be giving away the game whenever an ECHO is 32 octets longer than the usual. The random padding could mitigate that passive attack but yes would be vulnerable to an active reset based counter. I'd also be fine with just living with the issue in which case we might want to note that using ECHO with uncommonly long names is a bad idea.\nLet's just drop the random addition. Servers don't (can't) do a padding check anyway, so clients can always choose to make this longer if desired.\nNAME can you update this PR and resolve the conflicts?\n\"Fair point. The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. \" Which is precisely why we want the server to provide the longest name, so the client can pad to it.\nAs discussed on today's call, let's do the following: Resolve merge conflicts. Drop random padding for the SNI. And add text which says that if the client's name is larger than maxnamelength, it rounds up to the nearest 32B or 64B boundary. NAME do you need, or would you like, any help with this?\nI think that this approach is generally better, but it runs afoul of divergence in client configuration that could dominate the calculation. For instance, a client that sends two key shares will have very different padding requirements to one that sends only one. For that to work, we need to assume that implementation-/deployment-specific variation of the inner CH is largely eliminated as a result of the compression scheme we adopt. That also assumes that these variations in client behaviour are not privacy-sensitive. 
I think that we should be directly acknowledging those assumptions if that is indeed the case. If there is a significant variation in size due to client-specific measures, then we're in a bit of an awkward position. That might render any similarly simple scheme ineffective. For instance, a client that later supports a set of ALPN identifiers that could vary in length depending on context cannot use this signal directly. It has to apply its own logic about the padding of that extension. That a client is doing this won't be known to a server that deploys this today. In that case, the advice from the server (which knows most about the anonymity set across which it is hoping to spread the resulting CH), isn't going to be forced through an additional filter at the client. The results are unlikely to be good without a great deal of care. I am generally fine with this, with the exception of the random padding, which I think should be omitted. It's hard to analyze and the value is unclear. In particular, many clients will retry if they receive TCP errors, and so the attacker can learn information about the true minimum value by forging TCP RSTs and looking at the new CH.", "new_text": "\"encrypted_server_name\" extension. If the client does not retry in either scenario, it MUST report an error to the calling application. 7.3.2.1. When the server cannot decrypt or does not process the \"encrypted_server_name\" extension, it continues with the handshake"}
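The coin-toss padding rule debated in the thread above ("pad out to the next multiple of 32, then toss a coin and add another 32 if the coin landed as heads", with the server-advertised maxnamelen used when it covers the name) can be sketched in a few lines. This is a non-normative illustration of the proposal under discussion, not text from the draft; the function name and the maxnamelen-of-zero default are assumptions taken from the comments, and note that reviewers later argued for dropping the random addition entirely.

```python
import secrets


def padded_name_length(name_len: int, max_name_len: int = 0) -> int:
    """Sketch of the proposed name-padding rule (non-normative).

    If the server-advertised max_name_len covers the name, pad the
    name field up to exactly that length. Otherwise apply the
    fallback from the thread: P = 32 - (len(name) % 32) +
    heads_or_tails * 32, i.e. round up past the next 32-octet
    boundary, adding one extra 32-octet block on a coin flip.
    """
    if 0 < max_name_len and name_len <= max_name_len:
        return max_name_len
    heads_or_tails = secrets.randbelow(2)  # the "coin toss"
    padding = 32 - (name_len % 32) + heads_or_tails * 32
    return name_len + padding
```

Under this rule a 10-octet name is carried in a 32- or 64-octet field, which matches the quoted CDF figures: 81.28% of names are under 32 octets, and roughly half of those land in the 32-octet bucket after the coin flip.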
{"id": "q-en-draft-ietf-tls-esni-5ef4585428f30afe26bf73e263abdb3f98a3ba493c11ed63b893cabee2e0f331", "old_text": "implemented, for instance, by reporting a failed connection with a dedicated error code. 7.2.3. If the server sends a HelloRetryRequest in response to the ClientHello, the client sends a second updated ClientHello per the", "comments": "I've tried to capture what I think is a reasonable way to handle padding now we've changed from ESNI->ECHO. If we could do something like this, that'd be better than what's in -06. Note that I'd be even happier if we entirely eliminated ECHOConfig.minimuminnerlength - I don't think it actually adds any value and it represents yet another way to get a configuration wrong.\nThanks for the fixes. On the general point, I think that if we keep the flexibility for the client to vary the inner/outer CH however it chooses, then we will also have to depend on clients to figure out how to pad well for whatever they do. If we constrain the inner/outer variance a lot, then we could do better. It also strikes me that the minimuminnerlength is sort of in tension with the idea of compression - if a server has to set the minimuminnerlength to 300ish to handle clients that don't compress, then clients that do compress either don't get the benefit or have to ignore the minimuminnerlength.\nI don't think this PR works. The server does not have enough information to report a useful value, and the value the server reports is not useful to maintain good anonymity sets. The server does not know that the client supports some large extension or key share type (or lots of cipher suites, ALPN protocols, etc). are especially fun. It also doesn't know which extensions the client will compress from the outer ClientHello and which it will reencode. Conversely, the value the server sets isn't useful. If the client implementation has a smaller baseline compression and extensions, we waste bytes without much benefit. 
Worse, if the client implementation has a larger baseline compression and extensions, this PR regresses anonymity. The ClientHello will then exceed and we don't hide the lengths of the sensitive bits. Stepping back, we want to hide the distribution of ClientHelloInner lengths within some anonymity set. Most of the contributors to that length come from local client configuration: how many ciphers you support, what ALPN protocols, etc. The client is the one that knows, e.g., it sometimes asks servers for protocols A and B and sometimes for A and C. It largely has enough information to pad itself. The main exception is the server name. Realistically we need to trim anonymity sets down to colocated services, and the client doesn't know that distribution of names. Thus, we want servers to report the value in ECHOConfig. Maybe we'll need more such statistics later, which is what extensions are for. It's annoying this is a bit complicated, but we've signed up to encrypt far more things now. I should note this presumes an anonymity set of similarly-configured clients. I think, realistically, that's all we can say rigorous things about. It's nice to reduce visible artifacts of configuration, which is why we additionally round up to multiples of 16, but that's something we can burn into the protocol without ECHOConfig support.\nHiya, Sorry, not seeing that last. The PR just says (or was meant to say) to pad to some multiple of 16 in that case which I agree isn't much guidance, but does allow doing something that works. (As well as allowing things that don't work well;-) I agree that all this really needs to be driven by the client. Well, yes and no. I don't believe a server config is useful for that, (as I've argued before and still do;-) ISTM the max server name value will almost always be set to 254. May as well tell the client to pad as do that IMO. Not quite sure if I agree with that last para or not TBH! 
S.\nWell, if we were satisfied just padding names to a multiple of 16, we wouldn't need any of this. We'd just say to pad to a multiple of 16 and move on. :-) In the current name-focused spelling, the client is always able to apply a smarter padding decision based on the server's knowledge of the name distribution. With this PR, clients whose local configuration exceeds or is near lose this and hit the baseline multiple of 16 behavior. Clients whose local configuration is decently below do hide the name distribution, but using many more bytes than they would with the current text. We have some metrics in Chrome on the distribution of DNS names. It's nowhere near 254. But, regardless, if you don't believe a server config is even useful for the name and agree with me that other extensions ought to be client-driven, I'm confused, why this PR? This PR seems to go in the opposite direction. Hehe. Which part?\nHiya, Yes. That's IMO the downside of having any server config. (To answer your question below, I prefer not having any server config at all but proposed this as I think it's better than draft-06.) Having also looked at some passive DNS stuff, I fully agree. But ISTM for many scenarios the entity that generates the ECHOConfig won't know the distribution of names in use (e.g. whoever configures OpenSSL won't, same with Apache etc.) so it's likely they'll have to go for whatever is the max (i.e. 254). Same is true for anyone with a wild-card server cert in play somewhere. See above. I'm trying for either a modest improvement over draft-06 or for us all to arrive at a real improvement by ditching the server config entirely:-) I like the last sentence. I think I dislike the 1st:-) Cheers, S.\nIt's not a downside of the draft-06 formulation, is it? The client learns something targeted to the name and can pad accordingly. Even with a wildcard server cert, the server should still know the distribution. For instance, isn't going to see anything terribly long most of the time. 
(Looks like the registration limit is 39 characters.) For reference, this is how long a 254-character hostname is: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa In order to enjoy wide adoption, ECHO can't be too much of a waste of resources. I think it thus makes sense to target a different maximum here. I guess my view is that, because of this issue above, this is a regression over the draft-06 formulation, even though it looks more general.\nHiya, Draft-06 assumes that no other inner CH content has the property that server guidance will help. My claim is that that guidance is useless in practice, and therefore unnecessary. I think the server config'd length in the PR is a little less useless:-) Some servers will. Some will not. Many (most?) server ECHOConfig instances will make use of default settings. Those'll pick 254 because adding a new VirtualHost to an apache instance (or similar) won't be co-ordinated with making a new ECHOConfig in general. And even it it were, (but it won't!) we get a TTL's worth of mismatch. If we don't provide a way for clients to handle the case where the server config is too short, again servers will publish 254. I am more interested in random smaller web sites/hosters myself. I think ECHO should work for those as well as we can make it, (albeit with smaller anonymity sets), and this tie between web server config and ECHOConfig is just a bad plan for such. AFAIK, all current deployments do just that. To be fair, that's largely driven by CF's draft-02 deployment, but I think that backs up my argument that if there's a max, that'll almost always be chosen, which implies it's a useless field that leads to the inefficiency you note. Then it can't be the max:-) The 254 value matches the max for servername IIRC. Ok. We disagree. 
But what do you think the client should do? Draft-06 is IMO plainly incorrect in that it says to pad in a way that'd expose other lengths should those be sensitive. If your answer were pad the SNI to at least ECHOConfig.maximumname_length and then make the overall a multiple of 16, with as many added 16 octets blocks as the client chooses, then a) that's != draft-06 and b) is quite close to this PR and c) just failing if the actual SNI in the inner CH is longer than the config setting seems quite wrong to me and d) hardcodes the assumption that SNI is the only thing where the server may know better. Cheers, S.\nIf we come up with other places where server guidance is useful, that can be addressed with extensions. But, gotcha, I suppose that is where we disagree. I think server guidance on the overall ClientHello size is entirely useless, while server guidance on the name may be useful. Agreed that it should also work for smaller things. I don't follow how this PR helps in that regard. It seems they still need to come up with a random value or a default, but now it's even less clear how to set it well. Er, yes, sorry I meant to say target a different public length. That is, 254 is so far from the maximum domain name in practice (As it should be! Hard length limits ought to be comfortably away from the actual requirement.), so padding up to it is wasteful. I think it would be better to pad up to a tighter value and, if we exceed it, accept that we only have the fallback multiple of 16 padding. (Or maybe a different strategy.) It does result in smaller anonymity sets in edge cases, but I think that's a reasonable default tradeoff given other desires like broader deployment and not wasting too much of users' data. I'm thinking something along the lines of: For each field, determine how much it makes sense to pad given what kinds of things it sends. If it varies ALPN, maybe round up to the largest of those. Most of this can be determined without server help. 
For fields where server help is useful, like the name, apply that help. Right now it's simply a function. I'm also open to other strategies. Sum all that padding together. In order to generally reduce entropy across different kinds of clients, maybe apply some additional overall padding, like rounding up to a multiple of 16, which cuts down the state space by 16x and doesn't cost much. (Or maybe we should round up to the nearest ? That avoids higher resolution on longer values while bounding the fraction of bytes wasted by this step.) That said, it's worth keeping in mind that clients already vary quite a bit in the cleartext portions of the ClientHello. There's a bit of tension between wanting to look like the other clients and wanting to ship new security improvements. This is different from this PR because this PR asks the server to report a value including things it doesn't know anything about (how large the client-controlled parameters are). I don't quite follow (c). It seems this PR and the above proposal have roughly the same failure modes when something is too long. The difference is that 's failure mode is only tripped on particular long names (something the server knows the distribution of), while this PR's equivalent failure mode is based on factors outside the server control, yet the server is responsible for the value.\nClarification: this is for each field where the client doesn't reference the outer ClientHello. If the client wishes to do that, it means the client doesn't consider that field secret so there isn't anything to pad.\nI like NAME suggested algorithm, as an ideal padding function requires input from both the client and server. (Ideally, servers would send padding recommendations for each extension that might vary specifically for them, such as the server name. They don't have insight into anything else, so clients need to make padding decisions based on local configuration.) 
NAME what do you think?\nCould live with it, but still prefer no server config. I'm looking a bit at some numbers and will try make a concrete suggestion later today. If I don't get that done, it seems fair to accept NAME algo and move on. Cheers, S.\nIf we can get away with no server config, I'm also entirely happy with that. Fewer moving parts is always great. We certainly could do that by saying names are padded up to 254, but I think that's too wasteful. But if we're happy with some global set of buckets that makes everyone happy (be it multiples of 16, some funny exponential thing), great!\n(trimming violently and responding slowly:-) So your steps 1-6 is a better scheme than draft-06. I still think step 2 would be better as something like: ECHOConfig contains maxnamelen as before but we RECOMMEND it be zero and a) if the name to be padded is longer than maxnamelen, then names will be padded as follows: P=32-(len(name)%32)+headsortails*32, or b) if the name to be padded is shorter than maxnamelen then we just do as the server asked and pad to that length. That (hopefully:-) means a default of \"pad out to the next multiple of 32, then toss a coin and add another 32 if the coin landed as heads.\" If we landed somewhere there, I'd be happy to make this PR say that or to make a new one or whatever. According to some data I have the CDFs for names and for these padded lengths would be as follows: Len, CDF, Padded-CDF 32, 81.28, 40.64 64, 97.87, 89.57 96, 99.31, 98.59 128, 99.87, 99.59 That means that 81.28% of names are <32 octets long and 40.64% of name+namepadding will be 32 octets long. 97.87% of names are <64 octets and 89.57% of name+namepadding are 64 octets long. I think we should RECOMMEND setting maxnamelen to zero unless an ESNI key generator has a specific reason to set it to a non-zero value. ISTM the use-case for non-zero values is where the anonymity set has only a few very long names that would stand out via the length differences of ECHO. 
I'm basically fine that we default to not serving that use-case well by default, as I don't believe it's real. (But I could be wrong;-) Cheers, S.\nCould the sender always pad their packets to at least 1200 bytes or something, using an encrypted extension in the ClientHello? I don't understand the focus on server input to a padding function that would ideally fill out one packet.\nNAME what's the value in adding zero or one 32B blocks after rounding? Why not just stick with what was done, say, in RFC8467? (We could make it read something like, "clients may add zero or one blocks." Either way, I'm more or less fine with the suggested change. Would you mind updating this PR to match?)\nPer above, clients could use help from servers in choosing appropriate padding lengths. (Only servers know their name anonymity set.)\nHiya, Say if the anonymity set has lots of names <32 length (~81% of names in the data I have) but a small number between 32 and 64 (~16% in my data). Then this should disguise the use of the longer names at the expense of sending more octets. Would be v. happy to know if that made sense to others. Cheers, S.\nA friendly amendment below (but one I think is important)... I think that last really needs to be "Only servers can know their name anonymity set. But many servers will not know that, either due to wildcard certs or because names/vhosts are added and removed all the time independent of whatever is in the DNS." Cheers, S.\nIt makes sense, I'm just not sure it's worth the cost. That said, this will always be an imperfect solution, so I'm happy either way.\nYep. Me neither. It is cheaper than padding to 254 octets for everyone though;-) Partly, I'm not that sympathetic towards anyone who wants to hide a crazily-long server_name. OTOH, I guess there may be a few who can't easily change names so we probably ought try do something for 'em. Cheers, S.\nAgree that only servers could know their name anonymity set. 
But what is the cost of always padding out the ClientHello so it approaches the MTU? Just trying to understand why the spec attempts to economize on the number of bytes in the ClientHello packet.\nAs far as I know, there's no reason other than folks might not want to always \"waste\" these bytes.\nPadding the ClientHello up to an MTU isn't free. See the numbers here on what a 400-byte increase costs. URL Likewise, also from that post and a , post-quantum key shares will likely exceed a packet anyway. I think our padding scheme should be designed with that expectation in mind (e.g., the mechanism fails this because a PQ key share in the inner ClientHello will blow past the built-in server assumptions on client behavior). Canary builds of Chrome already have code for such a thing behind a flag. Edit: Added a link to the later experiment, which I'd forgotten was a separate post.\nIt's not quite clear what's going on in those results. For example, the mobile results are close to zero. Without seeing confidence intervals, and the distributions of the full packet sizes, I don't think this experiment settles this issue (although it could, with more detail). Also, is there a theory on /why/ 400 bytes might add latency? If we're going to design the padding mechanism with PQ key shares in mind, but not conventional ones, that should be in the document.\nNAME would know the details of that experiment, but I don't think the idea that sending more bytes takes more time would be terribly surprising! :-P I don't think anyone's suggested designing without conventional key shares in mind. They are what's deployed today, after all. However, we shouldn't design only for a small unpadded ClientHello. PQ key shares will need larger ones. 
Session tickets also go in the ClientHello, and it's common for TLS servers to stick client certificates inside tickets, so you can already get large ones today already.\nLet's assume the results for \"SI\" in the first link actually do show a significant delta. The question is whether per-packet costs dominate or per-byte costs do, and whether the experiment sometimes increased the number of packets (e.g. by sometimes bumping the ultimate ClientHello size above 1280). See Section 5.1 in URL for device performance metrics on bytes vs packets. Looking at the ClientHello traffic on my computer, I can see that Safari is sending ClientHello messages that are about 500-600 bytes. My copy of Chrome seems to be sending CurveCECPQ2 (16696?) shares, and its ClientHello messages are fragmented.\nNAME will you be able to update this as above?\nwould the spec allow me to put an arbitrarily-sized extension in the encrypted part of the ClientHello? if so, I don't care about the padding schemes.\nYes, of course! The padding policies described here are guidance at best.\nI've updated the PR to try reflect the discussion above. Please take a peek and see if it seems useful now.\nThis looks like the right idea. I'll review this again as well, but it's probably better to wait for NAME suggestions to be addressed.\nThanks for all the changes Chris. I think I've actioned all those as requested. If not, I tried, but failed:-)\nNAME do you need any help moving this forward?\nDoing other stuff today but feel free to take the text and then do more edits. Or I can look at it tomorrow. Either's fine by me. S.\nThanks for taking the time. None of my comments are strongly-held opinions (meaning they only need to be addressed at all if others agree that the issues are important).\nStrong +1 to NAME -- thank you for the work here, NAME\nNAME can you please resolve conflicts? NAME NAME can you please review?\nFair point. 
The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. And I'd guess that only a few names in that set would be popular/common, so we could be giving away the game whenever an ECHO is 32 octets longer than the usual. The random padding could mitigate that passive attack but yes would be vulnerable to an active reset based counter. I'd also be fine with just living with the issue in which case we might want to note that using ECHO with uncommonly long names is a bad idea.\nLet's just drop the random addition. Servers don't (can't) do a padding check anyway, so clients can always choose to make this longer if desired.\nNAME can you update this PR and resolve the conflicts?\n\"Fair point. The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. \" Which is precisely why we want the server to provide the longest name, so the client can pad to it.\nAs discussed on today's call, let's do the following: Resolve merge conflicts. Drop random padding for the SNI. And add text which says that if the client's name is larger than maxnamelength, it rounds up to the nearest 32B or 64B boundary. NAME do you need, or would you like, any help with this?\nI think that this approach is generally better, but it runs afoul of divergence in client configuration that could dominate the calculation. For instance, a client that sends two key shares will have very different padding requirements to one that sends only one. For that to work, we need to assume that implementation-/deployment-specific variation of the inner CH is largely eliminated as a result of the compression scheme we adopt. That also assumes that these variations in client behaviour are not privacy-sensitive. 
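A minimal sketch, under the scheme agreed on the call above (no random addition; the function and parameter names are hypothetical, `block` being 32 or 64):

```python
def padded_name_len(name_len: int, max_name_len: int, block: int = 32) -> int:
    """Final length of name plus padding: pad up to max_name_len when
    the name fits, otherwise round up to the next `block`-byte
    boundary. No random addition, so the result is deterministic."""
    if name_len <= max_name_len:
        return max_name_len
    return -(-name_len // block) * block  # ceiling division
```

Dropping the random block also sidesteps the forged-RST probing concern below: a retried ClientHello pads to the same length every time.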
I think that we should be directly acknowledging those assumptions if that is indeed the case. If there is a significant variation in size due to client-specific measures, then we're in a bit of an awkward position. That might render any similarly simple scheme ineffective. For instance, a client that later supports a set of ALPN identifiers that could vary in length depending on context cannot use this signal directly. It has to apply its own logic about the padding of that extension. That a client is doing this won't be known to a server that deploys this today. In that case, the advice from the server (which knows most about the anonymity set across which it is hoping to spread the resulting CH), isn't going to be forced through an additional filter at the client. The results are unlikely to be good without a great deal of care.\nI am generally fine with this, with the exception of the random padding, which I think should be omitted. It's hard to analyze and the value is unclear. In particular, many clients will retry if they receive TCP errors, and so the attacker can learn information about the true minimum value by forging TCP RSTs and looking at the new CH.", "new_text": "implemented, for instance, by reporting a failed connection with a dedicated error code. 7.3.3. If the server sends a HelloRetryRequest in response to the ClientHello, the client sends a second updated ClientHello per the
{"id": "q-en-draft-ietf-tls-esni-5ef4585428f30afe26bf73e263abdb3f98a3ba493c11ed63b893cabee2e0f331", "old_text": "and this is easy to probe for, can we just instead have an extension to indicate what has happened.]] 7.3. If the client attempts to connect to a server and does not have an ECHOConfig structure available for the server, it SHOULD send a", "comments": "I've tried to capture what I think is a reasonable way to handle padding now we've changed from ESNI->ECHO. If we could do something like this, that'd be better than what's in -06. Note that I'd be even happier if we entirely eliminated ECHOConfig.minimuminnerlength - I don't think it actually adds any value and it represents yet another way to get a configuration wrong.\nThanks for the fixes. On the general point, I think that if we keep the flexibility for the client to vary the inner/outer CH however it chooses, then we will also have to depend on clients to figure out how to pad well for whatever they do. If we constrain the inner/outer variance a lot, then we could do better. It also strikes me that the minimuminnerlength is sort of in tension with the idea of compression - if a server has to set the minimuminnerlength to 300ish to handle clients that don't compress, then clients that do compress either don't get the benefit or have to ignore the minimuminnerlength.\nI don't think this PR works. The server does not have enough information to report a useful value, and the value the server reports is not useful to maintain good anonymity sets. The server does not know that the client supports some large extension or key share type (or lots of cipher suites, ALPN protocols, etc). are especially fun. It also doesn't know which extensions the client will compress from the outer ClientHello and which it will reencode. Conversely, the value the server sets isn't useful. If the client implementation has a smaller baseline compression and extensions, we waste bytes without much benefit. 
Worse, if the client implementation has a larger baseline compression and extensions, this PR regresses anonymity. The ClientHello will then exceed and we don't hide the lengths of the sensitive bits. Stepping back, we want to hide the distribution of ClientHelloInner lengths within some anonymity set. Most of the contributors to that length come from local client configuration: how many ciphers you support, what ALPN protocols, etc. The client is the one that knows, e.g., it sometimes asks servers for protocols A and B and sometimes for A and C. It largely has enough information to pad itself. The main exception is the server name. Realistically we need to trim anonymity sets down to colocated services, and the client doesn't know that distribution of names. Thus, we want servers to report the value in ECHOConfig. Maybe we'll need more such statistics later, which is what extensions are for. It's annoying this is a bit complicated, but we're signing up to encrypt far more things now. I should note this presumes an anonymity set of similarly-configured clients. I think, realistically, that's all we can say rigorous things about. It's nice to reduce visible artifacts of configuration, which is why we additionally round up to multiples of 16, but that's something we can burn into the protocol without ECHOConfig support.
S.\nWell, if we were satisfied just padding names to a multiple of 16, we wouldn't need any of this. We'd just say to pad to a multiple of 16 and move on. :-) In the current name-focused spelling, the client is always able to apply a smarter padding decision based on the server's knowledge of the name distribution. With this PR, clients whose local configuration exceeds or is near lose this and hit the baseline multiple of 16 behavior. Clients whose local configuration is decently below do hide the name distribution, but using many more bytes than they would with the current text. We have some metrics in Chrome on the distribution of DNS names. It's nowhere near 254. But, regardless, if you don't believe a server config is even useful for the name and agree with me that other extensions ought to be client-driven, I'm confused, why this PR? This PR seems to go in the opposite direction. Hehe. Which part?
(Looks like the registration limit is 39 characters.) For reference, this is how long a 254-character hostname is: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa In order to enjoy wide adoption, ECHO can't be too much of a waste of resources. I think it thus makes sense to target a different maximum here. I guess my view is that, because of this issue above, this is a regression over the draft-06 formulation, even though it looks more general.\nHiya, Draft-06 assumes that no other inner CH content has the property that server guidance will help. My claim is that that guidance is useless in practice, and therefore unnecessary. I think the server config'd length in the PR is a little less useless:-) Some servers will. Some will not. Many (most?) server ECHOConfig instances will make use of default settings. Those'll pick 254 because adding a new VirtualHost to an apache instance (or similar) won't be co-ordinated with making a new ECHOConfig in general. And even it it were, (but it won't!) we get a TTL's worth of mismatch. If we don't provide a way for clients to handle the case where the server config is too short, again servers will publish 254. I am more interested in random smaller web sites/hosters myself. I think ECHO should work for those as well as we can make it, (albeit with smaller anonymity sets), and this tie between web server config and ECHOConfig is just a bad plan for such. AFAIK, all current deployments do just that. To be fair, that's largely driven by CF's draft-02 deployment, but I think that backs up my argument that if there's a max, that'll almost always be chosen, which implies it's a useless field that leads to the inefficiency you note. Then it can't be the max:-) The 254 value matches the max for servername IIRC. Ok. We disagree. 
But what do you think the client should do? Draft-06 is IMO plainly incorrect in that it says to pad in a way that'd expose other lengths should those be sensitive. If your answer were pad the SNI to at least ECHOConfig.maximumname_length and then make the overall a multiple of 16, with as many added 16 octets blocks as the client chooses, then a) that's != draft-06 and b) is quite close to this PR and c) just failing if the actual SNI in the inner CH is longer than the config setting seems quite wrong to me and d) hardcodes the assumption that SNI is the only thing where the server may know better. Cheers, S.\nIf we come up with other places where server guidance is useful, that can be addressed with extensions. But, gotcha, I suppose that is where we disagree. I think server guidance on the overall ClientHello size is entirely useless, while server guidance on the name is may be useful. Agreed that it should also work for smaller things. I don't follow how this PR helps in that regard. It seems they still need to come up with a random value or a default, but now it's even less clear how to set it well. Er, yes, sorry I meant to say target a different public length. That is, 254 is so far from the maximum domain name in practice (As it should be! Hard length limits ought to be comfortably away from the actual requirement.), so padding up to it is wasteful. I think it would be better to pad up to a tighter value and, if we exceed it, accept that we only have the fallback multiple of 16 padding. (Or maybe a different strategy.) It does result in smaller anonymity sets in edge cases, but I think that a reasonable default tradeoff given other desires like broader deployment and not wasting too much of users' data. I'm thinking something along the lines of: For each field, determine how much it makes sense to pad given what kinds of things it sends. If it varies ALPN, maybe round up to the largest of those. Most of this can be determined without server help. 
For fields where server help is useful, like the name, apply that help. Right now it's simply a function. I'm also open to other strategies. Sum all that padding together. In order to generally reduce entropy across different kinds of clients, maybe apply some additional overall padding, like rounding up to a multiple of 16, which cuts down the state space by 16x and doesn't cost much. (Or maybe we should round up to the nearest ? That avoids higher resolution on longer values while bounding the fraction of bytes wasted by this step.) That said, it's worth keeping in mind that clients already vary quite a bit in the cleartext portions of the ClientHello. There's a bit of tension between wanting to look like the other clients and wanting to ship new security improvements. This is different from this PR because this PR asks the server to report a value including things it doesn't know anything about (how large the client-controlled parameters are). I don't quite follow (c). It seems this PR and the above proposal have roughly the same failure modes when something is too long. The difference is that 's failure mode is only tripped on particular long names (something the server knows the distribution of), while this PR's equivalent failure mode is based on factors outside the server control, yet the server is responsible for the value.\nClarification: this is for each field where the client doesn't reference the outer ClientHello. If the client wishes to do that, it means the client doesn't consider that field secret so there isn't anything to pad.\nI like NAME suggested algorithm, as an ideal padding function requires input from both the client and server. (Ideally, servers would send padding recommendations for each extension that might vary specifically for them, such as the server name. They don't have insight into anything else, so clients need to make padding decisions based on local configuration.) 
NAME what do you think?\nCould live with it, but still prefer no server config. I'm looking a bit at some numbers and will try to make a concrete suggestion later today. If I don't get that done, it seems fair to accept NAME algo and move on. Cheers, S.\nIf we can get away with no server config, I'm also entirely happy with that. Fewer moving parts is always great. We certainly could do that by saying names are padded up to 254, but I think that's too wasteful. But if we're happy with some global set of buckets that makes everyone happy (be it multiples of 16, some funny exponential thing), great!\n(trimming violently and responding slowly:-) So your steps 1-6 are a better scheme than draft-06. I still think step 2 would be better as something like: ECHOConfig contains maxnamelen as before but we RECOMMEND it be zero and a) if the name to be padded is longer than maxnamelen, then names will be padded as follows: P=32-(len(name)%32)+headsortails*32, or b) if the name to be padded is shorter than maxnamelen then we just do as the server asked and pad to that length. That (hopefully:-) means a default of "pad out to the next multiple of 32, then toss a coin and add another 32 if the coin landed as heads." If we landed somewhere there, I'd be happy to make this PR say that or to make a new one or whatever. According to some data I have the CDFs for names and for these padded lengths would be as follows:\nLen, CDF, Padded-CDF\n32, 81.28, 40.64\n64, 97.87, 89.57\n96, 99.31, 98.59\n128, 99.87, 99.59\nThat means that 81.28% of names are <32 octets long and 40.64% of name+namepadding will be 32 octets long. 97.87% of names are <64 octets and 89.57% of name+namepadding are 64 octets long. I think we should RECOMMEND setting maxnamelen to zero unless an ESNI key generator has a specific reason to set it to a non-zero value. ISTM the use-case for non-zero values is where the anonymity set has only a few very long names that stand out due to length differences of ECHO. 
I'm basically fine that we default to not serving that use-case well by default, as I don't believe it's real. (But I could be wrong;-) Cheers, S.\nCould the sender always pad their packets to at least 1200 bytes or something, using an encrypted extension in the ClientHello? I don't understand the focus on server input to a padding function that would ideally fill out one packet.\nNAME what's the value in adding zero or one 32B blocks after rounding? Why not just stick with what was done, say, in RFC8467? (We could make it read something like, "clients may add zero or one blocks." Either way, I'm more or less fine with the suggested change. Would you mind updating this PR to match?)\nPer above, clients could use help from servers in choosing appropriate padding lengths. (Only servers know their name anonymity set.)\nHiya, Say if the anonymity set has lots of names <32 length (~81% of names in the data I have) but a small number between 32 and 64 (~16% in my data). Then this should disguise the use of the longer names at the expense of sending more octets. Would be v. happy to know if that made sense to others. Cheers, S.\nA friendly amendment below (but one I think is important)... I think that last really needs to be "Only servers can know their name anonymity set. But many servers will not know that, either due to wildcard certs or because names/vhosts are added and removed all the time independent of whatever is in the DNS." Cheers, S.\nIt makes sense, I'm just not sure it's worth the cost. That said, this will always be an imperfect solution, so I'm happy either way.\nYep. Me neither. It is cheaper than padding to 254 octets for everyone though;-) Partly, I'm not that sympathetic towards anyone who wants to hide a crazily-long server_name. OTOH, I guess there may be a few who can't easily change names so we probably ought try do something for 'em. Cheers, S.\nAgree that only servers could know their name anonymity set. 
But what is the cost of always padding out the ClientHello so it approaches the MTU? Just trying to understand why the spec attempts to economize on the number of bytes in the ClientHello packet.\nAs far as I know, there's no reason other than folks might not want to always \"waste\" these bytes.\nPadding the ClientHello up to an MTU isn't free. See the numbers here on what a 400-byte increase costs. URL Likewise, also from that post and a , post-quantum key shares will likely exceed a packet anyway. I think our padding scheme should be designed with that expectation in mind (e.g., the mechanism fails this because a PQ key share in the inner ClientHello will blow past the built-in server assumptions on client behavior). Canary builds of Chrome already have code for such a thing behind a flag. Edit: Added a link to the later experiment, which I'd forgotten was a separate post.\nIt's not quite clear what's going on in those results. For example, the mobile results are close to zero. Without seeing confidence intervals, and the distributions of the full packet sizes, I don't think this experiment settles this issue (although it could, with more detail). Also, is there a theory on /why/ 400 bytes might add latency? If we're going to design the padding mechanism with PQ key shares in mind, but not conventional ones, that should be in the document.\nNAME would know the details of that experiment, but I don't think the idea that sending more bytes takes more time would be terribly surprising! :-P I don't think anyone's suggested designing without conventional key shares in mind. They are what's deployed today, after all. However, we shouldn't design only for a small unpadded ClientHello. PQ key shares will need larger ones. 
Session tickets also go in the ClientHello, and it's common for TLS servers to stick client certificates inside tickets, so you can already get large ones today already.\nLet's assume the results for \"SI\" in the first link actually do show a significant delta. The question is whether per-packet costs dominate or per-byte costs do, and whether the experiment sometimes increased the number of packets (e.g. by sometimes bumping the ultimate ClientHello size above 1280). See Section 5.1 in URL for device performance metrics on bytes vs packets. Looking at the ClientHello traffic on my computer, I can see that Safari is sending ClientHello messages that are about 500-600 bytes. My copy of Chrome seems to be sending CurveCECPQ2 (16696?) shares, and its ClientHello messages are fragmented.\nNAME will you be able to update this as above?\nwould the spec allow me to put an arbitrarily-sized extension in the encrypted part of the ClientHello? if so, I don't care about the padding schemes.\nYes, of course! The padding policies described here are guidance at best.\nI've updated the PR to try reflect the discussion above. Please take a peek and see if it seems useful now.\nThis looks like the right idea. I'll review this again as well, but it's probably better to wait for NAME suggestions to be addressed.\nThanks for all the changes Chris. I think I've actioned all those as requested. If not, I tried, but failed:-)\nNAME do you need any help moving this forward?\nDoing other stuff today but feel free to take the text and then do more edits. Or I can look at it tomorrow. Either's fine by me. S.\nThanks for taking the time. None of my comments are strongly-held opinions (meaning they only need to be addressed at all if others agree that the issues are important).\nStrong +1 to NAME -- thank you for the work here, NAME\nNAME can you please resolve conflicts? NAME NAME can you please review?\nFair point. 
The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. And I'd guess that only a few names in that set would be popular/common, so we could be giving away the game whenever an ECHO is 32 octets longer than the usual. The random padding could mitigate that passive attack but yes would be vulnerable to an active reset based counter. I'd also be fine with just living with the issue in which case we might want to note that using ECHO with uncommonly long names is a bad idea.\nLet's just drop the random addition. Servers don't (can't) do a padding check anyway, so clients can always choose to make this longer if desired.\nNAME can you update this PR and resolve the conflicts?\n\"Fair point. The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. \" Which is precisely why we want the server to provide the longest name, so the client can pad to it.\nAs discussed on today's call, let's do the following: Resolve merge conflicts. Drop random padding for the SNI. And add text which says that if the client's name is larger than maxnamelength, it rounds up to the nearest 32B or 64B boundary. NAME do you need, or would you like, any help with this?\nI think that this approach is generally better, but it runs afoul of divergence in client configuration that could dominate the calculation. For instance, a client that sends two key shares will have very different padding requirements to one that sends only one. For that to work, we need to assume that implementation-/deployment-specific variation of the inner CH is largely eliminated as a result of the compression scheme we adopt. That also assumes that these variations in client behaviour are not privacy-sensitive. 
I think that we should be directly acknowledging those assumptions if that is indeed the case. If there is a significant variation in size due to client-specific measures, then we're in a bit of an awkward position. That might render any similarly simple scheme ineffective. For instance, a client that later supports a set of ALPN identifiers that could vary in length depending on context cannot use this signal directly. It has to apply its own logic about the padding of that extension. That a client is doing this won't be known to a server that deploys this today. In that case, the advice from the server (which knows most about the anonymity set across which it is hoping to spread the resulting CH), isn't going to be forced through an additional filter at the client. The results are unlikely to be good without a great deal of care.\nI am generally fine with this, with the exception of the random padding, which I think should be omitted. It's hard to analyze and the value is unclear. In particular, many clients will retry if they receive TCP errors, and so the attacker can learn information about the true minimum value by forging TCP RSTs and looking at the new CH.", "new_text": "and this is easy to probe for, can we just instead have an extension to indicate what has happened.]] 7.4. If the client attempts to connect to a server and does not have an ECHOConfig structure available for the server, it SHOULD send a"}
{"id": "q-en-draft-ietf-tls-esni-5ef4585428f30afe26bf73e263abdb3f98a3ba493c11ed63b893cabee2e0f331", "old_text": "11.1. In comparison to I-D.kazuho-protected-sni, wherein DNS Resource Records are signed via a server private key, ECHO records have no authenticity or provenance information. This means that any attacker which can inject DNS responses or poison DNS caches, which is a common scenario in client access networks, can supply clients with fake ECHO records (so that the client encrypts data to them) or strip the ECHO record from the response. However, in the face of an attacker that controls DNS, no encryption scheme can work because the attacker can replace the IP address, thus blocking client connections, or substituting a unique IP address which is 1:1 with the DNS name that was looked up (modulo DNS wildcards). Thus, allowing the ECHO records in the clear does not make the situation significantly worse. Clearly, DNSSEC (if the client validates and hard fails) is a defense against this form of attack, but DoH/DPRIVE are also defenses against DNS attacks by attackers on the local network, which is a common case where ClientHello and SNI encryption are desired. Moreover, as noted in the introduction, SNI encryption is less useful without encryption of DNS queries in transit via DoH or DPRIVE mechanisms. 11.2. Supporting optional record digests and trial decryption opens oneself up to DoS attacks. Specifically, an adversary may send malicious ClientHello messages, i.e., those which will not decrypt with any known ECHO key, in order to force decryption. Servers that support this feature should, for example, implement some form of rate limiting mechanism to limit the damage caused by such attacks. 11.3. ECHO requires encrypted DNS to be an effective privacy protection mechanism. 
However, verifying the server's identity from the Certificate message, particularly when using the X509 CertificateType, may result in additional network traffic that may reveal the server identity. Examples of this traffic may include requests for revocation information, such as OCSP or CRL traffic, or requests for repository information, such as authorityInformationAccess. It may also include implementation- specific traffic for additional information sources as part of verification. Implementations SHOULD avoid leaking information that may identify the server. Even when sent over an encrypted transport, such requests may result in indirect exposure of the server's identity, such as indicating a specific CA or service being used. To mitigate this risk, servers SHOULD deliver such information in-band when possible, such as through the use of OCSP stapling, and clients SHOULD take steps to minimize or protect such requests during certificate validation. 11.4. I-D.ietf-tls-sni-encryption lists several requirements for SNI encryption. In this section, we re-iterate these requirements and assess the ECHO design against them. 11.4.1. Since servers process either ClientHelloInner or ClientHelloOuter, and ClientHelloInner contains an HPKE-derived nonce, it is not possible for an attacker to \"cut and paste\" the ECHO value in a different Client Hello and learn information from ClientHelloInner. This is because the attacker lacks access to the HPKE-derived nonce used to derive the handshake secrets. 11.4.2. This design depends upon DNS as a vehicle for semi-static public key distribution. Server operators may partition their private keys however they see fit provided each server behind an IP address has the corresponding private key to decrypt a key. Thus, when one ECHO key is provided, sharing is optimally bound by the number of hosts that share an IP address. 
Server operators may further limit sharing by publishing different DNS records containing ECHOConfig values with different keys using a short TTL. 11.4.3. This design requires servers to decrypt ClientHello messages with ClientEncryptedCH extensions carrying valid digests. Thus, it is possible for an attacker to force decryption operations on the server. This attack is bound by the number of valid TCP connections an attacker can open. 11.4.4. As more clients enable ECHO support, e.g., as normal part of Web browser functionality, with keys supplied by shared hosting providers, the presence of ECHO extensions becomes less suspicious and part of common or predictable client behavior. In other words, if all Web browsers start using ECHO, the presence of this value does not signal suspicious behavior to passive eavesdroppers. Additionally, this specification allows for clients to send GREASE ECHO extensions (see grease-extensions), which helps ensure the ecosystem handles the values correctly. 11.4.5. This design is not forward secret because the server's ECHO key is static. However, the window of exposure is bound by the key lifetime. It is RECOMMENDED that servers rotate keys frequently. 11.4.6. This design permits servers operating in Split Mode to forward connections directly to backend origin servers, thereby avoiding unnecessary MiTM attacks. 11.4.7. Assuming ECHO records retrieved from DNS are authenticated, e.g., via DNSSEC or fetched from a trusted Recursive Resolver, spoofing a server operating in Split Mode is not possible. See cleartext-dns for more details regarding cleartext DNS. Authenticating the ECHOConfigs structure naturally authenticates the included public name. This also authenticates any retry signals from the server because the client validates the server certificate against the public name before retrying. 11.4.8. This design has no impact on application layer protocol negotiation. 
It may affect connection routing, server certificate selection, and client certificate verification. Thus, it is compatible with multiple protocols. 11.5. Note that the backend server has no way of knowing what the SNI was, but that does not lead to additional privacy exposure because the backend server also only has one identity. This does, however, change the situation slightly in that the backend server might previously have checked SNI and now cannot (and an attacker can route a connection with an encrypted SNI to any backend server and the TLS connection will still complete). However, the client is still responsible for verifying the server's identity in its certificate. [[TODO: Some more analysis needed in this case, as it is a little odd, and probably some precise rules about handling ECHO and no SNI uniformly?]] 12. 12.1. IANA is requested to create an entry, encrypted_server_name(0xffce), in the existing registry for ExtensionType (defined in RFC8446), with \"TLS 1.3\" column values being set to \"CH, EE\", and \"Recommended\" column being set to \"Yes\". 12.2. IANA is requested to create an entry, echo_required(121) in the existing registry for Alerts (defined in RFC8446), with the \"DTLS-OK\"", "comments": "I've tried to capture what I think is a reasonable way to handle padding now we've changed from ESNI->ECHO. If we could do something like this, that'd be better than what's in -06. Note that I'd be even happier if we entirely eliminated ECHOConfig.minimuminnerlength - I don't think it actually adds any value and it represents yet another way to get a configuration wrong.\nThanks for the fixes. On the general point, I think that if we keep the flexibility for the client to vary the inner/outer CH however it chooses, then we will also have to depend on clients to figure out how to pad well for whatever they do. If we constrain the inner/outer variance a lot, then we could do better. 
It also strikes me that the minimuminnerlength is sort of in tension with the idea of compression - if a server has to set the minimuminnerlength to 300ish to handle clients that don't compress, then clients that do compress either don't get the benefit or have to ignore the minimuminnerlength.\nI don't think this PR works. The server does not have enough information to report a useful value, and the value the server reports is not useful to maintain good anonymity sets. The server does not know that the client supports some large extension or key share type (or lots of cipher suites, ALPN protocols, etc). are especially fun. It also doesn't know which extensions the client will compress from the outer ClientHello and which it will reencode. Conversely, the value the server sets isn't useful. If the client implementation has a smaller baseline compression and extensions, we waste bytes without much benefit. Worse, if the client implementation has a larger baseline compression and extensions, this PR regresses anonymity. The ClientHello will then exceed and we don't hide the lengths of the sensitive bits. Stepping back, we want to hide the distribution of ClientHelloInner lengths within some anonymity set. Most of the contributors to that length come from local client configuration: how many ciphers you support, what ALPN protocols, etc. The client is the one that knows, e.g., it sometimes asks servers for protocols A and B and sometimes for A and C. It largely has enough information to pad itself. The main exception is the server name. Realistically we need to trim anonymity sets down to colocated services, and the client doesn't know that distribution of names. Thus, we want servers to report the value in ECHOConfig. Maybe we'll need more such statistics later, which is what extensions are for. It's annoying this is a bit complicated, but we're signing up to encrypt far more things now. I should note this presumes an anonymity set of similarly-configured clients. 
I think, realistically, that's all we can say rigorous things about. It's nice to reduce visible artifacts of configuration, which is why we additionally round up to multiples of 16, but that's something we can burn into the protocol without ECHOConfig support.\nHiya, Sorry, not seeing that last. The PR just says (or was meant to say) to pad to some multiple of 16 in that case which I agree isn't much guidance, but does allow doing something that works. (As well as allowing things that don't work well;-) I agree that all this really needs to be driven by the client. Well, yes and no. I don't believe a server config is useful for that, (as I've argued before and still do;-) ISTM the max server name value will almost always be set to 254. May as well tell the client to pad as do that IMO. Not quite sure if I agree with that last para or not TBH! S.\nWell, if we were satisfied just padding names to a multiple of 16, we wouldn't need any of this. We'd just say to pad to a multiple 16 and move on. :-) In the current name-focused spelling, the client is always able to apply a smarter padding decision based on the server's knowledge of the name distribution. With this PR, clients whose local configuration exceeds or is near lose this and hit the baseline multiple of 16 behavior. Clients whose local configuration is decently below do hide the name distribution, but using many more bytes than they would with the current text. We have some metrics in Chrome on the distribution of DNS names. It's nowhere near 254. But, regardless, if you don't believe a server config is even useful for the name and agree with me that other extensions ought to be client-driven, I'm confused, why this PR? This PR seems to go in the opposite direction. Hehe. Which part?\nHIya, Yes. That's IMO the downside of having any server config. (To answer your question below, I prefer not having any server config at all but proposed this as I think it's better than draft-06.) 
Having also looked at some passive DNS stuff, I fully agree. But ISTM for many scenarios the entity that generates the ECHOConfig won't know the distribution of names in use (e.g. whoever configures OpenSSL won't, same with Apache etc.) so it's likely they'll have to go for whatever is the max (i.e. 254). Same is true for anyone with a wild-card server cert in play somewhere. See above. I'm trying for either a modest improvement over draft-06 or for us all to arrive at a real improvement by ditching the server config entirely:-) I like the last sentence. I think I dislike the 1st:-) Cheers, S.\nIt's not a downside of the draft-06 formulation, is it? The client learns something targeted to the name and can pad accordingly. Even with a wildcard server cert, the server should still know the distribution. For instance, isn't going to see anything terribly long most of the time. (Looks like the registration limit is 39 characters.) For reference, this is how long a 254-character hostname is: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa In order to enjoy wide adoption, ECHO can't be too much of a waste of resources. I think it thus makes sense to target a different maximum here. I guess my view is that, because of this issue above, this is a regression over the draft-06 formulation, even though it looks more general.\nHiya, Draft-06 assumes that no other inner CH content has the property that server guidance will help. My claim is that that guidance is useless in practice, and therefore unnecessary. I think the server config'd length in the PR is a little less useless:-) Some servers will. Some will not. Many (most?) server ECHOConfig instances will make use of default settings. 
Those'll pick 254 because adding a new VirtualHost to an apache instance (or similar) won't be co-ordinated with making a new ECHOConfig in general. And even if it were, (but it won't!) we get a TTL's worth of mismatch. If we don't provide a way for clients to handle the case where the server config is too short, again servers will publish 254. I am more interested in random smaller web sites/hosters myself. I think ECHO should work for those as well as we can make it, (albeit with smaller anonymity sets), and this tie between web server config and ECHOConfig is just a bad plan for such. AFAIK, all current deployments do just that. To be fair, that's largely driven by CF's draft-02 deployment, but I think that backs up my argument that if there's a max, that'll almost always be chosen, which implies it's a useless field that leads to the inefficiency you note. Then it can't be the max:-) The 254 value matches the max for servername IIRC. Ok. We disagree. But what do you think the client should do? Draft-06 is IMO plainly incorrect in that it says to pad in a way that'd expose other lengths should those be sensitive. If your answer were pad the SNI to at least ECHOConfig.maximumname_length and then make the overall a multiple of 16, with as many added 16 octets blocks as the client chooses, then a) that's != draft-06 and b) is quite close to this PR and c) just failing if the actual SNI in the inner CH is longer than the config setting seems quite wrong to me and d) hardcodes the assumption that SNI is the only thing where the server may know better. Cheers, S.\nIf we come up with other places where server guidance is useful, that can be addressed with extensions. But, gotcha, I suppose that is where we disagree. I think server guidance on the overall ClientHello size is entirely useless, while server guidance on the name may be useful. Agreed that it should also work for smaller things. I don't follow how this PR helps in that regard. 
It seems they still need to come up with a random value or a default, but now it's even less clear how to set it well. Er, yes, sorry I meant to say target a different public length. That is, 254 is so far from the maximum domain name in practice (As it should be! Hard length limits ought to be comfortably away from the actual requirement.), so padding up to it is wasteful. I think it would be better to pad up to a tighter value and, if we exceed it, accept that we only have the fallback multiple of 16 padding. (Or maybe a different strategy.) It does result in smaller anonymity sets in edge cases, but I think that's a reasonable default tradeoff given other desires like broader deployment and not wasting too much of users' data. I'm thinking something along the lines of: For each field, determine how much it makes sense to pad given what kinds of things it sends. If it varies ALPN, maybe round up to the largest of those. Most of this can be determined without server help. For fields where server help is useful, like the name, apply that help. Right now it's simply a function. I'm also open to other strategies. Sum all that padding together. In order to generally reduce entropy across different kinds of clients, maybe apply some additional overall padding, like rounding up to a multiple of 16, which cuts down the state space by 16x and doesn't cost much. (Or maybe we should round up to the nearest ? That avoids higher resolution on longer values while bounding the fraction of bytes wasted by this step.) That said, it's worth keeping in mind that clients already vary quite a bit in the cleartext portions of the ClientHello. There's a bit of tension between wanting to look like the other clients and wanting to ship new security improvements. This is different from this PR because this PR asks the server to report a value including things it doesn't know anything about (how large the client-controlled parameters are). I don't quite follow (c). 
It seems this PR and the above proposal have roughly the same failure modes when something is too long. The difference is that 's failure mode is only tripped on particular long names (something the server knows the distribution of), while this PR's equivalent failure mode is based on factors outside the server's control, yet the server is responsible for the value.
I still think step 2 would be better as something like: ECHOConfig contains maxnamelen as before but we RECOMMEND it be zero and a) if the name to be padded is longer than maxnamelen, then names will be padded as follows: P=32-(len(name)%32)+headsortails*32, or b) if the name to be padded is shorter than maxnamelen then we just do as the server asked and pad to that length. That (hopefully:-) means a default of \"pad out to the next multiple of 32, then toss a coin and add another 32 if the coin landed as heads.\" If we landed somewhere there, I'd be happy to make this PR say that or to make a new one or whatever. According to some data I have the CDFs for names and for these padded lengths would be as follows:
Len, CDF, Padded-CDF
32, 81.28, 40.64
64, 97.87, 89.57
96, 99.31, 98.59
128, 99.87, 99.59
That means that 81.28% of names are <32 octets long and 40.64% of name+namepadding will be 32 octets long. 97.87% of names are <64 octets and 89.57% of name+namepadding are 64 octets long. I think we should RECOMMEND setting maxnamelen to zero unless an ESNI key generator has a specific reason to set it to a non-zero value. ISTM the use-case for non-zero values is where the anonymity set has only a few very long names that with length differences of ECHO. I'm basically fine that we default to not serving that use-case well by default, as I don't believe it's real. (But I could be wrong;-) Cheers, S.\nCould the sender always pad their packets to at least 1200 bytes or something, using an encrypted extension in the ClientHello? I don't understand the focus on server input to a padding function that would ideally fill out one packet.\nNAME what's the value in adding zero or one 32B blocks after rounding? Why not just stick with what was done, say, in RFC8467? (We could make it read something like, \"clients may add zero or one blocks.\" Either way, I'm more or less fine with the suggested change. 
Would you mind updating this PR to match?)\nPer above, clients could use help from servers in choosing appropriate padding lengths. (Only servers know their name anonymity set.)\nHiya, Say if the anonymity set has lots of names <32 length (~81% of names in the data I have) but a small number between 32 and 64 (~16% in my data). Then this should disguise the use of the longer names at the expense of sending more octets. Would be v. happy to know if that made sense to others. Cheers, S.\nA friendly amendment below (but one I think is important)... I think that last really needs to be \"Only servers can know their name anonymity set. But many servers will not know that, either due to wildcard certs or because names/vhosts are added and removed all the time independent of whatever is in the DNS.\" Cheers, S.\nIt makes sense, I'm just not sure it's worth the cost. That said, this will always be an imperfect solution, so I'm happy either way.\nYep. Me neither. It is cheaper than padding to 254 octets for everyone though;-) Partly, I'm not that sympathetic towards anyone who wants to hide a crazily-long server_name. OTOH, I guess there may be a few who can't easily change names so we probably ought try do something for 'em. Cheers, S.\nAgree that only servers could know their name anonymity set. But what is the cost of always padding out the ClientHello so it approaches the MTU? Just trying to understand why the spec attempts to economize on the number of bytes in the ClientHello packet.\nAs far as I know, there's no reason other than folks might not want to always \"waste\" these bytes.\nPadding the ClientHello up to an MTU isn't free. See the numbers here on what a 400-byte increase costs. URL Likewise, also from that post and a , post-quantum key shares will likely exceed a packet anyway. 
I think our padding scheme should be designed with that expectation in mind (e.g., the mechanism fails this because a PQ key share in the inner ClientHello will blow past the built-in server assumptions on client behavior). Canary builds of Chrome already have code for such a thing behind a flag. Edit: Added a link to the later experiment, which I'd forgotten was a separate post.\nIt's not quite clear what's going on in those results. For example, the mobile results are close to zero. Without seeing confidence intervals, and the distributions of the full packet sizes, I don't think this experiment settles this issue (although it could, with more detail). Also, is there a theory on /why/ 400 bytes might add latency? If we're going to design the padding mechanism with PQ key shares in mind, but not conventional ones, that should be in the document.\nNAME would know the details of that experiment, but I don't think the idea that sending more bytes takes more time would be terribly surprising! :-P I don't think anyone's suggested designing without conventional key shares in mind. They are what's deployed today, after all. However, we shouldn't design only for a small unpadded ClientHello. PQ key shares will need larger ones. Session tickets also go in the ClientHello, and it's common for TLS servers to stick client certificates inside tickets, so you can already get large ones today already.\nLet's assume the results for \"SI\" in the first link actually do show a significant delta. The question is whether per-packet costs dominate or per-byte costs do, and whether the experiment sometimes increased the number of packets (e.g. by sometimes bumping the ultimate ClientHello size above 1280). See Section 5.1 in URL for device performance metrics on bytes vs packets. Looking at the ClientHello traffic on my computer, I can see that Safari is sending ClientHello messages that are about 500-600 bytes. My copy of Chrome seems to be sending CurveCECPQ2 (16696?) 
shares, and its ClientHello messages are fragmented.\nNAME will you be able to update this as above?\nwould the spec allow me to put an arbitrarily-sized extension in the encrypted part of the ClientHello? if so, I don't care about the padding schemes.\nYes, of course! The padding policies described here are guidance at best.\nI've updated the PR to try reflect the discussion above. Please take a peek and see if it seems useful now.\nThis looks like the right idea. I'll review this again as well, but it's probably better to wait for NAME suggestions to be addressed.\nThanks for all the changes Chris. I think I've actioned all those as requested. If not, I tried, but failed:-)\nNAME do you need any help moving this forward?\nDoing other stuff today but feel free to take the text and then do more edits. Or I can look at it tomorrow. Either's fine by me. S.\nThanks for taking the time. None of my comments are strongly-held opinions (meaning they only need to be addressed at all if others agree that the issues are important).\nStrong +1 to NAME -- thank you for the work here, NAME\nNAME can you please resolve conflicts? NAME NAME can you please review?\nFair point. The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. And I'd guess that only a few names in that set would be popular/common, so we could be giving away the game whenever an ECHO is 32 octets longer than the usual. The random padding could mitigate that passive attack but yes would be vulnerable to an active reset based counter. I'd also be fine with just living with the issue in which case we might want to note that using ECHO with uncommonly long names is a bad idea.\nLet's just drop the random addition. 
Servers don't (can't) do a padding check anyway, so clients can always choose to make this longer if desired.\nNAME can you update this PR and resolve the conflicts?\n\"Fair point. The argument I'd make for keeping this is that if an anonymity set has ~80% of names <32 octets, then if we do nothing, passive observation will probably reveal when the server name is in the longest ~20%. \" Which is precisely why we want the server to provide the longest name, so the client can pad to it.\nAs discussed on today's call, let's do the following: Resolve merge conflicts. Drop random padding for the SNI. And add text which says that if the client's name is larger than maxnamelength, it rounds up to the nearest 32B or 64B boundary. NAME do you need, or would you like, any help with this?\nI think that this approach is generally better, but it runs afoul of divergence in client configuration that could dominate the calculation. For instance, a client that sends two key shares will have very different padding requirements to one that sends only one. For that to work, we need to assume that implementation-/deployment-specific variation of the inner CH is largely eliminated as a result of the compression scheme we adopt. That also assumes that these variations in client behaviour are not privacy-sensitive. I think that we should be directly acknowledging those assumptions if that is indeed the case. If there is a significant variation in size due to client-specific measures, then we're in a bit of an awkward position. That might render any similarly simple scheme ineffective. For instance, a client that later supports a set of ALPN identifiers that could vary in length depending on context cannot use this signal directly. It has to apply its own logic about the padding of that extension. That a client is doing this won't be known to a server that deploys this today. 
In that case, the advice from the server (which knows most about the anonymity set across which it is hoping to spread the resulting CH), isn't going to be forced through an additional filter at the client. The results are unlikely to be good without a great deal of care.I am generally fine with this, with the exception of the random padding, which I think should be omitted. It's hard to analyze and the value is unclear. In particular, many clients will retry if they receive TCP errors, and so the attacker can learn information about the true minimum value by forging TCP RSTs and looking at the new CH.", "new_text": "11.1. IANA is requested to create an entry, encrypted_server_name(0xffce), in the existing registry for ExtensionType (defined in RFC8446), with \"TLS 1.3\" column values being set to \"CH, EE\", and \"Recommended\" column being set to \"Yes\". 11.2. IANA is requested to create an entry, echo_required(121) in the existing registry for Alerts (defined in RFC8446), with the \"DTLS-OK\""}
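The coin-flip padding formula quoted in the comment thread above (P=32-(len(name)%32)+headsortails*32, honoring the server's maxnamelen when it covers the name) can be sketched as follows. This is a hypothetical illustration of the proposal under discussion, not text from the draft; the function name and parameters are invented here.

```python
import secrets

def padded_name_length(name_len: int, max_name_len: int = 0) -> int:
    # If the server-advertised maximum covers the name, pad to exactly
    # that length, following the server's guidance (case b in the thread).
    if 0 < name_len <= max_name_len:
        return max_name_len
    # Otherwise pad to the next multiple of 32 and, on a coin flip,
    # add one more 32-octet block ("heads or tails") (case a).
    coin = secrets.randbelow(2)
    return name_len + (32 - name_len % 32) + coin * 32
```

For example, a 10-octet name with no server guidance lands on 32 or 64 octets, so per the CDF quoted above roughly 81% of names would collapse into the first one or two buckets.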
{"id": "q-en-draft-ietf-tls-esni-c27929b83fcae7f0af06a53014d50efb17af703a814f023617f329e93332b81f", "old_text": "has been unable to develop a completely generic solution. I-D.ietf- tls-sni-encryption provides a description of the problem space and some of the proposed techniques. One of the more difficult problems is \"Do not stick out\" (I-D.ietf-tls-sni-encryption; Section 3.4): if only sensitive/private services use SNI encryption, then SNI encryption is a signal that a client is going to such a service. For this reason, much recent work has focused on concealing the fact that", "comments": "This addresses . cc NAME NAME NAME NAME NAME\nThis looks fine, though I'll note that since an HTTPSSVC record is public and the outer ClientHello record is public, it's not unreasonable to pull values from there to put in the public fields since the observer already knows (or can get) them. Restricting them to come only from the ECHOConfig seems unnecessary. I think the key observation in this PR is that the outer ClientHello isn't really offering to establish a TLS connection -- it's a not-crazy-looking wrapper for the actual ClientHello inside the ECHO extension, and the vehicle for the recovery flow. The only things a server can legitimately do are respond to the inner ClientHello or do a recovery handshake. Thus, most of the things you might set on the outer ClientHello are irrelevant; they can be chosen to look innocuous, chosen at random, or copied from the inner ClientHello as the implementation chooses.\nNAME does this seem reasonable to you?\nSeems reasonable.", "new_text": "has been unable to develop a completely generic solution. I-D.ietf- tls-sni-encryption provides a description of the problem space and some of the proposed techniques. 
One of the more difficult problems is \"Do not stick out\" (I-D.ietf-tls-sni-encryption, Section 3.4): if only sensitive/private services use SNI encryption, then SNI encryption is a signal that a client is going to such a service. For this reason, much recent work has focused on concealing the fact that"}
{"id": "q-en-draft-ietf-tls-esni-c27929b83fcae7f0af06a53014d50efb17af703a814f023617f329e93332b81f", "old_text": "\"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. All TLS notation comes from RFC8446; Section 3. 3.", "comments": "This addresses . cc NAME NAME NAME NAME NAME\nThis looks fine, though I'll note that since an HTTPSSVC record is public and the outer ClientHello record is public, it's not unreasonable to pull values from there to put in the public fields since the observer already knows (or can get) them. Restricting them to come only from the ECHOConfig seems unnecessary. I think the key observation in this PR is that the outer ClientHello isn't really offering to establish a TLS connection -- it's a not-crazy-looking wrapper for the actual ClientHello inside the ECHO extension, and the vehicle for the recovery flow. The only things a server can legitimately do are respond to the inner ClientHello or do a recovery handshake. Thus, most of the things you might set on the outer ClientHello are irrelevant; they can be chosen to look innocuous, chosen at random, or copied from the inner ClientHello as the implementation chooses.\nNAME does this seem reasonable to you?\nSeems reasonable.", "new_text": "\"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. All TLS notation comes from RFC8446, Section 3. 3."}
{"id": "q-en-draft-ietf-tls-esni-c27929b83fcae7f0af06a53014d50efb17af703a814f023617f329e93332b81f", "old_text": "maximum_name_length is greater than 256. A list of extensions that the client can take into consideration when generating a Client Hello message. The purpose of the field is to provide room for additional features in the future. The format is defined in RFC8446; Section 4.2. The same interpretation rules apply: extensions MAY appear in any order, but there MUST NOT be more than one extension of the same type in the extensions block. An extension may be tagged as mandatory by using an extension type codepoint with the high order bit set to 1. A client which receives a mandatory extension they do not understand must reject the ECHOConfig content. Clients MUST parse the extension list and check for unsupported mandatory extensions. If an unsupported mandatory extension is", "comments": "This addresses . cc NAME NAME NAME NAME NAME\nThis looks fine, though I'll note that since an HTTPSSVC record is public and the outer ClientHello record is public, it's not unreasonable to pull values from there to put in the public fields since the observer already knows (or can get) them. Restricting them to come only from the ECHOConfig seems unnecessary. I think the key observation in this PR is that the outer ClientHello isn't really offering to establish a TLS connection -- it's a not-crazy-looking wrapper for the actual ClientHello inside the ECHO extension, and the vehicle for the recovery flow. The only things a server can legitimately do are respond to the inner ClientHello or do a recovery handshake. Thus, most of the things you might set on the outer ClientHello are irrelevant; they can be chosen to look innocuous, chosen at random, or copied from the inner ClientHello as the implementation chooses.\nNAME does this seem reasonable to you?\nSeems reasonable.", "new_text": "maximum_name_length is greater than 256. A list of extensions that the client can take into consideration when generating a ClientHello message. The purpose of the field is to provide room for additional functionality in the future. See config-extensions for guidance on what type of extensions are appropriate for this structure. The format is defined in RFC8446, Section 4.2. The same interpretation rules apply: extensions MAY appear in any order, but there MUST NOT be more than one extension of the same type in the extensions block. An extension can be tagged as mandatory by using an extension type codepoint with the high order bit set to 1. A client which receives a mandatory extension they do not understand MUST reject the ECHOConfig content. Clients MUST parse the extension list and check for unsupported mandatory extensions. If an unsupported mandatory extension is"}
{"id": "q-en-draft-ietf-tls-esni-c27929b83fcae7f0af06a53014d50efb17af703a814f023617f329e93332b81f", "old_text": "9.2. A more serious problem is MITM proxies which do not support this extension. RFC8446; Section 9.3 requires that such proxies remove any extensions they do not understand. The handshake will then present a certificate based on the public name, without echoing the \"encrypted_client_hello\" extension to the client.", "comments": "This addresses . cc NAME NAME NAME NAME NAME\nThis looks fine, though I'll note that since an HTTPSSVC record is public and the outer ClientHello record is public, it's not unreasonable to pull values from there to put in the public fields since the observer already knows (or can get) them. Restricting them to come only from the ECHOConfig seems unnecessary. I think the key observation in this PR is that the outer ClientHello isn't really offering to establish a TLS connection -- it's a not-crazy-looking wrapper for the actual ClientHello inside the ECHO extension, and the vehicle for the recovery flow. The only things a server can legitimately do are respond to the inner ClientHello or do a recovery handshake. Thus, most of the things you might set on the outer ClientHello are irrelevant; they can be chosen to look innocuous, chosen at random, or copied from the inner ClientHello as the implementation chooses.\nNAME does this seem reasonable to you?\nSeems reasonable.", "new_text": "9.2. A more serious problem is MITM proxies which do not support this extension. RFC8446, Section 9.3 requires that such proxies remove any extensions they do not understand. The handshake will then present a certificate based on the public name, without echoing the \"encrypted_client_hello\" extension to the client."}
{"id": "q-en-draft-ietf-tls-esni-c27929b83fcae7f0af06a53014d50efb17af703a814f023617f329e93332b81f", "old_text": "IANA is requested to create an entry, echo_required(121) in the existing registry for Alerts (defined in RFC8446), with the \"DTLS-OK\" column being set to \"Y\". ", "comments": "This addresses . cc NAME NAME NAME NAME NAME\nThis looks fine, though I'll note that since an HTTPSSVC record is public and the outer ClientHello record is public, it's not unreasonable to pull values from there to put in the public fields since the observer already knows (or can get) them. Restricting them to come only from the ECHOConfig seems unnecessary. I think the key observation in this PR is that the outer ClientHello isn't really offering to establish a TLS connection -- it's a not-crazy-looking wrapper for the actual ClientHello inside the ECHO extension, and the vehicle for the recovery flow. The only things a server can legitimately do are respond to the inner ClientHello or do a recovery handshake. Thus, most of the things you might set on the outer ClientHello are irrelevant; they can be chosen to look innocuous, chosen at random, or copied from the inner ClientHello as the implementation chooses.\nNAME does this seem reasonable to you?\nSeems reasonable.", "new_text": "IANA is requested to create an entry, echo_required(121) in the existing registry for Alerts (defined in RFC8446), with the \"DTLS-OK\" column being set to \"Y\". 12. Any future information or hints that influence the outer ClientHello SHOULD be specified as ECHOConfig extensions, or in an entirely new version of ECHOConfig. This is primarily because the outer ClientHello exists only in support of ECHO. Namely, it is both an envelope for the encrypted inner ClientHello and enabler for authenticated key mismatch signals (see server-behavior). In contrast, the inner ClientHello is the true ClientHello used upon ECHO negotiation. "}
{"id": "q-en-draft-ietf-tls-esni-bdf9e0081c4b23eca846ddb04e352e1a558754300a5ce990e6ba25cd1bc9fbf0", "old_text": "match a value provided in the corresponding \"ECHConfig.cipher_suites\" list. A cryptographic hash of the \"ECHConfig\" structure from which the ECH key was obtained, i.e., from the first byte of \"version\" to the end of the structure. This hash is computed using the hash function associated with \"cipher_suite\", i.e., the corresponding HPKE KDF algorithm hash. The HPKE encapsulated key, used by servers to decrypt the corresponding \"encrypted_ch\" field.", "comments": "Per NAME suggestion, this replaces computations involving the \"hash underlying the KDF\" with evaluation of the KDF itself. This respects the API defined in the HPKE draft. This supersedes .\nHey NAME and NAME I rolled up your changes! Please have another look before merging.\nRebased and rolled up.\nThanks -- let's let this stew until Friday, at which point we can merge.\nRebased.\nField is computed as \"digest of the complete ClientHelloInner\" (including presumably, the \"outer_extensions\" extension). I don't understand the analytical value of this hash. Can anyone explain its reasoning?\nThe hash is intended to prevent the attacker from influencing the reconstructed CHInner by manipulating CHOuter. Consider the contrived example of an extension which is separately protected but only works for SNI-A and not SNI-B. E.g., it's constructed by doing E(K_sni, SNI). If that extension were included in CHOuter and then imported into CHInner via this mechanism, then an attacker could substitute it and use a reaction attack to determine whether the CH referred to A or B. However, with the hash (which may be not well-specified), this attack gets caught by the client-facing server and the handshake is rejected without leaking information.\nMy initial understanding of this mechanism was incorrect. What happens is that the hash is computed over the ClientHelloInner that is consumed by the backend server. This is intended to ensure that if ECH accepts, then the backend server consumes the CH intended by the client. (PR URL makes this more clear, I think.) I wonder if this mechanism is redundant. Agreement on the CH between the client and backend server is provided by the binding of the transcript hash to the handshake traffic secret, so any attempt to manipulate the ClientHelloInner by fiddling with the ClientHelloOuter should be detected. On the other hand, the binding of the outer extensions to ClientHelloInner ensures agreement on the CH prior to the backend server consuming it. This is a strictly stronger property, which may or may not be useful. I agree that the attack works, but the example seems really contrived :) I'd be interested if there are more natural extensions for which this extra layer of security would be useful.\nFrom the interim meeting, the consensus seems to be that we should err on the conservative side.\nLet's close this issue and leave the mechanism as is.\nHrm, before closing I think it might be worth noting, in the spec, the stronger security property this provides.\nI think we should also consider binding the CHInner to a hash of the whole CHOuter (minus the ClientEncryptedCH extension). That would have the same size overhead, but would give us a stronger property: modifying anything in CHOuter (not just one of the copied extensions) invalidates the whole thing.\nThe point of hashing outer extensions is to provide integrity of the ClientHelloInner. What would be the point in hashing more of the ClientHelloOuter? (Not a rhetorical question :) It seems conceptually different, and I'm curious what the security benefit might be.)\nIf we authenticate the ClientHelloOuter, we'd be authenticating all the inputs that go into the ClientHelloInner, so I think that would also provide integrity. I expect Ben is suggesting this in the context of URL, where GREASE can additionally be made a little stronger if we did that. Getting that to interact well with any PSK binders in ClientHelloOuter is tricky though, unless we decide to ban resumption attempts in outer ClientHellos. (Probably best to keep the GREASE-related conversation in one place. Though I see I didn't help on that front because some discussion is also in . Ah, GitHub.)\nYep, I'm just echoing NAME observation. See also URL\nToday, the only thing that gets copied into ClientHelloInner is extensions, so I don't see how it's an improvement in terms of our primary security goal. (Not sure about the \"don't stick out\" benefit.)\nI don't understand how hashing all of CHOuter is going to work. In split mode, the server never sees any of CHOuter.\nYes, the hash would be checked when reconstructing the inner ClientHello (as here), and would not be present in the reconstructed ClientHello.\nThere's not much discussion here. Unless there's objection, shall we leave the mechanism as-is and close out this issue? cc/ NAME\nWorks for me.\nFrom Section 6.1: This assumes that the KDF specifies some underlying hash function. From the HPKE spec, it seems like all that's required of the KDF is that it's an \"extract-then-expand\" KDF: URL In fact, the currently assigned code points are all for HKDF: URL But there's no requirement that the KDF always have some underlying hash function. As an alternative to HKDF, one can imagine a dedicated primitive that satisfies the extract-then-expand API but is not constructed from a hash function.\nImpacted fields: ClientEncryptedCH.record_digest URL\nWe can probably replace with and be done with it.\nPutting on my theorist hat, I'd say that the extract function is a \"randomness extractor\" and not a collision resistant hash function. We want the latter in this context. From a practical stand point, using the extract function is more hashing than we need. I'd suggest doing away with (see ) and figuring something else out for . Could we simply require HKDF be used with HPKE?\nSure, but in practice randomness extractors use a collision resistant hash or PRF (CBC) under the hood. I don't think we can require HKDF for HPKE. That seems too restrictive. We might just say that ECH requires HKDF-based HPKE ciphersuites, as HKDF is already paramount for TLS 1.3, and then just reference HKDF's hash function.\naddresses this issue.\nAs I mentioned on the list, we can use the KDF in place of a hash.\nI tend to agree. I'll create a new PR.\nHave a look at , NAME\nResolved in . Closing.", "new_text": "match a value provided in the corresponding \"ECHConfig.cipher_suites\" list. Equal to \"Expand(Extract(\"\", config), \"ech_config_digest\", Nh)\", where \"config\" is the \"ECHConfig\" structure and \"Extract\", \"Expand\", and \"Nh\" are as specified by the cipher suite KDF. (Passing the literal \"\"\"\" as the salt is interpreted by \"Extract\" as no salt being provided.) The HPKE encapsulated key, used by servers to decrypt the corresponding \"encrypted_ch\" field."}
{"id": "q-en-draft-ietf-tls-esni-bdf9e0081c4b23eca846ddb04e352e1a558754300a5ce990e6ba25cd1bc9fbf0", "old_text": "When sending ClientHello, the client first computes ClientHelloInner, including any PSK binders, and then MAY substitute extensions which it knows will be duplicated in ClientHelloOuter. To do so, the client computes a hash of the entire ClientHelloInner message with the same hash as for the KDF used to encrypt ClientHelloInner. Then, the client removes and replaces extensions from ClientHelloInner with a single \"outer_extensions\" extension. The list of \"outer_extensions\" include those which were removed from ClientHelloInner, in the order in which they were removed. The hash contains the full ClientHelloInner hash computed above.", "comments": "Per NAME suggestion, this replaces computations involving the \"hash underlying the KDF\" with evaluation of the KDF itself. This respects the API defined in the HPKE draft. This supersedes .\nHey NAME and NAME I rolled up your changes! Please have another look before merging.\nRebased and rolled up.\nThanks -- let's let this stew until Friday, at which point we can merge.\nRebased.\nField is computed as \"digest of the complete ClientHelloInner\" (including presumably, the \"outer_extensions\" extension). I don't understand the analytical value of this hash. Can anyone explain its reasoning?\nThe hash is intended to prevent the attacker from influencing the reconstructed CHInner by manipulating CHOuter. Consider the contrived example of an extension which is separately protected but only works for SNI-A and not SNI-B. E.g., it's constructed by doing E(K_sni, SNI). If that extension were included in CHOuter and then imported into CHInner via this mechanism, then an attacker could substitute it and use a reaction attack to determine whether the CH referred to A or B. However, with the hash (which may be not well-specified), this attack gets caught by the client-facing server and the handshake is rejected without leaking information.\nMy initial understanding of this mechanism was incorrect. What happens is that the hash is computed over the ClientHelloInner that is consumed by the backend server. This is intended to ensure that if ECH accepts, then the backend server consumes the CH intended by the client. (PR URL makes this more clear, I think.) I wonder if this mechanism is redundant. Agreement on the CH between the client and backend server is provided by the binding of the transcript hash to the handshake traffic secret, so any attempt to manipulate the ClientHelloInner by fiddling with the ClientHelloOuter should be detected. On the other hand, the binding of the outer extensions to ClientHelloInner ensures agreement on the CH prior to the backend server consuming it. This is a strictly stronger property, which may or may not be useful. I agree that the attack works, but the example seems really contrived :) I'd be interested if there are more natural extensions for which this extra layer of security would be useful.\nFrom the interim meeting, the consensus seems to be that we should err on the conservative side.\nLet's close this issue and leave the mechanism as is.\nHrm, before closing I think it might be worth noting, in the spec, the stronger security property this provides.\nI think we should also consider binding the CHInner to a hash of the whole CHOuter (minus the ClientEncryptedCH extension). That would have the same size overhead, but would give us a stronger property: modifying anything in CHOuter (not just one of the copied extensions) invalidates the whole thing.\nThe point of hashing outer extensions is to provide integrity of the ClientHelloInner. What would be the point in hashing more of the ClientHelloOuter? (Not a rhetorical question :) It seems conceptually different, and I'm curious what the security benefit might be.)\nIf we authenticate the ClientHelloOuter, we'd be authenticating all the inputs that go into the ClientHelloInner, so I think that would also provide integrity. I expect Ben is suggesting this in the context of URL, where GREASE can additionally be made a little stronger if we did that. Getting that to interact well with any PSK binders in ClientHelloOuter is tricky though, unless we decide to ban resumption attempts in outer ClientHellos. (Probably best to keep the GREASE-related conversation in one place. Though I see I didn't help on that front because some discussion is also in . Ah, GitHub.)\nYep, I'm just echoing NAME observation. See also URL\nToday, the only thing that gets copied into ClientHelloInner is extensions, so I don't see how it's an improvement in terms of our primary security goal. (Not sure about the \"don't stick out\" benefit.)\nI don't understand how hashing all of CHOuter is going to work. In split mode, the server never sees any of CHOuter.\nYes, the hash would be checked when reconstructing the inner ClientHello (as here), and would not be present in the reconstructed ClientHello.\nThere's not much discussion here. Unless there's objection, shall we leave the mechanism as-is and close out this issue? cc/ NAME\nWorks for me.\nFrom Section 6.1: This assumes that the KDF specifies some underlying hash function. From the HPKE spec, it seems like all that's required of the KDF is that it's an \"extract-then-expand\" KDF: URL In fact, the currently assigned code points are all for HKDF: URL But there's no requirement that the KDF always have some underlying hash function. As an alternative to HKDF, one can imagine a dedicated primitive that satisfies the extract-then-expand API but is not constructed from a hash function.\nImpacted fields: ClientEncryptedCH.record_digest URL\nWe can probably replace with and be done with it.\nPutting on my theorist hat, I'd say that the extract function is a \"randomness extractor\" and not a collision resistant hash function. We want the latter in this context. From a practical stand point, using the extract function is more hashing than we need. I'd suggest doing away with (see ) and figuring something else out for . Could we simply require HKDF be used with HPKE?\nSure, but in practice randomness extractors use a collision resistant hash or PRF (CBC) under the hood. I don't think we can require HKDF for HPKE. That seems too restrictive. We might just say that ECH requires HKDF-based HPKE ciphersuites, as HKDF is already paramount for TLS 1.3, and then just reference HKDF's hash function.\naddresses this issue.\nAs I mentioned on the list, we can use the KDF in place of a hash.\nI tend to agree. I'll create a new PR.\nHave a look at , NAME\nResolved in . Closing.", "new_text": "When sending ClientHello, the client first computes ClientHelloInner, including any PSK binders, and then MAY substitute extensions which it knows will be duplicated in ClientHelloOuter. To do so, the client computes a hash of the entire ClientHelloInner message as: where \"inner\" is the ClientHelloInner structure and \"Extract\", \"Expand\", and \"Nh\" are as defined by the KDF API in I-D.irtf-cfrg- hpke. Then, the client removes and replaces extensions from ClientHelloInner with a single \"outer_extensions\" extension. The list of \"outer_extensions\" include those which were removed from ClientHelloInner, in the order in which they were removed. The hash contains the full ClientHelloInner hash computed above."}
{"id": "q-en-draft-ietf-tls-esni-bdf9e0081c4b23eca846ddb04e352e1a558754300a5ce990e6ba25cd1bc9fbf0", "old_text": "server in the same session. Set the \"config_digest\" field to a randomly-generated string of hash_length bytes, where hash_length is the length of the hash function associated with the chosen HpkeCipherSuite. Set the \"enc\" field to a randomly-generated valid encapsulated public key output by the HPKE KEM.", "comments": "Per NAME suggestion, this replaces computations involving the \"hash underlying the KDF\" with evaluation of the KDF itself. This respects the API defined in the HPKE draft. This supersedes .\nHey NAME and NAME I rolled up your changes! Please have another look before merging.\nRebased and rolled up.\nThanks -- let's let this stew until Friday, at which point we can merge.\nRebased.\nField is computed as \"digest of the complete ClientHelloInner\" (including presumably, the \"outer_extensions\" extension). I don't understand the analytical value of this hash. Can anyone explain its reasoning?\nThe hash is intended to prevent the attacker from influencing the reconstructed CHInner by manipulating CHOuter. Consider the contrived example of an extension which is separately protected but only works for SNI-A and not SNI-B. E.g., it's constructed by doing E(K_sni, SNI). If that extension were included in CHOuter and then imported into CHInner via this mechanism, then an attacker could substitute it and use a reaction attack to determine whether the CH referred to A or B. However, with the hash (which may be not well-specified), this attack gets caught by the client-facing server and the handshake is rejected without leaking information.\nMy initial understanding of this mechanism was incorrect. What happens is that the hash is computed over the ClientHelloInner that is consumed by the backend server. This is intended to ensure that if ECH accepts, then the backend server consumes the CH intended by the client. (PR URL makes this more clear, I think.) I wonder if this mechanism is redundant. Agreement on the CH between the client and backend server is provided by the binding of the transcript hash to the handshake traffic secret, so any attempt to manipulate the ClientHelloInner by fiddling with the ClientHelloOuter should be detected. On the other hand, the binding of the outer extensions to ClientHelloInner ensures agreement on the CH prior to the backend server consuming it. This is a strictly stronger property, which may or may not be useful. I agree that the attack works, but the example seems really contrived :) I'd be interested if there are more natural extensions for which this extra layer of security would be useful.\nFrom the interim meeting, the consensus seems to be that we should err on the conservative side.\nLet's close this issue and leave the mechanism as is.\nHrm, before closing I think it might be worth noting, in the spec, the stronger security property this provides.\nI think we should also consider binding the CHInner to a hash of the whole CHOuter (minus the ClientEncryptedCH extension). That would have the same size overhead, but would give us a stronger property: modifying anything in CHOuter (not just one of the copied extensions) invalidates the whole thing.\nThe point of hashing outer extensions is to provide integrity of the ClientHelloInner. What would be the point in hashing more of the ClientHelloOuter? (Not a rhetorical question :) It seems conceptually different, and I'm curious what the security benefit might be.)\nIf we authenticate the ClientHelloOuter, we'd be authenticating all the inputs that go into the ClientHelloInner, so I think that would also provide integrity. I expect Ben is suggesting this in the context of URL, where GREASE can additionally be made a little stronger if we did that. Getting that to interact well with any PSK binders in ClientHelloOuter is tricky though, unless we decide to ban resumption attempts in outer ClientHellos. (Probably best to keep the GREASE-related conversation in one place. Though I see I didn't help on that front because some discussion is also in . Ah, GitHub.)\nYep, I'm just echoing NAME observation. See also URL\nToday, the only thing that gets copied into ClientHelloInner is extensions, so I don't see how it's an improvement in terms of our primary security goal. (Not sure about the \"don't stick out\" benefit.)\nI don't understand how hashing all of CHOuter is going to work. In split mode, the server never sees any of CHOuter.\nYes, the hash would be checked when reconstructing the inner ClientHello (as here), and would not be present in the reconstructed ClientHello.\nThere's not much discussion here. Unless there's objection, shall we leave the mechanism as-is and close out this issue? cc/ NAME\nWorks for me.\nFrom Section 6.1: This assumes that the KDF specifies some underlying hash function. From the HPKE spec, it seems like all that's required of the KDF is that it's an \"extract-then-expand\" KDF: URL In fact, the currently assigned code points are all for HKDF: URL But there's no requirement that the KDF always have some underlying hash function. As an alternative to HKDF, one can imagine a dedicated primitive that satisfies the extract-then-expand API but is not constructed from a hash function.\nImpacted fields: ClientEncryptedCH.record_digest URL\nWe can probably replace with and be done with it.\nPutting on my theorist hat, I'd say that the extract function is a \"randomness extractor\" and not a collision resistant hash function. We want the latter in this context. From a practical stand point, using the extract function is more hashing than we need. I'd suggest doing away with (see ) and figuring something else out for . Could we simply require HKDF be used with HPKE?\nSure, but in practice randomness extractors use a collision resistant hash or PRF (CBC) under the hood. I don't think we can require HKDF for HPKE. That seems too restrictive. We might just say that ECH requires HKDF-based HPKE ciphersuites, as HKDF is already paramount for TLS 1.3, and then just reference HKDF's hash function.\naddresses this issue.\nAs I mentioned on the list, we can use the KDF in place of a hash.\nI tend to agree. I'll create a new PR.\nHave a look at , NAME\nResolved in . Closing.", "new_text": "server in the same session. Set the \"config_digest\" field to a randomly-generated string of \"Nh\" bytes, where \"Nh\" is the output length of the \"Extract\" function of the KDF associated with the chosen cipher suite. (The KDF API is specified in I-D.irtf-cfrg-hpke.) Set the \"enc\" field to a randomly-generated valid encapsulated public key output by the HPKE KEM."}
{"id": "q-en-draft-ietf-tls-esni-bdf9e0081c4b23eca846ddb04e352e1a558754300a5ce990e6ba25cd1bc9fbf0", "old_text": "ClientEncryptedCH.encrypted_ch. This matching procedure should be done using one of the following two checks: Compare ClientEncryptedCH.config_digest against cryptographic hashes of known ECHConfig and choose the one that matches. Use trial decryption of ClientEncryptedCH.encrypted_ch with known ECHConfig and choose the one that succeeds.", "comments": "Per NAME suggestion, this replaces computations involving the \"hash underlying the KDF\" with evaluation of the KDF itself. This respects the API defined in the HPKE draft. This supersedes .\nHey NAME and NAME I rolled up your changes! Please have another look before merging.\nRebased and rolled up.\nThanks -- let's let this stew until Friday, at which point we can merge.\nRebased.\nField is computed as \"digest of the complete ClientHelloInner\" (including presumably, the \"outer_extensions\" extension). I don't understand the analytical value of this hash. Can anyone explain its reasoning?\nThe hash is intended to prevent the attacker from influencing the reconstructed CHInner by manipulating CHOuter. Consider the contrived example of an extension which is separately protected but only works for SNI-A and not SNI-B. E.g., it's constructed by doing E(K_sni, SNI). If that extension were included in CHOuter and then imported into CHInner via this mechanism, then an attacker could substitute it and use a reaction attack to determine whether the CH referred to A or B. However, with the hash (which may be not well-specified), this attack gets caught by the client-facing server and the handshake is rejected without leaking information.\nMy initial understanding of this mechanism was incorrect. What happens is that the hash is computed over the ClientHelloInner that is consumed by the backend server. This is intended to ensure that if ECH accepts, then the backend server consumes the CH intended by the client. (PR URL makes this more clear, I think.) I wonder if this mechanism is redundant. Agreement on the CH between the client and backend server is provided by the binding of the transcript hash to the handshake traffic secret, so any attempt to manipulate the ClientHelloInner by fiddling with the ClientHelloOuter should be detected. On the other hand, the binding of the outer extensions to ClientHelloInner ensures agreement on the CH prior to the backend server consuming it. This is a strictly stronger property, which may or may not be useful. I agree that the attack works, but the example seems really contrived :) I'd be interested if there are more natural extensions for which this extra layer of security would be useful.\nFrom the interim meeting, the consensus seems to be that we should err on the conservative side.\nLet's close this issue and leave the mechanism as is.\nHrm, before closing I think it might be worth noting, in the spec, the stronger security property this provides.\nI think we should also consider binding the CHInner to a hash of the whole CHOuter (minus the ClientEncryptedCH extension). That would have the same size overhead, but would give us a stronger property: modifying anything in CHOuter (not just one of the copied extensions) invalidates the whole thing.\nThe point of hashing outer extensions is to provide integrity of the ClientHelloInner. What would be the point in hashing more of the ClientHelloOuter? (Not a rhetorical question :) It seems conceptually different, and I'm curious what the security benefit might be.)\nIf we authenticate the ClientHelloOuter, we'd be authenticating all the inputs that go into the ClientHelloInner, so I think that would also provide integrity. I expect Ben is suggesting this in the context of URL, where GREASE can additionally be made a little stronger if we did that. Getting that to interact well with any PSK binders in ClientHelloOuter is tricky though, unless we decide to ban resumption attempts in outer ClientHellos. (Probably best to keep the GREASE-related conversation in one place. Though I see I didn't help on that front because some discussion is also in . Ah, GitHub.)\nYep, I'm just echoing NAME observation. See also URL\nToday, the only thing that gets copied into ClientHelloInner is extensions, so I don't see how it's an improvement in terms of our primary security goal. (Not sure about the \"don't stick out\" benefit.)\nI don't understand how hashing all of CHOuter is going to work. In split mode, the server never sees any of CHOuter.\nYes, the hash would be checked when reconstructing the inner ClientHello (as here), and would not be present in the reconstructed ClientHello.\nThere's not much discussion here. Unless there's objection, shall we leave the mechanism as-is and close out this issue? cc/ NAME\nWorks for me.\nFrom Section 6.1: This assumes that the KDF specifies some underlying hash function. From the HPKE spec, it seems like all that's required of the KDF is that it's an \"extract-then-expand\" KDF: URL In fact, the currently assigned code points are all for HKDF: URL But there's no requirement that the KDF always have some underlying hash function. As an alternative to HKDF, one can imagine a dedicated primitive that satisfies the extract-then-expand API but is not constructed from a hash function.\nImpacted fields: ClientEncryptedCH.record_digest URL\nWe can probably replace with and be done with it.\nPutting on my theorist hat, I'd say that the extract function is a \"randomness extractor\" and not a collision resistant hash function. We want the latter in this context. From a practical stand point, using the extract function is more hashing than we need. I'd suggest doing away with (see ) and figuring something else out for . Could we simply require HKDF be used with HPKE?\nSure, but in practice randomness extractors use a collision resistant hash or PRF (CBC) under the hood. I don't think we can require HKDF for HPKE. That seems too restrictive. We might just say that ECH requires HKDF-based HPKE ciphersuites, as HKDF is already paramount for TLS 1.3, and then just reference HKDF's hash function.\naddresses this issue.\nAs I mentioned on the list, we can use the KDF in place of a hash.\nI tend to agree. I'll create a new PR.\nHave a look at , NAME\nResolved in . Closing.", "new_text": "ClientEncryptedCH.encrypted_ch. This matching procedure should be done using one of the following two checks: Compare ClientEncryptedCH.config_digest against digests of known ECHConfig and choose the one that matches. Use trial decryption of ClientEncryptedCH.encrypted_ch with known ECHConfig and choose the one that succeeds."}
{"id": "q-en-draft-ietf-tls-esni-ed858a9c66a22dda0fb9f2dc7d26bdb94fa2606b7130cdaf40e8d1143fc199a9", "old_text": "When a client wants to establish a TLS session with the backend server, it constructs its ClientHello as usual (we will refer to this as the ClientHelloInner message) and then encrypts this message using the ECH public key. It then constructs a new ClientHello (ClientHelloOuter) with innocuous values for sensitive extensions, e.g., SNI, ALPN, etc., and with the encrypted ClientHelloInner in an \"encrypted_client_hello\" extension, which this document defines (encrypted-client-hello). Finally, it sends ClientHelloOuter to the server. Upon receiving the ClientHelloOuter, the client-facing server takes one of the following actions:", "comments": "In places, the current draft doesn't make a clear distinction between rejection and decryption failure. In particular \"cannot decrypt\" can be interpreted either: \"the client-facing server doesn't recognize the configuration specified by the client\"; or \"the client-facing server recognizes the configuration but decryption fails\". This change resolves this ambiguity. Addresses issue .\nTwo sections suggest to ignore the extension on decryption failure, but one suggests to send an alert. What should happen? Ignoring the extension on unrecognized failures sounds reasonable in order to \"not stick out\". However, it is not entirely unambiguous what to do if a valid ECHConfig is found but decryption fails. says: says: says:\nI think the first two statements apply when the config_digest doesn't correspond to a known ECHConfig (i.e., \"I can't decrypt this\"); the last is when the ECHConfig is known, but decryption fails (\"I should be able to decrypt this, but the ciphertext is invalid\"). The text could be a lot clearer here.\nNAME Yes, your interpretation is correct. 
I think this is the result of a number of uncoordinated changes.\nClosing as resolved.", "new_text": "When a client wants to establish a TLS session with the backend server, it constructs its ClientHello as usual (we will refer to this as the ClientHelloInner message) and then encrypts this message using the public key of the ECH configuration. It then constructs a new ClientHello (ClientHelloOuter) with innocuous values for sensitive extensions, e.g., SNI, ALPN, etc., and with an \"encrypted_client_hello\" extension, which this document defines (encrypted-client-hello). The extension's payload carries the encrypted ClientHelloInner and specifies the ECH configuration used for encryption. Finally, it sends ClientHelloOuter to the server. Upon receiving the ClientHelloOuter, the client-facing server takes one of the following actions:"}
{"id": "q-en-draft-ietf-tls-esni-ed858a9c66a22dda0fb9f2dc7d26bdb94fa2606b7130cdaf40e8d1143fc199a9", "old_text": "\"encrypted_client_hello\" extension and proceeds with the handshake as usual, per RFC8446, Section 4.1.2. If it supports ECH but cannot decrypt it, then it ignores the extension and proceeds with the handshake as usual. This is referred to as \"ECH rejection\". When ECH is rejected, the server sends an acceptable ECH configuration in its EncryptedExtensions message. If it supports ECH and can decrypt it, then it forwards the ClientHelloInner to the backend, who terminates the connection. This is referred to as \"ECH acceptance\". Upon receiving the server's response, the client determines whether ECH was accepted or rejected and proceeds with the handshake", "comments": "In places, the current draft doesn't make a clear distinction between rejection and decryption failure. In particular \"cannot decrypt\" can be interpreted either: \"the client-facing server doesn't recognize the configuration specified by the client\"; or \"the client-facing server recognizes the configuration but decryption fails\". This change resolves this ambiguity. Addresses issue .\nTwo sections suggest to ignore the extension on decryption failure, but one suggests to send an alert. What should happen? Ignoring the extension on unrecognized failures sounds reasonable in order to \"not stick out\". However, it is not entirely unambiguous what to do if a valid ECHConfig is found but decryption fails. says: says: says:\nI think the first two statements apply when the config_digest doesn't correspond to a known ECHConfig (i.e., \"I can't decrypt this\"); the last is when the ECHConfig is known, but decryption fails (\"I should be able to decrypt this, but the ciphertext is invalid\"). The text could be a lot clearer here.\nNAME Yes, your interpretation is correct. 
I think this is the result of a number of uncoordinated changes.\nClosing as resolved.", "new_text": "\"encrypted_client_hello\" extension and proceeds with the handshake as usual, per RFC8446, Section 4.1.2. If it supports ECH but does not recognize the configuration specified by the client, then it ignores the extension and terminates the handshake using the ClientHelloOuter. This is referred to as \"ECH rejection\". When ECH is rejected, the server sends an acceptable ECH configuration in its EncryptedExtensions message. If it supports ECH and recognizes the configuration, then it attempts to decrypt the ClientHelloInner. It aborts the handshake if decryption fails; otherwise it forwards the ClientHelloInner to the backend, who terminates the connection. This is referred to as \"ECH acceptance\". Upon receiving the server's response, the client determines whether ECH was accepted or rejected and proceeds with the handshake"}
{"id": "q-en-draft-ietf-tls-esni-ed858a9c66a22dda0fb9f2dc7d26bdb94fa2606b7130cdaf40e8d1143fc199a9", "old_text": "7.3.2.1. When the server cannot decrypt or does not process the \"encrypted_client_hello\" extension, it continues with the handshake using the plaintext \"server_name\" extension instead (see server- behavior). Clients that offer ECH then authenticate the connection", "comments": "In places, the current draft doesn't make a clear distinction between rejection and decryption failure. In particular \"cannot decrypt\" can be interpreted either: \"the client-facing server doesn't recognize the configuration specified by the client\"; or \"the client-facing server recognizes the configuration but decryption fails\". This change resolves this ambiguity. Addresses issue .\nTwo sections suggest to ignore the extension on decryption failure, but one suggests to send an alert. What should happen? Ignoring the extension on unrecognized failures sounds reasonable in order to \"not stick out\". However, it is not entirely unambiguous what to do if a valid ECHConfig is found but decryption fails. says: says: says:\nI think the first two statements apply when the config_digest doesn't correspond to a known ECHConfig (i.e., \"I can't decrypt this\"); the last is when the ECHConfig is known, but decryption fails (\"I should be able to decrypt this, but the ciphertext is invalid\"). The text could be a lot clearer here.\nNAME Yes, your interpretation is correct. I think this is the result of a number of uncoordinated changes.\nClosing as resolved.", "new_text": "7.3.2.1. When the server rejects ECH or otherwise ignores the \"encrypted_client_hello\" extension, it continues with the handshake using the plaintext \"server_name\" extension instead (see server- behavior). Clients that offer ECH then authenticate the connection"}
{"id": "q-en-draft-ietf-tls-esni-6c44be55ce661cbdb95b252804b9f3a08a9d118b324ca708761542127ca33909", "old_text": "suite it will use for encryption. It MUST NOT choose a cipher suite not advertised by the configuration. Next, the client constructs the ClientHelloOuter message just as it does a standard ClientHello, with the exception of the following rules: It MUST offer to negotiate TLS 1.3 or above. It MUST include an \"encrypted_client_hello\" extension with a payload constructed as described below. The value of \"ECHConfig.public_name\" MUST be placed in the \"server_name\" extension. It MUST NOT include the \"pre_shared_key\" extension. (See flow- clienthello-malleability.) The client then constructs the ClientHelloInner message just as it does a standard ClientHello, with the exception of the following rules: It MUST NOT offer to negotiate TLS 1.2 or below. Note this is necessary to ensure the backend server does not negotiate a TLS version that is incompatible with ECH. It MUST NOT offer to resume any session for TLS 1.2 and below. It MAY offer any other extension in the ClientHelloOuter except those that have been incorporated into the ClientHelloInner as described in outer-extensions. It MAY copy any other field from the ClientHelloOuter except ClientHelloOuter.random. Instead, It MUST generate a fresh ClientHelloInner.random using a secure random number generator. (See flow-client-reaction.) It SHOULD contain TLS padding RFC7685 as described in padding. If implementing TLS 1.3's compatibility mode (see Appendix D.4 of RFC8446), it MUST copy the legacy_session_id field from ClientHelloOuter. This allows the server to echo the correct session ID when ECH is negotiated. The client might duplicate non-sensitive extensions in both messages. However, implementations need to take care to ensure that sensitive extensions are not offered in the ClientHelloOuter. 
[[OPEN ISSUE: We", "comments": "There are two ClientHelloInners, the actual one used in the handshake, and the encoded one with outerextensions substituted in. Both are described as ClientHelloInner, but this makes it slightly ambiguous which goes into, say, the handshake transcript. (It's late now, so I'll leave this as an issue without a PR. If we don't end up resolving this separately, I'll probably fix this as part of . If we decide to implicitly copy over legacysession_id, the encryption payload becomes less and less defensibly an actual ClientHelloInner.)\nIn my own implementation, I've been referring to the plaintext payload as CompressedClientHelloInner. Which is terrible, but unambiguous.\nWorks for me. I was thinking EncodedClientHelloInner because it's shorter and still works even if you choose not to compress anything. (A no-op encoding is still an encoding.)\n+1 to EncodedClientHelloInner, given that compression is optional. NAME if you have time, could you whip up a PR?\nI noticed this while drafting a PR for . We're a little inconsistent about whether the ClientHelloInner or ClientHelloOuter is constructed first. Section 4 says inner before outer: Section 5.1 implies inner before outer: But section 6.1 says outer before inner: I think inner before outer makes more sense. You certainly can't finish the outer one before you've computed the inner one. (Of course, an implementation can pick whatever order is most convenient, but I think the spec should be consistent with itself.)", "new_text": "suite it will use for encryption. It MUST NOT choose a cipher suite not advertised by the configuration. Next, the client constructs the ClientHelloInner message just as it does a standard ClientHello, with the exception of the following rules: It MUST NOT offer to negotiate TLS 1.2 or below. Note this is necessary to ensure the backend server does not negotiate a TLS version that is incompatible with ECH. 
It MUST NOT offer to resume any session for TLS 1.2 and below. It SHOULD contain TLS padding RFC7685 as described in padding. The client then constructs the ClientHelloOuter message just as it does a standard ClientHello, with the exception of the following rules: It MUST offer to negotiate TLS 1.3 or above. Any extensions compressed as described in outer-extensions must match the ClientHelloInner. [[OPEN ISSUE: When #331 and compression ordering is resolved, be a bit more precise here.]] It MAY copy any other field from the ClientHelloInner except ClientHelloInner.random. Instead, It MUST generate a fresh ClientHelloOuter.random using a secure random number generator. (See flow-client-reaction.) If implementing TLS 1.3's compatibility mode (see Appendix D.4 of RFC8446), it MUST copy the legacy_session_id field from ClientHelloInner. This allows the server to echo the correct session ID when ECH is negotiated. It MUST include an \"encrypted_client_hello\" extension with a payload constructed as described below. The value of \"ECHConfig.public_name\" MUST be placed in the \"server_name\" extension. It MUST NOT include the \"pre_shared_key\" extension. (See flow- clienthello-malleability.) The client might duplicate non-sensitive extensions in both messages. However, implementations need to take care to ensure that sensitive extensions are not offered in the ClientHelloOuter. [[OPEN ISSUE: We"}
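The ClientHelloOuter construction rules in the record above (fresh random, copied legacy_session_id, public_name in "server_name", no "pre_shared_key") can be sketched as follows. The `ClientHello` model and all field names here are purely illustrative, not a real TLS library API:

```python
import os
from dataclasses import dataclass, field

# Illustrative-only model of the construction rules listed above; field
# names are hypothetical and do not correspond to a real TLS implementation.

@dataclass
class ClientHello:
    random: bytes
    legacy_session_id: bytes
    server_name: str
    extensions: dict = field(default_factory=dict)

def build_outer(inner: ClientHello, public_name: str,
                encrypted_payload: bytes) -> ClientHello:
    outer = ClientHello(
        random=os.urandom(32),                      # MUST be fresh, never copied
        legacy_session_id=inner.legacy_session_id,  # copied for compatibility mode
        server_name=public_name,                    # ECHConfig.public_name
        extensions=dict(inner.extensions),          # non-sensitive fields MAY be copied
    )
    outer.extensions.pop("pre_shared_key", None)    # MUST NOT appear in the outer CH
    outer.extensions["encrypted_client_hello"] = encrypted_payload
    return outer
```

Note the one deliberate asymmetry: everything else may be copied from the inner ClientHello, but the random must be generated fresh so the two messages are unlinkable.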
{"id": "q-en-draft-ietf-tls-esni-008f7b3176873586e3c9ff55c1595ad742178f2532beceef88f61aa977961cda", "old_text": "If sending a second ClientHello in response to a HelloRetryRequest, the client copies the entire \"encrypted_client_hello\" extension from the first ClientHello. If the server sends an \"encrypted_client_hello\" extension in either HelloRetryRequest or EncryptedExtensions, the client MUST check the", "comments": "But that is just how it is. We get to choose between that leakage and a known large amount of bustage.\nThis solution admits a straightforward, active distinguisher: just intercept CH1, respond with a fake HRR, and look at CH2. It's true that we \"give up\" on making HRR not stick out in . However, one of the design goals there is to prevent active attacks that trigger the HRR code path when it otherwise would not have happened. What we give up on is ensuring \"don't stick out\" when HRR is inevitable for the given client and server.\nAh, are you saying that this conflicts with (and thus would have to be removed if is merged)?\nI believe resolves the same \"OPEN ISSUE\", yes.\nI just noticed this as I was pondering our various HRR issues. RFC8446, section 4.1.2 says: URL The \"and present in the HelloRetryRequest\" portion is fun. In RFC8446 as written, we're not allowed to change the extension on the second ClientHello unless the server sent in HelloRetryRequest, which it doesn't. With the protocol as-is, it seems we'll at least need some text to deal with the contradiction. The impact is practice is thankfully limited. If the servers does not enforce consistency between the two ClientHellos, it doesn't matter. (They're not required to, but they're not explicitly forbidden from it either, and a stateless server may need to enforce some minimum consistency to avoid getting into a broken state.) If a server does, ECH's behavior on HRR will break it. Such a server is supposed to handshake with ClientHelloOuter, so it just affects the retry flow. 
That's maybe okay, but mildly annoying. Administrators using server software that checks need to know to first upgrade to a version that doesn't check before trying to deploy ECH. We can also add an ECH extension to HRR. Then, on ECH-reject + HRR, the server perhaps wouldn't send the extension and the client would be required to replay the extension. This is a waste in many ways, but would work. This would break don't stick out, but some threat models, don't stick out with HRR is hopeless anyway. (See URL) It would also be weirdly asymmetric between ECH and SH. That said, this constraint is also pretty obnoxious. For instance, currently adds a new padding extension codepoint, because the old one isn't defined for other messages. However, the RFC8446 allowance only works for the existing padding code point, so you don't want to send any other code points in ClientHello, lest the server force you to keep it unchanged in ClientHello2. So maybe we should confirm no servers did the fully strict check and undo that constraint? (Although, for , I personally favor the padding message strategy anyway.)\nNAME points out the Go implementation does check CH1 and CH2 consistency. Fortunately, it only checks known extensions, so it's still compatible with the ECH draft. URL Checking only know extensions is basically saying we should strike \"and present in the HelloRetryRequest\" in rfc8446bis and make sure everyone knows not to check this.\nI don't think we can just strike this text because the point of that text was that you couldn't like change your KeyShares if the HRR didn't tell you to. But I agree we can sharpen this in some sensible way.\nI figured that case was covered by \"defined in the future\", which I'm not proposing we delete. keyshare isn't defined in the future, so it falls under existing rules. The existing rules (bullet points over that one) only let you change keyshare \"[i]f a \"key_share\" extension was supplied in the HelloRetryRequest\". 
But perhaps that's a little roundabout. The rules I'm thinking of are (changes emphasized): Each extension gets to define how it reacts to HRR. It typically is sensitive the corresponding HRR extension, but doesn't have to be. RFC8446(bis) defines the rules for all existing extensions. Future extensions define their own rules. (And probably if they don't say anything, we assume they don't get to change.) As a server, you can make assumptions based on the rules for extensions you know about. You have to tolerate random changes in extensions you don't know about, because you don't know their rules.* I'm running a probe on a bunch of servers to see how they react to changes in cipher suites, curves, and a random extension. It seems most don't notice. Some (I'm guessing Go) notice cipher suites and curves. None notice a random extension. That suggests the change will be compatible.\nI spoke too soon. LibreSSL breaks if you change an extension they don't recognize. :-(\nSo draft-09 isn't tied up with this, I've uploaded URL as a bandaid for the potential compatibility issue. (I say potential because it's only a problem if the server checks unknown extensions and has different enough key share preferences from the client that it needs HRR. HRR is pretty rare, so it may not actually be an issue. For Chrome, at least, I don't think the particular servers I found actually trigger HRR. But it is still a risk for clients or servers with different configurations.)", "new_text": "If sending a second ClientHello in response to a HelloRetryRequest, the client copies the entire \"encrypted_client_hello\" extension from the first ClientHello. The identical value will reveal to an observer that the value of \"encrypted_client_hello\" was fake, but this only occurs if there is a HelloRetryRequest. If the server sends an \"encrypted_client_hello\" extension in either HelloRetryRequest or EncryptedExtensions, the client MUST check the"}
{"id": "q-en-draft-ietf-tls-esni-008f7b3176873586e3c9ff55c1595ad742178f2532beceef88f61aa977961cda", "old_text": "extension. It MUST NOT save the \"retry_config\" value in EncryptedExtensions. [[OPEN ISSUE: Depending on what we do for issue#450, it may be appropriate to change the client behavior if the HRR extension is present.]] Offering a GREASE extension is not considered offering an encrypted ClientHello for purposes of requirements in real-ech. In particular, the client MAY offer to resume sessions established without ECH.", "comments": "But that is just how it is. We get to choose between that leakage and a known large amount of bustage.\nThis solution admits a straightforward, active distinguisher: just intercept CH1, respond with a fake HRR, and look at CH2. It's true that we \"give up\" on making HRR not stick out in . However, one of the design goals there is to prevent active attacks that trigger the HRR code path when it otherwise would not have happened. What we give up on is ensuring \"don't stick out\" when HRR is inevitable for the given client and server.\nAh, are you saying that this conflicts with (and thus would have to be removed if is merged)?\nI believe resolves the same \"OPEN ISSUE\", yes.\nI just noticed this as I was pondering our various HRR issues. RFC8446, section 4.1.2 says: URL The \"and present in the HelloRetryRequest\" portion is fun. In RFC8446 as written, we're not allowed to change the extension on the second ClientHello unless the server sent in HelloRetryRequest, which it doesn't. With the protocol as-is, it seems we'll at least need some text to deal with the contradiction. The impact is practice is thankfully limited. If the servers does not enforce consistency between the two ClientHellos, it doesn't matter. (They're not required to, but they're not explicitly forbidden from it either, and a stateless server may need to enforce some minimum consistency to avoid getting into a broken state.) 
If a server does, ECH's behavior on HRR will break it. Such a server is supposed to handshake with ClientHelloOuter, so it just affects the retry flow. That's maybe okay, but mildly annoying. Administrators using server software that checks need to know to first upgrade to a version that doesn't check before trying to deploy ECH. We can also add an ECH extension to HRR. Then, on ECH-reject + HRR, the server perhaps wouldn't send the extension and the client would be required to replay the extension. This is a waste in many ways, but would work. This would break don't stick out, but some threat models, don't stick out with HRR is hopeless anyway. (See URL) It would also be weirdly asymmetric between ECH and SH. That said, this constraint is also pretty obnoxious. For instance, currently adds a new padding extension codepoint, because the old one isn't defined for other messages. However, the RFC8446 allowance only works for the existing padding code point, so you don't want to send any other code points in ClientHello, lest the server force you to keep it unchanged in ClientHello2. So maybe we should confirm no servers did the fully strict check and undo that constraint? (Although, for , I personally favor the padding message strategy anyway.)\nNAME points out the Go implementation does check CH1 and CH2 consistency. Fortunately, it only checks known extensions, so it's still compatible with the ECH draft. URL Checking only know extensions is basically saying we should strike \"and present in the HelloRetryRequest\" in rfc8446bis and make sure everyone knows not to check this.\nI don't think we can just strike this text because the point of that text was that you couldn't like change your KeyShares if the HRR didn't tell you to. But I agree we can sharpen this in some sensible way.\nI figured that case was covered by \"defined in the future\", which I'm not proposing we delete. keyshare isn't defined in the future, so it falls under existing rules. 
The existing rules (bullet points over that one) only let you change keyshare \"[i]f a \"key_share\" extension was supplied in the HelloRetryRequest\". But perhaps that's a little roundabout. The rules I'm thinking of are (changes emphasized): Each extension gets to define how it reacts to HRR. It typically is sensitive the corresponding HRR extension, but doesn't have to be. RFC8446(bis) defines the rules for all existing extensions. Future extensions define their own rules. (And probably if they don't say anything, we assume they don't get to change.) As a server, you can make assumptions based on the rules for extensions you know about. You have to tolerate random changes in extensions you don't know about, because you don't know their rules.* I'm running a probe on a bunch of servers to see how they react to changes in cipher suites, curves, and a random extension. It seems most don't notice. Some (I'm guessing Go) notice cipher suites and curves. None notice a random extension. That suggests the change will be compatible.\nI spoke too soon. LibreSSL breaks if you change an extension they don't recognize. :-(\nSo draft-09 isn't tied up with this, I've uploaded URL as a bandaid for the potential compatibility issue. (I say potential because it's only a problem if the server checks unknown extensions and has different enough key share preferences from the client that it needs HRR. HRR is pretty rare, so it may not actually be an issue. For Chrome, at least, I don't think the particular servers I found actually trigger HRR. But it is still a risk for clients or servers with different configurations.)", "new_text": "extension. It MUST NOT save the \"retry_config\" value in EncryptedExtensions. Offering a GREASE extension is not considered offering an encrypted ClientHello for purposes of requirements in real-ech. In particular, the client MAY offer to resume sessions established without ECH."}
{"id": "q-en-draft-ietf-tls-esni-24cd1e72ecadb1b6ca448a5df642d60122b9323c2b09e37e50636509aca16e5b", "old_text": "The backend server begins by generating a message ServerHelloECHConf, which is identical in content to a ServerHello message with the exception that ServerHelloECHConf.random is equal to 24 random bytes followed by 8 zero bytes. It then computes a string where Derive-Secret and Handshake Secret are as specified in RFC8446, Section 7.1, and ClientHelloInner...ServerHelloECHConf refers to the sequence of handshake messages beginning with the first ClientHello and ending with ServerHelloECHConf. Finally, the backend server constructs its ServerHello message so that it is equal to ServerHelloECHConf but with the last 8 bytes of ServerHello.random set to the first 8 bytes of accept_confirmation. The backend server MUST NOT perform this operation if it negotiated TLS 1.2 or below. Note that doing so would overwrite the downgrade", "comments": "We previously derived the acceptance signal from the handshake secret. This meant that clients which used the wrong ECHConfig might need to process ServerHello extensions twice before computing the signal, which can be problematic for some libraries. Given that the signal's secrecy is entirely dependent on URL, we can relax the signal computation and base it on the transcript alone, which includes URL, rather than a secret derived from that transcript. cc NAME NAME NAME\nYep, good change. Thanks, S.\nEncoding the \"ECH worked\" signal in the URL lower octets is, at best, ugly. It also arguably causes violation of internal API designs in at least the OpenSSL code. And I'm not sure if there might be some interaction between that and ticket handling (just from the look of some code I've yet to understand really.) Do we really need such a hacky solution? 
Given the ECH sticks out in the CH, we could just define a non-encrypted extension in the SH to include the confirmation value maybe (though even that is pretty hacky).\nI have some sympathy for this, but the point of this signal design was to make it difficult to distinguish GREASE from non-GREASE ECH.\nCould you elaborate on the concern with ticket handling? I'm not sure I see how tickets would relate to this.\nSorry for being slow - should get to looking again at that bit of code in a few days. S\nJust to synchronize the discussion across list/GitHub a bit, this was discussed a bit in URL I agree with NAME now that there's some fiddly interactions from two ClientHellos and ServerHello. I sketched out a possible change sketched out in URL We could compute ClientHelloInner...ServerHelloECHConf as in the draft right now. But then rather than feed that into the handshake secret computation, just rely on that transcript containing a secret URL and computing F(transcript). (Hash it down, KDF it with zero secret, whatever.) One observation is that, while this relies oddly on a secret URL, the existing construction does too. An attacker constructing a ServerHello gets to pick ServerHello.keyshare and, with knowledge of ClientHello.keyshare, compute the resulting shared secret.\nOne further observation here is that if the client uses the same shares for CHInner and CHOuter, the double key derivation piece doesn't really need to apply.\nFor my own reference, I just wanted to flag something that we've discussed on the list: the current acceptance signal interacts with PSK in a way that could easily cause a bug in an implementation that hasn't carefully vetted this interaction.\ndns records are 8 bit clean and rfc 6763 is an example of a TXT record carrying arbitrary data\n:+1: Base64 is not only complication. It is a space overhead for a large object like ESNIKeys. 
Seeing corruption of binary will be rare; we can detect them by validating the checksum and fallback to non-ESNI in such case. So why not go for binary? FWIW, base64 was introduced in .\nThis is solved in which removes base64 with new ESNI RR type.\nWe only added this since TXT records were claimed to not work well with binary, per Christian\u2019s suggestion. I\u2019m fine either way.\nThanks for the info. I found URL, in which NAME asks: We know that we can store more than 128 octets in TXT record, and that we have a precedent that uses TXT to store binary data, as NAME has pointed out. Regarding configurability, my understanding is that RFC 1035 defines how binary TXT records can be specified in a zone file, and therefore many DNS server \"softwares\" support having such records. Googling also tells me that managed DNS services like Route 53 and Azure DNS provide the capability, although surprisingly they use different escape sequences ( uses + 3 octal, uses + 3 decimal).\nI do not think it\u2019s a complication, and I\u2019m not convinced space is an issue as per your comment above.\nonly tangentially related - NAME do you know why your URL txt record is broken into 127 char strings instead of 255? Is there a bug being worked around somewhere?\nNAME Oh I did not notice that. I am using tinydns (of djbdns), and it seems like tinydns works that way (see URL).\nNAME The issue about the size is that with base64 we might hit the 512-octet threshold when publishing keys of multiple strengths. As an example, query for URL currently returns a 186-byte response, which contains just secp256r1 key. If we add secp384r1 and secp521r1, I'd assume that the size will come near to 490 octets. In other words, for longer hostnames we will hit the 512-octet threshold if we publish three keys. If we avoid the base64 encoding, we will have enough room to store 3 ECDH keys that can provide 128-, 192-, 256-bit-level security. 
I do not think this argument is strong enough to prohibit the use of base64 for the following reasons: we will be using DoH / DNS over TLS between the client and the resolver we might be using DNS over UDP that has the 512-octet threshold between the resolver and the authoritative server, but we would want that to migrate to DNSSEC or something even better we might be just fine with publishing only two keys: X25519, X448 But it does make me prefer not to have the overhead of base64, if that is possible.\nFixed in .\nSeems fine to me. Christian Huitema wrote: computation per se. Actually doing trial decryption with the secret requires reaching down into the record layer. This is especially onerous for QUIC, where the record layer is offloaded to another protocol (often outside the TLS library). I think going back to full trial decryption would still considerably increase the cost of implementing ECH over the current draft. That said, I do agree that even the first part seems more onerous than expected and it's worth reconsidering things. We switched using just the randoms to the full handshake secret to mitigate some issues. Perhaps there's an intermediate point that still mitigates those issues but avoids the expensive part of the secret computation? What if we kept the ClientHelloInner...ServerHelloECHConf transcript as in the current draft, but derived the signal exclusively from that? (Or from the client random and that transcript, though the transcript includes the client random anyway.) We'll still bind in the ServerHello extensions, and it still includes the encrypted inner ClientHello random, as in the randoms formulation. It doesn't do the job as clearly as the handshake secret did, but maybe it'll work? Implementation-wise, this variant would not require actually processing the ServerHello extensions. David\nHi all, I raised a github issue [1] related to (but different from) this but would prefer a discussion via email if that's ok. 
Depending how this goes, I can create a new GH issue I guess. ECH draft-09 (and -10) requires the client to verify that the server has processed ECH by comparing 8 octets in the URL with a value one can only know if you know the inner CH octets and (pretty much all) the rest of the h/s so far. That seemingly removes the need for the client to maybe decrypt twice (once based on inner CH, once based on outer) to see what works. That can be implemented ok for the nominal case. As far as I can see that was the result of github issue [2] back in Aug/Sep 2020. But in error cases that seems to require the client to possibly process the same ServerHello extensions twice, which could have side effects, is generally icky and I think shows that this aspect of the design maybe needs a change. That's not much different from trial decryption really. In my current, (and still horrible;-), OpenSSL code the minimal effect is memory leaks for error cases that are tricky to handle, but can be handled, although at the expense of probably making the code even more brittle. (Note: I'm neither defending nor attacking the way OpenSSL handles extensions here:-) The specific problem is one needs the handshake secret to calculate the 8 octets one expects in the URL, but generating the handshake secret typically involves a key share ServerHello extension and might in future involve some PQC stuff, so I think we can't say for sure which subset of extensions are needed for the calculation. And if the first time doesn't give the right answer (e.g. if the client used the wrong ECHConfig) then you gotta process the ServerHello incl. its extensions a 2nd time. I think the requirement we ought try meet here is that any confirmation value needs to be something the client can calculate using values known before the SH is rx'd plus the ServerHello as a (mostly) unprocessed buffer. That could be e.g. some hash of the encoded inner CH, ECHConfig and the ServerHello maybe. 
Some of those issues were discussed in [2] already and in an interim meeting I missed but I reckon it might be worthwhile thinking again. Thanks, S. [1] URL [2] URL", "new_text": "The backend server begins by generating a message ServerHelloECHConf, which is identical in content to a ServerHello message with the exception that ServerHelloECHConf.random is equal to 24 random bytes followed by 8 zero bytes. It then computes an 8-byte string where HKDF-Expand-Label and Transcript-Hash are as defined in RFC8446, Section 7.1, \"0\" indicates a string of Hash.length bytes set to zero, and ClientHelloInner...ServerHelloECHConf refers to the sequence of handshake messages beginning with the first ClientHelloInner and ending with ServerHelloECHConf. (Note that ClientHelloInner and ServerHelloECHConf messages replace the ClientHello and ServerHello messages in the transcript hash sequence as specified in Section 4.1.1 of RFC8446.) Finally, the backend server constructs its ServerHello message so that it is equal to ServerHelloECHConf but with the last 8 bytes of ServerHello.random set to \"accept_confirmation\". The backend server MUST NOT perform this operation if it negotiated TLS 1.2 or below. Note that doing so would overwrite the downgrade"
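The record above describes deriving an 8-byte ECH acceptance confirmation via HKDF-Expand-Label over the ClientHelloInner...ServerHelloECHConf transcript. A minimal sketch of that shape follows; the HKDF helpers follow RFC 8446, Section 7.1, but the label string, the placeholder transcript bytes, and the overall input layout here are illustrative assumptions, not quoted from the draft.

```python
import hashlib
import hmac
import struct

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869): PRK = HMAC-Hash(salt, IKM)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # HKDF-Expand (RFC 5869) with SHA-256
    out, t, i = b"", b"", 1
    while len(out) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        out += t
        i += 1
    return out[:length]

def hkdf_expand_label(secret: bytes, label: str, context: bytes, length: int) -> bytes:
    # HkdfLabel per RFC 8446, Section 7.1: uint16 length, "tls13 "-prefixed
    # label with 1-byte length, then context with 1-byte length.
    full = b"tls13 " + label.encode()
    info = (struct.pack("!H", length) + bytes([len(full)]) + full +
            bytes([len(context)]) + context)
    return hkdf_expand(secret, info, length)

# Hypothetical inputs: a zero ClientHelloInner.random and a stand-in transcript.
inner_random = bytes(32)
transcript_hash = hashlib.sha256(b"ClientHelloInner...ServerHelloECHConf").digest()
# HKDF-Extract with a zero salt ("0" = Hash.length zero bytes in the draft's notation)
secret = hkdf_extract(bytes(hashlib.sha256().digest_size), inner_random)
# The label here is an assumption for illustration.
confirmation = hkdf_expand_label(secret, "ech accept confirmation", transcript_hash, 8)
```

The client recomputes the same 8 bytes from values it already holds and compares them against the last 8 bytes of ServerHello.random.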
{"id": "q-en-draft-ietf-tls-esni-049ecb5acc2a251d725e8956323b49b739c270892e436a6bd6c5226864f196cc", "old_text": "3.2. ECH allows the client to encrypt sensitive ClientHello extensions, e.g., SNI, ALPN, etc., under the public key of the client-facing server. This requires the client-facing server to publish the public key and metadata it uses for ECH for all the domains for which it serves directly or indirectly (via Split Mode). This document defines the format of the ECH encryption public key and metadata, referred to as an ECH configuration, and delegates DNS publication details to HTTPS-RR, though other delivery mechanisms are possible. In particular, if some of the clients of a private server are applications rather than Web browsers, those applications might have the public key and metadata preconfigured. When a client wants to establish a TLS session with the backend server, it constructs its ClientHello as indicated in real-ech. We will refer to this as the ClientHelloInner message. The client encrypts this message using the public key of the ECH configuration. It then constructs a new ClientHello, the ClientHelloOuter, with innocuous values for sensitive extensions, e.g., SNI, ALPN, etc., and with an \"encrypted_client_hello\" extension, which this document defines (encrypted-client-hello). The extension's payload carries the encrypted ClientHelloInner and specifies the ECH configuration used for encryption. Finally, it sends ClientHelloOuter to the server. Upon receiving the ClientHelloOuter, a TLS server takes one of the following actions: If it does not support ECH, it ignores the \"encrypted_client_hello\" extension and proceeds with the handshake as usual, per RFC8446, Section 4.1.2. If it is a client-facing server for the ECH protocol, but cannot decrypt the extension, then it terminates the handshake using the ClientHelloOuter. This is referred to as \"ECH rejection\". 
When ECH is rejected, the client-facing server sends an acceptable ECH configuration in its EncryptedExtensions message. If it supports ECH and decrypts the extension, it forwards the ClientHelloInner to the backend server, who terminates the connection. This is referred to as \"ECH acceptance\". Upon receiving the server's response, the client determines whether or not ECH was accepted and proceeds with the handshake accordingly. (See client-behavior for details.) The primary goal of ECH is to ensure that connections to servers in the same anonymity set are indistinguishable from one another.", "comments": "The overview was a bit long, and slightly inaccurate. The ClientHelloInner is encrypted after (most) of the ClientHelloOuter is constructed. I've also merged the ECH-naive server case with the ECH rejection case. This is more consistent with the client text, and most other TLS extensions, where we do not distinguish between reasons why the extension could not be negotiated. I've also generally tightened up the wording to make it shorter. As part of that, I'm trimmed the repeated list of sensitive ClientHello extensions. ECH itself is (mostly) agnostic to which extensions you believe are sensitive, and we already gave examples in the introduction.", "new_text": "3.2. A client-facing server enables ECH by publishing an ECH configuration, which is an encryption public key and associated metadata. The server must publish this for all the domains it serves via Shared or Split Mode. This document defines the ECH configuration's format, but delegates DNS publication details to HTTPS-RR. Other delivery mechanisms are also possible. For example, the client may have the ECH configuration preconfigured. When a client wants to establish a TLS session with some backend server, it constructs a private ClientHello, referred to as the ClientHelloInner. The client then constructs a public ClientHello, referred to as the ClientHelloOuter. 
The ClientHelloOuter contains innocuous values for sensitive extensions and an \"encrypted_client_hello\" extension (encrypted-client-hello), which carries the encrypted ClientHelloInner. Finally, the client sends ClientHelloOuter to the server. The server takes one of the following actions: If it does not support ECH or cannot decrypt the extension, it completes the handshake with ClientHelloOuter. This is referred to as rejecting ECH. If it successfully decrypts the extension, it forwards the ClientHelloInner to the backend server, which completes the handshake. This is referred to as accepting ECH. Upon receiving the server's response, the client determines whether or not ECH was accepted (handle-server-response) and proceeds with the handshake accordingly. When ECH is rejected, the resulting connection is not usable by the client for application data. Instead, ECH rejection allows the client to retry with up-to-date configuration (rejected-ech). The primary goal of ECH is to ensure that connections to servers in the same anonymity set are indistinguishable from one another."}
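The accept/reject flow in the rewritten overview above can be sketched as a small server-side decision function. All names and types here are hypothetical stand-ins (a dict for ClientHelloOuter, a caller-supplied `decrypt_ech` callback); the real logic operates on TLS handshake structures.

```python
def handle_client_hello(outer_hello: dict, decrypt_ech):
    """Sketch of the client-facing server's ECH decision.

    `decrypt_ech` returns the ClientHelloInner (bytes) on success, or None
    when no matching key is found or decryption fails.
    """
    payload = outer_hello.get("encrypted_client_hello")
    if payload is None or (inner := decrypt_ech(payload)) is None:
        # No ECH support, no extension, or decryption failure: complete the
        # handshake with ClientHelloOuter ("rejecting ECH").
        return "rejected", outer_hello
    # Decryption succeeded: forward ClientHelloInner to the backend server
    # ("accepting ECH").
    return "accepted", inner
```

Note how, matching the text, the no-ECH and cannot-decrypt cases collapse into a single rejection path that completes the handshake with the outer hello.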
{"id": "q-en-draft-ietf-tls-esni-049ecb5acc2a251d725e8956323b49b739c270892e436a6bd6c5226864f196cc", "old_text": "extension and MUST NOT use the retry keys. Offering a GREASE extension is not considered offering an encrypted ClientHello for purposes of requirements in client-behavior. In particular, the client MAY offer to resume sessions established without ECH. 7.", "comments": "The overview was a bit long, and slightly inaccurate. The ClientHelloInner is encrypted after (most) of the ClientHelloOuter is constructed. I've also merged the ECH-naive server case with the ECH rejection case. This is more consistent with the client text, and most other TLS extensions, where we do not distinguish between reasons why the extension could not be negotiated. I've also generally tightened up the wording to make it shorter. As part of that, I'm trimmed the repeated list of sensitive ClientHello extensions. ECH itself is (mostly) agnostic to which extensions you believe are sensitive, and we already gave examples in the introduction.", "new_text": "extension and MUST NOT use the retry keys. Offering a GREASE extension is not considered offering an encrypted ClientHello for purposes of requirements in real-ech. In particular, the client MAY offer to resume sessions established without ECH. 7."}
{"id": "q-en-draft-ietf-tls-esni-bf698c528d95001f2b76a1f2735aa737617d5cca8185ccaaaddc291a9204f9e6", "old_text": "First four (4) octets of the SHA-256 message digest RFC6234 of the ESNIKeys structure starting from the first octet of \"keys\" to the end of the stucture. The list of keys which can be used by the client to encrypt the SNI. Every key being listed MUST belong to a different group. The length to pad the ServerNameList value to prior to encryption. This value SHOULD be set to the largest ServerNameList the server expects to support rounded up the nearest multiple of 16. [[OPEN ISSUE: An alternative to padding is to instead send a hash of the server name. This would be fixed-length, but have the disadvantage that the server has to retain a table of all the server names it supports, and will not work if the mapping between the client-facing server and hidden server uses wildcards.]] The moment when the keys become valid for use. The value is represented as seconds from 00:00:00 UTC on Jan 1 1970, not", "comments": "Current text says: the \"encryptedservername\" extension. However, I am not sure if this is the desired behavior. I would rather suggest encrypting even when the hostname exceeds the padded_length. Having some protection is better than none. The issue is that in certain deployments it is impossible to guess what the maximum length of the hostname will be. Consider . It uses a wildcard certificate. Anybody can add a new hostname of any length by creating a new Github account (it might be true that Github some limitation on the length, but that does not necessarily mean that Github shares the maximum length information with the fronting CDN, assuming that it uses a CDN).\nI have filed that fixes another issue (it's just a clarification of an open issue) related to wildcard delegation.\nTechnically, it is possible to guess the maximum length of the hostname, as that maximum textual representation length is defined as 253 bytes per RFC 1034.\nYes. 
So the question is if a CDN that has a wildcard mapping needs to set to 253 (since current text is a MUST NOT for sending a server name longer than the paddedlength), or if we should introduce a different rule.\nHow about something like: padded_length : The length to pad the ServerNameList value to prior to encryption. This value SHOULD be set to the largest ServerNameList the fronting server expects to support rounded up the nearest multiple of 16. If the fronting server supports wildcard names, it SHOULD set this value to 256.\nNAME Thank you for the suggestion. I was feeling uneasy (and am still feeling a bit) about requiring the client to send an ESNI extension as large as ~300 bytes, assuming that CDN's would have at least one wildcard mappings. But it's about the first packet sent from the client, and we typically have enough room even when 0-RTT data is involved. So I think that we can live with that.\nTo be\nThank you for working on this! LGTM aside from the two comments shown below.Sorry, I missed the padding thing, so you'll need to adjust", "new_text": "First four (4) octets of the SHA-256 message digest RFC6234 of the ESNIKeys structure starting from the first octet of \"keys\" to the end of the structure. The list of keys which can be used by the client to encrypt the SNI. Every key being listed MUST belong to a different group. The length to pad the ServerNameList value to prior to encryption. This value SHOULD be set to the largest ServerNameList the server expects to support rounded up the nearest multiple of 16. If the server supports wildcard names, it SHOULD set this value to 256. The moment when the keys become valid for use. The value is represented as seconds from 00:00:00 UTC on Jan 1 1970, not"}
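The padded_length rule discussed above (largest expected ServerNameList rounded up to a multiple of 16, or 256 when wildcards are in play) is easy to express directly. This is a rough sketch of the arithmetic and the client-side zero padding only; function names are invented for illustration.

```python
def padded_length_for(max_server_name_list: int, supports_wildcards: bool = False) -> int:
    # Largest expected ServerNameList rounded up to the nearest multiple of 16;
    # 256 when wildcard names are supported (a DNS name is at most 255 octets).
    if supports_wildcards:
        return 256
    return (max_server_name_list + 15) // 16 * 16

def pad_server_name_list(encoded: bytes, padded_length: int) -> bytes:
    # Clients zero-pad the encoded ServerNameList up to padded_length
    # before encryption, hiding the true name length.
    if len(encoded) > padded_length:
        raise ValueError("ServerNameList exceeds padded_length")
    return encoded + bytes(padded_length - len(encoded))
```

As the thread notes, a wildcard deployment pays for this with a larger ClientHello, since every client pads to the 256-byte bound regardless of its actual name length.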
{"id": "q-en-draft-ietf-tls-esni-bf698c528d95001f2b76a1f2735aa737617d5cca8185ccaaaddc291a9204f9e6", "old_text": "5.2. Upon receiving an \"encrypted_server_name\" extension, the server MUST first perform the following checks: If it is unable to negotiate TLS 1.3 or greater, it MUST abort the connection with a \"handshake_failure\" alert.", "comments": "Current text says: the \"encryptedservername\" extension. However, I am not sure if this is the desired behavior. I would rather suggest encrypting even when the hostname exceeds the padded_length. Having some protection is better than none. The issue is that in certain deployments it is impossible to guess what the maximum length of the hostname will be. Consider . It uses a wildcard certificate. Anybody can add a new hostname of any length by creating a new Github account (it might be true that Github some limitation on the length, but that does not necessarily mean that Github shares the maximum length information with the fronting CDN, assuming that it uses a CDN).\nI have filed that fixes another issue (it's just a clarification of an open issue) related to wildcard delegation.\nTechnically, it is possible to guess the maximum length of the hostname, as that maximum textual representation length is defined as 253 bytes per RFC 1034.\nYes. So the question is if a CDN that has a wildcard mapping needs to set to 253 (since current text is a MUST NOT for sending a server name longer than the paddedlength), or if we should introduce a different rule.\nHow about something like: padded_length : The length to pad the ServerNameList value to prior to encryption. This value SHOULD be set to the largest ServerNameList the fronting server expects to support rounded up the nearest multiple of 16. If the fronting server supports wildcard names, it SHOULD set this value to 256.\nNAME Thank you for the suggestion. 
I was feeling uneasy (and am still feeling a bit) about requiring the client to send an ESNI extension as large as ~300 bytes, assuming that CDN's would have at least one wildcard mappings. But it's about the first packet sent from the client, and we typically have enough room even when 0-RTT data is involved. So I think that we can live with that.\nTo be\nThank you for working on this! LGTM aside from the two comments shown below.Sorry, I missed the padding thing, so you'll need to adjust", "new_text": "5.2. Upon receiving an \"encrypted_server_name\" extension, the client- facing server MUST first perform the following checks: If it is unable to negotiate TLS 1.3 or greater, it MUST abort the connection with a \"handshake_failure\" alert."}
{"id": "q-en-draft-ietf-tls-esni-bf698c528d95001f2b76a1f2735aa737617d5cca8185ccaaaddc291a9204f9e6", "old_text": "expansion) the server MAY abort the connection with an \"illegal_parameter\" alert without attempting to decrypt. Assuming that these checks succeed, the server then computes K_sni and decrypts the ServerName value. If decryption fails, the server MUST abort the connection with a \"decrypt_error\" alert. If the decrypted value's length is different from the advertised padding_length or the padding consists of any value other than 0,", "comments": "Current text says: the \"encryptedservername\" extension. However, I am not sure if this is the desired behavior. I would rather suggest encrypting even when the hostname exceeds the padded_length. Having some protection is better than none. The issue is that in certain deployments it is impossible to guess what the maximum length of the hostname will be. Consider . It uses a wildcard certificate. Anybody can add a new hostname of any length by creating a new Github account (it might be true that Github some limitation on the length, but that does not necessarily mean that Github shares the maximum length information with the fronting CDN, assuming that it uses a CDN).\nI have filed that fixes another issue (it's just a clarification of an open issue) related to wildcard delegation.\nTechnically, it is possible to guess the maximum length of the hostname, as that maximum textual representation length is defined as 253 bytes per RFC 1034.\nYes. So the question is if a CDN that has a wildcard mapping needs to set to 253 (since current text is a MUST NOT for sending a server name longer than the paddedlength), or if we should introduce a different rule.\nHow about something like: padded_length : The length to pad the ServerNameList value to prior to encryption. This value SHOULD be set to the largest ServerNameList the fronting server expects to support rounded up the nearest multiple of 16. 
If the fronting server supports wildcard names, it SHOULD set this value to 256.\nNAME Thank you for the suggestion. I was feeling uneasy (and am still feeling a bit) about requiring the client to send an ESNI extension as large as ~300 bytes, assuming that CDN's would have at least one wildcard mappings. But it's about the first packet sent from the client, and we typically have enough room even when 0-RTT data is involved. So I think that we can live with that.\nTo be\nThank you for working on this! LGTM aside from the two comments shown below.Sorry, I missed the padding thing, so you'll need to adjust", "new_text": "expansion) the server MAY abort the connection with an \"illegal_parameter\" alert without attempting to decrypt. Assuming these checks succeed, the server then computes K_sni and decrypts the ServerName value. If decryption fails, the server MUST abort the connection with a \"decrypt_error\" alert. If the decrypted value's length is different from the advertised padding_length or the padding consists of any value other than 0,"}
{"id": "q-en-draft-ietf-tls-esni-bf698c528d95001f2b76a1f2735aa737617d5cca8185ccaaaddc291a9204f9e6", "old_text": "5.3. The Hidden Server ignores both the \"encrypted_server_name\" and the \"server_name\" (if any) and completes the handshake as usual. If in Shared Mode, the server will still know the true SNI, and can use it for certificate selection. In Split Mode, it may not know the true SNI and so will generally be configured to use a single certificate communicating-sni describes a mechanism for communicating the true SNI to the hidden server. [[OPEN ISSUE: Do we want \"encrypted_server_name\" in EE? It's clearer communication, but gets in the way of stock servers.]] 6.", "comments": "Current text says: the \"encryptedservername\" extension. However, I am not sure if this is the desired behavior. I would rather suggest encrypting even when the hostname exceeds the padded_length. Having some protection is better than none. The issue is that in certain deployments it is impossible to guess what the maximum length of the hostname will be. Consider . It uses a wildcard certificate. Anybody can add a new hostname of any length by creating a new Github account (it might be true that Github some limitation on the length, but that does not necessarily mean that Github shares the maximum length information with the fronting CDN, assuming that it uses a CDN).\nI have filed that fixes another issue (it's just a clarification of an open issue) related to wildcard delegation.\nTechnically, it is possible to guess the maximum length of the hostname, as that maximum textual representation length is defined as 253 bytes per RFC 1034.\nYes. So the question is if a CDN that has a wildcard mapping needs to set to 253 (since current text is a MUST NOT for sending a server name longer than the paddedlength), or if we should introduce a different rule.\nHow about something like: padded_length : The length to pad the ServerNameList value to prior to encryption. 
This value SHOULD be set to the largest ServerNameList the fronting server expects to support rounded up the nearest multiple of 16. If the fronting server supports wildcard names, it SHOULD set this value to 256.\nNAME Thank you for the suggestion. I was feeling uneasy (and am still feeling a bit) about requiring the client to send an ESNI extension as large as ~300 bytes, assuming that CDN's would have at least one wildcard mappings. But it's about the first packet sent from the client, and we typically have enough room even when 0-RTT data is involved. So I think that we can live with that.\nTo be\nThank you for working on this! LGTM aside from the two comments shown below.Sorry, I missed the padding thing, so you'll need to adjust", "new_text": "5.3. A server operating in Shared Mode uses PaddedServerNameList.sni as if it were the \"server_name\" extension to finish the handshake. It SHOULD pad the Certificate message, via padding at the record layer, such that its length equals the size of the largest possible Certificate (message) covered by the same ESNI key. 5.4. The Hidden Server ignores both the \"encrypted_server_name\" and the \"server_name\" (if any) and completes the handshake as usual. If in Shared Mode, the server will still know the true SNI, and can use it for certificate selection. In Split Mode, it may not know the true SNI and so will generally be configured to use a single certificate. communicating-sni describes a mechanism for communicating the true SNI to the hidden server. Similar to the Shared Mode behavior, the hidden server in Split Mode SHOULD pad the Certificate message, via padding at the record layer such that its length equals the size of the largest possible Certificate (message) covered by the same ESNI key. [[OPEN ISSUE: Do we want \"encrypted_server_name\" in EE? It's clearer communication, but gets in the way of stock servers.]] 6."}
{"id": "q-en-draft-ietf-tls-esni-bf698c528d95001f2b76a1f2735aa737617d5cca8185ccaaaddc291a9204f9e6", "old_text": "Records are signed via a server private key, ESNIKeys have no authenticity or provenance information. This means that any attacker which can inject DNS responses or poison DNS caches, which is a common scenario in client access netowrks, can supply clients with fake ESNIKeys (so that the client encrypts SNI to them) or strip the ESNIKeys from the response. However, in the face of an attacker that controls DNS, no SNI encryption scheme can work because the attacker", "comments": "Current text says: the \"encryptedservername\" extension. However, I am not sure if this is the desired behavior. I would rather suggest encrypting even when the hostname exceeds the padded_length. Having some protection is better than none. The issue is that in certain deployments it is impossible to guess what the maximum length of the hostname will be. Consider . It uses a wildcard certificate. Anybody can add a new hostname of any length by creating a new Github account (it might be true that Github some limitation on the length, but that does not necessarily mean that Github shares the maximum length information with the fronting CDN, assuming that it uses a CDN).\nI have filed that fixes another issue (it's just a clarification of an open issue) related to wildcard delegation.\nTechnically, it is possible to guess the maximum length of the hostname, as that maximum textual representation length is defined as 253 bytes per RFC 1034.\nYes. So the question is if a CDN that has a wildcard mapping needs to set to 253 (since current text is a MUST NOT for sending a server name longer than the paddedlength), or if we should introduce a different rule.\nHow about something like: padded_length : The length to pad the ServerNameList value to prior to encryption. 
This value SHOULD be set to the largest ServerNameList the fronting server expects to support rounded up the nearest multiple of 16. If the fronting server supports wildcard names, it SHOULD set this value to 256.\nNAME Thank you for the suggestion. I was feeling uneasy (and am still feeling a bit) about requiring the client to send an ESNI extension as large as ~300 bytes, assuming that CDN's would have at least one wildcard mappings. But it's about the first packet sent from the client, and we typically have enough room even when 0-RTT data is involved. So I think that we can live with that.\nTo be\nThank you for working on this! LGTM aside from the two comments shown below.Sorry, I missed the padding thing, so you'll need to adjust", "new_text": "Records are signed via a server private key, ESNIKeys have no authenticity or provenance information. This means that any attacker which can inject DNS responses or poison DNS caches, which is a common scenario in client access networks, can supply clients with fake ESNIKeys (so that the client encrypts SNI to them) or strip the ESNIKeys from the response. However, in the face of an attacker that controls DNS, no SNI encryption scheme can work because the attacker"}
{"id": "q-en-draft-ietf-tls-esni-7e24339502e1baa87a941d000b2d6a7b674e57145903b9a4d3a1c1be66b756ee", "old_text": "for the \"encrypted_client_hello\" extension. Clients MUST ignore any \"ECHConfig\" structure with a version they do not support. The length, in bytes, of the next field. An opaque byte string whose contents depend on the version. For this specification, the contents are an \"ECHConfigContents\"", "comments": "The goal here is really to provide more clarity about cardinality, if there's a better way or place to do that, that'd be fine.\nI don't believe this is necessary. This idiom is well understood in TLS and this just appears to be excess verbiage.\nHiya, (As with all these PRs, I'm ok if they're not accepted...) For me, that is better than the original, but I do think we ought state the expected cardinality that implementers need to handle somewhere. I reckon it's better to not assume implementers are very familiar with SVCB as for libraries at least, the DNS stuff happens elsewhere. They will have to figure that out eventually, but better to say it somewhere explicitly I think, or we may face some important code base that doesn't handle >1 public key when it ought. Cheers, S.", "new_text": "for the \"encrypted_client_hello\" extension. Clients MUST ignore any \"ECHConfig\" structure with a version they do not support. The length, in bytes, of the next field. This length field allows implementations to skip over the elements in such a list where they cannot parse the specific version of ECHConfig. An opaque byte string whose contents depend on the version. For this specification, the contents are an \"ECHConfigContents\""}
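The skip-unknown-versions behavior the record above adds can be sketched as a parser loop. The wire layout assumed here (per entry: uint16 version, uint16 length, then `length` opaque bytes) is a simplification; the real ECHConfigList also carries an outer length prefix, and 0xfe0a is a placeholder codepoint, not a claim about the draft's assigned version.

```python
import struct

def parse_ech_config_list(data: bytes, supported=frozenset({0xfe0a})):
    """Walk a list of ECHConfig-like entries, skipping unsupported versions."""
    configs, i = [], 0
    while i + 4 <= len(data):
        version, length = struct.unpack_from("!HH", data, i)
        i += 4
        if i + length > len(data):
            raise ValueError("truncated ECHConfig")
        if version in supported:
            # A real client would parse ECHConfigContents here.
            configs.append((version, data[i:i + length]))
        i += length  # the length field lets us hop over versions we cannot parse
    return configs
```

This is exactly the cardinality point raised in the comments: a client must be prepared to receive a list with more than one entry, keeping the ones it understands and ignoring the rest.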
{"id": "q-en-draft-ietf-tls-esni-bdf933f9e753440b51c3792b9cf80388d4b8885f9c651f5b948a3b4c8383a3cc", "old_text": "ECHClientHello.config_id since it can be used as a tracking vector. In such cases, the second method should be used for matching the ECHClientHello to a known ECHConfig. See optional-configs. Unless specified by the application using (D)TLS or externally configured, implementations MUST use the first method. The server then iterates over the candidate ECHConfig values, attempting to decrypt the \"encrypted_client_hello\" extension:", "comments": "As previously written, the wording could be understood as referring to an application using some other (D)TLS mechanism to indicate which ECHConfig candidate value selection method to use.", "new_text": "ECHClientHello.config_id since it can be used as a tracking vector. In such cases, the second method should be used for matching the ECHClientHello to a known ECHConfig. See optional-configs. Unless specified by the application profile or otherwise externally configured, implementations MUST use the first method. The server then iterates over the candidate ECHConfig values, attempting to decrypt the \"encrypted_client_hello\" extension:"}
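The two candidate-selection methods in the record above (match on config_id versus trial decryption over all known configs) and the subsequent iteration can be sketched as follows. Configs are modeled as plain dicts and `decrypt` is a caller-supplied callback; both are hypothetical stand-ins for the server's real key store and HPKE open operation.

```python
def candidate_configs(configs, config_id, by_config_id=True):
    # First method: select configs whose config_id matches the ECHClientHello.
    # Second method: consider every known config, e.g. when config_id would
    # otherwise act as a tracking vector.
    if by_config_id:
        return [c for c in configs if c["config_id"] == config_id]
    return list(configs)

def trial_decrypt(candidates, payload, decrypt):
    # Iterate over the candidate ECHConfig values, attempting to decrypt the
    # "encrypted_client_hello" extension with each in turn.
    for cfg in candidates:
        inner = decrypt(cfg, payload)
        if inner is not None:
            return cfg, inner
    return None, None  # all candidates failed: reject ECH
```

With the first method the loop usually runs once; with the second it degenerates into full trial decryption, trading CPU for unlinkability.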
{"id": "q-en-draft-ietf-tls-external-psk-importer-7f78f6881a3d064e553ea7d4f0dcf11ea1d9bcf5e80b8b2452781a0ce677a729", "old_text": "Abstract This document describes an interface for importing external PSK (Pre- Shared Key) into TLS 1.3. 1. TLS 1.3 RFC8446 supports pre-shared key (PSK) authentication, wherein PSKs can be established via session tickets from prior connections or externally via some out-of-band mechanism. The protocol mandates that each PSK only be used with a single hash function. This was", "comments": "This is mainly editorial, so there should be nothing surprising here. Please let me if that's not the case! It also includes clarifying text per NAME recommendation raised during WGLC. cc NAME NAME\nFrom Rob Sayre: We should note the latter is a requirement from Section 4.2.11 of RFC8446. Let's also fix the typo \"import external keys for TLS...\"", "new_text": "Abstract This document describes an interface for importing external Pre- Shared Keys (PSKs) into TLS 1.3. 1. TLS 1.3 RFC8446 supports Pre-Shared Key (PSK) authentication, wherein PSKs can be established via session tickets from prior connections or externally via some out-of-band mechanism. The protocol mandates that each PSK only be used with a single hash function. This was"}
{"id": "q-en-draft-ietf-tls-external-psk-importer-7f78f6881a3d064e553ea7d4f0dcf11ea1d9bcf5e80b8b2452781a0ce677a729", "old_text": "key derivation. Moreover, it requires external PSKs to be provisioned for specific hash functions. To mitigate these problems, external PSKs can be bound to a specific KDF and hash function when used in TLS 1.3, even if they are associated with a different hash function when provisioned. This document specifies an interface by which external PSKs may be imported for use in a TLS 1.3 connection to achieve this goal. In particular, it describes how KDF-bound PSKs can be differentiated by the target (D)TLS protocol version and KDF for which the PSK will be used. This produces a set of candidate PSKs, each of which are bound to a specific target protocol and KDF. This expands what would normally have been a single PSK identity into a set of PSK identities. 2.", "comments": "This is mainly editorial, so there should be nothing surprising here. Please let me if that's not the case! It also includes clarifying text per NAME recommendation raised during WGLC. cc NAME NAME\nFrom Rob Sayre: We should note the latter is a requirement from Section 4.2.11 of RFC8446. Let's also fix the typo \"import external keys for TLS...\"", "new_text": "key derivation. Moreover, it requires external PSKs to be provisioned for specific hash functions. To mitigate these problems, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific KDF and hash function for use in TLS 1.3. In particular, it describes a mechanism for differentiating external PSKs by the target KDF, (D)TLS protocol version, and an optional context string. This process yields a set of candidate PSKs, each of which are bound to a target KDF and protocol. This expands what would normally have been a single PSK and identity into a set of PSKs and identities. 2."}
{"id": "q-en-draft-ietf-tls-external-psk-importer-7f78f6881a3d064e553ea7d4f0dcf11ea1d9bcf5e80b8b2452781a0ce677a729", "old_text": "3. Key importers mirror the concept of key exporters in TLS in that they diversify a key based on some contextual information before use in a connection. In contrast to key exporters, wherein differentiation is done via an explicit label and context string, the key importer defined herein diversifies external one PSK into one or more PSKs for use via a target protocol, KDF identifier, and optional context string. Additionally, the resulting PSK binder key is modified with a new derivation label to prevent confusion with non-imported PSKs. Imported keys do not require negotiation for use, as a client and server will not agree upon identities if not imported correctly. Endpoints may incrementally deploy PSK importer support by offering non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See rollout for more details. Clients which import external keys TLS MUST NOT use these keys for any other purpose. Moreover, each external PSK MUST be associated with at most one hash function. 3.1.", "comments": "This is mainly editorial, so there should be nothing surprising here. Please let me if that's not the case! It also includes clarifying text per NAME recommendation raised during WGLC. cc NAME NAME\nFrom Rob Sayre: We should note the latter is a requirement from Section 4.2.11 of RFC8446. Let's also fix the typo \"import external keys for TLS...\"", "new_text": "3. The PSK Importer interface mirrors that of the TLS Exporters interface in that it diversifies a key based on some contextual information. 
In contrast to the Exporters interface, wherein differentiation is done via an explicit label and context string, the PSK Importer interface defined herein takes an external PSK and identity and \"imports\" it into TLS, creating a set of \"derived\" PSKs and identities. Each of these derived PSKs is bound to a target protocol, KDF identifier, and optional context string. Additionally, the resulting PSK binder keys are modified with a new derivation label to prevent confusion with non-imported PSKs. Imported keys do not require negotiation for use since a client and server will not agree upon identities if imported incorrectly. Endpoints may incrementally deploy PSK Importer support by offering non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See rollout for more details. Clients which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. Moreover, each external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from RFC8446. See security-considerations for more discussion. 3.1."}
{"id": "q-en-draft-ietf-tls-external-psk-importer-7f78f6881a3d064e553ea7d4f0dcf11ea1d9bcf5e80b8b2452781a0ce677a729", "old_text": "Target KDF: The KDF for which a PSK is imported for use. Imported PSK (IPSK): A PSK derived from an EPSK, external identity, optional context string, and target protocol and KDF. Imported Identity: A sequence of bytes used to identify an IPSK. 4. A key importer diversifies an input EPSK into one or more PSKs advertised on the wire via a target protocol, KDF identifier, and optional context string. Additionally, the resulting PSK binder key is modified with a new derivation label to prevent confusion with non-imported PSKs. This section describes the diversification mechanism and binder key computation change. 4.1. A key importer takes as input an EPSK with external identity \"external_identity\" and base key \"epsk\", as defined in terminology, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on supported (target) protocols and KDFs. In particular, for each supported target protocol \"target_protocol\" and KDF \"target_kdf\", the importer constructs an ImportedIdentity structure as follows: The list of \"target_kdf\" values is maintained by IANA as described in", "comments": "This is mainly editorial, so there should be nothing surprising here. Please let me if that's not the case! It also includes clarifying text per NAME recommendation raised during WGLC. cc NAME NAME\nFrom Rob Sayre: We should note the latter is a requirement from Section 4.2.11 of RFC8446. Let's also fix the typo \"import external keys for TLS...\"", "new_text": "Target KDF: The KDF for which a PSK is imported for use. Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, and target protocol and KDF. Imported Identity: A sequence of bytes used to identify an IPSK. 4. 
This section describes the PSK Importer interface and its underlying diversification mechanism and binder key computation modification. 4.1. The PSK Importer interface takes as input an EPSK with External Identity \"external_identity\" and base key \"epsk\", as defined in terminology, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on target protocols and KDFs. In particular, for each supported target protocol \"target_protocol\" and KDF \"target_kdf\", the importer constructs an ImportedIdentity structure as follows: The list of \"target_kdf\" values is maintained by IANA as described in"}
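The ImportedIdentity construction described in the record above can be sketched as follows. This is an illustrative Python sketch, not normative wire code: the 2-byte length prefixes on the `external_identity` and `context` opaque vectors are an assumption, since the quoted struct elides the exact vector bounds from the TLS presentation language.

```python
import struct

def serialize_imported_identity(external_identity: bytes, context: bytes,
                                target_protocol: int, target_kdf: int) -> bytes:
    """Serialize an ImportedIdentity as it might appear in PskIdentity.identity.

    Assumes 2-byte length prefixes for the opaque vectors; treat this as
    illustrative only, since the quoted struct omits the vector bounds.
    """
    return (struct.pack("!H", len(external_identity)) + external_identity
            + struct.pack("!H", len(context)) + context
            + struct.pack("!H", target_protocol)   # e.g., 0x0304 for TLS 1.3
            + struct.pack("!H", target_kdf))       # e.g., 0x0001 for HKDF_SHA256
```

One such structure would be produced per supported (target_protocol, target_kdf) pair, yielding the set of candidate identities the draft describes.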
{"id": "q-en-draft-ietf-tls-external-psk-importer-7f78f6881a3d064e553ea7d4f0dcf11ea1d9bcf5e80b8b2452781a0ce677a729", "old_text": "and QUICv1 QUIC use 0x0304. Note that this means future versions of TLS will increase the number of PSKs derived from an external PSK. An Imported PSK derived from an EPSK with base key 'epsk' bound to this identity is then computed as follows: L is corresponds to the KDF output length of ImportedIdentity.target_kdf as defined in IANA. For hash-based KDFs, such as HKDF_SHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length suitable for supported ciphersuites. The identity of 'ipskx' as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The hash function used for HKDF RFC5869 is that which is associated with the EPSK. It is not the hash function associated with", "comments": "This is mainly editorial, so there should be nothing surprising here. Please let me if that's not the case! It also includes clarifying text per NAME recommendation raised during WGLC. cc NAME NAME\nFrom Rob Sayre: We should note the latter is a requirement from Section 4.2.11 of RFC8446. Let's also fix the typo \"import external keys for TLS...\"", "new_text": "and QUICv1 QUIC use 0x0304. Note that this means future versions of TLS will increase the number of PSKs derived from an external PSK. Given an ImportedIdentity and corresponding EPSK with base key \"epsk\", an Imported PSK IPSK with base key \"ipskx\" is computed as follows: L corresponds to the KDF output length of ImportedIdentity.target_kdf as defined in IANA. For hash-based KDFs, such as HKDF_SHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length suitable for supported ciphersuites. 
The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". The hash function used for HKDF RFC5869 is that which is associated with the EPSK. It is not the hash function associated with"}
{"id": "q-en-draft-ietf-tls-external-psk-importer-7f78f6881a3d064e553ea7d4f0dcf11ea1d9bcf5e80b8b2452781a0ce677a729", "old_text": "SHA-256 MUST be used. Diversifying EPSK by ImportedIdentity.target_kdf ensures that an IPSK is only used as input keying material to at most one KDF, thus satisfying the requirements in RFC8446. Endpoints generate a compatible ipskx for each target ciphersuite they offer. For example, importing a key for TLS_AES_128_GCM_SHA256 and TLS_AES_256_GCM_SHA384 would yield two PSKs, one for HKDF-SHA256 and another for HKDF-SHA384. In contrast, if TLS_AES_128_GCM_SHA256 and TLS_CHACHA20_POLY1305_SHA256 are supported, only one derived key is necessary. The resulting IPSK base key 'ipskx' is then used as the binder key in TLS 1.3 with identity ImportedIdentity. With knowledge of the supported KDFs, one may import PSKs before the start of a connection. EPSKs may be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means ALPN, QUIC transport settings, etc., must be provisioned alongside these EPSKs. 4.2. To prevent PSK Importers from being confused with standard out-of- band PSKs, imported PSKs change the PSK binder key derivation label. In particular, the standard TLS 1.3 PSK binder key computation is defined as follows: Imported PSKs replace the string \"ext binder\" with \"imp binder\" when deriving \"binder_key\". This means the binder key is now computed as follows: This new label differentiates non-imported and imported external PSKs. Specifically, a client and server will negotiate use of an external PSK if and only if (a) both endpoints import the PSK or (b) neither endpoint imports the PSK. As \"binder_key\" is a leaf key, changing its computation does not affect any other key.", "comments": "This is mainly editorial, so there should be nothing surprising here. Please let me if that's not the case! 
It also includes clarifying text per NAME recommendation raised during WGLC. cc NAME NAME\nFrom Rob Sayre: We should note the latter is a requirement from Section 4.2.11 of RFC8446. Let's also fix the typo \"import external keys for TLS...\"", "new_text": "SHA-256 MUST be used. Diversifying EPSK by ImportedIdentity.target_kdf ensures that an IPSK is only used as input keying material to at most one KDF, thus satisfying the requirements in RFC8446. See security-considerations for more details. Endpoints SHOULD generate a compatible \"ipskx\" for each target ciphersuite they offer. For example, importing a key for TLS_AES_128_GCM_SHA256 and TLS_AES_256_GCM_SHA384 would yield two PSKs, one for HKDF-SHA256 and another for HKDF-SHA384. In contrast, if TLS_AES_128_GCM_SHA256 and TLS_CHACHA20_POLY1305_SHA256 are supported, only one derived key is necessary. EPSKs may be imported before the start of a connection if the target KDFs, protocols, and context string(s) are known a priori. EPSKs may also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means ALPN, QUIC transport parameters (if used for QUIC), etc., must be provisioned alongside these EPSKs. 4.2. To prevent confusion between imported and non-imported PSKs, imported PSKs change the PSK binder key derivation label. In particular, the standard TLS 1.3 PSK binder key computation is defined as follows: Imported PSKs replace the string \"ext binder\" with \"imp binder\" when deriving \"binder_key\". This means the binder key is computed as follows: This new label ensures a client and server will negotiate use of an external PSK if and only if (a) both endpoints import the PSK or (b) neither endpoint imports the PSK. As \"binder_key\" is a leaf key, changing its computation does not affect any other key."}
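The binder key label substitution described in the record above ("ext binder" replaced by "imp binder") can be sketched as follows. This is an illustrative Python sketch assuming SHA-256 and the "tls13 " HkdfLabel prefix from RFC 8446; it shows that Derive-Secret over the same Early Secret with the two labels yields unrelated leaf keys, which is what keeps imported and non-imported PSKs from being confused.

```python
import hashlib
import hmac
import struct

def _hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869) with SHA-256
    return hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()

def _hkdf_expand_label(secret: bytes, label: bytes, context: bytes,
                       length: int) -> bytes:
    # HKDF-Expand-Label (RFC 8446, Section 7.1), assuming the "tls13 " prefix
    full_label = b"tls13 " + label
    info = (struct.pack("!H", length)
            + bytes([len(full_label)]) + full_label
            + bytes([len(context)]) + context)
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(secret, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def binder_key(psk: bytes, imported: bool) -> bytes:
    # Early Secret = HKDF-Extract(0, PSK), per the TLS 1.3 key schedule
    early_secret = _hkdf_extract(b"", psk)
    label = b"imp binder" if imported else b"ext binder"
    # binder_key = Derive-Secret(Early Secret, label, "");
    # Derive-Secret hashes the (empty) transcript as the context
    return _hkdf_expand_label(early_secret, label,
                              hashlib.sha256(b"").digest(), 32)
```

Because "binder_key" is a leaf in the key schedule, swapping its label changes only the binder computation; the distinct labels make the two binder keys independent even for the same input PSK.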
{"id": "q-en-draft-ietf-tls-external-psk-importer-7f78f6881a3d064e553ea7d4f0dcf11ea1d9bcf5e80b8b2452781a0ce677a729", "old_text": "algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity, target protocol, and target KDF. Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. Indeed, this is necessary for incremental deployment. 7. The Key Importer security goals can be roughly stated as follows: avoid PSK re-use across KDFs while properly authenticating endpoints. When modeled as computational extractors, KDFs assume that input keying material (IKM) is sampled from some \"source\" probability", "comments": "This is mainly editorial, so there should be nothing surprising here. Please let me if that's not the case! It also includes clarifying text per NAME recommendation raised during WGLC. cc NAME NAME\nFrom Rob Sayre: We should note the latter is a requirement from Section 4.2.11 of RFC8446. Let's also fix the typo \"import external keys for TLS...\"", "new_text": "algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. Indeed, this is necessary for incremental deployment. 7. The PSK Importer security goals can be roughly stated as follows: avoid PSK re-use across KDFs while properly authenticating endpoints. 
When modeled as computational extractors, KDFs assume that input keying material (IKM) is sampled from some \"source\" probability"}
{"id": "q-en-draft-ietf-tls-external-psk-importer-7f78f6881a3d064e553ea7d4f0dcf11ea1d9bcf5e80b8b2452781a0ce677a729", "old_text": "authentication protocols, wherein at least one is uncompromised, jointly authenticates all protocols. Authenticating with an externally provisioned PSK, therefore, should ideally authenticate both the TLS connection and the external provision process. Typically, the external provision process produces a PSK and corresponding context from which the PSK was derived and in which it should be used. We refer to an external PSK without such context as \"context free\". Thus, in considering the source-independence and compound authentication requirements, the Key Import API described in this document aims to achieve the following goals: Externally provisioned PSKs imported into TLS achieve compound authentication of the provision step(s) and connection. Context-free PSKs only achieve authentication within the context of a single connection. Imported PSKs are used as IKM for two different KDFs. Imported PSKs do not collide with existing PSKs used for TLS 1.2 and below.", "comments": "This is mainly editorial, so there should be nothing surprising here. Please let me if that's not the case! It also includes clarifying text per NAME recommendation raised during WGLC. cc NAME NAME\nFrom Rob Sayre: We should note the latter is a requirement from Section 4.2.11 of RFC8446. Let's also fix the typo \"import external keys for TLS...\"", "new_text": "authentication protocols, wherein at least one is uncompromised, jointly authenticates all protocols. Authenticating with an externally provisioned PSK, therefore, should ideally authenticate both the TLS connection and the external provisioning process. Typically, the external provision process produces a PSK and corresponding context from which the PSK was derived and in which it should be used. If available, this is used as the ImportedIdentity.context value. 
We refer to an external PSK without such context as \"context-free\". Thus, in considering the source-independence and compound authentication requirements, the PSK Import interface described in this document aims to achieve the following goals: Externally provisioned PSKs imported into a TLS connection achieve compound authentication of the provisioning process and connection. Context-free PSKs only achieve authentication within the context of a single connection. Imported PSKs are not used as IKM for two different KDFs. Imported PSKs do not collide with existing PSKs used for TLS 1.2 and below."}
{"id": "q-en-draft-ietf-tls-external-psk-importer-2af619ac8f00afefcff20aa5f073a8c350dfc58757b879e3de67d2b826a16c82", "old_text": "way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. To mitigate against any interference, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 RFC8446 and DTLS 1.3 DTLS13. In", "comments": "I don't love a couple of the ways I phrased things here, so feel free to offer suggestions to improve it! modulo test vectors. (NAME it'll likely take some time to get those vectors. Do you think we should hold this document until that's done?)\nThis looks good modulo the open issue about incremental deployment. I don't think we need to hold the document up in order to get test vectors. Benjamin Kaduk has entered the following ballot position for draft-ietf-tls-external-psk-importer-06: Yes When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Sorry about the large volume of comments; this document is actually in pretty good shape, there are just a bunch of details that can be subtle to get exactly right. I specifically call out the note in Section 4.1 about which HKDF-Expand-Label is to be used, since that would affect the actual derived keys for DTLS 1.3 (and any hypothetical future TLS versions). Section 1 While there is no known way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. 
Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. To mitigate against any interference, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 [RFC8446] and DTLS 1.3 [DTLS13]. [...] IIRC the main target use cases for this mechanism are for existing external PSKs (violating the \"SHOULD\" from the first quoted paragraph), as well as deployments that are simplified by only needing to provision a single PSK. Does it make sense to include such clarifying description here? (Similarly, is it accurate to say that \"[a]pplications SHOULD provision separate PSKs [...], but the mechanism defined in this document provides a partial mitigation for use when that is not possible\"? Also, we only use the word \"interference\" twice in this document (this is the first instance), so it might be worth a couple more words to clarify the nature of the potential interference. Section 3 non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See Section 6 for more details. (side note?) I think the precise story here is a little subtle -- for any given key, the imported and non-imported identities are distinct, but in principle one could construct a non-imported identity that overlaps with an imported identity for a different key (and we say as much at the end of Section 7). This is vanishingly rare to happen by chance, but makes the statement as written not quite categorically true. That said, the change to the binder key derivation (\u00a7 4.2) seems to obviate any potential consequences to such a collision (though the connection attempt would presumably fail). Endpoints which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. 
Moreover, each IIUC, this is \"MUST NOT use the keys that are input to the import process for any purpose other than the importer, and MUST NOT use the derived keys for any purpose other than TLS PSKs\". Is it worth spelling it out explicitly like that? external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from [RFC8446]. See Section 7 for more discussion. Does this requirement apply to the inputs to the importer process, the outputs, or both? Section 3.1 Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, target protocol, and target KDF. There is an \"External Identity\" that is contained within the EPSK itself; is the \"External Identity\" listed here distinct from that? Section 4.1 The PSK Importer interface takes as input an EPSK with External Identity \"externalidentity\" and base key \"epsk\", as defined in Section 3.1, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on target protocols and KDFs. In particular, for each supported target protocol \"targetprotocol\" and KDF \"targetkdf\", the importer constructs an ImportedIdentity structure as follows: If I understand correctly the \"targetkdf\" is supposed to be the KDF associated with the cipher suite(s) the derived PSK will be usable for. This is something of a divergence from RFC 8446, where we assume that the KDF will be HKDF and associate only a hash function with the cipher suite. While discussing a more generic KDF concept seems reasonable (and is directly in line with the cryptographic principles that motivate this work), I'd recommend adding a few more words somewhere in this section to clarify the relationship between the targetkdf and the cipher suite(s), probably in the paragraph that starts \"[e]ndpoints SHOULD generate a compatible 'ipskx' for each target ciphersuite they offer\". 
struct { opaque externalidentity; opaque context; uint16 targetprotocol; uint16 targetkdf; } ImportedIdentity; Should we say that this is using the TLS presentation language? ImportedIdentity.context MUST include the context used to derive the EPSK, if any exists. For example, ImportedIdentity.context may (nit?) \"derive\" suggests a cryptographic key derivation process, but the rest of the prose suggests that this is more a process of \"identification\" or \"determination\". Is there a more appropriate word to use? Given an ImportedIdentity and corresponding EPSK with base key \"epsk\", an Imported PSK IPSK with base key \"ipskx\" is computed as follows: epskx = HKDF-Extract(0, epsk) ipskx = HKDF-Expand-Label(epskx, \"derived psk\", Hash(ImportedIdentity), L) I think we need to say that the HKDF-Expand-Label used (i.e., the URL prefix) is the one corresponding to ImportedIdentity.targetprotocol. L corresponds to the KDF output length of ImportedIdentity.targetkdf as defined in Section 9. For hash-based KDFs, such as HKDFSHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length (32 octets for SHA256, not all hash-based KDFs) The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". In RFC 8446, the \"binderkey\" is the output of the key schedule, but ipskx is an input to the key schedule. So I think we want to say something like \"the corresponding PSK input for the TLS 1.3 key schedule is 'ipskx'\". The hash function used for HKDF [RFC5869] is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.targetkdf. If no hash function is specified, I predict that someone will mess up the hash function selection while implementing this. 
Perhaps we could provide test vectors for the full matrix of (EPSK Hash, targetkdf hash) pairs using SHA256/SHA384? SHA-256 [SHA2] MUST be used. Diversifying EPSK by I think we might be able to tolerate the protocol* having a strict requirement that the associated hash function is specified, and provide a recommended value that would be used by people deploying the protocol in the absence of other information. This might in some sense help future-proof the document for the scenario where SHA256 weakens or is broken and the default should change. KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means Application-Layer Protocol Negotiation [RFC7301], QUIC RFC 8446 does not limit 0-RTT data to resumption, so I think we just want \"if they are bound to the protocol settings and configuration that are required for sending early data\". Also, nit: I think we usually refer to the \"ALPN value\" instead of just \"ALPN\" being configured. Section 5 If a client or server wish to deprecate a hash function and no longer use it for TLS 1.3, they remove the corresponding KDF from the set of target KDFs used for importing keys. This does not affect the KDF operation used to derive Imported PSKs. Should we also include a paragraph about how to deprecate the Hash used in deriving Imported PSKs? (It's fixed per EPSK, so, \"configure a new EPSK\", basically.) Section 6 Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. 
Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. [...] As I read this I recalled that up in Section 1 we say that applications \"SHOULD provision separate PSKs for TLS 1.3 and prior versions\". The scenario being described here sounds like it can only occur if that SHOULD is ignored, which might be worth reiterating. We also say in \u00a7 4.1 that \"external PSKs MUST NOT be imported for (D)TLS or prior versions\". In light of these two observations, my best interpretation of the quoted text is that it should be equivalent to: NEW: Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Accordingly, a given EPSK might be both used directly as a TLS 1.2 PSK and used with TLS 1.3 via the importer mechanism (noting that this configuration is not recommended, per Section 1), using the same KDF algorithm. However, because the TLS 1.3 importer mechanism includes an additional key-derivation step, the actual PSK used in the TLS 1.3 key schedule will be distinct from the one used in the TLS 1.2 key schedule, so there are no cross-protocol collisions possible. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. For some reason this line implies to me that endpoints might use imported PSKs for TLS 1.2, but this is forbidden by Section 4.1 (as quoted above). Perhaps \"does not preclude endpoints from continuing to use TLS 1.2 and the non-imported PSKs it requires\"? Section 7 We should probably note that any information placed in ImportedIdentity.context will be visible on the wire in cleartext, and thus that confidential identifiers from other protocols should either not be used or should be protected (e.g., by hashing) prior to use. Externally provisioned PSKs imported into a TLS connection achieve compound authentication of the provisioning process and connection. 
Context-free PSKs only achieve authentication within the context of a single connection. Do we need to use pskdhke to get the strong compound authentication property, or is it attainable with pskke as well? Section 10.1 I'm not sure why [QUIC] and RFC 5246 are listed as normative references. Appendix B The Selfie attack [Selfie] relies on a misuse of the PSK interface. The PSK interface makes the implicit assumption that each PSK is known only to one client and one server. If multiple clients or multiple servers with distinct roles share a PSK, TLS only authenticates the entire group. A node successfully authenticates its peer as being in the group whether the peer is another node or itself. I think it may be useful to call out that the \"multiple clients\" and \"multiple servers\" case can occur even when there are only two endpoints that know the key (that is, when they do not have predetermined client/server roles). struct { opaque clientmac; opaque server_mac; } Context; We may not need the whole 16k to hold a MAC address, especially since this is just an example. If an attacker then redirects a ClientHello intended for one node to a different node, the receiver will compute a different context string and the handshake will not complete. Similarly to the above, this could be \"a different node (including the node that generated the ClientHello)\".", "new_text": "way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. In cases where this is not possible, e.g., there are already deployed external PSKs or provisioning is otherwise limited, re-using external PSKs across different versions of TLS may produce related outputs, which may in turn lead to security problems; see RFC8446, Section E.7. 
To mitigate against such problems, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 RFC8446 and DTLS 1.3 DTLS13. In"}
{"id": "q-en-draft-ietf-tls-external-psk-importer-2af619ac8f00afefcff20aa5f073a8c350dfc58757b879e3de67d2b826a16c82", "old_text": "and imported PSKs are distinct since their identities are different on the wire. See rollout for more details. Endpoints which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. Moreover, each external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from RFC8446. See security- considerations for more discussion. 3.1. External PSK (EPSK): A PSK established or provisioned out-of-band, i.e., not from a TLS connection, which is a tuple of (Base Key, External Identity, Hash).", "comments": "I don't love a couple of the ways I phrased things here, so feel free to offer suggestions to improve it! modulo test vectors. (NAME it'll likely take some time to get those vectors. Do you think we should hold this document until that's done?)\nThis looks good modulo the open issue about incremental deployment. I don't think we need to hold the document up in order to get test vectors. Benjamin Kaduk has entered the following ballot position for draft-ietf-tls-external-psk-importer-06: Yes When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Sorry about the large volume of comments; this document is actually in pretty good shape, there are just a bunch of details that can be subtle to get exactly right. I specifically call out the note in Section 4.1 about which HKDF-Expand-Label is to be used, since that would affect the actual derived keys for DTLS 1.3 (and any hypothetical future TLS versions). 
Section 1 While there is no known way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. To mitigate against any interference, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 [RFC8446] and DTLS 1.3 [DTLS13]. [...] IIRC the main target use cases for this mechanism are for existing external PSKs (violating the \"SHOULD\" from the first quoted paragraph), as well as deployments that are simplified by only needing to provision a single PSK. Does it make sense to include such clarifying description here? (Similarly, is it accurate to say that \"[a]pplications SHOULD provision separate PSKs [...], but the mechanism defined in this document provides a partial mitigation for use when that is not possible\"? Also, we only use the word \"interference\" twice in this document (this is the first instance), so it might be worth a couple more words to clarify the nature of the potential interference. Section 3 non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See Section 6 for more details. (side note?) I think the precise story here is a little subtle -- for any given key, the imported and non-imported identities are distinct, but in principle one could construct a non-imported identity that overlaps with an imported identity for a different key (and we say as much at the end of Section 7). This is vanishingly rare to happen by chance, but makes the statement as written not quite categorically true. That said, the change to the binder key derivation (\u00a7 4.2) seems to obviate any potential consequences to such a collision (though the connection attempt would presumably fail). 
Endpoints which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. Moreover, each IIUC, this is \"MUST NOT use the keys that are input to the import process for any purpose other than the importer, and MUST NOT use the derived keys for any purpose other than TLS PSKs\". Is it worth spelling it out explicitly like that? external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from [RFC8446]. See Section 7 for more discussion. Does this requirement apply to the inputs to the importer process, the outputs, or both? Section 3.1 Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, target protocol, and target KDF. There is an \"External Identity\" that is contained within the EPSK itself; is the \"External Identity\" listed here distinct from that? Section 4.1 The PSK Importer interface takes as input an EPSK with External Identity \"externalidentity\" and base key \"epsk\", as defined in Section 3.1, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on target protocols and KDFs. In particular, for each supported target protocol \"targetprotocol\" and KDF \"targetkdf\", the importer constructs an ImportedIdentity structure as follows: If I understand correctly the \"targetkdf\" is supposed to be the KDF associated with the cipher suite(s) the derived PSK will be usable for. This is something of a divergence from RFC 8446, where we assume that the KDF will be HKDF and associate only a hash function with the cipher suite. 
While discussing a more generic KDF concept seems reasonable (and is directly in line with the cryptographic principles that motivate this work), I'd recommend adding a few more words somewhere in this section to clarify the relationship between the targetkdf and the cipher suite(s), probably in the paragraph that starts \"[e]ndpoints SHOULD generate a compatible 'ipskx' for each target ciphersuite they offer\". struct { opaque externalidentity; opaque context; uint16 targetprotocol; uint16 targetkdf; } ImportedIdentity; Should we say that this is using the TLS presentation language? ImportedIdentity.context MUST include the context used to derive the EPSK, if any exists. For example, ImportedIdentity.context may (nit?) \"derive\" suggests a cryptographic key derivation process, but the rest of the prose suggests that this is more a process of \"identification\" or \"determination\". Is there a more appropriate word to use? Given an ImportedIdentity and corresponding EPSK with base key \"epsk\", an Imported PSK IPSK with base key \"ipskx\" is computed as follows: epskx = HKDF-Extract(0, epsk) ipskx = HKDF-Expand-Label(epskx, \"derived psk\", Hash(ImportedIdentity), L) I think we need to say that the HKDF-Expand-Label used (i.e., the URL prefix) is the one corresponding to ImportedIdentity.targetprotocol. L corresponds to the KDF output length of ImportedIdentity.targetkdf as defined in Section 9. For hash-based KDFs, such as HKDFSHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length (32 octets for SHA256, not all hash-based KDFs) The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". In RFC 8446, the \"binderkey\" is the output of the key schedule, but ipskx is an input to the key schedule. 
So I think we want to say something like \"the corresponding PSK input for the TLS 1.3 key schedule is 'ipskx'\". The hash function used for HKDF [RFC5869] is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.targetkdf. If no hash function is specified, I predict that someone will mess up the hash function selection while implementing this. Perhaps we could provide test vectors for the full matrix of (EPSK Hash, targetkdf hash) pairs using SHA256/SHA384? SHA-256 [SHA2] MUST be used. Diversifying EPSK by I think we might be able to tolerate the protocol* having a strict requirement that the associated hash function is specified, and provide a recommended value that would be used by people deploying the protocol in the absence of other information. This might in some sense help future-proof the document for the scenario where SHA256 weakens or is broken and the default should change. KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means Application-Layer Protocol Negotiation [RFC7301], QUIC RFC 8446 does not limit 0-RTT data to resumption, so I think we just want \"if they are bound to the protocol settings and configuration that are required for sending early data\". Also, nit: I think we usually refer to the \"ALPN value\" instead of just \"ALPN\" being configured. Section 5 If a client or server wish to deprecate a hash function and no longer use it for TLS 1.3, they remove the corresponding KDF from the set of target KDFs used for importing keys. This does not affect the KDF operation used to derive Imported PSKs. Should we also include a paragraph about how to deprecate the Hash used in deriving Imported PSKs? (It's fixed per EPSK, so, \"configure a new EPSK\", basically.) 
Section 6 Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. [...] As I read this I recalled that up in Section 1 we say that applications \"SHOULD provision separate PSKs for TLS 1.3 and prior versions\". The scenario being described here sounds like it can only occur if that SHOULD is ignored, which might be worth reiterating. We also say in \u00a7 4.1 that \"external PSKs MUST NOT be imported for (D)TLS or prior versions\". In light of these two observations, my best interpretation of the quoted text is that it should be equivalent to: NEW: Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Accordingly, a given EPSK might be both used directly as a TLS 1.2 PSK and used with TLS 1.3 via the importer mechanism (noting that this configuration is not recommended, per Section 1), using the same KDF algorithm. However, because the TLS 1.3 importer mechanism includes an additional key-derivation step, the actual PSK used in the TLS 1.3 key schedule will be distinct from the one used in the TLS 1.2 key schedule, so there are no cross-protocol collisions possible. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. For some reason this line implies to me that endpoints might use imported PSKs for TLS 1.2, but this is forbidden by Section 4.1 (as quoted above). Perhaps \"does not preclude endpoints from continuing to use TLS 1.2 and the non-imported PSKs it requires\"? 
Section 7 We should probably note that any information placed in ImportedIdentity.context will be visible on the wire in cleartext, and thus that confidential identifiers from other protocols should either not be used or should be protected (e.g., by hashing) prior to use. Externally provisioned PSKs imported into a TLS connection achieve compound authentication of the provisioning process and connection. Context-free PSKs only achieve authentication within the context of a single connection. Do we need to use pskdhke to get the strong compound authentication property, or is it attainable with pskke as well? Section 10.1 I'm not sure why [QUIC] and RFC 5246 are listed as normative references. Appendix B The Selfie attack [Selfie] relies on a misuse of the PSK interface. The PSK interface makes the implicit assumption that each PSK is known only to one client and one server. If multiple clients or multiple servers with distinct roles share a PSK, TLS only authenticates the entire group. A node successfully authenticates its peer as being in the group whether the peer is another node or itself. I think it may be useful to call out that the \"multiple clients\" and \"multiple servers\" case can occur even when there are only two endpoints that know the key (that is, when they do not have predetermined client/server roles). struct { opaque clientmac; opaque server_mac; } Context; We may not need the whole 16k to hold a MAC address, especially since this is just an example. If an attacker then redirects a ClientHello intended for one node to a different node, the receiver will compute a different context string and the handshake will not complete. Similarly to the above, this could be \"a different node (including the node that generated the ClientHello)\".", "new_text": "and imported PSKs are distinct since their identities are different on the wire. See rollout for more details. 
Endpoints which import external keys MUST NOT use the keys that are input to the import process for any purpose other than the importer, and MUST NOT use the derived keys for any purpose other than TLS PSKs. Moreover, each external PSK fed to the importer process MUST be associated with at most one hash function. This is analogous to the rules in Section 4.2.11 of RFC8446. See security-considerations for more discussion. 3.1. The following terms are used throughout this document: External PSK (EPSK): A PSK established or provisioned out-of-band, i.e., not from a TLS connection, which is a tuple of (Base Key, External Identity, Hash)."}
{"id": "q-en-draft-ietf-tls-external-psk-importer-2af619ac8f00afefcff20aa5f073a8c350dfc58757b879e3de67d2b826a16c82", "old_text": "Target KDF: The KDF for which a PSK is imported for use. Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, target protocol, and target KDF. Imported Identity: A sequence of bytes used to identify an IPSK. 4. This section describes the PSK Importer interface and its underlying", "comments": "I don't love a couple of the ways I phrased things here, so feel free to offer suggestions to improve it! modulo test vectors. (NAME it'll likely take some time to get those vectors. Do you think we should hold this document until that's done?)\nThis looks good modulo the open issue about incremental deployment. I don't think we need to hold the document up in order to get test vectors. Benjamin Kaduk has entered the following ballot position for draft-ietf-tls-external-psk-importer-06: Yes When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Sorry about the large volume of comments; this document is actually in pretty good shape, there are just a bunch of details that can be subtle to get exactly right. I specifically call out the note in Section 4.1 about which HKDF-Expand-Label is to be used, since that would affect the actual derived keys for DTLS 1.3 (and any hypothetical future TLS versions). Section 1 While there is no known way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. 
To mitigate against any interference, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 [RFC8446] and DTLS 1.3 [DTLS13]. [...] IIRC the main target use cases for this mechanism are for existing external PSKs (violating the \"SHOULD\" from the first quoted paragraph), as well as deployments that are simplified by only needing to provision a single PSK. Does it make sense to include such clarifying description here? (Similarly, is it accurate to say that \"[a]pplications SHOULD provision separate PSKs [...], but the mechanism defined in this document provides a partial mitigation for use when that is not possible\"? Also, we only use the word \"interference\" twice in this document (this is the first instance), so it might be worth a couple more words to clarify the nature of the potential interference. Section 3 non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See Section 6 for more details. (side note?) I think the precise story here is a little subtle -- for any given key, the imported and non-imported identities are distinct, but in principle one could construct a non-imported identity that overlaps with an imported identity for a different key (and we say as much at the end of Section 7). This is vanishingly rare to happen by chance, but makes the statement as written not quite categorically true. That said, the change to the binder key derivation (\u00a7 4.2) seems to obviate any potential consequences to such a collision (though the connection attempt would presumably fail). Endpoints which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. 
Moreover, each IIUC, this is \"MUST NOT use the keys that are input to the import process for any purpose other than the importer, and MUST NOT use the derived keys for any purpose other than TLS PSKs\". Is it worth spelling it out explicitly like that? external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from [RFC8446]. See Section 7 for more discussion. Does this requirement apply to the inputs to the importer process, the outputs, or both? Section 3.1 Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, target protocol, and target KDF. There is an \"External Identity\" that is contained within the EPSK itself; is the \"External Identity\" listed here distinct from that? Section 4.1 The PSK Importer interface takes as input an EPSK with External Identity \"externalidentity\" and base key \"epsk\", as defined in Section 3.1, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on target protocols and KDFs. In particular, for each supported target protocol \"targetprotocol\" and KDF \"targetkdf\", the importer constructs an ImportedIdentity structure as follows: If I understand correctly the \"targetkdf\" is supposed to be the KDF associated with the cipher suite(s) the derived PSK will be usable for. This is something of a divergence from RFC 8446, where we assume that the KDF will be HKDF and associate only a hash function with the cipher suite. While discussing a more generic KDF concept seems reasonable (and is directly in line with the cryptographic principles that motivate this work), I'd recommend adding a few more words somewhere in this section to clarify the relationship between the targetkdf and the cipher suite(s), probably in the paragraph that starts \"[e]ndpoints SHOULD generate a compatible 'ipskx' for each target ciphersuite they offer\". 
struct { opaque externalidentity; opaque context; uint16 targetprotocol; uint16 targetkdf; } ImportedIdentity; Should we say that this is using the TLS presentation language? ImportedIdentity.context MUST include the context used to derive the EPSK, if any exists. For example, ImportedIdentity.context may (nit?) \"derive\" suggests a cryptographic key derivation process, but the rest of the prose suggests that this is more a process of \"identification\" or \"determination\". Is there a more appropriate word to use? Given an ImportedIdentity and corresponding EPSK with base key \"epsk\", an Imported PSK IPSK with base key \"ipskx\" is computed as follows: epskx = HKDF-Extract(0, epsk) ipskx = HKDF-Expand-Label(epskx, \"derived psk\", Hash(ImportedIdentity), L) I think we need to say that the HKDF-Expand-Label used (i.e., the URL prefix) is the one corresponding to ImportedIdentity.targetprotocol. L corresponds to the KDF output length of ImportedIdentity.targetkdf as defined in Section 9. For hash-based KDFs, such as HKDFSHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length (32 octets for SHA256, not all hash-based KDFs) The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". In RFC 8446, the \"binderkey\" is the output of the key schedule, but ipskx is an input to the key schedule. So I think we want to say something like \"the corresponding PSK input for the TLS 1.3 key schedule is 'ipskx'\". The hash function used for HKDF [RFC5869] is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.targetkdf. If no hash function is specified, I predict that someone will mess up the hash function selection while implementing this. 
Perhaps we could provide test vectors for the full matrix of (EPSK Hash, targetkdf hash) pairs using SHA256/SHA384? SHA-256 [SHA2] MUST be used. Diversifying EPSK by I think we might be able to tolerate the protocol* having a strict requirement that the associated hash function is specified, and provide a recommended value that would be used by people deploying the protocol in the absence of other information. This might in some sense help future-proof the document for the scenario where SHA256 weakens or is broken and the default should change. KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means Application-Layer Protocol Negotiation [RFC7301], QUIC RFC 8446 does not limit 0-RTT data to resumption, so I think we just want \"if they are bound to the protocol settings and configuration that are required for sending early data\". Also, nit: I think we usually refer to the \"ALPN value\" instead of just \"ALPN\" being configured. Section 5 If a client or server wish to deprecate a hash function and no longer use it for TLS 1.3, they remove the corresponding KDF from the set of target KDFs used for importing keys. This does not affect the KDF operation used to derive Imported PSKs. Should we also include a paragraph about how to deprecate the Hash used in deriving Imported PSKs? (It's fixed per EPSK, so, \"configure a new EPSK\", basically.) Section 6 Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. 
Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. [...] As I read this I recalled that up in Section 1 we say that applications \"SHOULD provision separate PSKs for TLS 1.3 and prior versions\". The scenario being described here sounds like it can only occur if that SHOULD is ignored, which might be worth reiterating. We also say in \u00a7 4.1 that \"external PSKs MUST NOT be imported for (D)TLS or prior versions\". In light of these two observations, my best interpretation of the quoted text is that it should be equivalent to: NEW: Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Accordingly, a given EPSK might be both used directly as a TLS 1.2 PSK and used with TLS 1.3 via the importer mechanism (noting that this configuration is not recommended, per Section 1), using the same KDF algorithm. However, because the TLS 1.3 importer mechanism includes an additional key-derivation step, the actual PSK used in the TLS 1.3 key schedule will be distinct from the one used in the TLS 1.2 key schedule, so there are no cross-protocol collisions possible. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. For some reason this line implies to me that endpoints might use imported PSKs for TLS 1.2, but this is forbidden by Section 4.1 (as quoted above). Perhaps \"does not preclude endpoints from continuing to use TLS 1.2 and the non-imported PSKs it requires\"? Section 7 We should probably note that any information placed in ImportedIdentity.context will be visible on the wire in cleartext, and thus that confidential identifiers from other protocols should either not be used or should be protected (e.g., by hashing) prior to use. Externally provisioned PSKs imported into a TLS connection achieve compound authentication of the provisioning process and connection. 
Context-free PSKs only achieve authentication within the context of a single connection. Do we need to use pskdhke to get the strong compound authentication property, or is it attainable with pskke as well? Section 10.1 I'm not sure why [QUIC] and RFC 5246 are listed as normative references. Appendix B The Selfie attack [Selfie] relies on a misuse of the PSK interface. The PSK interface makes the implicit assumption that each PSK is known only to one client and one server. If multiple clients or multiple servers with distinct roles share a PSK, TLS only authenticates the entire group. A node successfully authenticates its peer as being in the group whether the peer is another node or itself. I think it may be useful to call out that the \"multiple clients\" and \"multiple servers\" case can occur even when there are only two endpoints that know the key (that is, when they do not have predetermined client/server roles). struct { opaque clientmac; opaque server_mac; } Context; We may not need the whole 16k to hold a MAC address, especially since this is just an example. If an attacker then redirects a ClientHello intended for one node to a different node, the receiver will compute a different context string and the handshake will not complete. Similarly to the above, this could be \"a different node (including the node that generated the ClientHello)\".", "new_text": "Target KDF: The KDF for which a PSK is imported for use. Imported PSK (IPSK): A PSK derived from an EPSK, optional context string, target protocol, and target KDF. Imported Identity: A sequence of bytes used to identify an IPSK. This document uses presentation language from RFC8446, Section 3. 4. This section describes the PSK Importer interface and its underlying"}
{"id": "q-en-draft-ietf-tls-external-psk-importer-2af619ac8f00afefcff20aa5f073a8c350dfc58757b879e3de67d2b826a16c82", "old_text": "PSKs for TLS 1.3 and non-imported PSKs for earlier versions co-exist for incremental deployment. ImportedIdentity.context MUST include the context used to derive the EPSK, if any exists. For example, ImportedIdentity.context may include information about peer roles or identities to mitigate Selfie-style reflection attacks Selfie. See mitigate-selfie for more details. If the EPSK is a key derived from some other protocol or", "comments": "I don't love a couple of the ways I phrased things here, so feel free to offer suggestions to improve it! modulo test vectors. (NAME it'll likely take some time to get those vectors. Do you think we should hold this document until that's done?)\nThis looks good modulo the open issue about incremental deployment. I don't think we need to hold the document up in order to get test vectors. Benjamin Kaduk has entered the following ballot position for draft-ietf-tls-external-psk-importer-06: Yes When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Sorry about the large volume of comments; this document is actually in pretty good shape, there are just a bunch of details that can be subtle to get exactly right. I specifically call out the note in Section 4.1 about which HKDF-Expand-Label is to be used, since that would affect the actual derived keys for DTLS 1.3 (and any hypothetical future TLS versions). Section 1 While there is no known way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. 
Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. To mitigate against any interference, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 [RFC8446] and DTLS 1.3 [DTLS13]. [...] IIRC the main target use cases for this mechanism are for existing external PSKs (violating the \"SHOULD\" from the first quoted paragraph), as well as deployments that are simplified by only needing to provision a single PSK. Does it make sense to include such clarifying description here? (Similarly, is it accurate to say that \"[a]pplications SHOULD provision separate PSKs [...], but the mechanism defined in this document provides a partial mitigation for use when that is not possible\"? Also, we only use the word \"interference\" twice in this document (this is the first instance), so it might be worth a couple more words to clarify the nature of the potential interference. Section 3 non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See Section 6 for more details. (side note?) I think the precise story here is a little subtle -- for any given key, the imported and non-imported identities are distinct, but in principle one could construct a non-imported identity that overlaps with an imported identity for a different key (and we say as much at the end of Section 7). This is vanishingly rare to happen by chance, but makes the statement as written not quite categorically true. That said, the change to the binder key derivation (\u00a7 4.2) seems to obviate any potential consequences to such a collision (though the connection attempt would presumably fail). Endpoints which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. 
Moreover, each IIUC, this is \"MUST NOT use the keys that are input to the import process for any purpose other than the importer, and MUST NOT use the derived keys for any purpose other than TLS PSKs\". Is it worth spelling it out explicitly like that? external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from [RFC8446]. See Section 7 for more discussion. Does this requirement apply to the inputs to the importer process, the outputs, or both? Section 3.1 Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, target protocol, and target KDF. There is an \"External Identity\" that is contained within the EPSK itself; is the \"External Identity\" listed here distinct from that? Section 4.1 The PSK Importer interface takes as input an EPSK with External Identity \"externalidentity\" and base key \"epsk\", as defined in Section 3.1, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on target protocols and KDFs. In particular, for each supported target protocol \"targetprotocol\" and KDF \"targetkdf\", the importer constructs an ImportedIdentity structure as follows: If I understand correctly the \"targetkdf\" is supposed to be the KDF associated with the cipher suite(s) the derived PSK will be usable for. This is something of a divergence from RFC 8446, where we assume that the KDF will be HKDF and associate only a hash function with the cipher suite. While discussing a more generic KDF concept seems reasonable (and is directly in line with the cryptographic principles that motivate this work), I'd recommend adding a few more words somewhere in this section to clarify the relationship between the targetkdf and the cipher suite(s), probably in the paragraph that starts \"[e]ndpoints SHOULD generate a compatible 'ipskx' for each target ciphersuite they offer\". 
struct { opaque externalidentity; opaque context; uint16 targetprotocol; uint16 targetkdf; } ImportedIdentity; Should we say that this is using the TLS presentation language? ImportedIdentity.context MUST include the context used to derive the EPSK, if any exists. For example, ImportedIdentity.context may (nit?) \"derive\" suggests a cryptographic key derivation process, but the rest of the prose suggests that this is more a process of \"identification\" or \"determination\". Is there a more appropriate word to use? Given an ImportedIdentity and corresponding EPSK with base key \"epsk\", an Imported PSK IPSK with base key \"ipskx\" is computed as follows: epskx = HKDF-Extract(0, epsk) ipskx = HKDF-Expand-Label(epskx, \"derived psk\", Hash(ImportedIdentity), L) I think we need to say that the HKDF-Expand-Label used (i.e., the URL prefix) is the one corresponding to ImportedIdentity.targetprotocol. L corresponds to the KDF output length of ImportedIdentity.targetkdf as defined in Section 9. For hash-based KDFs, such as HKDFSHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length (32 octets for SHA256, not all hash-based KDFs) The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". In RFC 8446, the \"binderkey\" is the output of the key schedule, but ipskx is an input to the key schedule. So I think we want to say something like \"the corresponding PSK input for the TLS 1.3 key schedule is 'ipskx'\". The hash function used for HKDF [RFC5869] is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.targetkdf. If no hash function is specified, I predict that someone will mess up the hash function selection while implementing this. 
Perhaps we could provide test vectors for the full matrix of (EPSK Hash, targetkdf hash) pairs using SHA256/SHA384? SHA-256 [SHA2] MUST be used. Diversifying EPSK by I think we might be able to tolerate the protocol* having a strict requirement that the associated hash function is specified, and provide a recommended value that would be used by people deploying the protocol in the absence of other information. This might in some sense help future-proof the document for the scenario where SHA256 weakens or is broken and the default should change. KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means Application-Layer Protocol Negotiation [RFC7301], QUIC RFC 8446 does not limit 0-RTT data to resumption, so I think we just want \"if they are bound to the protocol settings and configuration that are required for sending early data\". Also, nit: I think we usually refer to the \"ALPN value\" instead of just \"ALPN\" being configured. Section 5 If a client or server wish to deprecate a hash function and no longer use it for TLS 1.3, they remove the corresponding KDF from the set of target KDFs used for importing keys. This does not affect the KDF operation used to derive Imported PSKs. Should we also include a paragraph about how to deprecate the Hash used in deriving Imported PSKs? (It's fixed per EPSK, so, \"configure a new EPSK\", basically.) Section 6 Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. 
Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. [...] As I read this I recalled that up in Section 1 we say that applications \"SHOULD provision separate PSKs for TLS 1.3 and prior versions\". The scenario being described here sounds like it can only occur if that SHOULD is ignored, which might be worth reiterating. We also say in \u00a7 4.1 that \"external PSKs MUST NOT be imported for (D)TLS or prior versions\". In light of these two observations, my best interpretation of the quoted text is that it should be equivalent to: NEW: Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Accordingly, a given EPSK might be both used directly as a TLS 1.2 PSK and used with TLS 1.3 via the importer mechanism (noting that this configuration is not recommended, per Section 1), using the same KDF algorithm. However, because the TLS 1.3 importer mechanism includes an additional key-derivation step, the actual PSK used in the TLS 1.3 key schedule will be distinct from the one used in the TLS 1.2 key schedule, so there are no cross-protocol collisions possible. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. For some reason this line implies to me that endpoints might use imported PSKs for TLS 1.2, but this is forbidden by Section 4.1 (as quoted above). Perhaps \"does not preclude endpoints from continuing to use TLS 1.2 and the non-imported PSKs it requires\"? Section 7 We should probably note that any information placed in ImportedIdentity.context will be visible on the wire in cleartext, and thus that confidential identifiers from other protocols should either not be used or should be protected (e.g., by hashing) prior to use. Externally provisioned PSKs imported into a TLS connection achieve compound authentication of the provisioning process and connection. 
Context-free PSKs only achieve authentication within the context of a single connection. Do we need to use pskdhke to get the strong compound authentication property, or is it attainable with pskke as well? Section 10.1 I'm not sure why [QUIC] and RFC 5246 are listed as normative references. Appendix B The Selfie attack [Selfie] relies on a misuse of the PSK interface. The PSK interface makes the implicit assumption that each PSK is known only to one client and one server. If multiple clients or multiple servers with distinct roles share a PSK, TLS only authenticates the entire group. A node successfully authenticates its peer as being in the group whether the peer is another node or itself. I think it may be useful to call out that the \"multiple clients\" and \"multiple servers\" case can occur even when there are only two endpoints that know the key (that is, when they do not have predetermined client/server roles). struct { opaque clientmac; opaque server_mac; } Context; We may not need the whole 16k to hold a MAC address, especially since this is just an example. If an attacker then redirects a ClientHello intended for one node to a different node, the receiver will compute a different context string and the handshake will not complete. Similarly to the above, this could be \"a different node (including the node that generated the ClientHello)\".", "new_text": "PSKs for TLS 1.3 and non-imported PSKs for earlier versions co-exist for incremental deployment. ImportedIdentity.context MUST include the context used to determine the EPSK, if any exists. For example, ImportedIdentity.context may include information about peer roles or identities to mitigate Selfie-style reflection attacks Selfie. See mitigate-selfie for more details. If the EPSK is a key derived from some other protocol or"}
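The record above concerns the Imported PSK derivation, epskx = HKDF-Extract(0, epsk) followed by ipskx = HKDF-Expand-Label(epskx, "derived psk", Hash(ImportedIdentity), L), where the hash is the one associated with the EPSK and the HKDF-Expand-Label variant corresponds to ImportedIdentity.target_protocol. A minimal Python sketch of that derivation, assuming HKDF per RFC 5869 and the HkdfLabel encoding of RFC 8446, Section 7.1 (the `"tls13 "` prefix shown is the TLS 1.3 one; DTLS 1.3 uses its own prefix); all function names here are illustrative, not from the draft:

```python
import hashlib
import hmac
import struct

def hkdf_extract(salt: bytes, ikm: bytes, hash_name: str = "sha256") -> bytes:
    """HKDF-Extract (RFC 5869): PRK = HMAC-Hash(salt, IKM)."""
    return hmac.new(salt, ikm, hash_name).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int,
                hash_name: str = "sha256") -> bytes:
    """HKDF-Expand (RFC 5869): iterate HMAC until `length` octets are produced."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hash_name).digest()
        okm += block
        counter += 1
    return okm[:length]

def hkdf_expand_label(secret: bytes, label: bytes, context: bytes, length: int,
                      hash_name: str = "sha256",
                      prefix: bytes = b"tls13 ") -> bytes:
    """HKDF-Expand-Label per RFC 8446, Section 7.1; the label prefix is
    protocol-dependent, matching ImportedIdentity.target_protocol."""
    full_label = prefix + label
    hkdf_label = (struct.pack("!H", length)
                  + bytes([len(full_label)]) + full_label
                  + bytes([len(context)]) + context)
    return hkdf_expand(secret, hkdf_label, length, hash_name)

def derive_ipskx(epsk: bytes, imported_identity: bytes,
                 length: int = 32, hash_name: str = "sha256") -> bytes:
    """epskx = HKDF-Extract(0, epsk);
    ipskx  = HKDF-Expand-Label(epskx, "derived psk",
                               Hash(ImportedIdentity), L).
    The hash is the one associated with the EPSK (SHA-256 here)."""
    zero_salt = b"\x00" * hashlib.new(hash_name).digest_size
    epskx = hkdf_extract(zero_salt, epsk, hash_name)
    identity_hash = hashlib.new(hash_name, imported_identity).digest()
    return hkdf_expand_label(epskx, b"derived psk", identity_hash,
                             length, hash_name)
```

The resulting `ipskx` is the PSK input to the TLS 1.3 key schedule; distinct ImportedIdentity serializations yield distinct IPSKs from the same EPSK.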
{"id": "q-en-draft-ietf-tls-external-psk-importer-2af619ac8f00afefcff20aa5f073a8c350dfc58757b879e3de67d2b826a16c82", "old_text": "L corresponds to the KDF output length of ImportedIdentity.target_kdf as defined in IANA. For hash-based KDFs, such as HKDF_SHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length suitable for supported ciphersuites. The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". As the maximum size of the PSK extension is 2^16 - 1 octets, an Imported Identity that exceeds this size is likely to cause a", "comments": "I don't love a couple of the ways I phrased things here, so feel free to offer suggestions to improve it! modulo test vectors. (NAME it'll likely take some time to get those vectors. Do you think we should hold this document until that's done?)\nThis looks good modulo the open issue about incremental deployment. I don't think we need to hold the document up in order to get test vectors. Benjamin Kaduk has entered the following ballot position for draft-ietf-tls-external-psk-importer-06: Yes When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Sorry about the large volume of comments; this document is actually in pretty good shape, there are just a bunch of details that can be subtle to get exactly right. I specifically call out the note in Section 4.1 about which HKDF-Expand-Label is to be used, since that would affect the actual derived keys for DTLS 1.3 (and any hypothetical future TLS versions). 
Section 1 While there is no known way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. To mitigate against any interference, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 [RFC8446] and DTLS 1.3 [DTLS13]. [...] IIRC the main target use cases for this mechanism are for existing external PSKs (violating the \"SHOULD\" from the first quoted paragraph), as well as deployments that are simplified by only needing to provision a single PSK. Does it make sense to include such clarifying description here? (Similarly, is it accurate to say that \"[a]pplications SHOULD provision separate PSKs [...], but the mechanism defined in this document provides a partial mitigation for use when that is not possible\"? Also, we only use the word \"interference\" twice in this document (this is the first instance), so it might be worth a couple more words to clarify the nature of the potential interference. Section 3 non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See Section 6 for more details. (side note?) I think the precise story here is a little subtle -- for any given key, the imported and non-imported identities are distinct, but in principle one could construct a non-imported identity that overlaps with an imported identity for a different key (and we say as much at the end of Section 7). This is vanishingly rare to happen by chance, but makes the statement as written not quite categorically true. That said, the change to the binder key derivation (\u00a7 4.2) seems to obviate any potential consequences to such a collision (though the connection attempt would presumably fail). 
Endpoints which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. Moreover, each IIUC, this is \"MUST NOT use the keys that are input to the import process for any purpose other than the importer, and MUST NOT use the derived keys for any purpose other than TLS PSKs\". Is it worth spelling it out explicitly like that? external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from [RFC8446]. See Section 7 for more discussion. Does this requirement apply to the inputs to the importer process, the outputs, or both? Section 3.1 Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, target protocol, and target KDF. There is an \"External Identity\" that is contained within the EPSK itself; is the \"External Identity\" listed here distinct from that? Section 4.1 The PSK Importer interface takes as input an EPSK with External Identity \"externalidentity\" and base key \"epsk\", as defined in Section 3.1, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on target protocols and KDFs. In particular, for each supported target protocol \"targetprotocol\" and KDF \"targetkdf\", the importer constructs an ImportedIdentity structure as follows: If I understand correctly the \"targetkdf\" is supposed to be the KDF associated with the cipher suite(s) the derived PSK will be usable for. This is something of a divergence from RFC 8446, where we assume that the KDF will be HKDF and associate only a hash function with the cipher suite. 
While discussing a more generic KDF concept seems reasonable (and is directly in line with the cryptographic principles that motivate this work), I'd recommend adding a few more words somewhere in this section to clarify the relationship between the targetkdf and the cipher suite(s), probably in the paragraph that starts \"[e]ndpoints SHOULD generate a compatible 'ipskx' for each target ciphersuite they offer\". struct { opaque externalidentity; opaque context; uint16 targetprotocol; uint16 targetkdf; } ImportedIdentity; Should we say that this is using the TLS presentation language? ImportedIdentity.context MUST include the context used to derive the EPSK, if any exists. For example, ImportedIdentity.context may (nit?) \"derive\" suggests a cryptographic key derivation process, but the rest of the prose suggests that this is more a process of \"identification\" or \"determination\". Is there a more appropriate word to use? Given an ImportedIdentity and corresponding EPSK with base key \"epsk\", an Imported PSK IPSK with base key \"ipskx\" is computed as follows: epskx = HKDF-Extract(0, epsk) ipskx = HKDF-Expand-Label(epskx, \"derived psk\", Hash(ImportedIdentity), L) I think we need to say that the HKDF-Expand-Label used (i.e., the URL prefix) is the one corresponding to ImportedIdentity.targetprotocol. L corresponds to the KDF output length of ImportedIdentity.targetkdf as defined in Section 9. For hash-based KDFs, such as HKDFSHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length (32 octets for SHA256, not all hash-based KDFs) The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". In RFC 8446, the \"binderkey\" is the output of the key schedule, but ipskx is an input to the key schedule. 
So I think we want to say something like \"the corresponding PSK input for the TLS 1.3 key schedule is 'ipskx'\". The hash function used for HKDF [RFC5869] is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.targetkdf. If no hash function is specified, I predict that someone will mess up the hash function selection while implementing this. Perhaps we could provide test vectors for the full matrix of (EPSK Hash, targetkdf hash) pairs using SHA256/SHA384? SHA-256 [SHA2] MUST be used. Diversifying EPSK by I think we might be able to tolerate the protocol* having a strict requirement that the associated hash function is specified, and provide a recommended value that would be used by people deploying the protocol in the absence of other information. This might in some sense help future-proof the document for the scenario where SHA256 weakens or is broken and the default should change. KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means Application-Layer Protocol Negotiation [RFC7301], QUIC RFC 8446 does not limit 0-RTT data to resumption, so I think we just want \"if they are bound to the protocol settings and configuration that are required for sending early data\". Also, nit: I think we usually refer to the \"ALPN value\" instead of just \"ALPN\" being configured. Section 5 If a client or server wish to deprecate a hash function and no longer use it for TLS 1.3, they remove the corresponding KDF from the set of target KDFs used for importing keys. This does not affect the KDF operation used to derive Imported PSKs. Should we also include a paragraph about how to deprecate the Hash used in deriving Imported PSKs? (It's fixed per EPSK, so, \"configure a new EPSK\", basically.) 
Section 6 Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. [...] As I read this I recalled that up in Section 1 we say that applications \"SHOULD provision separate PSKs for TLS 1.3 and prior versions\". The scenario being described here sounds like it can only occur if that SHOULD is ignored, which might be worth reiterating. We also say in \u00a7 4.1 that \"external PSKs MUST NOT be imported for (D)TLS or prior versions\". In light of these two observations, my best interpretation of the quoted text is that it should be equivalent to: NEW: Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Accordingly, a given EPSK might be both used directly as a TLS 1.2 PSK and used with TLS 1.3 via the importer mechanism (noting that this configuration is not recommended, per Section 1), using the same KDF algorithm. However, because the TLS 1.3 importer mechanism includes an additional key-derivation step, the actual PSK used in the TLS 1.3 key schedule will be distinct from the one used in the TLS 1.2 key schedule, so there are no cross-protocol collisions possible. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. For some reason this line implies to me that endpoints might use imported PSKs for TLS 1.2, but this is forbidden by Section 4.1 (as quoted above). Perhaps \"does not preclude endpoints from continuing to use TLS 1.2 and the non-imported PSKs it requires\"? 
Section 7 We should probably note that any information placed in ImportedIdentity.context will be visible on the wire in cleartext, and thus that confidential identifiers from other protocols should either not be used or should be protected (e.g., by hashing) prior to use. Externally provisioned PSKs imported into a TLS connection achieve compound authentication of the provisioning process and connection. Context-free PSKs only achieve authentication within the context of a single connection. Do we need to use pskdhke to get the strong compound authentication property, or is it attainable with pskke as well? Section 10.1 I'm not sure why [QUIC] and RFC 5246 are listed as normative references. Appendix B The Selfie attack [Selfie] relies on a misuse of the PSK interface. The PSK interface makes the implicit assumption that each PSK is known only to one client and one server. If multiple clients or multiple servers with distinct roles share a PSK, TLS only authenticates the entire group. A node successfully authenticates its peer as being in the group whether the peer is another node or itself. I think it may be useful to call out that the \"multiple clients\" and \"multiple servers\" case can occur even when there are only two endpoints that know the key (that is, when they do not have predetermined client/server roles). struct { opaque clientmac; opaque server_mac; } Context; We may not need the whole 16k to hold a MAC address, especially since this is just an example. If an attacker then redirects a ClientHello intended for one node to a different node, the receiver will compute a different context string and the handshake will not complete. Similarly to the above, this could be \"a different node (including the node that generated the ClientHello)\".", "new_text": "L corresponds to the KDF output length of ImportedIdentity.target_kdf as defined in IANA. 
For hash-based KDFs, such as HKDF_SHA256(0x0001), this is the length of the hash function output, e.g., 32 octets for SHA256. This is required for the IPSK to be of length suitable for supported ciphersuites. Internally, HKDF-Expand- Label uses a label corresponding to ImportedIdentity.target_protocol, e.g., \"tls13\" for TLS 1.3, as per RFC8446, Section 7.1, or \"dtls13\" for DTLS 1.3, as per I-D.ietf-tls-dtls13, Section 5.10. The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding PSK input for the TLS 1.3 key schedule is 'ipskx'. As the maximum size of the PSK extension is 2^16 - 1 octets, an Imported Identity that exceeds this size is likely to cause a"}
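The record above notes that the serialized ImportedIdentity is used directly as PskIdentity.identity on the wire. A hypothetical serializer, assuming TLS presentation-language vectors with 16-bit length prefixes on the two opaque fields (the draft should be checked for the exact vector bounds); the code point 0x0001 for HKDF_SHA256 is taken from the quoted IANA text, while the target_protocol value shown is only an assumed placeholder:

```python
import struct

TARGET_PROTOCOL_TLS13 = 0x0304   # assumed placeholder for the TLS 1.3 code point
TARGET_KDF_HKDF_SHA256 = 0x0001  # HKDF_SHA256, per the quoted IANA text

def serialize_imported_identity(external_identity: bytes, context: bytes,
                                target_protocol: int, target_kdf: int) -> bytes:
    """Serialize an ImportedIdentity: two length-prefixed opaque vectors
    followed by two uint16 fields, in network byte order."""
    return (struct.pack("!H", len(external_identity)) + external_identity
            + struct.pack("!H", len(context)) + context
            + struct.pack("!HH", target_protocol, target_kdf))
```

For example, `serialize_imported_identity(b"psk1", b"", 0x0304, 0x0001)` encodes an identity with an empty context; since the PSK extension caps identities at 2^16 - 1 octets, oversized encodings would be rejected as the record describes.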
{"id": "q-en-draft-ietf-tls-external-psk-importer-2af619ac8f00afefcff20aa5f073a8c350dfc58757b879e3de67d2b826a16c82", "old_text": "The hash function used for HKDF RFC5869 is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.target_kdf. If no hash function is specified, SHA-256 SHA2 MUST be used. Diversifying EPSK by ImportedIdentity.target_kdf ensures that an IPSK is only used as input keying material to at most one KDF, thus satisfying the requirements in RFC8446. See security-considerations for more", "comments": "I don't love a couple of the ways I phrased things here, so feel free to offer suggestions to improve it! modulo test vectors. (NAME it'll likely take some time to get those vectors. Do you think we should hold this document until that's done?)\nThis looks good modulo the open issue about incremental deployment. I don't think we need to hold the document up in order to get test vectors. Benjamin Kaduk has entered the following ballot position for draft-ietf-tls-external-psk-importer-06: Yes When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Sorry about the large volume of comments; this document is actually in pretty good shape, there are just a bunch of details that can be subtle to get exactly right. I specifically call out the note in Section 4.1 about which HKDF-Expand-Label is to be used, since that would affect the actual derived keys for DTLS 1.3 (and any hypothetical future TLS versions). Section 1 While there is no known way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. 
Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. To mitigate against any interference, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 [RFC8446] and DTLS 1.3 [DTLS13]. [...] IIRC the main target use cases for this mechanism are for existing external PSKs (violating the \"SHOULD\" from the first quoted paragraph), as well as deployments that are simplified by only needing to provision a single PSK. Does it make sense to include such clarifying description here? (Similarly, is it accurate to say that \"[a]pplications SHOULD provision separate PSKs [...], but the mechanism defined in this document provides a partial mitigation for use when that is not possible\"? Also, we only use the word \"interference\" twice in this document (this is the first instance), so it might be worth a couple more words to clarify the nature of the potential interference. Section 3 non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See Section 6 for more details. (side note?) I think the precise story here is a little subtle -- for any given key, the imported and non-imported identities are distinct, but in principle one could construct a non-imported identity that overlaps with an imported identity for a different key (and we say as much at the end of Section 7). This is vanishingly rare to happen by chance, but makes the statement as written not quite categorically true. That said, the change to the binder key derivation (\u00a7 4.2) seems to obviate any potential consequences to such a collision (though the connection attempt would presumably fail). Endpoints which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. 
Moreover, each IIUC, this is \"MUST NOT use the keys that are input to the import process for any purpose other than the importer, and MUST NOT use the derived keys for any purpose other than TLS PSKs\". Is it worth spelling it out explicitly like that? external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from [RFC8446]. See Section 7 for more discussion. Does this requirement apply to the inputs to the importer process, the outputs, or both? Section 3.1 Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, target protocol, and target KDF. There is an \"External Identity\" that is contained within the EPSK itself; is the \"External Identity\" listed here distinct from that? Section 4.1 The PSK Importer interface takes as input an EPSK with External Identity \"externalidentity\" and base key \"epsk\", as defined in Section 3.1, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on target protocols and KDFs. In particular, for each supported target protocol \"targetprotocol\" and KDF \"targetkdf\", the importer constructs an ImportedIdentity structure as follows: If I understand correctly the \"targetkdf\" is supposed to be the KDF associated with the cipher suite(s) the derived PSK will be usable for. This is something of a divergence from RFC 8446, where we assume that the KDF will be HKDF and associate only a hash function with the cipher suite. While discussing a more generic KDF concept seems reasonable (and is directly in line with the cryptographic principles that motivate this work), I'd recommend adding a few more words somewhere in this section to clarify the relationship between the targetkdf and the cipher suite(s), probably in the paragraph that starts \"[e]ndpoints SHOULD generate a compatible 'ipskx' for each target ciphersuite they offer\". 
struct { opaque externalidentity; opaque context; uint16 targetprotocol; uint16 targetkdf; } ImportedIdentity; Should we say that this is using the TLS presentation language? ImportedIdentity.context MUST include the context used to derive the EPSK, if any exists. For example, ImportedIdentity.context may (nit?) \"derive\" suggests a cryptographic key derivation process, but the rest of the prose suggests that this is more a process of \"identification\" or \"determination\". Is there a more appropriate word to use? Given an ImportedIdentity and corresponding EPSK with base key \"epsk\", an Imported PSK IPSK with base key \"ipskx\" is computed as follows: epskx = HKDF-Extract(0, epsk) ipskx = HKDF-Expand-Label(epskx, \"derived psk\", Hash(ImportedIdentity), L) I think we need to say that the HKDF-Expand-Label used (i.e., the URL prefix) is the one corresponding to ImportedIdentity.targetprotocol. L corresponds to the KDF output length of ImportedIdentity.targetkdf as defined in Section 9. For hash-based KDFs, such as HKDFSHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length (32 octets for SHA256, not all hash-based KDFs) The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". In RFC 8446, the \"binderkey\" is the output of the key schedule, but ipskx is an input to the key schedule. So I think we want to say something like \"the corresponding PSK input for the TLS 1.3 key schedule is 'ipskx'\". The hash function used for HKDF [RFC5869] is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.targetkdf. If no hash function is specified, I predict that someone will mess up the hash function selection while implementing this. 
Perhaps we could provide test vectors for the full matrix of (EPSK Hash, targetkdf hash) pairs using SHA256/SHA384? SHA-256 [SHA2] MUST be used. Diversifying EPSK by I think we might be able to tolerate the protocol* having a strict requirement that the associated hash function is specified, and provide a recommended value that would be used by people deploying the protocol in the absence of other information. This might in some sense help future-proof the document for the scenario where SHA256 weakens or is broken and the default should change. KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means Application-Layer Protocol Negotiation [RFC7301], QUIC RFC 8446 does not limit 0-RTT data to resumption, so I think we just want \"if they are bound to the protocol settings and configuration that are required for sending early data\". Also, nit: I think we usually refer to the \"ALPN value\" instead of just \"ALPN\" being configured. Section 5 If a client or server wish to deprecate a hash function and no longer use it for TLS 1.3, they remove the corresponding KDF from the set of target KDFs used for importing keys. This does not affect the KDF operation used to derive Imported PSKs. Should we also include a paragraph about how to deprecate the Hash used in deriving Imported PSKs? (It's fixed per EPSK, so, \"configure a new EPSK\", basically.) Section 6 Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. 
Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. [...] As I read this I recalled that up in Section 1 we say that applications \"SHOULD provision separate PSKs for TLS 1.3 and prior versions\". The scenario being described here sounds like it can only occur if that SHOULD is ignored, which might be worth reiterating. We also say in \u00a7 4.1 that \"external PSKs MUST NOT be imported for (D)TLS or prior versions\". In light of these two observations, my best interpretation of the quoted text is that it should be equivalent to: NEW: Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Accordingly, a given EPSK might be both used directly as a TLS 1.2 PSK and used with TLS 1.3 via the importer mechanism (noting that this configuration is not recommended, per Section 1), using the same KDF algorithm. However, because the TLS 1.3 importer mechanism includes an additional key-derivation step, the actual PSK used in the TLS 1.3 key schedule will be distinct from the one used in the TLS 1.2 key schedule, so there are no cross-protocol collisions possible. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. For some reason this line implies to me that endpoints might use imported PSKs for TLS 1.2, but this is forbidden by Section 4.1 (as quoted above). Perhaps \"does not preclude endpoints from continuing to use TLS 1.2 and the non-imported PSKs it requires\"? Section 7 We should probably note that any information placed in ImportedIdentity.context will be visible on the wire in cleartext, and thus that confidential identifiers from other protocols should either not be used or should be protected (e.g., by hashing) prior to use. Externally provisioned PSKs imported into a TLS connection achieve compound authentication of the provisioning process and connection. 
Context-free PSKs only achieve authentication within the context of a single connection. Do we need to use pskdhke to get the strong compound authentication property, or is it attainable with pskke as well? Section 10.1 I'm not sure why [QUIC] and RFC 5246 are listed as normative references. Appendix B The Selfie attack [Selfie] relies on a misuse of the PSK interface. The PSK interface makes the implicit assumption that each PSK is known only to one client and one server. If multiple clients or multiple servers with distinct roles share a PSK, TLS only authenticates the entire group. A node successfully authenticates its peer as being in the group whether the peer is another node or itself. I think it may be useful to call out that the \"multiple clients\" and \"multiple servers\" case can occur even when there are only two endpoints that know the key (that is, when they do not have predetermined client/server roles). struct { opaque clientmac; opaque server_mac; } Context; We may not need the whole 16k to hold a MAC address, especially since this is just an example. If an attacker then redirects a ClientHello intended for one node to a different node, the receiver will compute a different context string and the handshake will not complete. Similarly to the above, this could be \"a different node (including the node that generated the ClientHello)\".", "new_text": "The hash function used for HKDF RFC5869 is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.target_kdf. If no hash function is specified, SHA-256 SHA2 SHOULD be used. Diversifying EPSK by ImportedIdentity.target_kdf ensures that an IPSK is only used as input keying material to at most one KDF, thus satisfying the requirements in RFC8446. See security-considerations for more"}
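The record above explains that diversifying an EPSK by ImportedIdentity.target_kdf ensures each IPSK feeds at most one KDF. A small sketch of why this holds: two identities that differ only in the final target_kdf field hash to different transcripts, so the "derived psk" expansion yields distinct keys. The encoding below assumes 16-bit length prefixes and uses an assumed target_protocol placeholder; 0x0001 is HKDF_SHA256 per the quoted IANA text, and 0x0002 stands in for a second, hypothetical KDF code point:

```python
import hashlib

def identity_hash(target_kdf: int) -> bytes:
    """Hash of a toy ImportedIdentity that varies only in target_kdf."""
    ident = (b"\x00\x04psk1"              # external_identity (assumed 16-bit prefix)
             + b"\x00\x00"                # empty context
             + b"\x03\x04"                # target_protocol (assumed placeholder)
             + target_kdf.to_bytes(2, "big"))
    return hashlib.sha256(ident).digest()
```

Because Hash(ImportedIdentity) is an input to HKDF-Expand-Label, `identity_hash(0x0001) != identity_hash(0x0002)` implies the corresponding IPSKs differ, which is the separation property the record describes.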
{"id": "q-en-draft-ietf-tls-external-psk-importer-2af619ac8f00afefcff20aa5f073a8c350dfc58757b879e3de67d2b826a16c82", "old_text": "TLS_AES_128_GCM_SHA256 and TLS_AES_256_GCM_SHA384 would yield two PSKs, one for HKDF-SHA256 and another for HKDF-SHA384. In contrast, if TLS_AES_128_GCM_SHA256 and TLS_CHACHA20_POLY1305_SHA256 are supported, only one derived key is necessary. EPSKs MAY be imported before the start of a connection if the target KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means Application-Layer Protocol Negotiation RFC7301, QUIC transport parameters (if used for QUIC), and any other relevant parameters that are negotiated for early data MUST be provisioned alongside these EPSKs. 4.2.", "comments": "I don't love a couple of the ways I phrased things here, so feel free to offer suggestions to improve it! modulo test vectors. (NAME it'll likely take some time to get those vectors. Do you think we should hold this document until that's done?)\nThis looks good modulo the open issue about incremental deployment. I don't think we need to hold the document up in order to get test vectors. Benjamin Kaduk has entered the following ballot position for draft-ietf-tls-external-psk-importer-06: Yes When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Sorry about the large volume of comments; this document is actually in pretty good shape, there are just a bunch of details that can be subtle to get exactly right. 
I specifically call out the note in Section 4.1 about which HKDF-Expand-Label is to be used, since that would affect the actual derived keys for DTLS 1.3 (and any hypothetical future TLS versions). Section 1 While there is no known way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. To mitigate against any interference, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 [RFC8446] and DTLS 1.3 [DTLS13]. [...] IIRC the main target use cases for this mechanism are for existing external PSKs (violating the \"SHOULD\" from the first quoted paragraph), as well as deployments that are simplified by only needing to provision a single PSK. Does it make sense to include such clarifying description here? (Similarly, is it accurate to say that \"[a]pplications SHOULD provision separate PSKs [...], but the mechanism defined in this document provides a partial mitigation for use when that is not possible\"? Also, we only use the word \"interference\" twice in this document (this is the first instance), so it might be worth a couple more words to clarify the nature of the potential interference. Section 3 non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See Section 6 for more details. (side note?) I think the precise story here is a little subtle -- for any given key, the imported and non-imported identities are distinct, but in principle one could construct a non-imported identity that overlaps with an imported identity for a different key (and we say as much at the end of Section 7). 
This is vanishingly rare to happen by chance, but makes the statement as written not quite categorically true. That said, the change to the binder key derivation (\u00a7 4.2) seems to obviate any potential consequences to such a collision (though the connection attempt would presumably fail). Endpoints which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. Moreover, each IIUC, this is \"MUST NOT use the keys that are input to the import process for any purpose other than the importer, and MUST NOT use the derived keys for any purpose other than TLS PSKs\". Is it worth spelling it out explicitly like that? external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from [RFC8446]. See Section 7 for more discussion. Does this requirement apply to the inputs to the importer process, the outputs, or both? Section 3.1 Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, target protocol, and target KDF. There is an \"External Identity\" that is contained within the EPSK itself; is the \"External Identity\" listed here distinct from that? Section 4.1 The PSK Importer interface takes as input an EPSK with External Identity \"externalidentity\" and base key \"epsk\", as defined in Section 3.1, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on target protocols and KDFs. In particular, for each supported target protocol \"targetprotocol\" and KDF \"targetkdf\", the importer constructs an ImportedIdentity structure as follows: If I understand correctly the \"targetkdf\" is supposed to be the KDF associated with the cipher suite(s) the derived PSK will be usable for. This is something of a divergence from RFC 8446, where we assume that the KDF will be HKDF and associate only a hash function with the cipher suite. 
While discussing a more generic KDF concept seems reasonable (and is directly in line with the cryptographic principles that motivate this work), I'd recommend adding a few more words somewhere in this section to clarify the relationship between the targetkdf and the cipher suite(s), probably in the paragraph that starts \"[e]ndpoints SHOULD generate a compatible 'ipskx' for each target ciphersuite they offer\". struct { opaque externalidentity; opaque context; uint16 targetprotocol; uint16 targetkdf; } ImportedIdentity; Should we say that this is using the TLS presentation language? ImportedIdentity.context MUST include the context used to derive the EPSK, if any exists. For example, ImportedIdentity.context may (nit?) \"derive\" suggests a cryptographic key derivation process, but the rest of the prose suggests that this is more a process of \"identification\" or \"determination\". Is there a more appropriate word to use? Given an ImportedIdentity and corresponding EPSK with base key \"epsk\", an Imported PSK IPSK with base key \"ipskx\" is computed as follows: epskx = HKDF-Extract(0, epsk) ipskx = HKDF-Expand-Label(epskx, \"derived psk\", Hash(ImportedIdentity), L) I think we need to say that the HKDF-Expand-Label used (i.e., the URL prefix) is the one corresponding to ImportedIdentity.targetprotocol. L corresponds to the KDF output length of ImportedIdentity.targetkdf as defined in Section 9. For hash-based KDFs, such as HKDFSHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length (32 octets for SHA256, not all hash-based KDFs) The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". In RFC 8446, the \"binderkey\" is the output of the key schedule, but ipskx is an input to the key schedule. 
So I think we want to say something like \"the corresponding PSK input for the TLS 1.3 key schedule is 'ipskx'\". The hash function used for HKDF [RFC5869] is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.targetkdf. If no hash function is specified, I predict that someone will mess up the hash function selection while implementing this. Perhaps we could provide test vectors for the full matrix of (EPSK Hash, targetkdf hash) pairs using SHA256/SHA384? SHA-256 [SHA2] MUST be used. Diversifying EPSK by I think we might be able to tolerate the protocol* having a strict requirement that the associated hash function is specified, and provide a recommended value that would be used by people deploying the protocol in the absence of other information. This might in some sense help future-proof the document for the scenario where SHA256 weakens or is broken and the default should change. KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means Application-Layer Protocol Negotiation [RFC7301], QUIC RFC 8446 does not limit 0-RTT data to resumption, so I think we just want \"if they are bound to the protocol settings and configuration that are required for sending early data\". Also, nit: I think we usually refer to the \"ALPN value\" instead of just \"ALPN\" being configured. Section 5 If a client or server wish to deprecate a hash function and no longer use it for TLS 1.3, they remove the corresponding KDF from the set of target KDFs used for importing keys. This does not affect the KDF operation used to derive Imported PSKs. Should we also include a paragraph about how to deprecate the Hash used in deriving Imported PSKs? (It's fixed per EPSK, so, \"configure a new EPSK\", basically.) 
Section 6 Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. [...] As I read this I recalled that up in Section 1 we say that applications \"SHOULD provision separate PSKs for TLS 1.3 and prior versions\". The scenario being described here sounds like it can only occur if that SHOULD is ignored, which might be worth reiterating. We also say in \u00a7 4.1 that \"external PSKs MUST NOT be imported for (D)TLS or prior versions\". In light of these two observations, my best interpretation of the quoted text is that it should be equivalent to: NEW: Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Accordingly, a given EPSK might be both used directly as a TLS 1.2 PSK and used with TLS 1.3 via the importer mechanism (noting that this configuration is not recommended, per Section 1), using the same KDF algorithm. However, because the TLS 1.3 importer mechanism includes an additional key-derivation step, the actual PSK used in the TLS 1.3 key schedule will be distinct from the one used in the TLS 1.2 key schedule, so there are no cross-protocol collisions possible. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. For some reason this line implies to me that endpoints might use imported PSKs for TLS 1.2, but this is forbidden by Section 4.1 (as quoted above). Perhaps \"does not preclude endpoints from continuing to use TLS 1.2 and the non-imported PSKs it requires\"? 
Section 7 We should probably note that any information placed in ImportedIdentity.context will be visible on the wire in cleartext, and thus that confidential identifiers from other protocols should either not be used or should be protected (e.g., by hashing) prior to use. Externally provisioned PSKs imported into a TLS connection achieve compound authentication of the provisioning process and connection. Context-free PSKs only achieve authentication within the context of a single connection. Do we need to use pskdhke to get the strong compound authentication property, or is it attainable with pskke as well? Section 10.1 I'm not sure why [QUIC] and RFC 5246 are listed as normative references. Appendix B The Selfie attack [Selfie] relies on a misuse of the PSK interface. The PSK interface makes the implicit assumption that each PSK is known only to one client and one server. If multiple clients or multiple servers with distinct roles share a PSK, TLS only authenticates the entire group. A node successfully authenticates its peer as being in the group whether the peer is another node or itself. I think it may be useful to call out that the \"multiple clients\" and \"multiple servers\" case can occur even when there are only two endpoints that know the key (that is, when they do not have predetermined client/server roles). struct { opaque clientmac; opaque server_mac; } Context; We may not need the whole 16k to hold a MAC address, especially since this is just an example. If an attacker then redirects a ClientHello intended for one node to a different node, the receiver will compute a different context string and the handshake will not complete. Similarly to the above, this could be \"a different node (including the node that generated the ClientHello)\".", "new_text": "TLS_AES_128_GCM_SHA256 and TLS_AES_256_GCM_SHA384 would yield two PSKs, one for HKDF-SHA256 and another for HKDF-SHA384. 
In contrast, if TLS_AES_128_GCM_SHA256 and TLS_CHACHA20_POLY1305_SHA256 are supported, only one derived key is necessary. Each ciphersuite uniquely identifies the target KDF. Future specifications that change the way the KDF is negotiated will need to update this specification to make clear how target KDFs are determined for the import process. EPSKs MAY be imported before the start of a connection if the target KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to the protocol settings and configuration that are required for sending early data. Minimally, that means the Application-Layer Protocol Negotiation (ALPN) value RFC7301, QUIC transport parameters (if used for QUIC), and any other relevant parameters that are negotiated for early data MUST be provisioned alongside these EPSKs. 4.2."}
{"id": "q-en-draft-ietf-tls-external-psk-importer-2af619ac8f00afefcff20aa5f073a8c350dfc58757b879e3de67d2b826a16c82", "old_text": "6. Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. Indeed, this is necessary for incremental deployment. Specifically, existing applications using TLS 1.2 with non-imported PSKs can safely enable TLS 1.3 with imported PSKs in clients and servers without interoperability risk. 7.", "comments": "I don't love a couple of the ways I phrased things here, so feel free to offer suggestions to improve it! modulo test vectors. (NAME it'll likely take some time to get those vectors. Do you think we should hold this document until that's done?)\nThis looks good modulo the open issue about incremental deployment. I don't think we need to hold the document up in order to get test vectors. Benjamin Kaduk has entered the following ballot position for draft-ietf-tls-external-psk-importer-06: Yes When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Sorry about the large volume of comments; this document is actually in pretty good shape, there are just a bunch of details that can be subtle to get exactly right. 
I specifically call out the note in Section 4.1 about which HKDF-Expand-Label is to be used, since that would affect the actual derived keys for DTLS 1.3 (and any hypothetical future TLS versions). Section 1 While there is no known way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. To mitigate against any interference, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 [RFC8446] and DTLS 1.3 [DTLS13]. [...] IIRC the main target use cases for this mechanism are for existing external PSKs (violating the \"SHOULD\" from the first quoted paragraph), as well as deployments that are simplified by only needing to provision a single PSK. Does it make sense to include such clarifying description here? (Similarly, is it accurate to say that \"[a]pplications SHOULD provision separate PSKs [...], but the mechanism defined in this document provides a partial mitigation for use when that is not possible\"? Also, we only use the word \"interference\" twice in this document (this is the first instance), so it might be worth a couple more words to clarify the nature of the potential interference. Section 3 non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See Section 6 for more details. (side note?) I think the precise story here is a little subtle -- for any given key, the imported and non-imported identities are distinct, but in principle one could construct a non-imported identity that overlaps with an imported identity for a different key (and we say as much at the end of Section 7). 
This is vanishingly rare to happen by chance, but makes the statement as written not quite categorically true. That said, the change to the binder key derivation (\u00a7 4.2) seems to obviate any potential consequences to such a collision (though the connection attempt would presumably fail). Endpoints which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. Moreover, each IIUC, this is \"MUST NOT use the keys that are input to the import process for any purpose other than the importer, and MUST NOT use the derived keys for any purpose other than TLS PSKs\". Is it worth spelling it out explicitly like that? external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from [RFC8446]. See Section 7 for more discussion. Does this requirement apply to the inputs to the importer process, the outputs, or both? Section 3.1 Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, target protocol, and target KDF. There is an \"External Identity\" that is contained within the EPSK itself; is the \"External Identity\" listed here distinct from that? Section 4.1 The PSK Importer interface takes as input an EPSK with External Identity \"externalidentity\" and base key \"epsk\", as defined in Section 3.1, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on target protocols and KDFs. In particular, for each supported target protocol \"targetprotocol\" and KDF \"targetkdf\", the importer constructs an ImportedIdentity structure as follows: If I understand correctly the \"targetkdf\" is supposed to be the KDF associated with the cipher suite(s) the derived PSK will be usable for. This is something of a divergence from RFC 8446, where we assume that the KDF will be HKDF and associate only a hash function with the cipher suite. 
While discussing a more generic KDF concept seems reasonable (and is directly in line with the cryptographic principles that motivate this work), I'd recommend adding a few more words somewhere in this section to clarify the relationship between the targetkdf and the cipher suite(s), probably in the paragraph that starts \"[e]ndpoints SHOULD generate a compatible 'ipskx' for each target ciphersuite they offer\". struct { opaque externalidentity; opaque context; uint16 targetprotocol; uint16 targetkdf; } ImportedIdentity; Should we say that this is using the TLS presentation language? ImportedIdentity.context MUST include the context used to derive the EPSK, if any exists. For example, ImportedIdentity.context may (nit?) \"derive\" suggests a cryptographic key derivation process, but the rest of the prose suggests that this is more a process of \"identification\" or \"determination\". Is there a more appropriate word to use? Given an ImportedIdentity and corresponding EPSK with base key \"epsk\", an Imported PSK IPSK with base key \"ipskx\" is computed as follows: epskx = HKDF-Extract(0, epsk) ipskx = HKDF-Expand-Label(epskx, \"derived psk\", Hash(ImportedIdentity), L) I think we need to say that the HKDF-Expand-Label used (i.e., the URL prefix) is the one corresponding to ImportedIdentity.targetprotocol. L corresponds to the KDF output length of ImportedIdentity.targetkdf as defined in Section 9. For hash-based KDFs, such as HKDFSHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length (32 octets for SHA256, not all hash-based KDFs) The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". In RFC 8446, the \"binderkey\" is the output of the key schedule, but ipskx is an input to the key schedule. 
So I think we want to say something like \"the corresponding PSK input for the TLS 1.3 key schedule is 'ipskx'\". The hash function used for HKDF [RFC5869] is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.targetkdf. If no hash function is specified, I predict that someone will mess up the hash function selection while implementing this. Perhaps we could provide test vectors for the full matrix of (EPSK Hash, targetkdf hash) pairs using SHA256/SHA384? SHA-256 [SHA2] MUST be used. Diversifying EPSK by I think we might be able to tolerate the protocol* having a strict requirement that the associated hash function is specified, and provide a recommended value that would be used by people deploying the protocol in the absence of other information. This might in some sense help future-proof the document for the scenario where SHA256 weakens or is broken and the default should change. KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means Application-Layer Protocol Negotiation [RFC7301], QUIC RFC 8446 does not limit 0-RTT data to resumption, so I think we just want \"if they are bound to the protocol settings and configuration that are required for sending early data\". Also, nit: I think we usually refer to the \"ALPN value\" instead of just \"ALPN\" being configured. Section 5 If a client or server wish to deprecate a hash function and no longer use it for TLS 1.3, they remove the corresponding KDF from the set of target KDFs used for importing keys. This does not affect the KDF operation used to derive Imported PSKs. Should we also include a paragraph about how to deprecate the Hash used in deriving Imported PSKs? (It's fixed per EPSK, so, \"configure a new EPSK\", basically.) 
Section 6 Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. [...] As I read this I recalled that up in Section 1 we say that applications \"SHOULD provision separate PSKs for TLS 1.3 and prior versions\". The scenario being described here sounds like it can only occur if that SHOULD is ignored, which might be worth reiterating. We also say in \u00a7 4.1 that \"external PSKs MUST NOT be imported for (D)TLS or prior versions\". In light of these two observations, my best interpretation of the quoted text is that it should be equivalent to: NEW: Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Accordingly, a given EPSK might be both used directly as a TLS 1.2 PSK and used with TLS 1.3 via the importer mechanism (noting that this configuration is not recommended, per Section 1), using the same KDF algorithm. However, because the TLS 1.3 importer mechanism includes an additional key-derivation step, the actual PSK used in the TLS 1.3 key schedule will be distinct from the one used in the TLS 1.2 key schedule, so there are no cross-protocol collisions possible. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. For some reason this line implies to me that endpoints might use imported PSKs for TLS 1.2, but this is forbidden by Section 4.1 (as quoted above). Perhaps \"does not preclude endpoints from continuing to use TLS 1.2 and the non-imported PSKs it requires\"? 
Section 7 We should probably note that any information placed in ImportedIdentity.context will be visible on the wire in cleartext, and thus that confidential identifiers from other protocols should either not be used or should be protected (e.g., by hashing) prior to use. Externally provisioned PSKs imported into a TLS connection achieve compound authentication of the provisioning process and connection. Context-free PSKs only achieve authentication within the context of a single connection. Do we need to use pskdhke to get the strong compound authentication property, or is it attainable with pskke as well? Section 10.1 I'm not sure why [QUIC] and RFC 5246 are listed as normative references. Appendix B The Selfie attack [Selfie] relies on a misuse of the PSK interface. The PSK interface makes the implicit assumption that each PSK is known only to one client and one server. If multiple clients or multiple servers with distinct roles share a PSK, TLS only authenticates the entire group. A node successfully authenticates its peer as being in the group whether the peer is another node or itself. I think it may be useful to call out that the \"multiple clients\" and \"multiple servers\" case can occur even when there are only two endpoints that know the key (that is, when they do not have predetermined client/server roles). struct { opaque clientmac; opaque server_mac; } Context; We may not need the whole 16k to hold a MAC address, especially since this is just an example. If an attacker then redirects a ClientHello intended for one node to a different node, the receiver will compute a different context string and the handshake will not complete. Similarly to the above, this could be \"a different node (including the node that generated the ClientHello)\".", "new_text": "6. The mechanism defined in this document requires that an EPSK is only ever used as an EPSK and not for any other purpose. 
In particular, this requirement disallows direct use of the EPSK as a PSK in TLS 1.2. The importer process produces distinct IPSKs derived from the target protocol and KDF, which in turn protects against cross-protocol collisions for protocol versions using this process by ensuring that each IPSK can only be used with one protocol and KDF. This is in distinct contrast to TLS 1.2, where a given PSK might be used with multiple KDFs in different handshakes, and importers are not available. Furthermore, the KDF used in TLS 1.2 might be the same KDF used by the importer mechanism itself. In deployments that already have PSKs provisioned and in use with TLS 1.2, attempting to incrementally deploy the importer mechanism would then result in concurrent use of the already provisioned PSK both directly as a TLS 1.2 PSK and as an EPSK, which in turn could mean that the same KDF and key would be used in two different protocol contexts. There are no known related outputs or security issues that would arise from this arrangement. However, only limited analysis has been done, and as such this is not a recommended configuration. Nevertheless, the benefits of using TLS 1.3 and of using PSK importers may prove sufficiently compelling that existing deployments choose to enable this noncompliant configuration for a brief transition period while new software (using TLS 1.3 and importers) is deployed. Operators are advised to make any such transition period as short as possible. 7."}
{"id": "q-en-draft-ietf-tls-external-psk-importer-2af619ac8f00afefcff20aa5f073a8c350dfc58757b879e3de67d2b826a16c82", "old_text": "Imported PSKs do not collide with future protocol versions and KDFs. There is no known interference between the process for computing Imported PSKs from an external PSK and the processing of existing external PSKs used in (D)TLS 1.2 and below. However, only limited analysis has been done, which is an additional reason why applications SHOULD provision separate PSKs for (D)TLS 1.3 and prior versions, even when the importer interface is used in (D)TLS 1.3. The PSK Importer does not prevent applications from constructing non- importer PSK identities that collide with imported PSK identities.", "comments": "I don't love a couple of the ways I phrased things here, so feel free to offer suggestions to improve it! modulo test vectors. (NAME it'll likely take some time to get those vectors. Do you think we should hold this document until that's done?)\nThis looks good modulo the open issue about incremental deployment. I don't think we need to hold the document up in order to get test vectors. Benjamin Kaduk has entered the following ballot position for draft-ietf-tls-external-psk-importer-06: Yes When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Sorry about the large volume of comments; this document is actually in pretty good shape, there are just a bunch of details that can be subtle to get exactly right. I specifically call out the note in Section 4.1 about which HKDF-Expand-Label is to be used, since that would affect the actual derived keys for DTLS 1.3 (and any hypothetical future TLS versions). 
Section 1 While there is no known way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. To mitigate against any interference, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 [RFC8446] and DTLS 1.3 [DTLS13]. [...] IIRC the main target use cases for this mechanism are for existing external PSKs (violating the \"SHOULD\" from the first quoted paragraph), as well as deployments that are simplified by only needing to provision a single PSK. Does it make sense to include such clarifying description here? (Similarly, is it accurate to say that \"[a]pplications SHOULD provision separate PSKs [...], but the mechanism defined in this document provides a partial mitigation for use when that is not possible\"? Also, we only use the word \"interference\" twice in this document (this is the first instance), so it might be worth a couple more words to clarify the nature of the potential interference. Section 3 non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See Section 6 for more details. (side note?) I think the precise story here is a little subtle -- for any given key, the imported and non-imported identities are distinct, but in principle one could construct a non-imported identity that overlaps with an imported identity for a different key (and we say as much at the end of Section 7). This is vanishingly rare to happen by chance, but makes the statement as written not quite categorically true. That said, the change to the binder key derivation (\u00a7 4.2) seems to obviate any potential consequences to such a collision (though the connection attempt would presumably fail). 
Endpoints which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. Moreover, each IIUC, this is \"MUST NOT use the keys that are input to the import process for any purpose other than the importer, and MUST NOT use the derived keys for any purpose other than TLS PSKs\". Is it worth spelling it out explicitly like that? external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from [RFC8446]. See Section 7 for more discussion. Does this requirement apply to the inputs to the importer process, the outputs, or both? Section 3.1 Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, target protocol, and target KDF. There is an \"External Identity\" that is contained within the EPSK itself; is the \"External Identity\" listed here distinct from that? Section 4.1 The PSK Importer interface takes as input an EPSK with External Identity \"externalidentity\" and base key \"epsk\", as defined in Section 3.1, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on target protocols and KDFs. In particular, for each supported target protocol \"targetprotocol\" and KDF \"targetkdf\", the importer constructs an ImportedIdentity structure as follows: If I understand correctly the \"targetkdf\" is supposed to be the KDF associated with the cipher suite(s) the derived PSK will be usable for. This is something of a divergence from RFC 8446, where we assume that the KDF will be HKDF and associate only a hash function with the cipher suite. 
While discussing a more generic KDF concept seems reasonable (and is directly in line with the cryptographic principles that motivate this work), I'd recommend adding a few more words somewhere in this section to clarify the relationship between the targetkdf and the cipher suite(s), probably in the paragraph that starts \"[e]ndpoints SHOULD generate a compatible 'ipskx' for each target ciphersuite they offer\". struct { opaque externalidentity; opaque context; uint16 targetprotocol; uint16 targetkdf; } ImportedIdentity; Should we say that this is using the TLS presentation language? ImportedIdentity.context MUST include the context used to derive the EPSK, if any exists. For example, ImportedIdentity.context may (nit?) \"derive\" suggests a cryptographic key derivation process, but the rest of the prose suggests that this is more a process of \"identification\" or \"determination\". Is there a more appropriate word to use? Given an ImportedIdentity and corresponding EPSK with base key \"epsk\", an Imported PSK IPSK with base key \"ipskx\" is computed as follows: epskx = HKDF-Extract(0, epsk) ipskx = HKDF-Expand-Label(epskx, \"derived psk\", Hash(ImportedIdentity), L) I think we need to say that the HKDF-Expand-Label used (i.e., the URL prefix) is the one corresponding to ImportedIdentity.targetprotocol. L corresponds to the KDF output length of ImportedIdentity.targetkdf as defined in Section 9. For hash-based KDFs, such as HKDFSHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length (32 octets for SHA256, not all hash-based KDFs) The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". In RFC 8446, the \"binderkey\" is the output of the key schedule, but ipskx is an input to the key schedule. 
So I think we want to say something like \"the corresponding PSK input for the TLS 1.3 key schedule is 'ipskx'\". The hash function used for HKDF [RFC5869] is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.targetkdf. If no hash function is specified, I predict that someone will mess up the hash function selection while implementing this. Perhaps we could provide test vectors for the full matrix of (EPSK Hash, targetkdf hash) pairs using SHA256/SHA384? SHA-256 [SHA2] MUST be used. Diversifying EPSK by I think we might be able to tolerate the protocol* having a strict requirement that the associated hash function is specified, and provide a recommended value that would be used by people deploying the protocol in the absence of other information. This might in some sense help future-proof the document for the scenario where SHA256 weakens or is broken and the default should change. KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means Application-Layer Protocol Negotiation [RFC7301], QUIC RFC 8446 does not limit 0-RTT data to resumption, so I think we just want \"if they are bound to the protocol settings and configuration that are required for sending early data\". Also, nit: I think we usually refer to the \"ALPN value\" instead of just \"ALPN\" being configured. Section 5 If a client or server wish to deprecate a hash function and no longer use it for TLS 1.3, they remove the corresponding KDF from the set of target KDFs used for importing keys. This does not affect the KDF operation used to derive Imported PSKs. Should we also include a paragraph about how to deprecate the Hash used in deriving Imported PSKs? (It's fixed per EPSK, so, \"configure a new EPSK\", basically.) 
Section 6 Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. [...] As I read this I recalled that up in Section 1 we say that applications \"SHOULD provision separate PSKs for TLS 1.3 and prior versions\". The scenario being described here sounds like it can only occur if that SHOULD is ignored, which might be worth reiterating. We also say in \u00a7 4.1 that \"external PSKs MUST NOT be imported for (D)TLS or prior versions\". In light of these two observations, my best interpretation of the quoted text is that it should be equivalent to: NEW: Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Accordingly, a given EPSK might be both used directly as a TLS 1.2 PSK and used with TLS 1.3 via the importer mechanism (noting that this configuration is not recommended, per Section 1), using the same KDF algorithm. However, because the TLS 1.3 importer mechanism includes an additional key-derivation step, the actual PSK used in the TLS 1.3 key schedule will be distinct from the one used in the TLS 1.2 key schedule, so there are no cross-protocol collisions possible. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. For some reason this line implies to me that endpoints might use imported PSKs for TLS 1.2, but this is forbidden by Section 4.1 (as quoted above). Perhaps \"does not preclude endpoints from continuing to use TLS 1.2 and the non-imported PSKs it requires\"? 
Section 7 We should probably note that any information placed in ImportedIdentity.context will be visible on the wire in cleartext, and thus that confidential identifiers from other protocols should either not be used or should be protected (e.g., by hashing) prior to use. Externally provisioned PSKs imported into a TLS connection achieve compound authentication of the provisioning process and connection. Context-free PSKs only achieve authentication within the context of a single connection. Do we need to use psk_dhe_ke to get the strong compound authentication property, or is it attainable with psk_ke as well? Section 10.1 I'm not sure why [QUIC] and RFC 5246 are listed as normative references. Appendix B The Selfie attack [Selfie] relies on a misuse of the PSK interface. The PSK interface makes the implicit assumption that each PSK is known only to one client and one server. If multiple clients or multiple servers with distinct roles share a PSK, TLS only authenticates the entire group. A node successfully authenticates its peer as being in the group whether the peer is another node or itself. I think it may be useful to call out that the \"multiple clients\" and \"multiple servers\" case can occur even when there are only two endpoints that know the key (that is, when they do not have predetermined client/server roles). struct { opaque client_mac; opaque server_mac; } Context; We may not need the whole 16k to hold a MAC address, especially since this is just an example. If an attacker then redirects a ClientHello intended for one node to a different node, the receiver will compute a different context string and the handshake will not complete. Similarly to the above, this could be \"a different node (including the node that generated the ClientHello)\".", "new_text": "Imported PSKs do not collide with future protocol versions and KDFs. 
There are no known related outputs or security issues caused by the process for computing Imported PSKs from an external PSK and the processing of existing external PSKs used in (D)TLS 1.2 and below, as noted in rollout. However, only limited analysis has been done, which is an additional reason why applications SHOULD provision separate PSKs for (D)TLS 1.3 and prior versions, even when the importer interface is used in (D)TLS 1.3. The PSK Importer does not prevent applications from constructing non-importer PSK identities that collide with imported PSK identities."}
{"id": "q-en-draft-ietf-tls-external-psk-importer-2af619ac8f00afefcff20aa5f073a8c350dfc58757b879e3de67d2b826a16c82", "old_text": "systems and use cases, this identity may become a persistent tracking identifier. 9. This specification introduces a new registry for TLS KDF identifiers,", "comments": "I don't love a couple of the ways I phrased things here, so feel free to offer suggestions to improve it! modulo test vectors. (NAME it'll likely take some time to get those vectors. Do you think we should hold this document until that's done?)\nThis looks good modulo the open issue about incremental deployment. I don't think we need to hold the document up in order to get test vectors. Benjamin Kaduk has entered the following ballot position for draft-ietf-tls-external-psk-importer-06: Yes When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about IESG DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Sorry about the large volume of comments; this document is actually in pretty good shape, there are just a bunch of details that can be subtle to get exactly right. I specifically call out the note in Section 4.1 about which HKDF-Expand-Label is to be used, since that would affect the actual derived keys for DTLS 1.3 (and any hypothetical future TLS versions). Section 1 While there is no known way in which the same external PSK might produce related output in TLS 1.3 and prior versions, only limited analysis has been done. Applications SHOULD provision separate PSKs for TLS 1.3 and prior versions. 
To mitigate against any interference, this document specifies a PSK Importer interface by which external PSKs may be imported and subsequently bound to a specific key derivation function (KDF) and hash function for use in TLS 1.3 [RFC8446] and DTLS 1.3 [DTLS13]. [...] IIRC the main target use cases for this mechanism are for existing external PSKs (violating the \"SHOULD\" from the first quoted paragraph), as well as deployments that are simplified by only needing to provision a single PSK. Does it make sense to include such clarifying description here? (Similarly, is it accurate to say that \"[a]pplications SHOULD provision separate PSKs [...], but the mechanism defined in this document provides a partial mitigation for use when that is not possible\"? Also, we only use the word \"interference\" twice in this document (this is the first instance), so it might be worth a couple more words to clarify the nature of the potential interference. Section 3 non-imported keys for TLS versions prior to TLS 1.3. Non-imported and imported PSKs are distinct since their identities are different on the wire. See Section 6 for more details. (side note?) I think the precise story here is a little subtle -- for any given key, the imported and non-imported identities are distinct, but in principle one could construct a non-imported identity that overlaps with an imported identity for a different key (and we say as much at the end of Section 7). This is vanishingly rare to happen by chance, but makes the statement as written not quite categorically true. That said, the change to the binder key derivation (\u00a7 4.2) seems to obviate any potential consequences to such a collision (though the connection attempt would presumably fail). Endpoints which import external keys MUST NOT use either the external keys or the derived keys for any other purpose. 
Moreover, each IIUC, this is \"MUST NOT use the keys that are input to the import process for any purpose other than the importer, and MUST NOT use the derived keys for any purpose other than TLS PSKs\". Is it worth spelling it out explicitly like that? external PSK MUST be associated with at most one hash function, as per the rules in Section 4.2.11 from [RFC8446]. See Section 7 for more discussion. Does this requirement apply to the inputs to the importer process, the outputs, or both? Section 3.1 Imported PSK (IPSK): A PSK derived from an EPSK, External Identity, optional context string, target protocol, and target KDF. There is an \"External Identity\" that is contained within the EPSK itself; is the \"External Identity\" listed here distinct from that? Section 4.1 The PSK Importer interface takes as input an EPSK with External Identity \"externalidentity\" and base key \"epsk\", as defined in Section 3.1, along with an optional context, and transforms it into a set of PSKs and imported identities for use in a connection based on target protocols and KDFs. In particular, for each supported target protocol \"targetprotocol\" and KDF \"targetkdf\", the importer constructs an ImportedIdentity structure as follows: If I understand correctly the \"targetkdf\" is supposed to be the KDF associated with the cipher suite(s) the derived PSK will be usable for. This is something of a divergence from RFC 8446, where we assume that the KDF will be HKDF and associate only a hash function with the cipher suite. While discussing a more generic KDF concept seems reasonable (and is directly in line with the cryptographic principles that motivate this work), I'd recommend adding a few more words somewhere in this section to clarify the relationship between the targetkdf and the cipher suite(s), probably in the paragraph that starts \"[e]ndpoints SHOULD generate a compatible 'ipskx' for each target ciphersuite they offer\". 
struct { opaque externalidentity; opaque context; uint16 targetprotocol; uint16 targetkdf; } ImportedIdentity; Should we say that this is using the TLS presentation language? ImportedIdentity.context MUST include the context used to derive the EPSK, if any exists. For example, ImportedIdentity.context may (nit?) \"derive\" suggests a cryptographic key derivation process, but the rest of the prose suggests that this is more a process of \"identification\" or \"determination\". Is there a more appropriate word to use? Given an ImportedIdentity and corresponding EPSK with base key \"epsk\", an Imported PSK IPSK with base key \"ipskx\" is computed as follows: epskx = HKDF-Extract(0, epsk) ipskx = HKDF-Expand-Label(epskx, \"derived psk\", Hash(ImportedIdentity), L) I think we need to say that the HKDF-Expand-Label used (i.e., the URL prefix) is the one corresponding to ImportedIdentity.targetprotocol. L corresponds to the KDF output length of ImportedIdentity.targetkdf as defined in Section 9. For hash-based KDFs, such as HKDFSHA256(0x0001), this is the length of the hash function output, i.e., 32 octets. This is required for the IPSK to be of length (32 octets for SHA256, not all hash-based KDFs) The identity of \"ipskx\" as sent on the wire is ImportedIdentity, i.e., the serialized content of ImportedIdentity is used as the content of PskIdentity.identity in the PSK extension. The corresponding TLS 1.3 binder key is \"ipskx\". In RFC 8446, the \"binderkey\" is the output of the key schedule, but ipskx is an input to the key schedule. So I think we want to say something like \"the corresponding PSK input for the TLS 1.3 key schedule is 'ipskx'\". The hash function used for HKDF [RFC5869] is that which is associated with the EPSK. It is not the hash function associated with ImportedIdentity.targetkdf. If no hash function is specified, I predict that someone will mess up the hash function selection while implementing this. 
Perhaps we could provide test vectors for the full matrix of (EPSK Hash, targetkdf hash) pairs using SHA256/SHA384? SHA-256 [SHA2] MUST be used. Diversifying EPSK by I think we might be able to tolerate the protocol* having a strict requirement that the associated hash function is specified, and provide a recommended value that would be used by people deploying the protocol in the absence of other information. This might in some sense help future-proof the document for the scenario where SHA256 weakens or is broken and the default should change. KDFs, protocols, and context string(s) are known a priori. EPSKs MAY also be imported for early data use if they are bound to protocol settings and configurations that would otherwise be required for early data with normal (ticket-based PSK) resumption. Minimally, that means Application-Layer Protocol Negotiation [RFC7301], QUIC RFC 8446 does not limit 0-RTT data to resumption, so I think we just want \"if they are bound to the protocol settings and configuration that are required for sending early data\". Also, nit: I think we usually refer to the \"ALPN value\" instead of just \"ALPN\" being configured. Section 5 If a client or server wish to deprecate a hash function and no longer use it for TLS 1.3, they remove the corresponding KDF from the set of target KDFs used for importing keys. This does not affect the KDF operation used to derive Imported PSKs. Should we also include a paragraph about how to deprecate the Hash used in deriving Imported PSKs? (It's fixed per EPSK, so, \"configure a new EPSK\", basically.) Section 6 Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Thus, an EPSK may be used with the same KDF (and underlying HMAC hash algorithm) as TLS 1.3 with importers. However, critically, the derived PSK will not be the same since the importer differentiates the PSK via the identity and target KDF and protocol. 
Thus, PSKs imported for TLS 1.3 are distinct from those used in TLS 1.2, and thereby avoid cross-protocol collisions. [...] As I read this I recalled that up in Section 1 we say that applications \"SHOULD provision separate PSKs for TLS 1.3 and prior versions\". The scenario being described here sounds like it can only occur if that SHOULD is ignored, which might be worth reiterating. We also say in \u00a7 4.1 that \"external PSKs MUST NOT be imported for (D)TLS or prior versions\". In light of these two observations, my best interpretation of the quoted text is that it should be equivalent to: NEW: Recall that TLS 1.2 permits computing the TLS PRF with any hash algorithm and PSK. Accordingly, a given EPSK might be both used directly as a TLS 1.2 PSK and used with TLS 1.3 via the importer mechanism (noting that this configuration is not recommended, per Section 1), using the same KDF algorithm. However, because the TLS 1.3 importer mechanism includes an additional key-derivation step, the actual PSK used in the TLS 1.3 key schedule will be distinct from the one used in the TLS 1.2 key schedule, so there are no cross-protocol collisions possible. Note that this does not preclude endpoints from using non-imported PSKs for TLS 1.2. For some reason this line implies to me that endpoints might use imported PSKs for TLS 1.2, but this is forbidden by Section 4.1 (as quoted above). Perhaps \"does not preclude endpoints from continuing to use TLS 1.2 and the non-imported PSKs it requires\"? Section 7 We should probably note that any information placed in ImportedIdentity.context will be visible on the wire in cleartext, and thus that confidential identifiers from other protocols should either not be used or should be protected (e.g., by hashing) prior to use. Externally provisioned PSKs imported into a TLS connection achieve compound authentication of the provisioning process and connection. 
Context-free PSKs only achieve authentication within the context of a single connection. Do we need to use psk_dhe_ke to get the strong compound authentication property, or is it attainable with psk_ke as well? Section 10.1 I'm not sure why [QUIC] and RFC 5246 are listed as normative references. Appendix B The Selfie attack [Selfie] relies on a misuse of the PSK interface. The PSK interface makes the implicit assumption that each PSK is known only to one client and one server. If multiple clients or multiple servers with distinct roles share a PSK, TLS only authenticates the entire group. A node successfully authenticates its peer as being in the group whether the peer is another node or itself. I think it may be useful to call out that the \"multiple clients\" and \"multiple servers\" case can occur even when there are only two endpoints that know the key (that is, when they do not have predetermined client/server roles). struct { opaque client_mac; opaque server_mac; } Context; We may not need the whole 16k to hold a MAC address, especially since this is just an example. If an attacker then redirects a ClientHello intended for one node to a different node, the receiver will compute a different context string and the handshake will not complete. Similarly to the above, this could be \"a different node (including the node that generated the ClientHello)\".", "new_text": "systems and use cases, this identity may become a persistent tracking identifier. Note also that ImportedIdentity.context is visible in cleartext on the wire as part of the PSK identity. Unless otherwise protected by a mechanism such as TLS Encrypted ClientHello ECH, applications SHOULD NOT put sensitive information in this field. 9. This specification introduces a new registry for TLS KDF identifiers,"}
{"id": "q-en-draft-ietf-tls-iana-registry-updates-8a6c1e3e7e6f3b23ac5b2013c744c25d9cf84f9b776154af32c9a34329624ab3", "old_text": "track documents need to be marked as recommended. If an item is marked as not recommended it does not necessarily mean that it is flawed, rather, it indicates that either the item has not been through the IETF consensus process, has limited applicability, or is intended only for specific use cases.", "comments": "NAME PTAL\nLGTM - \u2018twas just a comment anyway. W I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf", "new_text": "track documents need to be marked as recommended. If an item is marked as not recommended it does not necessarily mean that it is flawed; rather, it indicates that either the item has not been through the IETF consensus process, has limited applicability, or is intended only for specific use cases."}
{"id": "q-en-draft-ietf-tls-iana-registry-updates-8a6c1e3e7e6f3b23ac5b2013c744c25d9cf84f9b776154af32c9a34329624ab3", "old_text": "Update the \"Reference\" to also refer to this document. Add the following notes: Experts are to verify that there is in fact a publicly available standard. An Internet Draft that is posted and never published or a standard in another standards body, industry consortium, university site, etc. suffices. As specified in RFC8126, assignments made in the Private Use space are not generally useful for broad interoperability. It is the responsibility of those making use of the Private Use range to ensure that no conflicts occur (within the intended scope of use). For widespread experiments, temporary reservations are available. See expert-pool for additional information about the designated expert pool.", "comments": "NAME PTAL\nLGTM - \u2018twas just a comment anyway. W I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf", "new_text": "Update the \"Reference\" to also refer to this document. See expert-pool for additional information about the designated expert pool."}
{"id": "q-en-draft-ietf-tls-iana-registry-updates-8a6c1e3e7e6f3b23ac5b2013c744c25d9cf84f9b776154af32c9a34329624ab3", "old_text": "extension with the value \"Yes\", a Standards Track document RFC8126 is REQUIRED. IESG Approval is REQUIRED for a Yes->No transition. The following is from I-D.ietf-tls-tls13 and is included here to ensure alignment between these specifications.", "comments": "NAME PTAL\nLGTM - \u2018twas just a comment anyway. W I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf", "new_text": "extension with the value \"Yes\", a Standards Track document RFC8126 is REQUIRED. IESG Approval is REQUIRED for a Yes->No transition. IANA [SHALL update/has added] the following notes: Experts are to verify that there is in fact a publicly available specification. An Internet Draft that is posted and never published or a specification in another standards body, industry consortium, university site, etc. suffices. As specified in RFC8126, assignments made in the Private Use space are not generally useful for broad interoperability. It is the responsibility of those making use of the Private Use range to ensure that no conflicts occur (within the intended scope of use). For widespread experiments, temporary reservations are available. Extensions marked as \"Yes\" are those allocated via Standards Track RFCs. Extensions marked as \"No\" are not. If an item is not marked as recommended it does not necessarily mean that it is flawed; rather, it indicates that either the item has not been through the IETF consensus process, has limited applicability, or is intended only for specific use cases. The following is from I-D.ietf-tls-tls13 and is included here to ensure alignment between these specifications."}
{"id": "q-en-draft-ietf-tls-iana-registry-updates-8a6c1e3e7e6f3b23ac5b2013c744c25d9cf84f9b776154af32c9a34329624ab3", "old_text": "represents a security trade-off that may not be appropriate for general environments. IANA [SHALL add/has added] the following notes for additional information:", "comments": "NAME PTAL\nLGTM - \u2018twas just a comment anyway. W I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf", "new_text": "represents a security trade-off that may not be appropriate for general environments. If an item is not marked as recommended it does not necessarily mean that it is flawed; rather, it indicates that either the item has not been through the IETF consensus process, has limited applicability, or is intended only for specific use cases. IANA [SHALL add/has added] the following notes for additional information:"}
{"id": "q-en-draft-ietf-tls-iana-registry-updates-8a6c1e3e7e6f3b23ac5b2013c744c25d9cf84f9b776154af32c9a34329624ab3", "old_text": "groups marked \"No\" range from \"good\" to \"bad\" from a cryptographic standpoint. The designated expert RFC8126 only ensures that the specification is publicly available. An Internet Draft that is posted and never published or a standard in another standards body, industry", "comments": "NAME PTAL\nLGTM - \u2018twas just a comment anyway. W I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf", "new_text": "groups marked \"No\" range from \"good\" to \"bad\" from a cryptographic standpoint. If an item is not marked as recommended it does not necessarily mean that it is flawed; rather, it indicates that either the item has not been through the IETF consensus process, has limited applicability, or is intended only for specific use cases. The designated expert RFC8126 only ensures that the specification is publicly available. An Internet Draft that is posted and never published or a standard in another standards body, industry"}
{"id": "q-en-draft-ietf-tls-iana-registry-updates-8a6c1e3e7e6f3b23ac5b2013c744c25d9cf84f9b776154af32c9a34329624ab3", "old_text": "allocated via Standards Track RFCs. ClientCertificateTypes marked as \"No\" are not. 12. To align with TLS implementations and to align the naming", "comments": "NAME PTAL\nLGTM - \u2018twas just a comment anyway. W I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf", "new_text": "allocated via Standards Track RFCs. ClientCertificateTypes marked as \"No\" are not. If an item is not marked as recommended it does not necessarily mean that it is flawed; rather, it indicates that either the item has not been through the IETF consensus process, has limited applicability, or is intended only for specific use cases. 12. To align with TLS implementations and to align the naming"}
{"id": "q-en-draft-ietf-tls-iana-registry-updates-8a6c1e3e7e6f3b23ac5b2013c744c25d9cf84f9b776154af32c9a34329624ab3", "old_text": "Exporters Labels marked as \"Yes\" are those allocated via Standards Track RFCs. Exporter Labels marked as \"No\" are not. IANA [SHALL update/has updated] the reference for this registry to also refer to this document.", "comments": "NAME PTAL\nLGTM - \u2018twas just a comment anyway. W I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf", "new_text": "Exporters Labels marked as \"Yes\" are those allocated via Standards Track RFCs. Exporter Labels marked as \"No\" are not. If an item is not marked as recommended it does not necessarily mean that it is flawed; rather, it indicates that either the item has not been through the IETF consensus process, has limited applicability, or is intended only for specific use cases. IANA [SHALL update/has updated] the reference for this registry to also refer to this document."}
{"id": "q-en-draft-ietf-tls-iana-registry-updates-8a6c1e3e7e6f3b23ac5b2013c744c25d9cf84f9b776154af32c9a34329624ab3", "old_text": "Certificate Types marked as \"Yes\" are those allocated via Standards Track RFCs. Certificate Types marked as \"No\" are not. IANA [SHALL update/has updated] the reference for this registry to also refer this document.", "comments": "NAME PTAL\nLGTM - \u2018twas just a comment anyway. W I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf", "new_text": "Certificate Types marked as \"Yes\" are those allocated via Standards Track RFCs. Certificate Types marked as \"No\" are not. If an item is not marked as recommended it does not necessarily mean that it is flawed; rather, it indicates that either the item has not been through the IETF consensus process, has limited applicability, or is intended only for specific use cases. IANA [SHALL update/has updated] the reference for this registry to also refer this document."}
{"id": "q-en-draft-ietf-tls-iana-registry-updates-b08ef7a9b8277ebffca23ad4ac823943fd3a8d339f5ad806d5157743b22c9514", "old_text": "SignatureAlgorithm registries to also refer to this document. [SHALL update/has updated] the TLS HashAlgorithm Registry to list values 7-223 as \"Reserved\" and the TLS SignatureAlgorithm registry to list values 4-223 as \"Reserved\". Despite the fact that the HashAlgorithm and SignatureAlgorithm registries are orphaned, it is still important to warn implementers", "comments": "4492bis includes some values we were marking as reserved and that's wrong. We also have 224-255 that we might also mark as reserved. Effectively, this closes these two registries.\nNAME PTAL\nNAME -- This looks good. I have a quick comment on: I presume this will or won't happen as the result of conversations in the WG? Also, I'll note -- for the (only) -- that only 224-253 are necessary to mark as reserved (or, really, since they were previously okay for private use, I think the term to use is \"deprecated\"), since 254 and 255 are in the TLS 1.3 \"reserved for private use\" range of 0xfe00-0xffff\nI'm planning to forward the PR to the WG after I heard from you. I'll add in \"deprecated\" for those other values before sending it along.", "new_text": "SignatureAlgorithm registries to also refer to this document. [SHALL update/has updated] the TLS HashAlgorithm Registry to list values 7 and 9-223 as \"Reserved\" and the TLS SignatureAlgorithm registry to list values 4-6 and 9-223 as \"Reserved\". Despite the fact that the HashAlgorithm and SignatureAlgorithm registries are orphaned, it is still important to warn implementers"}
{"id": "q-en-draft-ietf-tls-md5-sha1-deprecate-1c711116df666b628d7048bb25437a6af2a5fa2096ae12e5d08febedf450d6ab", "old_text": "4. Servers MUST NOT include MD5 and SHA-1 in ServerKeyExchange messages. If no other signature algorithms are available (for example, if the client does not send a signature_algorithms extension), the server MUST abort the handshake with a handshake_failure alert or select a different cipher suite. 5. Clients MUST NOT include MD5 and SHA-1 in CertificateVerify messages. If a server receives a CertificateVerify message with MD5 or SHA-1 it MUST abort the connection with handshake_failure or insufficient_security alert. 6.", "comments": "Addressing this . NAME NAME PTAL\nI am really growing to dislike these \"here is the alert you must use\" constructions. Are they necessary? A lot of this was born out of the deprecate TLS 1/1.1 RFC that spun this I-D off. I believe this is the thread back in mid-2019: URL spt", "new_text": "4. Servers MUST NOT include MD5 and SHA-1 in ServerKeyExchange messages. If the client receives a ServerKeyExchange message indicating MD5 or SHA-1, then it MUST abort the connection with an illegal_parameter alert. 5. Clients MUST NOT include MD5 and SHA-1 in CertificateVerify messages. If a server receives a CertificateVerify message with MD5 or SHA-1 it MUST abort the connection with an illegal_parameter alert. 6."}
{"id": "q-en-draft-ietf-tls-ticketrequest-afc5668ba2c4422fe8a9325923b4428ef9af919324c280577093d4032db2d08f", "old_text": "Connection racing: Happy Eyeballs V2 RFC8305 describes techniques for performing connection racing. The Transport Services Architecture implementation from I-D.ietf-taps-impl also describes how connections may race across interfaces and address families. In cases where clients have early data to send and want to minimize or avoid ticket re-use, unique tickets for each unique connection attempt are useful. Moreover, as some servers may implement single-use tickets (and even session ticket encryption keys), distinct tickets will be needed to prevent premature ticket invalidation by racing. Connection priming: In some systems, connections may be primed or", "comments": "Good catch -- will fix. Good suggestion! We can certainly add this. As the extension is merely a hint to servers when deciding how many tickets to vend, I think this is out of scope for the document. It's intended to be once per session, and we'll add that. Also a good catch. Like other extensions that are not affected by the possibly updated ClientHello, it must not change. We need to specify the server side behavior, too. That seems reasonable to me.\nThanks! Approved pending", "new_text": "Connection racing: Happy Eyeballs V2 RFC8305 describes techniques for performing connection racing. The Transport Services Architecture implementation from TAPS also describes how connections may race across interfaces and address families. In cases where clients have early data to send and want to minimize or avoid ticket re-use, unique tickets for each unique connection attempt are useful. Moreover, as some servers may implement single-use tickets (and even session ticket encryption keys), distinct tickets will be needed to prevent premature ticket invalidation by racing. Connection priming: In some systems, connections may be primed or"}
{"id": "q-en-draft-ietf-tls-ticketrequest-afc5668ba2c4422fe8a9325923b4428ef9af919324c280577093d4032db2d08f", "old_text": "e.g., public key cryptographic operations, avoiding waste is desirable. 3. Clients may indicate to servers their desired number of tickets via the following \"ticket_request\" extension: Clients may send this extension in ClientHello. It contains the following structure:", "comments": "Good catch -- will fix. Good suggestion! We can certainly add this. As the extension is merely a hint to servers when deciding how many tickets to vend, I think this is out of scope for the document. It's intended to be once per session, and we'll add that. Also a good catch. Like other extensions that are not affected by the possibly updated ClientHello, it must not change. We need to specify the server side behavior, too. That seems reasonable to me.\nThanks! Approved pending", "new_text": "e.g., public key cryptographic operations, avoiding waste is desirable. Decline resumption: Clients may indicate they have no intention of resuming connections by sending a ticket request with count of zero. 3. Clients may indicate to servers their desired number of tickets for a single connection via the following \"ticket_request\" extension: Clients may send this extension in ClientHello. It contains the following structure:"}
{"id": "q-en-draft-ietf-tls-ticketrequest-afc5668ba2c4422fe8a9325923b4428ef9af919324c280577093d4032db2d08f", "old_text": "send more than 255 tickets to clients. Servers that support ticket requests MUST NOT echo \"ticket_request\" in the EncryptedExtensions. 4.", "comments": "Good catch -- will fix. Good suggestion! We can certainly add this. As the extension is merely a hint to servers when deciding how many tickets to vend, I think this is out of scope for the document. It's intended to be once per session, and we'll add that. Also a good catch. Like other extensions that are not affected by the possibly updated ClientHello, it must not change. We need to specify the server side behavior, too. That seems reasonable to me.\nThanks! Approved pending", "new_text": "send more than 255 tickets to clients. Servers that support ticket requests MUST NOT echo \"ticket_request\" in the EncryptedExtensions message. A client MUST abort the connection with an \"illegal_parameter\" alert if the \"ticket_request\" extension is present in the EncryptedExtensions message. Clients MUST NOT change the value of TicketRequestContents.count in second ClientHello messages sent in response to a HelloRetryRequest. 4."}
{"id": "q-en-draft-ietf-webtrans-http3-dd221859a061b8220b75f9c4747aeea232225f0f0b281ad901169689a85cec1e", "old_text": "meaning that the server is not willing to receive any WebTransport sessions. 3.2. RFC8441 defines an extended CONNECT method in Section 4, enabled by", "comments": "Require SETTINGSH3DATAGRAM and maxdatagramframe_size TP\nInteresting thought. One way to do that replaces the entire section with this: This is certainly more concise, although it does also remove some of the explanations that help people understand the various layers involved. Or perhaps we should include those same additional details but just turn those paragraphs into bullets?\nWhile technically true, I thought we'd said that we wanted to require all the features all of the time, but agreed, let's take that to an editorial issue if needed.\nMy point is that the server could support both regular HTTP and WebTransport. If a client comes in without webtransport settings and only sends GETs, that's fine. If a client comes in with webtransport settings and makes a webtransport request, that's fine. If a client comes in without webtransport settings and makes a webtransport request, that's not fine. In that case the server needs to reject the webtransport request, not kill the connection.\nAh, that makes more sense, thank you\nPer discussion at IETF 114, we should explicitly require support for each layer of capabilities that are needed. There was also a discussion about whether or not we needed to require datagram support, since requiring capsules technically means that you could use a datagram capsule and implementations may exist that do not want to have datagram support at all. Documenting some of that discussion in replies to this issue, as well.\nSpecific to datagrams and capsules: WT supported means that capsule protocol is supported, thus you must always support capsules. 
Proposal: You need to have datagrams in order to do WT over HTTP/3 Adding additional signaling to negotiate datagram support introduces a lot of additional complexity and different configurations that may harm interoperability. Dropping a datagram is about as easy as doing any other error path HTTP Datagrams must already be supported in the receive direction for capsules (and you need capsules for WT) either way Intermediaries will need to be able to either translate for upstream or ensure that upstream can provide an equivalent feature set More generally: We want to explicitly call out dependencies/prerequisites so that we don't need to teach layers about other layers. For example, WT being supported doesn't mean that the layers that implement capsules, H3 datagrams, or QUIC datagrams should have any idea about the existence of WebTransport\nDiscussed at IETF 115, consensus in room to require all the relevant settings and transport parameters to be sent individually\nSo we need: HTTP Setting: SETTINGSH3DATAGRAM (0x33) with a value of 1 for HTTP datagrams QUIC TP: maxdatagramframesize with a value greater than 0 for QUIC datagrams Capsule protocol support is implicit in the upgrade token, as explicitly called out by RFC 9297 or we can require to be sent on every extended connect request as well. to be added to the existing HTTP Setting: SETTINGSENABLEWEBTRANSPORT = 1 HTTP Setting: SETTINGSWEBTRANSPORTMAXSESSIONS > 0 (although we may be removing SETTINGSENABLEWEBTRANSPORT)\nSGTM I think we can skip mentioning Capsule-Protocol to match CONNECT-UDP\nWould this be easier to process as a list rather than multiple separate requirements?The list text in that comment isn't correct - the server shouldn't close the connection because it's legal for the client to not support datagrams if it only sends GETs and POSTs. However, the text in this PR is correct. 
I propose we land the PR as-is and can make editorial changes later.", "new_text": "meaning that the server is not willing to receive any WebTransport sessions. Because WebTransport over HTTP/3 requires support for HTTP/3 datagrams and the Capsule Protocol, both the client and the server MUST indicate support for HTTP/3 datagrams by sending a SETTINGS_H3_DATAGRAM value set to 1 in their SETTINGS frame (see Section 2.1.1 of HTTP-DATAGRAM). WebTransport over HTTP/3 also requires support for QUIC datagrams. To indicate support, both the client and the server MUST send a max_datagram_frame_size transport parameter with a value greater than 0 (see Section 3 of QUIC-DATAGRAM). 3.2. RFC8441 defines an extended CONNECT method in Section 4, enabled by"}
{"id": "q-en-draft-ietf-webtrans-http3-908f23682713bf5bca46b25fe3b5c6a20d07db3a5ecafdfad9629d818dc9fa30", "old_text": "From the client's perspective, a WebTransport session is established when the client receives a 200 response. From the server's perspective, a session is established once it sends a 200 response. Both endpoints MUST NOT open any streams or send any datagrams on a given session before that session is established. WebTransport over HTTP/3 does not support 0-RTT. 3.4.", "comments": ".\ncc NAME NAME\nLooks great!\nFor a simple application that wants to send \"hello world\" over a WebTransport stream, it would take a minimum of 2.5 round trips for the data to arrive on the server. This is still an improvement over WebSockets but I'm wondering if we could do better. If the client is able to create streams prior to receiving the CONNECT response, then it would remove a round trip. The server would need to buffer any data received on streams associated with an unknown WebTransport session. If the server responds with a non-200 status code, then these streams are discarded. I believe 0-RTT is disabled for WebTransport because CONNECT is not idempotent and would be subject to replay attacks. An idempotent CONNECT would remove a round trip when 0-RTT is used... although I'm not sure this is a good idea.\nI think this is a reasonable optimization\nThe client requirement is odd in comparison to MASQUE / CONNECT-UDP which says \"client can risk sending datagrams before getting a 200 response, a different response means that datagrams will be discarded\". What is the motivation for the requirement in WebTransport? If you need some kind of ordering guarantees, then stream IDs provide that.\nDiscussed at IETF 110. There were no objections to buffering (with some limits), so I think we should go ahead and add it.\nAfter the client initiates the WebTransport session by sending a CONNECT request, the server sends a 200 response and is free to create streams. 
This means that a server-initiated stream may arrive prior to the CONNECT response. Is the client able to infer that the session was successfully established, or does the client need to buffer this stream until it receives a proper response?\nSince there are no ordering guarantees across streams on the receiver side, I'd say you have to buffer until the 200 is processed.\nI agree with Lucas here. We also had a similar problem in QuicTransport (server has to buffer streams before checking the origin), and we handled it by buffering.\nDiscussed at IETF 110. There were no objections to buffering (with some limits), so I think we should go ahead and add it.", "new_text": "From the client's perspective, a WebTransport session is established when the client receives a 200 response. From the server's perspective, a session is established once it sends a 200 response. WebTransport over HTTP/3 does not support 0-RTT. 3.4."}
{"id": "q-en-draft-ietf-webtrans-http3-908f23682713bf5bca46b25fe3b5c6a20d07db3a5ecafdfad9629d818dc9fa30", "old_text": "MTUs can vary. TODO: Describe how the path MTU can be computed, specifically propagation across HTTP proxies. 5. An WebTransport session over HTTP/3 is terminated when either", "comments": ".\ncc NAME NAME\nLooks great!\nFor a simple application that wants to send \"hello world\" over a WebTransport stream, it would take a minimum of 2.5 round trips for the data to arrive on the server. This is still an improvement over WebSockets but I'm wondering if we could do better. If the client is able to create streams prior to receiving the CONNECT response, then it would remove a round trip. The server would need to buffer any data received on streams associated with an unknown WebTransport session. If the server responds with a non-200 status code, then these streams are discarded. I believe 0-RTT is disabled for WebTransport because CONNECT is not idempotent and would be subject to replay attacks. An idempotent CONNECT would remove a round trip when 0-RTT is used... although I'm not sure this is a good idea.\nI think this is a reasonable optimization\nThe client requirement is odd in comparison to MASQUE / CONNECT-UDP which says \"client can risk sending datagrams before getting a 200 response, a different response means that datagrams will be discarded\". What is the motivation for the requirement in WebTransport? If you need some kind of ordering guarantees, then stream IDs provide that.\nDiscussed at IETF 110. There were no objections to buffering (with some limits), so I think we should go ahead and add it.\nAfter the client initiates the WebTransport session by sending a CONNECT request, the server sends a 200 response and is free to create streams. This means that a server-initiated stream may arrive prior to the CONNECT response. 
Is the client able to infer that the session was successfully established, or does the client need to buffer this stream until it receives a proper response?\nSince there are no ordering guarantees across streams on the receiver side, I'd say you have to buffer until the 200 is processed.\nI agree with Lucas here. We also had a similar problem in QuicTransport (server has to buffer streams before checking the origin), and we handled it by buffering.\nDiscussed at IETF 110. There were no objections to buffering (with some limits), so I think we should go ahead and add it.", "new_text": "MTUs can vary. TODO: Describe how the path MTU can be computed, specifically propagation across HTTP proxies. 4.5. In WebTransport over HTTP/3, the client MAY send its SETTINGS frame, as well as multiple WebTransport CONNECT requests, WebTransport data streams and WebTransport datagrams, all within a single flight. As those can arrive out of order, a WebTransport server could be put into a situation where it receives a stream or a datagram without a corresponding session. Similarly, a client may receive a server-initiated stream or a datagram before receiving the CONNECT response headers from the server. To handle this case, WebTransport endpoints SHOULD buffer streams and datagrams until those can be associated with an established session. To avoid resource exhaustion, the endpoints MUST limit the number of buffered streams and datagrams. When the number of buffered streams is exceeded, a stream SHALL be closed by sending a RESET_STREAM and/or STOP_SENDING with the "H3_WEBTRANSPORT_BUFFERED_STREAM_REJECTED" error code. When the number of buffered datagrams is exceeded, a datagram SHALL be dropped. It is up to an implementation to choose what stream or datagram to discard. 5. An WebTransport session over HTTP/3 is terminated when either"}
{"id": "q-en-draft-ietf-webtrans-http3-908f23682713bf5bca46b25fe3b5c6a20d07db3a5ecafdfad9629d818dc9fa30", "old_text": "This document Both ", "comments": ".\ncc NAME NAME\nLooks great!\nFor a simple application that wants to send \"hello world\" over a WebTransport stream, it would take a minimum of 2.5 round trips for the data to arrive on the server. This is still an improvement over WebSockets but I'm wondering if we could do better. If the client is able to create streams prior to receiving the CONNECT response, then it would remove a round trip. The server would need to buffer any data received on streams associated with an unknown WebTransport session. If the server responds with a non-200 status code, then these streams are discarded. I believe 0-RTT is disabled for WebTransport because CONNECT is not idempotent and would be subject to replay attacks. An idempotent CONNECT would remove a round trip when 0-RTT is used... although I'm not sure this is a good idea.\nI think this is a reasonable optimization\nThe client requirement is odd in comparison to MASQUE / CONNECT-UDP which says \"client can risk sending datagrams before getting a 200 response, a different response means that datagrams will be discarded\". What is the motivation for the requirement in WebTransport? If you need some kind of ordering guarantees, then stream IDs provide that.\nDiscussed at IETF 110. There were no objections to buffering (with some limits), so I think we should go ahead and add it.\nAfter the client initiates the WebTransport session by sending a CONNECT request, the server sends a 200 response and is free to create streams. This means that a server-initiated stream may arrive prior to the CONNECT response. 
Is the client able to infer that the session was successfully established, or does the client need to buffer this stream until it receives a proper response?\nSince there are no ordering guarantees across streams on the receiver side, I'd say you have to buffer until the 200 is processed.\nI agree with Lucas here. We also had a similar problem in QuicTransport (server has to buffer streams before checking the origin), and we handled it by buffering.\nDiscussed at IETF 110. There were no objections to buffering (with some limits), so I think we should go ahead and add it.", "new_text": "This document Both 7.5. The following entry is added to the \"HTTP/3 Error Code\" registry established by HTTP3: H3_WEBTRANSPORT_BUFFERED_STREAM_REJECTED 0x3994bd84 WebTransport data stream rejected due to lack of associated session. This document. "}
{"id": "q-en-draft-ietf-webtrans-http3-ec5602ae359a1700365d13672e64cac2a3be4cd7befbfcc0a39c330cb05efc12", "old_text": "encoded using the QUIC variable length integer scheme described in RFC9000. 4.1. Once established, both endpoints can open unidirectional streams.", "comments": "The H3 draft is light on error handling guidance. There are a number of cases for bad session IDs: The session ID is not a client initiated bidirectional stream ID The session ID refers to a stream that did not perform a WebTransport handshake The session ID refers to a stream that has been closed or reset Presumably there is no error when a DATAGRAM arrives with a bad session id -- you just drop it? For streams, perhaps HTTPIDERROR is appropriate? Issues and both refer to cases where you receive a session ID that has not been created yet. Best to keep the discussion there, but it's likely we want to allow clients/servers to buffer some of these at their own discretion rather than require an error.\nFor 1, I think it should be an error. For 2, we may want to clarify that the headers should be already known at that point, but it should be an error. For 3, I am not sure how to enforce this without running into potential race conditions, so we may want to leave those as-is. As for the datagrams, I think h3-datagram draft should handle this.\nThinking about 2 more, I think there's also a race condition in which a data stream could be received between the stream being open and headers being processed. I'll write up a PR for 1.\nI think for 2, I meant the ID refers to a stream where the headers were known but it was not a CONNECT stream.\nDiscussed at IETF 111. Sense of the room for MUST close for 1, MAY close in other scenarios.", "new_text": "encoded using the QUIC variable length integer scheme described in RFC9000. If at any point a session ID is received that cannot be a valid ID for a client-initiated bidirectional stream, the recipient MUST close the connection with an H3_ID_ERROR error code. 
4.1. Once established, both endpoints can open unidirectional streams."}
{"id": "q-en-draft-ietf-webtrans-http3-1e981353d379c6e919464fa8fb99ab45d03ecb81a743b6d2c5b1b25a888a9226", "old_text": "may be using a version of WebTransport extension that is different from the one used by the server. 3.2. RFC8441 defines an extended CONNECT method in Section 4, enabled by", "comments": "Capsule design team output. Minimal changes to HTTP/3, which was already using capsules and is now joined by . Same content as draft PR, thank you to those who have already reviewed!\nDoes this actually with a no-op, or are we supposed to discuss that at 115? Otherwise LGTM.\nIt as a no-op with the exception of one bullet point that we did want to address, that got split out into , which we'll discuss at 115 separately from this PR.\nShould the protocol share framing semantics with the HTTP/2 draft so that it is easy/easier to take datagrams or streams and put them on the CONNECT stream? Should it be possible to use that mechanism, or should it instead be forbidden? As much as possible, it would be desirable to have the forwarding logic be fairly uniform across different protocol versions. Right now, we have two almost completely different protocols.\nI was discussing this with NAME and NAME What happens when an H3 WT server receives \"version independent\" WT framing on its CONNECT stream. Since WT over H3 requires H3 datagrams, then there is no need to fold datagrams into the CONNECT stream. There shouldn't be a need to fold streams either, unless you want to defeat QUIC stream limits :) Allowing this would also increase work to implement an H3 only WT client/server: you have to re-implement QUIC over an HTTP byte-stream just in case the peer decides to send you \"version independent\" WT. Right now I think they are totally different and never the twain shall meet. Proxies can convert between the two versions. 
If we did that, should we have different upgrade tokens?\nIs this issue still relevant now that we have the outcome of HTTP Datagram Design Team?\nNAME gentle ping\nActually never mind, this is blocked on the capsule design team - the output of that design team will decide on whether the h2 capsules are allowed in h3 or not\nThe H3 draft by its nature allows for the parallel processing of multiple WT sessions and also the intermixing of normal H3 request with WT. Server endpoints may wish to negotiate or disable this behavior. Clients can simply choose not to do either kind of pooling if they don't want to. There's already a PR open () with a design idea.\nI'd like to propose a different model than the one in : The server sends the maximum WebTransport Session ID, or since 0 is a valid value, it's really the minimum invalid WebTransport Session ID. To indicate WT is not supported, server sends 0. To allow for only 1 session ever, server sends 4, and never updates it. To allow for only 1 session at a time, server sends 4, and updates it by 4 as each session closes/resets. To allow for N parallel sessions, server sends 4 * N. and updates as sessions close/reset. This mirrors H3 PUSH, and requires at least one more frame (MAXWTSESSIONID). Perhaps it would also require CANCELWT_SESSION? A client need only declare that it supports webtransport, and even that could be optional. 
The server will know if it receives a CONNECT with :protocol=webtransport.\nI wanted to have a negotiation mechanism in , but my conclusion after talking with NAME about it was that we should not do negotiation, and servers should just reject extra streams with a 400 (or maybe even a RESET_STREAM) if they ever see more sessions than they expect; meaning it's on the clients to know if they can pool or not.\nThough we may want a more specific 4XX error code so the client knows it should retry in a non-pooled connection\nThat's certainly a lot simpler than session credits ;) The cost of at least an RTT for a client that tries to pool against a server that can't handle it seems somewhat high. I guess it depends what the common client pooling strategies are and how many servers don't support pooling.\nNAME since it's looking like the JavaScript API will provide a way to disable pooling, I expect that the scenario of pooling-attempted-but-rejected will be quite rare, so a round trip cost should be acceptable\nI'm also considering use cases for WT which are not tied to the JS API.\nNAME that's interesting - and those use-cases won't have a similar mechanism to disable pooling? I suspect they'll need a way to get the WT server's hostname somehow, and having config properties attached to that (such as disable_pooling) could be useful?\nNAME : the opposite -- the clients may attempt pooling to servers that might or might not support it. You can't tell from a hostname if it supports pooling.\nNAME how did the client get the hostname in the first place? I'm suggesting to carry pooling_support next to that.\nActually, let's go back to JS for a second. The people who write JS at Facebook are very unlikely to know if the edge server terminating their connection supports pooling, or that configuration could change over time.\n(I appreciate the irony of me saying this as a Googler, but) should the JS people talk to the edge people? 
I agree that this shouldn't fail, but it being inefficient seems reasonable - these teams should talk if they're trying to reach good performance.\nI think my main problem with is that most people who want to disallow pooling want to exclude not only other WebTransport sessions, but also general HTTP traffic. was written with that assumption in mind.\nOk, MAXWTSESSION_ID doesn't handle that case, so it may be an over-engineered solution to the wrong problem.\nAs I said in the meeting, I don't think that we should have any signals for dedicated connections from either client or server. Yutaka made the point that the client can decide to make a connection dedicated after seeing the server settings. It doesn't need to signal its intent (and signaling would have negative privacy consequences potentially). I agree with this observation. A server needs to support HTTP if we're using HTTP. So pooling with h3 is already a settled matter. If the server doesn't want to answer requests, it can use 406 or 421 or other HTTP mechanisms to tell the client to go away. More generally, the server can just not provide links to that server, such that it might have to respond to GET/OPTIONS/POST/etc. The question then becomes managing resource limits on the server. Having a way to limit the number of WebTransport sessions is a good thing and I believe that will be sufficient for this. Thus, I'm a supporter of option 3: MAXWTSESSIONS or similar.\nI agree that we should have limits of how many WT can be active at the same time on a connection. It should also define what happens if the limit is exceeded. Should a client open another QUIC connection to the same server or should it fail? Probably it should be former, but that should also have limits defined (6 parallel QUIC connection, may be too much? 
HTTP/1.1 have the 6 parallel connection limit).\nChair: discussed at IETF 113, consensus in room to change the WT SETTING to carry the number of concurrent WT sessions allowed on this h3 connection\nNote that all of the below works when you swap \"server\" and \"client\". I wanted to focus on a poor Javascript client implementation, but it's equally possible to run into these cases with a poor server implementation. Let's say that you have a Http3Transport session that shares a connection with HTTP/3 or another Http3Transport session. The client has limited buffer space and hypothetically advertises MAXDATA of 1MB for the connection. If the server pushes 1MB of unprocessed data over a Http3Transport session, then it will prevent the server from sending data over the connection, including HTTP/3 responses and other Http3Transport sessions. This can occur if the application does not accept all remote-created streams or does not fully read every stream. This will be VERY easy for a developer to mess up. It can also occur if the application can not process data faster than it is received. In my case, the server creates unidirectional streams and pushes a large amount of media over them. The flow control limit would be hit if the Javascript application: Only accepts bidirectional streams (ex. initialization) Does not accept a unidirectional stream (ex. error or teardown) Does not fully read or abort every stream (ex. parsing error) Is not able to parse the media fast enough (ex. underpowered). This is mostly intended behavior, otherwise these \"bugs\" would cause unbounded memory usage. However, the blast radius is bigger when sharing a connection. In my use-case, hitting the flow control limit would also block any HTTP/3 RPC calls or unrelated traffic on the connection. This introduces more scenarios for the connection to become deadlocked or generally stalled. HTTP/3 implementations use MAXSTREAMS to limit the number of simultaneous requests. 
This use-case no longer works when sharing a connection with a Http3Transport session, and HTTP/3 implementations must instead return explicit error status codes when this threshold is reached. The same issues with MAXDATA also apply to MAXSTREAMS. A Javascript application that fails to accept or read/abort every stream will consume flow control resources. This can manifest itself in strange ways, such as HTTP/3 being unable to create control streams, or simply being unable to create new request streams.\nThanks for the salient overview, it makes it easier to understand your concerns. My first comment would be that some of these points apply to QuicTransport too. It's just that they're made worse by Http3Transport pooling allowing the mashing together of orthogonal protocols with no application-layer oversight. Just as you might interfere with a page load, a browser could easily interfere with your finely crafted application protocol. I would anticipate that, due to different browsers having different HTTP usage patterns, and different pages being composed differently by time, region, etc, one might find that Http3Transport apps subject to pooling would experience inconsistencies and transient issues. You rightly point out that the connection flow control is a shared resource. But the problems described seem to relate to stream flow control and overlook the different flow control setting afforded by the Transport Parameters. A possible mitigation is to more tightly constrain the initial flow windows of server-initiated streams so that they can't interfere with requests. However, I still think that the notion of pooling together WebTransports in an uncoordinated fashion is going to manifest undesirable properties. The MAX_STREAMS observation is a good one. 
I think the competition between request streams and client bidirectional streams is going to be hard to resolve.\nYeah, and enforcing any sort of per-session flow control might get difficult because you can only identify the protocol and session after you receive the first few bytes of each stream. For example, assume a brand new connection (for simplicity only) and you receive a STREAM frame with id=100, offset=16, and size=1000. You don't know yet which protocol or session created this stream or the 25 preceding it. And hopefully you eventually get those first few bytes for each stream...\nChair notes from IETF 112 meeting: this needs more discussion (on the issue or mailing list)\nChair: discussed at 113, consensus in room to: use a setting to limit number of WT sessions (see ) figure out MAXSTREAMS, a way to limit the number of streams within a session abandon MAXDATA as too hard, do not limit amount of data sent within a session (anyone can open a new issue to revisit this discussion if they have a concrete proposal on how to do this) let QUIC take care of MAXSTREAMDATA\nCapsule DT is doing all of the above bullet points except the second one, for which I opened . Apologies for the extremely delayed response here, but hearing no objections we are declaring consensus and merging the PRs. David David Schinazi wrote:", "new_text": "may be using a version of the WebTransport extension that is different from the one used by the server. In addition to the setting above, the server MUST send a SETTINGS_MAX_WEBTRANSPORT_SESSIONS parameter indicating the maximum number of concurrent sessions it is willing to receive. The default value for the SETTINGS_MAX_WEBTRANSPORT_SESSIONS parameter is \"0\", meaning that the server is not willing to receive any WebTransport sessions. 3.2. RFC8441 defines an extended CONNECT method in Section 4, enabled by"}
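The SETTINGS_MAX_WEBTRANSPORT_SESSIONS value described in the record above travels as an ordinary HTTP/3 SETTINGS entry: two QUIC variable-length integers (RFC 9000, Section 16), identifier followed by value. A minimal sketch of that wire shape; note the setting identifier used below is a placeholder, not the draft's assigned codepoint:

```python
def encode_varint(v: int) -> bytes:
    """Encode an integer as a QUIC variable-length integer (RFC 9000,
    Section 16): the two high bits of the first byte give the length."""
    if v < 0x40:
        return v.to_bytes(1, "big")
    if v < 0x4000:
        return (v | 0x4000).to_bytes(2, "big")
    if v < 0x4000_0000:
        return (v | 0x8000_0000).to_bytes(4, "big")
    if v < 0x4000_0000_0000_0000:
        return (v | 0xC000_0000_0000_0000).to_bytes(8, "big")
    raise ValueError("value too large for a QUIC varint")

# Hypothetical identifier -- the real codepoint is whatever the draft's
# IANA Considerations section assigns, which is not assumed here.
SETTINGS_MAX_WEBTRANSPORT_SESSIONS = 0x2B60

def encode_setting(identifier: int, value: int) -> bytes:
    """An HTTP/3 SETTINGS entry is simply varint(id) || varint(value)."""
    return encode_varint(identifier) + encode_varint(value)

# A server willing to accept up to 8 concurrent WebTransport sessions
# would include this entry in its SETTINGS frame; omitting it means the
# default of 0, i.e. no sessions accepted.
entry = encode_setting(SETTINGS_MAX_WEBTRANSPORT_SESSIONS, 8)
```

Because the default is 0, a client can simply wait for the server's SETTINGS frame and refrain from sending extended CONNECT requests until a nonzero limit arrives.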
{"id": "q-en-draft-ietf-webtrans-http3-1e981353d379c6e919464fa8fb99ab45d03ecb81a743b6d2c5b1b25a888a9226", "old_text": "3.4. From the flow control perspective, WebTransport sessions count against the stream flow control just like regular HTTP requests, since they are established via an HTTP CONNECT request. This document does not make any effort to introduce a separate flow control mechanism for sessions, nor to separate HTTP requests from WebTransport data streams. If the server needs to limit the rate of incoming requests, it has alternative mechanisms at its disposal: \"HTTP_REQUEST_REJECTED\" error code defined in HTTP3 indicates to the receiving HTTP/3 stack that the request was not processed in any way. HTTP status code 429 indicates that the request was rejected due to rate limiting RFC6585. Unlike the previous method, this signal", "comments": "Capsule design team output. Minimal changes to HTTP/3, which was already using capsules and is now joined by . Same content as draft PR, thank you to those who have already reviewed!\nDoes this actually with a no-op, or are we supposed to discuss that at 115? Otherwise LGTM.\nIt as a no-op with the exception of one bullet point that we did want to address, that got split out into , which we'll discuss at 115 separately from this PR.\nShould the protocol share framing semantics with the HTTP/2 draft so that it is easy/easier to take datagrams or streams and put them on the CONNECT stream? Should it be possible to use that mechanism, or should it instead be forbidden? As much as possible, it would be desirable to have the forwarding logic be fairly uniform across different protocol versions. Right now, we have two almost completely different protocols.\nI was discussing this with NAME and NAME What happens when an H3 WT server receives \"version independent\" WT framing on its CONNECT stream. Since WT over H3 requires H3 datagrams, then there is no need to fold datagrams into the CONNECT stream. 
There shouldn't be a need to fold streams either, unless you want to defeat QUIC stream limits :) Allowing this would also increase work to implement an H3 only WT client/server: you have to re-implement QUIC over an HTTP byte-stream just in case the peer decides to send you \"version independent\" WT. Right now I think they are totally different and never the twain shall meet. Proxies can convert between the two versions. If we did that, should we have different upgrade tokens?\nIs this issue still relevant now that we have the outcome of HTTP Datagram Design Team?\nNAME gentle ping\nActually never mind, this is blocked on the capsule design team - the output of that design team will decide on whether the h2 capsules are allowed in h3 or not\nThe H3 draft by its nature allows for the parallel processing of multiple WT sessions and also the intermixing of normal H3 request with WT. Server endpoints may wish to negotiate or disable this behavior. Clients can simply choose not to do either kind of pooling if they don't want to. There's already a PR open () with a design idea.\nI'd like to propose a different model than the one in : The server sends the maximum WebTransport Session ID, or since 0 is a valid value, it's really the minimum invalid WebTransport Session ID. To indicate WT is not supported, server sends 0. To allow for only 1 session ever, server sends 4, and never updates it. To allow for only 1 session at a time, server sends 4, and updates it by 4 as each session closes/resets. To allow for N parallel sessions, server sends 4 * N. and updates as sessions close/reset. This mirrors H3 PUSH, and requires at least one more frame (MAXWTSESSIONID). Perhaps it would also require CANCELWT_SESSION? A client need only declare that it supports webtransport, and even that could be optional. 
The server will know if it receives a CONNECT with :protocol=webtransport.\nI wanted to have a negotiation mechanism in , but my conclusion after talking with NAME about it was that we should not do negotiation, and servers should just reject extra streams with a 400 (or maybe even a RESET_STREAM) if they ever see more sessions than they expect; meaning it's on the clients to know if they can pool or not.\nThough we may want a more specific 4XX error code so the client knows it should retry in a non-pooled connection\nThat's certainly a lot simpler than session credits ;) The cost of at least an RTT for a client that tries to pool against a server that can't handle it seems somewhat high. I guess it depends what the common client pooling strategies are and how many servers don't support pooling.\nNAME since it's looking like the JavaScript API will provide a way to disable pooling, I expect that the scenario of pooling-attempted-but-rejected will be quite rare, so a round trip cost should be acceptable\nI'm also considering use cases for WT which are not tied to the JS API.\nNAME that's interesting - and those use-cases won't have a similar mechanism to disable pooling? I suspect they'll need a way to get the WT server's hostname somehow, and having config properties attached to that (such as disable_pooling) could be useful?\nNAME : the opposite -- the clients may attempt pooling to servers that might or might not support it. You can't tell from a hostname if it supports pooling.\nNAME how did the client get the hostname in the first place? I'm suggesting to carry pooling_support next to that.\nActually, let's go back to JS for a second. The people who write JS at Facebook are very unlikely to know if the edge server terminating their connection supports pooling, or that configuration could change over time.\n(I appreciate the irony of me saying this as a Googler, but) should the JS people talk to the edge people? 
I agree that this shouldn't fail, but it being inefficient seems reasonable - these teams should talk if they're trying to reach good performance.\nI think my main problem with is that most people who want to disallow pooling want to exclude not only other WebTransport sessions, but also general HTTP traffic. was written with that assumption in mind.\nOk, MAXWTSESSION_ID doesn't handle that case, so it may be an over-engineered solution to the wrong problem.\nAs I said in the meeting, I don't think that we should have any signals for dedicated connections from either client or server. Yutaka made the point that the client can decide to make a connection dedicated after seeing the server settings. It doesn't need to signal its intent (and signaling would have negative privacy consequences potentially). I agree with this observation. A server needs to support HTTP if we're using HTTP. So pooling with h3 is already a settled matter. If the server doesn't want to answer requests, it can use 406 or 421 or other HTTP mechanisms to tell the client to go away. More generally, the server can just not provide links to that server, such that it might have to respond to GET/OPTIONS/POST/etc. The question then becomes managing resource limits on the server. Having a way to limit the number of WebTransport sessions is a good thing and I believe that will be sufficient for this. Thus, I'm a supporter of option 3: MAXWTSESSIONS or similar.\nI agree that we should have limits of how many WT can be active at the same time on a connection. It should also define what happens if the limit is exceeded. Should a client open another QUIC connection to the same server or should it fail? Probably it should be former, but that should also have limits defined (6 parallel QUIC connection, may be too much? 
HTTP/1.1 have the 6 parallel connection limit).\nChair: discussed at IETF 113, consensus in room to change the WT SETTING to carry the number of concurrent WT sessions allowed on this h3 connection\nNote that all of the below works when you swap \"server\" and \"client\". I wanted to focus on a poor Javascript client implementation, but it's equally possible to run into these cases with a poor server implementation. Let's say that you have a Http3Transport session that shares a connection with HTTP/3 or another Http3Transport session. The client has limited buffer space and hypothetically advertises MAXDATA of 1MB for the connection. If the server pushes 1MB of unprocessed data over a Http3Transport session, then it will prevent the server from sending data over the connection, including HTTP/3 responses and other Http3Transport sessions. This can occur if the application does not accept all remote-created streams or does not fully read every stream. This will be VERY easy for a developer to mess up. It can also occur if the application can not process data faster than it is received. In my case, the server creates unidirectional streams and pushes a large amount of media over them. The flow control limit would be hit if the Javascript application: Only accepts bidirectional streams (ex. initialization) Does not accept a unidirectional stream (ex. error or teardown) Does not fully read or abort every stream (ex. parsing error) Is not able to parse the media fast enough (ex. underpowered). This is mostly intended behavior, otherwise these \"bugs\" would cause unbounded memory usage. However, the blast radius is bigger when sharing a connection. In my use-case, hitting the flow control limit would also block any HTTP/3 RPC calls or unrelated traffic on the connection. This introduces more scenarios for the connection to become deadlocked or generally stalled. HTTP/3 implementations use MAXSTREAMS to limit the number of simultaneous requests. 
This use-case no longer works when sharing a connection with a Http3Transport session, and HTTP/3 implementations must instead return explicit error status codes when this threshold is reached. The same issues with MAXDATA also apply to MAXSTREAMS. A Javascript application that fails to accept or read/abort every stream will consume flow control resources. This can manifest itself in strange ways, such as HTTP/3 being unable to create control streams, or simply being unable to create new request streams.\nThanks for the salient overview, it makes it easier to understand your concerns. My first comment would be that some of these points apply to QuicTransport too. It's just that they're made worse by Http3Transport pooling allowing the mashing together of orthogonal protocols with no application-layer oversight. Just as you might interfere with a page load, a browser could easily interfere with your finely crafted application protocol. I would anticipate that, due to different browsers having different HTTP usage patterns, and different pages being composed differently by time, region, etc, one might find that Http3Transport apps subject to pooling would experience inconsistencies and transient issues. You rightly point out that the connection flow control is a shared resource. But the problems described seem to relate to stream flow control and overlook the different flow control settings afforded by the Transport Parameters. A possible mitigation is to more tightly constrain the initial flow windows of server-initiated streams so that they can't interfere with requests. However, I still think that the notion of pooling together WebTransports in an uncoordinated fashion is going to manifest undesirable properties. The MAX_STREAMS observation is a good one. 
I think the competition between request streams and client bidirectional streams is going to be hard to resolve.\nYeah, and enforcing any sort of per-session flow control might get difficult because you can only identify the protocol and session after you receive the first few bytes of each stream. For example, assume a brand new connection (for simplicity only) and you receive a STREAM frame with id=100, offset=16, and size=1000. You don't know yet which protocol or session created this stream or the 25 preceding it. And hopefully you eventually get those first few bytes for each stream...\nChair notes from IETF 112 meeting: this needs more discussion (on the issue or mailing list)\nChair: discussed at 113, consensus in room to: use a setting to limit number of WT sessions (see ) figure out MAXSTREAMS, a way to limit the number of streams within a session abandon MAXDATA as too hard, do not limit amount of data sent within a session (anyone can open a new issue to revisit this discussion if they have a concrete proposal on how to do this) let QUIC take care of MAXSTREAMDATA\nCapsule DT is doing all of the above bullet points except the second one, for which I opened . Apologies for the extremely delayed response here, but hearing no objections we are declaring consensus and merging the PRs. David David Schinazi wrote:", "new_text": "3.4. This document defines a SETTINGS_MAX_WEBTRANSPORT_SESSIONS parameter that allows the server to limit the maximum number of concurrent WebTransport sessions on a single HTTP/3 connection. The client MUST NOT open more sessions than indicated in the server SETTINGS parameters. 
The server MUST NOT close the connection if the client opens sessions exceeding this limit, as the client and the server do not have a consistent view of how many sessions are open due to the asynchronous nature of the protocol; instead, it MUST reset all of the CONNECT streams it is not willing to process with the \"HTTP_REQUEST_REJECTED\" status defined in HTTP3. Just like other HTTP requests, WebTransport sessions, and data sent on those sessions, are counted against flow control limits. This document does not introduce additional mechanisms for endpoints to limit the relative amount of flow control credit consumed by different WebTransport sessions; however, servers that wish to limit the rate of incoming requests on any particular session have alternative mechanisms: The \"HTTP_REQUEST_REJECTED\" error code defined in HTTP3 indicates to the receiving HTTP/3 stack that the request was not processed in any way. HTTP status code 429 indicates that the request was rejected due to rate limiting RFC6585. Unlike the previous method, this signal"}
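The enforcement rule in the record above (reset excess CONNECT streams rather than close the connection) can be sketched as a small server-side gate. This is an illustrative sketch only: the class and method names are invented, and a real server would wire this into its HTTP/3 stack's stream handling. The H3_REQUEST_REJECTED codepoint 0x010B comes from RFC 9114.

```python
# Error code from RFC 9114 (HTTP/3); resetting the CONNECT stream with
# it tells the client the request was not processed in any way.
H3_REQUEST_REJECTED = 0x010B

class WebTransportSessionGate:
    """Tracks open WebTransport sessions against an advertised limit.

    Hypothetical helper: names here are invented for illustration.
    """

    def __init__(self, max_sessions: int):
        self.max_sessions = max_sessions
        self.open_sessions: set[int] = set()

    def on_connect(self, stream_id: int):
        """Decide what to do with a new extended CONNECT request whose
        :protocol is "webtransport". Returns ("accept", None) or
        ("reset", error_code); the connection itself is never closed."""
        if len(self.open_sessions) >= self.max_sessions:
            return ("reset", H3_REQUEST_REJECTED)
        self.open_sessions.add(stream_id)
        return ("accept", None)

    def on_close(self, stream_id: int):
        """Free the slot when the session's CONNECT stream ends, so a
        later session can take its place under the same limit."""
        self.open_sessions.discard(stream_id)
```

Counting on the server side, rather than closing the connection on overflow, tolerates the race where the client opens a new session before learning that an old one has been torn down.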
{"id": "q-en-draft-ietf-webtrans-http3-1e981353d379c6e919464fa8fb99ab45d03ecb81a743b6d2c5b1b25a888a9226", "old_text": "8.2. The following entry is added to the \"HTTP/3 Settings\" registry established by HTTP3: The \"SETTINGS_ENABLE_WEBTRANSPORT\" parameter indicates that the", "comments": "Capsule design team output. Minimal changes to HTTP/3, which was already using capsules and is now joined by . Same content as draft PR, thank you to those who have already reviewed!\nDoes this actually with a no-op, or are we supposed to discuss that at 115? Otherwise LGTM.\nIt as a no-op with the exception of one bullet point that we did want to address, that got split out into , which we'll discuss at 115 separately from this PR.\nShould the protocol share framing semantics with the HTTP/2 draft so that it is easy/easier to take datagrams or streams and put them on the CONNECT stream? Should it be possible to use that mechanism, or should it instead be forbidden? As much as possible, it would be desirable to have the forwarding logic be fairly uniform across different protocol versions. Right now, we have two almost completely different protocols.\nI was discussing this with NAME and NAME What happens when an H3 WT server receives \"version independent\" WT framing on its CONNECT stream. Since WT over H3 requires H3 datagrams, then there is no need to fold datagrams into the CONNECT stream. There shouldn't be a need to fold streams either, unless you want to defeat QUIC stream limits :) Allowing this would also increase work to implement an H3 only WT client/server: you have to re-implement QUIC over an HTTP byte-stream just in case the peer decides to send you \"version independent\" WT. Right now I think they are totally different and never the twain shall meet. Proxies can convert between the two versions. 
If we did that, should we have different upgrade tokens?\nIs this issue still relevant now that we have the outcome of HTTP Datagram Design Team?\nNAME gentle ping\nActually never mind, this is blocked on the capsule design team - the output of that design team will decide on whether the h2 capsules are allowed in h3 or not\nThe H3 draft by its nature allows for the parallel processing of multiple WT sessions and also the intermixing of normal H3 request with WT. Server endpoints may wish to negotiate or disable this behavior. Clients can simply choose not to do either kind of pooling if they don't want to. There's already a PR open () with a design idea.\nI'd like to propose a different model than the one in : The server sends the maximum WebTransport Session ID, or since 0 is a valid value, it's really the minimum invalid WebTransport Session ID. To indicate WT is not supported, server sends 0. To allow for only 1 session ever, server sends 4, and never updates it. To allow for only 1 session at a time, server sends 4, and updates it by 4 as each session closes/resets. To allow for N parallel sessions, server sends 4 * N. and updates as sessions close/reset. This mirrors H3 PUSH, and requires at least one more frame (MAXWTSESSIONID). Perhaps it would also require CANCELWT_SESSION? A client need only declare that it supports webtransport, and even that could be optional. 
The server will know if it receives a CONNECT with :protocol=webtransport.\nI wanted to have a negotiation mechanism in , but my conclusion after talking with NAME about it was that we should not do negotiation, and servers should just reject extra streams with a 400 (or maybe even a RESET_STREAM) if they ever see more sessions than they expect; meaning it's on the clients to know if they can pool or not.\nThough we may want a more specific 4XX error code so the client knows it should retry in a non-pooled connection\nThat's certainly a lot simpler than session credits ;) The cost of at least an RTT for a client that tries to pool against a server that can't handle it seems somewhat high. I guess it depends what the common client pooling strategies are and how many servers don't support pooling.\nNAME since it's looking like the JavaScript API will provide a way to disable pooling, I expect that the scenario of pooling-attempted-but-rejected will be quite rare, so a round trip cost should be acceptable\nI'm also considering use cases for WT which are not tied to the JS API.\nNAME that's interesting - and those use-cases won't have a similar mechanism to disable pooling? I suspect they'll need a way to get the WT server's hostname somehow, and having config properties attached to that (such as disable_pooling) could be useful?\nNAME : the opposite -- the clients may attempt pooling to servers that might or might not support it. You can't tell from a hostname if it supports pooling.\nNAME how did the client get the hostname in the first place? I'm suggesting to carry pooling_support next to that.\nActually, let's go back to JS for a second. The people who write JS at Facebook are very unlikely to know if the edge server terminating their connection supports pooling, or that configuration could change over time.\n(I appreciate the irony of me saying this as a Googler, but) should the JS people talk to the edge people? 
I agree that this shouldn't fail, but it being inefficient seems reasonable - these teams should talk if they're trying to reach good performance.\nI think my main problem with is that most people who want to disallow pooling want to exclude not only other WebTransport sessions, but also general HTTP traffic. was written with that assumption in mind.\nOk, MAXWTSESSION_ID doesn't handle that case, so it may be an over-engineered solution to the wrong problem.\nAs I said in the meeting, I don't think that we should have any signals for dedicated connections from either client or server. Yutaka made the point that the client can decide to make a connection dedicated after seeing the server settings. It doesn't need to signal its intent (and signaling would have negative privacy consequences potentially). I agree with this observation. A server needs to support HTTP if we're using HTTP. So pooling with h3 is already a settled matter. If the server doesn't want to answer requests, it can use 406 or 421 or other HTTP mechanisms to tell the client to go away. More generally, the server can just not provide links to that server, such that it might have to respond to GET/OPTIONS/POST/etc. The question then becomes managing resource limits on the server. Having a way to limit the number of WebTransport sessions is a good thing and I believe that will be sufficient for this. Thus, I'm a supporter of option 3: MAXWTSESSIONS or similar.\nI agree that we should have limits of how many WT can be active at the same time on a connection. It should also define what happens if the limit is exceeded. Should a client open another QUIC connection to the same server or should it fail? Probably it should be former, but that should also have limits defined (6 parallel QUIC connection, may be too much? 
HTTP/1.1 have the 6 parallel connection limit).\nChair: discussed at IETF 113, consensus in room to change the WT SETTING to carry the number of concurrent WT sessions allowed on this h3 connection\nNote that all of the below works when you swap \"server\" and \"client\". I wanted to focus on a poor Javascript client implementation, but it's equally possible to run into these cases with a poor server implementation. Let's say that you have a Http3Transport session that shares a connection with HTTP/3 or another Http3Transport session. The client has limited buffer space and hypothetically advertises MAXDATA of 1MB for the connection. If the server pushes 1MB of unprocessed data over a Http3Transport session, then it will prevent the server from sending data over the connection, including HTTP/3 responses and other Http3Transport sessions. This can occur if the application does not accept all remote-created streams or does not fully read every stream. This will be VERY easy for a developer to mess up. It can also occur if the application can not process data faster than it is received. In my case, the server creates unidirectional streams and pushes a large amount of media over them. The flow control limit would be hit if the Javascript application: Only accepts bidirectional streams (ex. initialization) Does not accept a unidirectional stream (ex. error or teardown) Does not fully read or abort every stream (ex. parsing error) Is not able to parse the media fast enough (ex. underpowered). This is mostly intended behavior, otherwise these \"bugs\" would cause unbounded memory usage. However, the blast radius is bigger when sharing a connection. In my use-case, hitting the flow control limit would also block any HTTP/3 RPC calls or unrelated traffic on the connection. This introduces more scenarios for the connection to become deadlocked or generally stalled. HTTP/3 implementations use MAXSTREAMS to limit the number of simultaneous requests. 
This use-case no longer works when sharing a connection with a Http3Transport session, and HTTP/3 implementations must instead return explicit error status codes when this threshold is reached. The same issues with MAXDATA also apply to MAXSTREAMS. A Javascript application that fails to accept or read/abort every stream will consume flow control resources. This can manifest itself in strange ways, such as HTTP/3 being unable to create control streams, or simply being unable to create new request streams.\nThanks for the salient overview, it makes it easier to understand your concerns. My first comment would be that some of these points apply to QuicTransport too. It's just that they're made worse by Http3Transport pooling allowing the mashing together of orthogonal protocols with no application-layer oversight. Just as you might interfere with a page load, a browser could easily interfere with your finely crafted application protocol. I would anticipate that, due to different browsers having different HTTP usage patterns, and different pages being composed differently by time, region, etc, one might find that Http3Transport apps subject to pooling would experience inconsistencies and transient issues. You rightly point out that the connection flow control is a shared resource. But the problems described seem to relate to stream flow control and overlook the different flow control settings afforded by the Transport Parameters. A possible mitigation is to more tightly constrain the initial flow windows of server-initiated streams so that they can't interfere with requests. However, I still think that the notion of pooling together WebTransports in an uncoordinated fashion is going to manifest undesirable properties. The MAX_STREAMS observation is a good one. 
I think the competition between request streams and client bidirectional streams is going to be hard to resolve.\nYeah, and enforcing any sort of per-session flow control might get difficult because you can only identify the protocol and session after you receive the first few bytes of each stream. For example, assume a brand new connection (for simplicity only) and you receive a STREAM frame with id=100, offset=16, and size=1000. You don't know yet which protocol or session created this stream or the 25 preceding it. And hopefully you eventually get those first few bytes for each stream...\nChair notes from IETF 112 meeting: this needs more discussion (on the issue or mailing list)\nChair: discussed at 113, consensus in room to: use a setting to limit number of WT sessions (see ) figure out MAXSTREAMS, a way to limit the number of streams within a session abandon MAXDATA as too hard, do not limit amount of data sent within a session (anyone can open a new issue to revisit this discussion if they have a concrete proposal on how to do this) let QUIC take care of MAXSTREAMDATA\nCapsule DT is doing all of the above bullet points except the second one, for which I opened . Apologies for the extremely delayed response here, but hearing no objections we are declaring consensus and merging the PRs. David David Schinazi wrote:", "new_text": "8.2. The following entries are added to the \"HTTP/3 Settings\" registry established by HTTP3: The \"SETTINGS_ENABLE_WEBTRANSPORT\" parameter indicates that the"}
{"id": "q-en-draft-ietf-webtrans-http3-1e981353d379c6e919464fa8fb99ab45d03ecb81a743b6d2c5b1b25a888a9226", "old_text": "This document 8.3. The following entry is added to the \"HTTP/3 Frame Type\" registry", "comments": "Capsule design team output. Minimal changes to HTTP/3, which was already using capsules and is now joined by . Same content as draft PR, thank you to those who have already reviewed!\nDoes this actually with a no-op, or are we supposed to discuss that at 115? Otherwise LGTM.\nIt as a no-op with the exception of one bullet point that we did want to address, that got split out into , which we'll discuss at 115 separately from this PR.\nShould the protocol share framing semantics with the HTTP/2 draft so that it is easy/easier to take datagrams or streams and put them on the CONNECT stream? Should it be possible to use that mechanism, or should it instead be forbidden? As much as possible, it would be desirable to have the forwarding logic be fairly uniform across different protocol versions. Right now, we have two almost completely different protocols.\nI was discussing this with NAME and NAME What happens when an H3 WT server receives \"version independent\" WT framing on its CONNECT stream. Since WT over H3 requires H3 datagrams, then there is no need to fold datagrams into the CONNECT stream. There shouldn't be a need to fold streams either, unless you want to defeat QUIC stream limits :) Allowing this would also increase work to implement an H3 only WT client/server: you have to re-implement QUIC over an HTTP byte-stream just in case the peer decides to send you \"version independent\" WT. Right now I think they are totally different and never the twain shall meet. Proxies can convert between the two versions. 
If we did that, should we have different upgrade tokens?\nIs this issue still relevant now that we have the outcome of HTTP Datagram Design Team?\nNAME gentle ping\nActually never mind, this is blocked on the capsule design team - the output of that design team will decide on whether the h2 capsules are allowed in h3 or not\nThe H3 draft by its nature allows for the parallel processing of multiple WT sessions and also the intermixing of normal H3 request with WT. Server endpoints may wish to negotiate or disable this behavior. Clients can simply choose not to do either kind of pooling if they don't want to. There's already a PR open () with a design idea.\nI'd like to propose a different model than the one in : The server sends the maximum WebTransport Session ID, or since 0 is a valid value, it's really the minimum invalid WebTransport Session ID. To indicate WT is not supported, server sends 0. To allow for only 1 session ever, server sends 4, and never updates it. To allow for only 1 session at a time, server sends 4, and updates it by 4 as each session closes/resets. To allow for N parallel sessions, server sends 4 * N. and updates as sessions close/reset. This mirrors H3 PUSH, and requires at least one more frame (MAXWTSESSIONID). Perhaps it would also require CANCELWT_SESSION? A client need only declare that it supports webtransport, and even that could be optional. 
The server will know if it receives a CONNECT with :protocol=webtransport.\nI wanted to have a negotiation mechanism in , but my conclusion after talking with NAME about it was that we should not do negotiation, and servers should just reject extra streams with a 400 (or maybe even a RESET_STREAM) if they ever see more sessions than they expect; meaning it's on the clients to know if they can pool or not.\nThough we may want a more specific 4XX error code so the client knows it should retry in a non-pooled connection\nThat's certainly a lot simpler than session credits ;) The cost of at least an RTT for a client that tries to pool against a server that can't handle it seems somewhat high. I guess it depends what the common client pooling strategies are and how many servers don't support pooling.\nNAME since it's looking like the JavaScript API will provide a way to disable pooling, I expect that the scenario of pooling-attempted-but-rejected will be quite rare, so a round trip cost should be acceptable\nI'm also considering use cases for WT which are not tied to the JS API.\nNAME that's interesting - and those use-cases won't have a similar mechanism to disable pooling? I suspect they'll need a way to get the WT server's hostname somehow, and having config properties attached to that (such as disable_pooling) could be useful?\nNAME : the opposite -- the clients may attempt pooling to servers that might or might not support it. You can't tell from a hostname if it supports pooling.\nNAME how did the client get the hostname in the first place? I'm suggesting to carry pooling_support next to that.\nActually, let's go back to JS for a second. The people who write JS at Facebook are very unlikely to know if the edge server terminating their connection supports pooling, or that configuration could change over time.\n(I appreciate the irony of me saying this as a Googler, but) should the JS people talk to the edge people? 
I agree that this shouldn't fail, but it being inefficient seems reasonable - these teams should talk if they're trying to reach good performance.\nI think my main problem with is that most people who want to disallow pooling want to exclude not only other WebTransport sessions, but also general HTTP traffic. was written with that assumption in mind.\nOk, MAXWTSESSION_ID doesn't handle that case, so it may be an over-engineered solution to the wrong problem.\nAs I said in the meeting, I don't think that we should have any signals for dedicated connections from either client or server. Yutaka made the point that the client can decide to make a connection dedicated after seeing the server settings. It doesn't need to signal its intent (and signaling would have negative privacy consequences potentially). I agree with this observation. A server needs to support HTTP if we're using HTTP. So pooling with h3 is already a settled matter. If the server doesn't want to answer requests, it can use 406 or 421 or other HTTP mechanisms to tell the client to go away. More generally, the server can just not provide links to that server, such that it might have to respond to GET/OPTIONS/POST/etc. The question then becomes managing resource limits on the server. Having a way to limit the number of WebTransport sessions is a good thing and I believe that will be sufficient for this. Thus, I'm a supporter of option 3: MAXWTSESSIONS or similar.\nI agree that we should have limits of how many WT can be active at the same time on a connection. It should also define what happens if the limit is exceeded. Should a client open another QUIC connection to the same server or should it fail? Probably it should be former, but that should also have limits defined (6 parallel QUIC connection, may be too much? 
HTTP/1.1 has the 6 parallel connection limit).\nChair: discussed at IETF 113, consensus in room to change the WT SETTING to carry the number of concurrent WT sessions allowed on this h3 connection\nNote that all of the below works when you swap \"server\" and \"client\". I wanted to focus on a poor Javascript client implementation, but it's equally possible to run into these cases with a poor server implementation. Let's say that you have a Http3Transport session that shares a connection with HTTP/3 or another Http3Transport session. The client has limited buffer space and hypothetically advertises MAXDATA of 1MB for the connection. If the server pushes 1MB of unprocessed data over a Http3Transport session, then it will prevent the server from sending data over the connection, including HTTP/3 responses and other Http3Transport sessions. This can occur if the application does not accept all remote-created streams or does not fully read every stream. This will be VERY easy for a developer to mess up. It can also occur if the application cannot process data faster than it is received. In my case, the server creates unidirectional streams and pushes a large amount of media over them. The flow control limit would be hit if the Javascript application: Only accepts bidirectional streams (ex. initialization) Does not accept a unidirectional stream (ex. error or teardown) Does not fully read or abort every stream (ex. parsing error) Is not able to parse the media fast enough (ex. underpowered). This is mostly intended behavior, otherwise these \"bugs\" would cause unbounded memory usage. However, the blast radius is bigger when sharing a connection. In my use-case, hitting the flow control limit would also block any HTTP/3 RPC calls or unrelated traffic on the connection. This introduces more scenarios for the connection to become deadlocked or generally stalled. HTTP/3 implementations use MAXSTREAMS to limit the number of simultaneous requests. 
This use-case no longer works when sharing a connection with a Http3Transport session, and HTTP/3 implementations must instead return explicit error status codes when this threshold is reached. The same issues with MAXDATA also apply to MAXSTREAMS. A Javascript application that fails to accept or read/abort every stream will consume flow control resources. This can manifest itself in strange ways, such as HTTP/3 being unable to create control streams, or simply being unable to create new request streams.\nThanks for the salient overview, it makes it easier to understand your concerns. My first comment would be that some of these points apply to QuicTransport too. It's just that they're made worse by Http3Transport pooling allowing the mashing together of orthogonal protocols with no application-layer oversight. Just as you might interfere with a page load, a browser could easily interfere with your finely crafted application protocol. I would anticipate that, due to different browsers having different HTTP usage patterns, and different pages being composed differently by time, region, etc, one might find that Http3Transport apps subject to pooling would experience inconsistencies and transient issues. You rightly point out that the connection flow control is a shared resource. But the problems described seem to relate to stream flow control and overlook the different flow control settings afforded by the Transport Parameters. A possible mitigation is to more tightly constrain the initial flow windows of server-initiated streams so that they can't interfere with requests. However, I still think that the notion of pooling together WebTransports in an uncoordinated fashion is going to manifest undesirable properties. The MAX_STREAMS observation is a good one. 
I think the competition between request streams and client bidirectional streams is going to be hard to resolve.\nYeah, and enforcing any per sort of per-session flow control might get difficult because you can only identify the protocol and session after you receive the first few bytes of each stream. For example, assume a brand new connection (for simplicity only) and you receive a STREAM frame with id=100, offset=16, and size=1000. You don't know yet which protocol or session created this stream or the 25 preceding it. And hopefully you eventually get those first few bytes for each stream...\nChair notes from IETF 112 meeting: this needs more discussion (on the issue or mailing list)\nChair: discussed at 113, consensus in room to: use a setting to limit number of WT sessions (see ) figure out MAXSTREAMS, a way to limit the number of streams within a session abandon MAXDATA as too hard, do not limit amount of data sent within a session (anyone can open a new issue to revisit this discussion if they have a concrete proposal on how to do this) let QUIC take care of MAXSTREAMDATA\nCapsule DT is doing all of the above bullet points except the second one, for which I opened . Apologies for the extremely delayed response here, but hearing no objections we are declaring consensus and merging the PRs. David David Schinazi wrote:", "new_text": "This document The \"SETTINGS_WEBTRANSPORT_MAX_SESSIONS\" parameter indicates that the specified HTTP/3 server is WebTransport-capable and the number of concurrent sessions it is willing to receive. WEBTRANSPORT_MAX_SESSIONS 0x2b603743 0 This document 8.3. The following entry is added to the \"HTTP/3 Frame Type\" registry"}
{"id": "q-en-draft-ietf-webtrans-http3-1e981353d379c6e919464fa8fb99ab45d03ecb81a743b6d2c5b1b25a888a9226", "old_text": "WebTransport application error codes. This document. ", "comments": "Capsule design team output. Minimal changes to HTTP/3, which was already using capsules and is now joined by . Same content as draft PR, thank you to those who have already reviewed!\nDoes this actually with a no-op, or are we supposed to discuss that at 115? Otherwise LGTM.\nIt as a no-op with the exception of one bullet point that we did want to address, that got split out into , which we'll discuss at 115 separately from this PR.\nShould the protocol share framing semantics with the HTTP/2 draft so that it is easy/easier to take datagrams or streams and put them on the CONNECT stream? Should it be possible to use that mechanism, or should it instead be forbidden? As much as possible, it would be desirable to have the forwarding logic be fairly uniform across different protocol versions. Right now, we have two almost completely different protocols.\nI was discussing this with NAME and NAME What happens when an H3 WT server receives \"version independent\" WT framing on its CONNECT stream. Since WT over H3 requires H3 datagrams, then there is no need to fold datagrams into the CONNECT stream. There shouldn't be a need to fold streams either, unless you want to defeat QUIC stream limits :) Allowing this would also increase work to implement an H3 only WT client/server: you have to re-implement QUIC over an HTTP byte-stream just in case the peer decides to send you \"version independent\" WT. Right now I think they are totally different and never the twain shall meet. Proxies can convert between the two versions. 
If we did that, should we have different upgrade tokens?\nIs this issue still relevant now that we have the outcome of HTTP Datagram Design Team?\nNAME gentle ping\nActually never mind, this is blocked on the capsule design team - the output of that design team will decide on whether the h2 capsules are allowed in h3 or not\nThe H3 draft by its nature allows for the parallel processing of multiple WT sessions and also the intermixing of normal H3 request with WT. Server endpoints may wish to negotiate or disable this behavior. Clients can simply choose not to do either kind of pooling if they don't want to. There's already a PR open () with a design idea.\nI'd like to propose a different model than the one in : The server sends the maximum WebTransport Session ID, or since 0 is a valid value, it's really the minimum invalid WebTransport Session ID. To indicate WT is not supported, server sends 0. To allow for only 1 session ever, server sends 4, and never updates it. To allow for only 1 session at a time, server sends 4, and updates it by 4 as each session closes/resets. To allow for N parallel sessions, server sends 4 * N. and updates as sessions close/reset. This mirrors H3 PUSH, and requires at least one more frame (MAXWTSESSIONID). Perhaps it would also require CANCELWT_SESSION? A client need only declare that it supports webtransport, and even that could be optional. 
The server will know if it receives a CONNECT with :protocol=webtransport.\nI wanted to have a negotiation mechanism in , but my conclusion after talking with NAME about it was that we should not do negotiation, and servers should just reject extra streams with a 400 (or maybe even a RESET_STREAM) if they ever see more sessions than they expect; meaning it's on the clients to know if they can pool or not.\nThough we may want a more specific 4XX error code so the client knows it should retry in a non-pooled connection\nThat's certainly a lot simpler than session credits ;) The cost of at least an RTT for a client that tries to pool against a server that can't handle it seems somewhat high. I guess it depends what the common client pooling strategies are and how many servers don't support pooling.\nNAME since it's looking like the JavaScript API will provide a way to disable pooling, I expect that the scenario of pooling-attempted-but-rejected will be quite rare, so a round trip cost should be acceptable\nI'm also considering use cases for WT which are not tied to the JS API.\nNAME that's interesting - and those use-cases won't have a similar mechanism to disable pooling? I suspect they'll need a way to get the WT server's hostname somehow, and having config properties attached to that (such as disable_pooling) could be useful?\nNAME : the opposite -- the clients may attempt pooling to servers that might or might not support it. You can't tell from a hostname if it supports pooling.\nNAME how did the client get the hostname in the first place? I'm suggesting to carry pooling_support next to that.\nActually, let's go back to JS for a second. The people who write JS at Facebook are very unlikely to know if the edge server terminating their connection supports pooling, or that configuration could change over time.\n(I appreciate the irony of me saying this as a Googler, but) should the JS people talk to the edge people? 
I agree that this shouldn't fail, but it being inefficient seems reasonable - these teams should talk if they're trying to reach good performance.\nI think my main problem with is that most people who want to disallow pooling want to exclude not only other WebTransport sessions, but also general HTTP traffic. was written with that assumption in mind.\nOk, MAXWTSESSION_ID doesn't handle that case, so it may be an over-engineered solution to the wrong problem.\nAs I said in the meeting, I don't think that we should have any signals for dedicated connections from either client or server. Yutaka made the point that the client can decide to make a connection dedicated after seeing the server settings. It doesn't need to signal its intent (and signaling would have negative privacy consequences potentially). I agree with this observation. A server needs to support HTTP if we're using HTTP. So pooling with h3 is already a settled matter. If the server doesn't want to answer requests, it can use 406 or 421 or other HTTP mechanisms to tell the client to go away. More generally, the server can just not provide links to that server, such that it might have to respond to GET/OPTIONS/POST/etc. The question then becomes managing resource limits on the server. Having a way to limit the number of WebTransport sessions is a good thing and I believe that will be sufficient for this. Thus, I'm a supporter of option 3: MAXWTSESSIONS or similar.\nI agree that we should have limits of how many WT can be active at the same time on a connection. It should also define what happens if the limit is exceeded. Should a client open another QUIC connection to the same server or should it fail? Probably it should be former, but that should also have limits defined (6 parallel QUIC connection, may be too much? 
HTTP/1.1 has the 6 parallel connection limit).\nChair: discussed at IETF 113, consensus in room to change the WT SETTING to carry the number of concurrent WT sessions allowed on this h3 connection\nNote that all of the below works when you swap \"server\" and \"client\". I wanted to focus on a poor Javascript client implementation, but it's equally possible to run into these cases with a poor server implementation. Let's say that you have a Http3Transport session that shares a connection with HTTP/3 or another Http3Transport session. The client has limited buffer space and hypothetically advertises MAXDATA of 1MB for the connection. If the server pushes 1MB of unprocessed data over a Http3Transport session, then it will prevent the server from sending data over the connection, including HTTP/3 responses and other Http3Transport sessions. This can occur if the application does not accept all remote-created streams or does not fully read every stream. This will be VERY easy for a developer to mess up. It can also occur if the application cannot process data faster than it is received. In my case, the server creates unidirectional streams and pushes a large amount of media over them. The flow control limit would be hit if the Javascript application: Only accepts bidirectional streams (ex. initialization) Does not accept a unidirectional stream (ex. error or teardown) Does not fully read or abort every stream (ex. parsing error) Is not able to parse the media fast enough (ex. underpowered). This is mostly intended behavior, otherwise these \"bugs\" would cause unbounded memory usage. However, the blast radius is bigger when sharing a connection. In my use-case, hitting the flow control limit would also block any HTTP/3 RPC calls or unrelated traffic on the connection. This introduces more scenarios for the connection to become deadlocked or generally stalled. HTTP/3 implementations use MAXSTREAMS to limit the number of simultaneous requests. 
This use-case no longer works when sharing a connection with a Http3Transport session, and HTTP/3 implementations must instead return explicit error status codes when this threshold is reached. The same issues with MAXDATA also apply to MAXSTREAMS. A Javascript application that fails to accept or read/abort every stream will consume flow control resources. This can manifest itself in strange ways, such as HTTP/3 being unable to create control streams, or simply being unable to create new request streams.\nThanks for the salient overview, it makes it easier to understand your concerns. My first comment would be that some of these points apply to QuicTransport too. It's just that they're made worse by Http3Transport pooling allowing the mashing together of orthogonal protocols with no application-layer oversight. Just as you might interfere with a page load, a browser could easily interfere with your finely crafted application protocol. I would anticipate that, due to different browsers having different HTTP usage patterns, and different pages being composed differently by time, region, etc, one might find that Http3Transport apps subject to pooling would experience inconsistencies and transient issues. You rightly point out that the connection flow control is a shared resource. But the problems described seem to relate to stream flow control and overlook the different flow control settings afforded by the Transport Parameters. A possible mitigation is to more tightly constrain the initial flow windows of server-initiated streams so that they can't interfere with requests. However, I still think that the notion of pooling together WebTransports in an uncoordinated fashion is going to manifest undesirable properties. The MAX_STREAMS observation is a good one. 
I think the competition between request streams and client bidirectional streams is going to be hard to resolve.\nYeah, and enforcing any per sort of per-session flow control might get difficult because you can only identify the protocol and session after you receive the first few bytes of each stream. For example, assume a brand new connection (for simplicity only) and you receive a STREAM frame with id=100, offset=16, and size=1000. You don't know yet which protocol or session created this stream or the 25 preceding it. And hopefully you eventually get those first few bytes for each stream...\nChair notes from IETF 112 meeting: this needs more discussion (on the issue or mailing list)\nChair: discussed at 113, consensus in room to: use a setting to limit number of WT sessions (see ) figure out MAXSTREAMS, a way to limit the number of streams within a session abandon MAXDATA as too hard, do not limit amount of data sent within a session (anyone can open a new issue to revisit this discussion if they have a concrete proposal on how to do this) let QUIC take care of MAXSTREAMDATA\nCapsule DT is doing all of the above bullet points except the second one, for which I opened . Apologies for the extremely delayed response here, but hearing no objections we are declaring consensus and merging the PRs. David David Schinazi wrote:", "new_text": "WebTransport application error codes. This document. 8.6. The following entries are added to the \"HTTP Capsule Types\" registry established by HTTP-DATAGRAM: The \"CLOSE_WEBTRANSPORT_SESSION\" capsule. 0x2843 CLOSE_WEBTRANSPORT_SESSION permanent This document IETF WebTransport Working Group webtransport@ietf.org [1] None 9. References 9.1. URIs [1] mailto:webtransport@ietf.org "}
{"id": "q-en-draft-ietf-webtrans-http3-e2099773314d4322d1850918ffc28556e0e0b84ffa12ef41f2287d02b68587f9", "old_text": "set to \"webtransport\", the HTTP/3 server can check if it has a WebTransport server associated with the specified \":authority\" and \":path\" values. If it does not, it SHOULD reply with status code 404 (Section 15.5.4, HTTP). If it does, it MAY accept the session by replying with a 2xx series status code, as defined in Section 15.3 of HTTP. The WebTransport server MUST verify the \"Origin\" header to ensure that the specified origin is allowed to access the server in question. From the client's perspective, a WebTransport session is established when the client receives a 2xx response. From the server's", "comments": "In URL, it was suggested that 403 status code should be used when Origin verification fails.\nThank you for suggestions! Applied.\nThank you for reviews! If this looks good can someone merge this?\nLooking good, thank you! Added a few suggestions to cause it to generate links to the actual sections referenced, even if it makes it a little more repetitive.Approved modulo one small change", "new_text": "set to \"webtransport\", the HTTP/3 server can check if it has a WebTransport server associated with the specified \":authority\" and \":path\" values. If it does not, it SHOULD reply with status code 404 (Section 15.5.5 of HTTP). When the request contains the \"Origin\" header, the WebTransport server MUST verify the \"Origin\" header to ensure that the specified origin is allowed to access the server in question. If the verification fails, the WebTransport server SHOULD reply with status code 403 (Section 15.5.4 of HTTP). If all checks pass, the WebTransport server MAY accept the session by replying with a 2xx series status code, as defined in Section 15.3 of HTTP. From the client's perspective, a WebTransport session is established when the client receives a 2xx response. From the server's"}
{"id": "q-en-draft-irtf-nwcrg-network-coding-satellites-0636abb5fa80dd0cf640ed1a5bec6dc2de9f266ebd57e22e81fe7886a9530a10", "old_text": "the radio resource has been in the core design of SATellite COMmunication (SATCOM) systems. The trade-off often resided in how much redundancy a system adds to cope from link impairments, without reducing the good-put when the channel quality is high. There is usually enough redundancy to guarantee a Quasi-Error Free transmission. However, physical layer reliability mechanisms may not recover transmission losses (e.g. with a mobile user) and layer 2 (or above) re-transmissions induce 500 ms one-way delay with a geostationary satellite. Further exploiting coding schemes at higher OSI-layers is an opportunity for releasing constraints on the physical layer in such cases and improving the performance of SATCOM systems. We have noticed an active research activity on coding and SATCOM in the past. That being said, not much has actually made it to", "comments": "Answer to issue\nA first sentence says quasi-error-free is usually guarranteed thanks to redundancy, while the following sentence tends to say the opposite. Not very clear.The intro fails to clarify the targets. Among them there is \"coding to recover from packet losses\" (to avoid confusion with bit-error corrections), but also \"coding to improve the usage of radio resource\". You should also clarify better what is meant by coding: it's not a content coding (e.g., to compress a video flow) but rather a computation of linear combination of packets. Say since the begining you'll rely on RFC8406 whenever possible. Parts of the text currently in section 3 (1st paragraph) deserve to be moved here.Typo: s/to cope from/to cope with/\nWe have updated the introduction structure and changed the usage of coding in the whole document. We hope this is better. cf PR and PR\nthis is cherrypicking, not discussing link/channel FEC (Error) or ACK. 
the heading should really be 'of network coding schemes'. \"to cover cases where where\"? At the physical layer, FEC mechanisms can be exploited.-- Erasure or Error? \"Based on public information, coding does not seem to be widely used at higher layers.\" it isn't, and that section can be reduced to just that sentence.\nVincent in item X1, 1st sentence, it suggests QUIC is a video streaming application. Change it.Fig. 2 raises a general question: it refers to upper applications that are by nature on the end-hosts. Everything is imaginable here and this is not specific to a SATCOM system. Why don't you limit yourself to the communication layers + middleware? I'm a bit confused.When you say: \"Based on public information, coding does not seem to be widely used at higher layers.\" I guess you limit yourself to the \"packetization UDP/IP + middleware\" but do not consider upper applications. It's also confusing.\nWe think that these comments have been assessed with PR\n\"However, physical layer reliability mechanisms may not recover transmission losses (e.g. with a mobile user) and layer 2 (or above) re-transmissions induce 500 ms one-way delay with a geostationary satellite\" Look, forward error coding (FEC) and link ACKs are complementary on a sliding scale depending on delay/channel quality, and this is talking solely about retransmissions. In fact, link or channel FEC do recover a large number of transmission bit/symbol losses without adding extra delay.\nWe have clarified this point in PR", "new_text": "the radio resource has been in the core design of SATellite COMmunication (SATCOM) systems. The trade-off often resided in how much redundancy a system adds to cope from link impairments, without reducing the good-put when the channel quality is good. There is usually enough redundancy to guarantee a Quasi-Error Free transmission. The recovery time depends on the encoding block size. 
Considering for instance geostationary satellite system (GEO), physical or link layers erasure coding mechanisms recover transmission losses within a negligible delay compared to link delay. However, when retransmissions are triggered, this leads to an non- negligible additional delay in particular over GEO link. Further exploiting coding schemes at application or transport layers is an opportunity for releasing constraints on the physical layer and improving the performance of SATCOM systems. We have noticed an active research activity on coding and SATCOM in the past. That being said, not much has actually made it to"}
{"id": "q-en-draft-irtf-nwcrg-network-coding-satellites-a894879bf8b0a849a61e8d198790422011b63bc15f5da987264f82fc1e7b9135", "old_text": "A research challenge would be the optimization of the NFV service function chaining, considering a virtualized infrastructure and other SATCOM specific functions, to guarantee an efficient radio usage and easy-to-deploy SATCOM services. 5.3.", "comments": "I have no idea what this section means. Virtualised antennas? virtualised reflectors? Good luck with that. chin-nfvrg-cloud-5g-core-structure-yang referring to an expired -00 internet-draft does not add credibility.\nMJM : But on the virtualization comment: we had presentations in the group on NFV (network virtualization) with NC for satellites. I am sure that the authors of this work could add to Lloyd\u2019s comments on that section.\nWe have added a research challenge and changed the title of section in PR . The section aims at identifying research challenge related to the interaction between NC and virtualization (function chaining, buffer management, etc).", "new_text": "A research challenge would be the optimization of the NFV service function chaining, considering a virtualized infrastructure and other SATCOM specific functions, to guarantee an efficient radio usage and easy-to-deploy SATCOM services. Moreover, another challenge related to a virtualized SATCOM terminals is the management of limited buffered equipments. 5.3."}
{"id": "q-en-draft-irtf-nwcrg-network-coding-satellites-3bea717e3bc5664b5cf153fe428e9eb588f1ab7aff7a8b19108852a229987c3a", "old_text": "5. This document discuses some opportunities to introduce these techniques at a wider scale in satellite telecommunications systems. Even though this document focuses on satellite systems, it is worth", "comments": "First sentence, the use of \"these techniques\" is ambiguous. Which techniques?\nThanks. Updated in PR", "new_text": "5. This document discuses some opportunities to introduce coding techniques at a wider scale in satellite telecommunications systems. Even though this document focuses on satellite systems, it is worth"}
{"id": "q-en-draft-webtransport-http2-835a26889e93c3238f9c2ecf91f4479174f9b2b9a599cec166ed9aafca39054a", "old_text": "An endpoint uses a frame (type=0x04) to abruptly terminate the sending part of a stream. After sending a , an endpoint ceases transmission and retransmission of frames on the identified stream. A receiver of can discard any data that it already received on that stream. The frame defines the following fields:", "comments": "Reset stream doesn't cease retransmission.\nretransmission of WTSTREAM frames on the identified stream. A receiver of WTRESET_STREAM can discard any data that it already received on that stream.\" This is using TCP and should not talk about ceasing retransmissions.\nVery good point! Will clean up", "new_text": "An endpoint uses a frame (type=0x04) to abruptly terminate the sending part of a stream. After sending a , an endpoint ceases transmission of frames on the identified stream. A receiver of can discard any data that it already received on that stream. The frame defines the following fields:"}
{"id": "q-en-dtls-conn-id-857a88eac5f9380a92b4fd2b6bbc55fd8c1a627f10a35a68547da021b78c04b0", "old_text": "during the lifetime of an ongoing DTLS session then the receiver will be unable to locate the correct security context. 1. The Datagram Transport Layer Security (DTLS) RFC6347 protocol was", "comments": "As suggested by \u00c9ric Vyncke, mention that the new ciphertext record format provides ContentType encryption and per-record padding in both abstract and introduction.\nURL Update from \u00c9ric re: Section 6\nAbout \"-- Section 3 --, DTLS / zero-length CID\" \"An extension type MUST NOT appear in the ServerHello unless the same extension type appeared in the corresponding ClientHello.\" The \"zero-length CID\" therefore is mainly used for the very common cases, where only the clients are required to send such a CID, but not the server. Maybe adding a hint to RFC5246 7.4.1.4 will help?\nAbout \"-- Section 6 --\" Yes, that's still very complex: First, processing the record seems to be easier (less restricted), than use a changed source address for sending data back. Using a changed source address for sending data back to, is not generally solved. There are some special case, e.g., if the data sent back is very small, there is no risk (at least, none has explained one). There may also be some upper-layer mechanisms, which helps out (e.g. for coap). And there is a idea of a general solution , but that will require more time. Because of the required time, the idea was to release this spec without RRC and postpone that general work (see also issue ).\nalso in the abstract ? Some initial attempt at this in\nLGTM \u00c9ric Vyncke has entered the following ballot position for draft-ietf-tls-dtls-connection-id-11: No Objection When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) 
Please refer to URL for more information about DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: Thank you for the work put into this document. This specification addresses the IPv4-mainly issue of NAT binding and is still required. I am also trusted the security ADs for section 5. Please find below some non-blocking COMMENT points (but replies would be appreciated), and some nits. I hope that this helps to improve the document, Regards, -\u00e9ric == COMMENTS == -- Abstract -- As an important part of this document is the padding, should it be mentioned also in the abstract ? -- Section 3 -- While I am not a DTLS expert, I find this section quite difficult to understand the reasoning behind the specification as little explanations are given about, e.g, what is the motivation of \"A zero-length value indicates that the server will send with the client's CID but does not wish the client to include a CID.\" -- Section 6 -- I am puzzled by the text: \"There is a strategy for ensuring that the new peer address is able to receive and process DTLS records. No such strategy is defined in this specification.\" Does this mean that there is no way to update the peer IP address ? == NITS == -- Section 1 -- Please expand CID on first use outside of the abstract. -- Section 4 -- Suggest to add a short paragraph as a preamble to figure 3. Currently, it looks like figure 3 belongs to the 'zeros' field description.", "new_text": "during the lifetime of an ongoing DTLS session then the receiver will be unable to locate the correct security context. The new ciphertext record format with CID also provides content type encryption and record-layer padding. 1. The Datagram Transport Layer Security (DTLS) RFC6347 protocol was"}
{"id": "q-en-dtls-conn-id-857a88eac5f9380a92b4fd2b6bbc55fd8c1a627f10a35a68547da021b78c04b0", "old_text": "(CID) to the DTLS record layer. The presence of the CID is negotiated via a DTLS extension. 2. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\",", "comments": "As suggested by \u00c9ric Vyncke, mention that the new ciphertext record format provides ContentType encryption and per-record padding in both abstract and introduction.\nURL Update from \u00c9ric re: Section 6\nAbout \"-- Section 3 --, DTLS / zero-length CID\" \"An extension type MUST NOT appear in the ServerHello unless the same extension type appeared in the corresponding ClientHello.\" The \"zero-length CID\" therefore is mainly used for the very common cases, where only the clients are required to send such a CID, but not the server. Maybe adding a hint to RFC5246 7.4.1.4 will help?\nAbout \"-- Section 6 --\" Yes, that's still very complex: First, processing the record seems to be easier (less restricted), than use a changed source address for sending data back. Using a changed source address for sending data back to, is not generally solved. There are some special case, e.g., if the data sent back is very small, there is no risk (at least, none has explained one). There may also be some upper-layer mechanisms, which helps out (e.g. for coap). And there is a idea of a general solution , but that will require more time. Because of the required time, the idea was to release this spec without RRC and postpone that general work (see also issue ).\nalso in the abstract ? Some initial attempt at this in\nLGTM \u00c9ric Vyncke has entered the following ballot position for draft-ietf-tls-dtls-connection-id-11: No Objection When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about DISCUSS and COMMENT positions. 
The document, along with other ballot positions, can be found here: URL COMMENT: Thank you for the work put into this document. This specification addresses the IPv4-mainly issue of NAT binding and is still required. I am also trusted the security ADs for section 5. Please find below some non-blocking COMMENT points (but replies would be appreciated), and some nits. I hope that this helps to improve the document, Regards, -\u00e9ric == COMMENTS == -- Abstract -- As an important part of this document is the padding, should it be mentioned also in the abstract ? -- Section 3 -- While I am not a DTLS expert, I find this section quite difficult to understand the reasoning behind the specification as little explanations are given about, e.g, what is the motivation of \"A zero-length value indicates that the server will send with the client's CID but does not wish the client to include a CID.\" -- Section 6 -- I am puzzled by the text: \"There is a strategy for ensuring that the new peer address is able to receive and process DTLS records. No such strategy is defined in this specification.\" Does this mean that there is no way to update the peer IP address ? == NITS == -- Section 1 -- Please expand CID on first use outside of the abstract. -- Section 4 -- Suggest to add a short paragraph as a preamble to figure 3. Currently, it looks like figure 3 belongs to the 'zeros' field description.", "new_text": "(CID) to the DTLS record layer. The presence of the CID is negotiated via a DTLS extension. Adding a CID to the ciphertext record format presents an opportunity to make other changes to the record format. In keeping with the best practices established by TLS 1.3, the type of the record is encrypted, and a mechanism provided for adding padding to obfuscate the plaintext length. 2. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\","}
{"id": "q-en-dtls-conn-id-633f533580defc629a14944ae340435a46a54ea6f68489427234ed3a6b625304", "old_text": "With multi-homing, an adversary is able to correlate the communication interaction over the two paths, which adds further privacy concerns. Importantly, the sequence number makes it possible for a passive attacker to correlate packets across CID changes. Thus, even if a", "comments": "Based on URL\nThanks!\nMartin wrote: URL LGTM. I would strike \", if these privacy properties are important in a given deployment\" from the acknowledgments section (which is an odd place for the accompanying statement. I would add an explicit note about the lack of CID update making this unsuitable for mobility scenarios. That's a common use case for this sort of mechanism, but this design lacks any defense against linkability. >", "new_text": "With multi-homing, an adversary is able to correlate the communication interaction over the two paths, which adds further privacy concerns. The lack of a CID update mechanism makes this extension unsuitable for mobility scenarios where correlation must be considered. Importantly, the sequence number makes it possible for a passive attacker to correlate packets across CID changes. Thus, even if a"}
{"id": "q-en-dtls-conn-id-55ee1d54fcc0048c80dfaa78e58d8082adbf60627f6d95a22ef13572f6d088f3", "old_text": "Connection Identifiers for DTLS 1.2 draft-ietf-tls-dtls-connection-id-07 Abstract", "comments": "We incorrectly put a normative reference on the RRC draft in the DTLS 1.2 CID draft.\nI lost somehow the track on that RRC and the discussion around it. The referred document mentions: and I would assume, that a on-path adversary could black hole traffic in both directions, even without CID. Not only message sent to a peer could be dropped, it would also be easy to drop a message sent back from that peer (at least, if the route back is on the same path). I would also assume, that a on-path adversary, which is able to change the source address, would also be able to change the destination address. So a simple traffic redirect will also be possible even without CID. In my opinion, the only threat left, which is caused by the usage of a CID, would be a DDoS attack using amplified traffic towards third party. For that it seems to be most important to throttle the traffic by limiting the number and size of messages until the address is verified. Though such a DDoS will be naturally less efficient, because the adversary has to wait for messages to change the sources-address, throttling would make it even more unattractive.", "new_text": "Connection Identifiers for DTLS 1.2 draft-ietf-tls-dtls-connection-id-latest Abstract"}
{"id": "q-en-dtls-conn-id-795d0a5bc3870b88390cd5268ab2c151e1163906467ae94efb22f065dbe7f9ec", "old_text": "1. The Datagram Transport Layer Security (DTLS) protocol was designed for securing connection-less transports, like UDP. DTLS, like TLS, starts with a handshake, which can be computationally demanding (particularly when public key cryptography is used). After a successful handshake, symmetric key cryptography is used to apply data origin authentication, integrity and confidentiality protection. This two-step approach allows endpoints to amortize the cost of the initial handshake across subsequent application data protection. Ideally, the second phase where application data is protected lasts over a longer period of time since the established keys will only need to be updated once the key lifetime expires. In the current version of DTLS, the IP address and port of the peer are used to identify the DTLS association. Unfortunately, in some cases, such as NAT rebinding, these values are insufficient. This is a particular issue in the Internet of Things when devices enter", "comments": "This is supposed to all be editorial stuff, and I split a couple changes out into separate commits in case they're not actually desired.", "new_text": "1. The Datagram Transport Layer Security (DTLS) RFC6347 protocol was designed for securing connection-less transports, like UDP. DTLS, like TLS, starts with a handshake, which can be computationally demanding (particularly when public key cryptography is used). After a successful handshake, symmetric key cryptography is used to apply data origin authentication, integrity and confidentiality protection. This two-step approach allows endpoints to amortize the cost of the initial handshake across subsequent application data protection. Ideally, the second phase where application data is protected lasts over a long period of time since the established keys will only need to be updated once the key lifetime expires. 
In DTLS as specified in RFC 6347, the IP address and port of the peer are used to identify the DTLS association. Unfortunately, in some cases, such as NAT rebinding, these values are insufficient. This is a particular issue in the Internet of Things when devices enter"}
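The lookup problem described in this record can be sketched as follows (a hypothetical demultiplexer, not part of the specification): an address-keyed table breaks on NAT rebinding, while a CID-keyed table survives it.

```python
# Hypothetical sketch: without CID, a DTLS association is found by the
# peer's (IP, port); after a NAT rebinding that lookup fails.  With a
# CID, the lookup key survives address changes.
class Demux:
    def __init__(self):
        self.by_addr = {}   # (ip, port) -> session
        self.by_cid = {}    # cid bytes  -> session

    def register(self, addr, cid, session):
        self.by_addr[addr] = session
        if cid:
            self.by_cid[cid] = session

    def lookup(self, addr, cid=None):
        if cid is not None and cid in self.by_cid:
            return self.by_cid[cid]      # survives NAT rebinding
        return self.by_addr.get(addr)    # classic address-based lookup
```

With this sketch, a record arriving from a new source port still resolves to the right session when it carries the CID, but not when the lookup relies on the address alone.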
{"id": "q-en-dtls-conn-id-795d0a5bc3870b88390cd5268ab2c151e1163906467ae94efb22f065dbe7f9ec", "old_text": "an endpoint to use a globally constant length for such connection identifiers. This can in turn ease parsing and connection lookup, for example by having the length in question be a compile-time constant. Implementations, which want to use variable-length CIDs, are responsible for constructing the CID in such a way that its length can be determined on reception. Such implementations must still be able to send CIDs of different length to other parties. Note that there is no CID length information included in the record itself. In DTLS 1.2, CIDs are exchanged at the beginning of the DTLS session only. There is no dedicated \"CID update\" message that allows new", "comments": "This is supposed to all be editorial stuff, and I split a couple changes out into separate commits in case they're not actually desired.", "new_text": "an endpoint to use a globally constant length for such connection identifiers. This can in turn ease parsing and connection lookup, for example by having the length in question be a compile-time constant. Such implementations MUST still be able to send CIDs of different length to other parties. Implementations that want to use variable-length CIDs are responsible for constructing the CID in such a way that its length can be determined on reception. Note that there is no CID length information included in the record itself. In DTLS 1.2, CIDs are exchanged at the beginning of the DTLS session only. There is no dedicated \"CID update\" message that allows new"}
{"id": "q-en-dtls-conn-id-795d0a5bc3870b88390cd5268ab2c151e1163906467ae94efb22f065dbe7f9ec", "old_text": "record is found in DTLSInnerPlaintext.real_type after decryption. The CID value, cid_length bytes long, as agreed at the time the extension has been negotiated. The encrypted form of the serialized DTLSInnerPlaintext structure.", "comments": "This is supposed to all be editorial stuff, and I split a couple changes out into separate commits in case they're not actually desired.", "new_text": "record is found in DTLSInnerPlaintext.real_type after decryption. The CID value, cid_length bytes long, as agreed at the time the extension has been negotiated. Recall that (as discussed previously) each peer chooses the CID value it will receive and use to identify the connection, so an implementation can choose to always receive CIDs of a fixed length. If, however, an implementation chooses to receive different lengths of CID, the assigned CID values must be self-delineating since there is no other mechanism available to determine what connection (and thus, what CID length) is in use. The encrypted form of the serialized DTLSInnerPlaintext structure."}
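One possible (purely illustrative, not specified by the draft) way to make variable-length CID values self-delineating, as this record requires, is to let the first byte of each assigned CID encode the length of the remainder:

```python
# Illustrative only: the draft does not mandate any particular scheme,
# it only requires that a receiver can determine the CID length from
# the CID values it assigned itself.
import os

def assign_cid(body_len: int) -> bytes:
    """Assign a CID whose first byte encodes the length of the rest."""
    assert 0 <= body_len <= 255
    return bytes([body_len]) + os.urandom(body_len)

def parse_cid(record: bytes) -> tuple:
    """Split a received record into (cid, remainder) via the length byte."""
    body_len = record[0]
    cid_len = 1 + body_len
    return record[:cid_len], record[cid_len:]
```

Because the receiver constructed every CID it accepts, it can rely on its own length-prefix convention without any length field in the record itself.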
{"id": "q-en-dtls-conn-id-795d0a5bc3870b88390cd5268ab2c151e1163906467ae94efb22f065dbe7f9ec", "old_text": "5. Several types of ciphers have been defined for use with TLS and DTLS and the MAC calculation for those ciphers differs slightly. This specification modifies the MAC calculation defined in RFC6347 and RFC7366 as well as the definition of the additional data used with AEAD ciphers provided in RFC6347 for records with content type tls12_cid. The modified algorithm MUST NOT be applied to records that do not carry a CID, i.e., records with content type other than tls12_cid.", "comments": "This is supposed to all be editorial stuff, and I split a couple changes out into separate commits in case they're not actually desired.", "new_text": "5. Several types of ciphers have been defined for use with TLS and DTLS and the MAC calculations for those ciphers differ slightly. This specification modifies the MAC calculation as defined in RFC6347 and RFC7366, as well as the definition of the additional data used with AEAD ciphers provided in RFC6347, for records with content type tls12_cid. The modified algorithm MUST NOT be applied to records that do not carry a CID, i.e., records with content type other than tls12_cid."}
{"id": "q-en-dtls-conn-id-795d0a5bc3870b88390cd5268ab2c151e1163906467ae94efb22f065dbe7f9ec", "old_text": "The following fields are defined in this document; all other fields are as defined in the cited documents. Value of the negotiated CID. 1 byte field indicating the length of the negotiated CID. The length (in bytes) of the serialised DTLSInnerPlaintext. The length MUST NOT exceed 2^14. Note \"+\" denotes concatenation.", "comments": "This is supposed to all be editorial stuff, and I split a couple changes out into separate commits in case they're not actually desired.", "new_text": "The following fields are defined in this document; all other fields are as defined in the cited documents. Value of the negotiated CID (variable length). 1 byte field indicating the length of the negotiated CID. The length (in bytes) of the serialised DTLSInnerPlaintext (two-byte integer). The length MUST NOT exceed 2^14. Note \"+\" denotes concatenation."}
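As a non-normative illustration of the "+" concatenation convention, the three fields this record defines could be serialized as follows (the full MAC input for tls12_cid records contains additional fields that this sketch omits):

```python
# Sketch only: shows the serialization of the cid, cid_length, and
# length_of_DTLSInnerPlaintext fields and their "+" concatenation.
import struct

def mac_input_fields(cid: bytes, inner_plaintext: bytes) -> bytes:
    if len(inner_plaintext) > 2**14:
        raise ValueError("length MUST NOT exceed 2^14")
    cid_length = struct.pack("!B", len(cid))          # 1-byte field
    length = struct.pack("!H", len(inner_plaintext))  # two-byte integer
    return cid + cid_length + length                  # "+" = concatenation
```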
{"id": "q-en-dtls-conn-id-795d0a5bc3870b88390cd5268ab2c151e1163906467ae94efb22f065dbe7f9ec", "old_text": "The above is necessary to protect against attacks that use datagrams with spoofed addresses or replayed datagrams to trigger attacks. Note that there is no requirement to use of the anti-replay window mechanism defined in Section 4.1.2.6 of DTLS 1.2. Both solutions, the \"anti-replay window\" or \"newer algorithm\" will prevent address updates from replay attacks while the latter will only apply to peer address updates and the former applies to any application layer traffic. Application protocols that implement protection against these attacks depend on being aware of changes in peer addresses so that they can engage the necessary mechanisms. When delivered such an event, an application layer-specific address validation mechanism can be triggered, for example one that is based on successful exchange of minimal amount of ping-pong traffic with the peer. Alternatively, an DTLS-specific mechanism may be used, as described in I-D.tschofenig- tls-dtls-rrc.", "comments": "This is supposed to all be editorial stuff, and I split a couple changes out into separate commits in case they're not actually desired.", "new_text": "The above is necessary to protect against attacks that use datagrams with spoofed addresses or replayed datagrams to trigger attacks. Note that there is no requirement for use of the anti-replay window mechanism defined in Section 4.1.2.6 of DTLS 1.2. Both solutions, the \"anti-replay window\" or \"newer\" algorithm, will prevent address updates from replay attacks while the latter will only apply to peer address updates and the former applies to any application layer traffic. Note that datagrams that pass the DTLS cryptographic verification procedures but do not trigger a change of peer address are still valid DTLS records and are still to be passed to the application. 
Application protocols that implement protection against these attacks depend on being aware of changes in peer addresses so that they can engage the necessary mechanisms. When such an event is delivered, an application layer-specific address validation mechanism can be triggered, for example one that is based on successful exchange of a minimal amount of ping-pong traffic with the peer. Alternatively, a DTLS-specific mechanism may be used, as described in I-D.tschofenig-tls-dtls-rrc."}
{"id": "q-en-dtls-conn-id-795d0a5bc3870b88390cd5268ab2c151e1163906467ae94efb22f065dbe7f9ec", "old_text": "dtls-example2 shows an example exchange where a CID is used uni- directionally from the client to the server. To indicate that a zero-length CID we use the term 'connection_id=empty'. Note: In the example exchange the CID is included in the record layer once encryption is enabled. In DTLS 1.2 only one handshake message is encrypted, namely the Finished message. Since the example shows how to use the CID for payloads sent from the client to the server only the record layer payloads containing the Finished messages include a CID. Application data payloads sent from the client to the server contain a CID in this example as well. 8.", "comments": "This is supposed to all be editorial stuff, and I split a couple changes out into separate commits in case they're not actually desired.", "new_text": "dtls-example2 shows an example exchange where a CID is used uni- directionally from the client to the server. To indicate that a zero-length CID is present in the \"connection_id\" extension we use the notation 'connection_id=empty'. Note: In the example exchange the CID is included in the record layer once encryption is enabled. In DTLS 1.2 only one handshake message is encrypted, namely the Finished message. Since the example shows how to use the CID for payloads sent from the client to the server, only the record layer payloads containing the Finished message or application data include a CID. 8."}
{"id": "q-en-dtls-conn-id-795d0a5bc3870b88390cd5268ab2c151e1163906467ae94efb22f065dbe7f9ec", "old_text": "details about record padding can be found in Section 5.4 and Appendix E.3 of RFC 8446. Finally, endpoints can use the CID to attach arbitrary metadata to each record they receive. This may be used as a mechanism to communicate per-connection information to on-path observers. There is no straightforward way to address this concern with CIDs that contain arbitrary values. Implementations concerned about this aspects SHOULD refuse to use CIDs. 9.", "comments": "This is supposed to all be editorial stuff, and I split a couple changes out into separate commits in case they're not actually desired.", "new_text": "details about record padding can be found in Section 5.4 and Appendix E.3 of RFC 8446. Finally, endpoints can use the CID to attach arbitrary per-connection metadata to each record they receive on a given connection. This may be used as a mechanism to communicate per-connection information to on-path observers. There is no straightforward way to address this concern with CIDs that contain arbitrary values. Implementations concerned about this aspect SHOULD refuse to use CIDs. 9."}
{"id": "q-en-dtls-conn-id-795d0a5bc3870b88390cd5268ab2c151e1163906467ae94efb22f065dbe7f9ec", "old_text": "asymmetry of request/response message sizes. Additionally, an attacker able to observe the data traffic exchanged between two DTLS peers is able to replay datagrams with modified IP address/port numbers. The topic of peer address updates is discussed in peer-address- update.", "comments": "This is supposed to all be editorial stuff, and I split a couple changes out into separate commits in case they're not actually desired.", "new_text": "asymmetry of request/response message sizes. Additionally, an attacker able to observe the data traffic exchanged between two DTLS peers is able to modify IP address/port numbers. When the optional DTLS replay detection is not in use, such an attacker can also replay the observed packets, and the multiple copies of each packet can have different IP address/port numbers as well. The topic of peer address updates is discussed in peer-address- update."}
{"id": "q-en-dtls-conn-id-795d0a5bc3870b88390cd5268ab2c151e1163906467ae94efb22f065dbe7f9ec", "old_text": "\"ExtensionType Values\" registry, defined in RFC5246, for connection_id(TBD1) as described in the table below. IANA is requested to add an extra column to the TLS ExtensionType Values registry to indicate whether an extension is only applicable to DTLS. Note: The value \"N\" in the Recommended column is set because this extension is intended only for specific use cases. This document describes an extension for DTLS 1.2 only; it is not to TLS (1.3). The DTLS 1.3 functionality is described in I-D.ietf-tls-dtls13. IANA is requested to allocate tls12_cid(TBD2) in the \"TLS ContentType Registry\". The tls12_cid ContentType is only applicable to DTLS 1.2.", "comments": "This is supposed to all be editorial stuff, and I split a couple changes out into separate commits in case they're not actually desired.", "new_text": "\"ExtensionType Values\" registry, defined in RFC5246, for connection_id(TBD1) as described in the table below. IANA is requested to add an extra column to the TLS ExtensionType Values registry to indicate whether an extension is only applicable to DTLS and to include this document as an additional reference for the registry. Note: The value \"N\" in the Recommended column is set because this extension is intended only for specific use cases. This document describes the behavior of this extension for DTLS 1.2 only; it is not applicable to TLS, and its usage for DTLS 1.3 is described in I-D.ietf-tls-dtls13. IANA is requested to allocate tls12_cid(TBD2) in the \"TLS ContentType Registry\". The tls12_cid ContentType is only applicable to DTLS 1.2."}
{"id": "q-en-dtls-rrc-d64ddea4579f274a04699c7b6177988bcc6e22109d991068ea5d4e7b0ef5be8f", "old_text": "confidence to the receiving peer that the sending peer is reachable at the indicated address and port. 2. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\",", "comments": "and\nFor example, allow multiple outstanding path_challenge messages to cater for potential packet loss.\nWe want to limit the application data sent to a reasonable threshold while we run the RRC.\nRFC 9000, section 8 \"address validation\" (second paragraph, last line): 3 times the amount of data received.\nHere is the text: Address Validation Address validation ensures that an endpoint cannot be used for a traffic amplification attack. In such an attack, a packet is sent to a server with spoofed source address information that identifies a victim. If a server generates more or larger packets in response to that packet, the attacker can use the server to send more data toward the victim than it would be able to send on its own. The primary defense against amplification attacks is verifying that a peer is able to receive packets at the transport address that it claims. Therefore, after receiving packets from an address that is not yet validated, an endpoint MUST limit the amount of data it sends to the unvalidated address to three times the amount of data received from that address. This limit on the size of responses is known as the anti-amplification limit.", "new_text": "confidence to the receiving peer that the sending peer is reachable at the indicated address and port. Note however that, irrespective of CID, if RRC has been successfully negotiated by the peers, path validation can be used at any time by either endpoint. For instance, an endpoint might use RRC to check that a peer is still in possession of its address after a period of quiescence. 2. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\","}
{"id": "q-en-dtls-rrc-d64ddea4579f274a04699c7b6177988bcc6e22109d991068ea5d4e7b0ef5be8f", "old_text": "I-D.ietf-tls-dtls13. The presentation language used in this document is described in Section 4 of RFC8446. 3. The use of RRC is negotiated via the \"rrc\" DTLS-only extension. On", "comments": "and\nFor example, allow multiple outstanding path_challenge messages to cater for potential packet loss.\nWe want to limit the application data sent to a reasonable threshold while we run the RRC.\nRFC 9000, section 8 \"address validation\" (second paragraph, last line): 3 times the amount of data received.\nHere is the text: Address Validation Address validation ensures that an endpoint cannot be used for a traffic amplification attack. In such an attack, a packet is sent to a server with spoofed source address information that identifies a victim. If a server generates more or larger packets in response to that packet, the attacker can use the server to send more data toward the victim than it would be able to send on its own. The primary defense against amplification attacks is verifying that a peer is able to receive packets at the transport address that it claims. Therefore, after receiving packets from an address that is not yet validated, an endpoint MUST limit the amount of data it sends to the unvalidated address to three times the amount of data received from that address. This limit on the size of responses is known as the anti-amplification limit.", "new_text": "I-D.ietf-tls-dtls13. The presentation language used in this document is described in Section 4 of RFC8446. This document reuses the definition of \"anti-amplification limit\" from RFC9000 to mean three times the amount of data received from an unvalidated address. This includes all DTLS records originating from that source address, excluding discarded ones. 3. The use of RRC is negotiated via the \"rrc\" DTLS-only extension. On"}
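As an illustration only (the class and method names are hypothetical), an endpoint could track the anti-amplification limit defined in this record like this:

```python
# Sketch: limit bytes sent toward an unvalidated address to three
# times the bytes received (non-discarded records) from that address.
class UnvalidatedPath:
    LIMIT_FACTOR = 3

    def __init__(self):
        self.bytes_received = 0  # non-discarded DTLS records from the address
        self.bytes_sent = 0

    def on_record_received(self, size: int) -> None:
        self.bytes_received += size

    def may_send(self, size: int) -> bool:
        """True if sending `size` more bytes stays within the limit."""
        return self.bytes_sent + size <= self.LIMIT_FACTOR * self.bytes_received

    def on_send(self, size: int) -> None:
        self.bytes_sent += size
```

The counters would be reset (or the limit lifted entirely) once the return routability check completes and the address is validated.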
{"id": "q-en-dtls-rrc-d64ddea4579f274a04699c7b6177988bcc6e22109d991068ea5d4e7b0ef5be8f", "old_text": "The \"return_routability_check\" message MUST be authenticated and encrypted using the currently active security context. The receiver that observes the peer's address and or port update MUST stop sending any buffered application data (or limit the data sent to a TBD threshold) and initiate the return routability check that proceeds as follows: A cookie is placed in a \"return_routability_check\" message of type path_challenge; The message is sent to the observed new address and a timeout T is started; The peer endpoint, after successfully verifying the received \"return_routability_check\" message echoes the cookie value in a \"return_routability_check\" message of type path_response; When the initiator receives and verifies the \"return_routability_check\" message contains the sent cookie, it", "comments": "and\nFor example, allow multiple outstanding path_challenge messages to cater for potential packet loss.\nWe want to limit the application data sent to a reasonable threshold while we run the RRC.\nRFC 9000, section 8 \"address validation\" (second paragraph, last line): 3 times the amount of data received.\nHere is the text: Address Validation Address validation ensures that an endpoint cannot be used for a traffic amplification attack. In such an attack, a packet is sent to a server with spoofed source address information that identifies a victim. If a server generates more or larger packets in response to that packet, the attacker can use the server to send more data toward the victim than it would be able to send on its own. The primary defense against amplification attacks is verifying that a peer is able to receive packets at the transport address that it claims. 
Therefore, after receiving packets from an address that is not yet validated, an endpoint MUST limit the amount of data it sends to the unvalidated address to three times the amount of data received from that address. This limit on the size of responses is known as the anti-amplification limit.", "new_text": "The \"return_routability_check\" message MUST be authenticated and encrypted using the currently active security context. 5. The receiver that observes the peer's address or port update MUST stop sending any buffered application data (or limit the data sent to the unvalidated address to the anti-amplification limit) and initiate the return routability check that proceeds as follows: An unpredictable cookie is placed in a \"return_routability_check\" message of type path_challenge; The message is sent to the observed new address and a timer T (see timer-choice) is started; The peer endpoint, after successfully verifying the received \"return_routability_check\" message responds by echoing the cookie value in a \"return_routability_check\" message of type path_response; When the initiator receives and verifies the \"return_routability_check\" message contains the sent cookie, it"}
{"id": "q-en-dtls-rrc-d64ddea4579f274a04699c7b6177988bcc6e22109d991068ea5d4e7b0ef5be8f", "old_text": "After this point, any pending send operation is resumed to the bound peer address. 5. The example TLS 1.3 handshake shown in fig-handshake shows a client and a server negotiating the support for CID and for the RRC", "comments": "and\nFor example, allow multiple outstanding path_challenge messages to cater for potential packet loss.\nWe want to limit the application data sent to a reasonable threshold while we run the RRC.\nRFC 9000, section 8 \"address validation\" (second paragraph, last line): 3 times the amount of data received.\nHere is the text: Address Validation Address validation ensures that an endpoint cannot be used for a traffic amplification attack. In such an attack, a packet is sent to a server with spoofed source address information that identifies a victim. If a server generates more or larger packets in response to that packet, the attacker can use the server to send more data toward the victim than it would be able to send on its own. The primary defense against amplification attacks is verifying that a peer is able to receive packets at the transport address that it claims. Therefore, after receiving packets from an address that is not yet validated, an endpoint MUST limit the amount of data it sends to the unvalidated address to three times the amount of data received from that address. This limit on the size of responses is known as the anti-amplification limit.", "new_text": "After this point, any pending send operation is resumed to the bound peer address. path-challenge-reqs and path-response-reqs contain the requirements for the initiator and responder roles, broken down per protocol phase. 5.1. The initiator MAY send multiple \"return_routability_check\" messages of type path_challenge to cater for packet loss on the probed path. Each path_challenge SHOULD go into different transport packets. 
(Note that the DTLS implementation may not have control over the packetization done by the transport layer.) The transmission of subsequent path_challenge messages SHOULD be paced to decrease the chance of loss. Each path_challenge message MUST contain random data. The initiator MAY use padding using the record padding mechanism available in DTLS 1.3 (and in DTLS 1.2, when CID is enabled on the sending direction) up to the anti-amplification limit to probe if the path MTU (PMTU) for the new path is still acceptable. 5.2. The responder MUST NOT delay sending an elicited path_response message. The responder MUST send exactly one path_response message for each received path_challenge. The responder MUST send the path_response on the network path where the corresponding path_challenge has been received, so that validation succeeds only if the path is functional in both directions. The initiator MUST NOT enforce this behaviour. The initiator MUST silently discard any invalid path_response it receives. Note that RRC does not cater for PMTU discovery on the reverse path. If the responder wants to do PMTU discovery using RRC, it should initiate a new path validation procedure. 5.3. When setting T, implementations are cautioned that the new path could have a longer round-trip time (RTT) than the original. In settings where there is external information about the RTT of the active path, implementations SHOULD use T = 3xRTT. If an implementation has no way to obtain information regarding the RTT of the active path, a value of 1s SHOULD be used. Profiles for specific deployment environments - for example, constrained networks I-D.ietf-uta-tls13-iot-profile - MAY specify a different, more suitable value. 6. The example TLS 1.3 handshake shown in fig-handshake shows a client and a server negotiating the support for CID and for the RRC"}
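The timer guidance and cookie handling in this record can be sketched as follows (function names are illustrative, not taken from the specification):

```python
# Hedged sketch of the initiator side of a return routability check:
# an unpredictable cookie, timer T = 3xRTT when an RTT estimate for
# the active path exists (1 second otherwise), and silent discard of
# invalid path_response messages.
import os
from typing import Optional

def choose_timer(rtt_estimate: Optional[float]) -> float:
    """Timer T: 3xRTT if known, otherwise the recommended 1s default."""
    return 3 * rtt_estimate if rtt_estimate is not None else 1.0

def start_path_challenge() -> bytes:
    """Generate the unpredictable cookie carried in a path_challenge."""
    return os.urandom(8)

def handle_path_response(sent_cookie: bytes, echoed_cookie: bytes) -> bool:
    """Accept a path_response only if it echoes the sent cookie;
    invalid responses are silently discarded by the initiator."""
    return echoed_cookie == sent_cookie
```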
{"id": "q-en-dtls-rrc-d64ddea4579f274a04699c7b6177988bcc6e22109d991068ea5d4e7b0ef5be8f", "old_text": "However, the server wants to test the reachability of the client at his new IP address. 6. Note that the return routability checks do not protect against flooding of third-parties if the attacker is on-path, as the attacker", "comments": "and\nFor example, allow multiple outstanding path_challenge messages to cater for potential packet loss.\nWe want to limit the application data sent to a reasonable threshold while we run the RRC.\nRFC 9000, section 8 \"address validation\" (second paragraph, last line): 3 times the amount of data received.\nHere is the text: Address Validation Address validation ensures that an endpoint cannot be used for a traffic amplification attack. In such an attack, a packet is sent to a server with spoofed source address information that identifies a victim. If a server generates more or larger packets in response to that packet, the attacker can use the server to send more data toward the victim than it would be able to send on its own. The primary defense against amplification attacks is verifying that a peer is able to receive packets at the transport address that it claims. Therefore, after receiving packets from an address that is not yet validated, an endpoint MUST limit the amount of data it sends to the unvalidated address to three times the amount of data received from that address. This limit on the size of responses is known as the anti-amplification limit.", "new_text": "However, the server wants to test the reachability of the client at his new IP address. 7. Note that the return routability checks do not protect against flooding of third-parties if the attacker is on-path, as the attacker"}
{"id": "q-en-dtls-rrc-d64ddea4579f274a04699c7b6177988bcc6e22109d991068ea5d4e7b0ef5be8f", "old_text": "those datagrams are cryptographically authenticated). On-path adversaries can, in general, pose a harm to connectivity. 7. IANA is requested to allocate an entry to the TLS \"ContentType\" registry, for the \"return_routability_check(TBD2)\" defined in this", "comments": "and\nFor example, allow multiple outstanding path_challenge messages to cater for potential packet loss.\nWe want to limit the application data sent to a reasonable threshold while we run the RRC.\nRFC 9000, section 8 \"address validation\" (second paragraph, last line): 3 times the amount of data received.\nHere is the text: Address Validation Address validation ensures that an endpoint cannot be used for a traffic amplification attack. In such an attack, a packet is sent to a server with spoofed source address information that identifies a victim. If a server generates more or larger packets in response to that packet, the attacker can use the server to send more data toward the victim than it would be able to send on its own. The primary defense against amplification attacks is verifying that a peer is able to receive packets at the transport address that it claims. Therefore, after receiving packets from an address that is not yet validated, an endpoint MUST limit the amount of data it sends to the unvalidated address to three times the amount of data received from that address. This limit on the size of responses is known as the anti-amplification limit.", "new_text": "those datagrams are cryptographically authenticated). On-path adversaries can, in general, pose a harm to connectivity. 8. IANA is requested to allocate an entry to the TLS \"ContentType\" registry, for the \"return_routability_check(TBD2)\" defined in this"}
{"id": "q-en-dtls-rrc-d64ddea4579f274a04699c7b6177988bcc6e22109d991068ea5d4e7b0ef5be8f", "old_text": "\"rrc\" extension to the \"TLS ExtensionType Values\" registry as described in tbl-ext. 8. Issues against this document are tracked at https://github.com/tlswg/ dtls-rrc/issues 9. We would like to thank Achim Kraus, Hanno Becker, Hanno Boeck, Manuel Pegourie-Gonnard, Mohit Sahni and Rich Salz for their input to this", "comments": "and\nFor example, allow multiple outstanding path_challenge messages to cater for potential packet loss.\nWe want to limit the application data sent to a reasonable threshold while we run the RRC.\nRFC 9000, section 8 \"address validation\" (second paragraph, last line): 3 times the amount of data received.\nHere is the text: Address Validation Address validation ensures that an endpoint cannot be used for a traffic amplification attack. In such an attack, a packet is sent to a server with spoofed source address information that identifies a victim. If a server generates more or larger packets in response to that packet, the attacker can use the server to send more data toward the victim than it would be able to send on its own. The primary defense against amplification attacks is verifying that a peer is able to receive packets at the transport address that it claims. Therefore, after receiving packets from an address that is not yet validated, an endpoint MUST limit the amount of data it sends to the unvalidated address to three times the amount of data received from that address. This limit on the size of responses is known as the anti-amplification limit.", "new_text": "\"rrc\" extension to the \"TLS ExtensionType Values\" registry as described in tbl-ext. 9. Issues against this document are tracked at https://github.com/tlswg/ dtls-rrc/issues 10. We would like to thank Achim Kraus, Hanno Becker, Hanno Boeck, Manuel Pegourie-Gonnard, Mohit Sahni and Rich Salz for their input to this"}
{"id": "q-en-dtls13-spec-f0fd5c9cf863f5428524c0206bb7947aa72271ac6acf4e8841868e59128c19ba", "old_text": "upper layer protocol MUST NOT write any record that exceeds the maximum record size of 2^14 bytes. Note that DTLS does not defend against spoofed ICMP messages; implementations SHOULD ignore any such messages that indicate PMTUs below the IPv4 and IPv6 minimums of 576 and 1280 bytes respectively The DTLS record layer SHOULD allow the upper layer protocol to discover the amount of record expansion expected by the DTLS processing. If there is a transport protocol indication (either via ICMP or via a refusal to send the datagram as in Section 14 of RFC4340), then the DTLS record layer MUST inform the upper layer protocol of the error. The DTLS record layer SHOULD NOT interfere with upper layer protocols performing PMTU discovery, whether via RFC1191 or RFC4821 mechanisms.", "comments": "I am OK with this change. There are no normative changes in there because the normative text segment is only moved to a different paragraph.\nI was referring to this added clause: \"alternately it MAY report PMTU estimates minus the estimated expansion from the transport layer and DTLS record framing.\"\nWhat is awkward here is that this is phrased as normative requirements on an API. Maybe worded as advice \"an application that uses DTLS might need information from DTLS to allow it to estimate the PMTU; DTLS could report a PMTU estimate less any DTLS overheads, or it can report those overheads and allow the application to perform its own estimation\"\nThere has historically been API language here, so I think we should keep it and just adjust it as here.\nThis one probably needs consensus given new normative language.", "new_text": "upper layer protocol MUST NOT write any record that exceeds the maximum record size of 2^14 bytes. 
The DTLS record layer SHOULD also allow the upper layer protocol to discover the amount of record expansion expected by the DTLS processing; alternately it MAY report PMTU estimates minus the estimated expansion from the transport layer and DTLS record framing. Note that DTLS does not defend against spoofed ICMP messages; implementations SHOULD ignore any such messages that indicate PMTUs below the IPv4 and IPv6 minimums of 576 and 1280 bytes respectively. If there is a transport protocol indication that the PMTU was exceeded (either via ICMP or via a refusal to send the datagram as in Section 14 of RFC4340), then the DTLS record layer MUST inform the upper layer protocol of the error. The DTLS record layer SHOULD NOT interfere with upper layer protocols performing PMTU discovery, whether via RFC1191 or RFC4821 mechanisms."}
{"id": "q-en-dtls13-spec-f0fd5c9cf863f5428524c0206bb7947aa72271ac6acf4e8841868e59128c19ba", "old_text": "DTLS implementations SHOULD follow the following rules: If the DTLS record layer informs the DTLS handshake layer that a message is too big, it SHOULD immediately attempt to fragment it, using any existing information about the PMTU. If repeated retransmissions do not result in a response, and the PMTU is unknown, subsequent retransmissions SHOULD back off to a", "comments": "I am OK with this change. There are no normative changes in there because the normative text segment is only moved to a different paragraph.\nI was referring to this added clause: \"alternately it MAY report PMTU estimates minus the estimated expansion from the transport layer and DTLS record framing.\"\nWhat is awkward here is that this is phrased as normative requirements on an API. Maybe worded as advice \"an application that uses DTLS might need information from DTLS to allow it to estimate the PMTU; DTLS could report a PMTU estimate less any DTLS overheads, or it can report those overheads and allow the application to perform its own estimation\"\nThere has historically been API language here, so I think we should keep it and just adjust it as here.\nThis one probably needs consensus given new normative language.", "new_text": "DTLS implementations SHOULD follow the following rules: If the DTLS record layer informs the DTLS handshake layer that a message is too big, the handshake layer SHOULD immediately attempt to fragment the message, using any existing information about the PMTU. If repeated retransmissions do not result in a response, and the PMTU is unknown, subsequent retransmissions SHOULD back off to a"}
{"id": "q-en-dtls13-spec-313ca7a576a5d70dd359fe1d60d86ed438a3d44b870a24f8f1efab76a66532e6", "old_text": "EndOfEarlyData is not necessary to determine when the early data is complete, and because DTLS is lossy, attackers can trivially mount the deletion attacks that EndOfEarlyData prevents in TLS. Servers SHOULD aggressively age out the epoch 1 keys upon receiving the first epoch 2 record and SHOULD NOT accept epoch 1 data after the first epoch 3 record is received. (See dtls-epoch for the definitions of each epoch.) 5.6.", "comments": "You don't want to age out before you get epoch 3 because otherwise you might be dropping when you have handshake packets.\nFor my understanding: what is the harm of keeping the 0-RTT keys around till the end of the handshake?\n\"Yes, the goal is to encourage people not to keep 0-RTT keys too long, because they might not have PCS. The replay risk is essentially a non-issue, except to the extent that it might already have been replayed.\"", "new_text": "EndOfEarlyData is not necessary to determine when the early data is complete, and because DTLS is lossy, attackers can trivially mount the deletion attacks that EndOfEarlyData prevents in TLS. Servers SHOULD NOT accept records from epoch 1 indefinitely once they are able to process records from epoch 3. Though reordering of IP packets can result in records from epoch 1 arriving after records from epoch 3, this is not likely to persist for very long relative to the round trip time. Servers could discard epoch 1 keys after the first epoch 3 data arrives, or retain keys for processing epoch 1 data for a short period. (See dtls-epoch for the definitions of each epoch.) 5.6."}
{"id": "q-en-dtls13-spec-55373f0b18e12002d098bf73ab5f36956fc269de165e66fc24fdf7c01c0e1592", "old_text": "provided. Endpoints MUST NOT have more than one NewConnectionId message outstanding. If the client and server have negotiated use of the \"connection_id\" extension, either side can request a new CID using the RequestConnectionId message. In general, implementations SHOULD use a new CID whenever sending on a new path, and SHOULD request new CIDs for this purpose if path changes are anticipated. The number of CIDs desired.", "comments": "It's possible to negotiate the CID extension but that you want to receive an empty CID (this is how you get asymmetrical CID lengths). The current text says: CID. If an implementation receives these messages when CIDs were not negotiated, it MUST abort the connection with an unexpected_message alert. But does this mean that (for instance) you can negotiate receiving an empty CID and then switch to a non-empty? I am tempted to say \"no\" as I recall QUIC did.\nDoing what QUIC did seems prudent ... and if I didn't know what QUIC did I think I would also lean towards disallowing switching between empty and non-empty on a live connection.", "new_text": "provided. Endpoints MUST NOT have more than one NewConnectionId message outstanding. Implementations which either did not negotiate the \"connection_id\" extension or which have negotiated receiving an empty CID MUST NOT send NewConnectionId. Implementations MUST NOT send RequestConnectionId when sending an empty connection ID. Implementations which detect a violation of these rules MUST terminate the connection with an \"unexpected_message\" alert. Implementations SHOULD use a new CID whenever sending on a new path, and SHOULD request new CIDs for this purpose if path changes are anticipated. The number of CIDs desired."}
{"id": "q-en-dtls13-spec-c5d9f49320c2d2dc68b9aa601f1c9affd14f68b1a63d9bb9feb4eb57d2880afd", "old_text": "endpoint: Either the client or server of the connection. handshake: An initial negotiation between client and server that establishes the parameters of the connection.", "comments": "Thanks for this document -- badly needed and easy to read. Thanks also to Bernard Adoba for his recent tsvart review. I support his comments. COMMENTS: Sec 2. It might be useful to introduce the term \"epoch\" in the glossary, for those who read this front to back. Sec 4.2.3: \"The encrypted sequence number is computed by XORing the leading bytes of the Mask with the sequence number. Decryption is accomplished by the same process.\" The text is unclear if the XOR is applied to the expanded sequence number or merely the 1-2 octets on the wire. I presume it's the latter, but this should be clarified. Sec 4.2.3: It's implied here that the snkey rotates with the epoch. As this is different from QUIC, it's probably worth spelling out. Sec 5.1 is a bit vague about the amplification limit; why not at least RECOMMEND 3, as we've converged on this elsewhere? Sec 5.1. Reading between the lines, it's clear that the cookie can't be used as address verification across connections in the way that a NEWTOKEN token is. It would be good to spell this out for clients -- use the resumption token or whatever instead. Sec 7.2 \"As noted above, the receipt of any record responding to a given flight MUST be taken as an implicit acknowledgement for the entire flight.\" I think this should be s/entire flight/entire previous flight? Sec 7.2 \"Upon receipt of an ACK that leaves it with only some messages from a flight having been acknowledged an implementation SHOULD retransmit the unacknowledged messages or fragments.\" This language appears inconsistent with Figure 12, where Record 1 has not been acknowledged but is also not retransmitted. 
It appears there is an implied handling of empty ACKs that isn't written down anywhere in Sec 7.2 Sec 9. Should there be any flow control limits on newconnectionid? Or should receivers be free to simply drop CIDs they can't handle? It might be good to specify. Finally, a really weird one. Reading this document and references to connection ID prompted to me to think how QUIC-LB could apply to DTLS. The result is here: URL Please note the rather unfortunate third-to-last paragraph. I'm happy to take the answer that this use case doesn't matter, since I made it up today. But if it does, it would be very helpful if (1) DTLS 1.3 clients MUST include a connectionid extension in their ClientHello, even if zero length, and/or (2) this draft updated 4.1.4 of 8446 to allow the server to include connectionid in HelloRetryRequest even if the client didn't offer it. Thoughts? NITS: s/select(HandshakeType)/select(msg_type). Though with pseudocode your mileage may vary as to what's clearer. s/consitute/constitute Sec 5.7 In table 1, why include one ACK in the diagram but not the other? It's clear from the note, but the figure is a weird omission.\nAddressed in URL", "new_text": "endpoint: Either the client or server of the connection. epoch: One set of cryptographic keys used for encryption and decryption. handshake: An initial negotiation between client and server that establishes the parameters of the connection."}
{"id": "q-en-dtls13-spec-c5d9f49320c2d2dc68b9aa601f1c9affd14f68b1a63d9bb9feb4eb57d2880afd", "old_text": "MSL: Maximum Segment Lifetime The reader is assumed to be familiar with the TLS 1.3 specification. As in TLS 1.3 the HelloRetryRequest has the same format as a ServerHello message but for convenience we use the term HelloRetryRequest throughout this document as if it were a distinct message. The reader is also assumed to be familiar with I-D.ietf-tls-dtls- connection-id as this document applies the CID functionality to DTLS", "comments": "Thanks for this document -- badly needed and easy to read. Thanks also to Bernard Adoba for his recent tsvart review. I support his comments. COMMENTS: Sec 2. It might be useful to introduce the term \"epoch\" in the glossary, for those who read this front to back. Sec 4.2.3: \"The encrypted sequence number is computed by XORing the leading bytes of the Mask with the sequence number. Decryption is accomplished by the same process.\" The text is unclear if the XOR is applied to the expanded sequence number or merely the 1-2 octets on the wire. I presume it's the latter, but this should be clarified. Sec 4.2.3: It's implied here that the snkey rotates with the epoch. As this is different from QUIC, it's probably worth spelling out. Sec 5.1 is a bit vague about the amplification limit; why not at least RECOMMEND 3, as we've converged on this elsewhere? Sec 5.1. Reading between the lines, it's clear that the cookie can't be used as address verification across connections in the way that a NEWTOKEN token is. It would be good to spell this out for clients -- use the resumption token or whatever instead. Sec 7.2 \"As noted above, the receipt of any record responding to a given flight MUST be taken as an implicit acknowledgement for the entire flight.\" I think this should be s/entire flight/entire previous flight? 
Sec 7.2 \"Upon receipt of an ACK that leaves it with only some messages from a flight having been acknowledged an implementation SHOULD retransmit the unacknowledged messages or fragments.\" This language appears inconsistent with Figure 12, where Record 1 has not been acknowledged but is also not retransmitted. It appears there is an implied handling of empty ACKs that isn't written down anywhere in Sec 7.2 Sec 9. Should there be any flow control limits on newconnectionid? Or should receivers be free to simply drop CIDs they can't handle? It might be good to specify. Finally, a really weird one. Reading this document and references to connection ID prompted to me to think how QUIC-LB could apply to DTLS. The result is here: URL Please note the rather unfortunate third-to-last paragraph. I'm happy to take the answer that this use case doesn't matter, since I made it up today. But if it does, it would be very helpful if (1) DTLS 1.3 clients MUST include a connectionid extension in their ClientHello, even if zero length, and/or (2) this draft updated 4.1.4 of 8446 to allow the server to include connectionid in HelloRetryRequest even if the client didn't offer it. Thoughts? NITS: s/select(HandshakeType)/select(msg_type). Though with pseudocode your mileage may vary as to what's clearer. s/consitute/constitute Sec 5.7 In table 1, why include one ACK in the diagram but not the other? It's clear from the note, but the figure is a weird omission.\nAddressed in URL", "new_text": "MSL: Maximum Segment Lifetime The reader is assumed to be familiar with TLS13. As in TLS 1.3, the HelloRetryRequest has the same format as a ServerHello message, but for convenience we use the term HelloRetryRequest throughout this document as if it were a distinct message. The reader is also assumed to be familiar with I-D.ietf-tls-dtls- connection-id as this document applies the CID functionality to DTLS"}
{"id": "q-en-dtls13-spec-c5d9f49320c2d2dc68b9aa601f1c9affd14f68b1a63d9bb9feb4eb57d2880afd", "old_text": "The sn_key is computed as follows: [sender] denotes the sending side. The Secret value to be used is described in Section 7.3 of TLS13. The encrypted sequence number is computed by XORing the leading bytes of the Mask with the sequence number. Decryption is accomplished by the same process. This procedure requires the ciphertext length be at least 16 bytes. Receivers MUST reject shorter records as if they had failed", "comments": "Thanks for this document -- badly needed and easy to read. Thanks also to Bernard Adoba for his recent tsvart review. I support his comments. COMMENTS: Sec 2. It might be useful to introduce the term \"epoch\" in the glossary, for those who read this front to back. Sec 4.2.3: \"The encrypted sequence number is computed by XORing the leading bytes of the Mask with the sequence number. Decryption is accomplished by the same process.\" The text is unclear if the XOR is applied to the expanded sequence number or merely the 1-2 octets on the wire. I presume it's the latter, but this should be clarified. Sec 4.2.3: It's implied here that the snkey rotates with the epoch. As this is different from QUIC, it's probably worth spelling out. Sec 5.1 is a bit vague about the amplification limit; why not at least RECOMMEND 3, as we've converged on this elsewhere? Sec 5.1. Reading between the lines, it's clear that the cookie can't be used as address verification across connections in the way that a NEWTOKEN token is. It would be good to spell this out for clients -- use the resumption token or whatever instead. Sec 7.2 \"As noted above, the receipt of any record responding to a given flight MUST be taken as an implicit acknowledgement for the entire flight.\" I think this should be s/entire flight/entire previous flight? 
Sec 7.2 \"Upon receipt of an ACK that leaves it with only some messages from a flight having been acknowledged an implementation SHOULD retransmit the unacknowledged messages or fragments.\" This language appears inconsistent with Figure 12, where Record 1 has not been acknowledged but is also not retransmitted. It appears there is an implied handling of empty ACKs that isn't written down anywhere in Sec 7.2 Sec 9. Should there be any flow control limits on newconnectionid? Or should receivers be free to simply drop CIDs they can't handle? It might be good to specify. Finally, a really weird one. Reading this document and references to connection ID prompted to me to think how QUIC-LB could apply to DTLS. The result is here: URL Please note the rather unfortunate third-to-last paragraph. I'm happy to take the answer that this use case doesn't matter, since I made it up today. But if it does, it would be very helpful if (1) DTLS 1.3 clients MUST include a connectionid extension in their ClientHello, even if zero length, and/or (2) this draft updated 4.1.4 of 8446 to allow the server to include connectionid in HelloRetryRequest even if the client didn't offer it. Thoughts? NITS: s/select(HandshakeType)/select(msg_type). Though with pseudocode your mileage may vary as to what's clearer. s/consitute/constitute Sec 5.7 In table 1, why include one ACK in the diagram but not the other? It's clear from the note, but the figure is a weird omission.\nAddressed in URL", "new_text": "The sn_key is computed as follows: [sender] denotes the sending side. The Secret value to be used is described in Section 7.3 of TLS13. Note that a new key is used for each epoch: because the epoch is in the clear, this does not result in ambiguity. The encrypted sequence number is computed by XORing the leading bytes of the Mask with the on-the-wire representation of the sequence number. Decryption is accomplished by the same process. 
This procedure requires the ciphertext length be at least 16 bytes. Receivers MUST reject shorter records as if they had failed"}
{"id": "q-en-dtls13-spec-c5d9f49320c2d2dc68b9aa601f1c9affd14f68b1a63d9bb9feb4eb57d2880afd", "old_text": "configured not to perform a cookie exchange. The default SHOULD be that the exchange is performed, however. In addition, the server MAY choose not to do a cookie exchange when a session is resumed or, more generically, when the DTLS handshake uses a PSK-based key exchange. Servers which process 0-RTT requests and send 0.5-RTT responses without a cookie exchange risk being used in an amplification attack if the size of outgoing messages greatly exceeds the size of those that are received. A server SHOULD limit the amount of data it sends toward a client address before it verifies that the client is able to receive data at that address. A client address is valid after a cookie exchange or handshake completion. A server MAY apply a higher limit based on heuristics, such as if the client uses the same IP address as the connection on which the cookie was created. Clients MUST be prepared to do a cookie exchange with every handshake. If a server receives a ClientHello with an invalid cookie, it MUST terminate the handshake with an \"illegal_parameter\" alert. This", "comments": "Thanks for this document -- badly needed and easy to read. Thanks also to Bernard Adoba for his recent tsvart review. I support his comments. COMMENTS: Sec 2. It might be useful to introduce the term \"epoch\" in the glossary, for those who read this front to back. Sec 4.2.3: \"The encrypted sequence number is computed by XORing the leading bytes of the Mask with the sequence number. Decryption is accomplished by the same process.\" The text is unclear if the XOR is applied to the expanded sequence number or merely the 1-2 octets on the wire. I presume it's the latter, but this should be clarified. Sec 4.2.3: It's implied here that the snkey rotates with the epoch. As this is different from QUIC, it's probably worth spelling out. 
Sec 5.1 is a bit vague about the amplification limit; why not at least RECOMMEND 3, as we've converged on this elsewhere? Sec 5.1. Reading between the lines, it's clear that the cookie can't be used as address verification across connections in the way that a NEWTOKEN token is. It would be good to spell this out for clients -- use the resumption token or whatever instead. Sec 7.2 \"As noted above, the receipt of any record responding to a given flight MUST be taken as an implicit acknowledgement for the entire flight.\" I think this should be s/entire flight/entire previous flight? Sec 7.2 \"Upon receipt of an ACK that leaves it with only some messages from a flight having been acknowledged an implementation SHOULD retransmit the unacknowledged messages or fragments.\" This language appears inconsistent with Figure 12, where Record 1 has not been acknowledged but is also not retransmitted. It appears there is an implied handling of empty ACKs that isn't written down anywhere in Sec 7.2 Sec 9. Should there be any flow control limits on newconnectionid? Or should receivers be free to simply drop CIDs they can't handle? It might be good to specify. Finally, a really weird one. Reading this document and references to connection ID prompted to me to think how QUIC-LB could apply to DTLS. The result is here: URL Please note the rather unfortunate third-to-last paragraph. I'm happy to take the answer that this use case doesn't matter, since I made it up today. But if it does, it would be very helpful if (1) DTLS 1.3 clients MUST include a connectionid extension in their ClientHello, even if zero length, and/or (2) this draft updated 4.1.4 of 8446 to allow the server to include connectionid in HelloRetryRequest even if the client didn't offer it. Thoughts? NITS: s/select(HandshakeType)/select(msg_type). Though with pseudocode your mileage may vary as to what's clearer. s/consitute/constitute Sec 5.7 In table 1, why include one ACK in the diagram but not the other? 
It's clear from the note, but the figure is a weird omission.\nAddressed in URL", "new_text": "configured not to perform a cookie exchange. The default SHOULD be that the exchange is performed, however. In addition, the server MAY choose not to do a cookie exchange when a session is resumed or, more generically, when the DTLS handshake uses a PSK-based key exchange and the IP address matches one associated with the PSK. Servers which process 0-RTT requests and send 0.5-RTT responses without a cookie exchange risk being used in an amplification attack if the size of outgoing messages greatly exceeds the size of those that are received. A server SHOULD limit the amount of data it sends toward a client address to three times the amount of data sent by the client before it verifies that the client is able to receive data at that address. A client address is valid after a cookie exchange or handshake completion. Clients MUST be prepared to do a cookie exchange with every handshake. Note that cookies are only valid for the existing handshake and cannot be stored for future handshakes. If a server receives a ClientHello with an invalid cookie, it MUST terminate the handshake with an \"illegal_parameter\" alert. This"}
{"id": "q-en-dtls13-spec-c5d9f49320c2d2dc68b9aa601f1c9affd14f68b1a63d9bb9feb4eb57d2880afd", "old_text": "where a flight is very large and the receiver is forced to elide acknowledgements for records which have already been ACKed. As noted above, the receipt of any record responding to a given flight MUST be taken as an implicit acknowledgement for the entire flight. 7.3.", "comments": "Thanks for this document -- badly needed and easy to read. Thanks also to Bernard Adoba for his recent tsvart review. I support his comments. COMMENTS: Sec 2. It might be useful to introduce the term \"epoch\" in the glossary, for those who read this front to back. Sec 4.2.3: \"The encrypted sequence number is computed by XORing the leading bytes of the Mask with the sequence number. Decryption is accomplished by the same process.\" The text is unclear if the XOR is applied to the expanded sequence number or merely the 1-2 octets on the wire. I presume it's the latter, but this should be clarified. Sec 4.2.3: It's implied here that the snkey rotates with the epoch. As this is different from QUIC, it's probably worth spelling out. Sec 5.1 is a bit vague about the amplification limit; why not at least RECOMMEND 3, as we've converged on this elsewhere? Sec 5.1. Reading between the lines, it's clear that the cookie can't be used as address verification across connections in the way that a NEWTOKEN token is. It would be good to spell this out for clients -- use the resumption token or whatever instead. Sec 7.2 \"As noted above, the receipt of any record responding to a given flight MUST be taken as an implicit acknowledgement for the entire flight.\" I think this should be s/entire flight/entire previous flight? 
Sec 7.2 \"Upon receipt of an ACK that leaves it with only some messages from a flight having been acknowledged an implementation SHOULD retransmit the unacknowledged messages or fragments.\" This language appears inconsistent with Figure 12, where Record 1 has not been acknowledged but is also not retransmitted. It appears there is an implied handling of empty ACKs that isn't written down anywhere in Sec 7.2 Sec 9. Should there be any flow control limits on newconnectionid? Or should receivers be free to simply drop CIDs they can't handle? It might be good to specify. Finally, a really weird one. Reading this document and references to connection ID prompted to me to think how QUIC-LB could apply to DTLS. The result is here: URL Please note the rather unfortunate third-to-last paragraph. I'm happy to take the answer that this use case doesn't matter, since I made it up today. But if it does, it would be very helpful if (1) DTLS 1.3 clients MUST include a connectionid extension in their ClientHello, even if zero length, and/or (2) this draft updated 4.1.4 of 8446 to allow the server to include connectionid in HelloRetryRequest even if the client didn't offer it. Thoughts? NITS: s/select(HandshakeType)/select(msg_type). Though with pseudocode your mileage may vary as to what's clearer. s/consitute/constitute Sec 5.7 In table 1, why include one ACK in the diagram but not the other? It's clear from the note, but the figure is a weird omission.\nAddressed in URL", "new_text": "where a flight is very large and the receiver is forced to elide acknowledgements for records which have already been ACKed. As noted above, the receipt of any record responding to a given flight MUST be taken as an implicit acknowledgement for the entire flight to which it is responding. 7.3."}
{"id": "q-en-dtls13-spec-c5d9f49320c2d2dc68b9aa601f1c9affd14f68b1a63d9bb9feb4eb57d2880afd", "old_text": "to \"cid_spare\", then either existing or new CID MAY be used. Endpoints SHOULD use receiver-provided CIDs in the order they were provided. Endpoints MUST NOT have more than one NewConnectionId message outstanding. Implementations which either did not negotiate the \"connection_id\" extension or which have negotiated receiving an empty CID MUST NOT", "comments": "Thanks for this document -- badly needed and easy to read. Thanks also to Bernard Adoba for his recent tsvart review. I support his comments. COMMENTS: Sec 2. It might be useful to introduce the term \"epoch\" in the glossary, for those who read this front to back. Sec 4.2.3: \"The encrypted sequence number is computed by XORing the leading bytes of the Mask with the sequence number. Decryption is accomplished by the same process.\" The text is unclear if the XOR is applied to the expanded sequence number or merely the 1-2 octets on the wire. I presume it's the latter, but this should be clarified. Sec 4.2.3: It's implied here that the snkey rotates with the epoch. As this is different from QUIC, it's probably worth spelling out. Sec 5.1 is a bit vague about the amplification limit; why not at least RECOMMEND 3, as we've converged on this elsewhere? Sec 5.1. Reading between the lines, it's clear that the cookie can't be used as address verification across connections in the way that a NEWTOKEN token is. It would be good to spell this out for clients -- use the resumption token or whatever instead. Sec 7.2 \"As noted above, the receipt of any record responding to a given flight MUST be taken as an implicit acknowledgement for the entire flight.\" I think this should be s/entire flight/entire previous flight? 
Sec 7.2 \"Upon receipt of an ACK that leaves it with only some messages from a flight having been acknowledged an implementation SHOULD retransmit the unacknowledged messages or fragments.\" This language appears inconsistent with Figure 12, where Record 1 has not been acknowledged but is also not retransmitted. It appears there is an implied handling of empty ACKs that isn't written down anywhere in Sec 7.2 Sec 9. Should there be any flow control limits on newconnectionid? Or should receivers be free to simply drop CIDs they can't handle? It might be good to specify. Finally, a really weird one. Reading this document and references to connection ID prompted to me to think how QUIC-LB could apply to DTLS. The result is here: URL Please note the rather unfortunate third-to-last paragraph. I'm happy to take the answer that this use case doesn't matter, since I made it up today. But if it does, it would be very helpful if (1) DTLS 1.3 clients MUST include a connectionid extension in their ClientHello, even if zero length, and/or (2) this draft updated 4.1.4 of 8446 to allow the server to include connectionid in HelloRetryRequest even if the client didn't offer it. Thoughts? NITS: s/select(HandshakeType)/select(msg_type). Though with pseudocode your mileage may vary as to what's clearer. s/consitute/constitute Sec 5.7 In table 1, why include one ACK in the diagram but not the other? It's clear from the note, but the figure is a weird omission.\nAddressed in URL", "new_text": "to \"cid_spare\", then either existing or new CID MAY be used. Endpoints SHOULD use receiver-provided CIDs in the order they were provided. Implementations which receive more spare CIDs than they wish to maintain MAY simply discard any extra CIDs. Endpoints MUST NOT have more than one NewConnectionId message outstanding. Implementations which either did not negotiate the \"connection_id\" extension or which have negotiated receiving an empty CID MUST NOT"}
{"id": "q-en-dtls13-spec-58a884f3919d39b01da204ca4458455ab048c6fbf38d962598183b8fd420ab38", "old_text": "When a DTLS implementation receives a handshake message fragment corresponding to the next expected handshake message sequence number, it MUST buffer it until it has the entire handshake message. DTLS implementations MUST be able to handle overlapping fragment ranges. This allows senders to retransmit handshake messages with smaller fragment sizes if the PMTU estimate changes. Senders MUST NOT change handshake message bytes upon retransmission. Receivers MAY check that retransmitted bytes are identical and SHOULD abort the handshake with an \"illegal_parameter\" alert if the value of a byte changes. Note that as with TLS, multiple handshake messages may be placed in the same DTLS record, provided that there is room and that they are", "comments": "NAME here is another try at this.\nI did not realize that the intent of your text was to allow out of order processing. IMO that's off the fairway and we shouldn't allow that any more than we let you reach into the TCP stack and pull out forward messages.\nNAME I don't think the comparison with TCP works - the handshake protocol is internal. We should not force a particular implementation unless there's good reason to. In this case, as explained on the list, there's potential merit of allowing out of order processing of very large handshake messages with known structure. NAME What's your opinion?\nAgain, this is a last minute change during Auth48, so the bar is much higher than \"potential merit\"\nThis isn't motivated by security-related reasons like , so I agree with NAME that this does not raise to the level of an AUTH48 change. I propose we land this PR and close out .\nThanks NAME NAME for considering this in the first place. As I understood NAME since this does not impact the wire-format, we can revisit once the RFC has been finalized?\nOnce the RFC is finalized, it's final. 
Changes can be requested via the , or via a new document that updates the RFC in question.\nOk, thank you. I was referring to NAME comment on the list\nYes. What I was saying is that you could just write an RFC that said \"you can process out of order\"\ncorresponding to the next expected handshake message sequence number, it MUST buffer it until it has the entire handshake message. As discussed in URL The spec mandates buffering all handshake fragments, but some implementations (e.g. GnuTLS) choose to implement simpler reassembly schemes. It has been argued by Ekr that such should be discouraged because they aren't efficient. On the other hand, as Rich Salz remarks, the choice of reassembly mechanism isn't something the peer could tell (the effects of poor reassembly performance are more retransmissions, which could also be due to poor network reliability), so it does feel like we're overspecifying things here. Weakening the MUST to a SHOULD isn't good either as it allows dropping fragments altogether. There's a tension between the desire for a concrete operational guidance, which the current text clearly provides, and a more abstract formulation along the line of \"implementations MUST use a reassembly scheme ensuring forward progress\". As suggested by Ekr, taking this off the mailing list and continuing discussion here. To be clear: This has no bearing on the protocol itself, it's about how much implementation details the spec should enforce.\nAs mentioned on the list, I don't like the in-order qualification. I agree it's debatable whether there will be cases where it makes sense to process data out of order, but (see list again) it is not inconceivable, and I see no reason to exclude this here. I would therefore prefer to drop the in-order qualification. Hi all, Both DTLS 1.2 and DTLS 1.3 mandate: Can someone explain the underlying rationale? It seems that in the context of very large key material or certificate chains (think e.g. 
PQC), gradual processing of handshake messages (where possible) is useful to reduce RAM usage. Is there a security risk in doing this? It would also be useful for stateless handling of fragmented ClientHello messages. I'm sure this was discussed before but I don't remember where and who said it, but a server implementation could peek into the initial fragment of a ClientHello, check if it contains a valid cookie, and if so, allocate state for subsequent full reassembly. That wouldn't be compliant with the above MUST, though, as far as I understand it. Thanks! Hanno", "new_text": "When a DTLS implementation receives a handshake message fragment corresponding to the next expected handshake message sequence number, it MUST process it, either by buffering it until it has the entire handshake message or by processing any in-order portions of the message. DTLS implementations MUST be able to handle overlapping fragment ranges. This allows senders to retransmit handshake messages with smaller fragment sizes if the PMTU estimate changes. Senders MUST NOT change handshake message bytes upon retransmission. Receivers MAY check that retransmitted bytes are identical and SHOULD abort the handshake with an \"illegal_parameter\" alert if the value of a byte changes. Note that as with TLS, multiple handshake messages may be placed in the same DTLS record, provided that there is room and that they are"}
{"id": "q-en-dtls13-spec-40f88461eb3a402b92b324ee6c2a564a0c50d98a42850937797e1f46d6fe00eb", "old_text": "Because the integrity check indirectly depends on a sequence number, if record N is not received, then the integrity check on record N+1 will be based on the wrong sequence number and thus will fail. DTLS solves this problem by adding explicit sequence numbers. The TLS handshake is a lock-step cryptographic handshake. Messages must be transmitted and received in a defined order; any", "comments": "The draft states in three places that DTLS use explicit sequence numbers, while this was true for DTLS 1.2, the sequence number in the DTLSCipherstructure is a mixture of implicit and explict. Maybe \"Partly Explicit\"?\nPlease provide me with a PR to add your name to acknowledgements.\nHi, I don\u2019t know what PR is an abbreviation for but I guess it is not clear from my username who I am. I should probably update it, I did not intend to be anonymous :) John Mattsson, Ericsson, EMAIL", "new_text": "Because the integrity check indirectly depends on a sequence number, if record N is not received, then the integrity check on record N+1 will be based on the wrong sequence number and thus will fail. DTLS solves this problem by adding sequence numbers. The TLS handshake is a lock-step cryptographic handshake. Messages must be transmitted and received in a defined order; any"}
{"id": "q-en-dtls13-spec-40f88461eb3a402b92b324ee6c2a564a0c50d98a42850937797e1f46d6fe00eb", "old_text": "The DTLSCiphertext structure omits the superfluous version number and type fields. DTLS adds an explicit epoch and sequence number to the TLS record header. This sequence number allows the recipient to correctly verify the DTLS MAC. However, the number of bits used for the epoch and sequence number fields in the DTLSCiphertext structure have been reduced. The DTLSCiphertext structure has a variable length header.", "comments": "The draft states in three places that DTLS use explicit sequence numbers, while this was true for DTLS 1.2, the sequence number in the DTLSCipherstructure is a mixture of implicit and explict. Maybe \"Partly Explicit\"?\nPlease provide me with a PR to add your name to acknowledgements.\nHi, I don\u2019t know what PR is an abbreviation for but I guess it is not clear from my username who I am. I should probably update it, I did not intend to be anonymous :) John Mattsson, Ericsson, EMAIL", "new_text": "The DTLSCiphertext structure omits the superfluous version number and type fields. DTLS adds an epoch and sequence number to the TLS record header. This sequence number allows the recipient to correctly verify the DTLS MAC. However, the number of bits used for the epoch and sequence number fields in the DTLSCiphertext structure have been reduced. The DTLSCiphertext structure has a variable length header."}
{"id": "q-en-dtls13-spec-40f88461eb3a402b92b324ee6c2a564a0c50d98a42850937797e1f46d6fe00eb", "old_text": "4.2.1. DTLS uses an explicit sequence number, rather than an implicit one, carried in the sequence_number field of the record. Sequence numbers are maintained separately for each epoch, with each sequence_number initially being 0 for each epoch. The epoch number is initially zero and is incremented each time keying material changes and a sender aims to rekey. More details are", "comments": "The draft states in three places that DTLS use explicit sequence numbers, while this was true for DTLS 1.2, the sequence number in the DTLSCipherstructure is a mixture of implicit and explict. Maybe \"Partly Explicit\"?\nPlease provide me with a PR to add your name to acknowledgements.\nHi, I don\u2019t know what PR is an abbreviation for but I guess it is not clear from my username who I am. I should probably update it, I did not intend to be anonymous :) John Mattsson, Ericsson, EMAIL", "new_text": "4.2.1. DTLS uses an explicit or partly explicit sequence number, rather than an implicit one, carried in the sequence_number field of the record. Sequence numbers are maintained separately for each epoch, with each sequence_number initially being 0 for each epoch. The epoch number is initially zero and is incremented each time keying material changes and a sender aims to rekey. More details are"}
{"id": "q-en-eap-aka-pfs-06fd8aa05a9b29f976b2dbe155f18bb3203295f5784507c7091d345828036af0", "old_text": "7.4. The Initiator and the Responder must make sure that unprotected data and metadata do not reveal any sensitive information. In particular, this applies to AT_KDF, AT_KDF_FS, AT_PUB_ECDHE, and AT_KDF_INPUT. AT_KDF, AT_KDF_FS, and AT_PUB_ECDHE reveals the used cryptographic algorithms, if these depend on the peer identity they leak information about the peer. AT_KDF_INPUT reveals the network name. An attacker observing network traffic may use such information for traffic flow analysis or to track an endpoint. 7.5.", "comments": "Suggested fix for\nKarl: -- In Section 7.4 it says: \"The Initiator and the Responder must make sure that unprotected data and metadata do not reveal any sensitive information.\" Should it be put in a less requirement-like form? For example: \"unprotected data and metadata can reveal sensitive information and need to be selected with care.\"?\nI agree that the metadata text should not be requirement form as much as it is now \u2013 there\u2019s little that implementations can actually do about this, e.g., about the network name. Will make a PR.", "new_text": "7.4. Unprotected data and metadata can reveal sensitive information and need to be selected with care. In particular, this applies to AT_KDF, AT_KDF_FS, AT_PUB_ECDHE, and AT_KDF_INPUT. AT_KDF, AT_KDF_FS, and AT_PUB_ECDHE reveal the cryptographic algorithms used; if these depend on the peer identity, they leak information about the peer. AT_KDF_INPUT reveals the network name, although that is done on purpose to bind the authentication to a particular context. An attacker observing network traffic may use the above types of information for traffic flow analysis or to track an endpoint. 7.5."}
{"id": "q-en-edhoc-77bffc2add05b825f3a3b8361678bbc5d5433754df5354fcf33b50e990ab3dd6", "old_text": "RECOMMENDED to not trust a single source of randomness and to not put unaugmented random numbers on the wire. If ECDSA is supported, \"deterministic ECDSA\" as specified in RFC6979 MAY be used. Pure deterministic elliptic-curve signatures such as deterministic ECDSA and EdDSA have gained popularity over randomized", "comments": "I think this can be merged.\nFixed the problem with ':' in refs. Now merging.\nBased on Stephens review\nReferring to this in :\nFrom the PR : \"As explained by Krawczyk, any attack mitigated by the NIST requiment would mean that the KDF is fully broken and would have to be replaced anyway.\" I didn't understand \"any attack mitigated by the NIST requirement\", and how mitigation could imply that something is broken. As far as I understand, the statement is something like \"if there is a problem deriving secret and non-secret material from the same KDF, then there is a problem with the KDF\" Perhaps \"an attack based on the violation of the NIST requirement is an attack on the KDF itself\"?\nYes, it was likely meants to say \"if any attack is mitigate\" Let's reformulate according to your suggestion\nI made a commit to correct spelling and grammar. I don't think we can give applications more guidance here. How to gererate e.g. private key in certificates are way outside of EDHOC, and it would anyway be hard to say something general as deployments might look very different.I did not want to say anything about randomly generated conenction ID as that is not how connection IDs is meant to be generated. They are intended to be as short as possible. An implentation is allowed to use long random bytestrings but is not recommended in general.\nPR merged", "new_text": "RECOMMENDED to not trust a single source of randomness and to not put unaugmented random numbers on the wire. 
Implementations might consider deriving secret and non-secret randomness from different PRNG/PRF/KDF instances to limit the damage if the PRNG/PRF/KDF turns out to be fundamentally broken. NIST generally forbids deriving secret and non-secret randomness from the same KDF instance, but this decision has been criticized by Krawczyk HKDFpaper and doing so is common practice. In addition to IVs, other examples are the challenge in EAP-TTLS, the RAND in 3GPP AKAs, and the Session-Id in EAP-TLS 1.3. Note that part of KEYSTREAM_2 is also non-secret randomness as it is known or predictable to an attacker. As explained by Krawczyk, if any attack were mitigated by the NIST requirement, it would mean that the KDF is fully broken and would have to be replaced anyway. If ECDSA is supported, \"deterministic ECDSA\" as specified in RFC6979 MAY be used. Pure deterministic elliptic-curve signatures such as deterministic ECDSA and EdDSA have gained popularity over randomized"}
{"id": "q-en-edhoc-3c8deb59f63205fa815ed4f1e518eab11b6bc555f08bdbc9c320d4af51760644", "old_text": "payload = MAC_2 CIPHERTEXT_2 is calculated by using the Expand function as a binary additive stream cipher. plaintext = ( ? PAD, ID_CRED_R / bstr / -24..23, Signature_or_MAC_2, ? EAD_2 ) If ID_CRED_R contains a single 'kid' parameter, i.e.,", "comments": "There has been a lot of suggestions and discussion regarding the key derivation latelely. Below is an attempt to summarize all the current suggstions that currently seems relatively likely to be done. This is not an issue in itself but I think it is good to have somewhere to discucess the complete changes to the key derivation. Suggested to permute the order of element in the TH2 sequence to be more resistant to any future collision attacks in hash functions. Exact order unclear but suggested to put the random and unpredictable GY first. Suggested to use PLAINTEXT instead of CIPHERTEXT in transcript hashes to simplify formal verification. Endpoints agree on that they received the same plaintext rather then the same cipertext.Suggested to use a salt derived from PRK in Extract to avoid usign the PRK keys directly in both Extract and ExpandSuggested to use unsigned integer instead of text string for label to save memory.Suggestion to add a final key derivation PRKout and use that in the exporter. Note that the exporter would also have uint labelsSuggestion to base Key-Update on Expand instead of extract. The label would need IANA registration. hashlength is a new variable that the implementation needs to keep track of. Should considered whether using Exporter for KeyUpdate has any disadvantages. Suggested that more info on how an application protocol is supposed to use KeyUpdate and Exporter together is needed.The EDHOC internal key derivations (i.e., not using the Exporter) would then be something like below. The exact labels should be discussed. 
Note that that only some of the derivations actually need a label to avoid collisionIncluding KeyUpdate, the IANA registered exporter labels could be:\nLooks good. I had different naming in mind of the salts: PRK3e2m = Extract( SALT3e2m, GRX ) PRK3m = Extract( SALT3m, GIY ) where SALT3e2m = EDHOC-KDF( PRK2e, TH2, 0, h'', hashlength ) SALT3m = EDHOC-KDF( PRK3e2m, TH3, 0, h'', hashlength ) With this the salts are named by its use rather than from what key they are derived.\nThe labels was just to illustrate the fact that they don't need to be separate. I think it is still better to just use 0-7. That makes it very easy for the reader to see that there are no collisions.\nNAME Was this a comment on my proposed names of salts, or a separate comment? (I agree labels 0-7 is good.)\nIt was a separate comments. Your suggestion for salt naming is fine with me. Some other non-technical naming things that a hidden in my summary above I think keylength should use different terms in the derivation of K4 and OSCORE Master Secret. The current naming gives the idea that they are the same. I think the contexts in the MAC2 and MAC3 derivations should be given names. Should consider collection all the EDHOC-KDF derivations in a single place.\nMight want to derive: K4 = EDHOC-KDF( PRK3m, TH4, 0, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 0, h'', ivlength ) to avoid that the applicaiton gets K4 and IV4. 
Another alternative is to use K4 = EDHOC-Exporter( 7, h'', keylength ) IV4 = EDHOC-Exporter( 8, h'', ivlength ) and forbid the application to use the labels for K4 and IV4\nEditorial; should be: K4 = EDHOC-KDF( PRK3m, TH4, 1, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 2, h'', ivlength ) or, if we want all KDF labels distinct: K4 = EDHOC-KDF( PRK3m, TH4, 8, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 9, h'', ivlength ) Alternatively, application access to Exporter only allows uint with: K4 = EDHOC-Exporter( -1, h'', keylength ) IV4 = EDHOC-Exporter( -2, h'', ivlength )\nTrying to compile the suggestions. These have a PR: Suggested to permute the order of element in the TH2 sequence \u2026 Suggested to use PLAINTEXT instead of CIPHERTEXT in transcript hashes \u2026 Suggestion to add a final key derivation PRKout and use that in the exporter The following needs to be addressed: Suggested to use a salt derived from PRK in Extract define salt derive salt Suggested to use unsigned integer instead of text string for label to save memory suggestion to base Key-Update on Expand consider whether using Exporter for KeyUpdate has any disadvantages. more info on how an application protocol is supposed to use KeyUpdate and Exporter together distinguish keylength in K3/K_4 and OSCORE Master Secret define contexts explicitly table of KDF derivations\nTrying to put things together. Let's call this variant A, part 1. where and (edit: depending on method) New salts to avoid double use of PRK, named by its use. Same construction for K4/IV4 as K3/IV3, motivating the renaming of 3m in as 4e3m. The label in the KDF, third argument, is an uint, all distinct for simplicity.\nVariant A, part 2: Distinct \"transcript hash\" in KeyUpdate and Exporter. Exporter does not give access to EDHOC internal keys. OSCORE key length separate from key_length. Edit: Not clear that 'context' is needed in the EDHOC-Exporter. Also, to be strict in labelling we could continue enumeration. 
Here is variant B, part 2 (B.1 = A.1):\nVariant C.2 (C.1 = A.1): (edited:) Same as B.2 but with different labels. With variant C we have essentially: the internal KDF labels with current list: 0, ..., 9; and the IANA EDHOC Exporter Registry with current list: 0, 1.\nNAME NAME NAME and all: Any implementation considerations or other comments on variant C above? We'll start with the PR(s) soon.\nThe last posts here are confusing.... I don't agree. These are separate. We should definitly not try to keep them separate. I.e. it should be Master Secret = EDHOC-Exporter( 0, h'', oscorekeylength ) Master Salt = EDHOC-Exporter( 1, h'', oscoresaltlength )\nThe above looks good to me with the following not so interesting options. order of the 0-9 labels use 10 or 0 label for Key-Update order of the h'00' and h'01' \"transcripts\"\nI think we should maybe merge the \"context\" and \"transcript hash\" inputs. This simplifies things and avoids using h'00' and h''01' as \"transcript hashes.\nAlternatively\nThe latest version of the summary URL looks good.\nOverview of main changes needed: Merge Rename PRK3m as PRK4e3m Section 4.1 update PRK definitions move section PRKout to 4.2 Section 4.2 Update EDHOC-KDF and info definition Update derivations Include derivations of salts and K4 / IV4 List EDHOC-KDF derivations with labels 0-9 Section 4.3 Update EDHOC-Exporter (use label 10 instead of 11) Remove derivation of K4 / IV4 Section 4.4 Update EDHOC-KeyUpdate (use label 11 instead of 10) Section 5.3.2 Define context2 Section 5.4.2 Define context3 Section 9.1 Update EDHOC Exporter labels Appendix A.1 Update Master Secret/Salt derivation\nAdditional updates: As noted in previous comment, a proposal to interchange labels 10 and 11 to enumerate labels in the order they appear in the specification. (Other proposal for assignment of integer labels is welcome.) Should we align / adapt definition of contextx and externalaad? 
Currently (for message2 and similar for message3): externalaad = > context2 = > Edit: Seems to be favourable to at least align this blobs, so changed context2 to > and similar for context_3, in 5464707a\nPRKexporter is a key which we only introduce for simple distinction between different KDF applications. Perhaps an overkill? May open up for unnecessary discussion about properties of that key. Here is another variant: Edit: As commented before, Marco thinks a separate PRKExporter is preferred; less error prone etc.\nMost old comments are now addressed, and there are quite a few changes, so it is a good time to merge PR . Remaining points: derivation of K4/ IV4 and text about key confirmation. Not clear if there is any difference in practice between K4 /IV4 being derived from PRK4e3m or PRKout, but the current text about explicit key confirmation is not valid in the former case. May alternatively tune down the key confirmation terminology. Suggested to make clearer distinction between key derivation for EDHOC processing, resulting in PRKout, and key derivation for application, i.e. making use of PRKout in EDHOC-Exporter and EDHOC-KeyUpdate. More guidance on the latter is also proposed.\nTried to address the remaining points in the previous comment with the latest two commits: 5d675ac and 95e35bb. More guidance on KeyUpdate may still be needed. Other points?\nNAME What more guidance is needed on KeyUpdate?\nWas a request from someone. Might be hard to understand how to use it. E.g. that you should alternate key update and exporter Exporter Key Update Exporter Key Update Exporter ...\nAnother guidance needed is how to manage overwrite of PRKout pending confirmation of the derived key from the other endpoint. When the Initiator is updating PRKout it needs both old and new derived keys to be able to communicate securely in lock-step the change to the new key.\nPerhaps good to remark that the entire application contexts (including e.g. 
sequence numbers) need to be maintained before a transition is made.\nsee instead. The current discussion has very little to do with", "new_text": "payload = MAC_2 CIPHERTEXT_2 is calculated by using the Expand function as a binary additive stream cipher over the following plaintext: PLAINTEXT_2 = ( ? PAD, ID_CRED_R / bstr / -24..23, Signature_or_MAC_2, ? EAD_2 ) If ID_CRED_R contains a single 'kid' parameter, i.e.,"}
{"id": "q-en-edhoc-3c8deb59f63205fa815ed4f1e518eab11b6bc555f08bdbc9c320d4af51760644", "old_text": "Compute KEYSTREAM_2 = EDHOC-KDF( PRK_2e, TH_2, \"KEYSTREAM_2\", h'', plaintext_length ), where plaintext_length is the length of the plaintext. CIPHERTEXT_2 = plaintext XOR KEYSTREAM_2 Encode message_2 as a sequence of CBOR encoded data items as specified in asym-msg2-form.", "comments": "There has been a lot of suggestions and discussion regarding the key derivation latelely. Below is an attempt to summarize all the current suggstions that currently seems relatively likely to be done. This is not an issue in itself but I think it is good to have somewhere to discucess the complete changes to the key derivation. Suggested to permute the order of element in the TH2 sequence to be more resistant to any future collision attacks in hash functions. Exact order unclear but suggested to put the random and unpredictable GY first. Suggested to use PLAINTEXT instead of CIPHERTEXT in transcript hashes to simplify formal verification. Endpoints agree on that they received the same plaintext rather then the same cipertext.Suggested to use a salt derived from PRK in Extract to avoid usign the PRK keys directly in both Extract and ExpandSuggested to use unsigned integer instead of text string for label to save memory.Suggestion to add a final key derivation PRKout and use that in the exporter. Note that the exporter would also have uint labelsSuggestion to base Key-Update on Expand instead of extract. The label would need IANA registration. hashlength is a new variable that the implementation needs to keep track of. Should considered whether using Exporter for KeyUpdate has any disadvantages. Suggested that more info on how an application protocol is supposed to use KeyUpdate and Exporter together is needed.The EDHOC internal key derivations (i.e., not using the Exporter) would then be something like below. The exact labels should be discussed. 
Note that that only some of the derivations actually need a label to avoid collisionIncluding KeyUpdate, the IANA registered exporter labels could be:\nLooks good. I had different naming in mind of the salts: PRK3e2m = Extract( SALT3e2m, GRX ) PRK3m = Extract( SALT3m, GIY ) where SALT3e2m = EDHOC-KDF( PRK2e, TH2, 0, h'', hashlength ) SALT3m = EDHOC-KDF( PRK3e2m, TH3, 0, h'', hashlength ) With this the salts are named by its use rather than from what key they are derived.\nThe labels was just to illustrate the fact that they don't need to be separate. I think it is still better to just use 0-7. That makes it very easy for the reader to see that there are no collisions.\nNAME Was this a comment on my proposed names of salts, or a separate comment? (I agree labels 0-7 is good.)\nIt was a separate comments. Your suggestion for salt naming is fine with me. Some other non-technical naming things that a hidden in my summary above I think keylength should use different terms in the derivation of K4 and OSCORE Master Secret. The current naming gives the idea that they are the same. I think the contexts in the MAC2 and MAC3 derivations should be given names. Should consider collection all the EDHOC-KDF derivations in a single place.\nMight want to derive: K4 = EDHOC-KDF( PRK3m, TH4, 0, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 0, h'', ivlength ) to avoid that the applicaiton gets K4 and IV4. 
Another alternative is to use K4 = EDHOC-Exporter( 7, h'', keylength ) IV4 = EDHOC-Exporter( 8, h'', ivlength ) and forbid the application to use the labels for K4 and IV4\nEditorial; should be: K4 = EDHOC-KDF( PRK3m, TH4, 1, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 2, h'', ivlength ) or, if we want all KDF labels distinct: K4 = EDHOC-KDF( PRK3m, TH4, 8, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 9, h'', ivlength ) Alternatively, application access to Exporter only allows uint with: K4 = EDHOC-Exporter( -1, h'', keylength ) IV4 = EDHOC-Exporter( -2, h'', ivlength )\nTrying to compile the suggestions. These have a PR: Suggested to permute the order of element in the TH2 sequence \u2026 Suggested to use PLAINTEXT instead of CIPHERTEXT in transcript hashes \u2026 Suggestion to add a final key derivation PRKout and use that in the exporter The following needs to be addressed: Suggested to use a salt derived from PRK in Extract define salt derive salt Suggested to use unsigned integer instead of text string for label to save memory suggestion to base Key-Update on Expand consider whether using Exporter for KeyUpdate has any disadvantages. more info on how an application protocol is supposed to use KeyUpdate and Exporter together distinguish keylength in K3/K_4 and OSCORE Master Secret define contexts explicitly table of KDF derivations\nTrying to put things together. Let's call this variant A, part 1. where and (edit: depending on method) New salts to avoid double use of PRK, named by its use. Same construction for K4/IV4 as K3/IV3, motivating the renaming of 3m in as 4e3m. The label in the KDF, third argument, is an uint, all distinct for simplicity.\nVariant A, part 2: Distinct \"transcript hash\" in KeyUpdate and Exporter. Exporter does not give access to EDHOC internal keys. OSCORE key length separate from key_length. Edit: Not clear that 'context' is needed in the EDHOC-Exporter. Also, to be strict in labelling we could continue enumeration. 
Here is variant B, part 2 (B.1 = A.1):\nVariant C.2 (C.1 = A.1): (edited:) Same as B.2 but with different labels. With variant C we have essentially: the internal KDF labels with current list: 0, ..., 9; and the IANA EDHOC Exporter Registry with current list: 0, 1.\nNAME NAME NAME and all: Any implementation considerations or other comments on variant C above? We'll start with the PR(s) soon.\nThe last posts here are confusing... I don't agree. These are separate. We should definitely not try to keep them separate. I.e. it should be Master Secret = EDHOC-Exporter( 0, h'', oscorekeylength ) Master Salt = EDHOC-Exporter( 1, h'', oscoresaltlength )\nThe above looks good to me with the following not so interesting options. order of the 0-9 labels use 10 or 0 label for Key-Update order of the h'00' and h'01' "transcripts"\nI think we should maybe merge the "context" and "transcript hash" inputs. This simplifies things and avoids using h'00' and h'01' as "transcript hashes".\nAlternatively\nThe latest version of the summary URL looks good.\nOverview of main changes needed: Merge Rename PRK3m as PRK4e3m Section 4.1 update PRK definitions move section PRKout to 4.2 Section 4.2 Update EDHOC-KDF and info definition Update derivations Include derivations of salts and K4 / IV4 List EDHOC-KDF derivations with labels 0-9 Section 4.3 Update EDHOC-Exporter (use label 10 instead of 11) Remove derivation of K4 / IV4 Section 4.4 Update EDHOC-KeyUpdate (use label 11 instead of 10) Section 5.3.2 Define context2 Section 5.4.2 Define context3 Section 9.1 Update EDHOC Exporter labels Appendix A.1 Update Master Secret/Salt derivation\nAdditional updates: As noted in a previous comment, a proposal to interchange labels 10 and 11 to enumerate labels in the order they appear in the specification. (Other proposals for assignment of integer labels are welcome.) Should we align / adapt the definitions of contextx and externalaad? 
Currently (for message2 and similar for message3): externalaad = > context2 = > Edit: Seems to be favourable to at least align these blobs, so changed context2 to > and similar for context_3, in 5464707a\nPRKexporter is a key which we only introduce for simple distinction between different KDF applications. Perhaps overkill? May open up for unnecessary discussion about properties of that key. Here is another variant: Edit: As commented before, Marco thinks a separate PRKExporter is preferred; less error prone etc.\nMost old comments are now addressed, and there are quite a few changes, so it is a good time to merge PR . Remaining points: derivation of K4/IV4 and text about key confirmation. Not clear if there is any difference in practice between K4/IV4 being derived from PRK4e3m or PRKout, but the current text about explicit key confirmation is not valid in the former case. May alternatively tune down the key confirmation terminology. Suggested to make a clearer distinction between key derivation for EDHOC processing, resulting in PRKout, and key derivation for the application, i.e. making use of PRKout in EDHOC-Exporter and EDHOC-KeyUpdate. More guidance on the latter is also proposed.\nTried to address the remaining points in the previous comment with the latest two commits: 5d675ac and 95e35bb. More guidance on KeyUpdate may still be needed. Other points?\nNAME What more guidance is needed on KeyUpdate?\nWas a request from someone. Might be hard to understand how to use it. E.g. that you should alternate key update and exporter Exporter Key Update Exporter Key Update Exporter ...\nMore guidance is also needed on how to manage overwrite of PRKout pending confirmation of the derived key from the other endpoint. When the Initiator is updating PRKout it needs both old and new derived keys to be able to communicate securely in lock-step the change to the new key.\nPerhaps good to remark that the entire application contexts (including e.g. 
sequence numbers) need to be maintained before a transition is made.\nsee instead. The current discussion has very little to do with", "new_text": "Compute KEYSTREAM_2 = EDHOC-KDF( PRK_2e, TH_2, \"KEYSTREAM_2\", h'', plaintext_length ), where plaintext_length is the length of PLAINTEXT_2. CIPHERTEXT_2 = PLAINTEXT_2 XOR KEYSTREAM_2 Encode message_2 as a sequence of CBOR encoded data items as specified in asym-msg2-form."}
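The new_text above turns message_2 encryption into a keystream XOR. A minimal sketch of that step, assuming HKDF-Expand from RFC 5869 with SHA-256 as the underlying Expand, and collapsing the draft's CBOR-encoded info parameter to plain byte concatenation (function names are illustrative):

```python
import hashlib
import hmac

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # RFC 5869 HKDF-Expand with SHA-256: T(i) = HMAC(PRK, T(i-1) | info | i)
    okm, block = b"", b""
    for i in range((length + 31) // 32):
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

def xor_cipher(plaintext_2: bytes, prk_2e: bytes, th_2: bytes) -> bytes:
    # KEYSTREAM_2 = EDHOC-KDF(PRK_2e, TH_2, "KEYSTREAM_2", h'', plaintext_length)
    # (the real info is a CBOR structure; simplified to concatenation here)
    keystream = hkdf_expand(prk_2e, th_2 + b"KEYSTREAM_2", len(plaintext_2))
    # CIPHERTEXT_2 = PLAINTEXT_2 XOR KEYSTREAM_2
    return bytes(p ^ k for p, k in zip(plaintext_2, keystream))
```

Since XOR with the same keystream is an involution, the Responder recovers PLAINTEXT_2 by applying the same function to CIPHERTEXT_2.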
{"id": "q-en-edhoc-3c8deb59f63205fa815ed4f1e518eab11b6bc555f08bdbc9c320d4af51760644", "old_text": "The Initiator SHALL compose message_3 as follows: Compute the transcript hash TH_3 = H(TH_2, CIPHERTEXT_2) where H() is the EDHOC hash algorithm of the selected cipher suite. The transcript hash TH_3 is a CBOR encoded bstr and the input to the hash function is a CBOR Sequence. Note that H(TH_2, CIPHERTEXT_2) can be computed and cached already in the processing of message_2. Compute MAC_3 = EDHOC-KDF( PRK_4x3m, TH_3, \"MAC_3\", << ID_CRED_I,", "comments": "", "new_text": "The Initiator SHALL compose message_3 as follows: Compute the transcript hash TH_3 = H(TH_2, PLAINTEXT_2) where H() is the EDHOC hash algorithm of the selected cipher suite. The transcript hash TH_3 is a CBOR encoded bstr and the input to the hash function is a CBOR Sequence. Note that H(TH_2, PLAINTEXT_2) can be computed and cached already in the processing of message_2. Compute MAC_3 = EDHOC-KDF( PRK_4x3m, TH_3, \"MAC_3\", << ID_CRED_I,"}
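The record ending here changes the transcript hash input from CIPHERTEXT_2 to PLAINTEXT_2. A sketch of that computation, assuming SHA-256 as the EDHOC hash algorithm and a minimal CBOR byte-string encoder (helper names are hypothetical; PLAINTEXT_2 is itself a CBOR sequence, so its bytes are appended as-is):

```python
import hashlib

def cbor_bstr(b: bytes) -> bytes:
    # Minimal CBOR byte-string encoder, enough for hash-sized inputs (< 256 bytes)
    if len(b) < 24:
        return bytes([0x40 + len(b)]) + b
    assert len(b) < 256
    return b"\x58" + bytes([len(b)]) + b

def transcript_hash_3(th_2: bytes, plaintext_2: bytes) -> bytes:
    # TH_3 = H( TH_2, PLAINTEXT_2 ): TH_2 encoded as a CBOR bstr, followed by
    # the PLAINTEXT_2 bytes, hashed as one CBOR Sequence. As the note says,
    # this can already be computed and cached while processing message_2.
    return hashlib.sha256(cbor_bstr(th_2) + plaintext_2).digest()
```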
{"id": "q-en-edhoc-3c8deb59f63205fa815ed4f1e518eab11b6bc555f08bdbc9c320d4af51760644", "old_text": "Compute a COSE_Encrypt0 object as defined in Sections 5.2 and 5.3 of I-D.ietf-cose-rfc8152bis-struct, with the EDHOC AEAD algorithm of the selected cipher suite, using the encryption key K_3, the initialization vector IV_3, the plaintext P, and the following parameters as input (see COSE): protected = h''", "comments": "", "new_text": "Compute a COSE_Encrypt0 object as defined in Sections 5.2 and 5.3 of I-D.ietf-cose-rfc8152bis-struct, with the EDHOC AEAD algorithm of the selected cipher suite, using the encryption key K_3, the initialization vector IV_3, the plaintext PLAINTEXT_3, and the following parameters as input (see COSE): protected = h''"}
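The COSE_Encrypt0 in the record ending here uses an empty protected header. A sketch of serializing the COSE Enc_structure that becomes the AEAD's additional authenticated data, assuming TH_3 as the external_aad and the minimal CBOR helpers below (illustrative, not a full COSE implementation):

```python
def cbor_tstr(s: str) -> bytes:
    # Minimal CBOR text-string encoder for short strings (< 24 bytes)
    e = s.encode()
    assert len(e) < 24
    return bytes([0x60 + len(e)]) + e

def cbor_bstr(b: bytes) -> bytes:
    # Minimal CBOR byte-string encoder (< 256 bytes)
    if len(b) < 24:
        return bytes([0x40 + len(b)]) + b
    assert len(b) < 256
    return b"\x58" + bytes([len(b)]) + b

def enc_structure(th_3: bytes) -> bytes:
    # Enc_structure = [ "Encrypt0", protected, external_aad ]
    # with protected = h'' and external_aad = TH_3 (an assumption for
    # this sketch); the serialization is fed to the AEAD as its AAD
    return bytes([0x83]) + cbor_tstr("Encrypt0") + cbor_bstr(b"") + cbor_bstr(th_3)
```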
{"id": "q-en-edhoc-3c8deb59f63205fa815ed4f1e518eab11b6bc555f08bdbc9c320d4af51760644", "old_text": "iv_length - length of the initialization vector of the EDHOC AEAD algorithm P = ( ? PAD, ID_CRED_I / bstr / -24..23, Signature_or_MAC_3, ? EAD_3 ) If ID_CRED_I contains a single 'kid' parameter, i.e., ID_CRED_I = { 4 : kid_I }, then only the byte string kid_I", "comments": "There has been a lot of suggestions and discussion regarding the key derivation latelely. Below is an attempt to summarize all the current suggstions that currently seems relatively likely to be done. This is not an issue in itself but I think it is good to have somewhere to discucess the complete changes to the key derivation. Suggested to permute the order of element in the TH2 sequence to be more resistant to any future collision attacks in hash functions. Exact order unclear but suggested to put the random and unpredictable GY first. Suggested to use PLAINTEXT instead of CIPHERTEXT in transcript hashes to simplify formal verification. Endpoints agree on that they received the same plaintext rather then the same cipertext.Suggested to use a salt derived from PRK in Extract to avoid usign the PRK keys directly in both Extract and ExpandSuggested to use unsigned integer instead of text string for label to save memory.Suggestion to add a final key derivation PRKout and use that in the exporter. Note that the exporter would also have uint labelsSuggestion to base Key-Update on Expand instead of extract. The label would need IANA registration. hashlength is a new variable that the implementation needs to keep track of. Should considered whether using Exporter for KeyUpdate has any disadvantages. Suggested that more info on how an application protocol is supposed to use KeyUpdate and Exporter together is needed.The EDHOC internal key derivations (i.e., not using the Exporter) would then be something like below. The exact labels should be discussed. 
Note that that only some of the derivations actually need a label to avoid collisionIncluding KeyUpdate, the IANA registered exporter labels could be:\nLooks good. I had different naming in mind of the salts: PRK3e2m = Extract( SALT3e2m, GRX ) PRK3m = Extract( SALT3m, GIY ) where SALT3e2m = EDHOC-KDF( PRK2e, TH2, 0, h'', hashlength ) SALT3m = EDHOC-KDF( PRK3e2m, TH3, 0, h'', hashlength ) With this the salts are named by its use rather than from what key they are derived.\nThe labels was just to illustrate the fact that they don't need to be separate. I think it is still better to just use 0-7. That makes it very easy for the reader to see that there are no collisions.\nNAME Was this a comment on my proposed names of salts, or a separate comment? (I agree labels 0-7 is good.)\nIt was a separate comments. Your suggestion for salt naming is fine with me. Some other non-technical naming things that a hidden in my summary above I think keylength should use different terms in the derivation of K4 and OSCORE Master Secret. The current naming gives the idea that they are the same. I think the contexts in the MAC2 and MAC3 derivations should be given names. Should consider collection all the EDHOC-KDF derivations in a single place.\nMight want to derive: K4 = EDHOC-KDF( PRK3m, TH4, 0, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 0, h'', ivlength ) to avoid that the applicaiton gets K4 and IV4. 
Another alternative is to use K4 = EDHOC-Exporter( 7, h'', keylength ) IV4 = EDHOC-Exporter( 8, h'', ivlength ) and forbid the application to use the labels for K4 and IV4\nEditorial; should be: K4 = EDHOC-KDF( PRK3m, TH4, 1, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 2, h'', ivlength ) or, if we want all KDF labels distinct: K4 = EDHOC-KDF( PRK3m, TH4, 8, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 9, h'', ivlength ) Alternatively, application access to Exporter only allows uint with: K4 = EDHOC-Exporter( -1, h'', keylength ) IV4 = EDHOC-Exporter( -2, h'', ivlength )\nTrying to compile the suggestions. These have a PR: Suggested to permute the order of element in the TH2 sequence \u2026 Suggested to use PLAINTEXT instead of CIPHERTEXT in transcript hashes \u2026 Suggestion to add a final key derivation PRKout and use that in the exporter The following needs to be addressed: Suggested to use a salt derived from PRK in Extract define salt derive salt Suggested to use unsigned integer instead of text string for label to save memory suggestion to base Key-Update on Expand consider whether using Exporter for KeyUpdate has any disadvantages. more info on how an application protocol is supposed to use KeyUpdate and Exporter together distinguish keylength in K3/K_4 and OSCORE Master Secret define contexts explicitly table of KDF derivations\nTrying to put things together. Let's call this variant A, part 1. where and (edit: depending on method) New salts to avoid double use of PRK, named by its use. Same construction for K4/IV4 as K3/IV3, motivating the renaming of 3m in as 4e3m. The label in the KDF, third argument, is an uint, all distinct for simplicity.\nVariant A, part 2: Distinct \"transcript hash\" in KeyUpdate and Exporter. Exporter does not give access to EDHOC internal keys. OSCORE key length separate from key_length. Edit: Not clear that 'context' is needed in the EDHOC-Exporter. Also, to be strict in labelling we could continue enumeration. 
Here is variant B, part 2 (B.1 = A.1):\nVariant C.2 (C.1 = A.1): (edited:) Same as B.2 but with different labels. With variant C we have essentially: the internal KDF labels with current list: 0, ..., 9; and the IANA EDHOC Exporter Registry with current list: 0, 1.\nNAME NAME NAME and all: Any implementation considerations or other comments on variant C above? We'll start with the PR(s) soon.\nThe last posts here are confusing.... I don't agree. These are separate. We should definitly not try to keep them separate. I.e. it should be Master Secret = EDHOC-Exporter( 0, h'', oscorekeylength ) Master Salt = EDHOC-Exporter( 1, h'', oscoresaltlength )\nThe above looks good to me with the following not so interesting options. order of the 0-9 labels use 10 or 0 label for Key-Update order of the h'00' and h'01' \"transcripts\"\nI think we should maybe merge the \"context\" and \"transcript hash\" inputs. This simplifies things and avoids using h'00' and h''01' as \"transcript hashes.\nAlternatively\nThe latest version of the summary URL looks good.\nOverview of main changes needed: Merge Rename PRK3m as PRK4e3m Section 4.1 update PRK definitions move section PRKout to 4.2 Section 4.2 Update EDHOC-KDF and info definition Update derivations Include derivations of salts and K4 / IV4 List EDHOC-KDF derivations with labels 0-9 Section 4.3 Update EDHOC-Exporter (use label 10 instead of 11) Remove derivation of K4 / IV4 Section 4.4 Update EDHOC-KeyUpdate (use label 11 instead of 10) Section 5.3.2 Define context2 Section 5.4.2 Define context3 Section 9.1 Update EDHOC Exporter labels Appendix A.1 Update Master Secret/Salt derivation\nAdditional updates: As noted in previous comment, a proposal to interchange labels 10 and 11 to enumerate labels in the order they appear in the specification. (Other proposal for assignment of integer labels is welcome.) Should we align / adapt definition of contextx and externalaad? 
Currently (for message2 and similar for message3): externalaad = > context2 = > Edit: Seems to be favourable to at least align this blobs, so changed context2 to > and similar for context_3, in 5464707a\nPRKexporter is a key which we only introduce for simple distinction between different KDF applications. Perhaps an overkill? May open up for unnecessary discussion about properties of that key. Here is another variant: Edit: As commented before, Marco thinks a separate PRKExporter is preferred; less error prone etc.\nMost old comments are now addressed, and there are quite a few changes, so it is a good time to merge PR . Remaining points: derivation of K4/ IV4 and text about key confirmation. Not clear if there is any difference in practice between K4 /IV4 being derived from PRK4e3m or PRKout, but the current text about explicit key confirmation is not valid in the former case. May alternatively tune down the key confirmation terminology. Suggested to make clearer distinction between key derivation for EDHOC processing, resulting in PRKout, and key derivation for application, i.e. making use of PRKout in EDHOC-Exporter and EDHOC-KeyUpdate. More guidance on the latter is also proposed.\nTried to address the remaining points in the previous comment with the latest two commits: 5d675ac and 95e35bb. More guidance on KeyUpdate may still be needed. Other points?\nNAME What more guidance is needed on KeyUpdate?\nWas a request from someone. Might be hard to understand how to use it. E.g. that you should alternate key update and exporter Exporter Key Update Exporter Key Update Exporter ...\nAnother guidance needed is how to manage overwrite of PRKout pending confirmation of the derived key from the other endpoint. When the Initiator is updating PRKout it needs both old and new derived keys to be able to communicate securely in lock-step the change to the new key.\nPerhaps good to remark that the entire application contexts (including e.g. 
sequence numbers) need to be maintained before a transition is made.\nsee instead. The current discussion has very little to do with", "new_text": "iv_length - length of the initialization vector of the EDHOC AEAD algorithm PLAINTEXT_3 = ( ? PAD, ID_CRED_I / bstr / -24..23, Signature_or_MAC_3, ? EAD_3 ) If ID_CRED_I contains a single 'kid' parameter, i.e., ID_CRED_I = { 4 : kid_I }, then only the byte string kid_I"}
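The ID_CRED_I compression rule quoted in the record above (ID_CRED_I = { 4 : kid_I } reduced to just kid_I, and further to a CBOR integer in -24..23 when possible) can be sketched as follows. The function name `compress_id_cred` is hypothetical; the byte values 0x00..0x17 and 0x20..0x37 are exactly the complete one-byte CBOR encodings of the integers 0..23 and -1..-24.

```python
def compress_id_cred(id_cred: dict):
    """Return a compact on-the-wire form of an ID_CRED map (sketch only)."""
    if set(id_cred.keys()) == {4}:          # only a 'kid' (COSE label 4)
        kid = id_cred[4]
        if len(kid) == 1:
            b = kid[0]
            if b <= 0x17:                   # one-byte CBOR uint 0..23
                return b
            if 0x20 <= b <= 0x37:           # one-byte CBOR nint -1..-24
                return -1 - (b - 0x20)
        return kid                          # send the byte string as-is
    return id_cred                          # full map otherwise

# e.g. {4: b'\x05'} -> 5, {4: b'\x21'} -> -2, {4: b'\x18'} -> b'\x18'
```

A one-byte kid like h'18' is not compressed, since 0x18 is not a complete one-byte CBOR integer.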
{"id": "q-en-edhoc-3c8deb59f63205fa815ed4f1e518eab11b6bc555f08bdbc9c320d4af51760644", "old_text": "Compute a COSE_Encrypt0 as defined in Sections 5.2 and 5.3 of I- D.ietf-cose-rfc8152bis-struct, with the EDHOC AEAD algorithm of the selected cipher suite, using the encryption key K_4, the initialization vector IV_4, the plaintext P, and the following parameters as input (see COSE): protected = h''", "comments": "There has been a lot of suggestions and discussion regarding the key derivation lately. Below is an attempt to summarize all the current suggestions that currently seem relatively likely to be done. This is not an issue in itself but I think it is good to have somewhere to discuss the complete changes to the key derivation. Suggested to permute the order of elements in the TH2 sequence to be more resistant to any future collision attacks in hash functions. Exact order unclear but suggested to put the random and unpredictable GY first. Suggested to use PLAINTEXT instead of CIPHERTEXT in transcript hashes to simplify formal verification. Endpoints agree that they received the same plaintext rather than the same ciphertext. Suggested to use a salt derived from PRK in Extract to avoid using the PRK keys directly in both Extract and Expand. Suggested to use unsigned integer instead of text string for label to save memory. Suggestion to add a final key derivation PRKout and use that in the exporter. Note that the exporter would also have uint labels. Suggestion to base Key-Update on Expand instead of Extract. The label would need IANA registration. hashlength is a new variable that the implementation needs to keep track of. Should consider whether using Exporter for KeyUpdate has any disadvantages. Suggested that more info on how an application protocol is supposed to use KeyUpdate and Exporter together is needed. The EDHOC internal key derivations (i.e., not using the Exporter) would then be something like below. The exact labels should be discussed. 
Note that only some of the derivations actually need a label to avoid collision. Including KeyUpdate, the IANA registered exporter labels could be:\nLooks good. I had different naming in mind of the salts: PRK3e2m = Extract( SALT3e2m, GRX ) PRK3m = Extract( SALT3m, GIY ) where SALT3e2m = EDHOC-KDF( PRK2e, TH2, 0, h'', hashlength ) SALT3m = EDHOC-KDF( PRK3e2m, TH3, 0, h'', hashlength ) With this the salts are named by their use rather than from what key they are derived.\nThe labels were just to illustrate the fact that they don't need to be separate. I think it is still better to just use 0-7. That makes it very easy for the reader to see that there are no collisions.\nNAME Was this a comment on my proposed names of salts, or a separate comment? (I agree labels 0-7 is good.)\nIt was a separate comment. Your suggestion for salt naming is fine with me. Some other non-technical naming things that are hidden in my summary above: I think keylength should use different terms in the derivation of K4 and OSCORE Master Secret. The current naming gives the idea that they are the same. I think the contexts in the MAC2 and MAC3 derivations should be given names. Should consider collecting all the EDHOC-KDF derivations in a single place.\nMight want to derive: K4 = EDHOC-KDF( PRK3m, TH4, 0, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 0, h'', ivlength ) to avoid that the application gets K4 and IV4. 
Another alternative is to use K4 = EDHOC-Exporter( 7, h'', keylength ) IV4 = EDHOC-Exporter( 8, h'', ivlength ) and forbid the application to use the labels for K4 and IV4\nEditorial; should be: K4 = EDHOC-KDF( PRK3m, TH4, 1, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 2, h'', ivlength ) or, if we want all KDF labels distinct: K4 = EDHOC-KDF( PRK3m, TH4, 8, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 9, h'', ivlength ) Alternatively, application access to Exporter only allows uint with: K4 = EDHOC-Exporter( -1, h'', keylength ) IV4 = EDHOC-Exporter( -2, h'', ivlength )\nTrying to compile the suggestions. These have a PR: Suggested to permute the order of elements in the TH2 sequence \u2026 Suggested to use PLAINTEXT instead of CIPHERTEXT in transcript hashes \u2026 Suggestion to add a final key derivation PRKout and use that in the exporter The following needs to be addressed: Suggested to use a salt derived from PRK in Extract define salt derive salt Suggested to use unsigned integer instead of text string for label to save memory suggestion to base Key-Update on Expand consider whether using Exporter for KeyUpdate has any disadvantages. more info on how an application protocol is supposed to use KeyUpdate and Exporter together distinguish keylength in K3/K_4 and OSCORE Master Secret define contexts explicitly table of KDF derivations\nTrying to put things together. Let's call this variant A, part 1. where and (edit: depending on method) New salts to avoid double use of PRK, named by their use. Same construction for K4/IV4 as K3/IV3, motivating the renaming of 3m in as 4e3m. The label in the KDF, third argument, is a uint, all distinct for simplicity.\nVariant A, part 2: Distinct \"transcript hash\" in KeyUpdate and Exporter. Exporter does not give access to EDHOC internal keys. OSCORE key length separate from key_length. Edit: Not clear that 'context' is needed in the EDHOC-Exporter. Also, to be strict in labelling we could continue enumeration. 
Here is variant B, part 2 (B.1 = A.1):\nVariant C.2 (C.1 = A.1): (edited:) Same as B.2 but with different labels. With variant C we have essentially: the internal KDF labels with current list: 0, ..., 9; and the IANA EDHOC Exporter Registry with current list: 0, 1.\nNAME NAME NAME and all: Any implementation considerations or other comments on variant C above? We'll start with the PR(s) soon.\nThe last posts here are confusing.... I don't agree. These are separate. We should definitely not try to keep them separate. I.e. it should be Master Secret = EDHOC-Exporter( 0, h'', oscorekeylength ) Master Salt = EDHOC-Exporter( 1, h'', oscoresaltlength )\nThe above looks good to me with the following not so interesting options. order of the 0-9 labels use 10 or 0 label for Key-Update order of the h'00' and h'01' \"transcripts\"\nI think we should maybe merge the \"context\" and \"transcript hash\" inputs. This simplifies things and avoids using h'00' and h'01' as \"transcript hashes\".\nAlternatively\nThe latest version of the summary URL looks good.\nOverview of main changes needed: Merge Rename PRK3m as PRK4e3m Section 4.1 update PRK definitions move section PRKout to 4.2 Section 4.2 Update EDHOC-KDF and info definition Update derivations Include derivations of salts and K4 / IV4 List EDHOC-KDF derivations with labels 0-9 Section 4.3 Update EDHOC-Exporter (use label 10 instead of 11) Remove derivation of K4 / IV4 Section 4.4 Update EDHOC-KeyUpdate (use label 11 instead of 10) Section 5.3.2 Define context2 Section 5.4.2 Define context3 Section 9.1 Update EDHOC Exporter labels Appendix A.1 Update Master Secret/Salt derivation\nAdditional updates: As noted in a previous comment, a proposal to interchange labels 10 and 11 to enumerate labels in the order they appear in the specification. (Other proposals for assignment of integer labels are welcome.) Should we align / adapt definition of contextx and externalaad? 
Currently (for message2 and similar for message3): externalaad = > context2 = > Edit: Seems to be favourable to at least align these blobs, so changed context2 to > and similar for context_3, in 5464707a\nPRKexporter is a key which we only introduce for simple distinction between different KDF applications. Perhaps an overkill? May open up for unnecessary discussion about properties of that key. Here is another variant: Edit: As commented before, Marco thinks a separate PRKExporter is preferred; less error prone etc.\nMost old comments are now addressed, and there are quite a few changes, so it is a good time to merge PR . Remaining points: derivation of K4/IV4 and text about key confirmation. Not clear if there is any difference in practice between K4/IV4 being derived from PRK4e3m or PRKout, but the current text about explicit key confirmation is not valid in the former case. May alternatively tune down the key confirmation terminology. Suggested to make clearer distinction between key derivation for EDHOC processing, resulting in PRKout, and key derivation for application, i.e. making use of PRKout in EDHOC-Exporter and EDHOC-KeyUpdate. More guidance on the latter is also proposed.\nTried to address the remaining points in the previous comment with the latest two commits: 5d675ac and 95e35bb. More guidance on KeyUpdate may still be needed. Other points?\nNAME What more guidance is needed on KeyUpdate?\nWas a request from someone. Might be hard to understand how to use it. E.g. that you should alternate key update and exporter Exporter Key Update Exporter Key Update Exporter ...\nAnother guidance needed is how to manage overwrite of PRKout pending confirmation of the derived key from the other endpoint. When the Initiator is updating PRKout it needs both old and new derived keys to be able to communicate securely in lock-step the change to the new key.\nPerhaps good to remark that the entire application contexts (including e.g. 
sequence numbers) need to be maintained before a transition is made.\nsee instead. The current discussion has very little to do with", "new_text": "Compute a COSE_Encrypt0 as defined in Sections 5.2 and 5.3 of I- D.ietf-cose-rfc8152bis-struct, with the EDHOC AEAD algorithm of the selected cipher suite, using the encryption key K_4, the initialization vector IV_4, the plaintext PLAINTEXT_4, and the following parameters as input (see COSE): protected = h''"}
{"id": "q-en-edhoc-3c8deb59f63205fa815ed4f1e518eab11b6bc555f08bdbc9c320d4af51760644", "old_text": "iv_length - length of the initialization vector of the EDHOC AEAD algorithm P = ( ? PAD, ? EAD_4 ) PAD = 1*true is padding that may be used to hide the length of the unpadded plaintext.", "comments": "There has been a lot of suggestions and discussion regarding the key derivation lately. Below is an attempt to summarize all the current suggestions that currently seem relatively likely to be done. This is not an issue in itself but I think it is good to have somewhere to discuss the complete changes to the key derivation. Suggested to permute the order of elements in the TH2 sequence to be more resistant to any future collision attacks in hash functions. Exact order unclear but suggested to put the random and unpredictable GY first. Suggested to use PLAINTEXT instead of CIPHERTEXT in transcript hashes to simplify formal verification. Endpoints agree that they received the same plaintext rather than the same ciphertext. Suggested to use a salt derived from PRK in Extract to avoid using the PRK keys directly in both Extract and Expand. Suggested to use unsigned integer instead of text string for label to save memory. Suggestion to add a final key derivation PRKout and use that in the exporter. Note that the exporter would also have uint labels. Suggestion to base Key-Update on Expand instead of Extract. The label would need IANA registration. hashlength is a new variable that the implementation needs to keep track of. Should consider whether using Exporter for KeyUpdate has any disadvantages. Suggested that more info on how an application protocol is supposed to use KeyUpdate and Exporter together is needed. The EDHOC internal key derivations (i.e., not using the Exporter) would then be something like below. The exact labels should be discussed. 
Note that only some of the derivations actually need a label to avoid collision. Including KeyUpdate, the IANA registered exporter labels could be:\nLooks good. I had different naming in mind of the salts: PRK3e2m = Extract( SALT3e2m, GRX ) PRK3m = Extract( SALT3m, GIY ) where SALT3e2m = EDHOC-KDF( PRK2e, TH2, 0, h'', hashlength ) SALT3m = EDHOC-KDF( PRK3e2m, TH3, 0, h'', hashlength ) With this the salts are named by their use rather than from what key they are derived.\nThe labels were just to illustrate the fact that they don't need to be separate. I think it is still better to just use 0-7. That makes it very easy for the reader to see that there are no collisions.\nNAME Was this a comment on my proposed names of salts, or a separate comment? (I agree labels 0-7 is good.)\nIt was a separate comment. Your suggestion for salt naming is fine with me. Some other non-technical naming things that are hidden in my summary above: I think keylength should use different terms in the derivation of K4 and OSCORE Master Secret. The current naming gives the idea that they are the same. I think the contexts in the MAC2 and MAC3 derivations should be given names. Should consider collecting all the EDHOC-KDF derivations in a single place.\nMight want to derive: K4 = EDHOC-KDF( PRK3m, TH4, 0, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 0, h'', ivlength ) to avoid that the application gets K4 and IV4. 
Another alternative is to use K4 = EDHOC-Exporter( 7, h'', keylength ) IV4 = EDHOC-Exporter( 8, h'', ivlength ) and forbid the application to use the labels for K4 and IV4\nEditorial; should be: K4 = EDHOC-KDF( PRK3m, TH4, 1, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 2, h'', ivlength ) or, if we want all KDF labels distinct: K4 = EDHOC-KDF( PRK3m, TH4, 8, h'', keylength ) IV4 = EDHOC-KDF( PRK3m, TH4, 9, h'', ivlength ) Alternatively, application access to Exporter only allows uint with: K4 = EDHOC-Exporter( -1, h'', keylength ) IV4 = EDHOC-Exporter( -2, h'', ivlength )\nTrying to compile the suggestions. These have a PR: Suggested to permute the order of elements in the TH2 sequence \u2026 Suggested to use PLAINTEXT instead of CIPHERTEXT in transcript hashes \u2026 Suggestion to add a final key derivation PRKout and use that in the exporter The following needs to be addressed: Suggested to use a salt derived from PRK in Extract define salt derive salt Suggested to use unsigned integer instead of text string for label to save memory suggestion to base Key-Update on Expand consider whether using Exporter for KeyUpdate has any disadvantages. more info on how an application protocol is supposed to use KeyUpdate and Exporter together distinguish keylength in K3/K_4 and OSCORE Master Secret define contexts explicitly table of KDF derivations\nTrying to put things together. Let's call this variant A, part 1. where and (edit: depending on method) New salts to avoid double use of PRK, named by their use. Same construction for K4/IV4 as K3/IV3, motivating the renaming of 3m in as 4e3m. The label in the KDF, third argument, is a uint, all distinct for simplicity.\nVariant A, part 2: Distinct \"transcript hash\" in KeyUpdate and Exporter. Exporter does not give access to EDHOC internal keys. OSCORE key length separate from key_length. Edit: Not clear that 'context' is needed in the EDHOC-Exporter. Also, to be strict in labelling we could continue enumeration. 
Here is variant B, part 2 (B.1 = A.1):\nVariant C.2 (C.1 = A.1): (edited:) Same as B.2 but with different labels. With variant C we have essentially: the internal KDF labels with current list: 0, ..., 9; and the IANA EDHOC Exporter Registry with current list: 0, 1.\nNAME NAME NAME and all: Any implementation considerations or other comments on variant C above? We'll start with the PR(s) soon.\nThe last posts here are confusing.... I don't agree. These are separate. We should definitely not try to keep them separate. I.e. it should be Master Secret = EDHOC-Exporter( 0, h'', oscorekeylength ) Master Salt = EDHOC-Exporter( 1, h'', oscoresaltlength )\nThe above looks good to me with the following not so interesting options. order of the 0-9 labels use 10 or 0 label for Key-Update order of the h'00' and h'01' \"transcripts\"\nI think we should maybe merge the \"context\" and \"transcript hash\" inputs. This simplifies things and avoids using h'00' and h'01' as \"transcript hashes\".\nAlternatively\nThe latest version of the summary URL looks good.\nOverview of main changes needed: Merge Rename PRK3m as PRK4e3m Section 4.1 update PRK definitions move section PRKout to 4.2 Section 4.2 Update EDHOC-KDF and info definition Update derivations Include derivations of salts and K4 / IV4 List EDHOC-KDF derivations with labels 0-9 Section 4.3 Update EDHOC-Exporter (use label 10 instead of 11) Remove derivation of K4 / IV4 Section 4.4 Update EDHOC-KeyUpdate (use label 11 instead of 10) Section 5.3.2 Define context2 Section 5.4.2 Define context3 Section 9.1 Update EDHOC Exporter labels Appendix A.1 Update Master Secret/Salt derivation\nAdditional updates: As noted in a previous comment, a proposal to interchange labels 10 and 11 to enumerate labels in the order they appear in the specification. (Other proposals for assignment of integer labels are welcome.) Should we align / adapt definition of contextx and externalaad? 
Currently (for message2 and similar for message3): externalaad = > context2 = > Edit: Seems to be favourable to at least align these blobs, so changed context2 to > and similar for context_3, in 5464707a\nPRKexporter is a key which we only introduce for simple distinction between different KDF applications. Perhaps an overkill? May open up for unnecessary discussion about properties of that key. Here is another variant: Edit: As commented before, Marco thinks a separate PRKExporter is preferred; less error prone etc.\nMost old comments are now addressed, and there are quite a few changes, so it is a good time to merge PR . Remaining points: derivation of K4/IV4 and text about key confirmation. Not clear if there is any difference in practice between K4/IV4 being derived from PRK4e3m or PRKout, but the current text about explicit key confirmation is not valid in the former case. May alternatively tune down the key confirmation terminology. Suggested to make clearer distinction between key derivation for EDHOC processing, resulting in PRKout, and key derivation for application, i.e. making use of PRKout in EDHOC-Exporter and EDHOC-KeyUpdate. More guidance on the latter is also proposed.\nTried to address the remaining points in the previous comment with the latest two commits: 5d675ac and 95e35bb. More guidance on KeyUpdate may still be needed. Other points?\nNAME What more guidance is needed on KeyUpdate?\nWas a request from someone. Might be hard to understand how to use it. E.g. that you should alternate key update and exporter Exporter Key Update Exporter Key Update Exporter ...\nAnother guidance needed is how to manage overwrite of PRKout pending confirmation of the derived key from the other endpoint. When the Initiator is updating PRKout it needs both old and new derived keys to be able to communicate securely in lock-step the change to the new key.\nPerhaps good to remark that the entire application contexts (including e.g. 
sequence numbers) need to be maintained before a transition is made.\nsee instead. The current discussion has very little to do with", "new_text": "iv_length - length of the initialization vector of the EDHOC AEAD algorithm PLAINTEXT_4 = ( ? PAD, ? EAD_4 ) PAD = 1*true is padding that may be used to hide the length of the unpadded plaintext."}
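The Extract/Expand scheme discussed in the records above (HKDF with uint labels, and salts derived by Expand so that a PRK is never used directly in both Extract and Expand) can be sketched as follows. SHA-256, the toy CBOR encoder, and the concrete label and context values are illustrative assumptions, not taken from the draft.

```python
import hashlib
import hmac

HASH = hashlib.sha256              # assumed cipher-suite hash
hash_length = HASH().digest_size   # 32 bytes for SHA-256

def extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869); an empty salt becomes a zero-filled key."""
    return hmac.new(salt if salt else b"\x00" * hash_length, ikm, HASH).digest()

def expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand (RFC 5869)."""
    okm, t, i = b"", b"", 0
    while len(okm) < length:
        i += 1
        t = hmac.new(prk, t + info + bytes([i]), HASH).digest()
        okm += t
    return okm[:length]

def cbor_uint(n: int) -> bytes:
    # minimal CBOR: unsigned ints 0..255 only, enough for the labels here
    return bytes([n]) if n <= 23 else bytes([0x18, n])

def cbor_bstr(b: bytes) -> bytes:
    assert len(b) <= 23            # short byte strings only
    return bytes([0x40 | len(b)]) + b

def edhoc_kdf(prk: bytes, label: int, context: bytes, length: int) -> bytes:
    # info = CBOR sequence ( label : uint, context : bstr, length : uint )
    info = cbor_uint(label) + cbor_bstr(context) + cbor_uint(length)
    return expand(prk, info, length)

# A salt derived by Expand from the previous PRK, so that PRK_3e2m is not
# used directly in both Extract and Expand (illustrative values):
prk_3e2m = extract(b"some-salt", b"G_RX-shared-secret")
salt_4e3m = edhoc_kdf(prk_3e2m, 5, b"TH_3-bytes", hash_length)
prk_4e3m = extract(salt_4e3m, b"G_IY-shared-secret")
```

Distinct uint labels keep the Expand calls domain-separated, which is why the thread wants all internal labels visibly different.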
{"id": "q-en-edhoc-41ccfa1ea09177fbc6f24948d66bb507fb2870ed2cc18f447685f7d5758c7927", "old_text": "typically that which is not possible to infer from routing information in the lower layers. Compared to SIGMA, EDHOC adds an explicit method type and expands the message authentication coverage to additional elements such as algorithms, external authorization data, and previous plaintext", "comments": "Further security considerations about how message content is protected against manipulation. Perhaps something like this: \"Manipulations of message1 may not be detected at reception since there is no MAC but, since message1 is included in TH2, if I and R disagree on message1 then subsequent PRKs and MACs will be different. Manipulations of other messages are detected at reception through the MACs, except for manipulation of the encrypted padding of message2, since PAD2 is not integrity protected.\"\nI made a PR for this\nI merged to prepare for submission on -15. is just a simple clarification.\nThe selfie attack mitigation implies that an initiator is stopping the exchange if it receives its own identity. An active attacker can then use this behavior to test if an initiator identity is the same one as a responder identity. In turn, it implies that the initiator anonymity does not hold against an active attacker when this mitigation is enabled. To judge if this is an issue in practice, the following question should be answered: Are there scenarios where an identity is shared between an initiator and a receiver, but the initiator identity should still be protected? One could maybe think about the case where multiple initiators and receivers are on the same server, sometimes sharing public keys and using distinct ports, but it is expected that the initiators identities are still protected. Possible actions: mention privacy loss due to the selfie attack mitigation. 
update the selfie attack mitigation, by mentioning that initiators should conclude the exchange as usual, but can check at the end whether they are talking to themselves or not. remove the selfie attack mitigation. Classical selfie attacks are when there is a pre-shared key between an initiator and a receiver, and the initiator will be talking to itself while it believes to be talking to the receiver. When there are only public keys, as in EDHOC, it is unclear to me if what is currently called in the draft is indeed a selfie attack. An initiator talking to itself knows it and does not believe to be talking to some other identity. As such, it could be possible to either remove the mention of selfie attacks in the draft, or update it.\nI and R are expected to stop the exchange if they receive any identity they do not want to talk to. This likely includes their own identity. Good point that this leaks information. I think that should be added, but I don't see that the own identity is special. >Classical selfie attacks are when there is a pre-shared key between an initiator and a receiver, >and the initiator will be talking to itself while it believes to be talking to the receiver. >When there are only public keys, as in EDHOC, it is unclear to me if what is currently >called in the draft is indeed a selfie attack. The selfie attack mitigation is intended for Trust On First Use (TOFU) use cases, where there is no authentication in the initial setup. I don't think an initiator talking to itself in a TOFU use case would know it, except if they checked R's public key. Do you have some better suggestion than \"Selfie attack\"? I agree that it is not perfect. \"Selfie attacks\" has in older papers been called reflection attacks, but the term reflection attacks has also been used for very different attacks. 
One option is to not give the attack any name and instead just describe the attack.\nI think I like to not give any name, as it does not perfectly match the classical definition. Here, the property which is violated is \"If an EDHOC exchange is completed, two distinct identities/computers/agents were active\". Which is different to violating the authentication of the protocol as is the case in classical selfie attacks. Good point. So what we are talking about in this issue is a slightly different privacy leak, which happens when the set of parties you are willing to talk to is equal to all responders minus yourself. To be more concrete, let me try to put it in a kind of drafty formal description that could replace the existing formulation. First the existing formulation: Now a possible replacement:\nThanks for the input! This issue is related to the TOFU use case which is still to be detailed (Appendix D.5). We probably should wait with the update until we have progressed that.\nRelates to\nI made PR for this issue. This is also related to PR and PR . The suggested text in is largely based on the suggested text above but I made some changes. I did not think this fitted well with the rest of the draft. The sentence was also not really needed. This is only true in the case where the set of identifiers is a small finite set. In the general case the set of identifiers the Initiator is not willing to communicate with is infinite. I suggest to add the following recommendation for TOFU To not leak any long-term identifiers, it is recommended to use a freshly generated authentication key as identity in each initial TOFU exchange.\nI merged the PR to master. Keeping the issue open for a while.", "new_text": "typically that which is not possible to infer from routing information in the lower layers. EDHOC messages might change in transit due to a noisy channel or through modification by an attacker. 
Changes in message_1 and message_2 (except PAD_2) are detected when verifying Signature_or_MAC_2. Changes to PAD_2 and message_3 are detected when verifying CIPHERTEXT_3. Changes to message_4 are detected when verifying CIPHERTEXT_4. Compared to SIGMA, EDHOC adds an explicit method type and expands the message authentication coverage to additional elements such as algorithms, external authorization data, and previous plaintext"}
{"id": "q-en-edhoc-a09cce14f59f79c2946a63cd65661cedefa8a03a36393d3048fe306e29a2b4fd", "old_text": "4.1.3. The pseudorandom key PRK_out, derived as shown in fig-edhoc-kdf, is the only secret key shared between Initiator and Responder that needs to be stored after a successful EDHOC exchange, see m3. Keys for applications are derived from PRK_out, see exporter. 4.2.", "comments": "Should be checked for conflicts with\nI think fixing the comments above resolves conflict with , so can then be replaced by this.\nThis proposal for update in the key derivation description also addresses the clarifications requested in , thus replacing . We merge this to include it in -15 and enable further reviews.\nThis is not correct. How to derive and store PRKexporter is completely up to the implementation. One can derive PRKexporter every time Exporter is called. This makes it look like PRKout is overwritten. This should be formulated with and like TLS 1.3 or with and or something. How long to save the old keying material (PRKout, PRK_Exported, any derived keys) is likely up to the application. The important thing is that they need to be deleted to give FS.\nThere was a request from someone that more guidance on the use of KeyUpdate should be expanded. Maybe not clear how you are supposed to alternate KeyUpdate and Exporter to get FS.\nAs you are considering making the KeyUpdate more detailed, I have a general design question. Are there use cases where one would expect Post-Compromise Security? That is, if one of the participant's PRK_out is compromised, a successful KeyUpdate mechanism between the two honest parties would heal them and lock out the attacker. That is typically what is met by the double ratchet from the Signal app. The classical way to achieve that is roughly to have each party send out for the update fresh DH ephemeral shares, and add the corresponding DH secret inside the key derivation. 
(it would of course require more care than that in the details)\nI think part of the plan is to make it less detailed and leave some stuff to the implementation/application :) The current KeyUpdate mechanism only gives forward secrecy. If you exchange some nonces and use them as a context you might get protection against an attacker who is not monitoring all your traffic. I think details of this should be specified outside of EDHOC like in the OSCORE KeyUpdate draft in the CORE WG. The double ratchet was discussed in the LAKE WG at an earlier point. The conclusion was that it added too much complexity to specify that as part of EDHOC. To get this kind of security an application has to rerun EDHOC frequently (like TLS 1.3). Signal uses the Diffie-Hellman ratchet very often which makes it understandable to have an optimized Diffie-Hellman ratchet post-handshake. Constrained use cases of EDHOC cannot do Diffie-Hellman as often as Signal so it was determined that requiring rerun of the full EDHOC was the best tradeoff. Should the EDHOC specification say more about the benefits of frequently rerunning Diffie-Hellman? I don't remember how much it says. I recently wrote a paragraph about this that was accepted to RFC8446bis.\nThis is not correct. PRKout does not need to be stored. An implementation can derive the application keys directly and never store PRKout. An implementation can also store PRK_exporter if they don't use EDHOC-KeyUpdate.\nThis is not enough. You need to be sure that the old key is not needed anymore.\nI merged which addresses some points raised in this issue. Please review and comment if there is anything missing.\nNo comments received, so I assume this is fine. (KeyUpdate is now in an appendix.)", "new_text": "4.1.3. The pseudorandom key PRK_out, derived as shown in fig-edhoc-kdf is the output of a successful EDHOC exchange. Keys for applications are derived from PRK_out, see exporter. An application using EDHOC- KeyUpdate needs to store PRK_out. 
If EDHOC-KeyUpdate is not used, an application only needs to store PRK_out or PRK_exporter as long as EDHOC-Exporter is used. (Note that the word \"store\" used here does not imply that the application has access to the plaintext PRK_out since that may be reserved for code within a TEE, see impl-cons). 4.2."}
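The point made in the comments above, that PRK_exporter can be re-derived from PRK_out on every Exporter call rather than stored, can be illustrated with a sketch. The simplified info encoding, label 10 for PRK_exporter, Exporter labels 0/1 for the OSCORE Master Secret/Salt, and SHA-256 follow the thread's overview but are assumptions here, not normative values.

```python
import hashlib
import hmac

hash_length = hashlib.sha256().digest_size  # assuming SHA-256 as EDHOC hash

def expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand (RFC 5869) with HMAC-SHA-256."""
    okm, t, i = b"", b"", 0
    while len(okm) < length:
        i += 1
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
    return okm[:length]

def edhoc_kdf(prk: bytes, label: int, context: bytes, length: int) -> bytes:
    # Simplified info encoding (the draft uses a CBOR sequence); length-
    # prefixing the context keeps (label, context, length) unambiguous.
    info = bytes([label]) + len(context).to_bytes(2, "big") + context + bytes([length])
    return expand(prk, info, length)

def edhoc_exporter(prk_out: bytes, label: int, context: bytes, length: int) -> bytes:
    # PRK_exporter is re-derived from PRK_out on every call instead of being
    # stored; label 10 for PRK_exporter follows the thread's overview.
    prk_exporter = edhoc_kdf(prk_out, 10, b"", hash_length)
    return edhoc_kdf(prk_exporter, label, context, length)

# OSCORE Master Secret/Salt with Exporter labels 0 and 1, as in the thread:
prk_out = bytes(32)  # placeholder for a real PRK_out
master_secret = edhoc_exporter(prk_out, 0, b"", 16)
master_salt = edhoc_exporter(prk_out, 1, b"", 8)
```

Deriving through PRK_exporter keeps application keys one Expand step away from PRK_out, so exported keys never expose EDHOC-internal keying material.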
{"id": "q-en-edhoc-a09cce14f59f79c2946a63cd65661cedefa8a03a36393d3048fe306e29a2b4fd", "old_text": "PRK_exporter is derived from PRK_out: where hash_length denotes the output size in bytes of the EDHOC hash algorithm of the selected cipher suite. PRK_exporter MUST be derived anew from PRK_out if EDHOC-KeyUpdate is used, see keyupdate. The (label, context) pair used in EDHOC-Exporter must be unique, i.e., a (label, context) MUST NOT be used for two different purposes.", "comments": "Should be checked for conflicts with\nI think fixing the comments above resolves conflict with , so can then be replaced by this.\nThis proposal for update in the key derivation description also addresses the clarifications requested in , thus replacing . We merge this to include it in -15 and enable further reviews.\nThis is not correct. How to derive and store PRKexporter is completely up to the implementation. One can derive PRKexporter every time Exporter is called. This makes it look like PRKout is overwritten. This should be formulated with and like TLS 1.3 or with and or something. How long to save the old keying material (PRKout, PRK_Exported, any derived keys) is likely up to the application. The important thing is that they need to be deleted to give FS.\nThere was a request from someone that more guidance on the use of KeyUpdate should be expanded. Maybe not clear how you are supposed to alternate KeyUpdate and Exporter to get FS.\nAs you are considering making the KeyUpdate more detailed, I have a general design question. Are there use cases where one would expect Post-Compromise Security? That is, if one of the participant's PRK_out is compromised, a successful KeyUpdate mechanism between the two honest parties would heal them and lock out the attacker. That is typically what is met by the double ratchet from the Signal app. 
The classical way to achieve that is roughly to have each party send out for the update fresh DH ephemeral shares, and add the corresponding DH secret inside the key derivation. (it would of course require more care than that in the details)\nI think part of the plan is to make it less detailed and leave some stuff to the implementation/application :) The current KeyUpdate mechanism only gives forward secrecy. If you exchange some nonces and use them as a context you might get protection against an attacker who is not monitoring all your traffic. I think details of this should be specified outside of EDHOC like in the OSCORE KeyUpdate draft in the CORE WG. The double ratchet was discussed in the LAKE WG at an earlier point. The conclusion was that it added too much complexity to specify that as part of EDHOC. To get this kind of security an application has to rerun EDHOC frequently (like TLS 1.3). Signal uses the Diffie-Hellman ratchet very often which makes it understandable to have an optimized Diffie-Hellman ratchet post-handshake. Constrained use cases of EDHOC cannot do Diffie-Hellman as often as Signal so it was determined that requiring rerun of the full EDHOC was the best tradeoff. Should the EDHOC specification say more about the benefits of frequently rerunning Diffie-Hellman? I don't remember how much it says. I recently wrote a paragraph about this that was accepted to RFC8446bis.\nThis is not correct. PRKout does not need to be stored. An implementation can derive the application keys directly and never store PRKout. An implementation can also store PRK_exporter if they don't use EDHOC-KeyUpdate.\nThis is not enough. You need to be sure that the old key is not needed anymore.\nI merged which addresses some points raised in this issue. Please review and comment if there is anything missing.\nNo comments received, so I assume this is fine. 
(KeyUpdate is now in an appendix.)", "new_text": "PRK_exporter is derived from PRK_out: where hash_length denotes the output size in bytes of the EDHOC hash algorithm of the selected cipher suite. Note that PRK_exporter changes every time EDHOC-KeyUpdate is used, see keyupdate. The (label, context) pair used in EDHOC-Exporter must be unique, i.e., a (label, context) MUST NOT be used for two different purposes."}
{"id": "q-en-edhoc-a09cce14f59f79c2946a63cd65661cedefa8a03a36393d3048fe306e29a2b4fd", "old_text": "4.2.2. To provide forward secrecy in an even more efficient way than re- running EDHOC, EDHOC provides the function EDHOC-KeyUpdate. When EDHOC-KeyUpdate is called, the old PRK_out is deleted and the new PRK_out is calculated as a \"hash\" of the old key using the Expand function as illustrated by the following pseudocode: where hash_length denotes the output size in bytes of the EDHOC hash algorithm of the selected cipher suite.", "comments": "Should be checked for conflicts with\nI think fixing the comments above resolves conflict with , so can then be replaced by this.\nThis proposal for update in the key derivation description also addresses the clarifications requested in , thus replacing . We merge this to include it in -15 and enable further reviews.\nThis is not correct. How to derive and store PRKexporter is completely up to the implementation. One can derive PRKexporter every time Exporter is called. This makes it look like PRKout is overwritten. This should be formulated with and like TLS 1.3 or with and or something. How long to save the old keying material (PRKout, PRK_Exported, any derived keys) is likely up to the application. The important thing is that they need to be deleted to give FS.\nThere was a request from someone that more guidance on the use of KeyUpdate should be expanded. Maybe not clear how you are supposed to alternate KeyUpdate and Exporter to get FS.\nAs you are considering making the KeyUpdate more detailed, I have a general design question. Are there use cases where one would expect Post-Compromise Security? That is, if one of the participant's PRK_out is compromised, a successful KeyUpdate mechanism between the two honest parties would heal them and lock out the attacker. That is typically what is met by the double ratchet from the Signal app. 
The classical way to achieve that is roughly to have each party send out for the update fresh DH ephemeral shares, and add the corresponding DH secret inside the key derivation. (it would of course require more care than that in the details)\nI think part of the plan is to make it less detailed and leave some stuff to the implementation/application :) The current KeyUpdate mechanism only gives forward secrecy. If you exchange some nonces and use them as a context you might get protection against an attacker that is not monitoring all your traffic. I think details of this should be specified outside of EDHOC like in the OSCORE KeyUpdate draft in the CORE WG. The double ratchet was discussed in LAKE WG at an earlier point. The conclusion was that it added too much complexity to specify that as part of EDHOC. To get this kind of security an application has to rerun EDHOC frequently (like TLS 1.3). Signal uses the Diffie-Hellman ratchet very often which makes it understandable to have an optimized Diffie-Hellman ratchet post-handshake. Constrained use cases of EDHOC cannot do Diffie-Hellman as often as Signal so it was determined that requiring rerun of the full EDHOC was the best tradeoff. Should the EDHOC specification say more about the benefits of frequently rerunning Diffie-Hellman? I don't remember how much it says. I recently wrote a paragraph about this that was accepted to RFC8446bis.\nThis is not correct. PRKout does not need to be stored. An implementation can derive the application keys directly and never store PRKout. An implementation can also store PRK_exporter if they don't use EDHOC-KeyUpdate.\nThis is not enough. You need to be sure that the old key is no longer needed.\nI merged which addresses some points raised in this issue. Please review and comment if there is anything missing.\nNo comments received, so I assume this is fine. (KeyUpdate is now in an appendix.)", "new_text": "4.2.2. 
To provide forward secrecy in an even more efficient way than re- running EDHOC, EDHOC provides the optional function EDHOC-KeyUpdate. When EDHOC-KeyUpdate is called, a new PRK_out is calculated as a \"hash\" of the old PRK_out using the Expand function as illustrated by the following pseudocode. The change of PRK_out causes a change to PRK_exporter and derived keys using EDHOC-Exporter. where hash_length denotes the output size in bytes of the EDHOC hash algorithm of the selected cipher suite."}
{"id": "q-en-edhoc-a09cce14f59f79c2946a63cd65661cedefa8a03a36393d3048fe306e29a2b4fd", "old_text": "The EDHOC-KeyUpdate takes a context as input to enable binding of the updated PRK_out to some event that triggered the keyUpdate. The Initiator and the Responder need to agree on the context, which can, e.g., be a counter or a pseudorandom number such as a hash. The Initiator and the Responder also need to cache the old PRK_out until it has verfied that the other endpoint has the correct new PRK_out. I-D.ietf-core-oscore-key-update describes key update for OSCORE using EDHOC-KeyUpdate.", "comments": "Should be checked for conflicts with\nI think fixing the comments above resolves conflict with , so can then be replaced by this.\nThis proposal for update in the key derivation description also addresses the clarifications requested in , thus replacing . We merge this to include it in -15 and enable further reviews.\nThis is not correct. How to derive and store PRKexporter is completely up to the implementation. One can derive PRKexporter every time Exporter is called. This makes it look like PRKout is overwritten. This should be formulated with and like TLS 1.3 or with and or something. How long to save the old keying material (PRKout, PRK_Exported, any derived keys) is likely up to the application. The important thing is that they need to be deleted to give FS.\nThere was a request from someone that more guidance on the use of KeyUpdate should be expanded. Maybe not clear how you are supposed to alternate KeyUpdate and Exporter to get FS.\nAs you are considering making the KeyUpdate more detailed, I have a general design question. Are there use cases where one would expect Post-Compromise Security? That is, if one of the participant's PRK_out is compromised, a successful KeyUpdate mechanism between the two honest parties would heal them and lock out the attacker. That is typically what is met by the double ratchet from the Signal app. 
The classical way to achieve that is roughly to have each party send out for the update fresh DH ephemeral shares, and add the corresponding DH secret inside the key derivation. (it would of course require more care than that in the details)\nI think part of the plan is to make it less detailed and leave some stuff to the implementation/application :) The current KeyUpdate mechanism only gives forward secrecy. If you exchange some nonces and use them as a context you might get protection against an attacker that is not monitoring all your traffic. I think details of this should be specified outside of EDHOC like in the OSCORE KeyUpdate draft in the CORE WG. The double ratchet was discussed in LAKE WG at an earlier point. The conclusion was that it added too much complexity to specify that as part of EDHOC. To get this kind of security an application has to rerun EDHOC frequently (like TLS 1.3). Signal uses the Diffie-Hellman ratchet very often which makes it understandable to have an optimized Diffie-Hellman ratchet post-handshake. Constrained use cases of EDHOC cannot do Diffie-Hellman as often as Signal so it was determined that requiring rerun of the full EDHOC was the best tradeoff. Should the EDHOC specification say more about the benefits of frequently rerunning Diffie-Hellman? I don't remember how much it says. I recently wrote a paragraph about this that was accepted to RFC8446bis.\nThis is not correct. PRKout does not need to be stored. An implementation can derive the application keys directly and never store PRKout. An implementation can also store PRK_exporter if they don't use EDHOC-KeyUpdate.\nThis is not enough. You need to be sure that the old key is no longer needed.\nI merged which addresses some points raised in this issue. Please review and comment if there is anything missing.\nNo comments received, so I assume this is fine. 
(KeyUpdate is now in an appendix.)", "new_text": "The EDHOC-KeyUpdate takes a context as input to enable binding of the updated PRK_out to some event that triggered the keyUpdate. The Initiator and the Responder need to agree on the context, which can, e.g., be a counter or a pseudorandom number such as a hash. To provide forward secrecy the old PRK_out and derived keys must be deleted as soon as they are not needed. When to delete the old keys and how to verify that they are not needed is up to the application. I-D.ietf-core-oscore-key-update describes key update for OSCORE using EDHOC-KeyUpdate."}
{"id": "q-en-edhoc-d60bb37befdae133115c83f37d2e7b7521b2643b07175f5f697af59d2aded9f5", "old_text": "identifiers from EDHOC SHOULD provide mechanisms to update the connection identifier and MAY provide mechanisms to issue several simultaneously active connection identifiers. See RFC9000 for a non- constrained example of such mechanisms. Connection identifiers SHOULD be unpredictable. Using the same identifier several times is not a problem as long as it is chosen randomly. Connection identity privacy mechanisms are only useful when there are not fixed identifiers such as IP address or MAC address in the lower layers. 8.6.", "comments": "Suggestion to clarify and maybe give example of randomness and unpredictability. The SHOULD should probably be removed. Unclear if 1 byte identifiers can ever be unpredictable.\nClosing this as approved by Marco.\nLooks good to me. NAME ?", "new_text": "identifiers from EDHOC SHOULD provide mechanisms to update the connection identifier and MAY provide mechanisms to issue several simultaneously active connection identifiers. See RFC9000 for a non- constrained example of such mechanisms. Connection identifiers can, e.g., be chosen randomly among the set of unused 1-byte connection identifiers. Connection identity privacy mechanisms are only useful when there are not fixed identifiers such as IP address or MAC address in the lower layers. 8.6."}
{"id": "q-en-edhoc-432d91f8677a05ab8e13900533260e27107c4e42a4effeaf603ac42901ba99db", "old_text": "With static Diffie-Hellman key authentication, the authenticating endpoint can deny having participated in the protocol. Two earlier versions of EDHOC have been formally analyzed Norrman20 Bruni18 and the specification has been updated based on the analysis. 8.2.", "comments": "Hi all, Our previous formal analysis on draft 12 and 14 lead to a publication at USENIX 23 available here URL As the two other previous formal analysis were cited in the draft, I'm proposing to add next to them our new one with this PR. As a side remark, our models are here (URL), and we are trying to update everything according to draft 17 with a new overall pass. It might be tight for IETF115, but we are trying our best! And anyway, it should only be positive results by now ;)\nNAME Thanks! Great that you follow up with the latest version. The latest relevant changes are probably in -16 and only adds things to the key derivation (some of which was proposed by you :-).\nThanks for the pointer! Though, the biggest issue is not so much updating the model of the protocol, but rather making a new full pass on the security claims and new appendices, trying to have a clear systematized link between claims from the draft and what we check in the model.\nNAME let's check if there are some other papers missing and what we want to say about them.\nWe should at least add references to work by ETH and ENS. Let's merge this as a starting point and continue discussion in .\nGiven the various security analyses EDHOC has seen and their insights being discussed in Section 8, would it make sense to add informative references to those (as done, e.g., for TLS 1.3, RFC 8446 Appendix E)? URL\nSeems like a good idea to me\nRelated to\nThis was discussed on the IETF 116 LAKE WG meeting, and in absence of volunteers to draft such an appendix we decided to start with adding references. 
We may want to go somewhere in between a whole appendix and a sentence, and expand with a paragraph about each reference. We still need someone to draft that paragraph for the different references and would like to conclude on this fairly soon. We start by merging now.\nClosing since is merged. Hi all, While we do not have any explicit comments on the document, we thought it is maybe still worthwhile to share the following analysis confirmation. We updated our computational analysis of the SIG-SIG mode, of which Marc presented an earlier version at IETF 114, to draft 17. We could confirm that with some of the later cryptographic suggestions (PR , ) included, EDHOC SIG-SIG achieves key exchange security (key secrecy and explicit authentication) in a strong model capturing potentially colliding credential identifiers, assuming, among others, an \"exclusive ownership\" property of the signature scheme (established for Ed25519, conjectured for the way ECDSA is used in EDHOC SIG-SIG). Our updated analysis is not publicly available yet at this point. Let us know if a public reference would be helpful for the draft RFC; we can also share the document individually upon request. The last point actually maybe does lead to a comment on the document: Given the various security analyses EDHOC has seen and their insights being discussed in Section 8, would it make sense to add informative references to those (as done, e.g., for TLS 1.3, RFC 8446 Appendix E)? Kind regards, Felix G\u00fcnther (with Marc Ilunga)", "new_text": "With static Diffie-Hellman key authentication, the authenticating endpoint can deny having participated in the protocol. Three earlier versions of EDHOC have been formally analyzed Jacomme23 Norrman20 Bruni18 and the specification has been updated based on the analysis. 8.2."}
{"id": "q-en-external-psk-design-team-afc30ac4b95c322b7fd8818a6c997d7e109092d7437cd93dc58b437ea4670e08", "old_text": "member impersonate another group member, but a malicious non-member can reroute handshakes between honest group members to connect them in unintended ways. A naive sharing of PSKs, even assuming only honest but curious participants know the key, does not enforce any of these properties, bar one. Endpoint Identity Protection holds vacuously, because the client and server do not use certificates in PSK mode. (This is not true with use of the \"tls_cert_with_extern_psk\" extension I-D.ietf-tls-tls13-cert-with- extern-psk.) In the following sub-sections, we list attacks that demonstrate violations of these properties when the fundamental PSK assumption does not hold. This list is not exhaustive, but merely highlights a number of potential pitfalls. 3.1.1.", "comments": "Changes to comments from Russ Housley on 03-Mar-2020\nSection 3.1.1: The text says: C responds with to A. I think that C responds with a ServerHello to A. Section 3.1.2: The text says: A sends a ClientHello to B, without including a key share. It would be helpful to say \"without including a key share extension.\" Section 3.1.3: I think a bit more explanation is needed here. Nonces can make the keys different, why doesn't the nonce from the uncompromised party help in this example? Section 4: I would like to see a bit more context here. I agree with the recommendations for the intended environment, but they are overkill in the environment suggested by draft-ietf-tls-tls13-cert-with-extern-psk. In fact, they are overkill if (EC)DH is combined with the PSK at all. So, maybe we just need to say that the PSK is not combined with a pairwise key that these rules apply.\nFixed by .\nLGTM -- thanks, Russ!", "new_text": "member impersonate another group member, but a malicious non-member can reroute handshakes between honest group members to connect them in unintended ways. 
A naive sharing of PSKs, even assuming only honest but curious participants know the key, can violate all these properties, bar one. Endpoint Identity Protection holds vacuously, because the client and server do not use certificates in PSK mode. (This is not true with use of the \"tls_cert_with_extern_psk\" extension I-D.ietf-tls-tls13-cert-with-extern-psk.) In the following sub-sections, we list attacks that demonstrate violations of these properties when the fundamental PSK assumption does not hold. 3.1.1."}
{"id": "q-en-external-psk-design-team-afc30ac4b95c322b7fd8818a6c997d7e109092d7437cd93dc58b437ea4670e08", "old_text": "The attacker intercepts the message and redirects it to \"C\". \"C\" responds with to \"A\". \"A\" sends a \"Finished\" message to \"B\". \"A\" has completed the handshake, ostensibly with \"B\".", "comments": "Changes to comments from Russ Housley on 03-Mar-2020\nSection 3.1.1: The text says: C responds with to A. I think that C responds with a ServerHello to A. Section 3.1.2: The text says: A sends a ClientHello to B, without including a key share. It would be helpful to say \"without including a key share extension.\" Section 3.1.3: I think a bit more explanation is needed here. Nonces can make the keys different, why doesn't the nonce from the uncompromised party help in this example? Section 4: I would like to see a bit more context here. I agree with the recommendations for the intended environment, but they are overkill in the environment suggested by draft-ietf-tls-tls13-cert-with-extern-psk. In fact, they are overkill if (EC)DH is combined with the PSK at all. So, maybe we just need to say that the PSK is not combined with a pairwise key that these rules apply.\nFixed by .\nLGTM -- thanks, Russ!", "new_text": "The attacker intercepts the message and redirects it to \"C\". \"C\" responds with a \"ServerHello\" to \"A\". \"A\" sends a \"Finished\" message to \"B\". \"A\" has completed the handshake, ostensibly with \"B\"."}
{"id": "q-en-external-psk-design-team-afc30ac4b95c322b7fd8818a6c997d7e109092d7437cd93dc58b437ea4670e08", "old_text": "Let the group of peers who know the key be \"A\", \"B\", and \"C\". The attack proceeds as follows: \"A\" sends a \"ClientHello\" to \"B\", without including a key share. The handshake completes as expected.", "comments": "Changes to comments from Russ Housley on 03-Mar-2020\nSection 3.1.1: The text says: C responds with to A. I think that C responds with a ServerHello to A. Section 3.1.2: The text says: A sends a ClientHello to B, without including a key share. It would be helpful to say \"without including a key share extension.\" Section 3.1.3: I think a bit more explanation is needed here. Nonces can make the keys different, why doesn't the nonce from the uncompromised party help in this example? Section 4: I would like to see a bit more context here. I agree with the recommendations for the intended environment, but they are overkill in the environment suggested by draft-ietf-tls-tls13-cert-with-extern-psk. In fact, they are overkill if (EC)DH is combined with the PSK at all. So, maybe we just need to say that the PSK is not combined with a pairwise key that these rules apply.\nFixed by .\nLGTM -- thanks, Russ!", "new_text": "Let the group of peers who know the key be \"A\", \"B\", and \"C\". The attack proceeds as follows: \"A\" sends a \"ClientHello\" to \"B\", without including a key share extension. The handshake completes as expected."}
{"id": "q-en-external-psk-design-team-afc30ac4b95c322b7fd8818a6c997d7e109092d7437cd93dc58b437ea4670e08", "old_text": "properties outlined above. The attack violates the same session keys property, as \"A\" and \"B\" have each completed a session, ostensibly with each other, and yet do not agree on the session key. If \"A\" does not send a key share, then the attacker can have a session with \"A\" and a separate session with \"B\", where both sessions have the same key. This violates the uniqueness of session keys property. In the case of KCI resistance, where we assume the attacker has compromised \"A\", the attacker can successfully impersonate \"B\" to \"A\" using this pattern. 4. Given the desired security goals from sec-properties, applications which make use of external PSKs MUST adhere to the following requirements: Each PSK MUST NOT be shared between with more than two logical nodes. As a result, an agent that acts as both a client and a", "comments": "Changes to comments from Russ Housley on 03-Mar-2020\nSection 3.1.1: The text says: C responds with to A. I think that C responds with a ServerHello to A. Section 3.1.2: The text says: A sends a ClientHello to B, without including a key share. It would be helpful to say \"without including a key share extension.\" Section 3.1.3: I think a bit more explanation is needed here. Nonces can make the keys different, why doesn't the nonce from the uncompromised party help in this example? Section 4: I would like to see a bit more context here. I agree with the recommendations for the intended environment, but they are overkill in the environment suggested by draft-ietf-tls-tls13-cert-with-extern-psk. In fact, they are overkill if (EC)DH is combined with the PSK at all. So, maybe we just need to say that the PSK is not combined with a pairwise key that these rules apply.\nFixed by .\nLGTM -- thanks, Russ!", "new_text": "properties outlined above. 
The attack violates the same session keys property, as \"A\" and \"B\" have each completed a session, ostensibly with each other, and yet do not agree on the session key. If \"A\" does not send a key share extension, then the attacker can have a session with \"A\" and a separate session with \"B\", where both sessions have the same key. Since the attacker has knowledge of all of the messages and their content, the nonce in the handshake does not prevent the attacker from computing the full key schedule. This violates the uniqueness of session keys property. In the case of KCI resistance, where we assume the attacker has compromised \"A\", the attacker can successfully impersonate \"B\" to \"A\" using this pattern. 4. To achieve the security goals from sec-properties when the external PSK will not be combined with a pairwise secret value, applications MUST use external PSKs that adhere to the following requirements: Each PSK MUST NOT be shared with more than two logical nodes. As a result, an agent that acts as both a client and a"}
{"id": "q-en-external-psk-design-team-afc30ac4b95c322b7fd8818a6c997d7e109092d7437cd93dc58b437ea4670e08", "old_text": "Each PSK MUST be at least 128-bits long. Each PSK SHOULD be seeded from a source with at least 128-bits of entropy. 5. PSK privacy properties are orthogonal to security properties", "comments": "Changes to comments from Russ Housley on 03-Mar-2020\nSection 3.1.1: The text says: C responds with to A. I think that C responds with a ServerHello to A. Section 3.1.2: The text says: A sends a ClientHello to B, without including a key share. It would be helpful to say \"without including a key share extension.\" Section 3.1.3: I think a bit more explanation is needed here. Nonces can make the keys different, why doesn't the nonce from the uncompromised party help in this example? Section 4: I would like to see a bit more context here. I agree with the recommendations for the intended environment, but they are overkill in the environment suggested by draft-ietf-tls-tls13-cert-with-extern-psk. In fact, they are overkill if (EC)DH is combined with the PSK at all. So, maybe we just need to say that the PSK is not combined with a pairwise key that these rules apply.\nFixed by .\nLGTM -- thanks, Russ!", "new_text": "Each PSK MUST be at least 128-bits long. 5. PSK privacy properties are orthogonal to security properties"}
{"id": "q-en-external-psk-design-team-afc30ac4b95c322b7fd8818a6c997d7e109092d7437cd93dc58b437ea4670e08", "old_text": "appearing in cleartext in a ClientHello. As a result, a passive adversary can link two or more connections together that use the same external PSK on the wire. Applications should take precautions when using external PSKs if these risks are a concern. In addition to linkability in the network, external PSKs are intrinsically linkable by PSK receivers. Specifically, servers can", "comments": "Changes to comments from Russ Housley on 03-Mar-2020\nSection 3.1.1: The text says: C responds with to A. I think that C responds with a ServerHello to A. Section 3.1.2: The text says: A sends a ClientHello to B, without including a key share. It would be helpful to say \"without including a key share extension.\" Section 3.1.3: I think a bit more explanation is needed here. Nonces can make the keys different, why doesn't the nonce from the uncompromised party help in this example? Section 4: I would like to see a bit more context here. I agree with the recommendations for the intended environment, but they are overkill in the environment suggested by draft-ietf-tls-tls13-cert-with-extern-psk. In fact, they are overkill if (EC)DH is combined with the PSK at all. So, maybe we just need to say that the PSK is not combined with a pairwise key that these rules apply.\nFixed by .\nLGTM -- thanks, Russ!", "new_text": "appearing in cleartext in a ClientHello. As a result, a passive adversary can link two or more connections together that use the same external PSK on the wire. Applications should take precautions when using external PSKs if these risks. In addition to linkability in the network, external PSKs are intrinsically linkable by PSK receivers. Specifically, servers can"}
{"id": "q-en-external-psk-design-team-afc30ac4b95c322b7fd8818a6c997d7e109092d7437cd93dc58b437ea4670e08", "old_text": "First Use (TOFU) approach with a device completely unprotected before the first login did take place). Many devices have very limited UI. For example, they may only have a numeric keypad or perhaps an interface with even fewer buttons. When the TOFU approach is not suitable, entering the key may require typing it on a very constrained UI. Moreover, PSK production lacks guidance unlike user passwords. Some devices are provisioned PSKs via an out-of-band, cloud-based syncing protocol.", "comments": "Changes to comments from Russ Housley on 03-Mar-2020\nSection 3.1.1: The text says: C responds with to A. I think that C responds with a ServerHello to A. Section 3.1.2: The text says: A sends a ClientHello to B, without including a key share. It would be helpful to say \"without including a key share extension.\" Section 3.1.3: I think a bit more explanation is needed here. Nonces can make the keys different, why doesn't the nonce from the uncompromised party help in this example? Section 4: I would like to see a bit more context here. I agree with the recommendations for the intended environment, but they are overkill in the environment suggested by draft-ietf-tls-tls13-cert-with-extern-psk. In fact, they are overkill if (EC)DH is combined with the PSK at all. So, maybe we just need to say that the PSK is not combined with a pairwise key that these rules apply.\nFixed by .\nLGTM -- thanks, Russ!", "new_text": "First Use (TOFU) approach with a device completely unprotected before the first login did take place). Many devices have very limited UI. For example, they may only have a numeric keypad or even less number of buttons. When the TOFU approach is not suitable, entering the key would require typing it on a constrained UI. Moreover, PSK production lacks guidance unlike user passwords. 
Some devices are provisioned PSKs via an out-of-band, cloud-based syncing protocol."}
{"id": "q-en-external-psk-design-team-aac111d6994cb0dfadc5045619feee021aab8e0687f06a85f62777195b55c91c", "old_text": "1. This document provides usage guidance for external Pre-Shared Keys (PSKs) in TLS. It lists TLS security properties provided by PSKs under certain assumptions and demonstrates how violations of these assumptions lead to attacks. This document also discusses PSK use cases, provisioning processes, and TLS stack implementation support in the context of these assumptions. It provides advice for applications in various use cases to help meet these assumptions. The guidance provided in this document is applicable across TLS RFC8446, DTLS I-D.ietf-tls-dtls13, and Constrained TLS I-D.ietf-tls-", "comments": "The comment about how there are no guidelines about PSKs like there are about generating passwords seemed like a statement of the motivation for this draft, so I moved it to the introduction. (I was going to add a reference for the statement, but I couldn't figure out how to get the citation to compile correctly. This was the document I was going to cite, in case you think it needs it: URL)", "new_text": "1. There are many resources that provide guidance for password generation and verification aimed towards improving security. However, there is no such equivalent for external Pre-Shared Keys (PSKs) in TLS. This document aims to reduce that gap. It lists TLS security properties provided by PSKs under certain assumptions and demonstrates how violations of these assumptions lead to attacks. This document also discusses PSK use cases, provisioning processes, and TLS stack implementation support in the context of these assumptions. It provides advice for applications in various use cases to help meet these assumptions. The guidance provided in this document is applicable across TLS RFC8446, DTLS I-D.ietf-tls-dtls13, and Constrained TLS I-D.ietf-tls-"}
{"id": "q-en-external-psk-design-team-aac111d6994cb0dfadc5045619feee021aab8e0687f06a85f62777195b55c91c", "old_text": "6. Pre-shared Key (PSK) ciphersuites were first specified for TLS in 2005. Now, PSK is an integral part of the TLS version 1.3 specification RFC8446. TLS 1.3 also uses PSKs for session resumption. It distinguishes these resumption PSKs from external PSKs which have been provisioned out-of-band (OOB). Below, we list some example use-cases where pair-wise external PSKs (i.e., external PSKs that are shared between only one server and one client) have been used for authentication in TLS. Device-to-device communication with out-of-band synchronized keys. PSKs provisioned out-of-band for communicating with known", "comments": "The comment about how there are no guidelines about PSKs like there are about generating passwords seemed like a statement of the motivation for this draft, so I moved it to the introduction. (I was going to add a reference for the statement, but I couldn't figure out how to get the citation to compile correctly. This was the document I was going to cite, in case you think it needs it: URL)", "new_text": "6. PSK ciphersuites were first specified for TLS in 2005. Now, PSKs are an integral part of the TLS version 1.3 specification RFC8446. TLS 1.3 also uses PSKs for session resumption. It distinguishes these resumption PSKs from external PSKs which have been provisioned out- of-band (OOB). Below, we list some example use-cases where pair-wise external PSKs (i.e., external PSKs that are shared between only one server and one client) have been used for authentication in TLS. Device-to-device communication with out-of-band synchronized keys. PSKs provisioned out-of-band for communicating with known"}
{"id": "q-en-external-psk-design-team-aac111d6994cb0dfadc5045619feee021aab8e0687f06a85f62777195b55c91c", "old_text": "limited UI. For example, they may only have a numeric keypad or even less number of buttons. When the TOFU approach is not suitable, entering the key would require typing it on a constrained UI. Moreover, PSK production lacks guidance unlike user passwords. Some devices provision PSKs via an out-of-band, cloud-based syncing protocol.", "comments": "The comment about how there are no guidelines about PSKs like there are about generating passwords seemed like a statement of the motivation for this draft, so I moved it to the introduction. (I was going to add a reference for the statement, but I couldn't figure out how to get the citation to compile correctly. This was the document I was going to cite, in case you think it needs it: URL)", "new_text": "limited UI. For example, they may only have a numeric keypad or even less number of buttons. When the TOFU approach is not suitable, entering the key would require typing it on a constrained UI. Some devices provision PSKs via an out-of-band, cloud-based syncing protocol."}
{"id": "q-en-external-psk-design-team-d1c72ee3d62ad86a83b0ebed7da9ee2df74b95e3674e246be3eacaf0221158b2", "old_text": "Keys (PSKs) in Transport Layer Security (TLS) 1.3 RFC8446. This guidance also applies to Datagram TLS (DTLS) 1.3 I-D.ietf-tls-dtls13 and Compact TLS 1.3 I-D.ietf-tls-ctls. For readability, this document uses the term TLS to refer to all such versions. External PSKs are symmetric secret keys provided to the TLS protocol implementation as external inputs. External PSKs are provisioned out-of-band. This document lists TLS security properties provided by PSKs under certain assumptions and demonstrates how violations of these assumptions lead to attacks. This document discusses PSK use cases, provisioning processes, and TLS stack implementation support in the context of these assumptions. This document also provides advice for applications in various use cases to help meet these assumptions. There are many resources that provide guidance for password generation and verification aimed towards improving security.", "comments": "More = better. There are a few other places that this treatment might be applied elsewhere if the editors feel that way inclined. This seemed to be the most important.\nI'll do a pass for other obvious paragraphs that can be split up.", "new_text": "Keys (PSKs) in Transport Layer Security (TLS) 1.3 RFC8446. This guidance also applies to Datagram TLS (DTLS) 1.3 I-D.ietf-tls-dtls13 and Compact TLS 1.3 I-D.ietf-tls-ctls. For readability, this document uses the term TLS to refer to all such versions. External PSKs are symmetric secret keys provided to the TLS protocol implementation as external inputs. External PSKs are provisioned out-of-band. This document lists TLS security properties provided by PSKs under certain assumptions and demonstrates how violations of these assumptions lead to attacks. This document discusses PSK use cases, provisioning processes, and TLS stack implementation support in the context of these assumptions. This document also provides advice for applications in various use cases to help meet these assumptions. There are many resources that provide guidance for password generation and verification aimed towards improving security."}
{"id": "q-en-external-psk-design-team-18ea6ddadcaa90e5bfce637c8fc57107f58e3731c3847ccbe5e9e0ca2bf62678", "old_text": "This document provides usage guidance for external Pre-Shared Keys (PSKs) in Transport Layer Security (TLS) 1.3 as defined in RFC 8446. This document lists TLS security properties provided by PSKs under certain assumptions, and then demonstrates how violations of these assumptions lead to attacks. This document discusses PSK use cases and provisioning processes. This document provides advice for applications to help meet these assumptions. This document also lists the privacy and security properties that are not provided by TLS 1.3 when external PSKs are used. 1.", "comments": "Address comments from IESG evaluation. Due to the holiday season, we may not get prompt confirmation that these changes fully resolve the comments.", "new_text": "This document provides usage guidance for external Pre-Shared Keys (PSKs) in Transport Layer Security (TLS) 1.3 as defined in RFC 8446. It lists TLS security properties provided by PSKs under certain assumptions, and then demonstrates how violations of these assumptions lead to attacks. Advice for applications to help meet these assumptions is provided. It also discusses PSK use cases and provisioning processes. Finally, it lists the privacy and security properties that are not provided by TLS 1.3 when external PSKs are used. 1."}
{"id": "q-en-external-psk-design-team-18ea6ddadcaa90e5bfce637c8fc57107f58e3731c3847ccbe5e9e0ca2bf62678", "old_text": "If PSK is not combined with fresh ephemeral key exchange, then compromise of any group member allows the attacker to passively read (and actively modify) all traffic. Additionally, a malicious non-member can reroute handshakes between honest group members to connect them in unintended ways, as described", "comments": "Address comments from IESG evaluation. Due to the holiday season, we may not get prompt confirmation that these changes fully resolve the comments.", "new_text": "If PSK is not combined with fresh ephemeral key exchange, then compromise of any group member allows the attacker to passively read (and actively modify) all traffic, including past traffic. Additionally, a malicious non-member can reroute handshakes between honest group members to connect them in unintended ways, as described"}
{"id": "q-en-external-psk-design-team-18ea6ddadcaa90e5bfce637c8fc57107f58e3731c3847ccbe5e9e0ca2bf62678", "old_text": "value and the receiving member's known identity. See Selfie for details. To illustrate the rerouting attack, consider the group of peers who know the PSK be \"A\", \"B\", and \"C\". The attack proceeds as follows: \"A\" sends a \"ClientHello\" to \"B\".", "comments": "Address comments from IESG evaluation. Due to the holiday season, we may not get prompt confirmation that these changes fully resolve the comments.", "new_text": "value and the receiving member's known identity. See Selfie for details. To illustrate the rerouting attack, consider three peers, \"A\", \"B\", and \"C\", who all know the PSK. The attack proceeds as follows: \"A\" sends a \"ClientHello\" to \"B\"."}
{"id": "q-en-external-psk-design-team-18ea6ddadcaa90e5bfce637c8fc57107f58e3731c3847ccbe5e9e0ca2bf62678", "old_text": "Entropy properties of external PSKs may also affect TLS security properties. For example, if a high entropy PSK is used, then PSK- only key establishment modes provide expected security properties for TLS, including, for example, including establishing the same session keys between peers, secrecy of session keys, peer authentication, and downgrade protection. See RFC8446, Section E.1 for an explanation of these properties. However, these modes lack forward security. Forward security may be achieved by using a PSK-DH mode, or, alternatively, by using PSKs with short lifetimes. In contrast, if a low entropy PSK is used, then PSK-only key establishment modes are subject to passive exhaustive search attacks", "comments": "Address comments from IESG evaluation. Due to the holiday season, we may not get prompt confirmation that these changes fully resolve the comments.", "new_text": "Entropy properties of external PSKs may also affect TLS security properties. For example, if a high entropy PSK is used, then PSK- only key establishment modes provide expected security properties for TLS, including establishing the same session keys between peers, secrecy of session keys, peer authentication, and downgrade protection. See RFC8446, Section E.1 for an explanation of these properties. However, these modes lack forward security. Forward security may be achieved by using a PSK-DH mode, or, alternatively, by using PSKs with short lifetimes. In contrast, if a low entropy PSK is used, then PSK-only key establishment modes are subject to passive exhaustive search attacks"}
{"id": "q-en-external-psk-design-team-18ea6ddadcaa90e5bfce637c8fc57107f58e3731c3847ccbe5e9e0ca2bf62678", "old_text": "This section lists some example use-cases where pair-wise external PSKs, i.e., external PSKs that are shared between only one server and one client, have been used for authentication in TLS. Device-to-device communication with out-of-band synchronized keys. PSKs provisioned out-of-band for communicating with known", "comments": "Address comments from IESG evaluation. Due to the holiday season, we may not get prompt confirmation that these changes fully resolve the comments.", "new_text": "This section lists some example use-cases where pair-wise external PSKs, i.e., external PSKs that are shared between only one server and one client, have been used for authentication in TLS. There was no attempt to prioritize the examples in any particular order. Device-to-device communication with out-of-band synchronized keys. PSKs provisioned out-of-band for communicating with known"}
{"id": "q-en-external-psk-design-team-18ea6ddadcaa90e5bfce637c8fc57107f58e3731c3847ccbe5e9e0ca2bf62678", "old_text": "different online protocol. Intra-data-center communication. Machine-to-machine communication within a single data center or PoP may use externally provisioned PSKs, primarily for the purposes of supporting TLS connections with early data; see security-con for considerations when using early data with external PSKs. Certificateless server-to-server communication. Machine-to- machine communication may use externally provisioned PSKs,", "comments": "Address comments from IESG evaluation. Due to the holiday season, we may not get prompt confirmation that these changes fully resolve the comments.", "new_text": "different online protocol. Intra-data-center communication. Machine-to-machine communication within a single data center or point-of-presence (PoP) may use externally provisioned PSKs, primarily for the purposes of supporting TLS connections with early data; see security-con for considerations when using early data with external PSKs. Certificateless server-to-server communication. Machine-to- machine communication may use externally provisioned PSKs,"}
{"id": "q-en-external-psk-design-team-18ea6ddadcaa90e5bfce637c8fc57107f58e3731c3847ccbe5e9e0ca2bf62678", "old_text": "PSK into the devices, or using a Trust On First Use (TOFU) approach with a device completely unprotected before the first login did take place. Many devices have very limited UI. For example, they may only have a numeric keypad or even less number of buttons. When the TOFU approach is not suitable, entering the key would require typing it on a constrained UI. Some devices provision PSKs via an out-of-band, cloud-based syncing protocol. Some secrets may be baked into or hardware or software device components. Moreover, when this is done at manufacturing time, secrets may be printed on labels or included in a Bill of Materials for ease of scanning or import.", "comments": "Address comments from IESG evaluation. Due to the holiday season, we may not get prompt confirmation that these changes fully resolve the comments.", "new_text": "PSK into the devices, or using a Trust On First Use (TOFU) approach with a device completely unprotected before the first login did take place. Many devices have very limited UI. For example, they may only have a numeric keypad or even fewer buttons. When the TOFU approach is not suitable, entering the key would require typing it on a constrained UI. Some devices provision PSKs via an out-of-band, cloud-based syncing protocol. Some secrets may be baked into hardware or software device components. Moreover, when this is done at manufacturing time, secrets may be printed on labels or included in a Bill of Materials for ease of scanning or import."}
{"id": "q-en-external-psk-design-team-18ea6ddadcaa90e5bfce637c8fc57107f58e3731c3847ccbe5e9e0ca2bf62678", "old_text": "identifiers have privacy implications; see endpoint-privacy. Each endpoint SHOULD know the identifier of the other endpoint with which its wants to connect and SHOULD compare it with the other endpoint's identifier used in ImportedIdentity.context. It is however important to remember that endpoints sharing the same group PSK can always impersonate each other.", "comments": "Address comments from IESG evaluation. Due to the holiday season, we may not get prompt confirmation that these changes fully resolve the comments.", "new_text": "identifiers have privacy implications; see endpoint-privacy. Each endpoint SHOULD know the identifier of the other endpoint with which it wants to connect and SHOULD compare it with the other endpoint's identifier used in ImportedIdentity.context. It is however important to remember that endpoints sharing the same group PSK can always impersonate each other."}
{"id": "q-en-gnap-core-protocol-aeb1202dd6202ecdee07baa18233bf138580890cfba45a022b1829d56a4da074", "old_text": "The method the client instance uses to send an access token depends on whether the token is bound to a key, and if so which proofing method is associated with the key. This information is conveyed in the \"key\" and \"proof\" parameters in response-token-single. If the \"key\" value is the boolean \"false\", the access token is a bearer token sent using the HTTP Header method defined in RFC6750. The form parameter and query parameter methods of RFC6750 MUST NOT be used. If the \"key\" value is the boolean \"true\", the access token MUST be sent using the same key and proofing mechanism that the client instance used in its initial request (or its most recent rotation). If the \"key\" value is an object as described in key-format, the value of the \"proof\" field within the key indicates the particular proofing mechanism to use. The access token is sent using the HTTP authorization scheme \"GNAP\" along with a key proof as described in binding-keys for the key bound to the access token. For example, a \"jwsd\"-bound access token is sent as follows: [[ See issue #104 [47] ]] 7.3. Any keys presented by the client instance to the AS or RS MUST be", "comments": "Deploy preview for gnap-core-protocol-editors-draft ready! Built with commit a5ab40750ac9435b370bab7ded6dbdc60b35b01a https://deploy-preview-208--gnap-core-protocol-editors-URL\nIt may make sense to change the order to something like == && is present == && is absent == == && is present\nI did changes for URL There are lots of other things that, I believe, could be improved & changed for the \"Presenting Access Tokens\" section and they will be addressed in separate PRs in the future.\nThank you, this is great! Aligns with the language in", "new_text": "The method the client instance uses to send an access token depends on whether the token is bound to a key, and if so which proofing method is associated with the key. This information is conveyed in the \"bound\" and \"key\" parameters in response-token-single and response-token-multiple responses. If the \"bound\" value is the boolean \"true\" and the \"key\" is absent, the access token MUST be sent using the same key and proofing mechanism that the client instance used in its initial request (or its most recent rotation). If the \"bound\" value is the boolean \"true\" and the \"key\" value is an object as described in key-format, the access token MUST be sent using the key and proofing mechanism defined by the value of the \"proof\" field within the key object. The access token MUST be sent using the HTTP \"Authorization\" request header field and the \"GNAP\" authorization scheme along with a key proof as described in binding-keys for the key bound to the access token. For example, a \"jwsd\"-bound access token is sent as follows: If the \"bound\" value is the boolean \"false\", the access token is a bearer token that MUST be sent using the \"Authorization Request Header Field\" method defined in RFC6750. The \"Form-Encoded Body Parameter\" and \"URI Query Parameter\" methods of RFC6750 MUST NOT be used. [[ See issue #104 [47] ]] The client software MUST reject as an error a situation where the \"bound\" value is the boolean \"false\" and the \"key\" is present. 7.3. Any keys presented by the client instance to the AS or RS MUST be"}
{"id": "q-en-gnap-core-protocol-61a83002c6d411fdadd014a5560e3f1db1124312886ea919e31ffa9494094ddf", "old_text": "REQUIRED. A unique access token for continuing the request, in the format specified in response-token-single. This access token MUST be bound to the client instance's key used in the request and MUST NOT be a \"bearer\" token. As a consequence, the \"bound\" field of this access token is always the boolean value \"true\" and the \"key\" field MUST be omitted. This access token MUST NOT be usable at resources outside of the AS. The client instance MUST present the access token in all requests to the continuation URI as", "comments": "URL match the flags array in the token request. While this is technically a breaking change, I think we should consider this editorial as this should have been changed when we made the original change to use a flags array in the request in .\n:heavycheckmark: Deploy Preview for gnap-core-protocol-editors-draft ready! Built :hammer: Explore the source changes: 8c793bb446d82f053536358106485568fc457842 :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nYou know, I had that thought, and I checked the request section and it doesn't have quotes. I'm going to add the quotes to both parts. This ended up with some other language changing, not just the examples, so let me know what you think.", "new_text": "REQUIRED. A unique access token for continuing the request, in the format specified in response-token-single. This access token MUST be bound to the client instance's key used in the request and MUST NOT be a \"bearer\" token. As a consequence, the \"flags\" array of this access token MUST NOT contain the string \"bearer\" and the \"key\" field MUST be omitted. This access token MUST NOT be usable at resources outside of the AS. The client instance MUST present the access token in all requests to the continuation URI as"}
{"id": "q-en-gnap-core-protocol-61a83002c6d411fdadd014a5560e3f1db1124312886ea919e31ffa9494094ddf", "old_text": "ASCII characters to facilitate transmission over HTTP headers within other protocols without requiring additional encoding. RECOMMENDED. Flag indicating if the token is bound to the client instance's key. If the boolean value is \"true\" or the field is omitted, and the \"key\" field is omitted, the token is bound to the request-client in its request for access. If the boolean value is \"true\" or the field is omitted, and the \"key\" field is present, the token is bound to the key and proofing mechanism indicated in the \"key\" field. If the boolean value is \"false\", the token is a bearer token with no key bound to it and the \"key\" field MUST be omitted. REQUIRED for multiple access tokens, OPTIONAL for single access token. The value of the \"label\" the client instance provided in the associated request-token, if present. If the token has been", "comments": "URL match the flags array in the token request. While this is technically a breaking change, I think we should consider this editorial as this should have been changed when we made the original change to use a flags array in the request in .\n:heavycheckmark: Deploy Preview for gnap-core-protocol-editors-draft ready! Built :hammer: Explore the source changes: 8c793bb446d82f053536358106485568fc457842 :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nYou know, I had that thought, and I checked the request section and it doesn't have quotes. I'm going to add the quotes to both parts. This ended up with some other language changing, not just the examples, so let me know what you think.", "new_text": "ASCII characters to facilitate transmission over HTTP headers within other protocols without requiring additional encoding. REQUIRED for multiple access tokens, OPTIONAL for single access token. The value of the \"label\" the client instance provided in the associated request-token, if present. If the token has been"}
{"id": "q-en-gnap-core-protocol-61a83002c6d411fdadd014a5560e3f1db1124312886ea919e31ffa9494094ddf", "old_text": "MUST be able to dereference or process the key information in order to be able to sign the request. OPTIONAL. Flag indicating a hint of AS behavior on token rotation. If this flag is set to the value \"true\", then the client instance can expect a previously-issued access token to continue to work after it has been rotate-access-token or the underlying grant request has been continue-modify, resulting in the issuance of new access tokens. If this flag is set to the boolean value \"false\" or is omitted, the client instance can anticipate a given access token will stop working after token rotation or grant request modification. Note that a token flagged as \"durable\" can still expire or be revoked through any normal", "comments": "URL match the flags array in the token request. While this is technically a breaking change, I think we should consider this editorial as this should have been changed when we made the original change to use a flags array in the request in .\n:heavycheckmark: Deploy Preview for gnap-core-protocol-editors-draft ready! Built :hammer: Explore the source changes: 8c793bb446d82f053536358106485568fc457842 :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nYou know, I had that thought, and I checked the request section and it doesn't have quotes. I'm going to add the quotes to both parts. This ended up with some other language changing, not just the examples, so let me know what you think.", "new_text": "MUST be able to dereference or process the key information in order to be able to sign the request. OPTIONAL. A set of flags that represent attributes or behaviors of the access token issued by the AS. The values of the \"flags\" field defined by this specification are as follows: This flag indicates whether the token is bound to the client instance's key. If the \"bearer\" flag is present, the access token is a bearer token, and the \"key\" field in this response MUST be omitted. If the \"bearer\" flag is omitted and the \"key\" field in this response is omitted, the token is bound the request-client in its request for access. If the \"bearer\" flag is omitted, and the \"key\" field is present, the token is bound to the key and proofing mechanism indicated in the \"key\" field. OPTIONAL. Flag indicating a hint of AS behavior on token rotation. If this flag is present, then the client instance can expect a previously-issued access token to continue to work after it has been rotate-access-token or the underlying grant request has been continue-modify, resulting in the issuance of new access tokens. If this flag is omitted, the client instance can anticipate a given access token will stop working after token rotation or grant request modification. Note that a token flagged as \"durable\" can still expire or be revoked through any normal"}
{"id": "q-en-gnap-core-protocol-61a83002c6d411fdadd014a5560e3f1db1124312886ea919e31ffa9494094ddf", "old_text": "unless the client instance has specifically requested it by use of the \"split\" flag. The following non-normative example shows a single access token bound to the client instance's key used in the initial request, with a management URL, and that has access to three described resources (one", "comments": "URL match the flags array in the token request. While this is technically a breaking change, I think we should consider this editorial as this should have been changed when we made the original change to use a flags array in the request in .\n:heavycheckmark: Deploy Preview for gnap-core-protocol-editors-draft ready! Built :hammer: Explore the source changes: 8c793bb446d82f053536358106485568fc457842 :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nYou know, I had that thought, and I checked the request section and it doesn't have quotes. I'm going to add the quotes to both parts. This ended up with some other language changing, not just the examples, so let me know what you think.", "new_text": "unless the client instance has specifically requested it by use of the \"split\" flag. Flag values MUST NOT be included more than once. Additional flags can be defined by extensions using IANA. The following non-normative example shows a single access token bound to the client instance's key used in the initial request, with a management URL, and that has access to three described resources (one"}
{"id": "q-en-gnap-core-protocol-61a83002c6d411fdadd014a5560e3f1db1124312886ea919e31ffa9494094ddf", "old_text": "sends the \"split\" flag as described in request-token-single. If the AS has split the access token response, the response MUST include the \"split\" flag set to \"true\". [[ See issue #69 [17] ]]", "comments": "URL match the flags array in the token request. While this is technically a breaking change, I think we should consider this editorial as this should have been changed when we made the original change to use a flags array in the request in .\n:heavycheckmark: Deploy Preview for gnap-core-protocol-editors-draft ready! Built :hammer: Explore the source changes: 8c793bb446d82f053536358106485568fc457842 :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nYou know, I had that thought, and I checked the request section and it doesn't have quotes. I'm going to add the quotes to both parts. This ended up with some other language changing, not just the examples, so let me know what you think.", "new_text": "sends the \"split\" flag as described in request-token-single. If the AS has split the access token response, the response MUST include the \"split\" flag. [[ See issue #69 [17] ]]"}
{"id": "q-en-gnap-core-protocol-61a83002c6d411fdadd014a5560e3f1db1124312886ea919e31ffa9494094ddf", "old_text": "multiple access token structure containing one access token. If the AS has split the access token response, the response MUST include the \"split\" flag set to \"true\". Each access token MAY be bound to different keys with different proofing mechanisms.", "comments": "URL match the flags array in the token request. While this is technically a breaking change, I think we should consider this editorial as this should have been changed when we made the original change to use a flags array in the request in .\n:heavycheckmark: Deploy Preview for gnap-core-protocol-editors-draft ready! Built :hammer: Explore the source changes: 8c793bb446d82f053536358106485568fc457842 :mag: Inspect the deploy log: :sunglasses: Browse the preview:\nYou know, I had that thought, and I checked the request section and it doesn't have quotes. I'm going to add the quotes to both parts. This ended up with some other language changing, not just the examples, so let me know what you think.", "new_text": "multiple access token structure containing one access token. If the AS has split the access token response, the response MUST include the \"split\" flag in the \"flags\" array. Each access token MAY be bound to different keys with different proofing mechanisms."}
{"id": "q-en-gnap-core-protocol-7e99d6cb0c5845f8f0c5da33844c044a06ff55f5557818ffe6d50f4bccf2229c", "old_text": "Indicates that interaction through some set of defined mechanisms needs to take place. Claims about the RO as known and declared by the AS. An identifier this client instance can use to identify itself when making future requests. An identifier this client instance can use to identify its current end-user when making future requests. An error code indicating that something has gone wrong. In this example, the AS is returning an response-interact-redirect, a", "comments": "Opaque identifiers provide a generic reference, so we can simplify the text. (mainly), and as a byproduct also and - removed userhandle from main response in section 3 changed title of section 3.4 (User -> Subject) and moved the text that requires RO = end-user at the top of the section, for clarity changed name (userhandle -> user) and wording in section 3.5. Maybe we could also simplify the title \"Returning Dynamically-bound Reference Handles\" into \"Returning Reference Handles\" (didn't do it yet).\nDeploy Preview for gnap-core-protocol-editors-draft ready! Explore the source changes: 73de700ce82846ec5e09f8f60550fac7bee0dfb5 Inspect the deploy log: Browse the preview:\nWith the adoption of the subject identifier format, we no longer need to have a GNAP-specific value and can instead leverage this more standard field. Alternatively, we could have a GNAP-specific subject identifier format, instead of a separate field.\nIn section 2.4 (Requesting User Information), the first sentence starts with: If the client instance is requesting information about the RO from the AS, ... From the title, the acronym \"RO\" should be changed into \"end-user\". The case where the RO is also the end-user is a specific case. This leads to the following change: If the client instance is requesting information about the end-user from the AS, ... In the following sentences of this section, RO should be changed into end-user.\nThe current text really means RO, and generally considers the case end-user = RO. Any other case would even trigger an error: \"If the identified end-user does not match the RO present at the AS during an interaction step, the AS SHOULD reject the request with an error.\" (section 2.4) Which makes sense because why would we return something about the RO to a potentially unknown end-user? (the only problem is that this wording is buried in the text, so it's hard to understand) Back to the general case: The RO is in control of policies at the AS. The end-user is in control of a client and a request to the AS. The end-user and client may or may not have any prior relationship with the AS. The subject info is what is known by the AS (in theory could be RO and maybe end-user, but notice that in the terminology, we define the end-user as a natural person and the RO as a subject entity). I think we should clarify when we could have a remote RO too. Then do we even return a result if the AS knows something about the end-user? (different from RO in that case). Plus we need to keep in mind that we also use sub_ids for URL In general, we might want to avoid the term \"user\" as it is not specific enough.\nThe title has been changed into \"Requesting Subject information\". This title is incorrect as long as a subject can be also an organization or a device. An AS can hold information about an end-user or about RO when the RO is a human being. It does not hold information about organizations or devices. However, an end-user may belong to some organizations or/and use some devices. IMO, I would replace it by \"Requesting end-user Information\". If an AS holds information about an end-user, it seems logical that the end-user should have access to it. If it happens that the end-user has also a RO account on the AS, then it should also have access to it. In such a case, another section with the following title \"Requesting RO Information\" should be created. RO = end-user was a common case in OAuth. In GNAP and in the general case a RO is different from an end-user and an AS may have no relationship with any RO (e.g. when attributes only are being used).\nI guess you refer to \"2.2. Requesting Subject Information\". We might have to harmonize a bit, but for instance \"2.3. Identifying the Client Instance\" might very well hold information about devices or at least the software running on it (that's the point of client software attestation to be discussed in ) We don't always know in advance whether it will be a RO or an end-user only, so the proposal isn't always applicable.\nI was indeed referring to \"2.2. Requesting Subject Information\" which has no relationship with \"2.3. Identifying the Client Instance\". This means that your arguments do not apply to section 2.2. and, by implication, that my argumentation currently remains valid.\nWell, I take the point in consideration but I don't think we should base our reasoning on the structure of sections, which can change\nIdentifying the User by Reference: Editor's note: We might be able to fold this function into an unstructured user assertion reference issued by the AS to the RC. We could put it in as an assertion type of \"gnap_reference\" or something like that. Downside: it's more verbose and potentially confusing to the client developer to have an assertion-like thing that's internal to the AS and not an assertion.\nWhy an assertion? Is that reference supposed to be linked to a session (cf \"represent the same end-user to the AS over subsequent requests\") or can it be persistent? I would see that more as an opaque identifier, rather than an assertion. Something like: asref seems a bit specific to GNAP (reference managed by the AS) but it could always be read \"as a reference\" in a more general setting. There's a variety of generic names that we could choose if needed, localuid, internal_uid, etc. But that really asks what additional types we could define as part of secevents (section 7.1.2). Beyond the case mentioned here, it would be most useful as an opaque identifier when RO != end-user, to avoid leaking private info about the RO (although currently we don't seem to cover that case yet) It's indeed more verbose than \"user\": \"XUT2MFM1XBIKJKSDU8QM\" but at least it's always the same structure (doesn't depend whether it's a reference or not), and it also makes the type more explicit.\nA subject identifier makes sense here, and I agree that it should be a new value to let us constrain it to the GNAP context.\nFixed by SECEVENT opaque identifiers\nI don't think this goes far enough -- I am thinking that with the opaque identifiers we can now simply return a with a format of that the client can be allowed to send in a follow-up request if it wants to. This way they all get treated equally, there's no special field anymore.", "new_text": "Indicates that interaction through some set of defined mechanisms needs to take place. Claims about the RO as known and declared by the AS, as described in response-subject. An identifier this client instance can use to identify itself when making future requests. An error code indicating that something has gone wrong. In this example, the AS is returning an response-interact-redirect, a"}
{"id": "q-en-gnap-core-protocol-7e99d6cb0c5845f8f0c5da33844c044a06ff55f5557818ffe6d50f4bccf2229c", "old_text": "If information about the RO is requested and the AS grants the client instance access to that data, the AS returns the approved information in the \"subject\" response field. This field is an object with the following OPTIONAL properties. An array of subject identifiers for the RO, as defined by I- D.ietf-secevent-subject-identifiers.", "comments": "Opaque identifiers provide a generic reference, so we can simplify the text. (mainly), and as a byproduct also and - removed userhandle from main response in section 3 changed title of section 3.4 (User -> Subject) and moved the text that requires RO = end-user at the top of the section, for clarity changed name (userhandle -> user) and wording in section 3.5. Maybe we could also simplify the title \"Returning Dynamically-bound Reference Handles\" into \"Returning Reference Handles\" (didn't do it yet).\nDeploy Preview for gnap-core-protocol-editors-draft ready! Explore the source changes: 73de700ce82846ec5e09f8f60550fac7bee0dfb5 Inspect the deploy log: Browse the preview:\nWith the adoption of the subject identifier format, we no longer need to have a GNAP-specific value and can instead leverage this more standard field. Alternatively, we could have a GNAP-specific subject identifier format, instead of a separate field.\nIn section 2.4 (Requesting User Information), the first sentence starts with: If the client instance is requesting information about the RO from the AS, ... From the title, the acronym \"RO\" should be changed into \"end-user\". The case where the RO is also the end-user is a specific case. This leads to the following change: If the client instance is requesting information about the end-user from the AS, ... In the following sentences of this section, RO should be changed into end-user.\nThe current text really means RO, and generally considers the case end-user = RO. 
Any other case would even trigger an error: \"If the identified end-user does not match the RO present at the AS during an interaction step, the AS SHOULD reject the request with an error.\" (section 2.4) Which makes sense because why would we return something about the RO to a potentially unknown end-user? (the only problem is that this wording is buried in the text, so it's hard to understand) Back to the general case: The RO is in control of policies at the AS. The end-user is in control of a client and a request to the AS. The end-user and client may or may not have any prior relationship with the AS. The subject info is what is known by the AS (in theory could be RO and maybe end-user, but notice that in the terminology, we define the end-user as a natural person and the RO as a subject entity). I think we should clarify when we could have a remote RO too. Then do we even return a result if the AS knows something about the end-user? (different from RO in that case). Plus we need to keep in mind that we also use sub_ids for URL In general, we might want to avoid the term \"user\" as it is not specific enough.\nThe title has been changed into \"Requesting Subject information\". This title is incorrect as long as a subject can be also an organization or a device. An AS can hold information about an end-user or about RO when the RO is a human being. It does not hold information about organizations or devices. However, an end-user may belong to some organizations or/and use some devices. IMO, I would replace it by \"Requesting end-user Information\". If an AS holds information about an end-user, it seems logical that the end-user should have access to it. If it happens that the end-user has also a RO account on the AS, then it should also have access to it. In such a case, another section with the following title \"Requesting RO Information\" should be created. RO = end-user was a common case in OAuth. 
In GNAP and in the general case a RO is different from an end-user and an AS may have no relationship with any RO (e.g. when attributes only are being used).\nI guess you refer to \"2.2. Requesting Subject Information\". We might have to harmonize a bit, but for instance \"2.3. Identifying the Client Instance\" might very well hold information about devices or at least the software running on it (that's the point of client software attestation to be discussed in ) We don't always know in advance whether it will be a RO or an end-user only, so the proposal isn't always applicable.\nI was indeed referring to \"2.2. Requesting Subject Information\" which has no relationship with \"2.3. Identifying the Client Instance\". This means that your arguments do not apply to section 2.2. and, by implication, that my argumentation currently remains valid.\nWell, I take the point in consideration but I don't think we should base our reasoning on the structure of sections, which can change\nIdentifying the User by Reference: Editor's note: We might be able to fold this function into an unstructured user assertion reference issued by the AS to the RC. We could put it in as an assertion type of \"gnap_reference\" or something like that. Downside: it's more verbose and potentially confusing to the client developer to have an assertion-like thing that's internal to the AS and not an assertion.\nWhy an assertion? Is that reference supposed to be linked to a session (cf \"represent the same end-user to the AS over subsequent requests\") or can it be persistent? I would see that more as an opaque identifier, rather than an assertion. Something like: asref seems a bit specific to GNAP (reference managed by the AS) but it could always be read \"as a reference\" in a more general setting. There's a variety of generic names that we could choose if needed, localuid, internal_uid, etc. But that really asks what additional types we could define as part of secevents (section 7.1.2). 
Beyond the case mentioned here, it would be most useful as an opaque identifier when RO != end-user, to avoid leaking private info about the RO (although currently we don't seem to cover that case yet) It's indeed more verbose than \"user\": \"XUT2MFM1XBIKJKSDU8QM\" but at least it's always the same structure (doesn't depend whether it's a reference or not), and it also makes the type more explicit.\nA subject identifier makes sense here, and I agree that it should be a new value to let us constrain it to the GNAP context.\nFixed by SECEVENT opaque identifiers\nI don't think this goes far enough -- I am thinking that with the opaque identifiers we can now simply return a with a format of that the client can be allowed to send in a follow-up request if it wants to. This way they all get treated equally, there's no special field anymore.", "new_text": "If information about the RO is requested and the AS grants the client instance access to that data, the AS returns the approved information in the \"subject\" response field. The AS MUST return the \"subject\" field only in cases where the AS is sure that the RO and the end-user are the same party. This can be accomplished through some forms of authorization. This field is an object with the following OPTIONAL properties. An array of subject identifiers for the RO, as defined by I- D.ietf-secevent-subject-identifiers."}
{"id": "q-en-gnap-core-protocol-7e99d6cb0c5845f8f0c5da33844c044a06ff55f5557818ffe6d50f4bccf2229c", "old_text": "information through an identity API. The definition of such an identity API is out of scope for this specification. The AS MUST return the \"subject\" field only in cases where the AS is sure that the RO and the end-user are the same party. This can be accomplished through some forms of authorization. Subject identifiers returned by the AS SHOULD uniquely identify the RO at the AS. Some forms of subject identifier are opaque to the client instance (such as the subject of an issuer and subject pair),", "comments": "Opaque identifiers provide a generic reference, so we can simplify the text. (mainly), and as a byproduct also and - removed userhandle from main response in section 3 changed title of section 3.4 (User -> Subject) and moved the text that requires RO = end-user at the top of the section, for clarity changed name (userhandle -> user) and wording in section 3.5. Maybe we could also simplify the title \"Returning Dynamically-bound Reference Handles\" into \"Returning Reference Handles\" (didn't do it yet).\nDeploy Preview for gnap-core-protocol-editors-draft ready! Explore the source changes: 73de700ce82846ec5e09f8f60550fac7bee0dfb5 Inspect the deploy log: Browse the preview:\nWith the adoption of the subject identifier format, we no longer need to have a GNAP-specific value and can instead leverage this more standard field. Alternatively, we could have a GNAP-specific subject identifier format, instead of a separate field.\nIn section 2.4 (Requesting User Information), the first sentence starts with: If the client instance is requesting information about the RO from the AS, ... From the title, the acronym \"RO\" should be changed into \"end-user\". The case where the RO is also the end-user is a specific case. This leads to the following change: If the client instance is requesting information about the end-user from the AS, ... 
In the following sentences of this section, RO should be changed into end-user.\nThe current text really means RO, and generally considers the case end-user = RO. Any other case would even trigger an error: \"If the identified end-user does not match the RO present at the AS during an interaction step, the AS SHOULD reject the request with an error.\" (section 2.4) Which makes sense because why would we return something about the RO to a potentially unknown end-user? (the only problem is that this wording is buried in the text, so it's hard to understand) Back to the general case: The RO is in control of policies at the AS. The end-user is in control of a client and a request to the AS. The end-user and client may or may not have any prior relationship with the AS. The subject info is what is known by the AS (in theory could be RO and maybe end-user, but notice that in the terminology, we define the end-user as a natural person and the RO as a subject entity). I think we should clarify when we could have a remote RO too. Then do we even return a result if the AS knows something about the end-user? (different from RO in that case). Plus we need to keep in mind that we also use sub_ids for URL In general, we might want to avoid the term \"user\" as it is not specific enough.\nThe title has been changed into \"Requesting Subject information\". This title is incorrect as long as a subject can be also an organization or a device. An AS can hold information about an end-user or about RO when the RO is a human being. It does not hold information about organizations or devices. However, an end-user may belong to some organizations or/and use some devices. IMO, I would replace it by \"Requesting end-user Information\". If an AS holds information about an end-user, it seems logical that the end-user should have access to it. If it happens that the end-user has also a RO account on the AS, then it should also have access to it. 
In such a case, another section with the following title \"Requesting RO Information\" should be created. RO = end-user was a common case in OAuth. In GNAP and in the general case a RO is different from an end-user and an AS may have no relationship with any RO (e.g. when attributes only are being used).\nI guess you refer to \"2.2. Requesting Subject Information\". We might have to harmonize a bit, but for instance \"2.3. Identifying the Client Instance\" might very well hold information about devices or at least the software running on it (that's the point of client software attestation to be discussed in ) We don't always know in advance whether it will be a RO or an end-user only, so the proposal isn't always applicable.\nI was indeed referring to \"2.2. Requesting Subject Information\" which has no relationship with \"2.3. Identifying the Client Instance\". This means that your arguments do not apply to section 2.2. and, by implication, that my argumentation currently remains valid.\nWell, I take the point in consideration but I don't think we should base our reasoning on the structure of sections, which can change\nIdentifying the User by Reference: Editor's note: We might be able to fold this function into an unstructured user assertion reference issued by the AS to the RC. We could put it in as an assertion type of \"gnap_reference\" or something like that. Downside: it's more verbose and potentially confusing to the client developer to have an assertion-like thing that's internal to the AS and not an assertion.\nWhy an assertion? Is that reference supposed to be linked to a session (cf \"represent the same end-user to the AS over subsequent requests\") or can it be persistent? I would see that more as an opaque identifier, rather than an assertion. Something like: asref seems a bit specific to GNAP (reference managed by the AS) but it could always be read \"as a reference\" in a more general setting. 
There's a variety of generic names that we could choose if needed, localuid, internal_uid, etc. But that really asks what additional types we could define as part of secevents (section 7.1.2). Beyond the case mentioned here, it would be most useful as an opaque identifier when RO != end-user, to avoid leaking private info about the RO (although currently we don't seem to cover that case yet) It's indeed more verbose than \"user\": \"XUT2MFM1XBIKJKSDU8QM\" but at least it's always the same structure (doesn't depend whether it's a reference or not), and it also makes the type more explicit.\nA subject identifier makes sense here, and I agree that it should be a new value to let us constrain it to the GNAP context.\nFixed by SECEVENT opaque identifiers\nI don't think this goes far enough -- I am thinking that with the opaque identifiers we can now simply return a with a format of that the client can be allowed to send in a follow-up request if it wants to. This way they all get treated equally, there's no special field anymore.", "new_text": "information through an identity API. The definition of such an identity API is out of scope for this specification. Subject identifiers returned by the AS SHOULD uniquely identify the RO at the AS. Some forms of subject identifier are opaque to the client instance (such as the subject of an issuer and subject pair),"}
{"id": "q-en-gnap-core-protocol-7e99d6cb0c5845f8f0c5da33844c044a06ff55f5557818ffe6d50f4bccf2229c", "old_text": "All dynamically generated handles are returned as fields in the root JSON object of the response. This specification defines the following dynamic handle returns, additional handles can be defined in IANA. A string value used to represent the information in the \"client\" object that the client instance can use in a future request, as described in request-instance. A string value used to represent the current user. The client instance can use in a future request, as described in request- user-reference. This non-normative example shows two handles along side an issued access token. [[ See issue #77 [23] ]]", "comments": "Opaque identifiers provide a generic reference, so we can simplify the text. (mainly), and as a byproduct also and - removed userhandle from main response in section 3 changed title of section 3.4 (User -> Subject) and moved the text that requires RO = end-user at the top of the section, for clarity changed name (userhandle -> user) and wording in section 3.5. Maybe we could also simplify the title \"Returning Dynamically-bound Reference Handles\" into \"Returning Reference Handles\" (didn't do it yet).\nDeploy Preview for gnap-core-protocol-editors-draft ready! Explore the source changes: 73de700ce82846ec5e09f8f60550fac7bee0dfb5 Inspect the deploy log: Browse the preview:\nWith the adoption of the subject identifier format, we no longer need to have a GNAP-specific value and can instead leverage this more standard field. Alternatively, we could have a GNAP-specific subject identifier format, instead of a separate field.\nIn section 2.4 (Requesting User Information), the first sentence starts with: If the client instance is requesting information about the RO from the AS, ... From the title, the acronym \"RO\" should be changed into \"end-user\". The case where the RO is also the end-user is a specific case. 
This leads to the following change: If the client instance is requesting information about the end-user from the AS, ... In the following sentences of this section, RO should be changed into end-user.\nThe current text really means RO, and generally considers the case end-user = RO. Any other case would even trigger an error: \"If the identified end-user does not match the RO present at the AS during an interaction step, the AS SHOULD reject the request with an error.\" (section 2.4) Which makes sense because why would we return something about the RO to a potentially unknown end-user? (the only problem is that this wording is buried in the text, so it's hard to understand) Back to the general case: The RO is in control of policies at the AS. The end-user is in control of a client and a request to the AS. The end-user and client may or may not have any prior relationship with the AS. The subject info is what is known by the AS (in theory could be RO and maybe end-user, but notice that in the terminology, we define the end-user as a natural person and the RO as a subject entity). I think we should clarify when we could have a remote RO too. Then do we even return a result if the AS knows something about the end-user? (different from RO in that case). Plus we need to keep in mind that we also use sub_ids for URL In general, we might want to avoid the term \"user\" as it is not specific enough.\nThe title has been changed into \"Requesting Subject information\". This title is incorrect as long as a subject can be also an organization or a device. An AS can hold information about an end-user or about RO when the RO is a human being. It does not hold information about organizations or devices. However, an end-user may belong to some organizations or/and use some devices. IMO, I would replace it by \"Requesting end-user Information\". If an AS holds information about an end-user, it seems logical that the end-user should have access to it. 
If it happens that the end-user has also a RO account on the AS, then it should also have access to it. In such a case, another section with the following title \"Requesting RO Information\" should be created. RO = end-user was a common case in OAuth. In GNAP and in the general case a RO is different from an end-user and an AS may have no relationship with any RO (e.g. when attributes only are being used).\nI guess you refer to \"2.2. Requesting Subject Information\". We might have to harmonize a bit, but for instance \"2.3. Identifying the Client Instance\" might very well hold information about devices or at least the software running on it (that's the point of client software attestation to be discussed in ) We don't always know in advance whether it will be a RO or an end-user only, so the proposal isn't always applicable.\nI was indeed referring to \"2.2. Requesting Subject Information\" which has no relationship with \"2.3. Identifying the Client Instance\". This means that your arguments do not apply to section 2.2. and, by implication, that my argumentation currently remains valid.\nWell, I take the point in consideration but I don't think we should base our reasoning on the structure of sections, which can change\nIdentifying the User by Reference: Editor's note: We might be able to fold this function into an unstructured user assertion reference issued by the AS to the RC. We could put it in as an assertion type of \"gnap_reference\" or something like that. Downside: it's more verbose and potentially confusing to the client developer to have an assertion-like thing that's internal to the AS and not an assertion.\nWhy an assertion? Is that reference supposed to be linked to a session (cf \"represent the same end-user to the AS over subsequent requests\") or can it be persistent? I would see that more as an opaque identifier, rather than an assertion. 
Something like: asref seems a bit specific to GNAP (reference managed by the AS) but it could always be read \"as a reference\" in a more general setting. There's a variety of generic names that we could choose if needed, localuid, internal_uid, etc. But that really asks what additional types we could define as part of secevents (section 7.1.2). Beyond the case mentioned here, it would be most useful as an opaque identifier when RO != end-user, to avoid leaking private info about the RO (although currently we don't seem to cover that case yet) It's indeed more verbose than \"user\": \"XUT2MFM1XBIKJKSDU8QM\" but at least it's always the same structure (doesn't depend whether it's a reference or not), and it also makes the type more explicit.\nA subject identifier makes sense here, and I agree that it should be a new value to let us constrain it to the GNAP context.\nFixed by SECEVENT opaque identifiers\nI don't think this goes far enough -- I am thinking that with the opaque identifiers we can now simply return a with a format of that the client can be allowed to send in a follow-up request if it wants to. This way they all get treated equally, there's no special field anymore.", "new_text": "All dynamically generated handles are returned as fields in the root JSON object of the response. This specification defines the following dynamic handle return; additional handles can be defined in IANA. A string value used to represent the information in the \"client\" object that the client instance can use in a future request, as described in request-instance. This non-normative example shows one handle alongside an issued access token. [[ See issue #77 [23] ]]"}
{"id": "q-en-gnap-core-protocol-603d1b4130e0e882676ba3c5431f5d61d56f01280a6d55343905c825ac73f1cc", "old_text": "REQUIRED. The URI at which the client instance can make continuation requests. This URI MAY vary per request, or MAY be stable at the AS if the AS includes an access token. The client instance MUST use this value exactly as given when making a continue-request. RECOMMENDED. The amount of time in integer seconds the client instance SHOULD wait after receiving this continuation handle and", "comments": "In the current version, the AS must include an access token. \"if the AS includes an access token\" sounds like the inclusion of an access token is optional.\nDeploy Preview for gnap-core-protocol-editors-draft ready! Built Explore the source changes: 323dd6c268368857c2d9942048f94658b806b8ad Inspect the deploy log: Browse the preview:", "new_text": "REQUIRED. The URI at which the client instance can make continuation requests. This URI MAY vary per request, or MAY be stable at the AS. The client instance MUST use this value exactly as given when making a continue-request. RECOMMENDED. The amount of time in integer seconds the client instance SHOULD wait after receiving this continuation handle and"}
{"id": "q-en-gnap-core-protocol-8ce690da2381168416a76fe381afe818d7a7afc4d566090a47c2403731106580", "old_text": "If the AS determines that the request cannot be issued for any reason, it responds to the client instance with an error message. The error code is one of the following, with additional values available in IANA: [[ See issue #79 [15] ]] 3.7.", "comments": "for\nDeploy Preview for gnap-core-protocol-editors-draft ready! Explore the source changes: c7273a059b212bf18879f131a5a18d9a776a2918 Inspect the deploy log: Browse the preview:\nError Response: Editor's note: I think we will need a more robust error mechanism, and we need to be more clear about what error states are allowed in what circumstances. Additionally, is the \"error\" parameter exclusive with others in the return?\nAs a starting point, I would suggest adding these sentences The error code parameter MUST be required and its value SHOULD be limited to ASCII characters. The error code indicates that a request is malformed. The AS MUST return an HTTP 401 (Unauthorized) status code to indicate that it can't identify a client instance (the error code can be ). The AS MUST return an HTTP 401 (Unauthorized) status code to indicate that it can't authenticate a client instance (the error code can be ). The AS MAY uncover additional information about the cause of the error when it can authenticate a client instance (the error code can be , , and so on).\nAddressed by , still need error codes for things like token management.\nThe type of the of the error property in a response is currently ambiguous. In section 3 it is defined as an and in section 3.6 it is defined as a .\nNAME thanks for catching that -- looks like both \"error\" and \"errordescription\" should be added to the list of parameters. 
I think the intent might have been for the value of \"error\" to be an object with \"errorcode\" and \"error_description\" fields inside of it though, so it might be better to align things in that direction instead. Do you have thoughts on which is preferable? Interop testing of error conditions is really difficult in practice, so this is a great catch!\nYaron: The AS MUST ignore any unknown extension. Justin: I\u2019m OK with this stance, but it needs to be consistent throughout the protocol, not just at this level. Yaron: we can actually have one policy for an AS that doesn't understand something defined in the RFC (e.g. a token binding method) vs. extensions. Unknown extensions clearly must be ignored, otherwise you cannot deploy extensions incrementally. For RFC elements, this is more nuanced.\nWith the guidance on extension points from and error codes from and similar, this should now be covered by the current text.", "new_text": "If the AS determines that the request cannot be issued for any reason, it responds to the client instance with an error message. [[ See issue #79 [15] ]] 3.7."}
{"id": "q-en-gnap-core-protocol-5dbaefbba5dd801c29d1376eb10cc372134b187e55e86bed6ca4ac8bbeaa8f28", "old_text": "Any keys presented by the client instance to the AS or RS MUST be validated as part of the request in which they are presented. The type of binding used is indicated by the proof parameter of the key object in key-format. Values defined by this specification are as follows: Additional proofing methods are defined by IANA. All key binding methods used by this specification MUST cover all relevant portions of the request, including anything that would change the nature of the request, to allow for secure validation of", "comments": "Expands the key parameter to allow it to be an object or a string, to facilitate methods with multiple parameters such as HTTPSig.\nTo edit notification comments on pull requests, go to your .\nSome key proofing methods have additional options that could be signaled in the GNAP protocol structure. Notable, HTTP Message Signatures has the ability to use different HTTP Signing Algorithms and different HTTP Digest Algorithms. The field could be changed to an object to accomodate this kind of use, allowing each proofing method to define its own parameters: Each method can also define \"Default\" values for missing parameters, allowing this to collapse in the simple case back to a string as it is today: These string values would have a deterministic expansion defined in the core protocol.\nNote that the httpsig document itself wavers between and and does not require either of them. Though maybe this is OK if the current document makes mandatory for the appropriate cases.\nThere are some updates pending in the Digest draft -- once that's out, we'll reference using as that's the most appropriate for our use case, and examples will be updated.", "new_text": "Any keys presented by the client instance to the AS or RS MUST be validated as part of the request in which they are presented. 
The type of binding used is indicated by the \"proof\" parameter of the key object in key-format. This parameter is formally specified by an object with at least the following member: Individual methods MAY define additional parameters as members in this object. Values for the \"method\" defined by this specification are as follows: Additional proofing methods are defined by IANA. For example, the \"httpsig\" method can be specified with its parameters as: If additional parameters are not required or used for a specific method, the method MAY be passed as a string instead of an object. For example, the \"mtls\" method with no additional parameters could be sent by the client instance as: The AS would map this to the equivalent expanded form as follows: All key binding methods used by this specification MUST cover all relevant portions of the request, including anything that would change the nature of the request, to allow for secure validation of"}
{"id": "q-en-gnap-core-protocol-5dbaefbba5dd801c29d1376eb10cc372134b187e55e86bed6ca4ac8bbeaa8f28", "old_text": "7.3.1. This method is indicated by \"httpsig\" in the \"proof\" field. The signer creates an HTTP Message Signature as described in I-D.ietf- httpbis-message-signatures. The covered components of the signature MUST include the following:", "comments": "Expands the key parameter to allow it to be an object or a string, to facilitate methods with multiple parameters such as HTTPSig.\nTo edit notification comments on pull requests, go to your .\nSome key proofing methods have additional options that could be signaled in the GNAP protocol structure. Notable, HTTP Message Signatures has the ability to use different HTTP Signing Algorithms and different HTTP Digest Algorithms. The field could be changed to an object to accomodate this kind of use, allowing each proofing method to define its own parameters: Each method can also define \"Default\" values for missing parameters, allowing this to collapse in the simple case back to a string as it is today: These string values would have a deterministic expansion defined in the core protocol.\nNote that the httpsig document itself wavers between and and does not require either of them. Though maybe this is OK if the current document makes mandatory for the appropriate cases.\nThere are some updates pending in the Digest draft -- once that's out, we'll reference using as that's the most appropriate for our use case, and examples will be updated.", "new_text": "7.3.1. This method is indicated by the method value \"httpsig\". The signer creates an HTTP Message Signature as described in I-D.ietf-httpbis- message-signatures. This method defines the following parameters: The covered components of the signature MUST include the following:"}
{"id": "q-en-gnap-core-protocol-5dbaefbba5dd801c29d1376eb10cc372134b187e55e86bed6ca4ac8bbeaa8f28", "old_text": "7.3.2. This method is indicated by \"mtls\" in the \"proof\" field. The signer presents its TLS client certificate during TLS negotiation with the verifier. In this example, the certificate is communicated to the application through the \"Client-Cert\" header from a TLS reverse proxy, leading to", "comments": "Expands the key parameter to allow it to be an object or a string, to facilitate methods with multiple parameters such as HTTPSig.\nTo edit notification comments on pull requests, go to your .\nSome key proofing methods have additional options that could be signaled in the GNAP protocol structure. Notable, HTTP Message Signatures has the ability to use different HTTP Signing Algorithms and different HTTP Digest Algorithms. The field could be changed to an object to accomodate this kind of use, allowing each proofing method to define its own parameters: Each method can also define \"Default\" values for missing parameters, allowing this to collapse in the simple case back to a string as it is today: These string values would have a deterministic expansion defined in the core protocol.\nNote that the httpsig document itself wavers between and and does not require either of them. Though maybe this is OK if the current document makes mandatory for the appropriate cases.\nThere are some updates pending in the Digest draft -- once that's out, we'll reference using as that's the most appropriate for our use case, and examples will be updated.", "new_text": "7.3.2. This method is indicated by the method value \"mtls\". This method defines no additional parameters. The signer presents its TLS client certificate during TLS negotiation with the verifier. In this example, the certificate is communicated to the application through the \"Client-Cert\" header from a TLS reverse proxy, leading to"}
{"id": "q-en-gnap-core-protocol-5dbaefbba5dd801c29d1376eb10cc372134b187e55e86bed6ca4ac8bbeaa8f28", "old_text": "7.3.3. This method is indicated by \"jwsd\" in the \"proof\" field. A JWS RFC7515 object is created as follows: To protect the request, the JOSE header of the signature contains the following claims:", "comments": "Expands the key parameter to allow it to be an object or a string, to facilitate methods with multiple parameters such as HTTPSig.\nTo edit notification comments on pull requests, go to your .\nSome key proofing methods have additional options that could be signaled in the GNAP protocol structure. Notable, HTTP Message Signatures has the ability to use different HTTP Signing Algorithms and different HTTP Digest Algorithms. The field could be changed to an object to accomodate this kind of use, allowing each proofing method to define its own parameters: Each method can also define \"Default\" values for missing parameters, allowing this to collapse in the simple case back to a string as it is today: These string values would have a deterministic expansion defined in the core protocol.\nNote that the httpsig document itself wavers between and and does not require either of them. Though maybe this is OK if the current document makes mandatory for the appropriate cases.\nThere are some updates pending in the Digest draft -- once that's out, we'll reference using as that's the most appropriate for our use case, and examples will be updated.", "new_text": "7.3.3. This method is indicated by the method value \"jwsd\". This method defines no additional parameters. A JWS RFC7515 object is created as follows: To protect the request, the JOSE header of the signature contains the following claims:"}
{"id": "q-en-gnap-core-protocol-5dbaefbba5dd801c29d1376eb10cc372134b187e55e86bed6ca4ac8bbeaa8f28", "old_text": "7.3.4. This method is indicated by \"jws\" in the \"proof\" field. A JWS RFC7515 object is created as follows: To protect the request, the JWS header contains the following claims.", "comments": "Expands the key parameter to allow it to be an object or a string, to facilitate methods with multiple parameters such as HTTPSig.\nTo edit notification comments on pull requests, go to your .\nSome key proofing methods have additional options that could be signaled in the GNAP protocol structure. Notable, HTTP Message Signatures has the ability to use different HTTP Signing Algorithms and different HTTP Digest Algorithms. The field could be changed to an object to accomodate this kind of use, allowing each proofing method to define its own parameters: Each method can also define \"Default\" values for missing parameters, allowing this to collapse in the simple case back to a string as it is today: These string values would have a deterministic expansion defined in the core protocol.\nNote that the httpsig document itself wavers between and and does not require either of them. Though maybe this is OK if the current document makes mandatory for the appropriate cases.\nThere are some updates pending in the Digest draft -- once that's out, we'll reference using as that's the most appropriate for our use case, and examples will be updated.", "new_text": "7.3.4. This method is indicated by the method value \"jws\". This method defines no additional parameters. A JWS RFC7515 object is created as follows: To protect the request, the JWS header contains the following claims."}
{"id": "q-en-gnap-resource-servers-c70ea61fa2d12d69c6c48e0077da07fbe4cee9e6ebbb73e526ff4e77575ec7b6", "old_text": "This document contains non-normative examples of partial and complete HTTP messages, JSON structures, URLs, query components, keys, and other elements. Some examples use a single trailing backslash '' to indicate line wrapping for long values, as per RFC8792. The \"\" character and leading spaces on wrapped lines are not part of the value.", "comments": "Deploy Preview for gnap-resource-servers-editors-draft ready! Built Explore the source changes: 611af53328d71bac58f517702c5775a9c514d88d Inspect the deploy log: Browse the preview:", "new_text": "This document contains non-normative examples of partial and complete HTTP messages, JSON structures, URLs, query components, keys, and other elements. Some examples use a single trailing backslash \"\" to indicate line wrapping for long values, as per RFC8792. The \"\" character and leading spaces on wrapped lines are not part of the value."}
{"id": "q-en-groupcomm-bis-d2b76741c208145052a1dedba86c17a88206cf1be734f26eb0b405dce27b260d", "old_text": "This section defines how proxies operate in a group communication scenario. In particular, sec-proxy-forward defines operations of forward-proxies, while sec-proxy-reverse defines operations of reverse-proxies. 3.5.1.", "comments": "text for proxy security cases\nReverse proxy may act as origin server and hence client doesn't use e2e security to server group. However, in other cases a reverse proxy may assume the client uses e2e OSCORE security and not add any security by itself. To consider all the different, possible/valid cases for this. And see if it's different for forward proxies.\nOne particular example is a client that connects over CoAP+TLS+TCP (or CoAP+DTLS) to the reverse-proxy, where the proxy uses OSCORE security for the multicast request. The client is not aware of the (group) security material which the proxy and servers have.\nAt high level, there are 4 different potential cases of \"proxying with security\". Forward proxy with e2e security [=Group oscore e2e via proxy] Forward proxy with two-leg security [=unicast OSCORE or DTLS on 1st leg, with group OSCORE on 2nd leg] Reverse proxy with e2e security [=Group oscore e2e via proxy] Reverse proxy with two-leg security [=unicast OSCORE or DTLS on 1st leg, with group OSCORE on 2nd leg] Currently the draft doesn't say anything about these combinations, which are valid, which not, and where to find the security solution to them (e.g. a pointer to draft-groupcomm-proxy for covering most of these cases) This text/analysis could be a new Section 5.3 \"Proxy Security\" or so. Note: cases 2 and 4 depend on a 100% trusted proxy so no e2e security model. The client in this case is not even aware necessarily of the existence of Group-OSCORE. The proxy stores the \"group member identity\" to be used to make the Group OSCORE protected request. \nThis may be a single identity, or a separate identity for each authorized user of the Proxy service. Case 1 and 3 most likely means OSCORE-in-OSCORE is needed?\nIt seems like a good idea. Intermediaries are usually considered non-trusted, hence the interest for an actual e2e security model between origin client and origin servers. How would the proxy store and use a \"group member identity\" ? Even if the proxy somehow obtains multiple different Sender IDs from the Group Manager (i.e., one for each authorized user of the proxy service), the proxy would have to sign a Group OSCORE protected request with a private key of its own (possibly one per corresponding Sender ID), which is not in control of the actual origin client. With such a level of trust on the proxy, and with a client not interested in e2e security anyway, it would be just easier to think of the proxy simply as another single member of the same OSCORE group including the servers, thus sacrificing verifiable data authorship/origin. Maybe this might be useful/applicable for case 4, with a client anyway likely unaware of what happens beyond the reverse-proxy and to be actually reaching a group of servers. Not sure about case 2, where the client would be aware of an actual group of servers it tries to reach. Yes, when exactly OSCORE is also conveniently used between Client and Proxy, to fulfill REQ2 from URL : REQ2. The CoAP proxy MUST identify a client sending a CoAP group request, in order to verify whether the client is allowed-listed to do so. For example, this can rely on one of the following ... Note that this requirement applies even in case end-to-end security is not enforced between client and servers.\nAgree, in the two-legs security case it seems sufficient that the proxy has one \"identity\" within the group. And I agree that this proxy would most likely be a reverse proxy Case 4 i.e. it operates like a (trusted!) origin server. \nBy definition of a reverse proxy , it has to be trusted just like a client trusts an origin server with being able to know and manipulate the served content.\nThis issue was addressed in Section 5.3 of version -04. NAME , can we close it?\nYes!\nLooks good! See inline some proposed rephrasing.", "new_text": "This section defines how proxies operate in a group communication scenario. In particular, sec-proxy-forward defines operations of forward-proxies, while sec-proxy-reverse defines operations of reverse-proxies. Security operations for a proxy are discussed later in chap-proxy-security. 3.5.1."}
{"id": "q-en-groupcomm-bis-d2b76741c208145052a1dedba86c17a88206cf1be734f26eb0b405dce27b260d", "old_text": "groupcomm, and it is RECOMMENDED to perform them according to the approach described in I-D.ietf-ace-key-groupcomm-oscore. 6. This section provides security considerations for CoAP group", "comments": "text for proxy security cases\nReverse proxy may act as origin server and hence client doesn't use e2e security to server group. However, in other cases a reverse proxy may assume the client uses e2e OSCORE security and not add any security by itself. To consider all the different, possible/valid cases for this. And see if it's different for forward proxies.\nOne particular example is a client that connects over CoAP+TLS+TCP (or CoAP+DTLS) to the reverse-proxy, where the proxy uses OSCORE security for the multicast request. The client is not aware of the (group) security material which the proxy and servers have.\nAt high level, there are 4 different potential cases of \"proxying with security\". Forward proxy with e2e security [=Group oscore e2e via proxy] Forward proxy with two-leg security [=unicast OSCORE or DTLS on 1st leg, with group OSCORE on 2nd leg] Reverse proxy with e2e security [=Group oscore e2e via proxy] Reverse proxy with two-leg security [=unicast OSCORE or DTLS on 1st leg, with group OSCORE on 2nd leg] Currently the draft doesn't say anything about these combinations, which are valid, which not, and where to find the security solution to them (e.g. a pointer to draft-groupcomm-proxy for covering most of these cases) This text/analysis could be a new Section 5.3 \"Proxy Security\" or so. Note: cases 2 and 4 depend on a 100% trusted proxy so no e2e security model. The client in this case is not even aware necessarily of the existence of Group-OSCORE. The proxy stores the \"group member identity\" to be used to make the Group OSCORE protected request. \nThis may be a single identity, or a separate identity for each authorized user of the Proxy service. Case 1 and 3 most likely means OSCORE-in-OSCORE is needed?\nIt seems like a good idea. Intermediaries are usually considered non-trusted, hence the interest for an actual e2e security model between origin client and origin servers. How would the proxy store and use a \"group member identity\" ? Even if the proxy somehow obtains multiple different Sender IDs from the Group Manager (i.e., one for each authorized user of the proxy service), the proxy would have to sign a Group OSCORE protected request with a private key of its own (possibly one per corresponding Sender ID), which is not in control of the actual origin client. With such a level of trust on the proxy, and with a client not interested in e2e security anyway, it would be just easier to think of the proxy simply as another single member of the same OSCORE group including the servers, thus sacrificing verifiable data authorship/origin. Maybe this might be useful/applicable for case 4, with a client anyway likely unaware of what happens beyond the reverse-proxy and to be actually reaching a group of servers. Not sure about case 2, where the client would be aware of an actual group of servers it tries to reach. Yes, when exactly OSCORE is also conveniently used between Client and Proxy, to fulfill REQ2 from URL : REQ2. The CoAP proxy MUST identify a client sending a CoAP group request, in order to verify whether the client is allowed-listed to do so. For example, this can rely on one of the following ... Note that this requirement applies even in case end-to-end security is not enforced between client and servers.\nAgree, in the two-legs security case it seems sufficient that the proxy has one \"identity\" within the group. And I agree that this proxy would most likely be a reverse proxy Case 4 i.e. it operates like a (trusted!) origin server. \nBy definition of a reverse proxy , it has to be trusted just like a client trusts an origin server with being able to know and manipulate the served content.\nThis issue was addressed in Section 5.3 of version -04. NAME , can we close it?\nYes!\nLooks good! See inline some proposed rephrasing.", "new_text": "groupcomm, and it is RECOMMENDED to perform them according to the approach described in I-D.ietf-ace-key-groupcomm-oscore. 5.3. Different solutions may be selected for secure group communication via a proxy depending on proxy type, use case and deployment requirements. In this section the options based on Group OSCORE are listed. For a client performing a group communication request via a forward- proxy, end-to-end security should be implemented. The client then creates a group request protected with Group OSCORE and unicasts this to the proxy. The proxy adapts the request from a forward-proxy request to a regular request and multicasts this adapted request to the indicated CoAP group. During the adaptation, the security provided by Group OSCORE persists, in either case of using the group mode or using the pairwise mode. The first leg of communication from client to proxy can optionally be further protected, e.g., by using (D)TLS and/or OSCORE. For a client performing a group communication request via a reverse- proxy, either end-to-end-security or hop-by-hop security can be implemented. The case of end-to-end security is the same as for the forward-proxy case. The case of hop-by-hop security is only possible if the proxy can be completely trusted and it is configured as a member of the OSCORE security group(s) that it needs to access, on behalf of clients. The first leg of communication between client and proxy is then protected with a security method for CoAP unicast, such as (D)TLS, OSCORE or a combination of such methods. The second leg between proxy and servers is protected using Group OSCORE. \nThis can be useful in applications where for example the origin client does not implement Group OSCORE, or the group management operations are confined to a particular network domain and the client is outside this domain. For all the above cases, more details on using Group OSCORE are defined in I-D.tiloca-core-groupcomm-proxy. 6. This section provides security considerations for CoAP group"}
{"id": "q-en-ietf-homenet-hna-2cbee0a784e57dab405444116c5eee44a9154dbb808a36226c6b81d2a48d25ad", "old_text": "secure delegation. The zone signing and secure delegation may be performed either by the Homenet Naming Authority or by the Outsourcing Infrastructure. sec-dnssec-deplyment discusses these two alternatives. sec-views discusses the consequences of publishing multiple representations of the same zone also commonly designated as views. This section provides guidance to limit the risks associated with multiple views. sec-reverse discusses management of the reverse zone. sec-renumbering discusses how renumbering should be handled. Finally, sec-privacy and sec-security respectively discuss privacy and security considerations when outsourcing the Homenet Zone. 3. Customer Premises Equipment: (CPE) is a router providing", "comments": "\u2026ument, dealing with issue\nhandling different views (7.0) .... some would like to have different views, This document deals with the view that the Internet gets from the CPE(s) only.", "new_text": "secure delegation. The zone signing and secure delegation may be performed either by the Homenet Naming Authority or by the Outsourcing Infrastructure. sec-dnssec-deplyment discusses these two alternatives. sec-reverse discusses management of the reverse zone. sec-renumbering discusses how renumbering should be handled. Finally, sec-privacy and sec-security respectively discuss privacy and security considerations when outsourcing the Homenet Zone. The Homenet Zone is expected to host public information only. It is not the scope of this DNS service to define local home network boundaries. Instead, local scope information is expected to be provided to the home network using local scope naming services. mDNS RFC6762 DNS-SD RFC6763 are two examples of these services. Currently mDNS is limited to a single link network. However, future protocols are expected to leverage this constraint as pointed out in RFC7558. 3. \nCustomer Premises Equipment: (CPE) is a router providing"}
{"id": "q-en-ietf-homenet-hna-2cbee0a784e57dab405444116c5eee44a9154dbb808a36226c6b81d2a48d25ad", "old_text": "Server. On the other hand, if the HNA signs the Homenet Zone itself, the zone would be collected by the Synchronization Server and directly transferred to the Public Authoritative Server(s). These policies are discussed and detailed in sec-dnssec-deplyment and sec- views. 4.2.", "comments": "\u2026ument, dealing with issue\nhandling different views (7.0) .... some would like to have different views, This document deals with the view that the Internet gets from the CPE(s) only.", "new_text": "Server. On the other hand, if the HNA signs the Homenet Zone itself, the zone would be collected by the Synchronization Server and directly transferred to the Public Authoritative Server(s). These policies are discussed and detailed in sec-dnssec-deplyment. 4.2."}
{"id": "q-en-ietf-homenet-hna-2cbee0a784e57dab405444116c5eee44a9154dbb808a36226c6b81d2a48d25ad", "old_text": "As depicted in fig-naming-arch and fig-auth-servers, the Public Homenet Zone is hosted on the Public Authoritative Server(s), whereas the Homenet Zone is hosted on the HNA. Motivations for keeping these two zones identical are detailed in sec-views, and this section considers that the HNA builds the zone that will be effectively published on the Public Authoritative Server(s). In other words \"Homenet to Public Zone transformation\" is the identity also commonly designated as \"no operation\" (NOP). In that case, the Homenet Zone should configure its Name Server RRset (NS) and Start of Authority (SOA) with the values associated with the", "comments": "\u2026ument, dealing with issue\nhandling different views (7.0) .... some would like to have different views, This document deals with the view that the Internet gets from the CPE(s) only.", "new_text": "As depicted in fig-naming-arch and fig-auth-servers, the Public Homenet Zone is hosted on the Public Authoritative Server(s), whereas the Homenet Zone is hosted on the HNA. This section considers that the HNA builds the zone that will be effectively published on the Public Authoritative Server(s). In other words \"Homenet to Public Zone transformation\" is the identity also commonly designated as \"no operation\" (NOP). In that case, the Homenet Zone should configure its Name Server RRset (NS) and Start of Authority (SOA) with the values associated with the"}
{"id": "q-en-ietf-homenet-hna-2cbee0a784e57dab405444116c5eee44a9154dbb808a36226c6b81d2a48d25ad", "old_text": "Reasons for signing the zone by the HNA are: 1) Keeping the Homenet Zone and the Public Homenet Zone equal to securely optimize DNS resolution. As the Public Zone is signed with DNSSEC, RRsets are authenticated, and thus DNS responses can be validated even though they are not provided by the", "comments": "\u2026ument, dealing with issue\nhandling different views (7.0) .... some would like to have different views, This document deals with the view that the Internet gets from the CPE(s) only.", "new_text": "Reasons for signing the zone by the HNA are: Keeping the Homenet Zone and the Public Homenet Zone equal to securely optimize DNS resolution. As the Public Zone is signed with DNSSEC, RRsets are authenticated, and thus DNS responses can be validated even though they are not provided by the"}
{"id": "q-en-ietf-homenet-hna-2cbee0a784e57dab405444116c5eee44a9154dbb808a36226c6b81d2a48d25ad", "old_text": "otherwise a resolution with the Public Authoritative Server(s) would be performed. 2) Keeping the Homenet Zone and the Public Homenet Zone equal to securely address the connectivity disruption independence detailed in RFC7368 section 4.4.1 and 3.7.5. As local lookups are possible in case of network disruption, communications within the home", "comments": "\u2026ument, dealing with issue\nhandling different views (7.0) .... some would like to have different views, This document deals with the view that the Internet gets from the CPE(s) only.", "new_text": "otherwise a resolution with the Public Authoritative Server(s) would be performed. Keeping the Homenet Zone and the Public Homenet Zone equal to securely address the connectivity disruption independence detailed in RFC7368 section 4.4.1 and 3.7.5. As local lookups are possible in case of network disruption, communications within the home"}
{"id": "q-en-ietf-homenet-hna-2cbee0a784e57dab405444116c5eee44a9154dbb808a36226c6b81d2a48d25ad", "old_text": "lookup would provide DNS as opposed to DNSSEC responses provided by the Public Authoritative Server(s). 3) Keeping the Homenet Zone and the Public Homenet Zone equal to guarantee coherence between DNS responses. Using a unique zone is one way to guarantee uniqueness of the responses among servers and places. Issues generated by different views are discussed in more details in sec-views. 4) Privacy and Integrity of the DNSSEC Homenet Zone are better guaranteed. When the Zone is signed by the HNA, it makes modification of the DNS data -- for example for flow redirection -- impossible. As a result, signing the Homenet Zone by the HNA provides better protection for end user privacy. Reasons for signing the zone by the Outsourcing Infrastructure are:", "comments": "\u2026ument, dealing with issue\nhandling different views (7.0) .... some would like to have different views, This document deals with the view that the Internet gets from the CPE(s) only.", "new_text": "lookup would provide DNS as opposed to DNSSEC responses provided by the Public Authoritative Server(s). Privacy and Integrity of the DNSSEC Homenet Zone are better guaranteed. When the Zone is signed by the HNA, it makes modification of the DNS data -- for example for flow redirection -- impossible. As a result, signing the Homenet Zone by the HNA provides better protection for end user privacy. Reasons for signing the zone by the Outsourcing Infrastructure are:"}
{"id": "q-en-ietf-homenet-hna-2cbee0a784e57dab405444116c5eee44a9154dbb808a36226c6b81d2a48d25ad", "old_text": "7. The Homenet Zone provides information about the home network. Some users may be tempted to have provide responses dependent on the origin of the DNS query. More specifically, some users may be tempted to provide a different view for DNS queries originating from the home network and for DNS queries coming from the Internet. Each view could then be associated with a dedicated Homenet Zone. Note that this document does not specify how DNS queries originating from the home network are addressed to the Homenet Zone. This could be done via hosting the DNS resolver on the HNA for example. This section is not normative. sec-reasons details why some nodes may only be reachable from the home network and not from the global Internet. sec-consequences briefly describes the consequences of having distinct views such as a \"home network view\" and an \"Internet view\". Finally, sec-guidance provides guidance on how to resolve names that are only significant in the home network, without creating different views. 7.1. The motivation for supporting different views is to provide different answers dependent on the origin of the DNS query, for reasons such as: 1: An end user may want to have services not published on the Internet. Services like the HNA administration interface that provides the GUI to administer your HNA might not seem advisable to publish on the Internet. Similarly, services like the mapper that registers the devices of your home network may also not be desirable to be published on the Internet. In both cases, these services should only be known or used by the network administrator. To restrict the access of such services, the home network administrator may choose to publish these pieces of information only within the home network, where it might be assumed that the users are more trusted than on the Internet. \nEven though this assumption may not be valid, at least this may reduce the surface of any attack. 2: Services within the home network may be reachable using non global IP addresses. IPv4 and NAT may be one reason. On the other hand IPv6 may favor link-local or site-local IP addresses. These IP addresses are not significant outside the boundaries of the home network. As a result, they MAY be published in the home network view, and SHOULD NOT be published in the Public Homenet Zone. 7.2. Enabling different views leads to a non-coherent naming system. Depending on where resolution is performed, some services will not be available. This may be especially inconvenient with devices with multiple interfaces that are attached both to the Internet via a 3G/4G interface and to the home network via a WLAN interface. Devices may also cache the results of name resolution, and these cached entries may no longer be valid if a mobile device moves between a homenet connection and an internet connection e.g. a device temporarily loses wifi signal and switches to 3G. Regarding local-scope IP addresses, such devices may end up with poor connectivity. Suppose, for example, that DNS resolution is performed via the WLAN interface attached to the HNA, and the response provides local-scope IP addresses, but the communication is initiated on the 3G/4G interface. Communications with local-scope addresses will be unreachable on the Internet, thus aborting the communication. The same situation occurs if a device is flip / flopping between various WLAN networks. Regarding DNSSEC, if the HNA does not sign the Homenet Zone and outsources the signing process, the two views are different, because one is protected with DNSSEC whereas the other is not. Devices with multiple interfaces will have difficulty securing the naming resolution, as responses originating from the home network may not be signed. \nFor devices with all its interfaces attached to a single administrative domain, that is to say the home network, or the Internet. Incoherence between DNS responses may still also occur if the device is able to perform DNS resolutions both using the DNS resolving server of the home network, or one of the ISP. DNS resolution performed via the HNA or the ISP resolver may be different than those performed over the Internet. 7.3. As documented in sec-consequences, it is RECOMMENDED to avoid different views. If network administrators choose to implement multiple views, impacts on devices' resolution SHOULD be evaluated. As a consequence, the Homenet Zone is expected to be an exact copy of the Public Homenet Zone. As a result, services that are not expected to be published on the Internet SHOULD NOT be part of the Homenet Zone, local-scope addresses SHOULD NOT be part of the Homenet Zone, and when possible, the HNA SHOULD sign the Homenet Zone. The Homenet Zone is expected to host public information only. It is not the scope of the DNS service to define local home network boundaries. Instead, local scope information is expected to be provided to the home network using local scope naming services. mDNS RFC6762 DNS-SD RFC6763 are two examples of these services. Currently mDNS is limited to a single link network. However, future protocols are expected to leverage this constraint as pointed out in RFC7558. 7.4. This section is focused on the Homenet Reverse Zone. Firstly, all considerations for the Homenet Zone apply to the Homenet", "comments": "\u2026ument, dealing with issue\nhandling different views (7.0) .... some would like to have different views, This document deals with the view that the Internet gets from the CPE(s) only.", "new_text": "7. This section is focused on the Homenet Reverse Zone. Firstly, all considerations for the Homenet Zone apply to the Homenet"}
{"id": "q-en-ietf-rats-wg-architecture-2f51421d37ac9fd3d8e0b327303ee7b9c5733c4b282d2811fa976fc6b4810a15", "old_text": "This document uses the following terms: Appraisal Policy for Evidence: A set of rules that direct how a verifier evaluates the validity of information about an Attester. Compare /security policy/ in [RFC4949]. Appraisal Policy for Attestation Result: A set of rules that direct how a Relying Party evaluates the validity of information about an Attester. Compare /security policy/ in [RFC4949]. Attestation Result: The evaluation results generated by a Verifier, typically including information about an Attester, where", "comments": "When comparing the term 'Appraisal Policy for Evidence' and 'Appraisal Policy for Attestation Result', they are both saying 'evaluates the validity of information about an Attester'. This makes me feel Verifier and Relying Party are doing the same things and they are redundant. I think it's more appropriate to say that the Relying Party evaluates the evaluation results generated by the Verifiers.", "new_text": "This document uses the following terms: Appraisal Policy for Evidence: A set of rules that direct how a Verifier evaluates the validity of information about an Attester. Compare /security policy/ in [RFC4949]. Appraisal Policy for Attestation Result: A set of rules that direct how a Relying Party uses the evaluation results about an Attester generated by the Verifiers. Compare /security policy/ in [RFC4949]. Attestation Result: The evaluation results generated by a Verifier, typically including information about an Attester, where"}
{"id": "q-en-ietf-rats-wg-architecture-7d9b1eff3a1eed491407ceb8225d61f9ed12a7e078e3f4d343c921adefa234f9", "old_text": "then accepts B as trustworthy, it can choose to accept B as a Verifier for other Attesters. Similarly, the Relying Party also needs to trust the Relying Party Owner for providing its Appraisal Policy for Attestation Results, and in some scenarios the Relying Party might even require that the", "comments": "TLS authentication is sufficient for establishing trust in the various entities before sending them privacy sensitive data. TLS is also widely supported today and attestation between servers is not. Most likely TLS will be the mechanism. This PR adds mention to in addition to attestation.\nAddresses\nsays that the way for an Endorser to trust and Verifier is to use attestation. While I won't categorically say this should never be done, it seems like this is overly complex and a somewhat misuse of attestation. I think of the Verifier as a service like a web site, not an implementation like a TEE, so TLS or similar, not attestations, seems like the more efficient and conventional way to do this. I think attestation is associated with actual HW and SW, for example the Dell server, Intel CPU and Web stack used to run the service. The operator of a Verifier gets to change those around without having to tell the people using the service. On the other hand, if a Verification Service changes owners, say from Chinese to American, then Endorser concerned about confidential stuff in the Endorsements really wants to know. I also think that Verifier Owner should not be lumped in with Endorser. The Verifier Owner is very likely to be tightly coupled organizationally with the Verifier, where the Endorser is not likely to be.\nPR addresses this well enough.\nReviewing the trust relationships described in the architecture document got me thinking about sharpening a distinction between device attestation (e.g., EAT for phone) and service authenticity (e.g. 
HTTPS for a web site): Device Attestation Focus is on the implementation, the HW and SW The legal entity of interest is the device manufacturer The claims are about the HW and the SW, not the device owner Example: a mobile phone (not its owner/user) Example: integrity of a remote device Service Authenticity Focus is on the provider of the service, not the HW or SW The legal entity of interest is the service provider There is no equivalent of claims, but if there was they would be about the business or person operating the service Example: a web site Example: an email provider (IMAP service) Service providers (e.g., web sites) use whatever HW and SW they want to use. As a user of the service we don\u2019t care what HW and SW they select. We trust the service provider to pick good HW and SW and to configure and operate it correctly. They might change the HW and the SW and as a user we won\u2019t know. For example we trust Bank of Xxx to operate their banking web site correctly with little concern for their HW. This is why attestation of the HW and SW is not so useful to the user of a service. Users of mobile phones and IoT devices are not trustworthy to pick secure HW and SW. They are not professionals like the people running web sites. There is little accountability against them if they pick bad SW and HW. Therefore we want attestation for these usage scenarios. This is why EAT, FIDO and Android Attestation exist. I don\u2019t think the line between device attestation and service authenticity is completely black and white. In some use cases attestation may be bent to provide some service authenticity. But I don\u2019t think it makes sense to add attestation to the millions of web sites out there, add support for it to browsers and make attestation an extra layer of security for web browsing. I think it is fine for a service provider to use attestation inside their service to monitor integrity and such of the HW and SW providing the service. 
But usually that won\u2019t be manifest outside the service. It could be awkward for the user of the service to have to re-evaluate the service every time the service provider changes HW. Another way to look at this is from a certification / regulatory view. If the thing you're looking at is a candidate for implementation certification such as Common Criteria, FIPS, FIDO certification, then it probably lines up with device attestation. On the other hand, if the thing you are looking at lines up with regulation (e.g., banking regulation, GDPR,...) and to some degree OWASP, then it probably lines up with service authenticity. I think the RATS architecture document should probably have text that describes when attestation is applicable and when it is not, for example saying something like \u201cattestation is not a replacement or augmentation for HTTPS/TLS\"\nIn the last editors session, the editor's rough consensus was that https does not provide the authenticity but it provides a secure channel. Instead, authenticity is checked via successful authentication rooted in identity documents (I hope I captured appropriately, please bash). Taking that into account, how does this affect this issue? Looking at the last paragraph, \"Remote attestation procedures are not a replacement for secure authentication, they are based on secure authentication.\" might be missing vital parts. Does anything need to be added or modified here?\nNote that is INCORRECTLY referenced here. Is the resolution of a different issue about HTTPS vs Attestation.\nI think comments from NAME do need to be taken into account when the text to address this issue is written. My text in opening the issue is not intended as the text to resolve the issue. 
Probably the text that resolves the issue needs to refer to \"HTTPS plus the web certificate infrastructure that includes public CA's and browsers and/or private CA's used in B-to-B...\" or such.\nI think the RATS architecture document should probably have text that describes when attestation is applicable and when it is not, for example saying something like \u201cattestation is not a replacement or augmentation for HTTPS/TLS\" The design team disagrees with this. Every authentication (including HTTPS) can be said in terms of attestation. It is the verifier that allows the decoupling of the RP to know the expected state of the Attester.", "new_text": "then accepts B as trustworthy, it can choose to accept B as a Verifier for other Attesters. As another example, the Relying Party can establish trust in the Verifier by out of band establishment of key material, combined with a protocol like TLS to communicate. There is an assumption that between the establishment of the trusted key material and the creation of the Evidence, that the Verifier has not been compromised. Similarly, the Relying Party also needs to trust the Relying Party Owner for providing its Appraisal Policy for Attestation Results, and in some scenarios the Relying Party might even require that the"}
{"id": "q-en-ietf-rats-wg-architecture-7d9b1eff3a1eed491407ceb8225d61f9ed12a7e078e3f4d343c921adefa234f9", "old_text": "Relying Party(s). In some cases where Evidence contains sensitive information, an Attester might even require that a Verifier first go through a remote attestation procedure with it before the Attester will send the sensitive Evidence. This can be done by having the Attester first act as a Verifier/Relying Party, and the Verifier act as its own Attester, as discussed above. 7.3.", "comments": "TLS authentication is sufficient for establishing trust in the various entities before sending them privacy sensitive data. TLS is also widely supported today and attestation between servers is not. Most likely TLS will be the mechanism. This PR adds mention to in addition to attestation.\nAddresses\nsays that the way for an Endorser to trust and Verifier is to use attestation. While I won't categorically say this should never be done, it seems like this is overly complex and a somewhat misuse of attestation. I think of the Verifier as a service like a web site, not an implementation like a TEE, so TLS or similar, not attestations, seems like the more efficient and conventional way to do this. I think attestation is associated with actual HW and SW, for example the Dell server, Intel CPU and Web stack used to run the service. The operator of a Verifier gets to change those around without having to tell the people using the service. On the other hand, if a Verification Service changes owners, say from Chinese to American, then Endorser concerned about confidential stuff in the Endorsements really wants to know. I also think that Verifier Owner should not be lumped in with Endorser. 
The Verifier Owner is very likely to be tightly coupled organizationally with the Verifier, where the Endorser is not likely to be.\nPR addresses this well enough.\nReviewing the trust relationships described in the architecture document got me thinking about sharpening a distinction between device attestation (e.g., EAT for phone) and service authenticity (e.g. HTTPS for a web site): Device Attestation Focus is on the implementation, the HW and SW The legal entity of interest is the device manufacturer The claims are about the HW and the SW, not the device owner Example: a mobile phone (not its owner/user) Example: integrity of a remote device Service Authenticity Focus is on the provider of the service, not the HW or SW The legal entity of interest is the service provider There is no equivalent of claims, but if there was they would be about the business or person operating the service Example: a web site Example: an email provider (IMAP service) Service providers (e.g., web sites) use whatever HW and SW they want to use. As a user of the service we don\u2019t care what HW and SW they select. We trust the service provider to pick good HW and SW and to configure and operate it correctly. They might change the HW and the SW and as a user we won\u2019t know. For example we trust Bank of Xxx to operate their banking web site correctly with little concern for their HW. This is why attestation of the HW and SW is not so useful to the user of a service. Users of mobile phones and IoT devices are not trustworthy to pick secure HW and SW. They are not professionals like the people running web sites. There is little accountability against them if they pick bad SW and HW. Therefore we want attestation for these usage scenarios. This is why EAT, FIDO and Android Attestation exist. I don\u2019t think the line between device attestation and service authenticity is completely black and white. In some use cases attestation may be bent to provide some service authenticity. 
But I don\u2019t think it makes sense to add attestation to the millions of web sites out there, add support for it to browsers and make attestation an extra layer of security for web browsing. I think it is fine for a service provider to use attestation inside their service to monitor integrity and such of the HW and SW providing the service. But usually that won\u2019t be manifest outside the service. It could be awkward for the user of the service to have to re-evaluate the service every time the service provider changes HW. Another way to look at this is from a certification / regulatory view. If the thing you're looking at is a candidate for implementation certification such as Common Criteria, FIPS, FIDO certification, then it probably lines up with device attestation. On the other hand, if the thing you are looking at lines up with regulation (e.g., banking regulation, GDPR,...) and to some degree OWASP, then it probably lines up with service authenticity. I think the RATS architecture document should probably have text that describes when attestation is applicable and when it is not, for example saying something like \u201cattestation is not a replacement or augmentation for HTTPS/TLS\"\nIn the last editors session, the editor's rough consensus was that https does not provide the authenticity but it provides a secure channel. Instead, authenticity is checked via successful authentication rooted in identity documents (I hope I captured appropriately, please bash). Taking that into account, how does this affect this issue? Looking at the last paragraph, \"Remote attestation procedures are not a replacement for secure authentication, they are based on secure authentication.\" might be missing vital parts. Does anything need to be added or modified here?\nNote that is INCORRECTLY referenced here. 
Is the resolution of a different issue about HTTPS vs Attestation.\nI think comments from NAME do need to be taken into account when the text to address this issue is written. My text in opening the issue is not intended as the text to resolve the issue. Probably the text that resolves the issue needs to refer to \"HTTPS plus the web certificate infrastructure that includes public CA's and browsers and/or private CA's used in B-to-B...\" or such.\nI think the RATS architecture document should probably have text that describes when attestation is applicable and when it is not, for example saying something like \u201cattestation is not a replacement or augmentation for HTTPS/TLS\" The design team disagrees with this. Every authentication (including HTTPS) can be said in terms of attestation. It is the verifier that allows the decoupling of the RP to know the expected state of the Attester.", "new_text": "Relying Party(s). In some cases where Evidence contains sensitive information, an Attester might even require that a Verifier first go through a TLS authentication or a remote attestation procedure with it before the Attester will send the sensitive Evidence. This can be done by having the Attester first act as a Verifier/Relying Party, and the Verifier act as its own Attester, as discussed above. 7.3."}
{"id": "q-en-ietf-rats-wg-architecture-7d9b1eff3a1eed491407ceb8225d61f9ed12a7e078e3f4d343c921adefa234f9", "old_text": "first act as an Attester, providing Evidence that the Owner can appraise, before the Owner would give the Relying Party an updated policy that might contain sensitive information. In such a case, mutual attestation might be needed, in which case typically one side's Evidence must be considered safe to share with an untrusted entity, in order to bootstrap the sequence. 7.4.", "comments": "TLS authentication is sufficient for establishing trust in the various entities before sending them privacy sensitive data. TLS is also widely supported today and attestation between servers is not. Most likely TLS will be the mechanism. This PR adds mention to in addition to attestation.\nAddresses\nsays that the way for an Endorser to trust and Verifier is to use attestation. While I won't categorically say this should never be done, it seems like this is overly complex and a somewhat misuse of attestation. I think of the Verifier as a service like a web site, not an implementation like a TEE, so TLS or similar, not attestations, seems like the more efficient and conventional way to do this. I think attestation is associated with actual HW and SW, for example the Dell server, Intel CPU and Web stack used to run the service. The operator of a Verifier gets to change those around without having to tell the people using the service. On the other hand, if a Verification Service changes owners, say from Chinese to American, then Endorser concerned about confidential stuff in the Endorsements really wants to know. I also think that Verifier Owner should not be lumped in with Endorser. 
The Verifier Owner is very likely to be tightly coupled organizationally with the Verifier, where the Endorser is not likely to be.\nPR addresses this well enough.\nReviewing the trust relationships described in the architecture document got me thinking about sharpening a distinction between device attestation (e.g., EAT for phone) and service authenticity (e.g. HTTPS for a web site): Device Attestation Focus is on the implementation, the HW and SW The legal entity of interest is the device manufacturer The claims are about the HW and the SW, not the device owner Example: a mobile phone (not its owner/user) Example: integrity of a remote device Service Authenticity Focus is on the provider of the service, not the HW or SW The legal entity of interest is the service provider There is no equivalent of claims, but if there was they would be about the business or person operating the service Example: a web site Example: an email provider (IMAP service) Service providers (e.g., web sites) use whatever HW and SW they want to use. As a user of the service we don\u2019t care what HW and SW they select. We trust the service provider to pick good HW and SW and to configure and operate it correctly. They might change the HW and the SW and as a user we won\u2019t know. For example we trust Bank of Xxx to operate their banking web site correctly with little concern for their HW. This is why attestation of the HW and SW is not so useful to the user of a service. Users of mobile phones and IoT devices are not trustworthy to pick secure HW and SW. They are not professionals like the people running web sites. There is little accountability against them if they pick bad SW and HW. Therefore we want attestation for these usage scenarios. This is why EAT, FIDO and Android Attestation exist. I don\u2019t think the line between device attestation and service authenticity is completely black and white. In some use cases attestation may be bent to provide some service authenticity. 
But I don\u2019t think it makes sense to add attestation to the millions of web sites out there, add support for it to browsers and make attestation an extra layer of security for web browsing. I think it is fine for a service provider to use attestation inside their service to monitor integrity and such of the HW and SW providing the service. But usually that won\u2019t be manifest outside the service. It could be awkward for the user of the service to have to re-evaluate the service every time the service provider changes HW. Another way to look at this is from a certification / regulatory view. If the thing you're looking at is a candidate for implementation certification such as Common Criteria, FIPS, FIDO certification, then it probably lines up with device attestation. On the other hand, if the thing you are looking at lines up with regulation (e.g., banking regulation, GDPR,...) and to some degree OWASP, then it probably lines up with service authenticity. I think the RATS architecture document should probably have text that describes when attestation is applicable and when it is not, for example saying something like \u201cattestation is not a replacement or augmentation for HTTPS/TLS\"\nIn the last editors session, the editor's rough consensus was that https does not provide the authenticity but it provides a secure channel. Instead, authenticity is checked via successful authentication rooted in identity documents (I hope I captured appropriately, please bash). Taking that into account, how does this affect this issue? Looking at the last paragraph, \"Remote attestation procedures are not a replacement for secure authentication, they are based on secure authentication.\" might be missing vital parts. Does anything need to be added or modified here?\nNote that is INCORRECTLY referenced here. 
Is the resolution of a different issue about HTTPS vs Attestation.\nI think comments from NAME do need to be taken into account when the text to address this issue is written. My text in opening the issue is not intended as the text to resolve the issue. Probably the text that resolves the issue needs to refer to \"HTTPS plus the web certificate infrastructure that includes public CA's and browsers and/or private CA's used in B-to-B...\" or such.\nI think the RATS architecture document should probably have text that describes when attestation is applicable and when it is not, for example saying something like \u201cattestation is not a replacement or augmentation for HTTPS/TLS\" The design team disagrees with this. Every authentication (including HTTPS) can be said in terms of attestation. It is the verifier that allows the decoupling of the RP to know the expected state of the Attester.", "new_text": "first act as an Attester, providing Evidence that the Owner can appraise, before the Owner would give the Relying Party an updated policy that might contain sensitive information. In such a case, mutual authentication or attestation might be needed, in which case typically one side's Evidence must be considered safe to share with an untrusted entity, in order to bootstrap the sequence. 7.4."}
{"id": "q-en-ietf-rats-wg-architecture-7d9b1eff3a1eed491407ceb8225d61f9ed12a7e078e3f4d343c921adefa234f9", "old_text": "Verifier Owner may need to trust the Verifier before giving the Endorsement, Reference Values, or Appraisal Policy to it. This can be done similarly to how a Relying Party might establish trust in a Verifier as discussed above, and in such a case, mutual attestation might even be needed as discussed in rpowner-trust. 8.", "comments": "TLS authentication is sufficient for establishing trust in the various entities before sending them privacy sensitive data. TLS is also widely supported today and attestation between servers is not. Most likely TLS will be the mechanism. This PR adds mention to in addition to attestation.\nAddresses\nsays that the way for an Endorser to trust and Verifier is to use attestation. While I won't categorically say this should never be done, it seems like this is overly complex and a somewhat misuse of attestation. I think of the Verifier as a service like a web site, not an implementation like a TEE, so TLS or similar, not attestations, seems like the more efficient and conventional way to do this. I think attestation is associated with actual HW and SW, for example the Dell server, Intel CPU and Web stack used to run the service. The operator of a Verifier gets to change those around without having to tell the people using the service. On the other hand, if a Verification Service changes owners, say from Chinese to American, then Endorser concerned about confidential stuff in the Endorsements really wants to know. I also think that Verifier Owner should not be lumped in with Endorser. 
The Verifier Owner is very likely to be tightly coupled organizationally with the Verifier, where the Endorser is not likely to be.\nPR addresses this well enough.\nReviewing the trust relationships described in the architecture document got me thinking about sharpening a distinction between device attestation (e.g., EAT for phone) and service authenticity (e.g. HTTPS for a web site): Device Attestation Focus is on the implementation, the HW and SW The legal entity of interest is the device manufacturer The claims are about the HW and the SW, not the device owner Example: a mobile phone (not its owner/user) Example: integrity of a remote device Service Authenticity Focus is on the provider of the service, not the HW or SW The legal entity of interest is the service provider There is no equivalent of claims, but if there was they would be about the business or person operating the service Example: a web site Example: an email provider (IMAP service) Service providers (e.g., web sites) use whatever HW and SW they want to use. As a user of the service we don\u2019t care what HW and SW they select. We trust the service provider to pick good HW and SW and to configure and operate it correctly. They might change the HW and the SW and as a user we won\u2019t know. For example we trust Bank of Xxx to operate their banking web site correctly with little concern for their HW. This is why attestation of the HW and SW is not so useful to the user of a service. Users of mobile phones and IoT devices are not trustworthy to pick secure HW and SW. They are not professionals like the people running web sites. There is little accountability against them if they pick bad SW and HW. Therefore we want attestation for these usage scenarios. This is why EAT, FIDO and Android Attestation exist. I don\u2019t think the line between device attestation and service authenticity is completely black and white. In some use cases attestation may be bent to provide some service authenticity. 
But I don\u2019t think it makes sense to add attestation to the millions of web sites out there, add support for it to browsers and make attestation an extra layer of security for web browsing. I think it is fine for a service provider to use attestation inside their service to monitor integrity and such of the HW and SW providing the service. But usually that won\u2019t be manifest outside the service. It could be awkward for the user of the service to have to re-evaluate the service every time the service provider changes HW. Another way to look at this is from a certification / regulatory view. If the thing you're looking at is a candidate for implementation certification such as Common Criteria, FIPS, FIDO certification, then it probably lines up with device attestation. On the other hand, if the thing you are looking at lines up with regulation (e.g., banking regulation, GDPR,...) and to some degree OWASP, then it probably lines up with service authenticity. I think the RATS architecture document should probably have text that describes when attestation is applicable and when it is not, for example saying something like \u201cattestation is not a replacement or augmentation for HTTPS/TLS\"\nIn the last editors session, the editor's rough consensus was that https does not provide the authenticity but it provides a secure channel. Instead, authenticity is checked via successful authentication rooted in identity documents (I hope I captured appropriately, please bash). Taking that into account, how does this affect this issue? Looking at the last paragraph, \"Remote attestation procedures are not a replacement for secure authentication, they are based on secure authentication.\" might be missing vital parts. Does anything need to be added or modified here?\nNote that is INCORRECTLY referenced here. 
Is the resolution of a different issue about HTTPS vs Attestation.\nI think comments from NAME do need to be taken into account when the text to address this issue is written. My text in opening the issue is not intended as the text to resolve the issue. Probably the text that resolves the issue needs to refer to \"HTTPS plus the web certificate infrastructure that includes public CA's and browsers and/or private CA's used in B-to-B...\" or such.\nI think the RATS architecture document should probably have text that describes when attestation is applicable and when it is not, for example saying something like \u201cattestation is not a replacement or augmentation for HTTPS/TLS\" The design team disagrees with this. Every authentication (including HTTPS) can be said in terms of attestation. It is the verifier that allows the decoupling of the RP to know the expected state of the Attester.", "new_text": "Verifier Owner may need to trust the Verifier before giving the Endorsement, Reference Values, or Appraisal Policy to it. This can be done similarly to how a Relying Party might establish trust in a Verifier as discussed above, and in such a case, mutual authentication or attestation might even be needed as discussed in rpowner-trust. 8."}
{"id": "q-en-ietf-rats-wg-architecture-b24380d2825743cfca6ade79b59e617fe3b7e4f3509de14b44e79b9c5e27bd98", "old_text": "transitively via an Endorser (e.g., the Verifier puts the Endorser's public key into its trust anchor store). In layered attestation, a root of trust is the initial Attesting Environment. Claims can be collected from or about each layer. The corresponding claims can be structured in a nested fashion that reflects the nesting of the Attester's layers. The device illustrated in layered includes (A) a BIOS stored in read- only memory, (B) an updatable bootloader, and (C) an operating system", "comments": "Added clarifying text -\n\" including roots of trust\" from .\nWe should add the sentence \"Normally, claims are not self-asserted by a layer or root of trust.\" in the section describing layered attester / layering. around line 1120?\nI'd not object that. I'd be okay with \"Normally, claims are not self-asserted by a root of trust.\"", "new_text": "transitively via an Endorser (e.g., the Verifier puts the Endorser's public key into its trust anchor store). In layered attestation, a root of trust is the initial Attesting Environment. Claims can be collected from or about each layer. The corresponding Claims can be structured in a nested fashion that reflects the nesting of the Attester's layers. Normally, Claims are not self-asserted, rather a previous layer acts as the Attesting Environment for the next layer. Claims about a root of trust typically are asserted by Endorsers. The device illustrated in layered includes (A) a BIOS stored in read- only memory, (B) an updatable bootloader, and (C) an operating system"}
{"id": "q-en-ietf-rats-wg-architecture-cb60b699a038dfe2abe438043f079f925790353470efa1e0aba4b27bf2778abc", "old_text": "different _Strength of Function_ strengthoffunction, which results in the Attestation Results being qualitatively different in strength. A result that indicates non-compliance can be used by an Attester (in the passport model) or a Relying Party (in the background-check model) to indicate that the Attester should not be treated as authorized and may be in need of remediation. In some cases, it may even indicate that the Evidence itself cannot be authenticated as being correct. An Attestation Result that indicates compliance can be used by a Relying Party to make authorization decisions based on the Relying Party's appraisal policy. The simplest such policy might be to simply authorize any party supplying a compliant Attestation Result signed by a trusted Verifier. A more complex policy might also entail comparing information provided in the result against Reference Values, or applying more complex logic on such information. Thus, Attestation Results often need to include detailed information about the Attester, for use by Relying Parties, much like physical", "comments": "Reworded 2nd and 3rd paragraphs\n` Is there a different way to say this? I assume in Passport, we\u2019re not relying on the Attester to report itself as untrustworthy? I think it\u2019s more that it doesn\u2019t get a passport, so it has nothing to hand the dude at the immigration desk? i.e. untrustworthiness is indicated by lack of a passport?\nThe paragraph should begin with \"An Attestation Result that indicates ...\" The Relying Party is always the entity / role that consumes the Attestation Result. The text that describes the Attester (in the passport model) technically violates the role interaction model. Instead, the refusal on behalf of an Attester (acting as a proxy for the Attestation Result) to convey the AR is a protocol or interaction break down scenario. 
And yes it is correct for the Relying Party to, by default, assume that lack of receipt of an Attestation Result means the Attester is presumed to be non-compliant. If the paragraph stated this simply up front then we can avoid dove tailing it into the description of what constitutes non-compliance behavior. For example, paragraph 2 could start differently: \"By default the Relying Party assumes the Attester does not comply with Verifier appraisals. Upon receipt of an authentic Attestation Result and given the Appraisal Policy for Relying Party is satisfied. The Attester's state of compliance changes to be in compliance and actions consistent with compliance may be applied. If the Appraisal Policy for Relying Party is not satisfied, actions consistent with non-compliance may be applied. For example, the Attester may require remediation and re-configuration or simply may be denied further access.\" The third paragraph seems redundant to this (and somewhat wordy).", "new_text": "different _Strength of Function_ strengthoffunction, which results in the Attestation Results being qualitatively different in strength. An Attestation Result that indicates non-compliance can be used by an Attester (in the passport model) or a Relying Party (in the background-check model) to indicate that the Attester should not be treated as authorized and may be in need of remediation. In some cases, it may even indicate that the Evidence itself cannot be authenticated as being correct. By default, the Relying Party does not believe the Attester to be compliant. Upon receipt of an authentic Attestation Result and given the Appraisal Policy for Attestation Results is satisfied, then the Attester is allowed to perform the prescribed actions or access. The simplest such Appraisal Policy might authorize granting the Attester full access or control over the resources guarded by the Relying Party. 
A more complex Appraisal Policy might involve using the information provided in the Attestation Result to compare against expected values, or to apply complex analysis of other information contained in the Attestation Result. Thus, Attestation Results often need to include detailed information about the Attester, for use by Relying Parties, much like physical"}
{"id": "q-en-ietf-rats-wg-architecture-aedc2640bfc031819576d4c01d2d05c9c2e2bd0a47d32cabc8a3f6d3723a8330", "old_text": "Relying Party or for a Verifier, then integrity of the process is compromised. The security of conveyed information may be applied at different layers, whether by a conveyance protocol, or an information encoding format. This architecture expects attestation messages (i.e., Evidence, Attestation Results, Endorsements, Reference Values, and Policies) are end-to-end protected based on the role interaction context. For example, if an Attester produces Evidence that is relayed through some other entity that doesn't implement the Attester or the intended Verifier roles, then the relaying entity should not", "comments": "rebased.\n` The first sentence seems to say \u2018you can do whatever you want\u2019, but the second seems to say \u201cyou should sign the messages\u201d This doesn\u2019t seem to be about integrity; it says Evidence must be confidential. Is that actually a requirement?\nI don't see where it says RATS messages have to be confidential. 'end-to-end' means between RATS roles. Which can be achieved at different nesting levels; where nesting is interpreted to mean nesting in the message (data) layers. Summary - I don't see a problem with the proposed text.\nI am inclined to agree with Ned here, but we could change the first line, while we're at it. The reuse of the word \"protected\" used in the 1st sentence in the 2nd sentence now might make it more clear that this message protection can happen on many layers.\nOk but this PR merges into the branch instead of master. Recommend rebasing on master so that order of merge doesn't matter.", "new_text": "Relying Party or for a Verifier, then integrity of the process is compromised. The security protecting conveyed information may be applied at different layers, whether by a conveyance protocol, or an information encoding format. 
This architecture expects attestation messages (i.e., Evidence, Attestation Results, Endorsements, Reference Values, and Policies) are end-to-end protected based on the role interaction context. For example, if an Attester produces Evidence that is relayed through some other entity that doesn't implement the Attester or the intended Verifier roles, then the relaying entity should not"}
{"id": "q-en-ietf-rats-wg-architecture-6ffd51b60b91a195cf808e889182417570bb60b2d763cb91e580956c36f4ddce", "old_text": "measures appropriate for its value. The degree of protection afforded to this key material can vary by device, based upon considerations as to a cost/benefit evaluation of the intended function of the device. The confidentiality protection is fundamentally based upon some amount of physical protection: while encryption is often used to provide confidentiality when a key is conveyed across a factory, where the attestation key is created or applied, it must be available in an unencrypted form. The physical", "comments": "** Section 12.1.2.1. Would it be more precise to say: OLD The degree of protection afforded to this key material can vary by device, based upon considerations as to a cost/benefit evaluation of the intended function of the device. NEW The degree of protection afforded to this key material can vary by the intended function of the device and the specific practices of the device manufacturer or integrator.", "new_text": "measures appropriate for its value. The degree of protection afforded to this key material can vary by the intended function of the device and the specific practices of the device manufacturer or integrator. The confidentiality protection is fundamentally based upon some amount of physical protection: while encryption is often used to provide confidentiality when a key is conveyed across a factory, where the attestation key is created or applied, it must be available in an unencrypted form. The physical"}
{"id": "q-en-ietf-rats-wg-architecture-17c3f0d4ddfbbe84a756fa49e1fc60dc79bd3152365183fdb800229b7cc1af95", "old_text": "constrain the types of information for which the trust anchor is authoritative.\" The trust anchor may be a certificate or it may be a raw public key along with additional data if necessary such as its public key algorithm and parameters. Thus, trusting a Verifier might be expressed by having the Relying Party store the Verifier's public key or certificate in its trust anchor store, or might be expressed by storing the public key or certificate of an entity (e.g., a Certificate Authority) that is in the Verifier's certificate path. For example, the Relying Party can verify that the Verifier is an expected one by out of band establishment of key material, combined with a protocol like TLS to communicate. There is an assumption that between the establishment", "comments": "Throughout the document the term \"trust anchor\" is meant in the to be a public key and associated (meta)data. This definition does not cover attestation schemes based on symmetric crypto, e.g., , and Arm PSA, and should be extended to also cover such cases.\nWas there specific text that you believe needs to be changed?\nyes, we identified at least two places: \u00a77.1 and \u00a712.4.\nReads great to me. Any other opinions?", "new_text": "constrain the types of information for which the trust anchor is authoritative.\" The trust anchor may be a certificate or it may be a raw public key along with additional data if necessary such as its public key algorithm and parameters. In the context of this document, a trust anchor may also be a symmetric key, as in TCG-DICE- SIBDA or the symmetric mode described in I-D.tschofenig-rats-psa- token. 
Thus, trusting a Verifier might be expressed by having the Relying Party store the Verifier's key or certificate in its trust anchor store, or might be expressed by storing the public key or certificate of an entity (e.g., a Certificate Authority) that is in the Verifier's certificate path. For example, the Relying Party can verify that the Verifier is an expected one by out of band establishment of key material, combined with a protocol like TLS to communicate. There is an assumption that between the establishment"}
{"id": "q-en-ietf-rats-wg-architecture-17c3f0d4ddfbbe84a756fa49e1fc60dc79bd3152365183fdb800229b7cc1af95", "old_text": "As noted in trustmodel, Verifiers and Relying Parties have trust anchor stores that must be secured. RFC6024 contains more discussion of trust anchor store requirements. Specifically, a trust anchor store must resist modification against unauthorized insertion, deletion, and modification. If certificates are used as trust anchors, Verifiers and Relying Parties are also responsible for validating the entire certificate", "comments": "Throughout the document the term \"trust anchor\" is meant in the to be a public key and associated (meta)data. This definition does not cover attestation schemes based on symmetric crypto, e.g., , and Arm PSA, and should be extended to also cover such cases.\nWas there specific text that you believe needs to be changed?\nyes, we identified at least two places: \u00a77.1 and \u00a712.4.\nReads great to me. Any other opinions?", "new_text": "As noted in trustmodel, Verifiers and Relying Parties have trust anchor stores that must be secured. RFC6024 contains more discussion of trust anchor store requirements for protecting public keys. Section 6 of NIST-800-57-p1 contains a comprehensive treatment of the topic, including the protection of symmetric key material. Specifically, a trust anchor store must resist modification against unauthorized insertion, deletion, and modification. Additionally, if the trust anchor is a symmetric key, the trust anchor store must not allow unauthorized read. If certificates are used as trust anchors, Verifiers and Relying Parties are also responsible for validating the entire certificate"}
{"id": "q-en-ietf-rats-wg-architecture-4a1bd51d1e98380ffd26e825c367ceac19d2cf5b376079599f3050b1b5467a99", "old_text": "9. 10. 11.", "comments": "Add in text from Birkholz draft, with some wordsmithing and grammatical fixes from Dave Thaler", "new_text": "9. The conveyance of Evidence and the resulting Attestation Results reveal a great deal of information about the internal state of a device. In many cases, the whole point of the Attestation process is to provide reliable information about the type of the device and the firmware/software that the device is running. This information is particularly interesting to many attackers. For example, knowing that a device is running a weak version of firmware provides a way to aim attacks better. Protocols that convey Evidence or Attestation Results are responsible for detailing what kinds of information are disclosed, and to whom they are exposed. 10. 11."}
{"id": "q-en-ietf-rats-wg-architecture-009d1a84dc2570d08a9cf20d3f96eafbd32136be2c4cd3cb62b06eea75bd01e3", "old_text": "2. 3. 4. 5. 6. 7. 8. 9. The conveyance of Evidence and the resulting Attestation Results reveal a great deal of information about the internal state of a device. In many cases, the whole point of the Attestation process is", "comments": "Here's a first attempt to combine terminology sections, and trust model. Includes new terms Endorser and Appraisal Policy per recent discussion. Signed-off-by: Dave Thaler\nDone\nDone. (But if you have feedback on a sentence like this in the future, please attach the comment directly to the code line under the \"Files changed\" tab, to make it easy to find which phrase you're referring to.)\nDone\nUpdated. How about now?\nI would like to discuss the possibility of combining the terms \"Endorsement\" and \"Attestation Result\". Just like we have an Appraisal Policy for a verifier and an Appraisal Policy for a relying party, some might view an Attestation Result as an Endorsement for use by the relying party. I'm ok either way, but wanted to discuss this in our next call to see if others think it would help or confuse things.\nI still believe my original definition of Attestation is good, I haven't seen any comments here about what the problem was. For comparison, URL says it's \"The process of validating the integrity of a computing device such as a server needed for trusted computing.\". I'm also ok with that definition.\nI\u2019m OK with that wording for introducing the concept. It may break down if the terminology for \u2018device\u2019 and \u2018trusted computing\u2019 become overly specified / specific.\nThe word 'health' is generally being used where other words such as 'trustworthiness', or 'state' could be used. I'm concerned that 'health' might not easily be synonymous with 'trust'. It also seems to bias the architecture toward NEA (RFC5209) which IMO does not define attestation.", "new_text": "2. 
The document defines the term \"Remote Attestation\" as follows: A process by which one entity (the \"Attester\") provides evidence about its identity and state to another remote entity (the \"Relying Party\"), which then assesses the Attester's trustworthiness for the Relying Party's own purposes. This document then uses the following terms: Appraisal Policy for Evidence: A set of rules that direct how a verifier evaluates the validity of information about an Attester. Compare /security policy/ in [RFC4949]. Appraisal Policy for Attestation Result: A set of rules that direct how a Relying Party evaluates the validity of information about an Attester. Compare /security policy/ in [RFC4949]. Attestation Result: The evaluation results generated by a Verifier, typically including information about an Attester, where the Verifier vouches for the validity of the results. Attester: An entity whose attributes must be evaluated in order to determine whether the entity is considered trustworthy, such as when deciding whether the entity is authorized to perform some operation. Endorsement: A secure statement that some entity (typically a manufacturer) vouches for the integrity of an Attester's signing capability. Endorser: An entity that creates Endorsements that can be used to help evaluate trustworthiness of Attesters. Evidence: A set of information about an Attester that is to be evaluated by a Verifier. Relying Party: An entity that depends on the validity of information about another entity, typically for purposes of authorization. Compare /relying party/ in [RFC4949]. Relying Party Owner: An entity, such as an administrator, that is authorized to configure Appraisal Policy for Attestation Results in a Relying Party. Verifier: An entity that evaluates the validity of Evidence about an Attester. Verifier Owner: An entity, such as an administrator, that is authorized to configure Appraisal Policy for Evidence in a Verifier. 
[EDITORIAL NOTE] The term Attestation and Remote Attestation are not defined in this document, at this time. This document will include pointers to industry uses of the terms, in an attempt to gain consensus around the term, and be consistent with the charter text defining this term. 3. 4. dataflow depicts the data that flows between different roles, independent of protocol or use case. An Attester creates Evidence that is conveyed to a Verifier. The Verifier uses the Evidence, and any Endorsements from Endorsers, by applying an Evidence Appraisal Policy to assess the trustworthiness of the Attester, and generates Attestation Results for use by Relying Parties. The Evidence Appraisal Policy might be obtained from an Endorser along with the Endorsements, or might be obtained via some other mechanism such as being configured in the Verifier by an administrator. The Relying Party uses Attestation Results by applying its own Appraisal Policy to make application-specific decisions such as authorization decisions. The Attestation Result Appraisal Policy might, for example, be configured in the Relying Party by an administrator. 5. 6. An Attester consists of at least one Attesting Environment and Attested Environment. In some implementations, the Attesting and Attested Environments might be combined. Other implementations might have multiple Attesting and Attested Environments. 7. The scope of this document is scenarios for which a Relying Party trusts a Verifier that can evaluate the trustworthiness of information about an Attester. Such trust might come by the Relying Party trusting the Verifier (or its public key) directly, or might come by trusting an entity (e.g., a Certificate Authority) that is in the Verifier's certificate chain. The Relying Party might implicitly trust a Verifier (such as in the Verifying Relying Party combination). 
Or, for a stronger level of security, the Relying Party might require that the Verifier itself provide information about itself that the Relying Party can use to evaluate the trustworthiness of the Verifier before accepting its Attestation Results. In solutions following the background-check model, the Attester is assumed to trust the Verifier (again, whether directly or indirectly via a Certificate Authority that it trusts), since the Attester relies on an Attestation Result it obtains from the Verifier, in order to access resources. The Verifier trusts (or more specifically, the Verifier's security policy is written in a way that configures the Verifier to trust) a manufacturer, or the manufacturer's hardware, so as to be able to evaluate the trustworthiness of that manufacturer's devices. In solutions with weaker security, a Verifier might be configured to implicitly trust firmware or even software (e.g., a hypervisor). That is, it might evaluate the trustworthiness of an application component, or operating system component or service, under the assumption that information provided about it by the lower-layer hypervisor or firmware is true. A stronger level of security comes when information can be vouched for by hardware or by ROM code, especially if such hardware is physically resistant to hardware tampering. The component that is implicitly trusted is often referred to as a Root of Trust. 8. 9. 10. The conveyance of Evidence and the resulting Attestation Results reveal a great deal of information about the internal state of a device. In many cases, the whole point of the Attestation process is"}
{"id": "q-en-ietf-rats-wg-architecture-009d1a84dc2570d08a9cf20d3f96eafbd32136be2c4cd3cb62b06eea75bd01e3", "old_text": "for detailing what kinds of information are disclosed, and to whom they are exposed. 10. 11. This document does not require any actions by IANA. 12. Special thanks go to David Wooten, Joerg Borchert, Hannes Tschofenig, Laurence Lundblade, Diego Lopez, Jessica Fitzgerald-McKay, Frank Xia,", "comments": "Here's a first attempt to combine terminology sections, and trust model. Includes new terms Endorser and Appraisal Policy per recent discussion. Signed-off-by: Dave Thaler\nDone\nDone. (But if you have feedback on a sentence like this in the future, please attach the comment directly to the code line under the \"Files changed\" tab, to make it easy to find which phrase you're referring to.)\nDone\nUpdated. How about now?\nI would like to discuss the possibility of combining the terms \"Endorsement\" and \"Attestation Result\". Just like we have an Appraisal Policy for a verifier and an Appraisal Policy for a relying party, some might view an Attestation Result as an Endorsement for use by the relying party. I'm ok either way, but wanted to discuss this in our next call to see if others think it would help or confuse things.\nI still believe my original definition of Attestation is good, I haven't seen any comments here about what the problem was. For comparison, URL says it's \"The process of validating the integrity of a computing device such as a server needed for trusted computing.\". I'm also ok with that definition.\nI\u2019m OK with that wording for introducing the concept. It may break down if the terminology for \u2018device\u2019 and \u2018trusted computing\u2019 become overly specified / specific.\nThe word 'health' is generally being used where other words such as 'trustworthiness', or 'state' could be used. I'm concerned that 'health' might not easily be synonymous with 'trust'. 
It also seems to bias the architecture toward NEA (RFC5209) which IMO does not define attestation.", "new_text": "for detailing what kinds of information are disclosed, and to whom they are exposed. 11. 12. This document does not require any actions by IANA. 13. Special thanks go to David Wooten, Joerg Borchert, Hannes Tschofenig, Laurence Lundblade, Diego Lopez, Jessica Fitzgerald-McKay, Frank Xia,"}
{"id": "q-en-ietf-rats-wg-architecture-972315f0f5f8ed95a660303761574dd2ad4f6c8063e4498414aa26ca42682a79", "old_text": "obtained via some other mechanism such as being configured in the Verifier by an administrator. The Relying Party uses Attestation Results by applying its own Appraisal Policy to make application-specific decisions such as authorization decisions. The Attestation Result Appraisal Policy", "comments": "possible text closing .\nThe architecture draft should say that known-good reference values can be either a part of an endorsement or part of an appraisal policy or both.\nI would only want to mention \"known-good reference values\" at all if it comes with an explanation of why they're not sufficient in the real world. Not sure whether we want the arch doc to go there. We can discuss in a meeting.\nI wouldn't say they are never sufficient. I would say KGVs serve a specific purpose. And this purpose is documenting the successful loading of well-known software on well-known Attester equipment, where the Attester equipment supports the signing of extended/hashed measurements. Adding Guy/Ned as they probably have better words for this based on their work with TCG & DICE. Eric\nHi Eric, Dave, I'd just add one more restriction to the list... if \"known good values\" means \"if the measured hash in a PCR is exactly this value then we're good\", the domain has to be narrowed to single-threaded well-known software instances, e.g., the components of a BIOS. The complexity of known-good-values comes with asynchronous threads such as IMA in Linux; in that case, the hash in the TPM PCR can only serve as a check on the (perhaps huge) number of individual hashes in the log file, each of which needs its own Known Good Value. Hence the SWID tags. I'm certainly not qualified to say how all this would work in EAT- or TEEP-land... /guy Juniper Business Use Only\nGoing back to Laurence\u2019s original request. 
The observation he is making is that both the Endorser and the Appraisal Policy for Evidence Owner entities can say what is a \u2018reference\u2019 or \u2018known good\u2019 value. Presumably, an Owner entity did some due diligence to ensure the KGV is indeed \u2018good\u2019 given Owners may not have access to source code or other tools that the original vendor has access to. But ultimately, all appraisal decisions are qualified by the Owner\u2019s wishes. There is complexity in ecosystems when a vendor (e.g. BIOS vendor) doesn\u2019t give up ownership of the FW even though the computer is sold to the customer. The PC owner might have a policy that allows the BIOS vendor to supply an Appraisal policy for just the BIOS. The Roles architecture for the BIOS component may differ from the Roles architecture for other components. In this case, the Endorsement and Policy roles may be supplied by the same Entity (i.e. the bios vendor). See more inline below - Nms>\nThe architecture doc should (must?) facilitate a good understanding of attestation end-end for the reader. K-g-v seem like a very core concept to attestation. Leaving them out of the architecture document will leave the reader scratching their head, like I was scratching mine. I don\u2019t think it has to be a detailed, concise or definitional explanation. Something like this: For some claims the Verifier will compare the claims in the Evidence to known-good-values or reference values. In many cases these are hashes of files, SW or memory regions. In other cases they might be expected SW or HW versions. In other cases, they may be something else. The actual data format and semantics of a known-good-value are specific to claims and implementations. There is no general purpose format for them or general means for comparison defined in this architecture document. 
These known-good-values may be conveyed to the Verifier as part of an Endorsement or as part of Appraisal Policy or both as these are the two input paths to the Verifier. LL\nI agree that the concept Laurence outlines should be in the architecture doc. I think there are words to that effect currently. For example, section 7.2 says \u201cdevices from a given manufacturer have information matching a set of known-good reference values,\u201d and \u201cThus, an Endorsement may be helpful information in authenticating information about a device, but is not necessarily sufficient to authorize access to resources which may need device-specific information such as a public key for the device\u201d Admittedly, this is scant and leaves a lot to the reader to figure out or make assumption. The architecture could more clearly capture the idea that Endorsements are \u201creference\u201d claims that are compared with Evidence claims to validate their correctness. It should introduce other common terms such as \u2018known good values\u2019, \u2018reference values\u2019 and \u2018endorsed values\u2019 as roughly synonymous. I think it should go further to say that Endorsements can include values that are static (intrinsic) that don\u2019t require attestation, but can be asserted about a class of device, platform, entity and appraised according to an appraisal policy for Evidence (Even though the claims don\u2019t appear in Evidence). While I think it is possible to interpret the existing wording according to the above understanding, it is also possible to interpret it differently. Therefore, the section should be improved. I also think the first sentence (and definition in terminology) are insufficient. 
While \u201cAn Endorsement is a secure statement that some entity (e.g., a manufacturer) vouches for the integrity of the device\u2019s signing capability.\u201d \u2013 is correct, it is insufficient because it is also a statement about the devices Attesting Environment ability to collect claims securely.\nGiven that this Architecture document will serve as a guide to additional work that is needed from RATS wg to define data schema, interfaces for an interoperable implementations of verifiers with endorsements from multiple vendors to be compared with evidence from different implementation of attesters, isn't it worthwhile creating a hook in this document for scope of work needed to tackle it in future documents? If yes this document should define KGV, the types of KGVs that can be endorsed or defined in attestation policy, a need for data schema for each type of KGV and a set of interfaces to fetch/publish it deferred to a future document. Similar to claims encoding formats specified in section 8, a specification needed but deferred to future document for KGV endorsement formats would be useful.\nA google search for \"known good values\" yields some mention in the Cisco trust center and a few books on security. Linux IMA mentions \"good measurements\" once that I could find. So there's some use of the term out there and it seems worth mentioning along with RIM. It is also a very descriptive term that is probably immediately clear to the user.", "new_text": "obtained via some other mechanism such as being configured in the Verifier by an administrator. For example, for some claims the Verifier might check the values of claims in the Evidence against constraints specified in the Appraisal Policy for Evidence. Such constraints might involve a comparison for equality against reference values, or a check for being in a range bounded by reference values, or membership in a set of reference values, or a check against values in other claims, or any other test. 
Such reference values might be specified as part of the Appraisal Policy for Evidence itself, or might be obtained from a separate source, such as an Endorsement, and then used by the Appraisal Policy for Evidence. The actual data format and semantics of a known-good value are specific to claims and implementations. There is no general purpose format for them or general means for comparison defined in this architecture document. Similarly, for some claims the Verifier might check the values of claims in the Evidence for membership in a set, or against a range of values, or against known-bad values such as an expiration time. These reference values may be conveyed to the Verifier as part of an Endorsement or as part of Appraisal Policy or both as these are the two input paths to the Verifier. The Relying Party uses Attestation Results by applying its own Appraisal Policy to make application-specific decisions such as authorization decisions. The Attestation Result Appraisal Policy"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-2e51b5b3e278bbae2391573b25fddba48d027ed1a2d2d49b46ee4bf01036e21e", "old_text": "Issuer completes the issuance flow by computing a blinded response as follows: The rsabssa_blind_sign function is defined in BLINDRSA, Section 5.1.2.. The Issuer generates an HTTP response with status code 200 whose body consists of \"blind_sig\", with the content type set as \"message/token-response\". 6.3.", "comments": "Generated from . cc NAME NAME NAME\nThese test vectors should cover the whole flow of the protocol, starting with the input and producing a output.", "new_text": "Issuer completes the issuance flow by computing a blinded response as follows: This is encoded and transmitted to the client in the following TokenResponse structure: The rsabssa_blind_sign function is defined in BLINDRSA, Section 5.1.2.. The Issuer generates an HTTP response with status code 200 whose body consists of TokenResponse, with the content type set as \"message/token-response\". 6.3."}
{"id": "q-en-ietf-wg-privacypass-base-drafts-8a9c1527aa53418c29d06d6463b812b7aa2ca28c3096dbde48f5d30e0ea9d0fa", "old_text": "TokenChallenge message, along with information about what keys to use when requesting a token from the issuer. The TokenChallenge message has the following structure: The structure fields are defined as follows:", "comments": "Spell out origin steps more clearly Split token caching into its own section I tried having an overall \"origin\" and \"client\" section, but it didn't flow as well.", "new_text": "TokenChallenge message, along with information about what keys to use when requesting a token from the issuer. Origins that support this authentication scheme need to handle the following tasks: Select which issuer to use, and configure the issuer name and token-key to include in WWW-Authenticate challenges. Determine a redemption context construction to include in the TokenChallenge, as discussed in context-construction. Select the origin information to include in the TokenChallenge. This can be empty to allow fully cross-origin tokens, a single origin name that matches the origin itself, or a list of origin names containing the origin. The TokenChallenge message has the following structure: The structure fields are defined as follows:"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-8a9c1527aa53418c29d06d6463b812b7aa2ca28c3096dbde48f5d30e0ea9d0fa", "old_text": "names for which current connection is not authoritative (according to the TLS certificate). Note that it is possible for the WWW-Authenticate header to include multiple challenges, in order to allow the client to fetch a batch of multiple tokens for future use. For example, the WWW-Authenticate header could look like this: If a client fetches a batch of multiple tokens for future use that are bound to a specific redemption context (the redemption_context in the TokenChallenge was not empty), clients SHOULD discard these tokens upon flushing state such as HTTP cookies COOKIES, or changing networks. Using these tokens in a context that otherwise would not be linkable to the original context could allow the origin to recognize a client. 2.1.1.", "comments": "Spell out origin steps more clearly Split token caching into its own section I tried having an overall \"origin\" and \"client\" section, but it didn't flow as well.", "new_text": "names for which current connection is not authoritative (according to the TLS certificate). Caching and pre-fetching of tokens is discussed in caching. Note that it is possible for the WWW-Authenticate header to include multiple challenges. This allows the origin to indicate support for different token types, issuers, or to include multiple redemption contexts. For example, the WWW-Authenticate header could look like this: 2.1.1."}
{"id": "q-en-ietf-wg-privacypass-base-drafts-8a9c1527aa53418c29d06d6463b812b7aa2ca28c3096dbde48f5d30e0ea9d0fa", "old_text": "address prefix). An empty redemption context is not bound to any property of the client session. 2.2.", "comments": "Spell out origin steps more clearly Split token caching into its own section I tried having an overall \"origin\" and \"client\" section, but it didn't flow as well.", "new_text": "address prefix). An empty redemption context is not bound to any property of the client session. Preventing double spending on tokens requires the origin to keep state associated with the redemption context. The size of this state varies based on the size of the redemption context. For example, double spend state for unique, per-request redemption contexts only needs to exist within the scope of the request connection or session. In contrast, double spend state for empty redemption contexts must be stored and shared across all requests until token-key expiration or rotation. 2.1.2. Clients can generate multiple tokens from a single TokenChallenge, and cache them for future use. This improves privacy by separating the time of token issuance from the time of token redemption, and also allows clients to avoid any overhead of receiving new tokens via the issuance protocol. Cached tokens can only be redeemed when they match all of the fields in the TokenChallenge: token_type, issuer_name, redemption_context, and origin_info. Clients ought to store cached tokens based on all of these fields, to avoid trying to redeem a token that does not match. Note that each token has a unique client nonce, which is sent in token redemption (redemption). If a client fetches a batch of multiple tokens for future use that are bound to a specific redemption context (the redemption_context in the TokenChallenge was not empty), clients SHOULD discard these tokens upon flushing state such as HTTP cookies COOKIES, or changing networks. Using these tokens in a context that otherwise would not be linkable to the original context could allow the origin to recognize a client. 2.2."}
{"id": "q-en-ietf-wg-privacypass-base-drafts-4d467baae1e9aad24ea2b67e7d1fcd88cb660a542cd3e808b5e60b9a4d16f6e3", "old_text": "empty redemption contexts must be stored and shared across all requests until token-key expiration or rotation. 2.1.2. Clients can generate multiple tokens from a single TokenChallenge,", "comments": "The section references seem to be breaking the build, so I'll fix that later before we land this.\nOrigins currently choose whether their challenges are per-origin or cross-origin. For cross-origin, double-spend prevention is only as good as the coordination between origins and the Issuer. Per-origin allows double-spend prevention to be isolated to a single origin; also prevents the cache of tokens being take up by some other origin.\nWrite this as \"any origins that share a redemption context must also share double spend state for that context.\"", "new_text": "empty redemption contexts must be stored and shared across all requests until token-key expiration or rotation. Origins that share redemption contexts, i.e., that use the same redemption context, choose the same issuer, and provide the same origin_info field in the TokenChallenge, must necessarily share the state required to enforce double spend prevention. Origins should consider the operational complexity of this shared state before choosing to share redemption contexts. Failure to successfully synchronize this state and use it for double spend prevention can allow Clients to redeem, at one Origin, tokens that were issued after an interaction with another Origin that shares the context. 2.1.2. Clients can generate multiple tokens from a single TokenChallenge,"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-4d467baae1e9aad24ea2b67e7d1fcd88cb660a542cd3e808b5e60b9a4d16f6e3", "old_text": "Issuer identity. Tokens identify which issuers are trusted for a given issuance protocol. Interactive or non-interactive. Tokens can either be interactive or not. An interactive token is one which requires a freshly issued token based on the challenge, whereas a non-interactive token can be issued proactively and cached for future use. Per-origin or cross-origin. Tokens can be constrained to the Origin for which the challenge originated, or can be used across Origins. Depending on the use case, Origins may need to maintain state to track redeemed tokens. For example, Origins that accept non- interactive, cross-origin tokens SHOULD track which tokens have been redeemed already, since these tokens can be issued and then spent multiple times in response to any such challenge. See double-spend for discussion. Origins that admit cross-origin tokens bear some risk of allowing tokens issued for one Origin to be spent in an interaction with", "comments": "The section references seem to be breaking the build, so I'll fix that later before we land this.\nOrigins currently choose whether their challenges are per-origin or cross-origin. For cross-origin, double-spend prevention is only as good as the coordination between origins and the Issuer. Per-origin allows double-spend prevention to be isolated to a single origin; also prevents the cache of tokens being take up by some other origin.\nWrite this as \"any origins that share a redemption context must also share double spend state for that context.\"", "new_text": "Issuer identity. Tokens identify which issuers are trusted for a given issuance protocol. Redemption context. Tokens can be bound to a given redemption context, which influences a client's ability to pre-fetch and cache tokens. For example, an empty redemption context always allows tokens to be issued and redeemed non-interactively, whereas a fresh and random redemption context means that the redeemed token must be issued only after the client receives the challenge. See Section 2.1.1 of HTTP-Authentication for more details. Per-origin or cross-origin. Tokens can be constrained to the Origin for which the challenge originated, or can be used across Origins. Depending on the use case, Origins may need to maintain state to track redeemed tokens. For example, Origins that accept cross-origin tokens across shared redemption contexts SHOULD track which tokens have been redeemed already in those redemption contexts, since these tokens can be issued and then spent multiple times in response to any such challenge. See Section 2.1.1 of HTTP-Authentication for discussion. Origins that admit cross-origin tokens bear some risk of allowing tokens issued for one Origin to be spent in an interaction with"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-4d467baae1e9aad24ea2b67e7d1fcd88cb660a542cd3e808b5e60b9a4d16f6e3", "old_text": "reconstruct a Client's browsing history. Attestation and redemption context unlinkability requires that these events be separated over time, e.g., through the use of non- interactive tokens that can be issued without a fresh Origin challenge, or over space, e.g., through the use of an anonymizing proxy when connecting to the Origin. 4.2.", "comments": "The section references seem to be breaking the build, so I'll fix that later before we land this.\nOrigins currently choose whether their challenges are per-origin or cross-origin. For cross-origin, double-spend prevention is only as good as the coordination between origins and the Issuer. Per-origin allows double-spend prevention to be isolated to a single origin; also prevents the cache of tokens being take up by some other origin.\nWrite this as \"any origins that share a redemption context must also share double spend state for that context.\"", "new_text": "reconstruct a Client's browsing history. Attestation and redemption context unlinkability requires that these events be separated over time, e.g., through the use of tokens with an empty redemption context, or over space, e.g., through the use of an anonymizing proxy when connecting to the Origin. 4.2."}
{"id": "q-en-ietf-wg-privacypass-base-drafts-4d467baae1e9aad24ea2b67e7d1fcd88cb660a542cd3e808b5e60b9a4d16f6e3", "old_text": "6.1. When applicable for non-interactive tokens, all Origins SHOULD implement a robust storage-query mechanism for checking that tokens sent by clients have not been spent before. Such tokens only need to be checked for each Origin individually. But all Origins must perform global double-spend checks to avoid clients from exploiting the possibility of spending tokens more than once against distributed token checking systems. For the same reason, the global data storage must have quick update times. While an update is occurring it may be possible for a malicious client to spend a token more than once. 6.2. When a Client holds tokens for an Issuer, it is possible for any verifier to invoke that client to redeem tokens for that Issuer. This can lead to an attack where a malicious verifier can force a", "comments": "The section references seem to be breaking the build, so I'll fix that later before we land this.\nOrigins currently choose whether their challenges are per-origin or cross-origin. For cross-origin, double-spend prevention is only as good as the coordination between origins and the Issuer. Per-origin allows double-spend prevention to be isolated to a single origin; also prevents the cache of tokens being take up by some other origin.\nWrite this as \"any origins that share a redemption context must also share double spend state for that context.\"", "new_text": "6.1. When a Client holds tokens for an Issuer, it is possible for any verifier to invoke that client to redeem tokens for that Issuer. This can lead to an attack where a malicious verifier can force a"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-efee838b1b0c349b313e2aa813a3ab8384d4f8466092f5bcd650471b807f5f96", "old_text": "previous variant in two important ways: The output tokens are publicly verifiable by anyone with the Issuer public key; and The issuance protocol does not admit public or private metadata to bind additional context to tokens. Otherwise, this variant is nearly identical. In particular, Issuers provide a Private and Public Key, denoted skI and pkI, respectively, used to produce tokens as input to the protocol. See public-issuer- configuration for how this key pair is generated. Clients provide the following as input to the issuance protocol:", "comments": "The publicly verifiable basic token issuance protocol doesn't include an origin name that the issuer can see, so it does allow origins to be able to use that issuer (as long as the know the key) without explicit coordination or permission from the issuer. This isn't necessarily a problem, but we should likely mention it.", "new_text": "previous variant in two important ways: The output tokens are publicly verifiable by anyone with the Issuer public key. The issuance protocol does not admit public or private metadata to bind additional context to tokens. The first property means that any Origin can select a given Issuer to produce tokens, as long as the Origin has the Issuer public key, without explicit coordination or permission from the Issuer. This is because the Issuer does not learn the Origin that requested the token during the issuance protocol. Beyond these differences, the publicly verifiable issuance protocol variant is nearly identical to the privately verifiable issuance protocol variant. In particular, Issuers provide a Private and Public Key, denoted skI and pkI, respectively, used to produce tokens as input to the protocol. See public-issuer-configuration for how this key pair is generated. Clients provide the following as input to the issuance protocol:"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-29354ad285db797bf037e78b529056a9ae776a5d487df48f2efbdbf86cd3ef7c", "old_text": "Issuer portrayed in fig-overview can be instantiated and deployed in a number of ways. This section covers some expected deployment models and their corresponding security and privacy considerations. The discussion below assumes non-collusion between entities when operated by separate parties. Mechanisms for enforcing non-collusion are out of scope for this architecture. 4.1.", "comments": "This doesn't go overboard and clarify non-collusion assumptions for all models. It simply states the requirement that entities with access to the attestation context and redemption context do not collude, and then generalizes somewhat for simplicity. cc NAME NAME\nThe architecture assumes that any two parties do not collude with each other. I think this is too strong an assumption. In particular, if the Origin does not collude with anyone else, what is the point of unlinkability of the issuance and redemption flows? Looking at the use cases, it seems that where the Attester and Issuer are distinct there is a needed non-collusion assumption between them to achieve desirable privacy, but I think the origin colluding with one of the Issuer or Attester should be fine and would be worth considering in scope. Also, in addition to the non-collusion assumption, are there any other assumptions made on the parties, such as honest-but-curious, or do we consider them to be potentially fully malicious? It would be nice to spell this out.\nIf the Origin colluded with the Issuer it would learn the client's identity (presumably), which would be bad. Ultimately, the desired property is that any sort of collusion that would allow the redemption context to be linked to the attestation context is prohibited. I'll try to clarify.", "new_text": "Issuer portrayed in fig-overview can be instantiated and deployed in a number of ways. This section covers some expected deployment models and their corresponding security and privacy considerations. The discussion below assumes non-collusion between entities that have access to the attestation context and entities that have access to the redemption context, as collusion between such entities would enable linking of these contexts. Generally, this means that entities operated by separate parties do not collude. Mechanisms for enforcing non-collusion are out of scope for this architecture. 4.1."}
{"id": "q-en-ietf-wg-privacypass-base-drafts-a586d338b1bbb9a4c25346d622ac021f0e08067a9465b55c1c971807e5a10339", "old_text": "used later as described in private-finalize. The Client then generates an HTTP POST request to send to the Issuer, with the TokenRequest as the body. The media type for this request is \"message/token-request\". An example request is shown below. Upon receipt of the request, the Issuer validates the following conditions:", "comments": "I also moved from the message namespace to the application namespace. The difference is helpfully summarized .\nI see in the current draft that they register and . Since \"token\" is a very broad term, I'm wondering if something slightly more specific to Privacy Pass such as and might be reasonable?\nIf we make it more specific, it should be \"private\" instead of \"privacy\", to be \"private-token\" and align with the auth scheme. There's a fair amount of deployment on these types today, so this would be a relatively large change, or servers would need to tolerate both versions.\nI'm in favor of making this change and having issuers support both for a little while.\nCool, that would work. NAME want to make the proposed update?\nWill do!\nNAME PR's up!\nLooks good!", "new_text": "used later as described in private-finalize. The Client then generates an HTTP POST request to send to the Issuer, with the TokenRequest as the body. The media type for this request is \"application/private-token-request\". An example request is shown below. Upon receipt of the request, the Issuer validates the following conditions:"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-a586d338b1bbb9a4c25346d622ac021f0e08067a9465b55c1c971807e5a10339", "old_text": "where Ns is as defined in OPRF, Section 4. The Issuer generates an HTTP response with status code 200 whose body consists of TokenResponse, with the content type set as \"message/ token-response\". 5.3.", "comments": "I also moved from the message namespace to the application namespace. The difference is helpfully summarized .\nI see in the current draft that they register and . Since \"token\" is a very broad term, I'm wondering if something slightly more specific to Privacy Pass such as and might be reasonable?\nIf we make it more specific, it should be \"private\" instead of \"privacy\", to be \"private-token\" and align with the auth scheme. There's a fair amount of deployment on these types today, so this would be a relatively large change, or servers would need to tolerate both versions.\nI'm in favor of making this change and having issuers support both for a little while.\nCool, that would work. NAME want to make the proposed update?\nWill do!\nNAME PR's up!\nLooks good!", "new_text": "where Ns is as defined in OPRF, Section 4. The Issuer generates an HTTP response with status code 200 whose body consists of TokenResponse, with the content type set as \"application/ private-token-response\". 5.3."}
{"id": "q-en-ietf-wg-privacypass-base-drafts-a586d338b1bbb9a4c25346d622ac021f0e08067a9465b55c1c971807e5a10339", "old_text": "The Client then generates an HTTP POST request to send to the Issuer, with the TokenRequest as the body. The media type for this request is \"message/token-request\". An example request is shown below, where Nk = 512. Upon receipt of the request, the Issuer validates the following conditions:", "comments": "I also moved from the message namespace to the application namespace. The difference is helpfully summarized .\nI see in the current draft that they register and . Since \"token\" is a very broad term, I'm wondering if something slightly more specific to Privacy Pass such as and might be reasonable?\nIf we make it more specific, it should be \"private\" instead of \"privacy\", to be \"private-token\" and align with the auth scheme. There's a fair amount of deployment on these types today, so this would be a relatively large change, or servers would need to tolerate both versions.\nI'm in favor of making this change and having issuers support both for a little while.\nCool, that would work. NAME want to make the proposed update?\nWill do!\nNAME PR's up!\nLooks good!", "new_text": "The Client then generates an HTTP POST request to send to the Issuer, with the TokenRequest as the body. The media type for this request is \"application/private-token-request\". An example request is shown below, where Nk = 512. Upon receipt of the request, the Issuer validates the following conditions:"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-a586d338b1bbb9a4c25346d622ac021f0e08067a9465b55c1c971807e5a10339", "old_text": "The rsabssa_blind_sign function is defined in BLINDRSA, Section 5.1.2.. The Issuer generates an HTTP response with status code 200 whose body consists of TokenResponse, with the content type set as \"message/token-response\". 6.3.", "comments": "I also moved from the message namespace to the application namespace. The difference is helpfully summarized .\nI see in the current draft that they register and . Since \"token\" is a very broad term, I'm wondering if something slightly more specific to Privacy Pass such as and might be reasonable?\nIf we make it more specific, it should be \"private\" instead of \"privacy\", to be \"private-token\" and align with the auth scheme. There's a fair amount of deployment on these types today, so this would be a relatively large change, or servers would need to tolerate both versions.\nI'm in favor of making this change and having issuers support both for a little while.\nCool, that would work. NAME want to make the proposed update?\nWill do!\nNAME PR's up!\nLooks good!", "new_text": "The rsabssa_blind_sign function is defined in BLINDRSA, Section 5.1.2.. The Issuer generates an HTTP response with status code 200 whose body consists of TokenResponse, with the content type set as \"application/private-token-response\". 6.3."}
{"id": "q-en-ietf-wg-privacypass-base-drafts-a586d338b1bbb9a4c25346d622ac021f0e08067a9465b55c1c971807e5a10339", "old_text": "This specification defines the following protocol messages, along with their corresponding media types: TokenRequest: \"message/token-request\" TokenResponse: \"message/token-response\" The definition for each media type is in the following subsections. 8.2.1. message token-request N/A", "comments": "I also moved from the message namespace to the application namespace. The difference is helpfully summarized .\nI see in the current draft that they register and . Since \"token\" is a very broad term, I'm wondering if something slightly more specific to Privacy Pass such as and might be reasonable?\nIf we make it more specific, it should be \"private\" instead of \"privacy\", to be \"private-token\" and align with the auth scheme. There's a fair amount of deployment on these types today, so this would be a relatively large change, or servers would need to tolerate both versions.\nI'm in favor of making this change and having issuers support both for a little while.\nCool, that would work. NAME want to make the proposed update?\nWill do!\nNAME PR's up!\nLooks good!", "new_text": "This specification defines the following protocol messages, along with their corresponding media types: TokenRequest: \"application/private-token-request\" TokenResponse: \"application/private-token-response\" The definition for each media type is in the following subsections. 8.2.1. application private-token-request N/A"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-a586d338b1bbb9a4c25346d622ac021f0e08067a9465b55c1c971807e5a10339", "old_text": "8.2.2. message access-token-response N/A", "comments": "I also moved from the message namespace to the application namespace. The difference is helpfully summarized .\nI see in the current draft that they register and . Since \"token\" is a very broad term, I'm wondering if something slightly more specific to Privacy Pass such as and might be reasonable?\nIf we make it more specific, it should be \"private\" instead of \"privacy\", to be \"private-token\" and align with the auth scheme. There's a fair amount of deployment on these types today, so this would be a relatively large change, or servers would need to tolerate both versions.\nI'm in favor of making this change and having issuers support both for a little while.\nCool, that would work. NAME want to make the proposed update?\nWill do!\nNAME PR's up!\nLooks good!", "new_text": "8.2.2. application private-token-response N/A"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-7f8afe82a7fb8d987588e503593c13d489643015e6aeff3dc2458f3f6a8a945c", "old_text": "its own name as one of the names in the list. When used in an authentication challenge, the \"PrivateToken\" scheme uses the following attributes: \"challenge\", which contains a base64url-encoded RFC4648 TokenChallenge value. Since the length of the challenge is not", "comments": "The document refers to authentication scheme attributes throughout; the term from HTTP is 'parameters'. Also, both the challenge and credentials definitions should say what to do with unrecognised parameters.", "new_text": "its own name as one of the names in the list. When used in an authentication challenge, the \"PrivateToken\" scheme uses the following parameters: \"challenge\", which contains a base64url-encoded RFC4648 TokenChallenge value. Since the length of the challenge is not"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-7f8afe82a7fb8d987588e503593c13d489643015e6aeff3dc2458f3f6a8a945c", "old_text": "and might be required to be a quoted-string if the base64url string includes \"=\" characters. This challenge value MUST be unique for every 401 HTTP response to prevent replay attacks. This attribute is required for all challenges. \"token-key\", which contains a base64url encoding of the public key for use with the issuance protocol indicated by the challenge.", "comments": "The document refers to authentication scheme attributes throughout; the term from HTTP is 'parameters'. Also, both the challenge and credentials definitions should say what to do with unrecognised parameters.", "new_text": "and might be required to be a quoted-string if the base64url string includes \"=\" characters. This challenge value MUST be unique for every 401 HTTP response to prevent replay attacks. This parameter is required for all challenges. \"token-key\", which contains a base64url encoding of the public key for use with the issuance protocol indicated by the challenge."}
{"id": "q-en-ietf-wg-privacypass-base-drafts-7f8afe82a7fb8d987588e503593c13d489643015e6aeff3dc2458f3f6a8a945c", "old_text": "include padding. As an Authentication Parameter (\"auth-param\" from RFC9110, Section 11.2), the value can be either a token or a quoted-string, and might be required to be a quoted-string if the base64url string includes \"=\" characters. This attribute MAY be omitted in deployments where clients are able to retrieve the issuer key using an out-of-band mechanism. \"max-age\", an optional attribute that consists of the number of seconds for which the challenge will be accepted by the origin. Clients can ignore the challenge if the token-key is invalid or otherwise untrusted. The header field MAY also include the standard \"realm\" attribute, if desired. Issuance protocols MAY require other attributes. As an example, the WWW-Authenticate header field could look like this:", "comments": "The document refers to authentication scheme attributes throughout; the term from HTTP is 'parameters'. Also, both the challenge and credentials definitions should say what to do with unrecognised parameters.", "new_text": "include padding. As an Authentication Parameter (\"auth-param\" from RFC9110, Section 11.2), the value can be either a token or a quoted-string, and might be required to be a quoted-string if the base64url string includes \"=\" characters. This parameter MAY be omitted in deployments where clients are able to retrieve the issuer key using an out-of-band mechanism. \"max-age\", an optional parameter that consists of the number of seconds for which the challenge will be accepted by the origin. Clients can ignore the challenge if the token-key is invalid or otherwise untrusted. The header field MAY also include the standard \"realm\" parameter, if desired. Issuance protocols MAY require other parameters. Clients SHOULD ignore unknown parameters in challenges, except if otherwise specified by issuance protocols. As an example, the WWW-Authenticate header field could look like this:"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-7f8afe82a7fb8d987588e503593c13d489643015e6aeff3dc2458f3f6a8a945c", "old_text": "obtains a token at time T2, a colluding issuer and origin can link this to the same client if T2 is unique to the client. This linkability is less feasible as the number of issuance events at time T2 increases. Depending on the \"max-age\" token challenge attribute, clients MAY try to augment the time between getting challenged then redeeming a token so as to make this sort of linkability more difficult. For more discussion on correlation risks between token", "comments": "The document refers to authentication scheme attributes throughout; the term from HTTP is 'parameters'. Also, both the challenge and credentials definitions should say what to do with unrecognised parameters.", "new_text": "obtains a token at time T2, a colluding issuer and origin can link this to the same client if T2 is unique to the client. This linkability is less feasible as the number of issuance events at time T2 increases. Depending on the \"max-age\" token challenge parameter, clients MAY try to augment the time between getting challenged then redeeming a token so as to make this sort of linkability more difficult. For more discussion on correlation risks between token"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-ad82381f1ae483825adfe901c94f59341d60bbb77aaf59156616c4a104ae928f", "old_text": "Server functionality: \"PP_Server_Setup\": Generates server configuration and keys \"PP_Issue\": Run on the contents of the client message in the issuance phase. \"PP_Verify\": Run on the contents of the client message in the redemption phase. Client functionality: \"PP_Client_Setup\": Generates the client configuration based on the configuration used by a given server. \"PP_Generate\": Generates public and private data associated with the contents of the client message in the issuance phase. \"PP_Process\": Processes the contents of the server response in the issuance phase. \"PP_Redeem\": Generates the data that forms the client message in the redemption phase. We will use each of the functions internally in the description of the interfaces that follows.", "comments": "These functions do not need the \"PP_\" namespace prefix as it's implied from the specification. I also removed underscores in favor of normal camel case notation.", "new_text": "Server functionality: \"ServerSetup\": Generates server configuration and keys \"Issue\": Run on the contents of the client message in the issuance phase. \"Verify\": Run on the contents of the client message in the redemption phase. Client functionality: \"ClientSetup\": Generates the client configuration based on the configuration used by a given server. \"Generate\": Generates public and private data associated with the contents of the client message in the issuance phase. \"Process\": Processes the contents of the server response in the issuance phase. \"Redeem\": Generates the data that forms the client message in the redemption phase. We will use each of the functions internally in the description of the interfaces that follows."}
{"id": "q-en-ietf-wg-privacypass-base-drafts-ad82381f1ae483825adfe901c94f59341d60bbb77aaf59156616c4a104ae928f", "old_text": "Steps: Run \"(cfg, update) = PP_Server_Setup(id)\". Construct a \"config_update\" message.", "comments": "These functions do not need the \"PP_\" namespace prefix as it's implied from the specification. I also removed underscores in favor of normal camel case notation.", "new_text": "Steps: Run \"(cfg, update) = ServerSetup(id)\". Construct a \"config_update\" message."}
{"id": "q-en-ietf-wg-privacypass-base-drafts-ad82381f1ae483825adfe901c94f59341d60bbb77aaf59156616c4a104ae928f", "old_text": "Run by the client when processing the server response in the issuance phase of the protocol. The output of this function is an array of \"RedemptionToken\" objects that are unlinkable from the server's computation in \"PP_Issue\". Inputs:", "comments": "These functions do not need the \"PP_\" namespace prefix as it's implied from the specification. I also removed underscores in favor of normal camel case notation.", "new_text": "Run by the client when processing the server response in the issuance phase of the protocol. The output of this function is an array of \"RedemptionToken\" objects that are unlinkable from the server's computation in \"Issue\". Inputs:"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-ad82381f1ae483825adfe901c94f59341d60bbb77aaf59156616c4a104ae928f", "old_text": "Formally speaking the security model is the following: The adversary runs \"PP_Server_Setup\" and generates a key-pair \"(key, pub_key)\". The adversary specifies a number \"Q\" of issuance phases to initiate, where each phase \"i in 1..Q\" consists of \"m_i\" server evaluations. The adversary runs \"PP_Issue\" using the key-pair that it generated on each of the client messages in the issuance phase. When the adversary wants, it stops the issuance phase, and a random number \"l\" is picked from \"1..Q\".", "comments": "These functions do not need the \"PP_\" namespace prefix as it's implied from the specification. I also removed underscores in favor of normal camel case notation.", "new_text": "Formally speaking the security model is the following: The adversary runs \"ServerSetup\" and generates a key-pair \"(key, pub_key)\". The adversary specifies a number \"Q\" of issuance phases to initiate, where each phase \"i in 1..Q\" consists of \"m_i\" server evaluations. The adversary runs \"Issue\" using the key-pair that it generated on each of the client messages in the issuance phase. When the adversary wants, it stops the issuance phase, and a random number \"l\" is picked from \"1..Q\"."}
{"id": "q-en-ietf-wg-privacypass-base-drafts-ad82381f1ae483825adfe901c94f59341d60bbb77aaf59156616c4a104ae928f", "old_text": "The security model takes the following form: A server is created that runs \"PP_Server_Setup\" and broadcasts the \"ServerUpdate\" message \"s_update\". The adversary runs \"PP_Client_Setup\" on \"s_update\". The adversary specifies a number \"Q\" of issuance phases to initiate with the server, where each phase \"i in 1..Q\" consists of", "comments": "These functions do not need the \"PP_\" namespace prefix as it's implied from the specification. I also removed underscores in favor of normal camel case notation.", "new_text": "The security model takes the following form: A server is created that runs \"ServerSetup\" and broadcasts the \"ServerUpdate\" message \"s_update\". The adversary runs \"ClientSetup\" on \"s_update\". The adversary specifies a number \"Q\" of issuance phases to initiate with the server, where each phase \"i in 1..Q\" consists of"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-ccf97fae925e598a4abca5bc64a8ea0077fcc4e717ae1a70fd5e204f663630ef", "old_text": "encoded SubjectPublicKeyInfo (SPKI) object carrying pkI. The SPKI object MUST use the RSASSA-PSS OID RFC5756, which specifies the hash algorithm and salt size. The salt size MUST match the output size of the hash function associated with the public key and token type. Since Clients truncate \"token_key_id\" in each \"TokenRequest\", Issuers should ensure that the truncated form of new key IDs do not collide", "comments": "cc NAME NAME for an extra set of eyes to check me here -- ASN.1 is not my forte. I went based on the implementation , which I think matches what Frank did , but I would definitely appreciate confirmation!\nFriendly bump NAME and NAME =)\nDo you have a test vector / example to make sure it matches what I believe?\nNAME yeah, each includes the encoded public key ().\nExample, taken from the current test vector:\nLooks good to me. But yeah, ASN.1 is not the most concise nor readable format.\nASN1 matches my expectations as well.\n... we should make it clear, and provide test vectors to make sure it matches. The test vector could include public keys in PEM format and then print out the expected key ID.", "new_text": "encoded SubjectPublicKeyInfo (SPKI) object carrying pkI. The SPKI object MUST use the RSASSA-PSS OID RFC5756, which specifies the hash algorithm and salt size. The salt size MUST match the output size of the hash function associated with the public key and token type. The parameters field for the digest used in the mask generation function and the digest being signed MUST be omitted. An example sequence of the SPKI object (in ASN.1 format) for a 2048-bit key is below: \"$ cat spki.bin | xxd -r -p | openssl asn1parse -dump -inform DER 0:d=0 hl=4 l= 338 cons: SEQUENCE 4:d=1 hl=2 l= 61 cons: SEQUENCE 6:d=2 hl=2 l= 9 prim: OBJECT :rsassaPss 17:d=2 hl=2 l= 48 cons: SEQUENCE 19:d=3 hl=2 l= 13 cons: cont [ 0 ] 21:d=4 hl=2 l= 11 cons: SEQUENCE 23:d=5 hl=2 l= 9 prim: OBJECT :sha384 34:d=3 hl=2 l= 26 cons: cont [ 1 ] 36:d=4 hl=2 l= 24 cons: SEQUENCE 38:d=5 hl=2 l= 9 prim: OBJECT :mgf1 49:d=5 hl=2 l= 11 cons: SEQUENCE 51:d=6 hl=2 l= 9 prim: OBJECT :sha384 62:d=3 hl=2 l= 3 cons: cont [ 2 ] 64:d=4 hl=2 l= 1 prim: INTEGER :30 67:d=1 hl=4 l= 271 prim: BIT STRING \" Since Clients truncate \"token_key_id\" in each \"TokenRequest\", Issuers should ensure that the truncated form of new key IDs do not collide"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-fb9866f6b403698bd74e101e35b0f28392d78ea6487d81afdd1298208a4a066d", "old_text": " Privacy Pass: The Protocol draft-davidson-pp-protocol-latest Abstract", "comments": "Also update URL with correct links\nLooks reasonable. Once this lands we should see about hooking about the gh-pages generation and link to that.", "new_text": " Privacy Pass Protocol Specification draft-ietf-privacypass-protocol-latest Abstract"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-fb9866f6b403698bd74e101e35b0f28392d78ea6487d81afdd1298208a4a066d", "old_text": " Privacy Pass: HTTP API draft-svaldez-pp-http-api-latest Abstract", "comments": "Also update URL with correct links\nLooks reasonable. Once this lands we should see about hooking about the gh-pages generation and link to that.", "new_text": " Privacy Pass HTTP API draft-ietf-privacypass-http-api-latest Abstract"}
{"id": "q-en-ietf-wg-privacypass-base-drafts-fb9866f6b403698bd74e101e35b0f28392d78ea6487d81afdd1298208a4a066d", "old_text": " Privacy Pass: Architectural Framework draft-davidson-pp-architecture-latest Abstract", "comments": "Also update URL with correct links\nLooks reasonable. Once this lands we should see about hooking about the gh-pages generation and link to that.", "new_text": " Privacy Pass Architectural Framework draft-ietf-privacypass-architecture-latest Abstract"}
{"id": "q-en-jsep-61c069945f13de88cca302ae81a02e6349940706573c9a76adb934e16597d53f", "old_text": "sections as one BUNDLE group. However, whether the m= sections are bundle-only or not depends on the BUNDLE policy. Attributes which SDP permits to either be at the session level or the media level SHOULD generally be at the media level even if they are identical. This promotes readability, especially if one of a set of", "comments": "I cleaned up the language.\nlgtm\nCullen wants LS groups because that is what legacy gear understands. Magnus wants to use MSID, which carries this.\nDiscussed in DC: will test out the LS group concept, Jonathan will send text to the list and we will discuss. Concerns are that there is a potential for mismatch between what is signaled in LS groups and what is signaled in MSID. We also need to work out what to do in the absence of LS groups.\nbug 32 is prob dup of this\npinged Lennox\nGiven the discussion on CNAME, no one in the room at IETF90 objected to the idea that LS group was MTI.\nPlan here is update draft to say that when creating the SDP, everything in PC has same CNAME, and the things in the same MediaStream get put in a LS (lipsync) group. This will mean legacy stuff that support LS will do the right thing, and legacy stuff that does not, will simply try and sync everything.\nCullen to ping Harald to get his LG regarding the comment above (ie. regarding its interaction with MSID)\nJust to add some clarity here .... this is what I think we decided. All the RTP in a given PeerConection will use the same CNAME. For each stream, If a stream has more than one track, then the JSEP needs to create a lip sync group (see URL ) to put all the tracks in a given stream into the same lip sync group.\nYes. If no lipsync group from the remote side, everything is synchronized. If only some tracks in LS group, the rest are unsynchronized. (Basically, if you indicate you understand LS, we take what you give us, all synced)\nMT, would you be willing to write some text here?\nWho has the token here?\nJust realized there could be a problem. Consider A, who does not want to sync its audio/video, and B, who wants its video to be synced. A offers a=group:LS with no mids. B has to reply with a subset of the proposed mids, by 5888 rules. Therefore B's a/v won't be synced. I don't see a good reason to require this, but 5888 is pretty clear. The identification-tags contained in this \"a=group\" line MUST be the same as those received in the offer, or a subset of them (zero identification-tags is a valid subset). When the identification-tags in the answer are a subset, the \"group\" value to be used in the session MUST be the one present in the answer.\nThere's an issue with LS when using bidirectional media sections. If media stream X (from A to B) uses media sections 1 and 2, and media stream Y (from B to A) uses media sections 2 and 3, what should be the lipsync groups? Are lipsync groups unidirectional or bidirectional?\nsee comment above. 5888 says bidi, but that is crazy talk.\nIndeed. Crazy. What is our plan of attack here then? Can we ask MMUSIC to fix this? Either by un-screwing-up a=group (which seems unlikely), by making a new grouping semantic, or by telling us that there is a better option.\nJSEP reserves the right to contradict existing standards where necessary. Generally, it seems like the need for symmetry in a=group is semantic-dependent, and so fixing it later in 5888 seems possible.\nSo you are suggesting that we define a=group:LS as declarative: it applies to the media that is sent by the peer that includes the grouping. I can live with that, though I suspect it will cause considerable heartburn. It certainly complicates the grouping model considerably.\nyes, essentially. I don't see any way to reconcile LS with reality otherwise.\nChanging the semantics of existing mechanisms by declaring that they are changed without actually changing them is a dangerous process. A simpler alternative is to drop media section reuse when LS groups are present: If a particular media section is a member of an LS group, or if a MediaStream requires its tracks to be part of an LS group, don't use it for a track in the opposite direction. The legacy case of one audio section, one video section that is used in both directions will then require that we somehow turn off LS group generation.", "new_text": "sections as one BUNDLE group. However, whether the m= sections are bundle-only or not depends on the BUNDLE policy. The next step is to generate session-level lip sync groups as defined in RFC5888, Section 7. For each MediaStream with more than one MediaStreamTrack, a group of type \"LS\" MUST be added that contains the mid values for each MediaStreamTrack in that MediaStream. Attributes which SDP permits to either be at the session level or the media level SHOULD generally be at the media level even if they are identical. This promotes readability, especially if one of a set of"}
{"id": "q-en-jsep-61c069945f13de88cca302ae81a02e6349940706573c9a76adb934e16597d53f", "old_text": "contain all m= sections that were previously bundled, as long as they are still alive, as well as any new m= sections. 5.2.3. The createOffer method takes as a parameter an RTCOfferOptions", "comments": "I cleaned up the language.\nlgtm\nCullen wants LS groups because that is what legacy gear understands. Magnus wants to use MSID, which carries this.\nDiscussed in DC: will test out the LS group concept, Jonathan will send text to the list and we will discuss. Concerns are that there is a potential for mismatch between what is signaled in LS groups and what is signaled in MSID. We also need to work out what to do in the absence of LS groups.\nbug 32 is prob dup of this\npinged Lennox\nGiven the discussion on CNAME, no one in the room at IETF90 objected to the idea that LS group was MTI.\nPlan here is update draft to say that when creating the SDP, everything in PC has same CNAME, and the things in the same MediaStream get put in a LS (lipsync) group. This will mean legacy stuff that support LS will do the right thing, and legacy stuff that does not, will simply try and sync everything.\nCullen to ping Harald to get his LG regarding the comment above (ie. regarding its interaction with MSID)\nJust to add some clarity here .... this is what I think we decided. All the RTP in a given PeerConection will use the same CNAME. For each stream, If a stream has more than one track, then the JSEP needs to create a lip sync group (see URL ) to put all the tracks in a given stream into the same lip sync group.\nYes. If no lipsync group from the remote side, everything is synchronized. If only some tracks in LS group, the rest are unsynchronized. (Basically, if you indicate you understand LS, we take what you give us, all synced)\nMT, would you be willing to write some text here?\nWho has the token here?\nJust realized there could be a problem. Consider A, who does not want to sync its audio/video, and B, who wants its video to be synced. A offers a=group:LS with no mids. B has to reply with a subset of the proposed mids, by 5888 rules. Therefore B's a/v won't be synced. I don't see a good reason to require this, but 5888 is pretty clear. The identification-tags contained in this \"a=group\" line MUST be the same as those received in the offer, or a subset of them (zero identification-tags is a valid subset). When the identification-tags in the answer are a subset, the \"group\" value to be used in the session MUST be the one present in the answer.\nThere's an issue with LS when using bidirectional media sections. If media stream X (from A to B) uses media sections 1 and 2, and media stream Y (from B to A) uses media sections 2 and 3, what should be the lipsync groups? Are lipsync groups unidirectional or bidirectional?\nsee comment above. 5888 says bidi, but that is crazy talk.\nIndeed. Crazy. What is our plan of attack here then? Can we ask MMUSIC to fix this? Either by un-screwing-up a=group (which seems unlikely), by making a new grouping semantic, or by telling us that there is a better option.\nJSEP reserves the right to contradict existing standards where necessary. Generally, it seems like the need for symmetry in a=group is semantic-dependent, and so fixing it later in 5888 seems possible.\nSo you are suggesting that we define a=group:LS as declarative: it applies to the media that is sent by the peer that includes the grouping. I can live with that, though I suspect it will cause considerable heartburn. It certainly complicates the grouping model considerably.\nyes, essentially. I don't see any way to reconcile LS with reality otherwise.\nChanging the semantics of existing mechanisms by declaring that they are changed without actually changing them is a dangerous process. A simpler alternative is to drop media section reuse when LS groups are present: If a particular media section is a member of an LS group, or if a MediaStream requires its tracks to be part of an LS group, don't use it for a track in the opposite direction. The legacy case of one audio section, one video section that is used in both directions will then require that we somehow turn off LS group generation.", "new_text": "contain all m= sections that were previously bundled, as long as they are still alive, as well as any new m= sections. The \"LS\" groups are generated in the same way as with initial offers. 5.2.3. The createOffer method takes as a parameter an RTCOfferOptions"}
{"id": "q-en-jsep-61c069945f13de88cca302ae81a02e6349940706573c9a76adb934e16597d53f", "old_text": "session-level attributes. The process here is identical to that indicated in the Initial Offers section above. The next step is to generate m= sections for each m= section that is present in the remote offer, as specified in RFC3264, Section 6. For the purposes of this discussion, any session-level attributes in the", "comments": "I cleaned up the language.\nlgtm\nCullen wants LS groups because that is what legacy gear understands. Magnus wants to use MSID, which carries this.\nDiscussed in DC: will test out the LS group concept, Jonathan will send text to the list and we will discuss. Concerns are that there is a potential for mismatch between what is signaled in LS groups and what is signaled in MSID. We also need to work out what to do in the absence of LS groups.\nbug 32 is prob dup of this\npinged Lennox\nGiven the discussion on CNAME, no one in the room at IETF90 objected to the idea that LS group was MTI.\nPlan here is update draft to say that when creating the SDP, everything in PC has same CNAME, and the things in the same MediaStream get put in a LS (lipsync) group. This will mean legacy stuff that support LS will do the right thing, and legacy stuff that does not, will simply try and sync everything.\nCullen to ping Harald to get his LG regarding the comment above (ie. regarding its interaction with MSID)\nJust to add some clarity here .... this is what I think we decided. All the RTP in a given PeerConection will use the same CNAME. For each stream, If a stream has more than one track, then the JSEP needs to create a lip sync group (see URL ) to put all the tracks in a given stream into the same lip sync group.\nYes. If no lipsync group from the remote side, everything is synchronized. If only some tracks in LS group, the rest are unsynchronized. (Basically, if you indicate you understand LS, we take what you give us, all synced)\nMT, would you be willing to write some text here?\nWho has the token here?\nJust realized there could be a problem. Consider A, who does not want to sync its audio/video, and B, who wants its video to be synced. A offers a=group:LS with no mids. B has to reply with a subset of the proposed mids, by 5888 rules. Therefore B's a/v won't be synced. I don't see a good reason to require this, but 5888 is pretty clear. The identification-tags contained in this \"a=group\" line MUST be the same as those received in the offer, or a subset of them (zero identification-tags is a valid subset). When the identification-tags in the answer are a subset, the \"group\" value to be used in the session MUST be the one present in the answer.\nThere's an issue with LS when using bidirectional media sections. If media stream X (from A to B) uses media sections 1 and 2, and media stream Y (from B to A) uses media sections 2 and 3, what should be the lipsync groups? Are lipsync groups unidirectional or bidirectional?\nsee comment above. 5888 says bidi, but that is crazy talk.\nIndeed. Crazy. What is our plan of attack here then? Can we ask MMUSIC to fix this? Either by un-screwing-up a=group (which seems unlikely), by making a new grouping semantic, or by telling us that there is a better option.\nJSEP reserves the right to contradict existing standards where necessary. Generally, it seems like the need for symmetry in a=group is semantic-dependent, and so fixing it later in 5888 seems possible.\nSo you are suggesting that we define a=group:LS as declarative: it applies to the media that is sent by the peer that includes the grouping. I can live with that, though I suspect it will cause considerable heartburn. It certainly complicates the grouping model considerably.\nyes, essentially. I don't see any way to reconcile LS with reality otherwise.\nChanging the semantics of existing mechanisms by declaring that they are changed without actually changing them is a dangerous process. A simpler alternative is to drop media section reuse when LS groups are present: If a particular media section is a member of an LS group, or if a MediaStream requires its tracks to be part of an LS group, don't use it for a track in the opposite direction. The legacy case of one audio section, one video section that is used in both directions will then require that we somehow turn off LS group generation.", "new_text": "session-level attributes. The process here is identical to that indicated in the Initial Offers section above. The next step is to generate lip sync groups as defined in RFC5888, Section 7. For each MediaStream with more than one MediaStreamTrack, a group of type \"LS\" MUST be added that contains the mid values for each MediaStreamTrack in that MediaStream. In some cases this may result in adding a mid to a given LS group that was not in that LS group in the associated offer. Although this is not allowed by RFC5888, it is allowed when implementing this specification. The next step is to generate m= sections for each m= section that is present in the remote offer, as specified in RFC3264, Section 6. For the purposes of this discussion, any session-level attributes in the"}
{"id": "q-en-jsep-9b7ce3cd8b357642a35eda2b742807c4fbad1aabe8eb97f3afd1ff3b353fbd69", "old_text": "in RFC5956, section 4.3. For simplicity, if both RTX and FEC are supported, the FEC SSRC MUST be the same as the RTX SSRC. [OPEN ISSUE: Handling of a=imageattr] If the BUNDLE policy for this PeerConnection is set to \"max- bundle\", and this is not the first m= section, or the BUNDLE policy is set to \"balanced\", and this is not the first m= section", "comments": "This LGTM. Actually, I thought I had written something like this :)\nLGTM\nAlways remember to update both the offer and answer sections :)", "new_text": "in RFC5956, section 4.3. For simplicity, if both RTX and FEC are supported, the FEC SSRC MUST be the same as the RTX SSRC. If the BUNDLE policy for this PeerConnection is set to \"max- bundle\", and this is not the first m= section, or the BUNDLE policy is set to \"balanced\", and this is not the first m= section"}
{"id": "q-en-jsep-9b7ce3cd8b357642a35eda2b742807c4fbad1aabe8eb97f3afd1ff3b353fbd69", "old_text": "maximum supported frame sizes out of all codecs included above, as specified in RFC4566, Section 6. If this m= section is for video media, an \"a=imageattr\" line, as specified in sec.imageattr. If \"rtx\" is present in the offer, for each primary codec where RTP retransmission should be used, a corresponding \"a=rtpmap\" line", "comments": "This LGTM. Actually, I thought I had written something like this :)\nLGTM\nAlways remember to update both the offer and answer sections :)", "new_text": "maximum supported frame sizes out of all codecs included above, as specified in RFC4566, Section 6. If this m= section is for video media, and there are known limitations on the size of images which can be decoded, an \"a=imageattr\" line, as specified in sec.imageattr. If \"rtx\" is present in the offer, for each primary codec where RTP retransmission should be used, a corresponding \"a=rtpmap\" line"}
{"id": "q-en-jsep-9b7ce3cd8b357642a35eda2b742807c4fbad1aabe8eb97f3afd1ff3b353fbd69", "old_text": "section 4.3. For simplicity, if both RTX and FEC are supported, the FEC SSRC MUST be the same as the RTX SSRC. [OPEN ISSUE: Handling of a=imageattr] If a data channel m= section has been offered, a m= section MUST also be generated for data. The field MUST be set to \"application\" and the field MUST be set to exactly match the", "comments": "This LGTM. Actually, I thought I had written something like this :)\nLGTM\nAlways remember to update both the offer and answer sections :)", "new_text": "section 4.3. For simplicity, if both RTX and FEC are supported, the FEC SSRC MUST be the same as the RTX SSRC. If a data channel m= section has been offered, a m= section MUST also be generated for data. The field MUST be set to \"application\" and the field MUST be set to exactly match the"}
{"id": "q-en-jsep-660019e992075f6eb93447308ce4978d990c0c1928a50392bd11a01d8f9ad709", "old_text": "Each \"m=\" and c=\" line MUST be filled in with the port, protocol, and address of the default candidate for the m= section, as described in RFC5245, Section 4.3. Each \"a=rtcp\" attribute line MUST also be filled in with the port and address of the appropriate default candidate, either the default RTP or RTCP candidate, depending on whether RTCP multiplexing is currently active or not. Note that if RTCP multiplexing is being offered, but not yet active, the default RTCP candidate MUST be used, as indicated in RFC5761, section 5.1.3. In each case, if no candidates of the desired type have yet been gathered, dummy values MUST be used, as described above. Each \"a=mid\" line MUST stay the same.", "comments": "LGTM\nlgtm\nThe question has come up about what the default candidate should be set to during trickle ICE but after at least one candidate has been gathered so at least in principle you could set c= to one of those candidates. See also; URL\nWant to pick a solution that works even if we we do continous trickle and ice does not finish.\nConsensus on today's call is: Before ICE is completed, follow the 5245 guidance and pick the \"best\" candidate. Once ICE is completed, pick whatever you are actually using.\nIf we move to a mode where ICE never completes, perhaps use the active pair.\nIt turns out that this was pretty clear already. I have made it slightly clearer.", "new_text": "Each \"m=\" and c=\" line MUST be filled in with the port, protocol, and address of the default candidate for the m= section, as described in RFC5245, Section 4.3. If ICE checking has already completed for one or more candidate pairs and a candidate pair is in active use, then that pair MUST be used, even if ICE has not yet completed. Note that this differs from the guidance in RFC5245, Section 9.1.2.2, which only refers to offers created when ICE has completed. Each \"a=rtcp\" attribute line MUST also be filled in with the port and address of the appropriate default candidate, either the default RTP or RTCP candidate, depending on whether RTCP multiplexing is currently active or not. Note that if RTCP multiplexing is being offered, but not yet active, the default RTCP candidate MUST be used, as indicated in RFC5761, section 5.1.3. In each case, if no candidates of the desired type have yet been gathered, dummy values MUST be used, as described above. Each \"a=mid\" line MUST stay the same."}
{"id": "q-en-jsep-a6e4c08963a21919074f2d18f95f8fb8187b923325deb222049426dbbe44cd20", "old_text": "For any specified \"TIAS\" bandwidth value, set this value as a constraint on the maximum RTP bitrate to be used when sending media as specified in RFC3890. If a \"TIAS\" value is not present, but an \"AS\" value is specified, generate a \"TIAS\" value using this formula:", "comments": "and\nI tweaked this a bit, nothing major. LGTM.\nFor CT without AS, probably want to just tell the congestion control to stay below CT. For CT where AS is specified, do the same, but limit any streams with AS to the indicated value (in addition to total limit).\nThis should go into the handling-a-remote-description section.", "new_text": "For any specified \"TIAS\" bandwidth value, set this value as a constraint on the maximum RTP bitrate to be used when sending media, as specified in RFC3890. If a \"TIAS\" value is not present, but an \"AS\" value is specified, generate a \"TIAS\" value using this formula:"}
{"id": "q-en-jsep-a6e4c08963a21919074f2d18f95f8fb8187b923325deb222049426dbbe44cd20", "old_text": "to RTCP. If more accurate control of bandwidth is needed, \"TIAS\" should be used instead of \"AS\". Any specified \"CT\" bandwidth value MUST be ignored, as the meaning of this construct at the media level is not well defined. For any \"RR\" or \"RS\" bandwidth values, handle as specified in RFC3556, Section 2. [TODO: handling of CN, telephone-event, \"red\"] If the media section if of type audio:", "comments": "and\nI tweaked this a bit, nothing major. LGTM.\nFor CT without AS, probably want to just tell the congestion control to stay below CT. For CT where AS is specified, do the same, but limit any streams with AS to the indicated value (in addition to total limit).\nThis should go into the handling-a-remote-description section.", "new_text": "to RTCP. If more accurate control of bandwidth is needed, \"TIAS\" should be used instead of \"AS\". For any \"RR\" or \"RS\" bandwidth values, handle as specified in RFC3556, Section 2. Any specified \"CT\" bandwidth value MUST be ignored, as the meaning of this construct at the media level is not well defined. [TODO: handling of CN, telephone-event, \"red\"] If the media section if of type audio:"}
{"id": "q-en-jsep-eb458cd12f16843cef609badd6cdb895d18e4070c708cc066109b1cf6a561f77", "old_text": "following restriction: The fields of the \"o=\" line MUST stay the same except for the field, which MUST increment if the session description changes in any way, including the addition of ICE candidates. If the initial offer was applied using setLocalDescription, but an answer from the remote side has not yet been applied, meaning the", "comments": "From meeting minutes \"SDP o= line increment [] While the proposed solution on the slide was accepted, it was noted that the proposal on the slide goes against what was agreed at last meeting. Emil Ivov also commented that Trickle ICE for SIP does exactly what is described on the slide. Jonathan Lennox said that we should make sure this is compatible with whatever is done in trickle ICE SIP stacks.\"\nThis algorithm becomes correct once we decide to not allow o= modification. Editorial comments follow.\nDiscussion in-room: 1) add example to make it clear that there is a separate counter (other than current o=) which is what increments. 2) change text about <session-version> to be more of a \"note that\" for an edge case, as opposed to normative text about a mainline case.\nUpdated this branch. PTAL\nLGTM\n2349: \"might differ\" is creepy. \"differs\" perhaps. 2359: \"N + 2\" -> must be greater than N + 1 This is basically OK modulo the above", "new_text": "following restriction: The fields of the \"o=\" line MUST stay the same except for the field, which MUST increment by one on each call to createOffer if the offer might differ from the output of the previous call to createOffer; implementations MAY opt to increment on every call. The value of the generated is independent of the of the current local description; in particular, in the case where the current version is N, an offer is created with version N+1, and then that offer is rolled back so that the current version is again N, the next generated offer will still have version N+2. Note that if the application creates an offer by reading currentLocalDescription instead of calling createOffer, the returned SDP may be different than when setLocalDescription was originally called, due to the addition of gathered ICE candidates, but the will not have changed. There are no known scenarios in which this causes problems, but if this is a concern, the solution is simply to use createOffer to ensure a unique . If the initial offer was applied using setLocalDescription, but an answer from the remote side has not yet been applied, meaning the"}
{"id": "q-en-jsep-929e562d796423f27b6f9139254fea6460fce7403739535ab656d2faa47be586", "old_text": "all streams other than the first will be marked as bundle-only. This policy aims to minimize candidate gathering and maximize multiplexing, at the cost of less compatibility with legacy endpoints. When acting as answerer, if there if no bundle group in the offer, the implementation will reject all but the first m= section. As it provides the best tradeoff between performance and compatibility with legacy endpoints, the default bundle policy MUST", "comments": "NAME NAME PTAL\nFrom Berlin: \"A clarification to that rule was found when an offer is received with some m= lines bundled and some not; you accept all m= lines that are possible to bundle with the first m= line. \"\nI think we'll also have to update section 5.3.1 (Initial Answers) where it lists conditions for rejecting an m= section. Also, do we need to make similar modifications for the \"balanced\" bundle policy? LG with nits", "new_text": "all streams other than the first will be marked as bundle-only. This policy aims to minimize candidate gathering and maximize multiplexing, at the cost of less compatibility with legacy endpoints. When acting as answerer, the implementation will reject any m= sections other than the first m= section, unless they are in the same bundle group as that m= section. As it provides the best tradeoff between performance and compatibility with legacy endpoints, the default bundle policy MUST"}
{"id": "q-en-jsep-929e562d796423f27b6f9139254fea6460fce7403739535ab656d2faa47be586", "old_text": "No supported codec is present in the offer. The bundle policy is \"max-bundle\", the m= section is not in a bundle group, and this is not the first m= section. The bundle policy is \"balanced\", the m= section is not in a bundle group, and this is not the first m= section for this media type. The RTP/RTCP multiplexing policy is \"require\" and the m= section doesn't contain an \"a=rtcp-mux\" attribute.", "comments": "NAME NAME PTAL\nFrom Berlin: \"A clarification to that rule was found when an offer is received with some m= lines bundled and some not; you accept all m= lines that are possible to bundle with the first m= line. \"\nI think we'll also have to update section 5.3.1 (Initial Answers) where it lists conditions for rejecting an m= section. Also, do we need to make similar modifications for the \"balanced\" bundle policy? LG with nits", "new_text": "No supported codec is present in the offer. The bundle policy is \"max-bundle\", and this is not the first m= section or in the same bundle group as the first m= section. The bundle policy is \"balanced\", and this is not the first m= section for this media type or in the same bundle group as the first m= section for this media type. The RTP/RTCP multiplexing policy is \"require\" and the m= section doesn't contain an \"a=rtcp-mux\" attribute."}
{"id": "q-en-jsep-360256d7752b72df660cf9806943cb09223bc7d8450f999244d95bb722c9a417", "old_text": "The PeerConnection constructor allows the application to specify global parameters for the media session, such as the STUN/TURN servers and credentials to use when gathering candidates. The size of the ICE candidate pool can also be set, if desired; by default the candidate pool size is zero. [[OPEN ISSUE: this is an inconsistency with the W3C spec https://github.com/rtcweb-wg/jsep/issues/12.]] In addition, the application can specify its preferred policy regarding use of BUNDLE, the multiplexing mechanism defined in I-", "comments": "Looks good to me\nJSEP says: S 4.1.1. \" The PeerConnection constructor allows the application to specify global parameters for the media session, such as the STUN/TURN servers and credentials to use when gathering candidates. The size of the ICE candidate pool can also be set, if desired; by default the candidate pool size is zero.\" The WebRTC spec says: \"At this point the ICE Agent does not know how many ICE components it needs (and hence the number of candidates to gather), but it can make a reasonable assumption such as 2. As the RTCPeerConnection object gets more information, the ICE Agent can adjust the number of components\"\nImagine 4 is more realistic.\nso default in not defined but their is way for the JS to to set the default for a PC\nDiscussed in DC: default will be unspecified and left up to browsers, there will need to be a setting for applications to override this default.\nw3c bug is URL", "new_text": "The PeerConnection constructor allows the application to specify global parameters for the media session, such as the STUN/TURN servers and credentials to use when gathering candidates. The size of the ICE candidate pool can also be set, if desired. If the application does not indicate a candidate pool size, the browser may select any default candidate pool size. In addition, the application can specify its preferred policy regarding use of BUNDLE, the multiplexing mechanism defined in I-"}
{"id": "q-en-jsep-d6e7df3189eaf130554b4510a94223c8c4f477eaf3f24e32a1555d1832f97fad", "old_text": "4.2.4. The setCodecPreferences method sets the codec preferences of a transceiver, which in turn affect the presence and order of codecs of the associated m= section on future calls to createOffer and", "comments": "This distinguishes between the direction which was last set via setDirection (which simply affects offer/answer generation) from the direction that was negotiated, which affects whether a transceiver can send/recv media. Addresses issue .\nAccording to WebRTC 1.0 Section 5.4 setDirection() does not take effect immediately: \"The setDirection method sets the direction of the RTCRtpTransceiver. Calls to setDirection() do not take effect immediately. Instead, future calls to createOffer and createAnswer mark the corresponding media description as sendrecv, sendonly, recvonly or inactive as defined in [JSEP] (section 5.2.2. and section 5.3.2.). Calling setDirection() sets the negotiation-needed flag.\" My interpretation of the above is that setDirection immediately sets the transceiver.direction attribute, but that the \"current direction\" (distinct from the direction attribute) is only changed by calls to setLocal/setRemoteDescription. I would like to suggest that JSEP indicate precisely when the \"current direction\" changes. My reading of JSEP is that this is determined from the currentLocalDescription and currentRemoteDescription. For example: The direction provided in the currentLocalDescription (e.g. the \"current direction\") is \"sendrecv\" and setDirection(\"recvonly\") is called. Since setDirection does not change the direction in currentLocalDescription, the RtpSender does not stop sending. createOffer() is called. A \"recvonly\" offer is generated. setLocalDescription(RecvOnlyOffer) is called. 
Now the direction in pendingLocalDescription changes to \"recvonly\", but the direction in currentLocalDescription is still \"sendrecv\", so the RtpSender does not stop sending. setRemoteDescription(SendOnlyAnswer) is called. Now the direction in currentLocalDescription is \"recvonly\", so the RtpSender will stop sending.\nBernard's description makes sense to me; currentLocal/Remote indicate the current direction state.\nFixed by", "new_text": "4.2.4. The direction method returns the last value passed into setDirection. If setDirection has never been called, it returns the direction the transceiver was initialized with. 4.2.5. The currentDirection method returns the last negotiated direction for the transceiver's associated m= section. More specifically, it returns the RFC3264 directional attribute of the associated m= section in the last applied answer, with \"send\" and \"recv\" directions reversed if it was a remote answer. For example, if the directional attribute for the associated m= section in a remote answer is \"recvonly\", currentDirection returns \"sendonly\". If an answer that references this transceiver has not yet been applied, or if the transceiver is stopped, currentDirection returns null. 4.2.6. The setCodecPreferences method sets the codec preferences of a transceiver, which in turn affect the presence and order of codecs of the associated m= section on future calls to createOffer and"}
{"id": "q-en-jsep-14e7394713837b26bd8bf50e8af0f6e1e5de6cb2afafa56dbaabec2ef0c6d23f", "old_text": "3.7. JSEP supports simulcast of a MediaStreamTrack, where multiple encodings of the source media can be transmitted within the context of a single m= section. The current JSEP API is designed to allow applications to send simulcasted media but only to receive a single encoding. This allows for multi-user scenarios where each sending client sends multiple encodings to a server, which then, for each receiving client, chooses the appropriate encoding to forward. Applications request support for simulcast by configuring multiple encodings on an RTPSender, which, upon generation of an offer or", "comments": "Originally raised by Magnus: Section 3.7: JSEP supports simulcast of a MediaStreamTrack, where multiple encodings of the source media can be transmitted within the context of a single m= section. I think this should be explcit that it is \"simulcast transmission of a MediaStreamTrack\".", "new_text": "3.7. JSEP supports simulcast transmission of a MediaStreamTrack, where multiple encodings of the source media can be transmitted within the context of a single m= section. The current JSEP API is designed to allow applications to send simulcasted media but only to receive a single encoding. This allows for multi-user scenarios where each sending client sends multiple encodings to a server, which then, for each receiving client, chooses the appropriate encoding to forward. Applications request support for simulcast by configuring multiple encodings on an RTPSender, which, upon generation of an offer or"}
{"id": "q-en-jsep-eb77a8f9ec37539a3644d8750c8141ed47f1af5e7bdab618eaa6525ce0e1f3d1", "old_text": "RFC4566 is the base SDP specification and MUST be implemented. RFC5764 MUST be supported for signaling the UDP/TLS/RTP/SAVPF RFC5764, TCP/DTLS/RTP/SAVPF I-D.nandakumar-mmusic-proto-iana- registration, \"UDP/DTLS/SCTP\" I-D.ietf-mmusic-sctp-sdp, and \"TCP/DTLS/SCTP\" I-D.ietf-mmusic-sctp-sdp RTP profiles. RFC5245 MUST be implemented for signaling the ICE credentials and candidate lines corresponding to each media stream. The ICE", "comments": "Originally raised by Magnus: Section 5.1.1: R-2 [RFC5764] MUST be supported for signaling the UDP/TLS/RTP/SAVPF [RFC5764], TCP/DTLS/RTP/SAVPF [I-D.nandakumar-mmusic-proto-iana-registration], \"UDP/DTLS/ SCTP\" [I-D.ietf-mmusic-sctp-sdp], and \"TCP/DTLS/SCTP\" [I-D.ietf-mmusic-sctp-sdp] RTP profiles. I would note that [I-D.nandakumar-mmusic-proto-iana-registration] is actually published as RFC 7850. Section 5.1.1: R-12 [RFC5761] MUST be implemented to signal multiplexing of RTP and RTCP. This should include the updated as ref also.", "new_text": "RFC4566 is the base SDP specification and MUST be implemented. RFC5764 MUST be supported for signaling the UDP/TLS/RTP/SAVPF RFC5764, TCP/DTLS/RTP/SAVPF RFC7850, \"UDP/DTLS/SCTP\" I-D.ietf- mmusic-sctp-sdp, and \"TCP/DTLS/SCTP\" I-D.ietf-mmusic-sctp-sdp RTP profiles. RFC5245 MUST be implemented for signaling the ICE credentials and candidate lines corresponding to each media stream. The ICE"}
{"id": "q-en-jsep-eb77a8f9ec37539a3644d8750c8141ed47f1af5e7bdab618eaa6525ce0e1f3d1", "old_text": "To properly indicate use of DTLS, the field MUST be set to \"UDP/TLS/RTP/SAVPF\", as specified in RFC5764, Section 8, if the default candidate uses UDP transport, or \"TCP/DTLS/RTP/SAVPF\", as specified in I-D.nandakumar-mmusic-proto-iana-registration if the default candidate uses TCP transport. If codec preferences have been set for the associated transceiver, media formats MUST be generated in the corresponding order, and", "comments": "Originally raised by Magnus: Section 5.1.1: R-2 [RFC5764] MUST be supported for signaling the UDP/TLS/RTP/SAVPF [RFC5764], TCP/DTLS/RTP/SAVPF [I-D.nandakumar-mmusic-proto-iana-registration], \"UDP/DTLS/ SCTP\" [I-D.ietf-mmusic-sctp-sdp], and \"TCP/DTLS/SCTP\" [I-D.ietf-mmusic-sctp-sdp] RTP profiles. I would note that [I-D.nandakumar-mmusic-proto-iana-registration] is actually published as RFC 7850. Section 5.1.1: R-12 [RFC5761] MUST be implemented to signal multiplexing of RTP and RTCP. This should include the updated as ref also.", "new_text": "To properly indicate use of DTLS, the field MUST be set to \"UDP/TLS/RTP/SAVPF\", as specified in RFC5764, Section 8, if the default candidate uses UDP transport, or \"TCP/DTLS/RTP/SAVPF\", as specified in RFC7850 if the default candidate uses TCP transport. If codec preferences have been set for the associated transceiver, media formats MUST be generated in the corresponding order, and"}
{"id": "q-en-jsep-647be66497dcae551dcdff54c4054b548a4642cc4453f6844dbcb613f0abc811", "old_text": "clearer. Session Information (\"i=\"), URI (\"u=\"), Email Address (\"e=\"), Phone Number (\"p=\"), Bandwidth (\"b=\"), Repeat Times (\"r=\"), and Time Zones (\"z=\") lines are not useful in this context and SHOULD NOT be included. Encryption Keys (\"k=\") lines do not provide sufficient security and MUST NOT be included.", "comments": "Originally raised by Magnus: Section 5.2.1: o Session Information (\"i=\"), URI (\"u=\"), Email Address (\"e=\"), Phone Number (\"p=\"), Bandwidth (\"b=\"), Repeat Times (\"r=\"), and Time Zones (\"z=\") lines are not useful in this context and SHOULD NOT be included. Considering that b=CT is not useless, I am a bit surprised to see b= on this list. Yes, b=CT: is only useful is one like to provide a upper total bandwidth limitation. So, maybe some discussion of the possible choice?\nThis is just a mistake in 5.2.1 - we cover how to deal with it later in section 5. Prob need to remove the mention of b= here.", "new_text": "clearer. Session Information (\"i=\"), URI (\"u=\"), Email Address (\"e=\"), Phone Number (\"p=\"), Repeat Times (\"r=\"), and Time Zones (\"z=\") lines are not useful in this context and SHOULD NOT be included. Encryption Keys (\"k=\") lines do not provide sufficient security and MUST NOT be included."}
{"id": "q-en-jsep-fd2422f9dc124c3e9cceb884ccca4591302bcbf53f78f879a2bd59e5bcfd6eba", "old_text": "JSEP's handling of session descriptions is simple and straightforward. Whenever an offer/answer exchange is needed, the initiating side creates an offer by calling a createOffer() API. The application optionally modifies that offer, and then uses it to set up its local config via the setLocalDescription() API. The offer is then sent off to the remote side over its preferred signaling mechanism (e.g., WebSockets); upon receipt of that offer, the remote party installs it using the setRemoteDescription() API. To complete the offer/answer exchange, the remote party uses the createAnswer() API to generate an appropriate answer, applies it", "comments": "Did we not decide that modifying that offer (or an answer created by createAnswer()) is not allowed?\nAgreed. This text should be culled.", "new_text": "JSEP's handling of session descriptions is simple and straightforward. Whenever an offer/answer exchange is needed, the initiating side creates an offer by calling a createOffer() API. The application then uses that offer to set up its local config via the setLocalDescription() API. The offer is finally sent off to the remote side over its preferred signaling mechanism (e.g., WebSockets); upon receipt of that offer, the remote party installs it using the setRemoteDescription() API. To complete the offer/answer exchange, the remote party uses the createAnswer() API to generate an appropriate answer, applies it"}
{"id": "q-en-jsep-c91b36cb24d7ff6dd0b037f1aea8cd9f3c1e4a301f6967ef26d4fe644da12c18", "old_text": "the ICE candidate event. If the pool becomes depleted, either because a larger-than-expected number of ICE components is used, or because the pool has not had enough time to gather candidates, the remaining candidates are gathered as usual. One example of where this concept is useful is an application that expects an incoming call at some point in the future, and wants to", "comments": "The pool is only used in the first offer/answer exchange. After applying a local description, the pool size is frozen and changing the ICE servers will not result in new candidates being pooled. After an answer is applied, any leftover candidates are discarded. This only happens when applying an answer so that candidates aren't prematurely discarded for use cases that involve pranswer, or multiple offers.\nI can live with this as is. Want to read again if we end up making changes to address EKR comments\nUpdated PR and its description based on our discussion.", "new_text": "the ICE candidate event. If the pool becomes depleted, either because a larger-than-expected number of ICE components is used, or because the pool has not had enough time to gather candidates, the remaining candidates are gathered as usual. This only occurs for the first offer/answer exchange, after which the candidate pool is emptied and no longer used. One example of where this concept is useful is an application that expects an incoming call at some point in the future, and wants to"}
{"id": "q-en-jsep-c91b36cb24d7ff6dd0b037f1aea8cd9f3c1e4a301f6967ef26d4fe644da12c18", "old_text": "createOffer to generate new ICE credentials, for the purpose of forcing an ICE restart and kicking off a new gathering phase, in which the new servers will be used. If the ICE candidate pool has a nonzero size, any existing candidates will be discarded, and new candidates will be gathered from the new servers. Any change to the ICE candidate policy affects the next gathering phase. If an ICE gathering phase has already started or", "comments": "The pool is only used in the first offer/answer exchange. After applying a local description, the pool size is frozen and changing the ICE servers will not result in new candidates being pooled. After an answer is applied, any leftover candidates are discarded. This only happens when applying an answer so that candidates aren't prematurely discarded for use cases that involve pranswer, or multiple offers.\nI can live with this as is. Want to read again if we end up making changes to address EKR comments\nUpdated PR and its description based on our discussion.", "new_text": "createOffer to generate new ICE credentials, for the purpose of forcing an ICE restart and kicking off a new gathering phase, in which the new servers will be used. If the ICE candidate pool has a nonzero size, and a local description has not yet been applied, any existing candidates will be discarded, and new candidates will be gathered from the new servers. Any change to the ICE candidate policy affects the next gathering phase. If an ICE gathering phase has already started or"}
{"id": "q-en-jsep-c91b36cb24d7ff6dd0b037f1aea8cd9f3c1e4a301f6967ef26d4fe644da12c18", "old_text": "until a gathering phase occurs, and so any necessary filtering can still be done on any pooled candidates. Any changes to the ICE candidate pool size take effect immediately; if increased, additional candidates are pre-gathered; if decreased, the now-superfluous candidates are discarded.", "comments": "The pool is only used in the first offer/answer exchange. After applying a local description, the pool size is frozen and changing the ICE servers will not result in new candidates being pooled. After an answer is applied, any leftover candidates are discarded. This only happens when applying an answer so that candidates aren't prematurely discarded for use cases that involve pranswer, or multiple offers.\nI can live with this as is. Want to read again if we end up making changes to address EKR comments\nUpdated PR and its description based on our discussion.", "new_text": "until a gathering phase occurs, and so any necessary filtering can still be done on any pooled candidates. The ICE candidate pool size MUST NOT be changed after applying a local description. If a local description has not yet been applied, any changes to the ICE candidate pool size take effect immediately; if increased, additional candidates are pre-gathered; if decreased, the now-superfluous candidates are discarded."}
{"id": "q-en-jsep-c91b36cb24d7ff6dd0b037f1aea8cd9f3c1e4a301f6967ef26d4fe644da12c18", "old_text": "accordingly, as described in I-D.ietf-mmusic-sdp-bundle-negotiation, Section 8.2. 6. Note: The following algorithm does not yet have WG consensus but is", "comments": "The pool is only used in the first offer/answer exchange. After applying a local description, the pool size is frozen and changing the ICE servers will not result in new candidates being pooled. After an answer is applied, any leftover candidates are discarded. This only happens when applying an answer so that candidates aren't prematurely discarded for use cases that involve pranswer, or multiple offers.\nI can live with this as is. Want to read again if we end up making changes to address EKR comments\nUpdated PR and its description based on our discussion.", "new_text": "accordingly, as described in I-D.ietf-mmusic-sdp-bundle-negotiation, Section 8.2. If the description is of type \"answer\", and there are still remaining candidates in the ICE candidate pool, discard them. 6. Note: The following algorithm does not yet have WG consensus but is"}
{"id": "q-en-jsep-0be0fddfbe3ed302a56a6c624aa27b0a8a18f715edb4d50bb438c206eb116a21", "old_text": "but also will offer \"a=rtcp-mux\", thus allowing for compatibility with either multiplexing or non-multiplexing endpoints. The JSEP implementation will only gather RTP candidates. This halves the number of candidates that the offerer needs to gather. Applying a description with an m= section that does not contain an \"a=rtcp-mux\" attribute will cause an error to be returned. The default multiplexing policy MUST be set to \"require\". Implementations MAY choose to reject attempts by the application to", "comments": "I applied the following principles, which unfortunately are somewhat inconsistent with the current mux-exclusive draft. You use a=rtcp-mux to negotiate mux a=rtcp-mux-only is used by the offerer to say \"I really mean it\" To that end, offers contain a=rtcp-mux-only if the policy is require. Answers don't contain it because you don't need it to negotiate mux.\nThose principles are consistent with bundle-only, although I wrote the mux-only examples as if mux-only should always be included whenever a=rtcp-mux is present.\nNAME back at you\nNAME ping\nLG", "new_text": "but also will offer \"a=rtcp-mux\", thus allowing for compatibility with either multiplexing or non-multiplexing endpoints. The JSEP implementation will only gather RTP candidates and will insert an \"a=rtcp-mux-only\" indication into any offers it generates. This halves the number of candidates that the offerer needs to gather. Applying a description with an m= section that does not contain an \"a=rtcp-mux\" attribute will cause an error to be returned. The default multiplexing policy MUST be set to \"require\". Implementations MAY choose to reject attempts by the application to"}
{"id": "q-en-jsep-0be0fddfbe3ed302a56a6c624aa27b0a8a18f715edb4d50bb438c206eb116a21", "old_text": "An \"a=rtcp-mux\" line, as specified in RFC5761, Section 5.1.3. An \"a=rtcp-rsize\" line, as specified in RFC5506, Section 5. Lastly, if a data channel has been created, a m= section MUST be", "comments": "I applied the following principles, which unfortunately are somewhat inconsistent with the current mux-exclusive draft. You use a=rtcp-mux to negotiate mux a=rtcp-mux-only is used by the offerer to say \"I really mean it\" To that end, offers contain a=rtcp-mux-only if the policy is require. Answers don't contain it because you don't need it to negotiate mux.\nThose principles are consistent with bundle-only, although I wrote the mux-only examples as if mux-only should always be included whenever a=rtcp-mux is present.\nNAME back at you\nNAME ping\nLG", "new_text": "An \"a=rtcp-mux\" line, as specified in RFC5761, Section 5.1.3. If the RTP/RTCP multiplexing policy is \"require\", an \"a=rtcp-mux- only\" line, as specified in I-D.ietf-mmusic-mux-exclusive, Section 4. An \"a=rtcp-rsize\" line, as specified in RFC5506, Section 5. Lastly, if a data channel has been created, a m= section MUST be"}
{"id": "q-en-jsep-0be0fddfbe3ed302a56a6c624aa27b0a8a18f715edb4d50bb438c206eb116a21", "old_text": "The \"a=rtcp-mux\" line MUST only be added if present in the most recent answer. The \"a=rtcp-mux-only\" line MUST only be added if present in the most recent answer. The \"a=rtcp-rsize\" line MUST only be added if present in the most recent answer.", "comments": "I applied the following principles, which unfortunately are somewhat inconsistent with the current mux-exclusive draft. You use a=rtcp-mux to negotiate mux a=rtcp-mux-only is used by the offerer to say \"I really mean it\" To that end, offers contain a=rtcp-mux-only if the policy is require. Answers don't contain it because you don't need it to negotiate mux.\nThose principles are consistent with bundle-only, although I wrote the mux-only examples as if mux-only should always be included whenever a=rtcp-mux is present.\nNAME back at you\nNAME ping\nLG", "new_text": "The \"a=rtcp-mux\" line MUST only be added if present in the most recent answer. The \"a=rtcp-rsize\" line MUST only be added if present in the most recent answer."}
{"id": "q-en-jsep-0be0fddfbe3ed302a56a6c624aa27b0a8a18f715edb4d50bb438c206eb116a21", "old_text": "attribute MUST NOT be specified. If the RTP/RTCP multiplexing policy is \"require\", each m= section MUST contain an \"a=rtcp-mux\" attribute. If this session description is of type \"pranswer\" or \"answer\", the following additional checks are applied:", "comments": "I applied the following principles, which unfortunately are somewhat inconsistent with the current mux-exclusive draft. You use a=rtcp-mux to negotiate mux a=rtcp-mux-only is used by the offerer to say \"I really mean it\" To that end, offers contain a=rtcp-mux-only if the policy is require. Answers don't contain it because you don't need it to negotiate mux.\nThose principles are consistent with bundle-only, although I wrote the mux-only examples as if mux-only should always be included whenever a=rtcp-mux is present.\nNAME back at you\nNAME ping\nLG", "new_text": "attribute MUST NOT be specified. If the RTP/RTCP multiplexing policy is \"require\", each m= section MUST contain an \"a=rtcp-mux\" attribute. If an \"m=\" section contains an \"a=rtcp-mux-only\" attribute then that section MUST also contain an \"a=rtcp-mux\" attribute. If this session description is of type \"pranswer\" or \"answer\", the following additional checks are applied:"}
{"id": "q-en-jsep-69fef9df1c3d94271fda0bf5e239ddd2f08df5779d5db6209d414c016bcc57c3", "old_text": "TRANSPORT categories for bundled m= sections, as described in I- D.ietf-mmusic-sdp-bundle-negotiation, Section 8.1. The \"a=group:BUNDLE\" attribute MUST include the MID identifiers specified in the bundle group in the most recent answer, minus any m= sections that have been marked as rejected, plus any newly added or", "comments": "attributes. As far as I can tell this can happen: On re-offers On any answer because nothing requires the offerer to generate bundle tags in the same order as m= lines (even though the data channel m= line is last) See: URL\nNAME PTAL\nSay you have a session where only a data channel is created. SDP looks like [session boilerplate] a=group:BUNDLE d1 m=X application UDP/DTLS/SCTP webrtc-datachannel a=mid:d1 ... Now, a media m= section is added (i.e. after first o/a). SDP becomes: [session boilerplate] a=group:BUNDLE d1 a1 m=X application UDP/DTLS/SCTP webrtc-datachannel a=mid:d1 a=ice-ufrag:ABCD ... m=X audio UDP/TLS/RTP/SAVPF 0 a=mid:a1 [no ice-ufrag or TRANSPORT attrs, since bundled] Now, where does a=rtcp-mux go? It can't go into because of the language that states Thoughts on how to resolve?\nCould generating a session description be split into two parts? First, generate media-level attributes for each \"m=\" section, then transport-level attributes for the BUNDLE group and each non-bundled \"m=\" section.\nThat would be my preference, i.e., jam it into the BUNDLE group m= section, regardless of media type, as that's conceptually simplest, and easily handled here in JSEP. Either way, I suspect this will require changes upstream in the bundle spec.\nSo ... is it really not allowed in the d1 section. Unknown attributes are allowed in the d1 section, they are just ignored if they have no meaning for the data channel. 
Agree this sounds like something that will hit bundle.\nCouldn't you do a=group:BUNDLE a1 d1 I would have thought this was prohibited, but is it really? URL\nI think that's legal, but that approach doesn't work in the general case, where \"m=\" section A has TRANSPORT attributes that only it uses, and section B has TRANSPORT attributes that only it uses. BUNDLE just needs to allow an \"m=\" section to contain attributes that it normally wouldn't contain if it wasn't bundled. (If this isn't already allowed).\nNAME any action needed here?\nI think what we want in this scenario is to put \"a=rtcp-mux\" and \"a=rtcp-rsize\" in the data channel section. The simplest way would be to say: Which I guess could only happen in subsequent offers.\nLet's go with what NAME suggests, although this makes it a bundle problem.\nWhat would you say is needed on the BUNDLE side? Just clarification that the offerer BUNDLE-tag m= section should contain the union of all TRANSPORT and IDENTICAL attributes used by the BUNDLE group?\ntalked to chairs and advice from chairs on this and similar things that depend on mmusic outcomes. We put in JSEP now what our best guess is of how mmusic will end up and publish JSEP. At the same time we log a bug to check if we need to make any changes to line up with what mmusic decides. We can take changes to line up with mmusic as Last Call comments later on.\n+1 with Taylor proposal to put RTP related things in data m= section if it is first and good with making this be bundles problem\nNAME NAME I assume that you think that on initial offers this cannot happen because data comes last?\nNAME Right. I can't think of a scenario where it wouldn't go last in an initial offer.\nAgreed.\nOh, except that you aren't required to make the bundle group in order So you can have\nI don't think the question whether you are allowed or not to place an RTP attribute in a non-RTP m= section is a pure BUNDLE issue. 
One needs to check what the specs defining those attributes say.\nevery SDP spec has to accept (and ignore) lines it does not understand ...\nI looked into the a=rtcp-mux and a=rtcp-rsize specs, and I didn't see anything that prohibited their inclusion into a non-media section. Generally, I think BUNDLE can define the rules for this sort of thing, without fear that it will somehow contradict some other spec.\nNAME NAME You may want to chime in on the current MMUSIC thread. I agree with you, but it seems to be going in the direction of \"you can't do this\".\n\nI looked into the a=rtcp-mux and a=rtcp-rsize specs, and I didn't see anything that prohibited their >inclusion into a non-media section. It may not be explicitly stated, but since the semantics is about RTP and RTCP I think it is implicit. Having said that, I personally have no problem with allowing the attributes in non-RTP m- sections. I just want to verify that there is no text that has to be modified somewhere, and/or whether there is some text that has to be added somewhere :)\nResolved by\nLooks good.", "new_text": "TRANSPORT categories for bundled m= sections, as described in I- D.ietf-mmusic-sdp-bundle-negotiation, Section 8.1. Note that if media m= sections are bundled into a data m= section, then certain TRANSPORT and IDENTICAL attributes may appear in the data m= section even if they would otherwise only be appropriate for a media m= section (e.g., \"a=rtcp-mux\"). This cannot happen in initial offers because in the initial offer JSEP implementations always list media m= sections (if any) before the data m= section (if any), and at least one of those media m= sections will not have the \"a=bundle-only\" attribute. Therefore, in initial offers, any \"a=bundle-only\" m= sections will be bundled into a preceding non-bundle-only media m= section. 
The \"a=group:BUNDLE\" attribute MUST include the MID identifiers specified in the bundle group in the most recent answer, minus any m= sections that have been marked as rejected, plus any newly added or"}
{"id": "q-en-jsep-69fef9df1c3d94271fda0bf5e239ddd2f08df5779d5db6209d414c016bcc57c3", "old_text": "If a data channel m= section has been offered, a m= section MUST also be generated for data. The field MUST be set to \"application\" and the and \"fmt\" fields MUST be set to exactly match the fields in the offer. Within the data m= section, an \"a=mid\" line MUST be generated and included as described above, along with an \"a=sctp-port\" line", "comments": "attributes. As far as I can tell this can happen: On re-offers On any answer because nothing requires the offerer to generate bundle tags in the same order as m= lines (even though the data channel m= line is last) See: URL\nNAME PTAL\nSay you have a session where only a data channel is created. SDP looks like [session boilerplate] a=group:BUNDLE d1 m=X application UDP/DTLS/SCTP webrtc-datachannel a=mid:d1 ... Now, a media m= section is added (i.e. after first o/a). SDP becomes: [session boilerplate] a=group:BUNDLE d1 a1 m=X application UDP/DTLS/SCTP webrtc-datachannel a=mid:d1 a=ice-ufrag:ABCD ... m=X audio UDP/TLS/RTP/SAVPF 0 a=mid:a1 [no ice-ufrag or TRANSPORT attrs, since bundled] Now, where does a=rtcp-mux go? It can't go into because of the language that states Thoughts on how to resolve?\nCould generating a session description be split into two parts? First, generate media-level attributes for each \"m=\" section, then transport-level attributes for the BUNDLE group and each non-bundled \"m=\" section.\nThat would be my preference, i.e., jam it into the BUNDLE group m= section, regardless of media type, as that's conceptually simplest, and easily handled here in JSEP. Either way, I suspect this will require changes upstream in the bundle spec.\nSo ... is it really not allowed in the d1 section. Unknown attributes are allowed in the d1 section, they are just ignored if they have no meaning for the data channel. 
Agree this sounds like something that will hit bundle.\nCouldn't you do a=group:BUNDLE a1 d1 I would have thought this was prohibited, but is it really? URL\nI think that's legal, but that approach doesn't work in the general case, where \"m=\" section A has TRANSPORT attributes that only it uses, and section B has TRANSPORT attributes that only it uses. BUNDLE just needs to allow an \"m=\" section to contain attributes that it normally wouldn't contain if it wasn't bundled. (If this isn't already allowed).\nNAME any action needed here?\nI think what we want in this scenario is to put \"a=rtcp-mux\" and \"a=rtcp-rsize\" in the data channel section. The simplest way would be to say: Which I guess could only happen in subsequent offers.\nLet's go with what NAME suggests, although this makes it a bundle problem.\nWhat would you say is needed on the BUNDLE side? Just clarification that the offerer BUNDLE-tag m= section should contain the union of all TRANSPORT and IDENTICAL attributes used by the BUNDLE group?\ntalked to chairs and advice from chairs on this and similar things that depend on mmusic outcomes. We put in JSEP now what our best guess is of how mmusic will end up and publish JSEP. At the same time we log a bug to check if we need to make any changes to line up with what mmusic decides. We can take changes to line up with mmusic as Last Call comments later on.\n+1 with Taylor proposal to put RTP related things in data m= section if it is first and good with making this be bundles problem\nNAME NAME I assume that you think that on initial offers this cannot happen because data comes last?\nNAME Right. I can't think of a scenario where it wouldn't go last in an initial offer.\nAgreed.\nOh, except that you aren't required to make the bundle group in order So you can have\nI don't think the question whether you are allowed or not to place an RTP attribute in a non-RTP m= section is a pure BUNDLE issue. 
One needs to check what the specs defining those attributes say.\nevery SDP spec has to accept (and ignore) lines it does not understand ...\nI looked into the a=rtcp-mux and a=rtcp-rsize specs, and I didn't see anything that prohibited their inclusion into a non-media section. Generally, I think BUNDLE can define the rules for this sort of thing, without fear that it will somehow contradict some other spec.\nNAME NAME You may want to chime in on the current MMUSIC thread. I agree with you, but it seems to be going in the direction of \"you can't do this\".\n\nI looked into the a=rtcp-mux and a=rtcp-rsize specs, and I didn't see anything that prohibited their >inclusion into a non-media section. It may not be explicitly stated, but since the semantics is about RTP and RTCP I think it is implicit. Having said that, I personally have no problem with allowing the attributes in non-RTP m- sections. I just want to verify that there is no text that has to be modified somewhere, and/or whether there is some text that has to be added somewhere :)\nResolved by\nLooks good.", "new_text": "If a data channel m= section has been offered, a m= section MUST also be generated for data. The field MUST be set to \"application\" and the and \"fmt\" fields MUST be set to exactly match the fields in the offer. Note that if media m= sections are bundled into a data m= section, then certain TRANSPORT and IDENTICAL attributes may appear in the data m= section even if they would otherwise only be appropriate for a media m= section (e.g., \"a=rtcp- mux\"). Within the data m= section, an \"a=mid\" line MUST be generated and included as described above, along with an \"a=sctp-port\" line"}
{"id": "q-en-jsep-ad3b2d3197b9351ccfd977d5a321b7a7c3bbf695b796deb5de660e2fa793f386", "old_text": "excluded by subsequent calls to createOffer and createAnswer, in which case the corresponding media formats in the associated m= section will be excluded. The codec preferences cannot add media formats that would otherwise not be present. This includes codecs that were not negotiated in a previous offer/answer exchange that included the transceiver. The codec preferences of an RtpTransceiver can also determine the order of codecs in subsequent calls to createOffer and createAnswer,", "comments": "LGTM and if Justin wants to tie in the stuff from my comment a bit more that is great\nNAME PTAL\nseems like it's generally right,LGTM", "new_text": "excluded by subsequent calls to createOffer and createAnswer, in which case the corresponding media formats in the associated m= section will be excluded. The codec preferences cannot add media formats that would otherwise not be present. The codec preferences of an RtpTransceiver can also determine the order of codecs in subsequent calls to createOffer and createAnswer,"}
{"id": "q-en-jsep-ad3b2d3197b9351ccfd977d5a321b7a7c3bbf695b796deb5de660e2fa793f386", "old_text": "For each offered m= section, if any of the following conditions are true, the corresponding m= section in the answer MUST be marked as rejected by setting the port in the m= line to zero, as indicated in RFC3264, Section 6., and further processing for this m= section can be skipped: The associated RtpTransceiver has been stopped. No supported codec is present in the offer. The bundle policy is \"max-bundle\", and this is not the first m= section or in the same bundle group as the first m= section.", "comments": "LGTM and if Justin wants to tie in the stuff from my comment a bit more that is great\nNAME PTAL\nseems like it's generally right,LGTM", "new_text": "For each offered m= section, if any of the following conditions are true, the corresponding m= section in the answer MUST be marked as rejected by setting the port in the m= line to zero, as indicated in RFC3264, Section 6, and further processing for this m= section can be skipped: The associated RtpTransceiver has been stopped. None of the offered media formats are supported and, if applicable, allowed by codec preferences. The bundle policy is \"max-bundle\", and this is not the first m= section or in the same bundle group as the first m= section."}
{"id": "q-en-jsep-ad3b2d3197b9351ccfd977d5a321b7a7c3bbf695b796deb5de660e2fa793f386", "old_text": "for the corresponding m= line in the offer. If codec preferences have been set for the associated transceiver, media formats MUST be generated in the corresponding order, and MUST exclude any codecs not present in the codec preferences or not present in the offer. Note that non-JSEP endpoints are not subject to this restriction, and might add media formats in the answer that are not present in the offer, as specified in RFC3264, Section 6.1. Therefore, JSEP implementations MUST be prepared to receive such answers. Unless excluded by the above restrictions, the media formats MUST include the mandatory audio/video codecs as specified in RFC7874, Section 3, and RFC7742, Section 5. The m= line MUST be followed immediately by a \"c=\" line, as specified in RFC4566, Section 5.7. Again, as no candidates are available yet,", "comments": "LGTM and if Justin wants to tie in the stuff from my comment a bit more that is great\nNAME PTAL\nseems like it's generally right,LGTM", "new_text": "for the corresponding m= line in the offer. If codec preferences have been set for the associated transceiver, media formats MUST be generated in the corresponding order, regardless of what was offered, and MUST exclude any codecs not present in the codec preferences. Otherwise, the media formats on the m= line MUST be generated in the same order as those offered in the current remote description, excluding any currently unsupported formats. Any currently available media formats that are not present in the current remote description MUST be added after all existing formats. In either case, the media formats in the answer MUST include at least one format that is present in the offer, but MAY include formats that are locally supported but not present in the offer, as mentioned in RFC3264, Section 6.1. If no common format exists, the m= section is rejected as described above. 
The m= line MUST be followed immediately by a \"c=\" line, as specified in RFC4566, Section 5.7. Again, as no candidates are available yet,"}
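The answer-generation record above encodes a small ordering algorithm for the media formats on the answer's m= line. As an illustration only (the helper name and signature are assumptions of this sketch, not taken from the JSEP text), it could look like:

```python
def answer_formats(offered, supported, preferences=None):
    """Order media formats for an SDP answer per the rules quoted above
    (illustrative sketch, not normative)."""
    if preferences is not None:
        # Codec preferences dictate membership and order outright,
        # regardless of what was offered.
        formats = [f for f in preferences if f in supported]
    else:
        # Mirror the offered order, dropping unsupported formats, then
        # append locally supported formats absent from the offer.
        formats = [f for f in offered if f in supported]
        formats += [f for f in supported if f not in formats]
    # At least one format must be common with the offer, else the
    # m= section is rejected (port set to zero).
    if not any(f in offered for f in formats):
        return None
    return formats
```

With no preferences, the offered order wins and extra local codecs trail; with preferences set, they override the offered order entirely.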
{"id": "q-en-jsep-ad3b2d3197b9351ccfd977d5a321b7a7c3bbf695b796deb5de660e2fa793f386", "old_text": "must instead match what was supplied in the offer, as described above. Unless codec preferences have been set for the associated transceiver, the media formats on the m= line MUST be generated in the same order as in the current local description. Each \"a=ice-ufrag\" and \"a=ice-pwd\" line MUST stay the same, unless the m= section is restarting, in which case new ICE credentials must be created as specified in RFC5245, Section 9.2.1.1. If the", "comments": "LGTM and if Justin wants to tie in the stuff from my comment a bit more that is great\nNAME PTAL\nseems like it's generally right,LGTM", "new_text": "must instead match what was supplied in the offer, as described above. Each \"a=ice-ufrag\" and \"a=ice-pwd\" line MUST stay the same, unless the m= section is restarting, in which case new ICE credentials must be created as specified in RFC5245, Section 9.2.1.1. If the"}
{"id": "q-en-jsep-ad3b2d3197b9351ccfd977d5a321b7a7c3bbf695b796deb5de660e2fa793f386", "old_text": "local answers, \"recvonly\" for remote answers, or \"sendrecv\" for either type of answer), choose the media format to send as the most preferred media format from the remote description that is also present in the answer, as described in RFC3264, Sections 6.1 and 7, and start transmitting RTP media once the underlying transport layers have been established. If an SSRC has not already been chosen for this outgoing RTP stream, choose a random one. If media is already being transmitted, the same SSRC SHOULD be used unless the clockrate of the new codec is different, in which case a new SSRC MUST be chosen, as specified in RFC7160, Section 3.1. The payload type mapping from the remote description is used to", "comments": "LGTM and if Justin wants to tie in the stuff from my comment a bit more that is great\nNAME PTAL\nseems like it's generally right,LGTM", "new_text": "local answers, \"recvonly\" for remote answers, or \"sendrecv\" for either type of answer), choose the media format to send as the most preferred media format from the remote description that is also locally supported, as discussed in RFC3264, Sections 6.1 and 7, and start transmitting RTP media using that format once the underlying transport layers have been established. If an SSRC has not already been chosen for this outgoing RTP stream, choose a random one. If media is already being transmitted, the same SSRC SHOULD be used unless the clockrate of the new codec is different, in which case a new SSRC MUST be chosen, as specified in RFC7160, Section 3.1. The payload type mapping from the remote description is used to"}
{"id": "q-en-jsep-217b9503eb1016a42f77fe85409b6272f5e6445df17a55d0e1e696c313e60dc1", "old_text": "If the RTP/RTCP multiplexing policy is \"require\", each m= section MUST contain an \"a=rtcp-mux\" attribute. If an m= section contains an \"a=rtcp-mux-only\" attribute then that section MUST also contain an \"a=rtcp-mux\" attribute. If this m= section was present in the previous answer then the state of RTP/RTCP multiplexing MUST match what was previously negotiated. If this session description is of type \"pranswer\" or \"answer\", the following additional checks are applied:", "comments": "I agree with justin here. my bad for missing that.\nThe 4th bullet on page 64 is ambiguous at \"this m= section\". It looks like this bullet should just be merged with the previous bullet.\nI'm not sure what the problem is here, but I'm not against merging it. NAME\nThe bullet point in question is the one that reads If this m= section was present in the previous answer then the state of RTP/RTCP multiplexing MUST match what was previously negotiated.\nLGTM", "new_text": "If the RTP/RTCP multiplexing policy is \"require\", each m= section MUST contain an \"a=rtcp-mux\" attribute. If an m= section contains an \"a=rtcp-mux-only\" attribute, that section MUST also contain an \"a=rtcp-mux\" attribute. If an m= section was present in the previous answer, the state of RTP/RTCP multiplexing MUST match what was previously negotiated. If this session description is of type \"pranswer\" or \"answer\", the following additional checks are applied:"}
{"id": "q-en-jsep-5d17daba5ca8f56217be631b3426167c704089dd1134f0911d37e4c9d1664f38", "old_text": "If present, \"a=msid\" attributes MUST be parsed as specified in I- D.ietf-mmusic-msid, Section 3.2, and their values stored, ignoring any \"appdata\" field. Any \"a=imageattr\" attributes MUST be parsed as specified in RFC6236, Section 3, and their values stored.", "comments": "Fixed\nalso tagging NAME (who might be unhappy with 'default', which matches Chrome behaviour but not Firefox behaviour (it generates a uuid))\nping NAME NAME\nNAME NAME PTAL - Barring any objections this one ought to get merged.\nFolks, I think this one is pretty uncontroversial - planning to land this next week if there are no objections.\nTake this simplified SDP offer from a remote endpoint (in parts taken from : A single audio and video stream. No a=mid (there is quite some text taking care of that) but no a=msid line either. URL shows a live example. Both Chrome (without unified-plan) and Firefox will fire the ontrack event (or legacy onaddstream) with a single mediastream containing an audio and video track. This has been like that since... 2012-ish? Unified plan support in Chrome changes this behaviour: URL ontrack is fired with an empty set of media streams. onaddstream is not fired. This is quite a change in behaviour. And unexpected; I didn't expect any issues for this trivial case of a single audio/video track. As NAME webrtc-pc assumes/requires a=msid to be present and . In particular this says which means firing with no streams is technically correct. JSEP seems to assume it is present as well, but has this text about a=msid in Note the \"If present\". What if it is not present? The closest I could find in terms of guidance was this from the and its section about handling a=msid is supposed to be present in initial offer so we are talking about dealing with non-webrtc entities that send an offer. 
Given that there is plenty of support for entities that do not support a=mid (an RFC published in 2010) in JSEP, do we need backward compatibility for the case of a single audio/video track each but no msid? Interop between browsers is probably not affected, due to Chrome still interpreting and sending a=ssrc:123 msid:...\nnote that a proper webrtc client will include \"no streams\" as (maybe with a track) which allows differentiating between \"no streams because I do not use any\" and \"msid???\" Thanks NAME for another\naside: the interesting case you have trouble distinguishing between is \"no msid because I haven't started sending a track yet\" and \"no msid because I don't support msid\". It's one case where we still have trouble if RTP packets arrive ahead of signalling. But - yes, I think we are better off sticking with the 2012 behavior, default stream and all. We can handle the case above by changing the stream associations of the receiver if a new SDP (with msid) arrives after the RTP packets.\nthere used to be an a=msid-semantic line that would allow determining whether a party wants its offer to be treated assuming msid support. Browsers still send it :-)\nHow are non-signaled tracks supposed to be handled anyway? There's JSEP language about creating a track for it, but what about webrtc-pc? (If a track exists in the forest, and nobody is around to surface it to JavaScript, does it still exist?) \"Run the following steps for each media description in description\" means no ontrack will fire until we do have something signaled. If the track (and/or stream) only exists in some inaccessible way, they're almost conceptual until the signaling does arrive. And when it does arrive, we know whether or not MediaStreams were signaled. So we should know what to do, same things as if we didn't receive early RTP packets? Does the distinction matter? 
I love that we can tell the difference between \"no streams because I do not use any\" and \"no msid signaled\" whether or not \"a=msid:-\" was present. I had not thought about that! Unless I'm confused about something, I suggest: If is present, do what Chrome does: don't create any stream. If the line is completely missing, do what Firefox (or Chrome Plan B) does: create a MediaStream with a random id. Does that work for people?\nBy the way I thought this was a webrtc-pc issue when I replied above, so I was thinking about what to do in JavaScript rather than talking about any JSEP change.\nI suppose we need some JSEP language to handle the lack of any a=msid but i agree with what you propose above. We do have have a=msid:- and we need to handle that in webrtc-pc too. Will file an issue there.\nAll webrtc-pc says is \"let msids be a list of the MSIDs that the media description indicates transceiver.[[Receiver]].[[ReceiverTrack]] is to be associated with. Otherwise, let msids be an empty list\" And says \"The special value \"-\" indicates \"no MediaStream\".\" So that already takes care of the case correctly, I claim. I agree. We need to solve what the media description (JSEP) says about this. Right now : \"If present, \"a=msid\" attributes MUST be parsed as specified in [I-D.ietf-mmusic-msid], Section 3.2, and their values stored.\" I think it needs to say what happens when it is not present. Something like: \"If no a=msid is present, then in place of \"msid-id\", create a single \"id\" unique to this end-point, if one hasn't already been created for this connection.\"\n\"if one hasn't already been created for this connection\"; so future tracks will share the same stream? There is always at most a single \"default stream\"?\nNAME says he observed this is how Firefox and Chrome work. URL\nSee URL\nLGTMreason I asked is that chrome generates a MediaStream with id \"default\" -- see URL while Firefox does not. 
(I do not see any reason to change the legacy behaviour of chrome even though it seems a bit odd that the RNG always emits the right bytes for \"default\")", "new_text": "If present, \"a=msid\" attributes MUST be parsed as specified in I- D.ietf-mmusic-msid, Section 3.2, and their values stored, ignoring any \"appdata\" field. If no \"a=msid\" attributes are present, a random msid-id value is generated for a \"default\" MediaStream for the session, if not already present, and this value is stored. Any \"a=imageattr\" attributes MUST be parsed as specified in RFC6236, Section 3, and their values stored."}
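The fallback agreed on in the record above (one random "default" MediaStream id per session when no a=msid attribute is present, with "a=msid:-" meaning "no MediaStream") is essentially a memoized id lookup. A rough Python sketch, with the helper name and per-connection state shape invented for illustration:

```python
import uuid

def msids_for_section(sdp_lines, connection_state):
    """Return the stream ids for one m= section, falling back to a single
    per-connection default stream when no a=msid attribute is present
    (illustrative sketch of the behaviour discussed above)."""
    msids = [line.split()[0].split(":", 1)[1]
             for line in sdp_lines if line.startswith("a=msid:")]
    if msids:
        # "a=msid:-" explicitly signals "no MediaStream".
        return [m for m in msids if m != "-"]
    # No a=msid at all: generate (and remember) one random default id,
    # shared by every section of this session that also lacks a=msid.
    if "default_msid" not in connection_state:
        connection_state["default_msid"] = str(uuid.uuid4())
    return [connection_state["default_msid"]]
```

Because the default id is cached in the connection state, an audio and a video section that both lack a=msid end up grouped into the same default stream, matching the pre-unified-plan behaviour described above.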
{"id": "q-en-jsep-bc4e685f31928193c12f43f0f238ca21d7cfb1a3b5cdfeb383b8a5e7e99e6f37", "old_text": "The setConfiguration method allows the global configuration of the PeerConnection, which was initially set by constructor parameters, to be changed during the session. The effects of this method call depend on when it is invoked, and they will differ, depending on which specific parameters are changed: This call may result in a change to the state of the ICE agent. 4.1.17.", "comments": "Several small updates: Issue 15: local media state, local resources correct, removed comment. - Issue 16: updated wording to use 'calling this method', removed comments, - Issue 17: changed stopTransceiver to stop, removed comments, - Issue 18: changed 'directional attribute' to 'direction attribute', removed comments,\nLGTM once lands\n18) [rfced] Section 4.2.5: We see \"direction attribute\" but not \"directional attribute\" in RFC 3264. Will this text be clear to readers? (We ask because we also see \"A direction attribute, determined by applying the rules regarding the offered direction specified in [RFC3264], Section 6.1\" in Section 5.3.1 of this document.) Original: More specifically, it indicates the [RFC3264] directional attribute of the associated m= section in the last applied answer (including provisional answers), with \"send\" and \"recv\" directions reversed if it was a remote answer.\nThis should indeed be , as that's the terminology used by 3264.\nNAME please close this issue once you consider it resolved.\n17) [rfced] Section 4.2.2: Will \"stopTransceiver\" be clear to readers? We could not find this string elsewhere in this document, anywhere else in this cluster of documents, in any published RFC, or in google searches. 
Original: The stopped property indicates whether the transceiver has been stopped, either by a call to stopTransceiver or by applying an answer that rejects the associated m= section.\nThis should be , rather than .\nNAME please close this issue once you consider it resolved.\n16) [rfced] Section 4.1.16: Should \"The effects of this method call\" be \"The effects of calling this method,\" and should \"This call\" be \"Calling this method\"? Original: The effects of this method call depend on when it is invoked, and differ depending on which specific parameters are changed: ... This call may result in a change to the state of the ICE Agent.\nSGTM\nNAME please close this issue once you consider it resolved.\n15) [rfced] Section 4.1.10: Please confirm that \"local media state\" and \"local resources\" (as opposed to remote) are correct in the context of setRemoteDescription. (We ask because we see identical wording in Section 4.1.9 (\"setLocalDescription\").) Original: This API changes the local media state; among other things, it sets up local resources for sending and encoding media.\nyes, this is correct\nNAME please close this issue once you consider it resolved.\n12) [rfced] Section 4.1.8.1: We had trouble with this sentence. If neither suggestion below is correct, please clarify. Original: While this could also be done with a \"sendonly\" pranswer, followed by a \"sendrecv\" answer, the initial pranswer leaves the offer-answer exchange open, which means that the caller cannot send an updated offer during this time. Suggestion : While this could also be done with a \"sendonly\" pranswer, if followed by a \"sendrecv\" answer the initial pranswer leaves the offer/answer exchange open, which means that the caller cannot send an updated offer during this time. 
Suggestion : While this could also be done with a \"sendonly\" pranswer followed by a \"sendrecv\" answer, the initial pranswer leaves the offer/answer exchange open, which means that the caller cannot send an updated offer during this time.\nsuggestion seems right", "new_text": "The setConfiguration method allows the global configuration of the PeerConnection, which was initially set by constructor parameters, to be changed during the session. The effects of calling this method depend on when it is invoked, and they will differ, depending on which specific parameters are changed: Calling this method may result in a change to the state of the ICE agent. 4.1.17."}
{"id": "q-en-jsep-bc4e685f31928193c12f43f0f238ca21d7cfb1a3b5cdfeb383b8a5e7e99e6f37", "old_text": "4.2.2. The stopped property indicates whether the transceiver has been stopped, either by a call to stopTransceiver or by applying an answer that rejects the associated \"m=\" section. In either of these cases, it is set to \"true\" and otherwise will be set to \"false\". A stopped RtpTransceiver does not send any outgoing RTP or RTCP or process any incoming RTP or RTCP. It cannot be restarted.", "comments": "Several small updates: Issue 15: local media state, local resources correct, removed comment. - Issue 16: updated wording to use 'calling this method', removed comments, - Issue 17: changed stopTransceiver to stop, removed comments, - Issue 18: changed 'directional attribute' to 'direction attribute', removed comments,\nLGTM once lands\n18) [rfced] Section 4.2.5: We see \"direction attribute\" but not \"directional attribute\" in RFC 3264. Will this text be clear to readers? (We ask because we also see \"A direction attribute, determined by applying the rules regarding the offered direction specified in [RFC3264], Section 6.1\" in Section 5.3.1 of this document.) Original: More specifically, it indicates the [RFC3264] directional attribute of the associated m= section in the last applied answer (including provisional answers), with \"send\" and \"recv\" directions reversed if it was a remote answer.\nThis should indeed be , as that's the terminology used by 3264.\nNAME please close this issue once you consider it resolved.\n17) [rfced] Section 4.2.2: Will \"stopTransceiver\" be clear to readers? We could not find this string elsewhere in this document, anywhere else in this cluster of documents, in any published RFC, or in google searches. 
Original: The stopped property indicates whether the transceiver has been stopped, either by a call to stopTransceiver or by applying an answer that rejects the associated m= section.\nThis should be , rather than .\nNAME please close this issue once you consider it resolved.\n16) [rfced] Section 4.1.16: Should \"The effects of this method call\" be \"The effects of calling this method,\" and should \"This call\" be \"Calling this method\"? Original: The effects of this method call depend on when it is invoked, and differ depending on which specific parameters are changed: ... This call may result in a change to the state of the ICE Agent.\nSGTM\nNAME please close this issue once you consider it resolved.\n15) [rfced] Section 4.1.10: Please confirm that \"local media state\" and \"local resources\" (as opposed to remote) are correct in the context of setRemoteDescription. (We ask because we see identical wording in Section 4.1.9 (\"setLocalDescription\").) Original: This API changes the local media state; among other things, it sets up local resources for sending and encoding media.\nyes, this is correct\nNAME please close this issue once you consider it resolved.\n12) [rfced] Section 4.1.8.1: We had trouble with this sentence. If neither suggestion below is correct, please clarify. Original: While this could also be done with a \"sendonly\" pranswer, followed by a \"sendrecv\" answer, the initial pranswer leaves the offer-answer exchange open, which means that the caller cannot send an updated offer during this time. Suggestion : While this could also be done with a \"sendonly\" pranswer, if followed by a \"sendrecv\" answer the initial pranswer leaves the offer/answer exchange open, which means that the caller cannot send an updated offer during this time. 
Suggestion : While this could also be done with a \"sendonly\" pranswer followed by a \"sendrecv\" answer, the initial pranswer leaves the offer/answer exchange open, which means that the caller cannot send an updated offer during this time.\nsuggestion seems right", "new_text": "4.2.2. The stopped property indicates whether the transceiver has been stopped, either by a call to stop or by applying an answer that rejects the associated \"m=\" section. In either of these cases, it is set to \"true\" and otherwise will be set to \"false\". A stopped RtpTransceiver does not send any outgoing RTP or RTCP or process any incoming RTP or RTCP. It cannot be restarted."}
{"id": "q-en-jsep-bc4e685f31928193c12f43f0f238ca21d7cfb1a3b5cdfeb383b8a5e7e99e6f37", "old_text": "affects the direction property of the associated \"m=\" section on future calls to createOffer and createAnswer. The permitted values for direction are \"recvonly\", \"sendrecv\", \"sendonly\", and \"inactive\", mirroring the identically named directional attributes defined in RFC4566. When creating offers, the transceiver direction is directly reflected", "comments": "Several small updates: Issue 15: local media state, local resources correct, removed comment. - Issue 16: updated wording to use 'calling this method', removed comments, - Issue 17: changed stopTransceiver to stop, removed comments, - Issue 18: changed 'directional attribute' to 'direction attribute', removed comments,\nLGTM once lands\n18) [rfced] Section 4.2.5: We see \"direction attribute\" but not \"directional attribute\" in RFC 3264. Will this text be clear to readers? (We ask because we also see \"A direction attribute, determined by applying the rules regarding the offered direction specified in [RFC3264], Section 6.1\" in Section 5.3.1 of this document.) Original: More specifically, it indicates the [RFC3264] directional attribute of the associated m= section in the last applied answer (including provisional answers), with \"send\" and \"recv\" directions reversed if it was a remote answer.\nThis should indeed be , as that's the terminology used by 3264.\nNAME please close this issue once you consider it resolved.\n17) [rfced] Section 4.2.2: Will \"stopTransceiver\" be clear to readers? We could not find this string elsewhere in this document, anywhere else in this cluster of documents, in any published RFC, or in google searches. 
Original: The stopped property indicates whether the transceiver has been stopped, either by a call to stopTransceiver or by applying an answer that rejects the associated m= section.\nThis should be , rather than .\nNAME please close this issue once you consider it resolved.\n16) [rfced] Section 4.1.16: Should \"The effects of this method call\" be \"The effects of calling this method,\" and should \"This call\" be \"Calling this method\"? Original: The effects of this method call depend on when it is invoked, and differ depending on which specific parameters are changed: ... This call may result in a change to the state of the ICE Agent.\nSGTM\nNAME please close this issue once you consider it resolved.\n15) [rfced] Section 4.1.10: Please confirm that \"local media state\" and \"local resources\" (as opposed to remote) are correct in the context of setRemoteDescription. (We ask because we see identical wording in Section 4.1.9 (\"setLocalDescription\").) Original: This API changes the local media state; among other things, it sets up local resources for sending and encoding media.\nyes, this is correct\nNAME please close this issue once you consider it resolved.\n12) [rfced] Section 4.1.8.1: We had trouble with this sentence. If neither suggestion below is correct, please clarify. Original: While this could also be done with a \"sendonly\" pranswer, followed by a \"sendrecv\" answer, the initial pranswer leaves the offer-answer exchange open, which means that the caller cannot send an updated offer during this time. Suggestion : While this could also be done with a \"sendonly\" pranswer, if followed by a \"sendrecv\" answer the initial pranswer leaves the offer/answer exchange open, which means that the caller cannot send an updated offer during this time. 
Suggestion : While this could also be done with a \"sendonly\" pranswer followed by a \"sendrecv\" answer, the initial pranswer leaves the offer/answer exchange open, which means that the caller cannot send an updated offer during this time.\nsuggestion seems right", "new_text": "affects the direction property of the associated \"m=\" section on future calls to createOffer and createAnswer. The permitted values for direction are \"recvonly\", \"sendrecv\", \"sendonly\", and \"inactive\", mirroring the identically named direction attributes defined in RFC4566. When creating offers, the transceiver direction is directly reflected"}
{"id": "q-en-jsep-bc4e685f31928193c12f43f0f238ca21d7cfb1a3b5cdfeb383b8a5e7e99e6f37", "old_text": "The currentDirection property indicates the last negotiated direction for the transceiver's associated \"m=\" section. More specifically, it indicates the directional attribute RFC3264 of the associated \"m=\" section in the last applied answer (including provisional answers), with \"send\" and \"recv\" directions reversed if it was a remote answer. For example, if the directional attribute for the associated \"m=\" section in a remote answer is \"recvonly\", currentDirection is set to \"sendonly\".", "comments": "Several small updates: Issue 15: local media state, local resources correct, removed comment. - Issue 16: updated wording to use 'calling this method', removed comments, - Issue 17: changed stopTransceiver to stop, removed comments, - Issue 18: changed 'directional attribute' to 'direction attribute', removed comments,\nLGTM once lands\n18) [rfced] Section 4.2.5: We see \"direction attribute\" but not \"directional attribute\" in RFC 3264. Will this text be clear to readers? (We ask because we also see \"A direction attribute, determined by applying the rules regarding the offered direction specified in [RFC3264], Section 6.1\" in Section 5.3.1 of this document.) Original: More specifically, it indicates the [RFC3264] directional attribute of the associated m= section in the last applied answer (including provisional answers), with \"send\" and \"recv\" directions reversed if it was a remote answer.\nThis should indeed be , as that's the terminology used by 3264.\nNAME please close this issue once you consider it resolved.\n17) [rfced] Section 4.2.2: Will \"stopTransceiver\" be clear to readers? We could not find this string elsewhere in this document, anywhere else in this cluster of documents, in any published RFC, or in google searches. 
Original: The stopped property indicates whether the transceiver has been stopped, either by a call to stopTransceiver or by applying an answer that rejects the associated m= section.\nThis should be , rather than .\nNAME please close this issue once you consider it resolved.\n16) [rfced] Section 4.1.16: Should \"The effects of this method call\" be \"The effects of calling this method,\" and should \"This call\" be \"Calling this method\"? Original: The effects of this method call depend on when it is invoked, and differ depending on which specific parameters are changed: ... This call may result in a change to the state of the ICE Agent.\nSGTM\nNAME please close this issue once you consider it resolved.\n15) [rfced] Section 4.1.10: Please confirm that \"local media state\" and \"local resources\" (as opposed to remote) are correct in the context of setRemoteDescription. (We ask because we see identical wording in Section 4.1.9 (\"setLocalDescription\").) Original: This API changes the local media state; among other things, it sets up local resources for sending and encoding media.\nyes, this is correct\nNAME please close this issue once you consider it resolved.\n12) [rfced] Section 4.1.8.1: We had trouble with this sentence. If neither suggestion below is correct, please clarify. Original: While this could also be done with a \"sendonly\" pranswer, followed by a \"sendrecv\" answer, the initial pranswer leaves the offer-answer exchange open, which means that the caller cannot send an updated offer during this time. Suggestion : While this could also be done with a \"sendonly\" pranswer, if followed by a \"sendrecv\" answer the initial pranswer leaves the offer/answer exchange open, which means that the caller cannot send an updated offer during this time. 
Suggestion : While this could also be done with a \"sendonly\" pranswer followed by a \"sendrecv\" answer, the initial pranswer leaves the offer/answer exchange open, which means that the caller cannot send an updated offer during this time.\nsuggestion seems right", "new_text": "The currentDirection property indicates the last negotiated direction for the transceiver's associated \"m=\" section. More specifically, it indicates the direction attribute RFC3264 of the associated \"m=\" section in the last applied answer (including provisional answers), with \"send\" and \"recv\" directions reversed if it was a remote answer. For example, if the direction attribute for the associated \"m=\" section in a remote answer is \"recvonly\", currentDirection is set to \"sendonly\"."}
{"id": "q-en-load-balancers-a6c63ae1a09494173177151658505ae8b53efd83bef243e22f2c26d2cf8af76e", "old_text": "packets in response to Initial packets that don't include a valid token. Both server and service must have access to Universal time, though tight synchronization is unnecessary. The tokens are protected using AES128-GCM AEAD, as explained in token-protection-with-aead. All tokens, generated by either the", "comments": "This is in response to a post NAME made on .\nThanks for the PR! So as an interoperability standard, if the retry service encodes \"4000\", what does the server do with that? Do we need to add a config parameter for the time_t that serves as time 0?\nTimestamps should be absolute, not relative, in which case I'm not sure you need a 0. There's nothing special about midnight 1 January 1970.\nAh, so you're just suggesting declaring a later date to begin the epoch? Makes sense\nWhy do we need to have a sential time? If it's a precision vs. range issue then shifting the epoch makes sense.\nSorry, I don't know what \"sential time\" is. What are you proposing?\nWhat I'm proposing is that we not treat any times differently. It's not clear to me why 4000 needed special treatment.\n4000 is just an example. The intent of using the epoch was to have common agreement on when tokens expire without any QUIC-LB-specific coordination. The current draft uses 1970 as a timebase, which absolutely makes integers longer than it needs to be. IIUC the PR doesn't have any baseline so it's unclear how the server and retry service are supposed to interpret each other's expiration times. As we're probably speaking past each other: what, precisely, does the integer in this field indicate?\nIt's the end of the validity period for the token in some convenient timebase.\nAlright, we agree on that. So this PR is being a little less precise on synchronization, which is fine, but isn't addressing the fact that we're counting from 1970. Fair enough. 
Dear QUIC WG, First a moment of unforgivable pedantry. Universal time is unfortunately not the right name for UTC, which is slightly different. It could also mean UT1 or some other variations It's also not clear what the future of computer timekeeping is and I know several places don't synchronize to UTC internally. I think the best way to handle this is to say something like \"synchronized\" and leave open as to exactly how that is done. I agree with the comments that there are too many options and should be fewer. Other then that I didn't see any problems but I'm probably too sleepy right now to spot them. Sincerely, Watson Ladd Astra mortemque praestare gradatim", "new_text": "packets in response to Initial packets that don't include a valid token. Both server and service must have time synchronized with respect to one another to prevent tokens being incorrectly marked as expired, though tight synchronization is unnecessary. The tokens are protected using AES128-GCM AEAD, as explained in token-protection-with-aead. All tokens, generated by either the"}
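The thread above converges on encoding the token's expiration as an absolute timestamp, so server and retry service only need loosely synchronized clocks rather than a shared relative timebase. A minimal sketch of that check (the function name and skew allowance are hypothetical, not from the draft):

```python
import time

# Hypothetical skew allowance; the draft only requires loose synchronization.
ALLOWED_SKEW_SECONDS = 5

def token_expired(expiry_epoch_seconds, now=None):
    """True if the token's absolute expiration time has passed.

    Because the field is an absolute timestamp, the server and the retry
    service need no QUIC-LB-specific agreement beyond a common epoch.
    """
    if now is None:
        now = time.time()
    # Tolerate small clock differences so valid tokens are not
    # incorrectly rejected as expired.
    return now > expiry_epoch_seconds + ALLOWED_SKEW_SECONDS
```

Shifting the epoch later than 1970 only changes how many bits the integer needs, not this comparison.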
{"id": "q-en-load-balancers-9d417a7eb648026de29c0c626c4094e77f2f51daa74d69affed2eb0c78db0f4e", "old_text": "6. For protocols where 4-tuple load balancing is sufficient, it is straightforward to deliver ICMP packets from the network to the correct server, by reading the echoed IP and transport-layer headers to obtain the 4-tuple. When routing is based on connection ID, further measures are required, as most QUIC packets that trigger ICMP responses will only contain a client-generated connection ID that contains no routing information. To solve this problem, load balancers MAY maintain a mapping of Client IP and port to server ID based on recently observed packets. Alternatively, servers MAY implement the technique described in Section 14.4.1 of RFC9000 to increase the likelihood a Source Connection ID is included in ICMP responses to Path Maximum Transmission Unit (PMTU) probes. Load balancers MAY parse the echoed packet to extract the Source Connection ID, if it contains a QUIC long header, and extract the Server ID as if it were in a Destination CID. 7.", "comments": "Definitely an edge case, but the LB will spray stateless resets everywhere because the CID is random. If the LB tracks fourtuples, though, it shouldn't be a problem.\nMaybe make this part of a thing about stateless LBs.\nSo, what's the plan ?\nNAME raises an interesting point about stateless LBs in .", "new_text": "6. QUIC-LB requires no per-connection state at the load balancer. The load balancer can extract the server ID from the connection ID of each incoming packet and route that packet accordingly. However, once the routing decision has been made, the load balancer MAY associate the 4-tuple with the decision. This has two advantages: The load balancer only extracts the server ID once per incoming 4-tuple. When the CID is encrypted, this substantially reduces computational load. Incoming Stateless Reset packets and ICMP messages are easily routed to the correct origin server. 
In addition to the increased state requirements, however, load balancers cannot detect the CONNECTION_CLOSE frame that indicates the end of the connection, so they rely on a timeout to delete connection state. There are numerous considerations around setting such a timeout. In the event a connection ends, freeing an IP and port, and a different connection migrates to that IP and port before the timeout, the load balancer will misroute the different connection's packets to the original server. A short timeout limits the likelihood of such a misrouting. Furthermore, if a short timeout causes premature deletion of state, the routing is easily recoverable by decoding an incoming Connection ID. However, a short timeout also reduces the chance that an incoming Stateless Reset is correctly routed. Servers MAY implement the technique described in Section 14.4.1 of RFC9000 in case the load balancer is stateless, to increase the likelihood a Source Connection ID is included in ICMP responses to Path Maximum Transmission Unit (PMTU) probes. Load balancers MAY parse the echoed packet to extract the Source Connection ID, if it contains a QUIC long header, and extract the Server ID as if it were in a Destination CID. 7."}
{"id": "q-en-load-balancers-1961558cc4d476eabca90147ea740de7cf16882c4b5084a6622ff4d116f7c270", "old_text": "verification) or make sure there is common context for the server to verify the address using a service-generated token. There are two different mechanisms to allow offload of DoS mitigation to a trusted network service. One requires no shared state; the server need only be configured to trust a retry service, though this", "comments": "After thinking it through, various XOR schemes were extremely vulnerable to manipulation, so I went with a brute force approach.\nThe server now needs to include this field. For the no-shared state service, we just need language that the Retry service needs to encode enough information validate the packet DCID as well and drop if it fails validation. If it does, the server MUST use the packet DCID in the retrysourceconnection_id TP. For the shared state service, the retry source connection ID is going to have to be in the token. We might be able to compress this by xoring some fields; I'll think about it.", "new_text": "verification) or make sure there is common context for the server to verify the address using a service-generated token. The service must also communicate the source connection ID of the Retry packet to the server so that it can include it in a transport parameter for client verification. There are two different mechanisms to allow offload of DoS mitigation to a trusted network service. One requires no shared state; the server need only be configured to trust a retry service, though this"}
{"id": "q-en-load-balancers-1961558cc4d476eabca90147ea740de7cf16882c4b5084a6622ff4d116f7c270", "old_text": "triggering Initial packet. This is in cleartext to be readable for the server, but authenticated later in the token. Original Destination Connection ID: This also in cleartext and authenticated later. Opaque Data: This data MUST contain encrypted information that allows the retry service to validate the client's IP address, in accordance with the QUIC specification. It MUST also encode a secure hash of the original destination connection ID field to verify that this field has not been edited. Upon receipt of an Initial packet with a token that begins with '0', the retry service MUST validate the token in accordance with the QUIC specification. It must also verify that the secure hash of the Connect ID is correct. If incorrect, the token is invalid. In active mode, the service MUST issue Retry packets for all Client initial packets that contain no token, or a token that has the first bit set to '1'. It MUST NOT forward the packet to the server. The service MUST validate all tokens with the first bit set to '0'. If successful, the service MUST forward the packet with the token intact. If unsuccessful, it MUST drop the packet. Note that this scheme has a performance drawback. When the retry service is in active mode, clients with a token from a NEW_TOKEN", "comments": "After thinking it through, various XOR schemes were extremely vulnerable to manipulation, so I went with a brute force approach.\nThe server now needs to include this field. For the no-shared state service, we just need language that the Retry service needs to encode enough information validate the packet DCID as well and drop if it fails validation. If it does, the server MUST use the packet DCID in the retrysourceconnection_id TP. For the shared state service, the retry source connection ID is going to have to be in the token. 
We might be able to compress this by xoring some fields; I'll think about it.", "new_text": "triggering Initial packet. This is in cleartext to be readable for the server, but authenticated later in the token. RSCIL: The retry source connection ID length. Original Destination Connection ID: This also in cleartext and authenticated later. Retry Source Connection ID: This also in cleartext and authenticated later. Opaque Data: This data MUST contain encrypted information that allows the retry service to validate the client's IP address, in accordance with the QUIC specification. It MUST also provide a cryptographically secure means to validate the integrity of the entire token. Upon receipt of an Initial packet with a token that begins with '0', the retry service MUST validate the token in accordance with the QUIC specification. In active mode, the service MUST issue Retry packets for all Client initial packets that contain no token, or a token that has the first bit set to '1'. It MUST NOT forward the packet to the server. The service MUST validate all tokens with the first bit set to '0'. If successful, the service MUST forward the packet with the token intact. If unsuccessful, it MUST drop the packet. The Retry Service MAY send an Initial Packet containing a CONNECTION_CLOSE frame with the INVALID_TOKEN error code when dropping the packet. Note that this scheme has a performance drawback. When the retry service is in active mode, clients with a token from a NEW_TOKEN"}
{"id": "q-en-load-balancers-1961558cc4d476eabca90147ea740de7cf16882c4b5084a6622ff4d116f7c270", "old_text": "receives an Initial Packet with the first bit to '0', it is a Retry token and the server MUST NOT attempt to validate it. Instead, it MUST assume the address is validated and MUST extract the Original Destination Connection ID, assuming the format described in nss- service-requirements. 6.3.", "comments": "After thinking it through, various XOR schemes were extremely vulnerable to manipulation, so I went with a brute force approach.\nThe server now needs to include this field. For the no-shared state service, we just need language that the Retry service needs to encode enough information validate the packet DCID as well and drop if it fails validation. If it does, the server MUST use the packet DCID in the retrysourceconnection_id TP. For the shared state service, the retry source connection ID is going to have to be in the token. We might be able to compress this by xoring some fields; I'll think about it.", "new_text": "receives an Initial Packet with the first bit to '0', it is a Retry token and the server MUST NOT attempt to validate it. Instead, it MUST assume the address is validated and MUST extract the Original Destination Connection ID and Retry Source Connection ID, assuming the format described in nss-service-requirements. 6.3."}
{"id": "q-en-load-balancers-1961558cc4d476eabca90147ea740de7cf16882c4b5084a6622ff4d116f7c270", "old_text": "ODCIL: The original destination connection ID length. Tokens in NEW_TOKEN frames MUST set this field to zero. Original Destination Connection ID: This is copied from the field in the client Initial packet. Client IP Address: The source IP address from the triggering Initial packet. The client IP address is 16 octets. If an IPv4 address, the", "comments": "After thinking it through, various XOR schemes were extremely vulnerable to manipulation, so I went with a brute force approach.\nThe server now needs to include this field. For the no-shared state service, we just need language that the Retry service needs to encode enough information validate the packet DCID as well and drop if it fails validation. If it does, the server MUST use the packet DCID in the retrysourceconnection_id TP. For the shared state service, the retry source connection ID is going to have to be in the token. We might be able to compress this by xoring some fields; I'll think about it.", "new_text": "ODCIL: The original destination connection ID length. Tokens in NEW_TOKEN frames MUST set this field to zero. RSCIL: The retry source connection ID length. Tokens in NEW_TOKEN frames MUST set this field to zero. Original Destination Connection ID: The server or Retry Service copies this from the field in the client Initial packet. Retry Source Connection ID: The server or Retry service copies this from the Source Connection ID of the Retry packet. Client IP Address: The source IP address from the triggering Initial packet. The client IP address is 16 octets. If an IPv4 address, the"}
{"id": "q-en-load-balancers-1961558cc4d476eabca90147ea740de7cf16882c4b5084a6622ff4d116f7c270", "old_text": "retry tokens marked as being a few seconds in the future, due to possible clock synchronization issues. A server MUST NOT send a Retry packet in response to an Initial packet that contains a retry token. 7.", "comments": "After thinking it through, various XOR schemes were extremely vulnerable to manipulation, so I went with a brute force approach.\nThe server now needs to include this field. For the no-shared state service, we just need language that the Retry service needs to encode enough information validate the packet DCID as well and drop if it fails validation. If it does, the server MUST use the packet DCID in the retrysourceconnection_id TP. For the shared state service, the retry source connection ID is going to have to be in the token. We might be able to compress this by xoring some fields; I'll think about it.", "new_text": "retry tokens marked as being a few seconds in the future, due to possible clock synchronization issues. After decrypting the token, the server uses the corresponding fields to populate the original_destination_connection_id transport parameter, with a length equal to ODCIL, and the retry_source_connection_id transport parameter, with length equal to RSCIL. As discussed in QUIC-TRANSPORT, a server MUST NOT send a Retry packet in response to an Initial packet that contains a retry token. 7."}
{"id": "q-en-load-balancers-899c424042421151ce30d05eb862440423cf682903436ccf6c525e1774e8bf7a", "old_text": "4.4.2. Upon receipt of a QUIC packet that is not of type Initial or 0-RTT, the load balancer extracts as many of the earliest octets from the destination connection ID as necessary to match the nonce length. The server ID immediately follows. The load balancer decrypts the nonce and the server ID using the following three pass algorithm:", "comments": "I will probably delete the Stream Cipher algorithm entirely, but consider this an intermediate step.\nMerging to enable the next round of edits", "new_text": "4.4.2. Upon receipt of a QUIC packet, the load balancer extracts as many of the earliest octets from the destination connection ID as necessary to match the nonce length. The server ID immediately follows. The load balancer decrypts the nonce and the server ID using the following three pass algorithm:"}
{"id": "q-en-load-balancers-899c424042421151ce30d05eb862440423cf682903436ccf6c525e1774e8bf7a", "old_text": "encrypted-nonce) Pass 2: The load balancer decrypts the nonce octets using 128-bit AES Electronic Codebook (ECB) mode, much like QUIC header protection, using the server-id intermediate as \"nonce\" for this pass. The server-id intermediate octets are zero-padded to 16 octets. AES-ECB encrypts this padded server-id intermediate using its key to generate a mask which it applies to the encrypted", "comments": "I will probably delete the Stream Cipher algorithm entirely, but consider this an intermediate step.\nMerging to enable the next round of edits", "new_text": "encrypted-nonce) Pass 2: The load balancer decrypts the nonce octets using 128-bit AES ECB mode, using the server-id intermediate as \"nonce\" for this pass. The server-id intermediate octets are zero-padded to 16 octets. AES-ECB encrypts this padded server-id intermediate using its key to generate a mask which it applies to the encrypted"}
{"id": "q-en-load-balancers-899c424042421151ce30d05eb862440423cf682903436ccf6c525e1774e8bf7a", "old_text": "nonce = encrypted_nonce ^ AES-ECB(key, padded-server_id_intermediate) Pass 3: The load balancer decrypts the server ID using 128-bit AES Electronic Codebook (ECB) mode, much like QUIC header protection. The nonce octets are zero-padded to 16 octets. AES-ECB encrypts this nonce using its key to generate a mask which it applies to the intermediate server id. This provides the dercypted server ID. server_id = server_id_intermediate ^ AES-ECB(key, padded-nonce) This three pass algorithm is a simplified version of the FFX algorithm. It has the same connection ID length requirements as the Stream Cipher CID Algorithm, with the additional property that each encrypted nonce value depends on all server ID bits, and each", "comments": "I will probably delete the Stream Cipher algorithm entirely, but consider this an intermediate step.\nMerging to enable the next round of edits", "new_text": "nonce = encrypted_nonce ^ AES-ECB(key, padded-server_id_intermediate) Pass 3: The load balancer decrypts the server ID using 128-bit AES ECB mode. The nonce octets are zero-padded to 16 octets. AES-ECB encrypts this nonce using its key to generate a mask which it applies to the intermediate server id. This provides the decrypted server ID. server_id = server_id_intermediate ^ AES-ECB(key, padded-nonce) This three-pass algorithm is a simplified version of the FFX algorithm. It has the same connection ID length requirements as the Stream Cipher CID Algorithm, with the additional property that each encrypted nonce value depends on all server ID bits, and each"}
{"id": "q-en-load-balancers-899c424042421151ce30d05eb862440423cf682903436ccf6c525e1774e8bf7a", "old_text": "When generating a routable connection ID, the server writes arbitrary bits into its nonce octets, and its provided server ID into the server ID octets. Servers MAY opt to have a longer connection ID beyond the nonce and server ID. The nonce and additional bits MAY encode additional information, but SHOULD appear essentially random to observers. The server encrypts the server ID using exactly the algorithm as described in triple-stream-cipher-load-balancer-actions, performing", "comments": "I will probably delete the Stream Cipher algorithm entirely, but consider this an intermediate step.\nMerging to enable the next round of edits", "new_text": "When generating a routable connection ID, the server writes arbitrary bits into its nonce octets, and its provided server ID into the server ID octets. Servers MAY opt to have a longer connection ID beyond the nonce and server ID. The additional bits MAY encode additional information, but SHOULD appear essentially random to observers. If the decrypted nonce bits increase monotonically, that guarantees that nonces are not reused between connection IDs from the same server. The server encrypts the server ID using exactly the algorithm as described in triple-stream-cipher-load-balancer-actions, performing"}
{"id": "q-en-load-balancers-cceade5f6a96cd368704757a86428f03eb88903ed1f0e6060dfcf8c06a340628", "old_text": "4.2. The Obfuscated CID Algorithm makes an attempt to obscure the mapping of connections to servers to reduce linkability, while not requiring true encryption and decryption. The format is depicted in the figure below. 4.2.1. The configuration agent selects an arbitrary set of bits of the server connection ID that it will use to route to a given server, called the \"routing bits\". The number of bits MUST have enough entropy to have a different code point for each server, and SHOULD have enough entropy so that there are many codepoints for each server. The configuration agent MUST NOT select a routing mask with more than 136 routing bits set to 1, which allows for the first octet and up to 2 octets for server purposes in a maximum-length connection ID. The configuration agent selects a divisor that MUST be larger than the number of servers. It SHOULD be large enough to accommodate reasonable increases in the number of servers. The divisor MUST be an odd integer so certain addition operations do not always produce an even number. The configuration agent also assigns each server a \"modulus\", an integer between 0 and the divisor minus 1. These MUST be unique for each server, and SHOULD be distributed across the entire number space between zero and the divisor. 4.2.2. Upon receipt of a QUIC packet, the load balancer extracts the selected bits of the Server CID and expresses them as an unsigned integer of that length. The load balancer then divides the result by the chosen divisor. The modulus of this operation maps to the modulus for the destination server. Note that any Server CID that contains a server's modulus, plus an arbitrary integer multiple of the divisor, in the routing bits is routable to that server regardless of the contents of the non-routing bits. 
Outside observers that do not know the divisor or the routing bits will therefore have difficulty identifying that two Server CIDs route to the same server. Note also that not all Connection IDs are necessarily routable, as the computed modulus may not match one assigned to any server. These DCIDs are non-compliant as described above. 4.2.3. The server chooses a connection ID length. This MUST contain all of the routing bits and MUST be at least 9 octets to provide adequate entropy. When a server needs a new connection ID, it adds an arbitrary nonnegative integer multiple of the divisor to its modulus, without exceeding the maximum integer value implied by the number of routing bits. The choice of multiple should appear random within these constraints. The server encodes the result in the routing bits. It MAY put any other value into bits that used neither for routing nor config rotation. These bits SHOULD appear random to observers. 4.3. The Stream Cipher CID algorithm provides true cryptographic protection, rather than mere obfuscation, at the cost of additional per-packet processing at the load balancer to decrypt every incoming connection ID. The CID format is depicted below. 4.3.1. The configuration agent assigns a server ID to every server in its pool, and determines a server ID length (in octets) sufficiently large to encode all server IDs, including potential future servers.", "comments": "by popular demand in .\nAs discussed at IETF 108. Split from .\nThanks for filing this. I agree that this was the feedback, but I'll take it from the list to be sure.\nRemoving this seems fine to me.\nYes, do it.\nSeems fine.\nDo it.\nDo it.\nNick Harper concurred on the list\nYes, that works. Minor comment.", "new_text": "4.2. The Stream Cipher CID algorithm provides cryptographic protection at the cost of additional per-packet processing at the load balancer to decrypt every incoming connection ID. The CID format is depicted below. 4.2.1. 
The configuration agent assigns a server ID to every server in its pool, and determines a server ID length (in octets) sufficiently large to encode all server IDs, including potential future servers."}
{"id": "q-en-load-balancers-cceade5f6a96cd368704757a86428f03eb88903ed1f0e6060dfcf8c06a340628", "old_text": "MUST be at least 8 octets and no more than 16 octets. The nonce length and server ID length MUST sum to 19 or fewer octets. 4.3.2. Upon receipt of a QUIC packet, the load balancer extracts as many of the earliest octets from the destination connection ID as necessary", "comments": "by popular demand in .\nAs discussed at IETF 108. Split from .\nThanks for filing this. I agree that this was the feedback, but I'll take it from the list to be sure.\nRemoving this seems fine to me.\nYes, do it.\nSeems fine.\nDo it.\nDo it.\nNick Harper concurred on the list\nYes, that works. Minor comment.", "new_text": "MUST be at least 8 octets and no more than 16 octets. The nonce length and server ID length MUST sum to 19 or fewer octets. 4.2.2. Upon receipt of a QUIC packet, the load balancer extracts as many of the earliest octets from the destination connection ID as necessary"}
{"id": "q-en-load-balancers-cceade5f6a96cd368704757a86428f03eb88903ed1f0e6060dfcf8c06a340628", "old_text": "The output of the decryption is the server ID that the load balancer uses for routing. 4.3.3. When generating a routable connection ID, the server writes arbitrary bits into its nonce octets, and its provided server ID into the", "comments": "by popular demand in .\nAs discussed at IETF 108. Split from .\nThanks for filing this. I agree that this was the feedback, but I'll take it from the list to be sure.\nRemoving this seems fine to me.\nYes, do it.\nSeems fine.\nDo it.\nDo it.\nNick Harper concurred on the list\nYes, that works. Minor comment.", "new_text": "The output of the decryption is the server ID that the load balancer uses for routing. 4.2.3. When generating a routable connection ID, the server writes arbitrary bits into its nonce octets, and its provided server ID into the"}
{"id": "q-en-load-balancers-cceade5f6a96cd368704757a86428f03eb88903ed1f0e6060dfcf8c06a340628", "old_text": "described in stream-cipher-load-balancer-actions, performing the three passes in reverse order. 4.4. The Block Cipher CID Algorithm, by using a full 16 octets of plaintext and a 128-bit cipher, provides higher cryptographic", "comments": "by popular demand in .\nAs discussed at IETF 108. Split from .\nThanks for filing this. I agree that this was the feedback, but I'll take it from the list to be sure.\nRemoving this seems fine to me.\nYes, do it.\nSeems fine.\nDo it.\nDo it.\nNick Harper concurred on the list\nYes, that works. Minor comment.", "new_text": "described in stream-cipher-load-balancer-actions, performing the three passes in reverse order. 4.3. The Block Cipher CID Algorithm, by using a full 16 octets of plaintext and a 128-bit cipher, provides higher cryptographic"}
{"id": "q-en-load-balancers-cceade5f6a96cd368704757a86428f03eb88903ed1f0e6060dfcf8c06a340628", "old_text": "it also requires connection IDs of at least 17 octets, increasing overhead of client-to-server packets. 4.4.1. The configuration agent assigns a server ID to every server in its pool, and determines a server ID length (in octets) sufficiently", "comments": "by popular demand in .\nAs discussed at IETF 108. Split from .\nThanks for filing this. I agree that this was the feedback, but I'll take it from the list to be sure.\nRemoving this seems fine to me.\nYes, do it.\nSeems fine.\nDo it.\nDo it.\nNick Harper concurred on the list\nYes, that works. Minor comment.", "new_text": "it also requires connection IDs of at least 17 octets, increasing overhead of client-to-server packets. 4.3.1. The configuration agent assigns a server ID to every server in its pool, and determines a server ID length (in octets) sufficiently"}
{"id": "q-en-load-balancers-cceade5f6a96cd368704757a86428f03eb88903ed1f0e6060dfcf8c06a340628", "old_text": "The configuration agent also selects an 16-octet AES-ECB key to use for connection ID decryption. 4.4.2. Upon receipt of a QUIC packet, the load balancer reads the first octet to obtain the config rotation bits. It then decrypts the", "comments": "by popular demand in .\nAs discussed at IETF 108. Split from .\nThanks for filing this. I agree that this was the feedback, but I'll take it from the list to be sure.\nRemoving this seems fine to me.\nYes, do it.\nSeems fine.\nDo it.\nDo it.\nNick Harper concurred on the list\nYes, that works. Minor comment.", "new_text": "The configuration agent also selects an 16-octet AES-ECB key to use for connection ID decryption. 4.3.2. Upon receipt of a QUIC packet, the load balancer reads the first octet to obtain the config rotation bits. It then decrypts the"}
{"id": "q-en-load-balancers-cceade5f6a96cd368704757a86428f03eb88903ed1f0e6060dfcf8c06a340628", "old_text": "in that order. The load balancer uses the server ID octets for routing. 4.4.3. When generating a routable connection ID, the server MUST choose a connection ID length between 17 and 20 octets. The server writes its", "comments": "by popular demand in .\nAs discussed at IETF 108. Split from .\nThanks for filing this. I agree that this was the feedback, but I'll take it from the list to be sure.\nRemoving this seems fine to me.\nYes, do it.\nSeems fine.\nDo it.\nDo it.\nNick Harper concurred on the list\nYes, that works. Minor comment.", "new_text": "in that order. The load balancer uses the server ID octets for routing. 4.3.3. When generating a routable connection ID, the server MUST choose a connection ID length between 17 and 20 octets. The server writes its"}
{"id": "q-en-load-balancers-cceade5f6a96cd368704757a86428f03eb88903ed1f0e6060dfcf8c06a340628", "old_text": "routing bytes. The Server ID is unique to each server, and the routing bytes are global. For Obfuscated CID Routing, this consists of the Routing Bits, Divisor, and Modulus. The Modulus is unique to each server, but the others MUST be global. For Stream Cipher CID Routing, this consists of the Server ID, Server ID Length, Key, and Nonce Length. The Server ID is unique to each server, but the others MUST be global. The authentication token MUST", "comments": "by popular demand in .\nAs discussed at IETF 108. Split from .\nThanks for filing this. I agree that this was the feedback, but I'll take it from the list to be sure.\nRemoving this seems fine to me.\nYes, do it.\nSeems fine.\nDo it.\nDo it.\nNick Harper concurred on the list\nYes, that works. Minor comment.", "new_text": "routing bytes. The Server ID is unique to each server, and the routing bytes are global. For Stream Cipher CID Routing, this consists of the Server ID, Server ID Length, Key, and Nonce Length. The Server ID is unique to each server, but the others MUST be global. The authentication token MUST"}
{"id": "q-en-load-balancers-cceade5f6a96cd368704757a86428f03eb88903ed1f0e6060dfcf8c06a340628", "old_text": "map to the same server. It could then apply analytical techniques to try to obtain the server encoding. The Stream and Block Cipher CID algorithms provide robust entropy to making any sort of linkage. The Obfuscated CID obscures the mapping and prevents trivial brute-force attacks to determine the routing parameters, but does not provide robust protection against sophisticated attacks. Were this analysis to obtain the server encoding, then on-path observers might apply this analysis to correlating different client", "comments": "by popular demand in .\nAs discussed at IETF 108. Split from .\nThanks for filing this. I agree that this was the feedback, but I'll take it from the list to be sure.\nRemoving this seems fine to me.\nYes, do it.\nSeems fine.\nDo it.\nDo it.\nNick Harper concurred on the list\nYes, that works. Minor comment.", "new_text": "map to the same server. It could then apply analytical techniques to try to obtain the server encoding. The Stream and Block Cipher CID algorithms provide robust protection against any sort of linkage. The Plaintext CID algorithm makes no attempt to protect this encoding. Were this analysis to obtain the server encoding, then on-path observers might apply this analysis to correlating different client"}
{"id": "q-en-load-balancers-cceade5f6a96cd368704757a86428f03eb88903ed1f0e6060dfcf8c06a340628", "old_text": "To avoid this, the configuration agent SHOULD issue QUIC-LB configurations to mutually distrustful servers that have different keys (for the block cipher or stream cipher algorithms) or routing masks and divisors (for the obfuscated algorithm). The load balancers can distinguish these configurations by external IP address, or by assigning different values to the config rotation bits (config-rotation). Note that either solution has a privay impact; see multiple-configs. These techniques are not necessary for the plaintext algorithm, as it does not attempt to conceal the server ID.", "comments": "by popular demand in .\nAs discussed at IETF 108. Split from .\nThanks for filing this. I agree that this was the feedback, but I'll take it from the list to be sure.\nRemoving this seems fine to me.\nYes, do it.\nSeems fine.\nDo it.\nDo it.\nNick Harper concurred on the list\nYes, that works. Minor comment.", "new_text": "To avoid this, the configuration agent SHOULD issue QUIC-LB configurations to mutually distrustful servers that have different keys for encryption algorithms. The load balancers can distinguish these configurations by external IP address, or by assigning different values to the config rotation bits (config-rotation). Note that either solution has a privacy impact; see multiple-configs. These techniques are not necessary for the plaintext algorithm, as it does not attempt to conceal the server ID."}
{"id": "q-en-lxnm-5b6f81cfdb882a63b58a54f1d11990ce31f7a5aaf5496231969a8a85313b3039", "old_text": "6.3.2.1. This sub-tree defines the L2VPN service type, according to the several signalling options to exchange membership information between PEs of an L2VPN. The following signaling options are supported: Refers to the BGP control plane as described in RFC4761 and RFC6624. Refers to the BGP control plane as described in RFC7432 and RFC7209. Refers to LDP-signaled pseudowires RFC6074. Refers to L2TP-signaled pseudowires RFC6074. 6.3.2.2.", "comments": "Adding a description to signaling options to improve readability. I have pending to update L2TP after the other issue is solved URL\nThanks, Victor. Does the text reflect the latest version of signaling/discovery split?\nleaf l2tp-type { type identityref { base t-ldp-pwe-type; } description \"T-LDP PWE type.\"; }\nThe type should be taken from this registry: URL\nA new IANA-maintained module is created to fix this issue.", "new_text": "6.3.2.1. This sub-tree defines the L2VPN service type, according to the several signaling options to exchange membership information between PEs of an L2VPN. The following signaling options are supported: The service is a Multipoint VPLSs that use a BGP control plane as described in RFC4761 and RFC6624. For this L2VPN service, the model allows configuring the Route Distinguisher, the Route Targets, the PWE encapsulation type and the MTU configurations to allow MTU mismatch. The service is a Multipoint VPLSs that use also a BGP control plane but also includes the additional features and related parameters RFC7432 and RFC7209. The model allows the configuration of the EVPN service interface type, the local and remote VPWS Service Instance (VSI) RFC8214, the EVPN policies for handling MAC addresses and the Ethernet Segment Identifier (ESI) information. A Multipoint VPLSs that use a mesh of LDP-signaled Pseudowires RFC6074. 
The model allows configuring the T-LDP PWE type, MTU mismatch and the list of AC and PW bindings. The L2 VPN uses L2TP-signaled Pseudowires as described in RFC6074. The model allows configuring TBC. 6.3.2.2."}
{"id": "q-en-manifest-spec-ec27f01dcb4d81f13a96832401402d85a7dfadad432c245ba924981cdd246298", "old_text": "of the element and to ensure correct serialization for integrity and authenticity checks. 8.2. The Envelope contains each of the other primary constituent parts of", "comments": "adds a missing digest parameter corrects CDDL for suit-text removes dangling\nI have discovered this because the EAT build process pulls the SUIT CDDL in for validation using the cddl tool.\nNAME posted to the list the following: I think I found a little typo in the draft-ietf-suit-manifest-14 ... At the \"Appendix A. A. Full CDDL\" of the \"SUITSeverableMembersChoice\", the last two members are pointing to \"SUITCommand_Sequence\" are probably typos as following,\nIn draft-ietf-suit-manifest-04 In the example manifest image-digest parameter is set for component 0 in common sequence. The load sequence sets component-index to 1 and after directive-copy it specifies image-match condition. However, the image-digest parameter has not been set for component-index 1 causing the condition to fail as . Is it expected that parameters are implicitly copied from source component when copy-directive is executed? Should the image-digest be also specified for component-index 1 or perhaps for all components? Another observation in the example is that vendor-id and class-id parameters and corrsponding condition checks are specified for component 0, which seems to be the \"external storage\". Wouldn't it be better to specify the VID and CID globally for the entire device or for the component 1, which seems to be the one running the image? 
/ common / 3:h'a2024782814100814101045856880c0014a40150fa6b4a5 3d5ad5fdfbe9de663e4d41ffe02501492af1425695e48bf429b2d51f2ab45038202582 000112233445566778899aabbccddeeff0123456789abcdeffedcba98765432100e198 7d001f602f6' / { / components / 2:h'82814100814101' / [ [h'00'] , [h'01'] ] /, / common-sequence / 4:h'880c0014a40150fa6b4a53d5ad5fdfbe9d e663e4d41ffe02501492af1425695e48bf429b2d51f2ab450382025820001122334455 66778899aabbccddeeff0123456789abcdeffedcba98765432100e1987d001f602f6' / [ / directive-set-component-index / 12,0 , / directive-override-parameters / 20,{ / vendor-id / 1:h'fa6b4a53d5ad5fdfbe9de663e4d41ffe' / fa6b4a53-d5ad-5fdf- be9d-e663e4d41ffe /, / class-id / 2:h'1492af1425695e48bf429b2d51f2ab45' / 1492af14-2569-5e48-bf42-9b2d51f2ab45 /, / image-digest / 3:[ / algorithm-id / 2 / sha256 /, / digest-bytes / h'00112233445566778899aabbccddeeff0123456789abcdeffedcba9876543210' ], / image-size / 14:34768, } , / condition-vendor-identifier / 1,F6 / nil / , / condition-class-identifier / 2,F6 / nil / ] /, } /, (...) / load / 11:h'880c0113a1160016f603f6' / [ / directive-set-component-index / 12,1 , / directive-set-parameters / 19,{ / source-component / 22:0 / [h'00'] /, } , / directive-copy / 22,F6 / nil / , / condition-image-match / 3,F6 / nil / ] /,", "new_text": "of the element and to ensure correct serialization for integrity and authenticity checks. All CBOR maps in the Manifest and manifest envelope MUST be encoded with the canonical CBOR ordering as defined in RFC8949. 8.2. The Envelope contains each of the other primary constituent parts of"}
{"id": "q-en-manifest-spec-ec27f01dcb4d81f13a96832401402d85a7dfadad432c245ba924981cdd246298", "old_text": "MUST still contain the suit-authentication-wrapper element, but the content MUST be a list containing only the SUIT_Digest. 8.4. The manifest contains:", "comments": "adds a missing digest parameter corrects CDDL for suit-text removes dangling\nI have discovered this because the EAT build process pulls the SUIT CDDL in for validation using the cddl tool.\nNAME posted to the list the following: I think I found a little typo in the draft-ietf-suit-manifest-14 ... At the \"Appendix A. A. Full CDDL\" of the \"SUITSeverableMembersChoice\", the last two members are pointing to \"SUITCommand_Sequence\" are probably typos as following,\nIn draft-ietf-suit-manifest-04 In the example manifest image-digest parameter is set for component 0 in common sequence. The load sequence sets component-index to 1 and after directive-copy it specifies image-match condition. However, the image-digest parameter has not been set for component-index 1 causing the condition to fail as . Is it expected that parameters are implicitly copied from source component when copy-directive is executed? Should the image-digest be also specified for component-index 1 or perhaps for all components? Another observation in the example is that vendor-id and class-id parameters and corrsponding condition checks are specified for component 0, which seems to be the \"external storage\". Wouldn't it be better to specify the VID and CID globally for the entire device or for the component 1, which seems to be the one running the image? 
/ common / 3:h'a2024782814100814101045856880c0014a40150fa6b4a5 3d5ad5fdfbe9de663e4d41ffe02501492af1425695e48bf429b2d51f2ab45038202582 000112233445566778899aabbccddeeff0123456789abcdeffedcba98765432100e198 7d001f602f6' / { / components / 2:h'82814100814101' / [ [h'00'] , [h'01'] ] /, / common-sequence / 4:h'880c0014a40150fa6b4a53d5ad5fdfbe9d e663e4d41ffe02501492af1425695e48bf429b2d51f2ab450382025820001122334455 66778899aabbccddeeff0123456789abcdeffedcba98765432100e1987d001f602f6' / [ / directive-set-component-index / 12,0 , / directive-override-parameters / 20,{ / vendor-id / 1:h'fa6b4a53d5ad5fdfbe9de663e4d41ffe' / fa6b4a53-d5ad-5fdf- be9d-e663e4d41ffe /, / class-id / 2:h'1492af1425695e48bf429b2d51f2ab45' / 1492af14-2569-5e48-bf42-9b2d51f2ab45 /, / image-digest / 3:[ / algorithm-id / 2 / sha256 /, / digest-bytes / h'00112233445566778899aabbccddeeff0123456789abcdeffedcba9876543210' ], / image-size / 14:34768, } , / condition-vendor-identifier / 1,F6 / nil / , / condition-class-identifier / 2,F6 / nil / ] /, } /, (...) / load / 11:h'880c0113a1160016f603f6' / [ / directive-set-component-index / 12,1 , / directive-set-parameters / 19,{ / source-component / 22:0 / [h'00'] /, } , / directive-copy / 22,F6 / nil / , / condition-image-match / 3,F6 / nil / ] /,", "new_text": "MUST still contain the suit-authentication-wrapper element, but the content MUST be a list containing only the SUIT_Digest. The algorithms used in SUIT_Authentication are defined by the profiles declared in I-D.moran-suit-mti. 8.4. The manifest contains:"}
{"id": "q-en-mdns-ice-candidates-3eefd3b9871dfc894d55ee70f788535fae750aae9a3a30da2e810e1d7c906fa6", "old_text": "explanation for the overall connection rate drop is that hairpinning failed in some cases. One potential mitigation, as discussed in privacy, is to not conceal candidates created from RFC4941 IPv6 addresses. This permits connectivity even in large internal networks or where mDNS is disabled. Future versions of this document will include experimental data regarding this option. 5.2. As noted in description, ICE agents using the mDNS technique are", "comments": "Let's remove this sentence and add more information when we will have this additional experimental data\nThis area probably needs more experimentation in general. For instance, it is unclear how much we could partition such IPv6 addresses by origin like we ask for mDNS names. Given we already talk about this approach in the privacy section, these sentences do not feel mandatory to keep. No strong feelings either way though.\nI looked at the current text more closely and I see what you are saying, especially regarding different origins. We should probably talk about the separation of origins or browsing contexts more in the introduction though. Currently it only refers to the fingerprinting threat, which could be construed to simply mean a long-term identifier.\nI don't think we needed to nuke this whole section, just the future version of the document sentence.", "new_text": "explanation for the overall connection rate drop is that hairpinning failed in some cases. 5.2. As noted in description, ICE agents using the mDNS technique are"}
{"id": "q-en-mediatypes-e629ae3351752c24d479ea3dc313747ff14f535621de14359ac7ce57ec25b388", "old_text": "This document uses the Augmented BNF defined in RFC5234 and updated by RFC7405. 1.2. The OpenAPI Specification Media Types convey OpenAPI Specification (OAS) files as defined in oas for version 3.0.0 and above. Those files can be serialized in JSON or yaml. Since there are multiple OpenAPI Specification Specifications versions, those media- types support the \"version\" parameter. The following examples conveys the desire of a client to receive an OpenAPI Specification resource preferably in the following order: openapi 3.1 in yaml openapi 3.0 in yaml any openapi version in json 2. Security requirements for both media type and media type suffix registrations are discussed in Section 4.6 of MEDIATYPE. 3. This specification defines the following new Internet media types MEDIATYPE. 3.1. Type name: application", "comments": "introduce numbered sections for YAML NAME since each media type is defined in its IANA consideration, would it be ok for you if we just add a section for YAML? Feel free to PR with another proposal.\nNAME could you PTAL ? :)\nto be honest, i prefer definitions to have URIs and not just registrations. but if we have URIs, that's already a lot better than not having URIs! thanks!\nit's maybe just OCD (and because it's very useful for URL to have linkable references) but i think it's nice to have numbered sections for everything that's being defined in the spec. in this case, i suggest to have a numbered section for each of the four media types, and to have a brief explanation of what it identifies. that way, each definition is self-contained and can be easily referenced when necessary. NAME NAME", "new_text": "This document uses the Augmented BNF defined in RFC5234 and updated by RFC7405. 2. This section describes the information required to register the above media types. 2.1. 
The following information serves as the registration form for the \"application/yaml\" media type. Type name: application"}
{"id": "q-en-mediatypes-e629ae3351752c24d479ea3dc313747ff14f535621de14359ac7ce57ec25b388", "old_text": "Change controller: n/a 3.2. Type name: application", "comments": "introduce numbered sections for YAML NAME since each media type is defined in its IANA consideration, would it be ok for you if we just add a section for YAML? Feel free to PR with another proposal.\nNAME could you PTAL ? :)\nto be honest, i prefer definitions to have URIs and not just registrations. but if we have URIs, that's already a lot better than not having URIs! thanks!\nit's maybe just OCD (and because it's very useful for URL to have linkable references) but i think it's nice to have numbered sections for everything that's being defined in the spec. in this case, i suggest to have a numbered section for each of the four media types, and to have a brief explanation of what it identifies. that way, each definition is self-contained and can be easily referenced when necessary. NAME NAME", "new_text": "Change controller: n/a 2.2. The OpenAPI Specification Media Types convey OpenAPI document (OAS) files as defined in oas for version 3.0.0 and above. Those files can be serialized in JSON or yaml. Since there are multiple OpenAPI Specification versions, those media-types support the \"version\" parameter. The following examples conveys the desire of a client to receive an OpenAPI Specification resource preferably in the following order: openapi 3.1 in yaml openapi 3.0 in yaml any openapi version in json 2.2.1. The following information serves as the registration form for the \"application/openapi+json\" media type. Type name: application"}
{"id": "q-en-mediatypes-e629ae3351752c24d479ea3dc313747ff14f535621de14359ac7ce57ec25b388", "old_text": "Change controller: n/a 3.3. Type name: application", "comments": "introduce numbered sections for YAML NAME since each media type is defined in its IANA consideration, would it be ok for you if we just add a section for YAML? Feel free to PR with another proposal.\nNAME could you PTAL ? :)\nto be honest, i prefer definitions to have URIs and not just registrations. but if we have URIs, that's already a lot better than not having URIs! thanks!\nit's maybe just OCD (and because it's very useful for URL to have linkable references) but i think it's nice to have numbered sections for everything that's being defined in the spec. in this case, i suggest to have a numbered section for each of the four media types, and to have a brief explanation of what it identifies. that way, each definition is self-contained and can be easily referenced when necessary. NAME NAME", "new_text": "Change controller: n/a 2.2.2. The following information serves as the registration form for the \"application/openapi+yaml\" media type. Type name: application"}
{"id": "q-en-mediatypes-e629ae3351752c24d479ea3dc313747ff14f535621de14359ac7ce57ec25b388", "old_text": "Change controller: n/a 4. References 4.1. URIs [1] https://mailarchive.ietf.org/arch/browse/httpapi/ [2] https://github.com/ioggstream/draft-polli-rest-api-mediatypes ", "comments": "introduce numbered sections for YAML NAME since each media type is defined in its IANA consideration, would it be ok for you if we just add a section for YAML? Feel free to PR with another proposal.\nNAME could you PTAL ? :)\nto be honest, i prefer definitions to have URIs and not just registrations. but if we have URIs, that's already a lot better than not having URIs! thanks!\nit's maybe just OCD (and because it's very useful for URL to have linkable references) but i think it's nice to have numbered sections for everything that's being defined in the spec. in this case, i suggest to have a numbered section for each of the four media types, and to have a brief explanation of what it identifies. that way, each definition is self-contained and can be easily referenced when necessary. NAME NAME", "new_text": "Change controller: n/a 3. Security requirements for both media type and media type suffix registrations are discussed in Section 4.6 of MEDIATYPE. 4. This specification defines the following new Internet media types MEDIATYPE. IANA has updated the \"Media Types\" registry at https://www.iana.org/assignments/media-types [3] with the registration information provided below. 5. References 5.1. URIs [1] https://mailarchive.ietf.org/arch/browse/httpapi/ [2] https://github.com/ioggstream/draft-polli-rest-api-mediatypes [3] https://www.iana.org/assignments/media-types "}
{"id": "q-en-mls-architecture-5179c1163a32eea31c8037a3fd95516ec08246a5ab04a6f4a222ac213d763500", "old_text": "service, defines the types of credentials which may be used in a deployment and provides methods for: Issuing new credentials, Validating a credential against a reference identifier, and Validating whether or not two credentials represent the same client. A *Delivery Service*, described fully in delivery-service, provides methods for:", "comments": "WGLC comments on Operation Requirements section. Added several overlooked requirements\nDiscussed on 12/8 interim, and folks seemed fine with this. I.e., it's ready to merge.", "new_text": "service, defines the types of credentials which may be used in a deployment and provides methods for: Issuing new credentials with a relevant credential lifetime, Validating a credential against a reference identifier, Validating whether or not two credentials represent the same client, and Optionally revoking credentials which are no longer authorized. A *Delivery Service*, described fully in delivery-service, provides methods for:"}
{"id": "q-en-mls-architecture-5179c1163a32eea31c8037a3fd95516ec08246a5ab04a6f4a222ac213d763500", "old_text": "Delivering Welcome messages to new members of a group. Downloading KeyPackages for specific clients, and uploading new KeyPackages for a user's own clients. Additional services may or may not be required depending on the application design:", "comments": "WGLC comments on Operation Requirements section. Added several overlooked requirements\nDiscussed on 12/8 interim, and folks seemed fine with this. I.e., it's ready to merge.", "new_text": "Delivering Welcome messages to new members of a group. Uploading new KeyPackages for a user's own clients. Downloading KeyPackages for specific clients. Typically, KeyPackages are used once and consumed. Additional services may or may not be required depending on the application design:"}
{"id": "q-en-mls-architecture-5179c1163a32eea31c8037a3fd95516ec08246a5ab04a6f4a222ac213d763500", "old_text": "set of acceptable behavior in a group. These policies must be consistent between deployments for them to interoperate: A policy on when to send proposals and commits in plaintext instead of encrypted.", "comments": "WGLC comments on Operation Requirements section. Added several overlooked requirements\nDiscussed on 12/8 interim, and folks seemed fine with this. I.e., it's ready to merge.", "new_text": "set of acceptable behavior in a group. These policies must be consistent between deployments for them to interoperate: A policy on which ciphersuites are acceptable. A policy on any mandatory or forbidden MLS extensions. A policy on when to send proposals and commits in plaintext instead of encrypted."}
{"id": "q-en-mls-architecture-5179c1163a32eea31c8037a3fd95516ec08246a5ab04a6f4a222ac213d763500", "old_text": "When, and under what circumstances, a reinitialization proposal is allowed. When proposals from external senders are allowed. When external joiners are allowed. A policy for when two credentials represent the same client. Note that many credentials may be issued authenticating the same", "comments": "WGLC comments on Operation Requirements section. Added several overlooked requirements\nDiscussed on 12/8 interim, and folks seemed fine with this. I.e., it's ready to merge.", "new_text": "When, and under what circumstances, a reinitialization proposal is allowed. When proposals from external senders are allowed and how to authorize those proposals. When external joiners are allowed and how to authorize those external commits. Which other proposal types are allowed. A policy of when members should commit pending proposals in a group. A policy of how to protect and share the GroupInfo objects needed for external joins. A policy for when two credentials represent the same client. Note that many credentials may be issued authenticating the same"}
{"id": "q-en-mls-architecture-a28882e8ca395983f1f0ad87afdd3098b8078c24f60514cb761cbbe5f047d016", "old_text": "Finally, Clients should not be able to perform DoS attacks denial-of- service. ", "comments": "We need an IANA considerations section even if we make no requests of them. It's just part of the process and the section can be removed during the final publication phase.", "new_text": "Finally, Clients should not be able to perform DoS attacks denial-of- service. 5. This document makes no requests of IANA. "}
{"id": "q-en-mls-architecture-0dda54e89bd70dbe37f52dae7eaf39613f7d62df9303204d7e519c31dcbcd7b3", "old_text": "setting, HSMs and secure enclaves can be used to protect signature keys. 7. This document makes no requests of IANA.", "comments": "This PR adds a short paragraph referencing the various academic works that has gone into MLS. The references to the various works still need to be cleaned up, which I will do once the list is complete. In particular, each reference needs to be checked for correctness (eprint version vs. publicized version) and then included as informal reference in the yaml header of the document.\nIs there another RFC you can point to that links academic papers on the protocol? I'd expect TLS 1.3 to be a good example of this, but I don't see them do this. The references should be cited properly if we do include it (ie, added as citations at the top of the file)\nThe references are in the appendices. For instance URL The reader should refer to the following references for analysis of the TLS handshake: [DFGS15], [CHSV16], [DFGS16], [KW16], [Kraw16], [FGSW16], [LXZFH16], [FG17], and [BBK17].\nTLS 1.3 as cited by NAME was indeed the inspiration for this. The idea was to just have a small paragraph to point the reader in the right direction and then have the full bibliographic entry in the \"Informal References\" section. As noted above, I intend to properly format all the references once we agree on what references to include, as formatting them into the right yaml is a bit of a pain.\nThanks Konrad!", "new_text": "setting, HSMs and secure enclaves can be used to protect signature keys. 6.6. Various academic works have analyzed MLS and the different security guarantees it aims to provide. The security of large parts of the protocol has been analyzed by [BBN19] (draft 7), [ACDT21] (draft 11) and [AJM20] (draft 12). 
Individual components of various drafts of the MLS protocol have been analyzed in isolation and with differing adversarial models, for example, [BBR18], [ACDT19], [ACCKKMPPWY19], [AJM20] and [ACJM20] analyze the ratcheting tree as the sub-protocol of MLS that facilitates key agreement, while [BCK21] analyzes the key derivation paths in the ratchet tree and key schedule. Finally, [CHK19] analyzes the authentication and cross-group healing guarantees provided by MLS. 7. ACDT19: https://eprint.iacr.org/2019/1189 ACCKKMPPWY19: https://eprint.iacr.org/2019/1489 ACJM20: https://eprint.iacr.org/2020/752 AJM20: https://eprint.iacr.org/2020/1327 ACDT21: https://eprint.iacr.org/2021/1083 AHKM21: https://eprint.iacr.org/2021/1456 CHK19: https://eprint.iacr.org/2021/137 BCK21: https://eprint.iacr.org/2021/137 BBR18: https://hal.inria.fr/hal-02425247 BBN19: https://hal.laas.fr/INRIA/hal-02425229 8. This document makes no requests of IANA."}
{"id": "q-en-mls-protocol-166a72c4229ad77ca18a37855fcd1e45a78786a3d6ee8c483fc567000ba63a41", "old_text": "node for which this node has a private key * Decrypt the secret value for the parent of the copath node using the private key from the resolution node * Derive secret values for ancestors of that node using the KDF keyed with the decrypted secret Merge the updated secrets into the tree * Replace the public keys for nodes on the direct path with the received public keys * For", "comments": "Adding this check would be the first step in recognizing that an Update operation may have maliciously excluded a person from the group.\nThis would go better in the \"Ratchet Tree Updates\" section, right after \"Derive secret values for ancestors of that node using the KDF keyed with the decrypted secret\". That would align better with where in the code you want the checks to be. Other than that, LGTM.\nFixed\nThanks Michael !\nHere's the Weekly Digest for : - - Last week 4 issues were created. Of these, 1 issues have been closed and 3 issues are still open. :greenheart: , by :greenheart: , by :greenheart: , by :heart: , by :speaker: , by It received 4 comments. - - Last week, 9 pull requests were created, updated or merged. Last week, 8 pull requests were updated. :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by :yellowheart: , by Last week, 1 pull request was merged. :purpleheart: , by - - Last week there were 4 commits. :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by - - Last week there was 1 contributor. :bustinsilhouette: - - Last week there was 1 stargazer. :star: You are the star! :star2: - - Last week there were no releases. - - That's all for last week, please :eyes: Watch and :star: Star the repository to receive next weekly updates. 
:smiley: You can also .", "new_text": "node for which this node has a private key * Decrypt the secret value for the parent of the copath node using the private key from the resolution node * Derive secret values for ancestors of that node using the KDF keyed with the decrypted secret * The recipient SHOULD verify that the received public keys agree with the public keys derived from the new node_secret values Merge the updated secrets into the tree * Replace the public keys for nodes on the direct path with the received public keys * For"}
{"id": "q-en-mls-protocol-6e765a7cfe90668bd3353c9fc958f5da23854bdfc592c41cfaad2f8fe50d36cc", "old_text": "9.1. [[ OPEN ISSUE: Direct initialization is currently undefined. A client can create a group by initializing its own state to reflect a group including only itself, then adding the initial members. This has computation and communication complexity O(N log N) instead of the O(N) complexity of direct initialization. ]] 9.2.", "comments": "This is something we've been meaning to do for a while, and just haven't gotten around to. The Init message directly creates a group with a specified member list. The cost of group creation is O(N) DH operations for the sender, and O(1) for receivers.\nThis now has an OPEN ISSUE regarding the fact that we'll need to add PKE encryption for that message, as agreed out of band, let's merge this and add encryption in the next iteration.\nHere's the Weekly Digest for : This week, 8 issues were created. Of these, 4 issues have been closed and 4 issues are still open. :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :heart: , by :heart: , by :heart: , by :heart: , by The issue most discussed this week has been: :speaker: , by It received 9 comments. This week, no pull requests has been proposed by the users. This week, 4 users have contributed to this repository. They are , , , and . This week, no user has starred this repository. This week, there have been 4 commits in the repository. These are: :hammerandwrench: Merge pull request from rozbb/mlsplaintext-signature Multiple clarifications around Framing by :hammerandwrench: Merge pull request from rozbb/hpke-encrypt Clarify how HPKE ciphertexts are computed by :hammerandwrench: by :hammerand_wrench: by # RELEASES This week, no releases were published. That's all for this week, please watch :eyes: and star :star: to receive next weekly updates. :smiley:\nHere's the Weekly Digest for : - - Last week 5 issues were created. 
Of these, 1 issues have been closed and 4 issues are still open. :greenheart: , by :greenheart: , by :greenheart: , by :greenheart: , by :heart: , by - - Last week, 9 pull requests were created, updated or merged. Last week, 3 pull requests were opened. :greenheart: , by :greenheart: , by :greenheart: , by Last week, 1 pull request was updated. :yellowheart: , by Last week, 5 pull requests were merged. :purpleheart: , by :purpleheart: , by :purpleheart: , by :purpleheart: , by :purpleheart: , by - - Last week there were 8 commits. :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by :hammerandwrench: by - - Last week there were 2 contributors. :bustinsilhouette: :bustinsilhouette: - - Last week there was 1 stargazer. :star: You are the star! :star2: - - Last week there were no releases. - - That's all for last week, please :eyes: Watch and :star: Star the repository to receive next weekly updates. :smiley: You can also .\nWe have a long-standing OPEN ISSUE on creating a group with a single Init message instead of N Add messages, which would change the work of initialization from O(N log N) to O(N). I would propose roughly the following: Then the processing would be: Find the welcome that's for you Initialize a one-member tree from the Welcome Use the UserInitKeys to populate the leaves of the tree Use the DirectPath to populate the creator's direct path (as in Update)\nFixed by URL\nSimilarly to the for a new member, this is a PKE operation: the encryption of this message is missing from the description. This message needs to be encrypted to each client otherwise it leaks all public keys of the tree which would otherwise be protected under the common framing. A secondary question appears as a consequence. There is static key reuse for the leaves until they update, we designed HPKE so that it should be fine but I would still add an [OPEN ISSUE] for it.", "new_text": "9.1. 
A group can always be created by initializing a one-member group and adding members individually. For cases where the initial list of members is known, the Init message allows a group to be created more efficiently. The creator of the group constructs an Init message as follows: Fetch a UserInitKey for each member (including the creator) Identify a protocol version and cipher suite that is supported by all proposed members. Construct a ratchet tree with its leaves populated with the public keys and credentials from the UserInitKeys of the members, and all other nodes blank. Generate a fresh leaf key pair for the first leaf Compute its direct path in this ratchet tree Each member of the newly-created group initializes its state from the Init message as follows: Note the group ID, protocol version, and cipher suite in use Construct a ratchet tree as above Update the cached ratchet tree by replacing nodes in the direct path from the first leaf using the direct path Update the cached ratchet tree by replacing nodes in the direct path from the first leaf using the information contained in the \"path\" attribute The update secret for this interaction, used with an all-zero init secret to generate the first epoch secret, is the \"path_secret[i+1]\" derived from the \"path_secret[i]\" associated with the root node. The members learn the relevant path secrets by decrypting one of the encrypted path secrets in the DirectPath and working back to the root (as in normal DirectPath processing). [[ OPEN ISSUE: This approach leaks the initial contents of the tree to the Delivery Service, unlike the sequential-Add case. ]] [[ OPEN ISSUE: It might be desirable for the group creator to be able to \"pre-warm\" the tree, by providing values for some nodes not on its direct path. This would violate the tree invariant, so we would need to figure out what mitigations would be necessary. ]] 9.2."}
{"id": "q-en-mls-protocol-beda0eea6a340cdabd0c5d942dd91c0927e15bac1cc0063e839ef47abb1dc0f7", "old_text": "information about a client that any existing member can use to add this client to the group asynchronously. A ClientInitKey object specifies what ciphersuites a client supports, as well as providing public keys that the client can use for key derivation and signing. The client's identity key is intended to be stable throughout the lifetime of the group; there is no mechanism to change it. Init keys are intended to be used a very limited number of times, potentially once. (see init-key-reuse). ClientInitKeys also contain an identifier chosen by the client, which the client MUST assure uniquely identifies a given ClientInitKey object among the set of ClientInitKeys created by this client. The init_keys array MUST have the same length as the cipher_suites array, and each entry in the init_keys array MUST be a public key for the asymmetric encryption scheme defined in the cipher_suites array and used in the HPKE construction for TreeKEM. The whole structure is signed using the client's identity key. A ClientInitKey object with an invalid signature field MUST be considered malformed. The input to the signature computation comprises all of the fields except for the signature field. 8.", "comments": "", "new_text": "information about a client that any existing member can use to add this client to the group asynchronously. A ClientInitKey object specifies a ciphersuite that the client supports, as well as providing a public key that others can use for key agreement. 
The client's identity key is intended to be stable throughout the lifetime of the group; there is no mechanism to change it. Init keys are intended to be used a very limited number of times, ideally only once. (See init-key-reuse). Clients MAY generate and publish multiple ClientInitKey objects to support multiple ciphersuites, or to reduce the likelihood of init key reuse. ClientInitKeys contain an identifier chosen by the client, which the client MUST ensure uniquely identifies a given ClientInitKey object among the set of ClientInitKeys created by this client. The value for init_key MUST be a public key for the asymmetric encryption scheme defined by cipher_suite. The whole structure is signed using the client's identity key. A ClientInitKey object with an invalid signature field MUST be considered malformed. The input to the signature computation comprises all of the fields except for the signature field. 8."}
{"id": "q-en-mls-protocol-beda0eea6a340cdabd0c5d942dd91c0927e15bac1cc0063e839ef47abb1dc0f7", "old_text": "The creator of the group constructs an Init message as follows: Fetch a ClientInitKey for each member (including the creator) Identify a protocol version and cipher suite that is supported by all proposed members.", "comments": "", "new_text": "The creator of the group constructs an Init message as follows: Fetch one or more ClientInitKeys for each member (including the creator) Identify a protocol version and cipher suite that is supported by all proposed members."}
{"id": "q-en-mls-protocol-beda0eea6a340cdabd0c5d942dd91c0927e15bac1cc0063e839ef47abb1dc0f7", "old_text": "needs to initialize a GroupContext object that can be updated to the current state using the Add message. This information is encrypted for the new member using HPKE. The recipient key pair for the HPKE encryption is the one included in the indicated ClientInitKey, corresponding to the indicated ciphersuite. The "add_key_nonce" field contains the key and nonce used to encrypt the corresponding Add message; if it is not encrypted, then this field MUST be set to the null optional value. In the description of the tree as a list of nodes, the "credential" field for a node MUST be populated if and only if that node is a leaf", "comments": "", "new_text": "needs to initialize a GroupContext object that can be updated to the current state using the Add message. This information is encrypted for the new member using HPKE. The recipient key pair for the HPKE encryption is the one included in the indicated ClientInitKey. The "add_key_nonce" field contains the key and nonce used to encrypt the corresponding Add message; if it is not encrypted, then this field MUST be set to the null optional value. In the description of the tree as a list of nodes, the "credential" field for a node MUST be populated if and only if that node is a leaf"}
{"id": "q-en-mls-protocol-beda0eea6a340cdabd0c5d942dd91c0927e15bac1cc0063e839ef47abb1dc0f7", "old_text": "direct path of the new node Set the leaf node in the tree at position "index" to a new node containing the public key from the ClientInitKey in the Add corresponding to the ciphersuite in use, as well as the credential under which the ClientInitKey was signed The "update_secret" resulting from this change is an all-zero octet string of length Hash.length.", "comments": "", "new_text": "direct path of the new node Set the leaf node in the tree at position "index" to a new node containing the public key from the ClientInitKey in the Add, as well as the credential under which the ClientInitKey was signed The "update_secret" resulting from this change is an all-zero octet string of length Hash.length."}
{"id": "q-en-mls-protocol-0770df6073d55cd6e3490c9d9165f593ef12812cde0decbc2c7662a78e631d4e", "old_text": "A credential (only for leaf nodes) The conditions under which each of these values must or must not be present are laid out in views.", "comments": "It seems like there's a major thing missing here, namely how the sender of a Commit transmits the new node signatures to the other members of the group. We need something like that before this lands. The most obvious approach would be to add a field to DirectPathNode, as you have with RatchetNode.\nIt would also be handy to include in the intermediate node an indication of which leaf signed it. Otherwise you have to do \"trial verifications\" (cf. trial decryption), which is expensive.\nWas this addressed? it's mentioned that the signature should be transmitted in \"Synchronizing Views of the Tree\" but there's no signature in the transmitted DirectPathNode\nNAME changes the strategy for signing the tree, everything should be sorted out there.", "new_text": "A credential (only for leaf nodes) A signature over the content of the node The conditions under which each of these values must or must not be present are laid out in views."}
{"id": "q-en-mls-protocol-0770df6073d55cd6e3490c9d9165f593ef12812cde0decbc2c7662a78e631d4e", "old_text": "Zero or more encrypted copies of the path secret corresponding to the node The path secret value for a given node is encrypted for the subtree corresponding to the parent's non-updated child, i.e., the child on the copath of the leaf node. There is one encrypted path secret for", "comments": "It seems like there's a major thing missing here, namely how the sender of a Commit transmits the new node signatures to the other members of the group. We need something like that before this lands. The most obvious approach would be to add a field to DirectPathNode, as you have with RatchetNode.\nIt would also be handy to include in the intermediate node an indication of which leaf signed it. Otherwise you have to do \"trial verifications\" (cf. trial decryption), which is expensive.\nWas this addressed? it's mentioned that the signature should be transmitted in \"Synchronizing Views of the Tree\" but there's no signature in the transmitted DirectPathNode\nNAME changes the strategy for signing the tree, everything should be sorted out there.", "new_text": "Zero or more encrypted copies of the path secret corresponding to the node A signature over the node content The path secret value for a given node is encrypted for the subtree corresponding to the parent's non-updated child, i.e., the child on the copath of the leaf node. There is one encrypted path secret for"}
{"id": "q-en-mls-protocol-0770df6073d55cd6e3490c9d9165f593ef12812cde0decbc2c7662a78e631d4e", "old_text": "ratchet tree and the members' ClientInitKeys. The hash of a tree is the hash of its root node, which we define recursively, starting with the leaves. The hash of a leaf node is the hash of a "LeafNodeHashInput" object: The content within the leaf of a ratchet tree is composed of a "ClientInitKey" when the leaf is populated. The "info" field is equal to the null optional value when the leaf is blank (i.e., no member occupies that leaf). The intermediate nodes contain less information, the hash of a parent node (including the root) is the hash of a "ParentNodeHashInput" struct: The "left_hash" and "right_hash" fields hold the hashes of the node's left and right children, respectively. The "public_key" field holds the hash of the public key stored at this node, represented as an "optional" object, which is null if and only if the node is blank. 7.4.", "comments": "It seems like there's a major thing missing here, namely how the sender of a Commit transmits the new node signatures to the other members of the group. We need something like that before this lands. The most obvious approach would be to add a field to DirectPathNode, as you have with RatchetNode.\nIt would also be handy to include in the intermediate node an indication of which leaf signed it. Otherwise you have to do \"trial verifications\" (cf. trial decryption), which is expensive.\nWas this addressed? it's mentioned that the signature should be transmitted in \"Synchronizing Views of the Tree\" but there's no signature in the transmitted DirectPathNode\nNAME changes the strategy for signing the tree, everything should be sorted out there.", "new_text": "ratchet tree and the members' ClientInitKeys. The hash of a tree is the hash of its root node, which we define recursively, starting with the leaves. While hashes at the nodes are used to check the integrity of the subtrees, signatures are required to provide authentication and group agreement. Signatures are especially important in the case of newcomers and MUST be verified when joining. All nodes in the tree MUST be signed. 
Elements of the ratchet tree are called "RatchetNode" objects and optionally contain a "ClientInitKey" at the leaves or an optional "ParentNode" above. When computing the hash of a parent node AB, the "ParentNodeHash" structure is used: The "left_hash" and "right_hash" fields hold the hashes of the node's left (A) and right (B) children, respectively. The signature within the "ParentNode" is computed over its prefix within the serialized "ParentNodeHash" struct to cover all information about the sub-tree. The "committer_index" is required for a member to determine the signing key needed to perform the signature verification. The hash of a leaf node is the hash of a "LeafNodeHash" object: Note that unlike a ParentNode, a ClientInitKey already contains a signature. 7.4."}
{"id": "q-en-mls-protocol-0770df6073d55cd6e3490c9d9165f593ef12812cde0decbc2c7662a78e631d4e", "old_text": "the \"tree\" array divided by two. Construct a new group state using the information in the GroupInfo object. The new member's position in the tree is \"index\", as defined above. Identify the lowest node at which the direct paths from \"index\" and \"signer_index\" overlap. Set private keys for that node and", "comments": "It seems like there's a major thing missing here, namely how the sender of a Commit transmits the new node signatures to the other members of the group. We need something like that before this lands. The most obvious approach would be to add a field to DirectPathNode, as you have with RatchetNode.\nIt would also be handy to include in the intermediate node an indication of which leaf signed it. Otherwise you have to do \"trial verifications\" (cf. trial decryption), which is expensive.\nWas this addressed? it's mentioned that the signature should be transmitted in \"Synchronizing Views of the Tree\" but there's no signature in the transmitted DirectPathNode\nNAME changes the strategy for signing the tree, everything should be sorted out there.", "new_text": "the \"tree\" array divided by two. Construct a new group state using the information in the GroupInfo object and verify the integrity of the tree using the \"tree_hash\". The new member's position in the tree is \"index\", as defined above. Identify the lowest node at which the direct paths from \"index\" and \"signer_index\" overlap. Set private keys for that node and"}
{"id": "q-en-mls-protocol-71f5a3575723a39850865f314dbfde2234aec87f952260d69d13043027e6ffbf", "old_text": "An AEAD encryption algorithm The ciphersuite's Diffie-Hellman group is used to instantiate an HPKE I-D.irtf-cfrg-hpke instance for the purpose of public-key encryption. The ciphersuite must specify an algorithm \"Derive-Key-Pair\" that maps", "comments": "It seems like we have a few separable issues here: Whether we require a single signature scheme for the whole group Whether signature schemes are included in the ciphersuite Whether we have an MTI ciphersuite / signature algorithm What new ciphersuites should be defined, and what algorithms should go in them Let's discuss this at the interim and maybe break this PR into a few bits\nNote that specification required registries imply that there is a designated expert pool. I will add some text to create one. The text is very similar to the TLS DE pool text from with some tweaks that, hopefully, address the misunderstandings uncovered during non-standards track registry requests. Note the process is: (assuming that the IESG eventually approves this draft) When the IESG approves this draft, the responsible AD assigns some DEs (like 2-3 suggested by the WG chairs). These DEs are in charge of all registrations regardless of whether the draft comes from the WG or elsewhere as all requests go to the MLS DEs mailing list.\nThis seems to accurately reflect the consensus we had at the last interim, and there's nothing I can't live with otherwise. I will probably file a follow-up to convert the Derive-Key-Pair operations to use HKDF directly, as discussed, avoiding the need for intermediate values or truncation of SHA-512.", "new_text": "An AEAD encryption algorithm A signature algorithm The ciphersuite's Diffie-Hellman group is used to instantiate an HPKE I-D.irtf-cfrg-hpke instance for the purpose of public-key encryption. The ciphersuite must specify an algorithm \"Derive-Key-Pair\" that maps"}
{"id": "q-en-mls-protocol-71f5a3575723a39850865f314dbfde2234aec87f952260d69d13043027e6ffbf", "old_text": "Ciphersuites are represented with the CipherSuite type. HPKE public keys are opaque values in a format defined by the underlying Diffie- Hellman protocol (see the Ciphersuites section of the HPKE specification for more information): 6.1.1. This ciphersuite uses the following primitives: Hash function: SHA-256 AEAD: AES-128-GCM When HPKE is used with this ciphersuite, it uses the following algorithms: KEM: 0x0002 = DHKEM(Curve25519) KDF: 0x0001 = HKDF-SHA256 AEAD: 0x0001 = AES-GCM-128 Given an octet string X, the private key produced by the Derive-Key- Pair operation is SHA-256(X). (Recall that any 32-octet string is a valid Curve25519 private key.) The corresponding public key is X25519(SHA-256(X), 9). Implementations SHOULD use the approach specified in RFC7748 to", "comments": "It seems like we have a few separable issues here: Whether we require a single signature scheme for the whole group Whether signature schemes are included in the ciphersuite Whether we have an MTI ciphersuite / signature algorithm What new ciphersuites should be defined, and what algorithms should go in them Let's discuss this at the interim and maybe break this PR into a few bits\nNote that specification required registries imply that there is a designated expert pool. I will add some text to create one. The text is very similar to the TLS DE pool text from with some tweaks that, hopefully, address the misunderstandings uncovered during non-standards track registry requests. Note the process is: (assuming that the IESG eventually approves this draft) When the IESG approves this draft, the responsible AD assigns some DEs (like 2-3 suggested by the WG chairs). These DEs are in charge of all registrations regardless of whether the draft comes from the WG or elsewhere as all requests go to the MLS DEs mailing list.\nThis seems to accurately reflect the consensus we had at the last interim, and there's nothing I can't live with otherwise. I will probably file a follow-up to convert the Derive-Key-Pair operations to use HKDF directly, as discussed, avoiding the need for intermediate values or truncation of SHA-512.", "new_text": "Ciphersuites are represented with the CipherSuite type. HPKE public keys are opaque values in a format defined by the underlying Diffie- Hellman protocol (see the Ciphersuites section of the HPKE specification for more information). The signature algorithm specified in the ciphersuite is the mandatory algorithm to be used for the signatures in MLSPlaintext and the tree signatures. It can be different from the signature algorithm specified in the credential field of ClientInitKeys. The ciphersuites are defined in section mls-ciphersuites. 6.1.1. Depending on the Diffie-Hellman group of the ciphersuite, different rules apply to private key derivation and public key verification. 6.1.1.1. Given an octet string X, the private key produced by the Derive-Key- Pair operation is SHA-256(X) (Recall that any 32-octet string is a valid X25519 private key). The corresponding public key is X25519(SHA-256(X), 9). Implementations SHOULD use the approach specified in RFC7748 to"}
{"id": "q-en-mls-protocol-71f5a3575723a39850865f314dbfde2234aec87f952260d69d13043027e6ffbf", "old_text": "curves, they SHOULD perform the additional checks specified in Section 7 of 6.1.2. This ciphersuite uses the following primitives: Hash function: SHA-256 AEAD: AES-128-GCM When HPKE is used with this ciphersuite, it uses the following algorithms: KEM: 0x0001 = DHKEM(P-256) KDF: 0x0001 = HKDF-SHA256 AEAD: 0x0001 = AES-GCM-128 Given an octet string X, the private key produced by the Derive-Key- Pair operation is SHA-256(X), interpreted as a big-endian integer.", "comments": "It seems like we have a few separable issues here: Whether we require a single signature scheme for the whole group Whether signature schemes are included in the ciphersuite Whether we have an MTI ciphersuite / signature algorithm What new ciphersuites should be defined, and what algorithms should go in them Let's discuss this at the interim and maybe break this PR into a few bits\nNote that specification required registries imply that there is a designated expert pool. I will add some text to create one. The text is very similar to the TLS DE pool text from with some tweaks that, hopefully, address the misunderstandings uncovered during non-standards track registry requests. Note the process is: (assuming that the IESG eventually approves this draft) When the IESG approves this draft, the responsible AD assigns some DEs (like 2-3 suggested by the WG chairs). These DEs are in charge of all registrations regardless of whether the draft comes from the WG or elsewhere as all requests go to the MLS DEs mailing list.\nThis seems to accurately reflect the consensus we had at the last interim, and there's nothing I can't live with otherwise. I will probably file a follow-up to convert the Derive-Key-Pair operations to use HKDF directly, as discussed, avoiding the need for intermediate values or truncation of SHA-512.", "new_text": "curves, they SHOULD perform the additional checks specified in Section 7 of 6.1.1.2. Given an octet string X, the private key produced by the Derive-Key- Pair operation is SHA-512(X) truncated to 448 bits (Recall that any 56-octet string is a valid X448 private key). The corresponding public key is X448(SHA-512(X), 5). Implementations SHOULD use the approach specified in RFC7748 to calculate the Diffie-Hellman shared secret. Implementations MUST check whether the computed Diffie-Hellman shared secret is the all- zero value and abort if so, as described in Section 6 of RFC7748. If implementers use an alternative implementation of these elliptic curves, they SHOULD perform the additional checks specified in Section 7 of 6.1.1.3. Given an octet string X, the private key produced by the Derive-Key- Pair operation is SHA-256(X), interpreted as a big-endian integer."}
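The Derive-Key-Pair private-key derivations described in these records (SHA-256(X) for X25519; SHA-512(X) truncated to 448 bits for X448) can be sketched with the standard library alone. Computing the corresponding public keys (X25519(k, 9) and X448(k, 5)) requires a curve implementation and is omitted; the function names here are illustrative, not from the draft.

```python
import hashlib

def derive_private_x25519(X: bytes) -> bytes:
    # Private key is SHA-256(X); any 32-octet string is a
    # valid X25519 private key.
    return hashlib.sha256(X).digest()

def derive_private_x448(X: bytes) -> bytes:
    # Private key is SHA-512(X) truncated to 448 bits (56 octets);
    # any 56-octet string is a valid X448 private key.
    return hashlib.sha512(X).digest()[:56]
```

Per RFC 7748, the shared-secret computation must additionally reject an all-zero output.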
{"id": "q-en-mls-protocol-71f5a3575723a39850865f314dbfde2234aec87f952260d69d13043027e6ffbf", "old_text": "14. This document requests the creation of the following new IANA registries: MLS Ciphersuites All of these registries should be under a heading of \"Message Layer Security\", and administered under a Specification Required policy RFC8126. 14.1. The \"MLS Ciphersuites\" registry lists identifiers for suites of cryptographic algorithms defined for use with MLS. These are two- byte values, so the maximum possible value is 0xFFFF = 65535. Values in the range 0xF000 - 0xFFFF are reserved for vendor-internal usage. Template: Value: The two-byte identifier for the ciphersuite Name: The name of the ciphersuite Reference: Where this algorithm is defined The initial contents for this registry are as follows: [[ Note to RFC Editor: Please replace \"XXXX\" above with the number assigned to this RFC. ]] ", "comments": "It seems like we have a few separable issues here: Whether we require a single signature scheme for the whole group Whether signature schemes are included in the ciphersuite Whether we have an MTI ciphersuite / signature algorithm What new ciphersuites should be defined, and what algorithms should go in them Let's discuss this at the interim and maybe break this PR into a few bits\nNote that specification required registries imply that there is a designated expert pool. I will add some text to create one. The text is very similar to the TLS DE pool text from with some tweaks that, hopefully, address the misunderstandings uncovered during non-standards track registry requests. Note the process is: (assuming that the IESG eventually approves this draft) When the IESG approves this draft, the responsible AD assigns some DEs (like 2-3 suggested by the WG chairs). These DEs are in charge of all registrations regardless of whether the draft comes from the WG or elsewhere as all requests go to the MLS DEs mailing list.\nThis seems to accurately reflect the consensus we had at the last interim, and there's nothing I can't live with otherwise. I will probably file a follow-up to convert the Derive-Key-Pair operations to use HKDF directly, as discussed, avoiding the need for intermediate values or truncation of SHA-512.", "new_text": "14. This document requests the creation of the following new IANA registries: MLS Ciphersuites (mls-ciphersuites). All of these registries should be under a heading of \"Message Layer Security\", and assignments are made via the Specification Required policy RFC8126. See de for additional information about the MLS Designated Experts (DEs). 14.1. A ciphersuite is a combination of a protocol version and the set of cryptographic algorithms that should be used. Ciphersuite names follow the naming convention: Where VALUE is represented as two 8-bit octets: This specification defines the following ciphersuites for use with MLS 1.0. The KEM/DEM constructions used for HPKE are defined by HPKE. The corresponding AEAD algorithms AEAD_AES_128_GCM and AEAD_AES_256_GCM are defined in RFC5116. AEAD_CHACHA20_POLY1305 is defined in RFC7539. The corresponding hash algorithms are defined in SHS. It is advisable to keep the number of ciphersuites low to increase the chances that clients can interoperate in a federated environment; therefore the ciphersuites only include modern, yet well-established algorithms. Depending on their requirements, clients can choose between two security levels (roughly 128-bit and 256-bit). Within the security levels, clients can choose between the faster X25519/X448 curves and FIPS 140-2 compliant curves for Diffie-Hellman key negotiations. Additionally, clients that run predominantly on mobile processors can choose ChaCha20Poly1305 over AES-GCM for performance reasons. Since ChaCha20Poly1305 is not listed by FIPS 140-2, it is not paired with FIPS 140-2 compliant curves. The security level of symmetric encryption algorithms and hash functions is paired with the security level of the curves. The mandatory-to-implement ciphersuite for MLS 1.0 is \"MLS10_128_HPKE25519_AES128GCM_SHA256_Ed25519\", which uses Curve25519, HKDF over SHA2-256 and AES-128-GCM for HPKE, and AES-128-GCM with Ed25519 for symmetric encryption and signatures. Values with the first byte 255 (decimal) are reserved for Private Use. New ciphersuite values are assigned by IANA as described in iana-considerations. 14.2. [[ OPEN ISSUE: pick DE mailing address. Maybe mls-des@ or mls-de-pool. ]] Specification Required RFC8126 registry requests are registered after a three-week review period on the MLS DEs' mailing list: TBD@ietf.org [1], on the advice of one or more of the MLS DEs. However, to allow for the allocation of values prior to publication, the MLS DEs may approve registration once they are satisfied that such a specification will be published. Registration requests sent to the MLS DEs mailing list for review SHOULD use an appropriate subject (e.g., \"Request to register value in MLS Bar registry\"). Within the review period, the MLS DEs will either approve or deny the registration request, communicating this decision to the MLS DEs mailing list and IANA. Denials SHOULD include an explanation and, if applicable, suggestions as to how to make the request successful. Registration requests that are undetermined for a period longer than 21 days can be brought to the IESG's attention for resolution using the iesg@ietf.org [2] mailing list. Criteria that SHOULD be applied by the MLS DEs include determining whether the proposed registration duplicates existing functionality, whether it is likely to be of general applicability or useful only for a single application, and whether the registration description is clear. For example, the MLS DEs will apply the ciphersuite-related advisory found in ciphersuites. IANA MUST only accept registry updates from the MLS DEs and SHOULD direct all requests for registration to the MLS DEs' mailing list. It is suggested that multiple MLS DEs be appointed who are able to represent the perspectives of different applications using this specification, in order to enable broadly informed review of registration decisions. In cases where a registration decision could be perceived as creating a conflict of interest for a particular MLS DE, that MLS DE SHOULD defer to the judgment of the other MLS DEs. 15. References 15.1. URIs [1] mailto:TBD@ietf.org [2] mailto:iesg@ietf.org "}
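The registry rules in this record state that ciphersuite values are two 8-bit octets and that values whose first byte is 255 (decimal) are reserved for Private Use. A minimal sketch of that check (the function name is illustrative, not part of the draft):

```python
def is_private_use(ciphersuite_value: int) -> bool:
    # Ciphersuite values are two 8-bit octets; values whose first
    # (high) octet is 255 decimal (0xFF) are reserved for Private Use.
    return (ciphersuite_value >> 8) & 0xFF == 0xFF
```

So a registry would accept IANA-assigned values like 0x0001 but treat anything in 0xFF00-0xFFFF as locally defined.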
{"id": "q-en-mls-protocol-ef143ef6b440d2ab9d0c3d1baf5aba9e5d67dc469dc472111b6db889c669d26d", "old_text": "D.ietf-trans-rfc6962-bis.) The _direct path_ of a root is the empty list, and of any other node is the concatenation of that node with the direct path of its parent. The _copath_ of a node is the list of siblings of nodes in its direct path. The _frontier_ of a tree is the list of heads of the maximal full subtrees of the tree, ordered from left to right. For example, in the below tree: The direct path of C is (C, CD, ABCD) The copath of C is (D, AB, EFG)", "comments": "You can also strike this sentence further down: \" In particular, for the leaf node, there are no encrypted secrets, since a leaf node has no children.\"\nNote that this severs the connection between the HPKE key in the leaf and the parent of the leaf. Whereas before, the leaf HPKE key was derived from a path secret, and the parent HPKE key was derived from the next path secret. Now the parent path secret is set directly, and the leaf HPKE key is set completely independently, with no dependency on a path secret. So we're losing a bit of internal structure in the tree. But this seems OK because that structure was never observable to the group -- only the holder of the leaf knows the leaf path secret, so nobody else could verify that the leaf was related to its parent.", "new_text": "D.ietf-trans-rfc6962-bis.) The _direct path_ of a root is the empty list, and of any other node is the concatenation of that node's parent along with the parent's direct path. The _copath_ of a node is the node's sibling concatenated with the list of siblings of all the nodes in its direct path. The _frontier_ of a tree is the list of heads of the maximal full subtrees of the tree, ordered from left to right. For example, in the below tree: The direct path of C is (CD, ABCD, ABCDEFG) The copath of C is (D, AB, EFG)"}
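The direct-path and copath definitions in this record can be checked with array-based tree math in the style of the draft's tree-math conventions (a sketch; the exact helper names are illustrative). For the seven-leaf example tree ABCDEFG, C sits at node index 4; its direct path should come out as (CD, ABCD, ABCDEFG) = nodes (5, 3, 7) and its copath as (D, AB, EFG) = nodes (6, 1, 11).

```python
def node_width(n):        # number of nodes in a tree with n leaves
    return 2 * n - 1

def level(x):             # height of node x (leaves are level 0)
    k = 0
    while (x >> k) & 1 == 1:
        k += 1
    return k

def root(n):
    w = node_width(n)
    return (1 << (w.bit_length() - 1)) - 1

def parent_step(x):       # parent within a complete (possibly oversized) tree
    k = level(x)
    b = (x >> (k + 1)) & 1
    return (x | (1 << k)) ^ (b << (k + 1))

def parent(x, n):         # parent of a non-root node in a tree with n leaves
    p = parent_step(x)
    while p >= node_width(n):
        p = parent_step(p)
    return p

def left(x):
    return x ^ (0b01 << (level(x) - 1))

def right(x, n):
    r = x ^ (0b11 << (level(x) - 1))
    while r >= node_width(n):
        r = left(r)
    return r

def sibling(x, n):
    p = parent(x, n)
    return right(p, n) if x < p else left(p)

def direct_path(x, n):    # parent, grandparent, ..., root (per this record)
    path, r = [], root(n)
    while x != r:
        x = parent(x, n)
        path.append(x)
    return path

def copath(x, n):         # x's sibling, then siblings along the direct path
    return [sibling(x, n)] + [sibling(p, n) for p in direct_path(x, n)[:-1]]
```

Note the root is excluded from the copath since it has no sibling, matching the three-element copath in the example.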
{"id": "q-en-mls-protocol-ef143ef6b440d2ab9d0c3d1baf5aba9e5d67dc469dc472111b6db889c669d26d", "old_text": "The path secret value for a given node is encrypted for the subtree corresponding to the parent's non-updated child, that is, the child on the copath of the leaf node. There is one encrypted path secret for each public key in the resolution of the non-updated child. In particular, for the leaf node, there are no encrypted secrets, since a leaf node has no children. The recipient of a path update processes it with the following steps:", "comments": "You can also strike this sentence further down: \" In particular, for the leaf node, there are no encrypted secrets, since a leaf node has no children.\"\nNote that this severs the connection between the HPKE key in the leaf and the parent of the leaf. Whereas before, the leaf HPKE key was derived from a path secret, and the parent HPKE key was derived from the next path secret. Now the parent path secret is set directly, and the leaf HPKE key is set completely independently, with no dependency on a path secret. So we're losing a bit of internal structure in the tree. But this seems OK because that structure was never observable to the group -- only the holder of the leaf knows the leaf path secret, so nobody else could verify that the leaf was related to its parent.", "new_text": "The path secret value for a given node is encrypted for the subtree corresponding to the parent's non-updated child, that is, the child on the copath of the leaf node. There is one encrypted path secret for each public key in the resolution of the non-updated child. The recipient of a path update processes it with the following steps:"}
{"id": "q-en-mls-protocol-c6032d342710a51b3e7f0758bf1a67e75054d589519c0fd71dcb0f8ca70d0aa1", "old_text": "unoccupied leaf, or appended to the right edge of the tree if all leaves are occupied. Create an initial, partial GroupInfo object reflecting the following values: Group ID: The group ID for the group Epoch: The epoch ID for the next epoch Tree: The group's ratchet tree after the commit has been applied Prior confirmed transcript hash: The confirmed transcript hash for the current state of the group (not the provisional state) Create a DirectPath using the new tree (which includes any new members). The GroupContext for this operation uses the \"group_id\", \"epoch\", \"tree\", and \"prior_confirmed_transcript_hash\"", "comments": "This PR adapts the Welcome logic so that the new joiners get their path secrets directly, rather than from a DirectPath. This cleans things up in a few ways. The new joiners no longer have to have the context for the previous epoch (which was used to encrypt the DirectPath). And we can get away with one HPKE operation per new joiner instead of two (one for the EncryptedKeyPackage and one for the DirectPath). The major cost is that there's a bit more complexity in the TreeKEM. Instead of just producing and consuming DirectPaths, you now need to know which path secrets go with which nodes. You also need tree math to identify the common ancestor of two leaves, but there's a simple closed-form formula for that, expressed in the Python code here.\nNAME - Curious if this seems implementable to you. Clearly it's possible, since I did it in Go :) But it also needs to make sense to other people.", "new_text": "unoccupied leaf, or appended to the right edge of the tree if all leaves are occupied. Create a DirectPath using the new tree (which includes any new members). The GroupContext for this operation uses the \"group_id\", \"epoch\", \"tree\", and \"prior_confirmed_transcript_hash\""}
{"id": "q-en-mls-protocol-c6032d342710a51b3e7f0758bf1a67e75054d589519c0fd71dcb0f8ca70d0aa1", "old_text": "the \"confirmation\" value in the MLSPlaintext. Sign the MLSPlaintext using the current epoch's GroupContext as context. Complete the GroupInfo by populating the following fields: Confirmed transcript hash: The confirmed transcript hash including the current Commit object Interim transcript hash: The interim transcript hash including the current Commit object Confirmation: The confirmation from the MLSPlaintext Sign the GroupInfo using the member's private signing key Encrypt the GroupInfo using the key and nonce derived from the \"init_secret\" for the current epoch (see welcoming-new-members) For each new member in the group, compute an EncryptedKeyPackage object that encapsulates the \"init_secret\" for the current epoch. Construct a Welcome message from the encrypted GroupInfo object and the encrypted key packages.", "comments": "This PR adapts the Welcome logic so that the new joiners get their path secrets directly, rather than from a DirectPath. This cleans things up in a few ways. The new joiners no longer have to have the context for the previous epoch (which was used to encrypt the DirectPath). And we can get away with one HPKE operation per new joiner instead of two (one for the EncryptedKeyPackage and one for the DirectPath). The major cost is that there's a bit more complexity in the TreeKEM. Instead of just producing and consuming DirectPaths, you now need to know which path secrets go with which nodes. You also need tree math to identify the common ancestor of two leaves, but there's a simple closed-form formula for that, expressed in the Python code here.\nNAME - Curious if this seems implementable to you. Clearly it's possible, since I did it in Go :) But it also needs to make sense to other people.", "new_text": "the \"confirmation\" value in the MLSPlaintext. Sign the MLSPlaintext using the current epoch's GroupContext as context. 
Update the tree in the provisional state by applying the direct path Construct a GroupInfo reflecting the new state: Group ID, epoch, tree, confirmed transcript hash, and interim transcript hash from the new state The confirmation from the MLSPlaintext object Sign the GroupInfo using the member's private signing key Encrypt the GroupInfo using the key and nonce derived from the \"epoch_secret\" for the new epoch (see welcoming-new-members) For each new member in the group: Identify the lowest common ancestor in the tree of the new member's leaf node and the member sending the Commit Compute the path secret corresponding to the common ancestor node Compute an EncryptedKeyPackage object that encapsulates the \"init_secret\" for the current epoch and the path secret for the common ancestor. Construct a Welcome message from the encrypted GroupInfo object and the encrypted key packages."}
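The PR discussion mentions a simple closed-form formula for the lowest common ancestor of two leaves. A minimal sketch of that computation, assuming the array-based node indexing used by MLS trees (leaf i at node index 2*i, parents at odd indices); the function name is illustrative:

```python
def common_ancestor(leaf_a: int, leaf_b: int) -> int:
    # Node indices of the two leaves in the array representation.
    x, y = 2 * leaf_a, 2 * leaf_b
    if x == y:
        return x
    # Shift both indices right until they agree; k counts the levels
    # climbed.  The shared prefix, restored to its original position
    # with a trailing 0111... pattern, is the index of the lowest
    # common ancestor.
    k = 0
    while x != y:
        x >>= 1
        y >>= 1
        k += 1
    return (x << k) + (1 << (k - 1)) - 1
```

For example, leaves 0 and 1 meet at their parent (node 1), while leaves 0 and 2 meet at the root of the first four-leaf subtree (node 3).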
{"id": "q-en-mls-protocol-c6032d342710a51b3e7f0758bf1a67e75054d589519c0fd71dcb0f8ca70d0aa1", "old_text": "the new state is the \"prior_confirmed_transcript_hash\" in the GroupInfo object. Process the \"path\" field in the GroupInfo to update the new group state: Apply the DirectPath to the tree, as described in synchronizing-views-of-the-tree. Define \"commit_secret\" as the value \"path_secret[n+1]\" derived from the \"path_secret[n]\" value assigned to the root node. Use the \"epoch_secret\" from the KeyPackage object to generate the epoch secret and other derived secrets for the current epoch.", "comments": "This PR adapts the Welcome logic so that the new joiners get their path secrets directly, rather than from a DirectPath. This cleans things up in a few ways. The new joiners no longer have to have the context for the previous epoch (which was used to encrypt the DirectPath). And we can get away with one HPKE operation per new joiner instead of two (one for the EncryptedKeyPackage and one for the DirectPath). The major cost is that there's a bit more complexity in the TreeKEM. Instead of just producing and consuming DirectPaths, you now need to know which path secrets go with which nodes. You also need tree math to identify the common ancestor of two leaves, but there's a simple closed-form formula for that, expressed in the Python code here.\nNAME - Curious if this seems implementable to you. Clearly it's possible, since I did it in Go :) But it also needs to make sense to other people.", "new_text": "the new state is the \"prior_confirmed_transcript_hash\" in the GroupInfo object. Update the leaf at index \"index\" with the private key corresponding to the public key in the node. Identify the lowest common ancestor of the leaves at \"index\" and at \"GroupInfo.signer_index\". Set the private key for this node to the private key derived from the \"path_secret\" in the KeyPackage object. 
For each parent of the common ancestor, up to the root of the tree, derive a new path secret and set the private key for the node to the private key derived from the path secret. The private key MUST be the private key that corresponds to the public key in the node. Use the \"epoch_secret\" from the KeyPackage object to generate the epoch secret and other derived secrets for the current epoch."}
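The hash-ratchet step above (one new path secret per node on the way to the root) can be sketched as follows; HMAC-SHA256 stands in for the draft's Derive-Secret function, whose real definition is an HKDF-based labeled expansion, and the function names are illustrative:

```python
import hashlib
import hmac

def derive_secret(secret: bytes, label: bytes) -> bytes:
    # Stand-in for the draft's Derive-Secret (the real construction
    # is HKDF-Expand with a labeled info string).
    return hmac.new(secret, label, hashlib.sha256).digest()

def path_secrets_to_root(ancestor_secret: bytes, hops_to_root: int) -> list:
    # Starting from the path secret decrypted for the common
    # ancestor, ratchet once per node on the way up to the root.
    secrets = [ancestor_secret]
    for _ in range(hops_to_root):
        secrets.append(derive_secret(secrets[-1], b"path"))
    return secrets
```

The private key for each node is then derived from the corresponding entry in this chain, and the joiner checks that it matches the public key already in the node.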
{"id": "q-en-mls-protocol-6ac0704a7f25241b4dbdd498b98090544e68acf68315e25178d5e3b01fee46f9", "old_text": "A hash function A Diffie-Hellman finite-field group or elliptic curve group An AEAD encryption algorithm A signature algorithm The ciphersuite's Diffie-Hellman group is used to instantiate an HPKE I-D.irtf-cfrg-hpke instance for the purpose of public-key encryption. The ciphersuite must specify an algorithm \"Derive-Key-Pair\" that maps octet strings with length Hash.length to HPKE key pairs. Ciphersuites are represented with the CipherSuite type. HPKE public keys are opaque values in a format defined by the underlying Diffie- Hellman protocol (see the Ciphersuites section of the HPKE specification for more information). The signature algorithm specified in the ciphersuite is the mandatory algorithm to be used for signatures in MLSPlaintext and the tree", "comments": "The first commit is editorial. I removed all the DH terminology I could find and replaced it with KEM terminology. The second commit is related to URL It would massively reduce complexity of implementations if people were able to rely on HPKE to do all private key parsing, key validation, public key computation, etc. Currently, all the information in the deleted CFRG and NIST sections is either redundant (performed by HPKE) or outdated and performed correctly by HPKE (e.g., old P-256 DH secret representation). Instead of having all this duplicated, it would be easier to keep it in one place. An issue with the new definition of : Repeatedly calling can be computationally expensive. Each call requires computing the group hash (which, okay, it could be cached) and serializing a struct. 
It would be nice to do something that's cheaper but still ties context into the secret key generation.\nUpdate: This now uses HPKE's function to do all the heavy lifting", "new_text": "A hash function A key encapsulation mechanism (KEM) An AEAD encryption algorithm A signature algorithm The ciphersuite's KEM is used to instantiate an HPKE I-D.irtf-cfrg-hpke instance for the purpose of public-key encryption. Each ciphersuite has a \"Derive-Key-Pair\" function that maps octet strings of length \"Nsk\" (a KEM-specific constant defined by HPKE) to HPKE key pairs. This function is defined to be HPKE's \"DeriveKeyPair\" function. Ciphersuites are represented with the CipherSuite type. HPKE public keys are opaque values in a format defined by the underlying protocol (see the Cryptographic Dependencies section of the HPKE specification for more information). The signature algorithm specified in the ciphersuite is the mandatory algorithm to be used for signatures in MLSPlaintext and the tree"}
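HPKE's DeriveKeyPair is what does the heavy lifting here. For DHKEM(X25519, HKDF-SHA256) the secret key is a direct labeled-HKDF output of the input keying material; a stdlib-only sketch of that derivation per the HPKE spec (the public-key half, an X25519 scalar multiplication, is omitted, and the Python helper names are illustrative):

```python
import hashlib
import hmac

SUITE_ID = b"KEM" + (0x0020).to_bytes(2, "big")  # DHKEM(X25519, HKDF-SHA256)
NSK = 32  # Nsk for X25519

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, t, i = b"", b"", 1
    while len(out) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        out += t
        i += 1
    return out[:length]

def labeled_extract(salt: bytes, label: bytes, ikm: bytes) -> bytes:
    return hkdf_extract(salt, b"HPKE-v1" + SUITE_ID + label + ikm)

def labeled_expand(prk: bytes, label: bytes, info: bytes, length: int) -> bytes:
    info = length.to_bytes(2, "big") + b"HPKE-v1" + SUITE_ID + label + info
    return hkdf_expand(prk, info, length)

def derive_key_pair_sk(ikm: bytes) -> bytes:
    # X25519 case only: the secret key is a single KDF output.
    # (NIST curves instead use rejection sampling over candidates.)
    dkp_prk = labeled_extract(b"", b"dkp_prk", ikm)
    return labeled_expand(dkp_prk, b"sk", b"", NSK)
```

Relying on this one entry point lets MLS implementations drop all the per-curve key parsing and validation rules that earlier drafts spelled out.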
{"id": "q-en-mls-protocol-6ac0704a7f25241b4dbdd498b98090544e68acf68315e25178d5e3b01fee46f9", "old_text": "The ciphersuites are defined in section mls-ciphersuites. Depending on the Diffie-Hellman group of the ciphersuite, different rules apply to private key derivation and public key verification. For all ciphersuites defined in this document, the Derive-Key-Pair function begins by deriving a \"key pair secret\" of appropriate length, then converting it to a private key in the required group. The ciphersuite specifies the required length and the conversion. 6.1.1. For X25519, the key pair secret is 32 octets long. No conversion is required, since any 32-octet string is a valid X25519 private key. The corresponding public key is X25519(SHA-256(X), 9). For X448, the key pair secret is 56 octets long. No conversion is required, since any 56-octet string is a valid X448 private key. The corresponding public key is X448(SHA-256(X), 5). Implementations MUST use the approach specified in RFC7748 to calculate the Diffie-Hellman shared secret. Implementations MUST check whether the computed Diffie-Hellman shared secret is the all- zero value and abort if so, as described in Section 6 of RFC7748. If implementers use an alternative implementation of these elliptic curves, they MUST perform the additional checks specified in Section 7 of 6.1.2. For P-256, the key pair secret is 32 octets long. For P-521, the key pair secret is 66 octets long. In either case, the private key derived from a key pair secret is computed by interpreting the key pair secret as a big-endian integer. ECDH calculations for these curves (including parameter and key generation as well as the shared secret calculation) are performed according to IEEE1363 using the ECKAS-DH1 scheme with the identity map as key derivation function (KDF), so that the shared secret is the x-coordinate of the ECDH shared secret elliptic curve point represented as an octet string. 
Note that this octet string (Z in IEEE 1363 terminology) as output by FE2OSP, the Field Element to Octet String Conversion Primitive, has constant length for any given field; leading zeros found in this octet string MUST NOT be truncated. (Note that this use of the identity KDF is a technicality. The complete picture is that ECDH is employed with a non-trivial KDF because MLS does not directly use this secret for anything other than for computing other secrets.) Clients MUST validate remote public values by ensuring that the point is a valid point on the elliptic curve. The appropriate validation procedures are defined in Section 4.3.7 of X962 and alternatively in Section 5.6.2.3 of keyagreement. This process consists of three steps: (1) verify that the value is not the point at infinity (O), (2) verify that for Y = (x, y) both integers are in the correct interval, (3) ensure that (x, y) is a correct solution to the elliptic curve equation. For these curves, implementers do not need to verify membership in the correct subgroup. 6.2. A member of a group authenticates the identities of other", "comments": "The first commit is editorial. I removed all the DH terminology I could find and replaced it with KEM terminology. The second commit is related to URL It would massively reduce complexity of implementations if people were able to rely on HPKE to do all private key parsing, key validation, public key computation, etc. Currently, all the information in the deleted CFRG and NIST sections is either redundant (performed by HPKE) or outdated and performed correctly by HPKE (e.g., old P-256 DH secret representation). Instead of having all this duplicated, it would be easier to keep it in one place. An issue with the new definition of : Repeatedly calling can be computationally expensive. Each call requires computing the group hash (which, okay, it could be cached) and serializing a struct. 
It would be nice to do something that's cheaper but still ties context into the secret key generation.\nUpdate: This now uses HPKE's function to do all the heavy lifting", "new_text": "The ciphersuites are defined in section mls-ciphersuites. 6.2. A member of a group authenticates the identities of other"}
{"id": "q-en-mls-protocol-e384eecaf0358bebd2d8057a72b468d3947f9b0550363e1191f592aab51539c9", "old_text": "Note that, as a PSK may have a different lifetime than an update, it does not necessarily provide the same Forward Secrecy (FS) or Post- Compromise Security (PCS) guarantees than a Commit message. 7.10.", "comments": "A couple of minor editorial changes to the current draft, namely: There was a missing grave accent in one of the paragraphs of the Commit section causing some formatting issues Before: ! After: ! Fix typo Pre-Shared Keys section [...], it does not necessarily provide the same [...] guarantees ~than~ as a Commit message. Resolve remaining reference to priorepoch In the Sequencing section there was a reference to a field that was introduced in 2fa6ed3c6afc7d308905255b4cabe154a48d1e4b. However, in 69e12cd61378aae4a81199c28a16e2802666b6e3, it was removed from the struct where it was added. That entire struct was then removed in 27c9c28f9634a06a9b7921ea9c15e276f8c9ff6f, after which the trail runs cold. So I wasn't able to figure out exactly what it must be based on what on what it originally was. Hence, I replaced it with something that makes sense in context. I based the new text on the following line found in the (current) Commit section: Rewrite paragraph on the new_member SenderType I found the phrase \" In such cases\" after the sentence \"Proposals with types other than Add MUST NOT be sent with this sender type\" slightly confusing and awkward. To improve this I rewrote the paragraph.\nCouple of nits on your nits", "new_text": "Note that, as a PSK may have a different lifetime than an update, it does not necessarily provide the same Forward Secrecy (FS) or Post- Compromise Security (PCS) guarantees as a Commit message. 7.10."}
{"id": "q-en-mls-protocol-e384eecaf0358bebd2d8057a72b468d3947f9b0550363e1191f592aab51539c9", "old_text": "in MLSPlaintext. The \"new_member\" SenderType is used for clients proposing that they themselves be added. For this ID type the sender value MUST be zero. Proposals with types other than Add MUST NOT be sent with this sender type. In such cases, the MLSPlaintext MUST be signed with the private key corresponding to the KeyPackage in the Add message. Recipients MUST verify that the MLSPlaintext carrying the Proposal message is validly signed with this key. The \"preconfigured\" SenderType is reserved for signers that are pre- provisioned to the clients within a group. If proposals with these", "comments": "A couple of minor editorial changes to the current draft, namely: There was a missing grave accent in one of the paragraphs of the Commit section causing some formatting issues Before: ! After: ! Fix typo Pre-Shared Keys section [...], it does not necessarily provide the same [...] guarantees ~than~ as a Commit message. Resolve remaining reference to priorepoch In the Sequencing section there was a reference to a field that was introduced in 2fa6ed3c6afc7d308905255b4cabe154a48d1e4b. However, in 69e12cd61378aae4a81199c28a16e2802666b6e3, it was removed from the struct where it was added. That entire struct was then removed in 27c9c28f9634a06a9b7921ea9c15e276f8c9ff6f, after which the trail runs cold. So I wasn't able to figure out exactly what it must be based on what on what it originally was. Hence, I replaced it with something that makes sense in context. I based the new text on the following line found in the (current) Commit section: Rewrite paragraph on the new_member SenderType I found the phrase \" In such cases\" after the sentence \"Proposals with types other than Add MUST NOT be sent with this sender type\" slightly confusing and awkward. To improve this I rewrote the paragraph.\nCouple of nits on your nits", "new_text": "in MLSPlaintext. 
The \"new_member\" SenderType is used for clients proposing that they themselves be added. For this ID type the sender value MUST be zero and the Proposal type MUST be Add. The MLSPlaintext MUST be signed with the private key corresponding to the KeyPackage in the Add message. Recipients MUST verify that the MLSPlaintext carrying the Proposal message is validly signed with this key. The \"preconfigured\" SenderType is reserved for signers that are pre- provisioned to the clients within a group. If proposals with these"}
{"id": "q-en-mls-protocol-e384eecaf0358bebd2d8057a72b468d3947f9b0550363e1191f592aab51539c9", "old_text": "The \"path\" field of a Commit message MUST be populated if the Commit covers at least one Update or Remove proposal, i.e., if the length of the \"updates\" or \"removes vectors is greater than zero. The \"path\"field MUST also be populated if the Commit covers no proposals at all (i.e., if all three proposal vectors are empty). The \"path\"field MAY be omitted if the Commit covers only Add proposals. In pseudocode, the logic for whether the \"path` field is required is as follows: To summarize, a Commit can have three different configurations, with different uses:", "comments": "A couple of minor editorial changes to the current draft, namely: There was a missing grave accent in one of the paragraphs of the Commit section causing some formatting issues Before: ! After: ! Fix typo Pre-Shared Keys section [...], it does not necessarily provide the same [...] guarantees ~than~ as a Commit message. Resolve remaining reference to priorepoch In the Sequencing section there was a reference to a field that was introduced in 2fa6ed3c6afc7d308905255b4cabe154a48d1e4b. However, in 69e12cd61378aae4a81199c28a16e2802666b6e3, it was removed from the struct where it was added. That entire struct was then removed in 27c9c28f9634a06a9b7921ea9c15e276f8c9ff6f, after which the trail runs cold. So I wasn't able to figure out exactly what it must be based on what on what it originally was. Hence, I replaced it with something that makes sense in context. I based the new text on the following line found in the (current) Commit section: Rewrite paragraph on the new_member SenderType I found the phrase \" In such cases\" after the sentence \"Proposals with types other than Add MUST NOT be sent with this sender type\" slightly confusing and awkward. 
To improve this I rewrote the paragraph.\nCouple of nits on your nits", "new_text": "The \"path\" field of a Commit message MUST be populated if the Commit covers at least one Update or Remove proposal, i.e., if the length of the \"updates\" or \"removes\" vectors is greater than zero. The \"path\" field MUST also be populated if the Commit covers no proposals at all (i.e., if all three proposal vectors are empty). The \"path\" field MAY be omitted if the Commit covers only Add proposals. In pseudocode, the logic for whether the \"path\" field is required is as follows: To summarize, a Commit can have three different configurations, with different uses:"}
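The "path"-required rule the record describes (mandatory for empty Commits and for any Update or Remove, optional for Add-only Commits) can be sketched as a predicate; the function and the string tags are illustrative:

```python
def path_required(proposal_types: list) -> bool:
    # An empty Commit must carry a path; a Commit covering only Add
    # proposals may omit it; anything else (Update, Remove) forces
    # a fresh UpdatePath.
    if not proposal_types:
        return True
    return any(t != "add" for t in proposal_types)
```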
{"id": "q-en-mls-protocol-e384eecaf0358bebd2d8057a72b468d3947f9b0550363e1191f592aab51539c9", "old_text": "12. Each Commit message is premised on a given starting state, indicated in its \"prior_epoch\" field. If the changes implied by a Commit messages are made starting from a different state, the results will be incorrect. This need for sequencing is not a problem as long as each time a group member sends a Commit message, it is based on the most current", "comments": "A couple of minor editorial changes to the current draft, namely: There was a missing grave accent in one of the paragraphs of the Commit section causing some formatting issues Before: ! After: ! Fix typo Pre-Shared Keys section [...], it does not necessarily provide the same [...] guarantees ~than~ as a Commit message. Resolve remaining reference to priorepoch In the Sequencing section there was a reference to a field that was introduced in 2fa6ed3c6afc7d308905255b4cabe154a48d1e4b. However, in 69e12cd61378aae4a81199c28a16e2802666b6e3, it was removed from the struct where it was added. That entire struct was then removed in 27c9c28f9634a06a9b7921ea9c15e276f8c9ff6f, after which the trail runs cold. So I wasn't able to figure out exactly what it must be based on what on what it originally was. Hence, I replaced it with something that makes sense in context. I based the new text on the following line found in the (current) Commit section: Rewrite paragraph on the new_member SenderType I found the phrase \" In such cases\" after the sentence \"Proposals with types other than Add MUST NOT be sent with this sender type\" slightly confusing and awkward. To improve this I rewrote the paragraph.\nCouple of nits on your nits", "new_text": "12. Each Commit message is premised on a given starting state, indicated by the \"epoch\" field of the enclosing MLSPlaintext message. If the changes implied by a Commit message are made starting from a different state, the results will be incorrect. 
This need for sequencing is not a problem as long as each time a group member sends a Commit message, it is based on the most current"}
{"id": "q-en-mls-protocol-73f63927eeee2f7ca32b4325672bcab6716b677aadd4e760b21171eb33732dc5", "old_text": "above), then it MUST be populated. Otherwise, the sender MAY omit the \"path\" field at its discretion. If populating the \"path\" field: Create a DirectPath using the new tree (which includes any new members). The GroupContext for this operation uses the \"group_id\", \"epoch\", \"tree_hash\", and \"confirmed_transcript_hash\" values in the initial GroupContext object. Assign this DirectPath to the \"path\" field in the Commit. Apply the DirectPath to the tree, as described in synchronizing-views-of-the-tree. Define \"commit_secret\" as the value \"path_secret[n+1]\" derived from the \"path_secret[n]\" value assigned to the root node.", "comments": "This is a simple rename of to to be more descriptive of the action being performed by the message. I'll open a separate PR after this PIR is merged with updates to the spec where it uses when instead it should say (since not every actually ratchets forward key material, as I understand). Some examples of where these clarifications should happen are: Line 1203 should instead say \"each MLS UpdatePath message\" The sections \"Ratchet Tree Evolution\" and \"Sychronizing Views of the Tree\" should describe the process of generating the key material for an message cc NAME\nDirectPath corresponds to the graph theory term \"direct path\" of a leaf with the root\n\"Direct path\" is still a thing, and the object in question here operates along a direct path, but I think \"UpdatePath\" indicates more clearly what it's doing.", "new_text": "above), then it MUST be populated. Otherwise, the sender MAY omit the \"path\" field at its discretion. If populating the \"path\" field: Create an UpdatePath using the new tree (which includes any new members). The GroupContext for this operation uses the \"group_id\", \"epoch\", \"tree_hash\", and \"confirmed_transcript_hash\" values in the initial GroupContext object. 
Assign this UpdatePath to the \"path\" field in the Commit. Apply the UpdatePath to the tree, as described in synchronizing-views-of-the-tree. Define \"commit_secret\" as the value \"path_secret[n+1]\" derived from the \"path_secret[n]\" value assigned to the root node."}
{"id": "q-en-mls-protocol-73f63927eeee2f7ca32b4325672bcab6716b677aadd4e760b21171eb33732dc5", "old_text": "the ratchet tree the provisional GroupContext, to update the ratchet tree and generate the \"commit_secret\": Apply the DirectPath to the tree, as described in synchronizing-views-of-the-tree, and store \"key_package\" at the Committer's leaf.", "comments": "This is a simple rename of to to be more descriptive of the action being performed by the message. I'll open a separate PR after this PIR is merged with updates to the spec where it uses when instead it should say (since not every actually ratchets forward key material, as I understand). Some examples of where these clarifications should happen are: Line 1203 should instead say \"each MLS UpdatePath message\" The sections \"Ratchet Tree Evolution\" and \"Sychronizing Views of the Tree\" should describe the process of generating the key material for an message cc NAME\nDirectPath corresponds to the graph theory term \"direct path\" of a leaf with the root\n\"Direct path\" is still a thing, and the object in question here operates along a direct path, but I think \"UpdatePath\" indicates more clearly what it's doing.", "new_text": "the ratchet tree the provisional GroupContext, to update the ratchet tree and generate the \"commit_secret\": Apply the UpdatePath to the tree, as described in synchronizing-views-of-the-tree, and store \"key_package\" at the Committer's leaf."}
{"id": "q-en-mls-protocol-216879bd2ab6f3fb8e088cd2c27583bf6fa5be2aa527dc60718ef8abc17e0894", "old_text": "A key package that is prepublished by a client, which other clients can use to introduce the client to a new group. A long-lived signing key pair used to authenticate the sender of a message. Terminology specific to tree computations is described in ratchet- trees.", "comments": "Rotating keys is important and in some cases, Signature Keys are not going to be very \"long-term\". Instead they're going to be rotated periodically. I thus propose we simply remove the assumption that Signature Keys are of a \"long-term\" nature (or otherwise \"long-lived\"). The second change in this PR is that I added a few words indicating explicitly that the identity must not change when updating a KeyPackage.", "new_text": "A key package that is prepublished by a client, which other clients can use to introduce the client to a new group. A signing key pair used to authenticate the sender of a message. Terminology specific to tree computations is described in ratchet- trees."}
{"id": "q-en-mls-protocol-216879bd2ab6f3fb8e088cd2c27583bf6fa5be2aa527dc60718ef8abc17e0894", "old_text": "Provider (SP) as described in I-D.ietf-mls-architecture. In particular, we assume the SP provides the following services: A long-term signature key provider which allows clients to authenticate protocol messages in a group. A broadcast channel, for each group, which will relay a message to all members of a group. For the most part, we assume that this", "comments": "Rotating keys is important and in some cases, Signature Keys are not going to be very \"long-term\". Instead they're going to be rotated periodically. I thus propose we simply remove the assumption that Signature Keys are of a \"long-term\" nature (or otherwise \"long-lived\"). The second change in this PR is that I added a few words indicating explicitly that the identity must not change when updating a KeyPackage.", "new_text": "Provider (SP) as described in I-D.ietf-mls-architecture. In particular, we assume the SP provides the following services: A signature key provider which allows clients to authenticate protocol messages in a group. A broadcast channel, for each group, which will relay a message to all members of a group. For the most part, we assume that this"}
{"id": "q-en-mls-protocol-216879bd2ab6f3fb8e088cd2c27583bf6fa5be2aa527dc60718ef8abc17e0894", "old_text": "as well as providing a public key that others can use for key agreement. The client's signature key can be updated throughout the lifetime of the group by sending a new KeyPackage with a new signature key; the new signature key MUST be validated by the authentication service. When used as InitKeys, KeyPackages are intended to be used only once", "comments": "Rotating keys is important and in some cases, Signature Keys are not going to be very \"long-term\". Instead they're going to be rotated periodically. I thus propose we simply remove the assumption that Signature Keys are of a \"long-term\" nature (or otherwise \"long-lived\"). The second change in this PR is that I added a few words indicating explicitly that the identity must not change when updating a KeyPackage.", "new_text": "as well as providing a public key that others can use for key agreement. The client's signature key can be updated throughout the lifetime of the group by sending a new KeyPackage with a new Credential. However, the identity MUST be the same in both Credentials and the new Credential MUST be validated by the authentication service. When used as InitKeys, KeyPackages are intended to be used only once"}
{"id": "q-en-mls-protocol-216879bd2ab6f3fb8e088cd2c27583bf6fa5be2aa527dc60718ef8abc17e0894", "old_text": "creator, because they are derived from an authenticated key exchange protocol. Subsequent leaf keys are known only by their owner. Note that the long-term signature keys used by the protocol MUST be distributed by an \"honest\" authentication service for clients to authenticate their legitimate peers. 14.2.", "comments": "Rotating keys is important and in some cases, Signature Keys are not going to be very \"long-term\". Instead they're going to be rotated periodically. I thus propose we simply remove the assumption that Signature Keys are of a \"long-term\" nature (or otherwise \"long-lived\"). The second change in this PR is that I added a few words indicating explicitly that the identity must not change when updating a KeyPackage.", "new_text": "creator, because they are derived from an authenticated key exchange protocol. Subsequent leaf keys are known only by their owner. Note that the signature keys used by the protocol MUST be distributed by an \"honest\" authentication service for clients to authenticate their legitimate peers. 14.2."}
{"id": "q-en-mls-protocol-30f896e650c352998f4fa43661c99781f130867d028d9585447d67ed367fc00c", "old_text": "This information is aggregated in a \"PublicGroupState\" object as follows: \"struct { CipherSuite cipher_suite; opaque group_id<0..255>; uint64 epoch; opaque tree_hash<0..255>; opaque interim_transcript_hash<0..255>; Extension extensions<0..2^32-1>; HPKEPublicKey external_pub; } PublicGroupState; \" Note that the \"tree_hash\" field is used the same way as in the Welcome message. The full tree can be included via the \"ratchet_tree\" extension ratchet-tree-extension. The information above are not deemed public data in general, but applications can choose to make them available to new members in order to allow External Commits. External Commits work like regular Commits, with a few differences:", "comments": "The unsigned PublicGroupState created by leads to an attack on the new joiner. The sender of a GroupPublicState can associate any HPKE it wants with the group, and thus learn the without being a member of the group. (The UpdatePath and the resulting partly defend against this attack, though they are subverted if the attacker has compromised any single key used in the Update Path.) To guard against this attack, this PR adds a signature over the HPKE public key in the PublicGroupState.\nIn light of the feedback on the mailing list from NAME re: deniability, I have upgraded this PR to sign the whole PublicGroupState.\nOutstanding issue: One detail that is not fully covered by is the strategy new joiners should apply to determine who the latest committer is and make sure that both the and the last Commit are signed by the same member.\nThe last node to send an UpdatePath is the last one to set the root. So you can follow parent_hash to find which leaf it is. But like I said on the IETF meeting call, I don't think we should actually require that PublicGroupState be signed by the last updater. 
There are possible operational/sync issues if the PublicGroupState isn't fate-shared with the Commit, but that's a practical issue, not a security issue. And in any case, it's not a breaking change, so we can fix it post-draft-11 if we decide we need to.\nPlease fix the link on the repo page to display diffs between the current editor's draft and the WG draft. Now, it should PRs comparing back to the -00 version.\nNAME Any change you could have a look at this when you have some time ? : )\nYeah, I'm gonna fix the CI and I can take a look at this as well. I tried to get it done before the interim but, well, excuses excuses\nAhah... ;) No problem!! Thanks !\nNAME -- last call on this, or i'm just going to close it\nSorry. As you will have guessed, this fell through the cracks. I'll try this week to fix it out and close it out if I don't get to it this weekend :/", "new_text": "This information is aggregated in a \"PublicGroupState\" object as follows: Note that the \"tree_hash\" field is used the same way as in the Welcome message. The full tree can be included via the \"ratchet_tree\" extension ratchet-tree-extension. The signature MUST verify using the public key taken from the credential in the leaf node at position \"signer_index\". The signature covers the following structure, comprising all the fields in the PublicGroupState above \"signer_index\": This signature authenticates the HPKE public key, so that the joiner knows that the public key was provided by a member of the group. The fields that are not signed are included in the key schedule via the GroupContext object. If the joiner is provided inaccurate data for these fields, then its external Commit will have an incorrect \"confirmation_tag\" and thus be rejected. The information in a PublicGroupState is not deemed public in general, but applications can choose to make it available to new members in order to allow External Commits. External Commits work like regular Commits, with a few differences:"}
{"id": "q-en-mls-protocol-e43a9d2badbf0c4e63955688b49141402cb607cabfb560d50be22d68fe2feec8", "old_text": "client being added to new groups by exhausting all available InitKeys. 16. This document requests the creation of the following new IANA", "comments": "In preparation for feature-freeze / post-first-WGLC living, this PR removes a few issues that have been discussed and pretty much resolved. It leaves a few OPEN ISSUES that could benefit from analysis during the feature freeze.", "new_text": "client being added to new groups by exhausting all available InitKeys. 15.5. It is possible for a malicious member of a group to \"fragment\" the group by crafting an invalid UpdatePath. Recall that an UpdatePath encrypts a sequence of path secrets to different subtrees of the group's ratchet trees. These path secrets should be derived in a sequence as described in ratchet-tree-evolution, but the UpdatePath syntax allows the sender to encrypt arbitrary, unrelated secrets. The syntax also does not guarantee that the encrypted path secret encrypted for a given node corresponds to the public key provided for that node. Both of these types of corruption will cause processing of a Commit to fail for some members of the group. If the public key for a node does not match the path secret, then the members that decrypt that path secret will reject the commit based on this mismatch. If the path secret sequence is incorrect at some point, then members that can decrypt nodes before that point will compute a different public key for the mismatched node than the one in the UpdatePath, which also causes the Commit to fail. Applications SHOULD provide mechanisms for failed commits to be reported, so that group members who were not able to recognize the error themselves can reject the commit and roll back to a previous state if necessary. Even with such an error reporting mechanism in place, however, it is still possible for members to get locked out of the group by a malformed commit. 
Since malformed Commits can only be recognized by certain members of the group, in an asynchronous application, it may be the case that all members that could detect a fault in a Commit are offline. In such a case, the Commit will be accepted by the group, and the resulting state possibly used as the basis for further Commits. When the affected members come back online, they will reject the first commit, and thus be unable to catch up with the group. Applications can address this risk by requiring certain members of the group to acknowledge successful processing of a Commit before the group regards the Commit as accepted. The minimum set of acknowledgements necessary to verify that a Commit is well-formed comprises an acknowledgement from one member per node in the UpdatePath, that is, one member from each subtree rooted in the copath node corresponding to the node in the UpdatePath. 16. This document requests the creation of the following new IANA"}
{"id": "q-en-mls-protocol-19a0189a526f87d237a1ff89d73a5e1e486a41317cd44f9e7b6b608991ead235", "old_text": "committer's contribution to the group and provides PCS with regard to the committer. An \"add-only\" Commit that references only Add proposals, in which the path is optional. Such a commit provides PCS with regard to the committer only if the path field is present. A \"full\" Commit that references proposals of any type, which provides FS with regard to any removed members and PCS for the", "comments": "Discussed at 20210526 Interim. Merge.\nGood point about allowing PSKs in a partial Commit!", "new_text": "committer's contribution to the group and provides PCS with regard to the committer. A \"partial\" Commit that references Add, PreSharedKey, or ReInit proposals but where the path is empty. Such a commit doesn't provide PCS with regard to the committer. A \"full\" Commit that references proposals of any type, which provides FS with regard to any removed members and PCS for the"}
{"id": "q-en-mls-protocol-b0b4fa264be90a07469d69d8243f78fef87f8341cbb9801a6d703daad5240318", "old_text": "A signature algorithm MLS uses draft-07 of HPKE I-D.irtf-cfrg-hpke for public-key encryption. The \"DeriveKeyPair\" function associated to the KEM for the ciphersuite maps octet strings to HPKE key pairs.", "comments": "draft-08 is the latest HPKE draft, which should be stable. It is also what's being used for interop testing right now.\nDiscussed 20210526 Interim: Merge.", "new_text": "A signature algorithm MLS uses draft-08 of HPKE I-D.irtf-cfrg-hpke for public-key encryption. The \"DeriveKeyPair\" function associated to the KEM for the ciphersuite maps octet strings to HPKE key pairs."}
{"id": "q-en-mls-protocol-0e8b5fd14795f3f8f89ea40025f245d3dd43f9b5f53994689b1fab6977c47a08", "old_text": "External Commits work like regular Commits, with a few differences: External Commits MUST reference an Add Proposal that adds the issuing new member to the group External Commits MUST contain a \"path\" field (and is therefore a \"full\" Commit)", "comments": "NAME NAME r?\nDiscussed at 20211004 interim, remove resynch with a single op (external join + remove in one op). NAME research why resynch was dropped.\nA good point was raised by Jonathon Hoyland during the MLS IETF 109 meeting regarding possible concerns in using external commits for resync, particularly in the case of Alice adding/removing herself. Richard noted that this is a feature in the case that Alice is no longer synchronized with the group and therefore can use an external commit to add herself back in, removing the previous version. As opposed to any newcomer joining with an external commit, the case of Alice re-joining presents a potential security issue. Namely, as currently specified (in my reading of the draft), an existing group member, Bob, has no means to distinguish between the following cases: 1) Alice needs to resync and therefore performs an external commit and removes her prior version. 2) Alice\u2019s signature keys are compromised (it is not necessary for the adversary to compromise any group state). The adversary performs an external commit in Alice\u2019s name, and then removes her prior version and impersonates her to the group. One might hope that Alice notices that she is removed and communicates this to the group members OOB, but it is also possible that she assumes some other reason for the removal, is offline, or simply is not active enough to take action for a fairly long compromise window.
Even if she tries to use an external commit to get back into the group and then removes the adversary-as-Alice, there is no means for other group members to distinguish the real Alice from the adversary-as-Alice and the process could be circular (until new valid identity keys are issued). While a newcomer is a fresh source to be trusted or not, Alice has been \u201chealing\u201d along with the group and the above option (2) allows the adversary to bypass all of that. The source of the problem is that when Alice re-syncs, she is not providing any validation of being the same/previous identity, so it is easy for other group members to accept that nothing more than a resync has taken place. Thus, a fairly straightforward solution is to require PSK use in cases where an external commit is used for resync. By enabling a PSK derived from a previous epoch during which Alice was part of the group to be injected with the external commit, Alice provides some proof of prior group membership and we avoid the total reset. This is not quite PCS in that the attacker is active following compromise, but it is a linked case. As such it is important that a conscious decision is made regarding this (either as a slight change before the feature-freeze in line with the other changes that the editors have proposed, or as a working group decision to close the issue as out-of-scope).\nJust to think this with what has been discussed on the mailing list: will give members a way to do the distinction mentioned above.\nNAME - I don't think that's actually the case; it would only allow the members a way to recognize that Alice is replacing herself.
I think the right answer here is what Britta suggests, plus a little: We should enumerate a few specific constraints on the proposals that can be carried inline in an external Commit: There MUST be an Add proposal for the signer There MUST be an ExternalInit proposal There MUST NOT be any Update proposals If a Remove proposal is present, then: The identity of the removed node MUST be the same as the identity in the Add KeyPackage There MUST be a PSK proposal of type , referencing an earlier epoch of this group In other words, you can commit things that existing members sent (by reference), but the joiner-initiated proposals must only (a) add the joiner, and (b) optionally remove the old instance of the joiner, with the PSK assurance NAME suggests.\nComing back to this after a few weeks, I'm actually not sure this is a problem worth solving. The attack scenario (2) exists whether or not a member can resync without proving prior membership. As long as (a) the group allows external Commits, and (b) the group does not require a new appearance of an existing identity to present a PSK from an earlier epoch, then the attacker can still do the attack by first joining, and then removing the old Alice. The only difference in the case where the external commit also does the resync is that the two happen together. It also doesn't seem unreasonable to regard the ability to resync from loss of all state except the signature key as a feature, not a bug. Of course, that position would entail accepting that compromise of signature keys would be sufficient to impersonate the user, but that doesn't seem all that surprising, at least to me. So this seems like a question of group policy to me. Either the group requires ReInit PSKs on resync (and you can't recover from all-but-signature-key state loss), or it doesn't (and signature keys are the last resort). This can be documented in the Security Considerations, in a new PR. 
As far as , I think it's probably still worth adding the precision, but I would change the PSK part to MAY if not remove it entirely.\nThere is also a practical issue with using PSKs to prove past membership, in that if there have been joins since Alice lost state, the new joiners won't have the PSK. This is of course a general issue with the proof-of-past-membership PSK, but it is particularly salient here.", "new_text": "External Commits work like regular Commits, with a few differences: The proposals included by value in an External Commit MUST meet the following conditions: There MUST be a single Add proposal that adds the issuing new member to the group There MUST be a single ExternalInit proposal There MUST NOT be any Update proposals If a Remove proposal is present, then the \"credential\" and \"endpoint_id\" of the removed leaf MUST be the same as the corresponding values in the Add KeyPackage. The proposals included by reference in an External Commit MUST meet the following conditions: There MUST NOT be any ExternalInit proposals External Commits MUST contain a \"path\" field (and is therefore a \"full\" Commit)"}
{"id": "q-en-mls-protocol-0e8b5fd14795f3f8f89ea40025f245d3dd43f9b5f53994689b1fab6977c47a08", "old_text": "public key for the credential in the \"leaf_key_package\" of the \"path\" field. An external commit MUST reference no more than one ExternalInit proposal, and the ExternalInit proposal MUST be supplied by value, not by reference. When processing a Commit, both existing and new members MUST use the external init secret as described in external-initialization. The sender type for the MLSPlaintext encapsulating the External Commit MUST be \"new_member\" If the Add Proposal is also issued by the new member, its member SenderType MUST be \"new_member\" 11.2.2.", "comments": "NAME NAME r?\nDiscussed at 20211004 interim, remove resynch with a single op (external join + remove in one op). NAME research why resynch was dropped.\nA good point was raised by Jonathon Hoyland during the MLS IETF 109 meeting regarding possible concerns in using external commits for resync, particularly in the case of Alice adding/removing herself. Richard noted that this is a feature in the case that Alice is no longer synchronized with the group and therefore can use an external commit to add herself back in, removing the previous version. As opposed to any newcomer joining with an external commit, the case of Alice re-joining presents a potential security issue. Namely, as currently specified (in my reading of the draft), an existing group member, Bob, has no means to distinguish between the following cases: 1) Alice needs to resync and therefore performs an external commit and removes her prior version. 2) Alice\u2019s signature keys are compromised (it is not necessary for the adversary to compromise any group state). The adversary performs an external commit in Alice\u2019s name, and then removes her prior version and impersonates her to the group. 
One might hope that Alice notices that she is removed and communicates this to the group members OOB, but it is also possible that she assumes some other reason for the removal, is offline, or simply is not active enough to take action for a fairly long compromise window. Even if she tries to use an external commit to get back into the group and then removes the adversary-as-Alice, there is no means for other group members to distinguish the real Alice from the adversary-as-Alice and the process could be circular (until new valid identity keys are issued). While a newcomer is a fresh source to be trusted or not, Alice has been \u201chealing\u201d along with the group and the above option (2) allows the adversary to bypass all of that. The source of the problem is that when Alice re-syncs, she is not providing any validation of being the same/previous identity, so it is easy for other group members to accept that nothing more than a resync has taken place. Thus, a fairly straightforward solution is to require PSK use in cases where an external commit is used for resync. By enabling a PSK derived from a previous epoch during which Alice was part of the group to be injected with the external commit, Alice provides some proof of prior group membership and we avoid the total reset. This is not quite PCS in that the attacker is active following compromise, but it is a linked case. As such it is important that a conscious decision is made regarding this (either as a slight change before the feature-freeze in line with the other changes that the editors have proposed, or as a working group decision to close the issue as out-of-scope).\nJust to think this with what has been discussed on the mailing list: will give members a way to do the distinction mentioned above.\nNAME - I don't think that's actually the case; it would only allow the members a way to recognize that Alice is replacing herself.
I think the right answer here is what Britta suggests, plus a little: We should enumerate a few specific constraints on the proposals that can be carried inline in an external Commit: There MUST be an Add proposal for the signer There MUST be an ExternalInit proposal There MUST NOT be any Update proposals If a Remove proposal is present, then: The identity of the removed node MUST be the same as the identity in the Add KeyPackage There MUST be a PSK proposal of type , referencing an earlier epoch of this group In other words, you can commit things that existing members sent (by reference), but the joiner-initiated proposals must only (a) add the joiner, and (b) optionally remove the old instance of the joiner, with the PSK assurance NAME suggests.\nComing back to this after a few weeks, I'm actually not sure this is a problem worth solving. The attack scenario (2) exists whether or not a member can resync without proving prior membership. As long as (a) the group allows external Commits, and (b) the group does not require a new appearance of an existing identity to present a PSK from an earlier epoch, then the attacker can still do the attack by first joining, and then removing the old Alice. The only difference in the case where the external commit also does the resync is that the two happen together. It also doesn't seem unreasonable to regard the ability to resync from loss of all state except the signature key as a feature, not a bug. Of course, that position would entail accepting that compromise of signature keys would be sufficient to impersonate the user, but that doesn't seem all that surprising, at least to me. So this seems like a question of group policy to me. Either the group requires ReInit PSKs on resync (and you can't recover from all-but-signature-key state loss), or it doesn't (and signature keys are the last resort). This can be documented in the Security Considerations, in a new PR. 
As far as , I think it's probably still worth adding the precision, but I would change the PSK part to MAY if not remove it entirely.\nThere is also a practical issue with using PSKs to prove past membership, in that if there have been joins since Alice lost state, the new joiners won't have the PSK. This is of course a general issue with the proof-of-past-membership PSK, but it is particularly salient here.", "new_text": "public key for the credential in the \"leaf_key_package\" of the \"path\" field. When processing a Commit, both existing and new members MUST use the external init secret as described in external-initialization. The sender type for the MLSPlaintext encapsulating the External Commit MUST be \"new_member\" In other words, External Commits come in two \"flavors\" - a \"join\" commit that adds the sender to the group or a \"resync\" commit that replaces a member's prior appearance with a new one. Note that the \"resync\" operation allows an attacker that has compromised a member's signature private key to introduce themselves into the group and remove the prior, legitimate member in a single Commit. Without resync, this can still be done, but requires two operations: the external Commit to join and a second Commit to remove the old appearance. Applications for which this distinction is salient can choose to disallow external commits that contain a Remove, or to allow such resync commits only if they contain a \"reinit\" PSK proposal that demonstrates the joining member's presence in a prior epoch of the group. With the latter approach, the attacker would need to compromise the PSK as well as the signing key, but the application will need to ensure that continuing, non-resync'ing members have the required PSK. 11.2.2."}
{"id": "q-en-mls-protocol-0e8b5fd14795f3f8f89ea40025f245d3dd43f9b5f53994689b1fab6977c47a08", "old_text": "guaranteed by a digital signature on each message from the sender's signature key. 15.3. Post-compromise security is provided between epochs by members", "comments": "NAME NAME r?\nDiscussed at 20211004 interim, remove resynch with a single op (external join + remove in one op). NAME research why resynch was dropped.\nA good point was raised by Jonathon Hoyland during the MLS IETF 109 meeting regarding possible concerns in using external commits for resync, particularly in the case of Alice adding/removing herself. Richard noted that this is a feature in the case that Alice is no longer synchronized with the group and therefore can use an external commit to add herself back in, removing the previous version. As opposed to any newcomer joining with an external commit, the case of Alice re-joining presents a potential security issue. Namely, as currently specified (in my reading of the draft), an existing group member, Bob, has no means to distinguish between the following cases: 1) Alice needs to resync and therefore performs an external commit and removes her prior version. 2) Alice\u2019s signature keys are compromised (it is not necessary for the adversary to compromise any group state). The adversary performs an external commit in Alice\u2019s name, and then removes her prior version and impersonates her to the group. One might hope that Alice notices that she is removed and communicates this to the group members OOB, but it is also possible that she assumes some other reason for the removal, is offline, or simply is not active enough to take action for a fairly long compromise window.
Even if she tries to use an external commit to get back into the group and then removes the adversary-as-Alice, there is no means for other group members to distinguish the real Alice from the adversary-as-Alice and the process could be circular (until new valid identity keys are issued). While a newcomer is a fresh source to be trusted or not, Alice has been \u201chealing\u201d along with the group and the above option (2) allows the adversary to bypass all of that. The source of the problem is that when Alice re-syncs, she is not providing any validation of being the same/previous identity, so it is easy for other group members to accept that nothing more than a resync has taken place. Thus, a fairly straightforward solution is to require PSK use in cases where an external commit is used for resync. By enabling a PSK derived from a previous epoch during which Alice was part of the group to be injected with the external commit, Alice provides some proof of prior group membership and we avoid the total reset. This is not quite PCS in that the attacker is active following compromise, but it is a linked case. As such it is important that a conscious decision is made regarding this (either as a slight change before the feature-freeze in line with the other changes that the editors have proposed, or as a working group decision to close the issue as out-of-scope).\nJust to think this with what has been discussed on the mailing list: will give members a way to do the distinction mentioned above.\nNAME - I don't think that's actually the case; it would only allow the members a way to recognize that Alice is replacing herself.
I think the right answer here is what Britta suggests, plus a little: We should enumerate a few specific constraints on the proposals that can be carried inline in an external Commit: There MUST be an Add proposal for the signer There MUST be an ExternalInit proposal There MUST NOT be any Update proposals If a Remove proposal is present, then: The identity of the removed node MUST be the same as the identity in the Add KeyPackage There MUST be a PSK proposal of type , referencing an earlier epoch of this group In other words, you can commit things that existing members sent (by reference), but the joiner-initiated proposals must only (a) add the joiner, and (b) optionally remove the old instance of the joiner, with the PSK assurance NAME suggests.\nComing back to this after a few weeks, I'm actually not sure this is a problem worth solving. The attack scenario (2) exists whether or not a member can resync without proving prior membership. As long as (a) the group allows external Commits, and (b) the group does not require a new appearance of an existing identity to present a PSK from an earlier epoch, then the attacker can still do the attack by first joining, and then removing the old Alice. The only difference in the case where the external commit also does the resync is that the two happen together. It also doesn't seem unreasonable to regard the ability to resync from loss of all state except the signature key as a feature, not a bug. Of course, that position would entail accepting that compromise of signature keys would be sufficient to impersonate the user, but that doesn't seem all that surprising, at least to me. So this seems like a question of group policy to me. Either the group requires ReInit PSKs on resync (and you can't recover from all-but-signature-key state loss), or it doesn't (and signature keys are the last resort). This can be documented in the Security Considerations, in a new PR. 
As far as , I think it's probably still worth adding the precision, but I would change the PSK part to MAY if not remove it entirely.\nThere is also a practical issue with using PSKs to prove past membership, in that if there have been joins since Alice lost state, the new joiners won't have the PSK. This is of course a general issue with the proof-of-past-membership PSK, but it is particularly salient here.", "new_text": "guaranteed by a digital signature on each message from the sender's signature key. The signature keys held by group members are critical to the security of MLS against active attacks. If a member's signature key is compromised, then an attacker can create KeyPackages impersonating the member; depending on the application, this can then allow the attacker to join the group with the compromised member's identity. For example, if a group has enabled external parties to join via external commits, then an attacker that has compromised a member's signature key could use an external commit to insert themselves into the group - even using a \"resync\"-style external commit to replace the compromised member in the group. Applications can mitigate the risks of signature key compromise using pre-shared keys. If a group requires joiners to know a PSK in addition to authenticating with a credential, then in order to mount an impersonation attack, the attacker would need to compromise the relevant PSK as well as the victim's signature key. The cost of this mitigation is that the application needs some external arrangement that ensures that the legitimate members of the group have the required PSKs. 15.3. Post-compromise security is provided between epochs by members"}
{"id": "q-en-mls-protocol-ab3aafa1ca3797eb4a5a273f0f7d7a3ec42a2d499a3904baf1fdd133f739a856", "old_text": "Tree hash: The root hash of the above ratchet tree Confirmed transcript hash: the zero-length octet string Interim transcript hash: the zero-length octet string Init secret: a fresh random value of size \"KDF.Nh\" For each member, construct an Add proposal from the KeyPackage for that member (see add)", "comments": "This PR adds a RequiredCapabilities extension that allows the members of a group to impose a requirement that new joiners support certain capabilities. The extension is carried in the GroupContext and verified at Add time (including on external commits). In the interim 2021-10-04, we briefly discussed whether this mechanism needed to be included in the core protocol. Because of its interactions with Add and GroupContextExtensions proposals, it seems worth having in the main protocol.\ns currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in the usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. some data is important in usage 2 and have no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing about adding the in to help prevent cross-group attacks) How about using different structures for these? 
// Essentially identical to KeyPackage struct { ProtocolVersion version; CipherSuite ciphersuite; HPKEPublicKey hpke_init_key; opaque endpoint_id; Credential credential; Extension extensions; opaque signature; } PrePublishedKeyPackage; struct { HPKEPublicKey hpke_key; opaque endpoint_id; Credential credential; opaque parent_hash; opaque group_id; Extension extensions; opaque signature; } LeafNodeContent; enum { reserved(0), pre_published_key_package(1), leaf_node_content(2), (255) } LeafNodeType; struct LeafNode { // This is not included in the signature, but with the \"unambiguous signature\" PR that's not a problem LeafNodeType type; select(URL) { case pre_published_key_package: PrePublishedKeyPackage data; case leaf_node_content: LeafNodeContent data; } } What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well because is an example of an extension that is required and should be a field instead\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion, this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh somehow I missed that part in your original post. Makes sense\nI like this general direction.
It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note, that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\". 
And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU: When you first get the tree, you can't validate whether the group_id should be in a given leaf or not Once you're a member, you require that group_id is sent in Update and UpdatePath.leaf_key_package I could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway.
Proposal: Preparatory PR: Rationalize layout Move \"Cryptographic Objects\" to before \"Ratchet Trees\" Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage Move subsections of \"Key Packages\" to under \"Ratchet Trees\" Parent Hash Tree Hashes Update Paths Move Group State to its own top-level section Main PR: Change LeafNode to have the required content: HPKE key for TreeKEM Credential Parent hash Capabilities Extensions Signature Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome Lifetime Extensions Signature Add continues to send KeyPackage Update and Commit send LeafNode instead of KeyPackage Remove and Sender refer to LeafNodeRef instead of KeyPackageRef We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call: General agreement on the approach Should sign group_id when possible Include in TBS if we at least have a flag for pre-published / in-group Or we can store it Or we could have independent structs for pre-published / in-group leaves Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields\nMany protocols which have an extensibility mechanism have a way to indicate that understanding an extension is mandatory for certain behaviors. For MLS, this would mean having a way to require a member understand an MLS extension before it can join a group. The three extension types defined currently are: MLS ciphersuites, MLS extension types, and MLS credential types. One logical way to add mandatory extensions would be to pick one of these types (the MLS extensions type) and use the high order bit or a specific range to indicate that a member must understand and implement the extension to join a group which uses that extension.
The IANA values could be changed as follows: Values from 0x8000-0xffff indicate understanding the extension is required 0x7f00 - 0x7fff Reserved for Private Use (not required) 0xff00 - 0xffff Reserved for Private Use (required) If a required ciphersuite or credential becomes necessary, a new extension type could be created to add an extension in the KeyPackage or GroupInfo which would be required and would indicate support for the mandatory ciphersuites or credentials. In order to prevent some other entity from adding an MLS client that does not support a mandatory extension, we already have the KeyPackage Client Capabilities, which lists which extensions are supported by the client.\nNote that though this might seem \"now or never\", I don't think it really is. Consider TLS by way of analogy -- it has no mechanism for specifying that extensions are required. But if a server gets a ClientHello that is missing some extension that it requires, it can just abort the handshake. Likewise in MLS, the member adding a new joiner to the group can examine that joiner's KeyPackage to see if they support the extensions needed for the group. So I think you could implement this as a non-mandatory GroupContext extension, which would state the extensions required by the group so that all members apply the same requirements for new joiners.\nWhile you can implement this \"out-of-band\", that makes it implicit rather than explicit behavior.\nDiscussed at 20211004 interim, liked this functionality. But, instead of IANA registry use field (or extension) in group context to indicate which extensions are required. 
NAME to generate PR.\nDiscussed with NAME and incorporated this functionality into PR", "new_text": "Tree hash: The root hash of the above ratchet tree Confirmed transcript hash: The zero-length octet string Interim transcript hash: The zero-length octet string Init secret: A fresh random value of size \"KDF.Nh\" Extensions: Any values of the creator's choosing For each member, construct an Add proposal from the KeyPackage for that member (see add)"}
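The initial epoch values listed in the record's new_text above (tree hash of the one-member ratchet tree, zero-length transcript hashes, fresh random init secret of size KDF.Nh) can be sketched as a plain Python model. This is illustrative only, not the spec's wire encoding; `KDF_NH = 32` assumes a SHA-256-based ciphersuite, and the tree-hash computation itself is defined elsewhere in the draft, so a stand-in digest is passed in.

```python
import hashlib
import os

KDF_NH = 32  # KDF.Nh for a SHA-256-based suite (illustrative assumption)

def initial_group_state(root_tree_hash: bytes) -> dict:
    # Values the group creator picks for epoch 0, per the text above.
    return {
        "tree_hash": root_tree_hash,               # root hash of the ratchet tree
        "confirmed_transcript_hash": b"",          # the zero-length octet string
        "interim_transcript_hash": b"",            # the zero-length octet string
        "init_secret": os.urandom(KDF_NH),         # fresh random value of size KDF.Nh
    }

# Stand-in for the real tree-hash of the creator's single-leaf tree
state = initial_group_state(hashlib.sha256(b"example-leaf").digest())
```

After this, the creator builds one Add proposal per initial member from that member's KeyPackage, as the record's new_text describes.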
{"id": "q-en-mls-protocol-ab3aafa1ca3797eb4a5a273f0f7d7a3ec42a2d499a3904baf1fdd133f739a856", "old_text": "10.1. A new group may be tied to an already existing group for the purpose of re-initializing the existing group, or to branch into a sub-group. Re-initializing an existing group may be used, for example, to", "comments": "This PR adds a RequiredCapabilities extension that allows the members of a group to impose a requirement that new joiners support certain capabilities. The extension is carried in the GroupContext and verified at Add time (including on external commits). In the interim 2021-10-04, we briefly discussed whether this mechanism needed to be included in the core protocol. Because of its interactions with Add and GroupContextExtensions proposals, it seems worth having in the main protocol.\nKeyPackages currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in the usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. some data is important in usage 2 and has no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing about adding the in to help prevent cross-group attacks) How about using different structures for these? 
// Essentially identical to KeyPackage struct { ProtocolVersion version; CipherSuite ciphersuite; HPKEPublicKey hpke_init_key; opaque endpoint_id; Credential credential; Extension extensions; opaque signature; } PrePublishedKeyPackage; struct { HPKEPublicKey hpke_key; opaque endpoint_id; Credential credential; opaque parent_hash; opaque group_id; Extension extensions; opaque signature; } LeafNodeContent; enum { reserved(0), pre_published_key_package(1), leaf_node_content(2), (255) } LeafNodeType; struct LeafNode { // This is not included in the signature, but with the \"unambiguous signature\" PR that's not a problem LeafNodeType type; select(URL) { case pre_published_key_package: PrePublishedKeyPackage data; case leaf_node_content: LeafNodeContent data; } } What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well because is an example of an extension that is required and should be a field instead\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion, this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh somehow I missed that part in your original post. Makes sense\nI like this general direction. 
It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into a KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\". 
And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU: When you first get the tree, you can't validate whether the group_id should be in a given leaf or not Once you're a member, you require that group_id is sent in Update and UpdatePath.leaf_key_package I could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway. 
Proposal: Preparatory PR: Rationalize layout Move \"Cryptographic Objects\" to before \"Ratchet Trees\" Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage Move subsections of \"Key Packages\" to under \"Ratchet Trees\" Parent Hash Tree Hashes Update Paths Move Group State to its own top-level section Main PR: Change LeafNode to have the required content: HPKE key for TreeKEM Credential Parent hash Capabilities Extensions Signature Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome Lifetime Extensions Signature Add continues to send KeyPackage Update and Commit send LeafNode instead of KeyPackage Remove and Sender refer to LeafNodeRef instead of KeyPackageRef We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call: General agreement on the approach Should sign group_id when possible Include in TBS if we at least have a flag for pre-published / in-group Or we can store it Or we could have independent structs for pre-published / in-group leaves Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields\nMany protocols which have an extensibility mechanism have a way to indicate that understanding an extension is mandatory for certain behaviors. For MLS, this would mean having a way to require a member understand an MLS extension before it can join a group. The three extension types defined currently are: MLS ciphersuites, MLS extension types, and MLS credential types. One logical way to add mandatory extensions would be to pick one of these types (the MLS extensions type) and use the high order bit or a specific range to indicate that a member must understand and implement the extension to join a group which uses that extension. 
The IANA values could be changed as follows: Values from 0x8000-0xffff indicate understanding the extension is required 0x7f00 - 0x7fff Reserved for Private Use (not required) 0xff00 - 0xffff Reserved for Private Use (required) If a required ciphersuite or credential becomes necessary, a new extension type could be created to add an extension in the KeyPackage or GroupInfo which would be required and would indicate support for the mandatory ciphersuites or credentials. In order to prevent some other entity from adding an MLS client that does not support a mandatory extension, we already have the KeyPackage Client Capabilities, which lists which extensions are supported by the client.\nNote that though this might seem \"now or never\", I don't think it really is. Consider TLS by way of analogy -- it has no mechanism for specifying that extensions are required. But if a server gets a ClientHello that is missing some extension that it requires, it can just abort the handshake. Likewise in MLS, the member adding a new joiner to the group can examine that joiner's KeyPackage to see if they support the extensions needed for the group. So I think you could implement this as a non-mandatory GroupContext extension, which would state the extensions required by the group so that all members apply the same requirements for new joiners.\nWhile you can implement this \"out-of-band\", that makes it implicit rather than explicit behavior.\nDiscussed at 20211004 interim, liked this functionality. But, instead of IANA registry use field (or extension) in group context to indicate which extensions are required. NAME to generate PR.\nDiscussed with NAME and incorporated this functionality into PR", "new_text": "10.1. The configuration of a group imposes certain requirements on clients in the group. At a minimum, all members of the group need to support the ciphersuite and protocol version in use. 
Additional requirements can be imposed by including a \"required_capabilities\" extension in the GroupContext. This extension lists the extensions and proposal types that must be supported by all members of the group. For new members, it is enforced by existing members during the application of Add commits. Existing members should of course be in compliance already. In order to ensure this continues to be the case even as the group's extensions can be updated, a GroupContextExtensions proposal is invalid if it contains a \"required_capabilities\" extension that requires capabilities not supported by all current members. 10.2. A new group may be tied to an already existing group for the purpose of re-initializing the existing group, or to branch into a sub-group. Re-initializing an existing group may be used, for example, to"}
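The required_capabilities checks described in the record above amount to subset tests: a new joiner must support every listed extension and proposal type, and a GroupContextExtensions proposal may not require anything a current member lacks. A minimal Python sketch, using hypothetical numeric codepoints and dict-free dataclasses that are not the spec's encoding:

```python
from dataclasses import dataclass, field

@dataclass
class Capabilities:
    # Extension and proposal types a client advertises (hypothetical codepoints)
    extensions: set = field(default_factory=set)
    proposals: set = field(default_factory=set)

@dataclass
class RequiredCapabilities:
    extensions: set = field(default_factory=set)
    proposals: set = field(default_factory=set)

    def satisfied_by(self, caps: Capabilities) -> bool:
        # A member complies iff it supports every required type.
        return (self.extensions <= caps.extensions
                and self.proposals <= caps.proposals)

def validate_add(required: RequiredCapabilities, joiner: Capabilities) -> None:
    # Run by existing members when applying an Add commit.
    if not required.satisfied_by(joiner):
        raise ValueError("joiner does not support the group's required capabilities")

def validate_gce_proposal(new_required: RequiredCapabilities,
                          members: list) -> None:
    # A GroupContextExtensions proposal is invalid if its new
    # required_capabilities are not supported by all current members.
    if not all(new_required.satisfied_by(m) for m in members):
        raise ValueError("required_capabilities not supported by all current members")

required = RequiredCapabilities(extensions={0x000A})
ok_joiner = Capabilities(extensions={0x000A, 0x000B})
validate_add(required, ok_joiner)  # passes silently
```

The same `satisfied_by` predicate serves both checks, which mirrors how the extension is enforced once at Add time and again whenever the group's extensions change.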
{"id": "q-en-mls-protocol-ab3aafa1ca3797eb4a5a273f0f7d7a3ec42a2d499a3904baf1fdd133f739a856", "old_text": "object must be a randomly sampled nonce of length \"KDF.Nh\" to avoid key re-use. 10.1.1. If a client wants to create a subgroup of an existing group, they MAY choose to include a \"PreSharedKeyID\" in the \"GroupSecrets\" object of", "comments": "This PR adds a RequiredCapabilities extension that allows the members of a group to impose a requirement that new joiners support certain capabilities. The extension is carried in the GroupContext and verified at Add time (including on external commits). In the interim 2021-10-04, we briefly discussed whether this mechanism needed to be included in the core protocol. Because of its interactions with Add and GroupContextExtensions proposals, it seems worth having in the main protocol.", "new_text": "object must be a randomly sampled nonce of length \"KDF.Nh\" to avoid key re-use. 10.2.1. If a client wants to create a subgroup of an existing group, they MAY choose to include a \"PreSharedKeyID\" in the \"GroupSecrets\" object of"}
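The high-order-bit scheme floated earlier in the discussion thread (extension types 0x8000-0xffff are mandatory to understand, with split private-use ranges) can be modeled with simple bitmask and range checks. Note this encodes the rejected alternative, not the design that was adopted; the working group preferred a field in the GroupContext instead. The function and range names here are illustrative:

```python
def is_mandatory(ext_type: int) -> bool:
    # Under the floated scheme, types with the high bit set (0x8000-0xffff)
    # must be understood by a member to join a group that uses them.
    return ext_type & 0x8000 != 0

def is_private_use(ext_type: int) -> bool:
    # 0x7f00-0x7fff: private use, not required; 0xff00-0xffff: private use, required
    return 0x7F00 <= ext_type <= 0x7FFF or 0xFF00 <= ext_type <= 0xFFFF

def can_join(group_ext_types, client_supported) -> bool:
    # A client may join only if it understands every mandatory extension in use;
    # non-mandatory extensions it doesn't know can simply be ignored.
    return all(t in client_supported for t in group_ext_types if is_mandatory(t))
```

This makes the trade-off visible: "required" is a property of the codepoint itself, whereas the adopted required_capabilities design lets each group decide which types are required.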
{"id": "q-en-mls-protocol-ab3aafa1ca3797eb4a5a273f0f7d7a3ec42a2d499a3904baf1fdd133f739a856", "old_text": "11.1.1. An Add proposal requests that a client with a specified KeyPackage be added to the group. The proposer of the Add does not control where in the group's ratchet tree the new member is added. Instead, the sender of the Commit", "comments": "This PR adds a RequiredCapabilities extension that allows the members of a group to impose a requirement that new joiners support certain capabilities. The extension is carried in the GroupContext and verified at Add time (including on external commits). In the interim 2021-10-04, we briefly discussed whether this mechanism needed to be included in the core protocol. Because of its interactions with Add and GroupContextExtensions proposals, it seems worth having in the main protocol.", "new_text": "11.1.1. An Add proposal requests that a client with a specified KeyPackage be added to the group. The proposer of the Add MUST validate the KeyPackage in the same way as recipients are required to do below. 
The proposer of the Add does not control where in the group's ratchet tree the new member is added. Instead, the sender of the Commit"}
{"id": "q-en-mls-protocol-ab3aafa1ca3797eb4a5a273f0f7d7a3ec42a2d499a3904baf1fdd133f739a856", "old_text": "in the tree, for the second Add, the next empty leaf to the right, etc. If necessary, extend the tree to the right until it has at least index + 1 leaves", "comments": "This PR adds a RequiredCapabilities extension that allows the members of a group to impose a requirement that new joiners support certain capabilities. The extension is carried in the GroupContext and verified at Add time (including on external commits). In the interim 2021-10-04, we briefly discussed whether this mechanism needed to be included in the core protocol. Because of its interactions with Add and GroupContextExtensions proposals, it seems worth having in the main protocol.\ns currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in the usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. some data is important in usage 2 and have no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing about adding the in to help prevent cross-group attacks) How about using different structures for these? 
// Essentially identical to KeyPackage struct { ProtocolVersion version; CipherSuite ciphersuite; HPKEPublicKey hpke_init_key; opaque endpoint_id; Credential credential; Extension extensions; opaque signature; } PrePublishedKeyPackage; struct { HPKEPublicKey hpke_key; opaque endpoint_id; Credential credential; opaque parent_hash; opaque group_id; Extension extensions; opaque signature; } LeafNodeContent; enum { reserved(0), pre_published_key_package(1), leaf_node_content(2), (255) } LeafNodeType; struct LeafNode { // This is not included in the signature, but with the \"unambiguous signature\" PR that's not a problem LeafNodeType type; select(URL) { case pre_published_key_package: PrePublishedKeyPackage data; case leaf_node_content: LeafNodeContent data; } } What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well because is an example of an extension that is required and should be a field instead\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion, this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh somehow I missed that part in your original post. Makes sense\nI like this general direction. 
It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note, that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\". 
And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU: When you first get the tree, you can't validate whether the group_id should be in a given leaf or not Once you're a member, you require that group_id is sent in Update and UpdatePath.leaf_key_package I could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway. 
Proposal: Preparatory PR: Rationalize layout Move \"Cryptographic Objects\" to before \"Ratchet Trees\" Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage Move subsections of \"Key Packages\" to under \"Ratchet Trees\" Parent Hash Tree Hashes Update Paths Move Group State to its own top-level section Main PR: Change LeafNode to have the required content: HPKE key for TreeKEM Credential Parent hash Capabilities Extensions Signature Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome Lifetime Extensions Signature Add continues to send KeyPackage Update and Commit send LeafNode instead of KeyPackage Remove and Sender refer to LeafNodeRef instead of KeyPackageRef We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call: General agreement on the approach Should sign group_id when possible Include in TBS if we at least have a flag for pre-published / in-group Or we can store it Or we could have independent structs for pre-published / in-group leaves Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields\nMany protocols which have an extensibility mechanism have a way to indicate that understanding an extension is mandatory for certain behaviors. For MLS, this would mean having a way to require a member understand an MLS extension before it can join a group. The three extension types defined currently are: MLS ciphersuites, MLS extension types, and MLS credential types. One logical way to add mandatory extensions would be to pick one of these types (the MLS extensions type) and use the high order bit or a specific range to indicate that a member must understand and implement the extension to join a group which uses that extension. 
The IANA values could be changed as follows: Values from 0x8000-0xffff indicate understanding the extension is required 0x7f00 - 0x7fff Reserved for Private Use (not required) 0xff00 - 0xffff Reserved for Private Use (required) If a required ciphersuite or credential becomes necessary, a new extension type could be created to add an extension in the KeyPackage or GroupInfo which would be required and would indicate support for the mandatory ciphersuites or credentials. In order to prevent some other entity from adding an MLS client that does not support a mandatory extension, we already have the KeyPackage Client Capabilities, which lists which extensions are supported by the client.\nNote that though this might seem \"now or never\", I don't think it really is. Consider TLS by way of analogy -- it has no mechanism for specifying that extensions are required. But if a server gets a ClientHello that is missing some extension that it requires, it can just abort the handshake. Likewise in MLS, the member adding a new joiner to the group can examine that joiner's KeyPackage to see if they support the extensions needed for the group. So I think you could implement this as a non-mandatory GroupContext extension, which would state the extensions required by the group so that all members apply the same requirements for new joiners.\nWhile you can implement this \"out-of-band\", that makes it implicit rather than explicit behavior.\nDiscussed at 20211004 interim, liked this functionality. But, instead of IANA registry use field (or extension) in group context to indicate which extensions are required. NAME to generate PR.\nDiscussed with NAME and incorporated this functionality into PR", "new_text": "in the tree, for the second Add, the next empty leaf to the right, etc. 
Validate the KeyPackage: Verify that the signature on the KeyPackage is valid using the public key in the KeyPackage's credential Verify that the following fields in the KeyPackage are unique among the members of the group (including any other members added in the same Commit): \"(credential.identity, endpoint_id)\" tuple \"credential.signature_key\" \"hpke_init_key\" Verify that the KeyPackage is compatible with the group's parameters. The ciphersuite and protocol version of the KeyPackage must match those in use in the group. If the GroupContext has a \"required_capabilities\" extension, then the required extensions and proposals MUST be listed in the KeyPackage's \"capabilities\" extension. If necessary, extend the tree to the right until it has at least index + 1 leaves"}
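The per-member uniqueness checks listed above can be sketched as follows. This is an illustrative sketch only; the `KeyPackage` class and its field names here are hypothetical stand-ins for the fields named in the spec text, not the API of any MLS library.

```python
# Hypothetical sketch of the uniqueness checks performed when
# validating a KeyPackage in an Add proposal: the (identity,
# endpoint_id) tuple, the credential's signature key, and the
# hpke_init_key must each be unique among current members and any
# other members added in the same Commit.
from dataclasses import dataclass

@dataclass(frozen=True)
class KeyPackage:
    identity: bytes        # credential.identity
    endpoint_id: bytes
    signature_key: bytes   # credential.signature_key
    hpke_init_key: bytes

def check_uniqueness(new_kp: KeyPackage, members: list[KeyPackage]) -> bool:
    """Return True iff new_kp's distinguishing fields collide with no member."""
    for m in members:
        if (m.identity, m.endpoint_id) == (new_kp.identity, new_kp.endpoint_id):
            return False
        if m.signature_key == new_kp.signature_key:
            return False
        if m.hpke_init_key == new_kp.hpke_init_key:
            return False
    return True
```

Signature verification and the ciphersuite/version/capabilities checks would happen alongside this; only the uniqueness rule is shown.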
{"id": "q-en-mls-protocol-ab3aafa1ca3797eb4a5a273f0f7d7a3ec42a2d499a3904baf1fdd133f739a856", "old_text": "\"struct { Extension extensions<0..2^32-1>; } GroupContextExtensions; \" A member of the group applies a GroupContextExtensions proposal by removing all of the existing extensions from the GroupContext object for the group and replacing them with the list of extensions in the proposal. (This is a wholesale replacement, not a merge. An extension is only carried over if the sender of the proposal includes it in the new list.) Note that once the GroupContext is updated, its inclusion in the confirmation_tag by way of the key schedule will confirm that all members of the group agree on the extensions in use. 11.1.9.", "comments": "This PR adds a RequiredCapabilities extension that allows the members of a group to impose a requirement that new joiners support certain capabilities. The extension is carried in the GroupContext and verified at Add time (including on external commits). In the interim 2021-10-04, we briefly discussed whether this mechanism needed to be included in the core protocol. Because of its interactions with Add and GroupContextExtensions proposals, it seems worth having in the main protocol.\ns currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in the usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. some data is important in usage 2 and have no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing about adding the in to help prevent cross-group attacks) How about using different structures for these? 
// Essentially identical to KeyPackage struct { ProtocolVersion version; CipherSuite ciphersuite; HPKEPublicKey hpke_init_key; opaque endpoint_id; Credential credential; Extension extensions; opaque signature; } PrePublishedKeyPackage; struct { HPKEPublicKey hpke_key; opaque endpoint_id; Credential credential; opaque parent_hash; opaque group_id; Extension extensions; opaque signature; } LeafNodeContent; enum { reserved(0), pre_published_key_package(1), leaf_node_content(2), (255) } LeafNodeType; struct LeafNode { // This is not included in the signature, but with the \"unambiguous signature\" PR that's not a problem LeafNodeType type; select(URL) { case pre_published_key_package: PrePublishedKeyPackage data; case leaf_node_content: LeafNodeContent data; } } What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well because is an example of an extension that is required and should be a field instead\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion, this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh somehow I missed that part in your original post. Makes sense\nI like this general direction. 
It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note, that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\". 
And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU: When you first get the tree, you can't validate whether the group_id should be in a given leaf or not Once you're a member, you require that group_id is sent in Update and UpdatePath.leaf_key_package I could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway. 
Proposal: Preparatory PR: Rationalize layout Move \"Cryptographic Objects\" to before \"Ratchet Trees\" Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage Move subsections of \"Key Packages\" to under \"Ratchet Trees\" Parent Hash Tree Hashes Update Paths Move Group State to its own top-level section Main PR: Change LeafNode to have the required content: HPKE key for TreeKEM Credential Parent hash Capabilities Extensions Signature Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome Lifetime Extensions Signature Add continues to send KeyPackage Update and Commit send LeafNode instead of KeyPackage Remove and Sender refer to LeafNodeRef instead of KeyPackageRef We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call: General agreement on the approach Should sign group_id when possible Include in TBS if we at least have a flag for pre-published / in-group Or we can store it Or we could have independent structs for pre-published / in-group leaves Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields\nMany protocols which have an extensibility mechanism have a way to indicate that understanding an extension is mandatory for certain behaviors. For MLS, this would mean having a way to require a member understand an MLS extension before it can join a group. The three extension types defined currently are: MLS ciphersuites, MLS extension types, and MLS credential types. One logical way to add mandatory extensions would be to pick one of these types (the MLS extensions type) and use the high order bit or a specific range to indicate that a member must understand and implement the extension to join a group which uses that extension. 
The IANA values could be changed as follows: Values from 0x8000-0xffff indicate understanding the extension is required 0x7f00 - 0x7fff Reserved for Private Use (not required) 0xff00 - 0xffff Reserved for Private Use (required) If a required ciphersuite or credential becomes necessary, a new extension type could be created to add an extension in the KeyPackage or GroupInfo which would be required and would indicate support for the mandatory ciphersuites or credentials. In order to prevent some other entity from adding an MLS client that does not support a mandatory extension, we already have the KeyPackage Client Capabilities, which lists which extensions are supported by the client.\nNote that though this might seem \"now or never\", I don't think it really is. Consider TLS by way of analogy -- it has no mechanism for specifying that extensions are required. But if a server gets a ClientHello that is missing some extension that it requires, it can just abort the handshake. Likewise in MLS, the member adding a new joiner to the group can examine that joiner's KeyPackage to see if they support the extensions needed for the group. So I think you could implement this as a non-mandatory GroupContext extension, which would state the extensions required by the group so that all members apply the same requirements for new joiners.\nWhile you can implement this \"out-of-band\", that makes it implicit rather than explicit behavior.\nDiscussed at 20211004 interim, liked this functionality. But, instead of IANA registry use field (or extension) in group context to indicate which extensions are required. 
NAME to generate PR.\nDiscussed with NAME and incorporated this functionality into PR", "new_text": "\"struct { Extension extensions<0..2^32-1>; } GroupContextExtensions; \" A member of the group applies a GroupContextExtensions proposal with the following steps: If the new extensions include a \"required_capabilities\" extension, verify that all members of the group support the required capabilities (including those added in the same commit, and excluding those removed). Remove all of the existing extensions from the GroupContext object for the group and replace them with the list of extensions in the proposal. (This is a wholesale replacement, not a merge. An extension is only carried over if the sender of the proposal includes it in the new list.) Note that once the GroupContext is updated, its inclusion in the confirmation_tag by way of the key schedule will confirm that all members of the group agree on the extensions in use. 11.1.9."}
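The two steps for applying a GroupContextExtensions proposal (check required capabilities, then replace wholesale) can be sketched as below. This is a hedged illustration: the extension-type constant, dict-based extension representation, and function names are hypothetical, not taken from an MLS implementation.

```python
# Illustrative sketch of applying a GroupContextExtensions proposal:
# first verify required_capabilities against every post-Add,
# post-Remove member, then replace the extension list wholesale
# (not a merge).

REQUIRED_CAPABILITIES = 0x000B  # placeholder extension-type value for this sketch

def apply_group_context_extensions(group_extensions: dict[int, object],
                                   new_extensions: dict[int, object],
                                   member_capabilities: list[set[int]]) -> dict[int, object]:
    required = new_extensions.get(REQUIRED_CAPABILITIES)
    if required is not None:
        for caps in member_capabilities:
            if not set(required) <= caps:
                raise ValueError("member lacks a required capability")
    # Wholesale replacement: nothing from group_extensions survives
    # unless it also appears in new_extensions.
    return dict(new_extensions)
```

Note that the old extension set plays no role in the result; only the proposal's list matters, which matches the "wholesale replacement" rule above.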
{"id": "q-en-mls-protocol-d06fd4713ee4fefe311ee787cf3dce73849a62f41ee123c746766768f1471517", "old_text": "DeriveSecret takes its Secret argument from the incoming arrow When processing a handshake message, a client combines the following information to derive new epoch secrets:", "comments": "Right now, the vector is described in a couple of ways. This PR standardizes on \"of length \".", "new_text": "DeriveSecret takes its Secret argument from the incoming arrow \"0\" represents an all-zero byte string of length \"KDF.Nh\". When processing a handshake message, a client combines the following information to derive new epoch secrets:"}
{"id": "q-en-mls-protocol-d06fd4713ee4fefe311ee787cf3dce73849a62f41ee123c746766768f1471517", "old_text": "in the \"PreSharedKeyID\" object. Specifically, \"psk_secret\" is computed as follows: Here \"0\" represents the all-zero vector of length KDF.Nh. The \"index\" field in \"PSKLabel\" corresponds to the index of the PSK in the \"psk\" array, while the \"count\" field contains the total number of PSKs. In other words, the PSKs are chained together with KDF.Extract", "comments": "Right now, the vector is described in a couple of ways. This PR standardizes on \"of length \".", "new_text": "in the \"PreSharedKeyID\" object. Specifically, \"psk_secret\" is computed as follows: Here \"0\" represents the all-zero vector of length \"KDF.Nh\". The \"index\" field in \"PSKLabel\" corresponds to the index of the PSK in the \"psk\" array, while the \"count\" field contains the total number of PSKs. In other words, the PSKs are chained together with KDF.Extract"}
{"id": "q-en-mls-protocol-d06fd4713ee4fefe311ee787cf3dce73849a62f41ee123c746766768f1471517", "old_text": "If not populating the \"path\" field: Set the \"path\" field in the Commit to the null optional. Define \"commit_secret\" as the all- zero vector of the same length as a \"path_secret\" value would be. In this case, the new ratchet tree is the same as the provisional ratchet tree. Derive the \"psk_secret\" as specified in pre-shared-keys, where the order of PSKs in the derivation corresponds to the order of", "comments": "Right now, the vector is described in a couple of ways. This PR standardizes on \"of length \".", "new_text": "If not populating the \"path\" field: Set the \"path\" field in the Commit to the null optional. Define \"commit_secret\" as the all- zero vector of length \"KDF.Nh\" (the same length as a \"path_secret\" value would be). In this case, the new ratchet tree is the same as the provisional ratchet tree. Derive the \"psk_secret\" as specified in pre-shared-keys, where the order of PSKs in the derivation corresponds to the order of"}
{"id": "q-en-mls-protocol-d06fd4713ee4fefe311ee787cf3dce73849a62f41ee123c746766768f1471517", "old_text": "from the \"path_secret[n]\" value assigned to the root node. If the \"path\" value is not populated: Define \"commit_secret\" as the all-zero vector of the same length as a \"path_secret\" value would be. Update the confirmed and interim transcript hashes using the new Commit, and generate the new GroupContext.", "comments": "Right now, the vector is described in a couple of ways. This PR standardizes on \"of length \".", "new_text": "from the \"path_secret[n]\" value assigned to the root node. If the \"path\" value is not populated: Define \"commit_secret\" as the all-zero vector of length \"KDF.Nh\" (the same length as a \"path_secret\" value would be). Update the confirmed and interim transcript hashes using the new Commit, and generate the new GroupContext."}
{"id": "q-en-mls-protocol-062cad8d8e5874d2e86e9d6a613eda8f1dc81c2a65fdfa3a4a7dc835184c879d", "old_text": "process the Commit (i.e., not including any members being added or removed by the Commit). If there are multiple proposals that apply to the same leaf, the committer chooses one and includes only that one in the Commit, considering the rest invalid. The committer MUST prefer any Remove", "comments": "Any Update proposal for the sender is redundant if the Commit includes a .\nThis is true, but they're also harmless, since the information in the will overwrite all of their effects. Note that: A requirement to this effect would only be binding on a committer. The receiver of a Commit MUST process all of the referenced proposals, as prescribed in the Commit. A is required if a Commit covers Updates or Removes; if a Commit covers a self-Update, then it must have a . So there's no reasn to condition on \"if a is included\". This seems fine for some advisory text, but not an absolute prohibition.\nThe way I read Section 11.2 is that the receiver should verify that the committer has followed the rules, e.g. that the commit contains all valid proposals, etc., even if that is not explicit in the list of things the receiver of a commit should do. So any requirement on the committer would also affect the receiver. So if we mandate that a committer can't include self-updates if the path is populated, then it would be implicit that the receiver checks that the committer actually followed the rules when processing a commit. I'm not sure I understand your second point. My reasoning would be that if a committer wants to do an update, they would just include a path rather than an update proposal, which is redundant anyway.\nSorry NAME I missed that there was still conversation going on here before merging . On the latter point: All I'm saying is that there is no case where there is both (a) a self-update and (b) no . 
So you're always in the bad scenario :) In our Wire chat, you made a good point that repeated proposals for the same leaf are invalid, and self-Updates are effectively repeats of the . Reopening, will file a fresh PR to adjust the text.", "new_text": "process the Commit (i.e., not including any members being added or removed by the Commit). The sender of a Commit SHOULD NOT include any Update proposals that the sender themselves generated. If the sender generated one or more Update proposals during an epoch, then it SHOULD instead update its leaf and direct path by populating the \"path\" field in the Commit. If there are multiple proposals that apply to the same leaf, the committer chooses one and includes only that one in the Commit, considering the rest invalid. The committer MUST prefer any Remove"}
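The committer-side filtering described in this record can be sketched as follows. This is a hedged sketch: the `Proposal` shape and field names are hypothetical, and only the two rules stated here are modeled (drop the committer's own Updates, since the Commit's "path" supersedes them; for a leaf with multiple proposals keep only one, preferring a Remove over an Update).

```python
# Illustrative sketch of selecting which proposals a committer covers.
from dataclasses import dataclass

@dataclass
class Proposal:
    kind: str    # "update" or "remove"
    leaf: int    # leaf index the proposal applies to
    sender: int  # leaf index of the proposer

def filter_proposals(proposals: list[Proposal], committer: int) -> list[Proposal]:
    # Drop the committer's own Update proposals (redundant with "path").
    candidates = [p for p in proposals
                  if not (p.kind == "update" and p.sender == committer)]
    chosen: dict[int, Proposal] = {}
    for p in candidates:
        prev = chosen.get(p.leaf)
        # At most one proposal per leaf; prefer a Remove over an Update.
        if prev is None or (p.kind == "remove" and prev.kind == "update"):
            chosen[p.leaf] = p
    return list(chosen.values())
```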
{"id": "q-en-mls-protocol-e9b46d68509c3cab0914ca3ee67971eb4213f54b2b24598263f25e47c916e017", "old_text": "on PSKs. Each PSK in MLS has a type that designates how it was provisioned. External PSKs are provided by the application, while recovery and re- init PSKs are derived from the MLS key schedule and used in cases where it is necessary to authenticate a member's participation in a prior group state. In particular, in addition to external PSK types, a PSK derived from within MLS may be used in the following cases: Re-Initialization: If during the lifetime of a group, the group members decide to switch to a more secure ciphersuite or newer protocol version, a PSK can be used to carry entropy from the old group forward into a new group with the desired parameters. Branching: A PSK may be used to bootstrap a subset of current group members into a new group. This applies if a subset of current group members wish to branch based on the current group state. The injection of one or more PSKs into the key schedule is signaled in two ways: 1) as a \"PreSharedKey\" proposal, and 2) in the \"GroupSecrets\" object of a Welcome message sent to new members added in that epoch. On receiving a Commit with a \"PreSharedKey\" proposal or a GroupSecrets object with the \"psks\" field set, the receiving Client", "comments": "This PR reflects further discussion on and replaces . The high-level impacts are: Add clearer, more prescriptive definitions of the re-init and branch operations Use a single syntax for all resumption PSKs, but also have a parameter to distinguish cases Allow an usage to cover any other cases defined by applications Once lands, we should also add some text to the protocol overview to describe these operations. Pasting some text here for usage in that future PR...\nOne question regarding the diagram in the PR description. In the re-init case, shouldn't the \"B\" group start at epoch ? 
At least that's what the PR specifies in the spec.\nI think the re-init diagram is fine because the spec says The in the Welcome message MUST be 1 However in the branch case (and application case) diagram I don't think there should be a ReInit proposal? (At least it is not said in the spec). Also, it is not entirely clear how to construct a in the branch case (or application case): what values should have and ?\nAh, sorry I misread. You are right and the re-init diagram is fine the way it is.\nMinor typo in the text above: \"allow a entropy\" -> \"allow entropy\".\nFrom my reading, it looks like the creator of an epoch must send the exact same message to each new user. However, in some systems, it is more efficient to send a different message to each user, only containing the intended for them, rather than the for all new members. As far as I can tell, no changes are required on the recipient side. I'm running some performance testing, trying to add thousands of members at a time, and the messages are hundreds of kilobytes large. In a federated environment, this could be copied to multiple servers, with the vast majority of that being unnecessary data.\nGood point NAME I filed . In the case, you would do something like send a Welcome per home server?\nThanks. With the way that Matrix currently works, it would probably be best to send a separate message per user, since the current endpoint for sending messages directly to devices requires specifying the message separately for each recipient, and implementations (AFAIK) currently don't do deduplication. 
In the future, it might be possible to do it differently.\ns currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in the usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. some data is important in usage 2 and have no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing about adding the in to help prevent cross-group attacks) How about using different structures for these? // Essentially identical to KeyPackage struct { ProtocolVersion version; CipherSuite ciphersuite; HPKEPublicKey hpkeinitkey; opaque endpointid; Credential credential; Extension extensions; opaque signature; } PrePublishedKeyPackage; struct { HPKEPublicKey hpkekey; opaque endpointid; Credential credential; opaque parenthash; opaque groupid; Extension extensions; opaque signature; } LeafNodeContent; enum { reserved(0), prepublishedkeypackage(1), leafnodecontent(2), (255) } LeafNodeType; struct LeafNode { // This is not included in the signature, but with the \"unambiguous signature\" PR that's not a problem LeafNodeType type; select(URL) { case prepublishedkeypackage: PrePublishedKeyPackage data; case leafnode_content: LeafNodeContent data; } } What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. 
This might also help clarify .\nTies into as well because is an example of an extension that is required and should be a field instead\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion; this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh somehow I missed that part in your original post. Makes sense\nI like this general direction. It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into a KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note that the lifetime is crucial in achieving PCS for pre-published key material.
If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\". And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. 
It seems like the best you can do is basically TOFU:
- When you first get the tree, you can't validate whether the groupid should be in a given leaf or not
- Once you're a member, you require that groupid is sent in Update and UpdatePath.leafkey_package
I could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway. Proposal:
Preparatory PR: Rationalize layout
- Move \"Cryptographic Objects\" to before \"Ratchet Trees\"
- Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage
- Move subsections of \"Key Packages\" to under \"Ratchet Trees\": Parent Hash, Tree Hashes, Update Paths
- Move Group State to its own top-level section
Main PR:
- Change LeafNode to have the required content: HPKE key for TreeKEM, Credential, Parent hash, Capabilities, Extensions, Signature
- Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome, Lifetime, Extensions, Signature
- Add continues to send KeyPackage
- Update and Commit send LeafNode instead of KeyPackage
- Remove and Sender refer to LeafNodeRef instead of KeyPackageRef
We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call:
- General agreement on the approach
- Should sign group_id when possible: include in TBS if we at least have a flag for pre-published / in-group, or we can store it, or we could have independent structs for pre-published / in-group leaves
- Do we need extensions at both levels?
Capabilities / lifetime should probably be promoted to fields\nRight now, we treat the re-init and branch PSK cases as distinct cases, as opposed to just having a general \"export PSK from one group+epoch, insert it into another\" function. What I would call a \"resumption PSK\", following TLS. Even though we have the distinction, the content of the PSK itself is the same: (group ID, epoch, resumption secret). So the only difference is that the PreSharedKeyID has a bit that signals the intent of injecting this resumption PSK. Do we really need that bit? If there's not a compelling need, it would streamline the API and allow for more flexibility if we just had the notion of a resumption PSK.\nKey separation by domain is usually a good idea. If we derive a key, the purpose should be \"baked in\", i.e. included in the derivation to avoid potential collision across protocol functionalities/domains. I'm not sure I understand how that makes the API more complex. If it's really bad, we should consider the trade-off, but if it's not too much trouble, I'd prefer we keep the key separation. Can you elaborate a bit on the API complexity?\nI'm mostly concerned about being too specific in the \"purpose\" definitions, and thus making applications make a choice that doesn't really make sense. For example, take the obvious case of TLS-like resumption, where I take a key from the last epoch of a prior instance of the group and use it as input to a new appearance of the group. Is that a re-init? There wasn't necessarily a ReInit proposal. Is it a branch? \"Branch\" isn't even defined in the spec. Better to just talk about \"resumption\" as the general act of exporting a secret from one group and inserting it into another.\nThe scenarios you describe seem to indicate problems in the spec that don't have anything to do with the derivation of the resumption secret. 
If you want to enable re-init secrets without a proposal, that's fine and we can define in the spec which key to use for that purpose (probably the re-init key). Similarly, if branch isn't well defined in the spec, we should probably define it better. In the end, for everything else that you want to do that's not (properly) defined in the spec, you can just use an exporter secret. I don't see that weakening the domain separation of the re-init and branch key derivation is the right way to solve these problems. Is the code complexity that bad?\nWe could go off and try to define all the possible uses of a resumption PSK, but (a) that's a lot more work, (b) it seems unlikely to succeed, (c) even if you do succeed, you'll end up with a more complicated API for applications (bc N names for the same technical act) and (d) it's not clear to me what the benefit is. When you talk about domain separation here, what is the failure case that this prevents?\nWhat we're trying to prevent is that participants end up with the same keys after executing a different part/functionality of the protocol. If party A thinks, they are doing a branch, while party B thinks they're doing a Re-Init and they end up with the same key, then that's bad. If an implementer wants to do proprietary things, i.e. things that are not described in the spec, they have to use an exporter key and make sure their keys are properly separated.\nIt depends on what you mean by a \"different functionality of the protocol\". The only difference between the branch and reinit cases is that in the reinit case, you quit using the old group. You could also imagine injecting a resumption PSK into an independent group with its own history, whose membership overlaps with the group from which the PSK was drawn. Is the functionality \"linking one group/epoch to a prior group/epoch\" or is it \"reinit / branch / ...\"? The former seems more at the right level of granularity to me, and leads to the approach here and in . 
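The key-separation concern in this exchange (party A thinks branch, party B thinks re-init, both end up with the same key) is exactly what labeled KDF derivation prevents: distinct labels guarantee distinct outputs from the same input secret. The following is a generic single-block HKDF sketch with made-up labels, not the MLS key schedule's actual labels or structure.

```python
import hashlib
import hmac


def hkdf_expand(prk, info, length=32):
    # Minimal single-block HKDF-Expand (RFC 5869), valid for length <= 32 with SHA-256.
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]


epoch_secret = b"\x00" * 32  # stand-in for a shared epoch secret

# Distinct labels give cryptographic domain separation: the same input
# secret yields unrelated outputs for the two purposes, so a "branch"
# key can never collide with a "re-init" key.
reinit_psk = hkdf_expand(epoch_secret, b"mls reinit")   # label is illustrative
branch_psk = hkdf_expand(epoch_secret, b"mls branch")   # label is illustrative
```

The design question in the thread is not whether this works, but whether the protocol needs two labels here or one "resumption" label plus a usage field.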
If you're really hard over here, how about a compromise: Unify the syntax, but add a distinguisher alongside the group ID and epoch, which has specified values for the branch and reinit cases, and which the application MAY set for other cases.\nI see where you're coming from and if you're convinced it is the more useful approach, we can make the trade-off and go with (although we might as well rename the key from \"resumption\" to something like \"group-link\" or something). However, I do want to note that we have a trade-off here. Previously, designating a PSK \"branch\" or \"re-init\" signaled intent on the side of the sender and arriving at the same PSK meant agreement on said intent. For example, using a re-init key would imply that the new group is meant to replace the old group. As far as I understand that is no longer the case with .\nI understand the concern from a cryptographic standpoint, it's nice to have a type that indicates that the group should be shut down. From an engineering perspective, this all feels pretty vague. It's not clear at all how exactly a group will be shut down and a new one will be created, the protocol doesn't give any specific guidance. I don't think this is a huge concern, as long as we don't paint ourselves in a corner when we want to upgrade groups to newer MLS versions in the future.\nEven if we want to have everyone agree that the group should be shut down, the ReInit proposal provides that. Committing the ReInit puts it in the transcript, which means everyone agrees on it. So everyone who exports a PSK from that epoch should know it's a \"dead epoch\". In fact, the vs. distinction seems duplicative here, since you shouldn't be able to get a PSK from a \"live\" epoch or a PSK from a dead one. iff live; iff dead. Now, there may be other sub-cases within live or dead; as noted, not all resumption PSK cases map clearly to or . But it seems premature to put in stub functionality without some taxonomy and requirements. 
I would propose we do for now, and if there turns out to be a need for PSK use cases that really need segregating, it should be a straightforward extension to add another PSKType.\nFair enough. I see that the trade-off is probably ok. I'd be ok with merging . The proposal should indeed ensure that the streams don't get crossed, so that all that remains would be ease of provability at that point. I still feel uncomfortable about not properly separating keys, though. Regarding the last two arguments:
- the \"for now\" solution is a bit moot, since with the WGLC at hand, it's almost certainly not going to change until then
- the extension argument goes the other way as well, as you could just as easily have an extension that implements and gets rid of the need for the distinction between resumption PSKs
If anything, we should opt to be conservative in the spec and allow users to loosen the key separation via an extension at their own risk.\nI understand that it's unclear how the linking of groups is going to take place in a practical setting, but I don't see that that's an argument to weaken the cryptographic basis of the mechanism. If we don't figure out how to do it in practice, then that's fine, as we can just ignore that part of the protocol. If we do, however, we will want to get proper security guarantees whenever we link two groups in this way. Again, I think is ok. But it's because we probably get good enough guarantees and not because the whole mechanism is a secondary concern due to there being unsolved higher level engineering challenges.\nI have recently implemented some of this to get a better feel for it. While we already knew the \"reinit\" case was quite underspecified, I now also realized that there might be another issue with it: A member can shut down a group before a new one is created and there is no way to force the creation of the new group within the protocol.
I think it would be safer to first create a new group (with whatever new parameters) and to tear down the old one in that process. Since this is too complex for the protocol it should be solved at the application layer. In other words, I think is indeed fine.\nCommenting on here, to keep discussion from splitting to the PR. There are a few issues here that could use clarification, and some confusion in the interpretation of the proposals. First - as NAME pointed out, if anything is unclear/underspecified, then that should be clarified and cleaned up vs. removal as the default. From the confusion above, it is clear that some pieces are not sufficiently specified. Second - There is a core distinction between how PSKs are used in MLS vs. TLS that we need to be careful of. In TLS, these are used strictly for the same two members communicating again after a finished session. It is not used again within the same session nor outside of the protocol (exporter keys are for that). Namely, the uses are:
- Group-internal use (resumption by an identical member set of the two parties)
In MLS, these assumptions change by the very fact that the group may have more than 2 members. For the protocol-external case we have exporter keys as well. For PSK use, we have the following:
- Group-internal use (resumption by an identical member set)
- Subgroup-internal use (resumption by a proper subset of members)
- Session-internal use (since continuous KE implies that keying material may be injected)
While the session-internal use is not the focus of this discussion, it has been discussed previously (i.e. proving knowledge of prior group state). This contrasts against TLS, where the handshake takes place only once. Point 2 is notable in that it is impossible for a proper subset of TLS session members to use session information in a separate session (i.e. that would imply only one communication party with no partner). In MLS, this is possible, and we need to be careful about mis-aligning TLS for precedent.
To avoid potential forks, etc., and clarity of key source, any group-subset PSK use really ought to be denoted (currently a 'branch'). NAME is correct that there could be subtle attacks on this if key use is not properly separated. In addition to the above there seems to be confusion on what resumption is linked to (this is consequently something that needs more clarity in the spec.). To be precise in alignment of terms, using a PSK in TLS for communication in a new session is aligned to using a PSK in MLS for a new group, i.e. it being a continuous session. Thus we need to align discussion here between \"alive\" and \"dead\" sessions in TLS to \"alive\" and \"dead\" groups in MLS - not epochs. We can get a reinit PSK from a live group, i.e. if a member wants to force a restart for any reason and instantiates a new group with the reinit secret. This signals to all members joining that group that the prior group should be terminated and not considered secure/alive after the point of reinit use. It is an edge case, but possible. Similarly and more commonly, we could get a branch PSK from a dead group, i.e. if a new (sub-)group wants members to prove knowledge from some prior connection. Thus it follows that we can branch from both \"alive\" and \"dead\" groups. This may be helpful in terms of authentication. I am favorable to NAME 's PSKUsage suggestion, i.e. that if there are other desired uses apart from the above cases the application can provide for those. The \"bleeding\" of key separation across different uses is something that is concerning, and should involve closer scrutiny before inclusion in the specification. If anything, more separation is safer at this stage, vs consolidation.\nNAME Good point about the differences from the two-party case. I think your taxonomy is not quite complete, though. 
You could also imagine cases where the epoch into which the resumption PSK is injected has members that were not in the group in the epoch in which it was extracted (and these members were provided the PSK some other way). For instance, in the session-internal case where there have been some Adds in the meantime. To try to factor this a little differently, it seems like you have at least three independent columns in the truth table here comparing the extraction epoch to the injection epoch: Same group or different group (\"group\" is the term that matches TLS \"session\"; linear sequence of epochs) Some members have been removed Some members have been added So you have at least 8 cases to cover to start with assuming you care about both of those axes (same/different group and membership). It's not clear to me why those are the two axes that one cares about. On the one hand, why is it not important to distinguish ciphersuite changes as well, or changes in extensions? On the other hand, why do we need to agree on membership changes here, given that the key schedule already confirms that we agree on the membership? Basically, I don't understand (a) what cases need to be distinguished, (b) why do these cases need to be distinguished, and (c) why the required separation properties aren't provided by some other mechanism already. I would also note that you don't get the benefits of key separation unless the recipients know how to verify that the usage is right for the injection context. (Otherwise, a malicious PSK sender can send the wrong value.) For the same/different group, that's easy enough to do based on key schedule continuity. But membership is problematic. For example, if you're changing the ciphersuite between extraction and injection, then all of the KeyPackages and Credentials will be different -- what do you compare to verify equality? 
I can update to add PSKUsage, but the cases we put there need to at least be enforceable, and ideally fit into some principled taxonomy (even if all the cases in the taxonomy aren't enforceable).\nFor the record, distinguishing on ciphersuite changes was brought up earlier in the WG - I am in support, if bringing it back onto the discussion table is an option... Separately though, for most other changes during the lifespan of a group, proposals basically perform a similar action for distinguishing between keys and separating changes from epoch to epoch. The difference with the PSK is there is no \"one way\" to interpret a PSK's use (as compared to e.g. a ciphersuite change). This also means that it can be more challenging to provably separate out misuse. [BTW: I am assuming in this discussion you mean ciphersuite change in the context of the listed components, since reinit is actually used for ciphersuite key changes per line 1598.] You are quite correct that there is a group-external use, i.e. the \"different group\" case. However, anything outside of the group/session is an external/exporter key use. Namely, the MLS protocol spec is about a single session and anything that is used in other groups/sessions is an external key (similar to TLS export from session). The distinction between PSK usage and external/exporter keys (whether the latter is used for another application or injected into a different group entirely) is that PSK use is within the analysis bounds of the given session. To this point, there is a gray area on item 2 (subgroup-internal use / resumption by a proper subset of members). This is probably the only one I could see an argument for also pushing as an exporter key use.
The issue of some members added between PSK usage is really reducible to two cases: a) a member was already in the set and the PSK use appears as item 1 or 3 (group-internal use or session-internal use) or b) a member was added and the PSK offers no security beyond that of an exporter key injection. The potential value proposition is for existing members, there is no loss for a new member, and it is better to offer the value where possible than to deny it based on the weakest link. The issue of some members being removed between PSK usage is similarly reducible. If the ejected member does not have the current group state then knowing a PSK should not provide value. It is similar to knowing an exporter key but having no place to export it into. The goal of PSK usage is to strengthen an existent system vs. being the sole point of security for that. If our KDF is done correctly, then combining current group state and the PSK will not be problematic (former unknown but latter known to the ejected member).\nSo NAME and I had some offline discussion about this, and it led me to the conclusion that what I'm really grumpy about is a need for more clarity in the branch and reinit definitions. We don't need a full taxonomy of uses if (a) we can say \"use this PSK usage for this specific protocol feature\", and (b) we have some escape hatch that allows for usages of resumption PSKs outside those specific protocol features. I have filed reflecting this theory. Hopefully folks find it more agreeable.\nThat seems like a well-reasoned solution.\nFixed with and\nLooks good! Although I'm not sure in the branching keys we want to have the exact same key packages as in the main group. I think this looks fine generally (just one comment)", "new_text": "on PSKs. Each PSK in MLS has a type that designates how it was provisioned.
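Because injected PSKs are folded into the key schedule sequentially, the order in which members combine them matters: two members who see the same PSKs in different orders derive different secrets. The following toy fold over HKDF-Extract illustrates that order sensitivity; it is an illustrative sketch, not the spec's actual extract/expand labels or key-schedule structure.

```python
import hashlib
import hmac


def extract(salt, ikm):
    # HKDF-Extract (RFC 5869): HMAC(salt, ikm) combines a chaining value
    # with the next PSK secret.
    return hmac.new(salt, ikm, hashlib.sha256).digest()


def combine_psks(psk_secrets):
    # Fold the ordered list of PSK secrets into a single input secret.
    # Reordering the list yields a different result, which is why all
    # parties must agree on the PSK order.
    secret = b"\x00" * 32
    for psk in psk_secrets:
        secret = extract(secret, psk)
    return secret
```

For example, `combine_psks([a, b])` and `combine_psks([b, a])` disagree, so existing members and newly welcomed members have to be told the same PSK list in the same order.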
External PSKs are provided by the application, while resumption PSKs are derived from the MLS key schedule and used in cases where it is necessary to authenticate a member's participation in a prior epoch. The injection of one or more PSKs into the key schedule is signaled in two ways: Existing members are informed via PreSharedKey proposals covered by a Commit, and new members added in the Commit are informed via GroupSecrets object in the Welcome message corresponding to the Commit. To ensure that existing and new members compute the same PSK input to the key schedule, the Commit and GroupSecrets objects MUST indicate the same set of PSKs, in the same order. On receiving a Commit with a \"PreSharedKey\" proposal or a GroupSecrets object with the \"psks\" field set, the receiving Client"}
{"id": "q-en-mls-protocol-e9b46d68509c3cab0914ca3ee67971eb4213f54b2b24598263f25e47c916e017", "old_text": "8.7. The main MLS key schedule provides a \"resumption_secret\" which can provide extra security in some cross-group operations. The application SHOULD specify an upper limit on the number of past epochs for which the \"resumption_secret\" may be stored. There are two ways in which a \"resumption_secret\" can be used: to re- initialize the group with different parameters, or to create a sub- group of an existing group as detailed in pre-shared-keys. Resumption keys are distinguished from exporter keys in that they have specific use inside the MLS protocol, whereas the use of exporter secrets may be decided by an external application. They are thus derived separately to avoid key material reuse. 8.8.", "comments": "This PR reflects further discussion on and replaces . The high-level impacts are: Add clearer, more prescriptive definitions of the re-init and branch operations Use a single syntax for all resumption PSKs, but also have a parameter to distinguish cases Allow an usage to cover any other cases defined by applications Once lands, we should also add some text to the protocol overview to describe these operations. Pasting some text here for usage in that future PR...\nOne question regarding the diagram in the PR description. In the re-init case, shouldn't the \"B\" group start at epoch ? At least that's what the PR specifies in the spec.\nI think the re-init diagram is fine because the spec says The in the Welcome message MUST be 1 However in the branch case (and application case) diagram I don't think there should be a ReInit proposal? (At least it is not said in the spec). Also, it is not entirely clear how to construct a in the branch case (or application case): what values should have and ?\nAh, sorry I misread. 
You are right and the re-init diagram is fine the way it is.\nMinor typo in the text above: \"allow a entropy\" -> \"allow entropy\".\nFrom my reading, it looks like the creator of an epoch must send the exact same message to each new user. However, in some systems, it is more efficient to send a different message to each user, only containing the intended for them, rather than the for all new members. As far as I can tell, no changes are required on the recipient side. I'm running some performance testing, trying to add thousands of members at a time, and the messages are hundreds of kilobytes large. In a federated environment, this could be copied to multiple servers, with the vast majority of that being unnecessary data.\nGood point NAME I filed . In the case, you would do something like send a Welcome per home server?\nThanks. With the way that Matrix currently works, it would probably be best to send a separate message per user, since the current endpoint for sending messages directly to devices requires specifying the message separately for each recipient, and implementations (AFAIK) currently don't do deduplication. In the future, it might be possible to do it differently.\ns currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in the usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. some data is important in usage 2 and have no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing about adding the in to help prevent cross-group attacks) How about using different structures for these? 
// Essentially identical to KeyPackage
struct {
    ProtocolVersion version;
    CipherSuite ciphersuite;
    HPKEPublicKey hpkeinitkey;
    opaque endpointid;
    Credential credential;
    Extension extensions;
    opaque signature;
} PrePublishedKeyPackage;

struct {
    HPKEPublicKey hpkekey;
    opaque endpointid;
    Credential credential;
    opaque parenthash;
    opaque groupid;
    Extension extensions;
    opaque signature;
} LeafNodeContent;

enum {
    reserved(0),
    prepublishedkeypackage(1),
    leafnodecontent(2),
    (255)
} LeafNodeType;

struct LeafNode {
    // This is not included in the signature, but with the \"unambiguous signature\" PR that's not a problem
    LeafNodeType type;
    select(URL) {
        case prepublishedkeypackage: PrePublishedKeyPackage data;
        case leafnode_content: LeafNodeContent data;
    }
}

What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well because is an example of an extension that is required and should be a field instead\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion; this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh somehow I missed that part in your original post. Makes sense\nI like this general direction.
It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into a KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\". 
And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU: when you first get the tree, you can't validate whether the groupid should be in a given leaf or not; once you're a member, you require that groupid is sent in Update and UpdatePath.leafkey_package. I could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway. 
Proposal:

Preparatory PR: Rationalize layout
- Move \"Cryptographic Objects\" to before \"Ratchet Trees\"
- Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage
- Move subsections of \"Key Packages\" to under \"Ratchet Trees\": Parent Hash, Tree Hashes, Update Paths
- Move Group State to its own top-level section

Main PR:
- Change LeafNode to have the required content: HPKE key for TreeKEM, Credential, Parent hash, Capabilities, Extensions, Signature
- Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome, Lifetime, Extensions, Signature
- Add continues to send KeyPackage
- Update and Commit send LeafNode instead of KeyPackage
- Remove and Sender refer to LeafNodeRef instead of KeyPackageRef

We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call:
- General agreement on the approach
- Should sign group_id when possible
- Include in TBS if we at least have a flag for pre-published / in-group; or we can store it; or we could have independent structs for pre-published / in-group leaves
- Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields
\nRight now, we treat the re-init and branch PSK cases as distinct cases, as opposed to just having a general \"export PSK from one group+epoch, insert it into another\" function. What I would call a \"resumption PSK\", following TLS. Even though we have the distinction, the content of the PSK itself is the same: (group ID, epoch, resumption secret). So the only difference is that the PreSharedKeyID has a bit that signals the intent of injecting this resumption PSK. Do we really need that bit? If there's not a compelling need, it would streamline the API and allow for more flexibility if we just had the notion of a resumption PSK.\nKey separation by domain is usually a good idea. If we derive a key, the purpose should be \"baked in\", i.e. 
included in the derivation to avoid potential collision across protocol functionalities/domains. I'm not sure I understand how that makes the API more complex. If it's really bad, we should consider the trade-off, but if it's not too much trouble, I'd prefer we keep the key separation. Can you elaborate a bit on the API complexity?\nI'm mostly concerned about being too specific in the \"purpose\" definitions, and thus making applications make a choice that doesn't really make sense. For example, take the obvious case of TLS-like resumption, where I take a key from the last epoch of a prior instance of the group and use it as input to a new appearance of the group. Is that a re-init? There wasn't necessarily a ReInit proposal. Is it a branch? \"Branch\" isn't even defined in the spec. Better to just talk about \"resumption\" as the general act of exporting a secret from one group and inserting it into another.\nThe scenarios you describe seem to indicate problems in the spec that don't have anything to do with the derivation of the resumption secret. If you want to enable re-init secrets without a proposal, that's fine and we can define in the spec which key to use for that purpose (probably the re-init key). Similarly, if branch isn't well defined in the spec, we should probably define it better. In the end, for everything else that you want to do that's not (properly) defined in the spec, you can just use an exporter secret. I don't see that weakening the domain separation of the re-init and branch key derivation is the right way to solve these problems. Is the code complexity that bad?\nWe could go off and try to define all the possible uses of a resumption PSK, but (a) that's a lot more work, (b) it seems unlikely to succeed, (c) even if you do succeed, you'll end up with a more complicated API for applications (bc N names for the same technical act) and (d) it's not clear to me what the benefit is. 
When you talk about domain separation here, what is the failure case that this prevents?\nWhat we're trying to prevent is that participants end up with the same keys after executing a different part/functionality of the protocol. If party A thinks, they are doing a branch, while party B thinks they're doing a Re-Init and they end up with the same key, then that's bad. If an implementer wants to do proprietary things, i.e. things that are not described in the spec, they have to use an exporter key and make sure their keys are properly separated.\nIt depends on what you mean by a \"different functionality of the protocol\". The only difference between the branch and reinit cases is that in the reinit case, you quit using the old group. You could also imagine injecting a resumption PSK into an independent group with its own history, whose membership overlaps with the group from which the PSK was drawn. Is the functionality \"linking one group/epoch to a prior group/epoch\" or is it \"reinit / branch / ...\"? The former seems more at the right level of granularity to me, and leads to the approach here and in . If you're really hard over here, how about a compromise: Unify the syntax, but add a distinguisher alongside the group ID and epoch, which has specified values for the branch and reinit cases, and which the application MAY set for other cases.\nI see where you're coming from and if you're convinced it is the more useful approach, we can make the trade-off and go with (although we might as well rename the key from \"resumption\" to something like \"group-link\" or something). However, I do want to note that we have a trade-off here. Previously, designating a PSK \"branch\" or \"re-init\" signaled intent on the side of the sender and arriving at the same PSK meant agreement on said intent. For example, using a re-init key would imply that the new group is meant to replace the old group. 
As far as I understand that is no longer the case with .\nI understand the concern from a cryptographic standpoint, it's nice to have a type that indicates that the group should be shut down. From an engineering perspective, this all feels pretty vague. It's not clear at all how exactly a group will be shut down and a new one will be created, the protocol doesn't give any specific guidance. I don't think this is a huge concern, as long as we don't paint ourselves in a corner when we want to upgrade groups to newer MLS versions in the future.\nEven if we want to have everyone agree that the group should be shut down, the ReInit proposal provides that. Committing the ReInit puts it in the transcript, which means everyone agrees on it. So everyone who exports a PSK from that epoch should know it's a \"dead epoch\". In fact, the vs. distinction seems duplicative here, since you shouldn't be able to get a PSK from a \"live\" epoch or a PSK from a dead one. iff live; iff dead. Now, there may be other sub-cases within live or dead; as noted, not all resumption PSK cases map clearly to or . But it seems premature to put in stub functionality without some taxonomy and requirements. I would propose we do for now, and if there turns out to be a need to be PSK use cases that really need segregating, it should be straightforward extension to add another PSKType.\nFair enough. I see that the trade-off is probably ok. I'd be ok with merging . The proposal should indeed ensure that the streams don't get crossed, so that all that remains would be ease of provability at that point. I still feel uncomfortable about not properly separating keys, though. 
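The compromise floated above — one unified resumption PSK syntax, with a usage distinguisher mixed into the derivation alongside the group ID and epoch — can be sketched as follows. This is a minimal illustration, not the spec's derivation: the label string, context encoding, and function names here are assumptions.

```python
import hashlib
import hmac

def hkdf_expand_label(secret: bytes, label: str, context: bytes, length: int = 32) -> bytes:
    # One block of HKDF-Expand with a label, in the spirit of MLS's
    # ExpandWithLabel; the "mls10 " prefix and encoding are illustrative.
    info = length.to_bytes(2, "big") + ("mls10 " + label).encode() + context
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]

def resumption_psk(resumption_secret: bytes, group_id: bytes, epoch: int, usage: str) -> bytes:
    # Bind the usage distinguisher ("reinit", "branch", or an
    # application-defined value) into the derivation together with the
    # group ID and epoch, so different usages can never yield the same key.
    context = group_id + epoch.to_bytes(8, "big") + usage.encode()
    return hkdf_expand_label(resumption_secret, "resumption psk", context)

secret = b"\x01" * 32
gid = b"group-1"
reinit_psk = resumption_psk(secret, gid, 7, "reinit")
branch_psk = resumption_psk(secret, gid, 7, "branch")
assert reinit_psk != branch_psk  # same secret and epoch, different domains
```

With this shape, a party that thinks it is branching and a party that thinks it is re-initializing cannot arrive at the same PSK, which is the domain-separation property argued for above, while the application still gets an escape hatch for other usages.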
Regarding the last two arguments: the \"for now\" solution is a bit moot, since with the WGLC at hand, it's almost certainly not going to change until then; the extension argument goes the other way as well, as you could just as easily have an extension that implements and gets rid of the need for the distinction between resumption PSKs. If anything, we should opt to be conservative in the spec and allow users to loosen the key separation via an extension at their own risk.\nI understand that it's unclear how the linking of groups is going to take place in a practical setting, but I don't see that that's an argument to weaken the cryptographic basis of the mechanism. If we don't figure out how to do it in practice, then that's fine, as we can just ignore that part of the protocol. If we do, however, we will want to get proper security guarantees whenever we link two groups in this way. Again, I think is ok. But it's because we probably get good enough guarantees and not because the whole mechanism is a secondary concern due to there being unsolved higher level engineering challenges.\nI have recently implemented some of this to get a better feel for it. While we already knew the \"reinit\" case was quite underspecified, I now also realized that there might be another issue with it: A member can shut down a group before a new one is created and there is no way to force the creation of the new group within the protocol. I think it would be safer to first create a new group (with whatever new parameters) and to tear down the old one in that process. Since this is too complex for the protocol it should be solved at the application layer. In other words, I think is indeed fine.\nCommenting on here, to keep discussion from splitting to the PR. There are a few issues here that could use clarity, and some confusion in the interpretation of the proposals. First - as NAME pointed out, if anything is unclear/underspecified, then that should be clarified and cleaned up vs. 
removal as the default. From the confusion above, it is clear that some pieces are not sufficiently specified. Second - There is a core distinction between how PSKs are used in MLS vs. TLS that we need to be careful of. In TLS, these are used strictly for the same two members communicating again after a finished session. It is not used again within the same session nor outside of the protocol (exporter keys are for that). Namely, the uses are: Group-internal use (resumption by an identical member set of the two parties) In MLS, these assumptions change by the very fact that the group may have more than 2 members. For the protocol-external case we have exporter keys as well. For PSK use, we have the following:
1. Group-internal use (resumption by an identical member set)
2. Subgroup-internal use (resumption by a proper subset of members)
3. Session-internal use (since continuous KE implies that keying material may be injected)
While the session-internal use is not the focus of this discussion, it has been previously (i.e. proving knowledge of prior group state). This contrasts against TLS, where the handshake takes place only once. Point 2 is notable in that it is impossible for a proper subset of TLS session members to use session information in a separate session (i.e. that would imply only one communication party with no partner). In MLS, this is possible, and we need to be careful about mis-aligning TLS for precedent. To avoid potential forks, etc., and clarity of key source, any group-subset PSK use really ought to be denoted (currently a 'branch'). NAME is correct that there could be subtle attacks on this if key use is not properly separated. In addition to the above there seems to be confusion on what resumption is linked to (this is consequently something that needs more clarity in the spec.). To be precise in alignment of terms, using a PSK in TLS for communication in a new session is aligned to using a PSK in MLS for a new group, i.e. it being a continuous session. 
Thus we need to align discussion here between \"alive\" and \"dead\" sessions in TLS to \"alive\" and \"dead\" groups in MLS - not epochs. We can get a reinit PSK from a live group, i.e. if a member wants to force a restart for any reason and instantiates a new group with the reinit secret. This signals to all members joining that group that the prior group should be terminated and not considered secure/alive after the point of reinit use. It is an edge case, but possible. Similarly and more commonly, we could get a branch PSK from a dead group, i.e. if a new (sub-)group wants members to prove knowledge from some prior connection. Thus it follows that we can branch from both \"alive\" and \"dead\" groups. This may be helpful in terms of authentication. I am favorable to NAME 's PSKUsage suggestion, i.e. that if there are other desired uses apart from the above cases the application can provide for those. The \"bleeding\" of key separation across different uses is something that is concerning, and should involve closer scrutiny before inclusion in the specification. If anything, more separation is safer at this stage, vs consolidation.\nNAME Good point about the differences from the two-party case. I think your taxonomy is not quite complete, though. You could also imagine cases where the epoch into which the resumption PSK is injected has members that were not in the group in the epoch in which it was extracted (and these members were provided the PSK some other way). For instance, in the session-internal case where there have been some Adds in the meantime. 
To try to factor this a little differently, it seems like you have at least three independent columns in the truth table here comparing the extraction epoch to the injection epoch:
1. Same group or different group (\"group\" is the term that matches TLS \"session\"; linear sequence of epochs)
2. Some members have been removed
3. Some members have been added
So you have at least 8 cases to cover to start with assuming you care about both of those axes (same/different group and membership). It's not clear to me why those are the two axes that one cares about. On the one hand, why is it not important to distinguish ciphersuite changes as well, or changes in extensions? On the other hand, why do we need to agree on membership changes here, given that the key schedule already confirms that we agree on the membership? Basically, I don't understand (a) what cases need to be distinguished, (b) why these cases need to be distinguished, and (c) why the required separation properties aren't provided by some other mechanism already. I would also note that you don't get the benefits of key separation unless the recipients know how to verify that the usage is right for the injection context. (Otherwise, a malicious PSK sender can send the wrong value.) For the same/different group, that's easy enough to do based on key schedule continuity. But membership is problematic. For example, if you're changing the ciphersuite between extraction and injection, then all of the KeyPackages and Credentials will be different -- what do you compare to verify equality? I can update to add PSKUsage, but the cases we put there need to at least be enforceable, and ideally fit into some principled taxonomy (even if all the cases in the taxonomy aren't enforceable).\nFor the record, distinguishing on ciphersuite changes was brought up earlier in the WG - I am in support of bringing it back to the discussion table if that is an option... 
Separately though, for most other changes during the lifespan of a group, proposals basically perform a similar action for distinguishing between keys and separating changes from epoch to epoch. The difference with the PSK is there is no \"one way\" to interpret a PSK's use (as compared to e.g. a ciphersuite change). This also means that it can be more challenging to provably separate out misuse. [BTW: I am assuming in this discussion you mean ciphersuite change in the context of the listed components, since reinit is actually used for ciphersuite key changes per line 1598.] You are quite correct that there is a group-external use, i.e. the \"different group\" case. However, anything outside of the group/session is an external/exporter key use. Namely, the MLS protocol spec is about a single session and anything that is used in other groups/sessions is an external key (similar to TLS export from session). The distinction between PSK usage and external/exporter keys (whether the latter is used for another application or injected into a different group entirely) is that PSK use is within the analysis bounds of the given session. To this point, there is a gray area on item 2 (subgroup-internal use / resumption by a proper subset of members). This is probably the only one I could see an argument for also pushing as an exporter key use. The issue of some members added between PSK usage is really reducible to two cases: a) a member was already in the set and the PSK use appears as item 1 or 3 (group-internal use or session-internal use) or b) a member was added and the PSK offers no security beyond that of an exporter key inject. The potential value proposition is for existing members, there is no loss for a new member, and it is better to offer the value where possible than to deny it based on the weakest link. The issue of some members being removed between PSK usage is similarly reducible. 
If the ejected member does not have the current group state then knowing a PSK should not provide value. It is similar to knowing an exporter key but having no place to export it into. The goal of PSK usage is to strengthen an existent system vs. being the sole point of security for that. If our KDF is done correctly, then combining current group state and the PSK will not be problematic (former unknown but latter known to the ejected member).\nSo NAME and I had some offline discussion about this, and it led me to the conclusion that what I'm really grumpy about is a need for more clarity in the branch and reinit definitions. We don't need a full taxonomy of uses if (a) we can say \"use this PSK usage for this specific protocol feature\", and (b) we have some escape hatch that allows for usages of resumption PSKs outside those specific protocol features. I have filed reflecting this theory. Hopefully folks find it more agreeable.\nThat seems like a well-reasoned solution.\nFixed with and\nLooks good! Although I'm not sure in the branching keys we want to have the exact same key packages as in the main group. I think this looks fine generally (just one comment)", "new_text": "8.7. The main MLS key schedule provides a \"resumption_secret\" that is used as a PSK to inject entropy from one epoch into another. This functionality is used in the reinitialization and branching processes described in reinitialization and sub-group-branching, but may be used by applications for other purposes. Some uses of resumption PSKs might call for the use of PSKs from historical epochs. The application SHOULD specify an upper limit on the number of past epochs for which the \"resumption_secret\" may be stored. 8.8."}
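The replacement spec text above says the application SHOULD cap how many past epochs' \"resumption_secret\" values are retained. A minimal sketch of one way an implementation might enforce such a bound — the class name, limit, and eviction policy here are illustrative assumptions, not a spec mechanism:

```python
from collections import OrderedDict
from typing import Optional

class ResumptionSecretStore:
    """Retains resumption_secret values for at most max_epochs past epochs,
    evicting the oldest epoch first (illustrative policy, not from the spec)."""

    def __init__(self, max_epochs: int):
        self.max_epochs = max_epochs
        self._secrets: "OrderedDict[int, bytes]" = OrderedDict()

    def insert(self, epoch: int, secret: bytes) -> None:
        self._secrets[epoch] = secret
        while len(self._secrets) > self.max_epochs:
            # Drop the oldest epoch's secret; a real implementation should
            # also scrub the key material from memory.
            self._secrets.popitem(last=False)

    def get(self, epoch: int) -> Optional[bytes]:
        return self._secrets.get(epoch)

store = ResumptionSecretStore(max_epochs=3)
for e in range(5):
    store.insert(e, bytes([e]) * 32)
assert store.get(0) is None             # evicted: outside the retention window
assert store.get(4) == bytes([4]) * 32  # most recent epoch still available
```

Bounding retention this way limits how far back a resumption PSK can reach, which also limits the forward-secrecy exposure of keeping old secrets around.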
{"id": "q-en-mls-protocol-e9b46d68509c3cab0914ca3ee67971eb4213f54b2b24598263f25e47c916e017", "old_text": "Transmit the Welcome message to the other new members The recipient of a Welcome message processes it as described in joining-via-welcome-message. In principle, the above process could be streamlined by having the creator directly create a tree and choose a random value for first epoch's epoch secret. We follow the steps above because it removes unnecessary choices, by which, for example, bad randomness could be introduced. The only choices the creator makes here are its own KeyPackage, the leaf secret from which the Commit is built, and the intermediate key pairs along the direct path to the root. 10.1.", "comments": "This PR reflects further discussion on and replaces . The high-level impacts are: Add clearer, more prescriptive definitions of the re-init and branch operations Use a single syntax for all resumption PSKs, but also have a parameter to distinguish cases Allow an usage to cover any other cases defined by applications Once lands, we should also add some text to the protocol overview to describe these operations. Pasting some text here for usage in that future PR...\nOne question regarding the diagram in the PR description. In the re-init case, shouldn't the \"B\" group start at epoch ? At least that's what the PR specifies in the spec.\nI think the re-init diagram is fine because the spec says The in the Welcome message MUST be 1 However in the branch case (and application case) diagram I don't think there should be a ReInit proposal? (At least it is not said in the spec). Also, it is not entirely clear how to construct a in the branch case (or application case): what values should have and ?\nAh, sorry I misread. 
You are right and the re-init diagram is fine the way it is.\nMinor typo in the text above: \"allow a entropy\" -> \"allow entropy\".\nFrom my reading, it looks like the creator of an epoch must send the exact same message to each new user. However, in some systems, it is more efficient to send a different message to each user, only containing the intended for them, rather than the for all new members. As far as I can tell, no changes are required on the recipient side. I'm running some performance testing, trying to add thousands of members at a time, and the messages are hundreds of kilobytes large. In a federated environment, this could be copied to multiple servers, with the vast majority of that being unnecessary data.\nGood point NAME I filed . In the case, you would do something like send a Welcome per home server?\nThanks. With the way that Matrix currently works, it would probably be best to send a separate message per user, since the current endpoint for sending messages directly to devices requires specifying the message separately for each recipient, and implementations (AFAIK) currently don't do deduplication. In the future, it might be possible to do it differently.\ns currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in the usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. some data is important in usage 2 and have no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing about adding the in to help prevent cross-group attacks) How about using different structures for these? 
// Essentially identical to KeyPackage struct { ProtocolVersion version; CipherSuite ciphersuite; HPKEPublicKey hpkeinitkey; opaque endpointid; Credential credential; Extension extensions; opaque signature; } PrePublishedKeyPackage; struct { HPKEPublicKey hpkekey; opaque endpointid; Credential credential; opaque parenthash; opaque groupid; Extension extensions; opaque signature; } LeafNodeContent; enum { reserved(0), prepublishedkeypackage(1), leafnodecontent(2), (255) } LeafNodeType; struct LeafNode { // This is not included in the signature, but with the \"unambiguous signature\" PR that's not a problem LeafNodeType type; select(URL) { case prepublishedkeypackage: PrePublishedKeyPackage data; case leafnode_content: LeafNodeContent data; } } What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well because is an example of an extension that is required and should be a field instead\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice don't have to do the conversion, this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh somehow I missed that part in your original post. Makes sense\nI like this general direction. 
It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note, that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\". 
And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU: When you first get the tree, you can't validate whether the groupid should be in a given leaf or not Once you're a member, you require that groupid is sent in Update and UpdatePath.leafkey_package I could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to worked through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway. 
Proposal: Preparatory PR: Rationalize layout Move \"Cryptograhic Objects\" to before \"Ratchet Trees\" Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage Move subsections of \"Key Packages\" to under \"Ratchet Trees\" Parent Hash Tree Hashes Update Paths Move Group State to its own top-level section Main PR: Change LeafNode to have the required content: HPKE key for TreeKEM Credential Parent hash Capabilities Extensions Signature Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome Lifetime Extensions Signature Add continues to send KeyPackage Update and Commit send LeafNode instead of KeyPackage Remove and Sender refer to LeafNodeRef instead of KeyPackageRef We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call: General agreement on the approach Should sign group_id when possible Include in TBS if we at least have a flag for pre-published / in-group Or we can store it Or we could have independent structs for pre-published / in-group leaves Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields\nRight now, we treat the re-init and branch PSK cases as distinct cases, as opposed to just having a general \"export PSK from one group+epoch, insert it into another\" function. What I would call a \"resumption PSK\", following TLS. Even though we have the distinction, the content of the PSK itself is the same: (group ID, epoch, resumption secret). So the only difference is that the PreSharedKeyID has a bit that signals the intent of injecting this resumption PSK. Do we really need that bit? If there's not a compelling need, it would streamline the API and allow for more flexibility if we just had the notion of a resumption PSK.\nKey separation by domain is usually a good idea. If we derive a key, the purpose should be \"baked in\", i.e. 
included in the derivation to avoid potential collision across protocol functionalities/domains. I'm not sure I understand how that makes the API more complex. If it's really bad, we should consider the trade-off, but if it's not too much trouble, I'd prefer we keep the key separation. Can you elaborate a bit on the API complexity?\nI'm mostly concerned about being too specific in the \"purpose\" definitions, and thus making applications make a choice that doesn't really make sense. For example, take the obvious case of TLS-like resumption, where I take a key from the last epoch of a prior instance of the group and use it as input to a new appearance of the group. Is that a re-init? There wasn't necessarily a ReInit proposal. Is it a branch? \"Branch\" isn't even defined in the spec. Better to just talk about \"resumption\" as the general act of exporting a secret from one group and inserting it into another.\nThe scenarios you describe seem to indicate problems in the spec that don't have anything to do with the derivation of the resumption secret. If you want to enable re-init secrets without a proposal, that's fine and we can define in the spec which key to use for that purpose (probably the re-init key). Similarly, if branch isn't well defined in the spec, we should probably define it better. In the end, for everything else that you want to do that's not (properly) defined in the spec, you can just use an exporter secret. I don't see that weakening the domain separation of the re-init and branch key derivation is the right way to solve these problems. Is the code complexity that bad?\nWe could go off and try to define all the possible uses of a resumption PSK, but (a) that's a lot more work, (b) it seems unlikely to succeed, (c) even if you do succeed, you'll end up with a more complicated API for applications (bc N names for the same technical act) and (d) it's not clear to me what the benefit is. 
When you talk about domain separation here, what is the failure case that this prevents?\nWhat we're trying to prevent is that participants end up with the same keys after executing a different part/functionality of the protocol. If party A thinks, they are doing a branch, while party B thinks they're doing a Re-Init and they end up with the same key, then that's bad. If an implementer wants to do proprietary things, i.e. things that are not described in the spec, they have to use an exporter key and make sure their keys are properly separated.\nIt depends on what you mean by a \"different functionality of the protocol\". The only difference between the branch and reinit cases is that in the reinit case, you quit using the old group. You could also imagine injecting a resumption PSK into an independent group with its own history, whose membership overlaps with the group from which the PSK was drawn. Is the functionality \"linking one group/epoch to a prior group/epoch\" or is it \"reinit / branch / ...\"? The former seems more at the right level of granularity to me, and leads to the approach here and in . If you're really hard over here, how about a compromise: Unify the syntax, but add a distinguisher alongside the group ID and epoch, which has specified values for the branch and reinit cases, and which the application MAY set for other cases.\nI see where you're coming from and if you're convinced it is the more useful approach, we can make the trade-off and go with (although we might as well rename the key from \"resumption\" to something like \"group-link\" or something). However, I do want to note that we have a trade-off here. Previously, designating a PSK \"branch\" or \"re-init\" signaled intent on the side of the sender and arriving at the same PSK meant agreement on said intent. For example, using a re-init key would imply that the new group is meant to replace the old group. 
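As a toy illustration of the domain separation being debated here (this is not the MLS key schedule; HMAC-SHA256 stands in for the HKDF-based derivation, and the context encoding is made up), baking a usage label into the derivation means a party deriving a "branch" key and a party deriving a "reinit" key can never arrive at the same key:

```python
import hashlib
import hmac

def derive_psk(resumption_secret: bytes, group_id: bytes,
               epoch: int, usage: str) -> bytes:
    # Mix the purpose into the derivation context so that keys derived
    # for different protocol functionalities are separated by construction.
    context = b"|".join([usage.encode(), group_id, epoch.to_bytes(8, "big")])
    return hmac.new(resumption_secret, context, hashlib.sha256).digest()
```

With this shape, a unified PreSharedKeyID only needs to carry (group_id, epoch, usage); two parties agreeing on the derived key implies agreement on the intended usage.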
As far as I understand that is no longer the case with .\nI understand the concern from a cryptographic standpoint, it's nice to have a type that indicates that the group should be shut down. From an engineering perspective, this all feels pretty vague. It's not clear at all how exactly a group will be shut down and a new one will be created, the protocol doesn't give any specific guidance. I don't think this is a huge concern, as long as we don't paint ourselves in a corner when we want to upgrade groups to newer MLS versions in the future.\nEven if we want to have everyone agree that the group should be shut down, the ReInit proposal provides that. Committing the ReInit puts it in the transcript, which means everyone agrees on it. So everyone who exports a PSK from that epoch should know it's a "dead epoch". In fact, the vs. distinction seems duplicative here, since you shouldn't be able to get a PSK from a "live" epoch or a PSK from a dead one. iff live; iff dead. Now, there may be other sub-cases within live or dead; as noted, not all resumption PSK cases map clearly to or . But it seems premature to put in stub functionality without some taxonomy and requirements. I would propose we do for now, and if there turns out to be a need for PSK use cases that really need segregating, it should be a straightforward extension to add another PSKType.\nFair enough. I see that the trade-off is probably ok. I'd be ok with merging . The proposal should indeed ensure that the streams don't get crossed, so that all that remains would be ease of provability at that point. I still feel uncomfortable about not properly separating keys, though.
Regarding the last two arguments: the "for now" solution is a bit moot, since with the WGLC at hand, it's almost certainly not going to change until then; the extension argument goes the other way as well, as you could just as easily have an extension that implements and gets rid of the need for the distinction between resumption PSKs. If anything, we should opt to be conservative in the spec and allow users to loosen the key separation via an extension at their own risk.\nI understand that it's unclear how the linking of groups is going to take place in a practical setting, but I don't see that that's an argument to weaken the cryptographic basis of the mechanism. If we don't figure out how to do it in practice, then that's fine, as we can just ignore that part of the protocol. If we do, however, we will want to get proper security guarantees whenever we link two groups in this way. Again, I think is ok. But it's because we probably get good enough guarantees and not because the whole mechanism is a secondary concern due to there being unsolved higher level engineering challenges.\nI have recently implemented some of this to get a better feel for it. While we already knew the "reinit" case was quite underspecified, I now also realized that there might be another issue with it: A member can shut down a group before a new one is created and there is no way to force the creation of the new group within the protocol. I think it would be safer to first create a new group (with whatever new parameters) and to tear down the old one in that process. Since this is too complex for the protocol it should be solved at the application layer. In other words, I think is indeed fine.\nCommenting on here, to keep discussion from splitting to the PR. There are a few issues here that could use clarification, and some confusion in the interpretation of the proposals. First - as NAME pointed out, if anything is unclear/underspecified, then that should be clarified and cleaned up vs.
removal as the default. From the confusion above, it is clear that some pieces are not sufficiently specified. Second - There is a core distinction between how PSKs are used in MLS vs. TLS that we need to be careful of. In TLS, these are used strictly for the same two members communicating again after a finished session. They are not used again within the same session nor outside of the protocol (exporter keys are for that). Namely, the uses are: Group-internal use (resumption by an identical member set of the two parties) In MLS, these assumptions change by the very fact that the group may have more than 2 members. For the protocol-external case we have exporter keys as well. For PSK use, we have the following: Group-internal use (resumption by an identical member set) Subgroup-internal use (resumption by a proper subset of members) Session-internal use (since continuous KE implies that keying material may be injected) While the session-internal use is not the focus of this discussion, it has been previously (i.e. proving knowledge of prior group state). This contrasts against TLS where the handshake takes place only once. Point 2 is notable in that it is impossible for a proper subset of TLS session members to use session information in a separate session (i.e. that would imply only one communication party with no partner). In MLS, this is possible, and we need to be careful about mis-aligning TLS for precedent. To avoid potential forks, etc., and clarity of key source, any group-subset PSK use really ought to be denoted (currently a 'branch'). NAME is correct that there could be subtle attacks on this if key use is not properly separated. In addition to the above there seems to be confusion on what resumption is linked to (this is consequently something that needs more clarity in the spec.). To be precise in alignment of terms, using a PSK in TLS for communication in a new session is aligned to using a PSK in MLS for a new group, i.e. it being a continuous session.
Thus we need to align discussion here between \"alive\" and \"dead\" sessions in TLS to \"alive\" and \"dead\" groups in MLS - not epochs. We can get a reinit PSK from a live group, i.e. if a member wants to force a restart for any reason and instantiates a new group with the reinit secret. This signals to all members joining that group that the prior group should be terminated and not considered secure/alive after the point of reinit use. It is an edge case, but possible. Similarly and more commonly, we could get a branch PSK from a dead group, i.e. if a new (sub-)group wants members to prove knowledge from some prior connection. Thus it follows that we can branch from both \"alive\" and \"dead\" groups. This may be helpful in terms of authentication. I am favorable to NAME 's PSKUsage suggestion, i.e. that if there are other desired uses apart from the above cases the application can provide for those. The \"bleeding\" of key separation across different uses is something that is concerning, and should involve closer scrutiny before inclusion in the specification. If anything, more separation is safer at this stage, vs consolidation.\nNAME Good point about the differences from the two-party case. I think your taxonomy is not quite complete, though. You could also imagine cases where the epoch into which the resumption PSK is injected has members that were not in the group in the epoch in which it was extracted (and these members were provided the PSK some other way). For instance, in the session-internal case where there have been some Adds in the meantime. 
To try to factor this a little differently, it seems like you have at least three independent columns in the truth table here comparing the extraction epoch to the injection epoch: Same group or different group ("group" is the term that matches TLS "session"; linear sequence of epochs) Some members have been removed Some members have been added So you have at least 8 cases to cover to start with, assuming you care about both of those axes (same/different group and membership). It's not clear to me why those are the two axes that one cares about. On the one hand, why is it not important to distinguish ciphersuite changes as well, or changes in extensions? On the other hand, why do we need to agree on membership changes here, given that the key schedule already confirms that we agree on the membership? Basically, I don't understand (a) what cases need to be distinguished, (b) why these cases need to be distinguished, and (c) why the required separation properties aren't provided by some other mechanism already. I would also note that you don't get the benefits of key separation unless the recipients know how to verify that the usage is right for the injection context. (Otherwise, a malicious PSK sender can send the wrong value.) For the same/different group, that's easy enough to do based on key schedule continuity. But membership is problematic. For example, if you're changing the ciphersuite between extraction and injection, then all of the KeyPackages and Credentials will be different -- what do you compare to verify equality? I can update to add PSKUsage, but the cases we put there need to at least be enforceable, and ideally fit into some principled taxonomy (even if all the cases in the taxonomy aren't enforceable).\nFor the record, distinguishing on ciphersuite changes was brought up earlier in the WG - I am in support if bringing it back on the discussion table is an option...
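The "truth table" described earlier can be enumerated mechanically; under the stated axes (same/different group, members removed, members added), three binary axes give 2**3 = 8 extraction-vs-injection cases. A sketch:

```python
from itertools import product

# The axis names here paraphrase the discussion above; they are not
# identifiers from the spec.
axes = {
    "group": ("same", "different"),    # linear sequence of epochs or not
    "members_removed": (False, True),  # between extraction and injection
    "members_added": (False, True),
}

# One dict per case; three binary axes yield 2**3 = 8 combinations.
cases = [dict(zip(axes, combo)) for combo in product(*axes.values())]
```

Adding a fourth axis (e.g. ciphersuite changed) would double the count again, which is part of why an exhaustive taxonomy gets unwieldy fast.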
Separately though, for most other changes during the lifespan of a group, proposals basically perform a similar action for distinguishing between keys and separating changes from epoch to epoch. The difference with the PSK is there is no "one way" to interpret a PSK's use (as compared to e.g. a ciphersuite change). This also means that it can be more challenging to provably separate out misuse. [BTW: I am assuming in this discussion you mean ciphersuite change in the context of the listed components, since reinit is actually used for ciphersuite key changes per line 1598.] You are quite correct that there is a group-external use, i.e. the "different group" case. However, anything outside of the group/session is an external/exporter key use. Namely, the MLS protocol spec is about a single session and anything that is used in other groups/sessions is an external key (similar to TLS export from session). The distinction between PSK usage and external/exporter keys (whether the latter is used for another application or injected into a different group entirely) is that PSK use is within the analysis bounds of the given session. To this point, there is a gray area on item 2 (subgroup-internal use / resumption by a proper subset of members). This is probably the only one I could see an argument for also pushing as an exporter key use. The issue of some members added between PSK usage is really reducible to two cases: a) a member was already in the set and the PSK use appears as item 1 or 3 (group-internal use or session-internal use) or b) a member was added and the PSK offers no security beyond that of an exporter key inject. The potential value proposition is for existing members, there is no loss for a new member, and it is better to offer the value where possible than to deny it based on the weakest link. The issue of some members being removed between PSK usage is similarly reducible.
If the ejected member does not have the current group state then knowing a PSK should not provide value. It is similar to knowing an exporter key but having no place to export it into. The goal of PSK usage is to strengthen an existent system vs. being the sole point of security for that. If our KDF is done correctly, then combining current group state and the PSK will not be problematic (former unknown but latter known to the ejected member).\nSo NAME and I had some offline discussion about this, and it led me to the conclusion that what I'm really grumpy about is a need for more clarity in the branch and reinit definitions. We don't need a full taxonomy of uses if (a) we can say "use this PSK usage for this specific protocol feature", and (b) we have some escape hatch that allows for usages of resumption PSKs outside those specific protocol features. I have filed reflecting this theory. Hopefully folks find it more agreeable.\nThat seems like a well-reasoned solution.\nFixed with and\nLooks good! Although I'm not sure in the branching keys we want to have the exact same key packages as in the main group. I think this looks fine generally (just one comment)", "new_text": "Transmit the Welcome message to the other new members The recipient of a Welcome message processes it as described in joining-via-welcome-message. If application context informs the recipient that the Welcome should reflect the creation of a new group (for example, due to a branch or reinitialization), then the recipient MUST verify that the epoch value in the GroupInfo is equal to 1. In principle, the above process could be streamlined by having the creator directly create a tree and choose a random value for the first epoch's epoch secret. We follow the steps above because it removes unnecessary choices, by which, for example, bad randomness could be introduced. The only choices the creator makes here are its own KeyPackage and the leaf secret from which the Commit is built. 10.1."}
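The recipient-side epoch check in the replacement text above reduces to a one-line guard; a sketch in which the function name and error type are hypothetical:

```python
def check_welcome_epoch(group_info_epoch: int, expects_new_group: bool) -> None:
    # If application context says this Welcome creates a new group
    # (branch or reinitialization), the GroupInfo epoch MUST be 1.
    if expects_new_group and group_info_epoch != 1:
        raise ValueError(
            f"expected epoch 1 for a branched/reinitialized group, "
            f"got {group_info_epoch}"
        )
```

A Welcome into an existing group (the ordinary Add path) skips this check, since its epoch can be arbitrary.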
{"id": "q-en-mls-protocol-e9b46d68509c3cab0914ca3ee67971eb4213f54b2b24598263f25e47c916e017", "old_text": "10.2. A new group may be tied to an already existing group for the purpose of re-initializing the existing group, or to branch into a sub-group. Re-initializing an existing group may be used, for example, to restart the group with a different ciphersuite or protocol version. Branching may be used to bootstrap a new group consisting of a subset of current group members, based on the current group state. In both cases, the \"psk_nonce\" included in the \"PreSharedKeyID\" object must be a randomly sampled nonce of length \"KDF.Nh\" to avoid key re-use. 10.2.1. If a client wants to create a subgroup of an existing group, they MAY choose to include a \"PreSharedKeyID\" in the \"GroupSecrets\" object of the Welcome message choosing the \"psktype\" \"branch\", the \"group_id\" of the group from which a subgroup is to be branched, as well as an epoch within the number of epochs for which a \"resumption_secret\" is kept. 11.", "comments": "This PR reflects further discussion on and replaces . The high-level impacts are: Add clearer, more prescriptive definitions of the re-init and branch operations Use a single syntax for all resumption PSKs, but also have a parameter to distinguish cases Allow an usage to cover any other cases defined by applications Once lands, we should also add some text to the protocol overview to describe these operations. Pasting some text here for usage in that future PR...\nOne question regarding the diagram in the PR description. In the re-init case, shouldn't the \"B\" group start at epoch ? At least that's what the PR specifies in the spec.\nI think the re-init diagram is fine because the spec says The in the Welcome message MUST be 1 However in the branch case (and application case) diagram I don't think there should be a ReInit proposal? (At least it is not said in the spec). 
Also, it is not entirely clear how to construct a in the branch case (or application case): what values should have and ?\nAh, sorry I misread. You are right and the re-init diagram is fine the way it is.\nMinor typo in the text above: \"allow a entropy\" -> \"allow entropy\".\nFrom my reading, it looks like the creator of an epoch must send the exact same message to each new user. However, in some systems, it is more efficient to send a different message to each user, only containing the intended for them, rather than the for all new members. As far as I can tell, no changes are required on the recipient side. I'm running some performance testing, trying to add thousands of members at a time, and the messages are hundreds of kilobytes large. In a federated environment, this could be copied to multiple servers, with the vast majority of that being unnecessary data.\nGood point NAME I filed . In the case, you would do something like send a Welcome per home server?\nThanks. With the way that Matrix currently works, it would probably be best to send a separate message per user, since the current endpoint for sending messages directly to devices requires specifying the message separately for each recipient, and implementations (AFAIK) currently don't do deduplication. In the future, it might be possible to do it differently.\ns currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in the usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. 
some data is important in usage 2 and have no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing about adding the in to help prevent cross-group attacks) How about using different structures for these? // Essentially identical to KeyPackage struct { ProtocolVersion version; CipherSuite ciphersuite; HPKEPublicKey hpkeinitkey; opaque endpointid; Credential credential; Extension extensions; opaque signature; } PrePublishedKeyPackage; struct { HPKEPublicKey hpkekey; opaque endpointid; Credential credential; opaque parenthash; opaque groupid; Extension extensions; opaque signature; } LeafNodeContent; enum { reserved(0), prepublishedkeypackage(1), leafnodecontent(2), (255) } LeafNodeType; struct LeafNode { // This is not included in the signature, but with the \"unambiguous signature\" PR that's not a problem LeafNodeType type; select(URL) { case prepublishedkeypackage: PrePublishedKeyPackage data; case leafnode_content: LeafNodeContent data; } } What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well because is an example of an extension that is required and should be a field instead\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? 
In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion; this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh somehow I missed that part in your original post. Makes sense\nI like this general direction. It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into a KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add "Bob" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks.
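The lifetime argument above (sending the full KeyPackage in the Add lets every member bound how long stolen pre-published keys remain usable) reduces to a validity-window check; a sketch with assumed Unix-timestamp fields:

```python
import time
from typing import Optional

def key_package_lifetime_ok(not_before: int, not_after: int,
                            now: Optional[int] = None) -> bool:
    # Every member validating the Add can reject pre-published key
    # material that has expired or is not yet valid, which limits the
    # window in which compromised pre-published keys can be replayed.
    t = int(time.time()) if now is None else now
    return not_before <= t <= not_after
```

If only the LeafNode were sent in the Add, this check could be performed by the committer alone, which is the trade-off being debated in the thread.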
In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are "fresh".
Proposal: Preparatory PR: Rationalize layout Move \"Cryptograhic Objects\" to before \"Ratchet Trees\" Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage Move subsections of \"Key Packages\" to under \"Ratchet Trees\" Parent Hash Tree Hashes Update Paths Move Group State to its own top-level section Main PR: Change LeafNode to have the required content: HPKE key for TreeKEM Credential Parent hash Capabilities Extensions Signature Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome Lifetime Extensions Signature Add continues to send KeyPackage Update and Commit send LeafNode instead of KeyPackage Remove and Sender refer to LeafNodeRef instead of KeyPackageRef We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call: General agreement on the approach Should sign group_id when possible Include in TBS if we at least have a flag for pre-published / in-group Or we can store it Or we could have independent structs for pre-published / in-group leaves Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields\nRight now, we treat the re-init and branch PSK cases as distinct cases, as opposed to just having a general \"export PSK from one group+epoch, insert it into another\" function. What I would call a \"resumption PSK\", following TLS. Even though we have the distinction, the content of the PSK itself is the same: (group ID, epoch, resumption secret). So the only difference is that the PreSharedKeyID has a bit that signals the intent of injecting this resumption PSK. Do we really need that bit? If there's not a compelling need, it would streamline the API and allow for more flexibility if we just had the notion of a resumption PSK.\nKey separation by domain is usually a good idea. If we derive a key, the purpose should be \"baked in\", i.e. 
included in the derivation to avoid potential collision across protocol functionalities/domains. I'm not sure I understand how that makes the API more complex. If it's really bad, we should consider the trade-off, but if it's not too much trouble, I'd prefer we keep the key separation. Can you elaborate a bit on the API complexity?\nI'm mostly concerned about being too specific in the \"purpose\" definitions, and thus making applications make a choice that doesn't really make sense. For example, take the obvious case of TLS-like resumption, where I take a key from the last epoch of a prior instance of the group and use it as input to a new appearance of the group. Is that a re-init? There wasn't necessarily a ReInit proposal. Is it a branch? \"Branch\" isn't even defined in the spec. Better to just talk about \"resumption\" as the general act of exporting a secret from one group and inserting it into another.\nThe scenarios you describe seem to indicate problems in the spec that don't have anything to do with the derivation of the resumption secret. If you want to enable re-init secrets without a proposal, that's fine and we can define in the spec which key to use for that purpose (probably the re-init key). Similarly, if branch isn't well defined in the spec, we should probably define it better. In the end, for everything else that you want to do that's not (properly) defined in the spec, you can just use an exporter secret. I don't see that weakening the domain separation of the re-init and branch key derivation is the right way to solve these problems. Is the code complexity that bad?\nWe could go off and try to define all the possible uses of a resumption PSK, but (a) that's a lot more work, (b) it seems unlikely to succeed, (c) even if you do succeed, you'll end up with a more complicated API for applications (bc N names for the same technical act) and (d) it's not clear to me what the benefit is. 
When you talk about domain separation here, what is the failure case that this prevents?\nWhat we're trying to prevent is that participants end up with the same keys after executing a different part/functionality of the protocol. If party A thinks, they are doing a branch, while party B thinks they're doing a Re-Init and they end up with the same key, then that's bad. If an implementer wants to do proprietary things, i.e. things that are not described in the spec, they have to use an exporter key and make sure their keys are properly separated.\nIt depends on what you mean by a \"different functionality of the protocol\". The only difference between the branch and reinit cases is that in the reinit case, you quit using the old group. You could also imagine injecting a resumption PSK into an independent group with its own history, whose membership overlaps with the group from which the PSK was drawn. Is the functionality \"linking one group/epoch to a prior group/epoch\" or is it \"reinit / branch / ...\"? The former seems more at the right level of granularity to me, and leads to the approach here and in . If you're really hard over here, how about a compromise: Unify the syntax, but add a distinguisher alongside the group ID and epoch, which has specified values for the branch and reinit cases, and which the application MAY set for other cases.\nI see where you're coming from and if you're convinced it is the more useful approach, we can make the trade-off and go with (although we might as well rename the key from \"resumption\" to something like \"group-link\" or something). However, I do want to note that we have a trade-off here. Previously, designating a PSK \"branch\" or \"re-init\" signaled intent on the side of the sender and arriving at the same PSK meant agreement on said intent. For example, using a re-init key would imply that the new group is meant to replace the old group. 
As far as I understand that is no longer the case with .\nI understand the concern from a cryptographic standpoint: it's nice to have a type that indicates that the group should be shut down. From an engineering perspective, this all feels pretty vague. It's not clear at all how exactly a group will be shut down and a new one will be created; the protocol doesn't give any specific guidance. I don't think this is a huge concern, as long as we don't paint ourselves in a corner when we want to upgrade groups to newer MLS versions in the future.\nEven if we want to have everyone agree that the group should be shut down, the ReInit proposal provides that. Committing the ReInit puts it in the transcript, which means everyone agrees on it. So everyone who exports a PSK from that epoch should know it's a \"dead epoch\". In fact, the vs. distinction seems duplicative here, since you shouldn't be able to get a PSK from a \"live\" epoch or a PSK from a dead one. iff live; iff dead. Now, there may be other sub-cases within live or dead; as noted, not all resumption PSK cases map clearly to or . But it seems premature to put in stub functionality without some taxonomy and requirements. I would propose we do for now, and if there turn out to be PSK use cases that really need segregating, it should be a straightforward extension to add another PSKType.
Regarding the last two arguments: the \"for now\" solution is a bit moot, since with the WGLC at hand, it's almost certainly not going to change until then; the extension argument goes the other way as well, as you could just as easily have an extension that implements and gets rid of the need for the distinction between resumption PSKs. If anything, we should opt to be conservative in the spec and allow users to loosen the key separation via an extension at their own risk.\nI understand that it's unclear how the linking of groups is going to take place in a practical setting, but I don't see that that's an argument to weaken the cryptographic basis of the mechanism. If we don't figure out how to do it in practice, then that's fine, as we can just ignore that part of the protocol. If we do, however, we will want to get proper security guarantees whenever we link two groups in this way. Again, I think is ok. But it's because we probably get good enough guarantees and not because the whole mechanism is a secondary concern due to there being unsolved higher level engineering challenges.\nI have recently implemented some of this to get a better feel for it. While we already knew the \"reinit\" case was quite underspecified, I now also realized that there might be another issue with it: A member can shut down a group before a new one is created and there is no way to force the creation of the new group within the protocol. I think it would be safer to first create a new group (with whatever new parameters) and to tear down the old one in that process. Since this is too complex for the protocol it should be solved at the application layer. In other words, I think is indeed fine.\nCommenting on here, to keep discussion from splitting to the PR. There are a few issues here that could use clarification, and some confusion in the interpretation of the proposals. First - as NAME pointed out, if anything is unclear/underspecified, then that should be clarified and cleaned up vs.
removal as the default. From the confusion above, it is clear that some pieces are not sufficiently specified. Second - There is a core distinction between how PSKs are used in MLS vs. TLS that we need to be careful of. In TLS, these are used strictly for the same two members communicating again after a finished session. They are not used again within the same session nor outside of the protocol (exporter keys are for that). Namely, the uses are: Group-internal use (resumption by an identical member set of the two parties) In MLS, these assumptions change by the very fact that the group may have more than 2 members. For the protocol-external case we have exporter keys as well. For PSK use, we have the following: Group-internal use (resumption by an identical member set) Subgroup-internal use (resumption by a proper subset of members) Session-internal use (since continuous KE implies that keying material may be injected) While the session-internal use is not the focus of this discussion, it has been previously (i.e. proving knowledge of prior group state). This contrasts against TLS where the handshake takes place only once. Point 2 is notable in that it is impossible for a proper subset of TLS session members to use session information in a separate session (i.e. that would imply only one communication party with no partner). In MLS, this is possible, and we need to be careful about mis-aligning TLS for precedent. To avoid potential forks, etc., and clarity of key source, any group-subset PSK use really ought to be denoted (currently a 'branch'). NAME is correct that there could be subtle attacks on this if key use is not properly separated. In addition to the above there seems to be confusion on what resumption is linked to (this is consequently something that needs more clarity in the spec). To be precise in alignment of terms, using a PSK in TLS for communication in a new session is aligned to using a PSK in MLS for a new group, i.e. it being a continuous session.
Thus we need to align discussion here between \"alive\" and \"dead\" sessions in TLS to \"alive\" and \"dead\" groups in MLS - not epochs. We can get a reinit PSK from a live group, i.e. if a member wants to force a restart for any reason and instantiates a new group with the reinit secret. This signals to all members joining that group that the prior group should be terminated and not considered secure/alive after the point of reinit use. It is an edge case, but possible. Similarly and more commonly, we could get a branch PSK from a dead group, i.e. if a new (sub-)group wants members to prove knowledge from some prior connection. Thus it follows that we can branch from both \"alive\" and \"dead\" groups. This may be helpful in terms of authentication. I am favorable to NAME 's PSKUsage suggestion, i.e. that if there are other desired uses apart from the above cases the application can provide for those. The \"bleeding\" of key separation across different uses is something that is concerning, and should involve closer scrutiny before inclusion in the specification. If anything, more separation is safer at this stage, vs consolidation.\nNAME Good point about the differences from the two-party case. I think your taxonomy is not quite complete, though. You could also imagine cases where the epoch into which the resumption PSK is injected has members that were not in the group in the epoch in which it was extracted (and these members were provided the PSK some other way). For instance, in the session-internal case where there have been some Adds in the meantime. 
To try to factor this a little differently, it seems like you have at least three independent columns in the truth table here comparing the extraction epoch to the injection epoch: Same group or different group (\"group\" is the term that matches TLS \"session\"; linear sequence of epochs) Some members have been removed Some members have been added So you have at least 8 cases to cover to start with, assuming you care about both of those axes (same/different group and membership). It's not clear to me why those are the two axes that one cares about. On the one hand, why is it not important to distinguish ciphersuite changes as well, or changes in extensions? On the other hand, why do we need to agree on membership changes here, given that the key schedule already confirms that we agree on the membership? Basically, I don't understand (a) what cases need to be distinguished, (b) why these cases need to be distinguished, and (c) why the required separation properties aren't provided by some other mechanism already. I would also note that you don't get the benefits of key separation unless the recipients know how to verify that the usage is right for the injection context. (Otherwise, a malicious PSK sender can send the wrong value.) For the same/different group, that's easy enough to do based on key schedule continuity. But membership is problematic. For example, if you're changing the ciphersuite between extraction and injection, then all of the KeyPackages and Credentials will be different -- what do you compare to verify equality? I can update to add PSKUsage, but the cases we put there need to at least be enforceable, and ideally fit into some principled taxonomy (even if all the cases in the taxonomy aren't enforceable).\nFor the record, distinguishing on ciphersuite changes was brought up earlier in the WG - I am in support, if bringing it back on the discussion table is an option...
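The truth table described above can be enumerated directly. A tiny sketch (the axis names are mine, not spec terms) confirms the case count when comparing the epoch a resumption PSK is extracted from against the epoch it is injected into:

```python
from itertools import product

# Three independent yes/no axes comparing extraction epoch to injection epoch.
axes = {
    "same_group": (True, False),       # linear sequence of epochs vs. a different group
    "members_removed": (True, False),  # some members removed in between
    "members_added": (True, False),    # some members added in between
}
cases = list(product(*axes.values()))
assert len(cases) == 8  # "at least 8 cases to cover"
```

A fixed reinit/branch split names only a couple of these combinations, which is the point being made: any PSKUsage list needs a principled (and ideally enforceable) taxonomy behind it.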
Separately though, for most other changes during the lifespan of a group, proposals basically perform a similar action for distinguishing between keys and separating changes from epoch to epoch. The difference with the PSK is there is no \"one way\" to interpret a PSK's use (as compared to e.g. a ciphersuite change). This also means that it can be more challenging to provably separate out misuse. [BTW: I am assuming in this discussion you mean ciphersuite change in the context of the listed components, since reinit is actually used for ciphersuite key changes per line 1598.] You are quite correct that there is a group-external use, i.e. the \"different group\" case. However, anything outside of the group/session is an external/exporter key use. Namely, the MLS protocol spec is about a single session and anything that is used in other groups/sessions is an external key (similar to TLS export from session). The distinction between PSK usage and external/exporter keys (whether the latter is used for another application or injected into a different group entirely) is that PSK use is within the analysis bounds of the given session. To this point, there is a gray area on item 2 (subgroup-internal use / resumption by a proper subset of members). This is probably the only one I could see an argument for also pushing as an exporter key use. The issue of some members added between PSK usage is really reducible to two cases: a) a member was already in the set and the PSK use appears as item 1 or 3 (group-internal use or session-internal use) or b) a member was added and the PSK offers no security beyond that of an exporter key injection. The potential value proposition is for existing members, there is no loss for a new member, and it is better to offer the value where possible than to deny it based on the weakest link. The issue of some members being removed between PSK usage is similarly reducible.
If the ejected member does not have the current group state then knowing a PSK should not provide value. It is similar to knowing an exporter key but having no place to export it into. The goal of PSK usage is to strengthen an existent system vs. being the sole point of security for that. If our KDF is done correctly, then combining current group state and the PSK will not be problematic (former unknown but latter known to the ejected member).\nSo NAME and I had some offline discussion about this, and it led me to the conclusion that what I'm really grumpy about is a need for more clarity in the branch and reinit definitions. We don't need a full taxonomy of uses if (a) we can say \"use this PSK usage for this specific protocol feature\", and (b) we have some escape hatch that allows for usages of resumption PSKs outside those specific protocol features. I have filed reflecting this theory. Hopefully folks find it more agreeable.\nThat seems like a well-reasoned solution.\nFixed with and\nLooks good! Although I'm not sure in the branching keys we want to have the exact same key packages as in the main group. I think this looks fine generally (just one comment)", "new_text": "10.2. A group may be reinitialized by creating a new group with the same membership and different parameters, and linking it to the old group via a resumption PSK. The members of a group reinitialize it using the following steps: 1. A member of the old group sends a ReInit proposal (see reinit) 2. A member of the old group sends a Commit covering the ReInit proposal 3. A member of the old group sends a Welcome message for the new group that matches the ReInit. The \"group_id\", \"version\", and \"cipher_suite\" fields in the Welcome message MUST be the same as the corresponding fields in the ReInit proposal. The \"epoch\" in the Welcome message MUST be 1. The Welcome MUST specify a PreSharedKey of type \"resumption\" with usage \"reinit\".
The \"group_id\" must match the old group, and the \"epoch\" must indicate the epoch after the Commit covering the ReInit. The \"psk_nonce\" included in the \"PreSharedKeyID\" of the resumption PSK MUST be a randomly sampled nonce of length \"KDF.Nh\", for the KDF defined by the new grou's ciphersuite. Note that these three steps may be done by the same group member or different members. For example, if a group member sends a commit with an inline ReInit proposal (steps 1 and 2), but then goes offline, another group member may send the corresponding Welcome. This flexibility avoids situations where a group gets stuck between steps 2 and 3. Resumption PSKs with usage \"reinit\" MUST NOT be used in other contexts. A PreSharedKey proposal with type \"resumption\" and usage \"reinit\" MUST be considered invalid. 10.3. A new group can be formed from a subset of an existing group's members, using the same parameters as the old group. The creator of the group indicates this situation by including a PreSharedKey of type \"resumption\" with usage \"branch\" in the Welcome message that creates the branched subgroup. A client receiving a Welcome including a PreSharedKey of type \"resumption\" with usage \"branch\" MUST verify that the new group reflects a subgroup branched from the referenced group. The \"version\" and \"ciphersuite\" values in the Welcome MUST be the same as those used by the old group. Each KeyPackage in a leaf node of the new group's tree MUST be a leaf in the old group's tree at the epoch indicated in the PreSharedKey. In addition, to avoid key re-use, the \"psk_nonce\" included in the \"PreSharedKeyID\" object MUST be a randomly sampled nonce of length \"KDF.Nh\". Resumption PSKs with usage \"branch\" MUST NOT be used in other contexts. A PreSharedKey proposal with type \"resumption\" and usage \"branch\" MUST be considered invalid. 11."}
{"id": "q-en-mls-protocol-e9b46d68509c3cab0914ca3ee67971eb4213f54b2b24598263f25e47c916e017", "old_text": "11.1.5. A ReInit proposal represents a request to re-initialize the group with different parameters, for example, to increase the version number or to change the ciphersuite. The re-initialization is done by creating a completely new group and shutting down the old one. A member of the group applies a ReInit proposal by waiting for the committer to send the Welcome message and by checking that the \"group_id\" and the parameters of the new group corresponds to the ones specified in the proposal. The Welcome message MUST specify exactly one pre-shared key with \"psktype = reinit\", and with \"psk_group_id\" and \"psk_epoch\" equal to the \"group_id\" and \"epoch\" of the existing group after the Commit containing the \"reinit\" Proposal was processed. The Welcome message may specify the inclusion of other pre-shared keys with a \"psktype\" different from \"reinit\". If a ReInit proposal is included in a Commit, it MUST be the only proposal referenced by the Commit. If other non-ReInit proposals", "comments": "This PR reflects further discussion on and replaces . The high-level impacts are: Add clearer, more prescriptive definitions of the re-init and branch operations Use a single syntax for all resumption PSKs, but also have a parameter to distinguish cases Allow an usage to cover any other cases defined by applications Once lands, we should also add some text to the protocol overview to describe these operations. Pasting some text here for usage in that future PR...\nOne question regarding the diagram in the PR description. In the re-init case, shouldn't the \"B\" group start at epoch ? At least that's what the PR specifies in the spec.\nI think the re-init diagram is fine because the spec says The in the Welcome message MUST be 1 However in the branch case (and application case) diagram I don't think there should be a ReInit proposal? 
(At least it is not said in the spec). Also, it is not entirely clear how to construct a in the branch case (or application case): what values should have and ?\nAh, sorry I misread. You are right and the re-init diagram is fine the way it is.\nMinor typo in the text above: \"allow a entropy\" -> \"allow entropy\".\nFrom my reading, it looks like the creator of an epoch must send the exact same message to each new user. However, in some systems, it is more efficient to send a different message to each user, only containing the intended for them, rather than the for all new members. As far as I can tell, no changes are required on the recipient side. I'm running some performance testing, trying to add thousands of members at a time, and the messages are hundreds of kilobytes large. In a federated environment, this could be copied to multiple servers, with the vast majority of that being unnecessary data.\nGood point NAME I filed . In the case, you would do something like send a Welcome per home server?\nThanks. With the way that Matrix currently works, it would probably be best to send a separate message per user, since the current endpoint for sending messages directly to devices requires specifying the message separately for each recipient, and implementations (AFAIK) currently don't do deduplication. In the future, it might be possible to do it differently.\ns currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in the usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. 
some data is important in usage 2 and has no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing adding the in to help prevent cross-group attacks) How about using different structures for these?

// Essentially identical to KeyPackage
struct {
    ProtocolVersion version;
    CipherSuite ciphersuite;
    HPKEPublicKey hpke_init_key;
    opaque endpoint_id;
    Credential credential;
    Extension extensions;
    opaque signature;
} PrePublishedKeyPackage;

struct {
    HPKEPublicKey hpke_key;
    opaque endpoint_id;
    Credential credential;
    opaque parent_hash;
    opaque group_id;
    Extension extensions;
    opaque signature;
} LeafNodeContent;

enum {
    reserved(0),
    pre_published_key_package(1),
    leaf_node_content(2),
    (255)
} LeafNodeType;

struct LeafNode {
    // This is not included in the signature, but with the
    // \"unambiguous signature\" PR that's not a problem
    LeafNodeType type;
    select (type) {
        case pre_published_key_package: PrePublishedKeyPackage data;
        case leaf_node_content: LeafNodeContent data;
    }
}

What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well because is an example of an extension that is required and should be a field instead\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group?
In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion, this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh somehow I missed that part in your original post. Makes sense\nI like this general direction. It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into a KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.
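As a rough illustration of the wrapping idea, assuming hypothetical field names: the outer KeyPackage carries a dedicated init key for Welcome encryption plus a lifetime, and wraps the LeafNode whose key goes in the leaf for TreeKEM.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LeafNode:
    encryption_key: bytes  # key pair that goes in the leaf for TreeKEM
    credential: bytes
    extensions: List[bytes] = field(default_factory=list)

@dataclass
class KeyPackage:
    init_key: bytes            # separate key pair, used only to encrypt the Welcome
    lifetime: Tuple[int, int]  # (not_before, not_after)
    leaf_node: LeafNode        # the prepublished thing wraps the leaf node thing

def payload_for_add(kp: KeyPackage) -> KeyPackage:
    # Adding a member sends the whole KeyPackage, so others can check lifetime.
    return kp

def payload_for_update(kp: KeyPackage) -> LeafNode:
    # An Update sends only the inner LeafNode.
    return kp.leaf_node
```

The forward-secrecy benefit mentioned above falls out of the split: the private half of `init_key` can be deleted immediately after decrypting the Welcome, while the leaf key must persist until the next update.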
In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\". And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU: When you first get the tree, you can't validate whether the group_id should be in a given leaf or not. Once you're a member, you require that group_id is sent in Update and UpdatePath.leaf_key_package. I could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway.
Proposal: Preparatory PR: Rationalize layout Move \"Cryptographic Objects\" to before \"Ratchet Trees\" Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage Move subsections of \"Key Packages\" to under \"Ratchet Trees\" Parent Hash Tree Hashes Update Paths Move Group State to its own top-level section Main PR: Change LeafNode to have the required content: HPKE key for TreeKEM Credential Parent hash Capabilities Extensions Signature Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome Lifetime Extensions Signature Add continues to send KeyPackage Update and Commit send LeafNode instead of KeyPackage Remove and Sender refer to LeafNodeRef instead of KeyPackageRef We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call: General agreement on the approach Should sign group_id when possible Include in TBS if we at least have a flag for pre-published / in-group Or we can store it Or we could have independent structs for pre-published / in-group leaves Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields\nRight now, we treat the re-init and branch PSK cases as distinct cases, as opposed to just having a general \"export PSK from one group+epoch, insert it into another\" function. What I would call a \"resumption PSK\", following TLS. Even though we have the distinction, the content of the PSK itself is the same: (group ID, epoch, resumption secret). So the only difference is that the PreSharedKeyID has a bit that signals the intent of injecting this resumption PSK. Do we really need that bit? If there's not a compelling need, it would streamline the API and allow for more flexibility if we just had the notion of a resumption PSK.\nKey separation by domain is usually a good idea. If we derive a key, the purpose should be \"baked in\", i.e.
included in the derivation to avoid potential collision across protocol functionalities/domains. I'm not sure I understand how that makes the API more complex. If it's really bad, we should consider the trade-off, but if it's not too much trouble, I'd prefer we keep the key separation. Can you elaborate a bit on the API complexity?\nI'm mostly concerned about being too specific in the \"purpose\" definitions, and thus making applications make a choice that doesn't really make sense. For example, take the obvious case of TLS-like resumption, where I take a key from the last epoch of a prior instance of the group and use it as input to a new appearance of the group. Is that a re-init? There wasn't necessarily a ReInit proposal. Is it a branch? \"Branch\" isn't even defined in the spec. Better to just talk about \"resumption\" as the general act of exporting a secret from one group and inserting it into another.\nThe scenarios you describe seem to indicate problems in the spec that don't have anything to do with the derivation of the resumption secret. If you want to enable re-init secrets without a proposal, that's fine and we can define in the spec which key to use for that purpose (probably the re-init key). Similarly, if branch isn't well defined in the spec, we should probably define it better. In the end, for everything else that you want to do that's not (properly) defined in the spec, you can just use an exporter secret. I don't see that weakening the domain separation of the re-init and branch key derivation is the right way to solve these problems. Is the code complexity that bad?\nWe could go off and try to define all the possible uses of a resumption PSK, but (a) that's a lot more work, (b) it seems unlikely to succeed, (c) even if you do succeed, you'll end up with a more complicated API for applications (bc N names for the same technical act) and (d) it's not clear to me what the benefit is. 
When you talk about domain separation here, what is the failure case that this prevents?\nWhat we're trying to prevent is that participants end up with the same keys after executing a different part/functionality of the protocol. If party A thinks, they are doing a branch, while party B thinks they're doing a Re-Init and they end up with the same key, then that's bad. If an implementer wants to do proprietary things, i.e. things that are not described in the spec, they have to use an exporter key and make sure their keys are properly separated.\nIt depends on what you mean by a \"different functionality of the protocol\". The only difference between the branch and reinit cases is that in the reinit case, you quit using the old group. You could also imagine injecting a resumption PSK into an independent group with its own history, whose membership overlaps with the group from which the PSK was drawn. Is the functionality \"linking one group/epoch to a prior group/epoch\" or is it \"reinit / branch / ...\"? The former seems more at the right level of granularity to me, and leads to the approach here and in . If you're really hard over here, how about a compromise: Unify the syntax, but add a distinguisher alongside the group ID and epoch, which has specified values for the branch and reinit cases, and which the application MAY set for other cases.\nI see where you're coming from and if you're convinced it is the more useful approach, we can make the trade-off and go with (although we might as well rename the key from \"resumption\" to something like \"group-link\" or something). However, I do want to note that we have a trade-off here. Previously, designating a PSK \"branch\" or \"re-init\" signaled intent on the side of the sender and arriving at the same PSK meant agreement on said intent. For example, using a re-init key would imply that the new group is meant to replace the old group. 
As far as I understand that is no longer the case with .\nI understand the concern from a cryptographic standpoint, it's nice to have a type that indicates that the group should be shut down. From an engineering perspective, this all feels pretty vague. It's not clear at all how exactly a group will be shut down and a new one will be created, the protocol doesn't give any specific guidance. I don't think this is a huge concern, as long as we don't paint ourselves in a corner when we want to upgrade groups to newer MLS versions in the future.\nEven if we want to have everyone agree that the group should be shut down, the ReInit proposal provides that. Committing the ReInit puts it in the transcript, which means everyone agrees on it. So everyone who exports a PSK from that epoch should know it's a \"dead epoch\". In fact, the vs. distinction seems duplicative here, since you shouldn't be able to get a PSK from a \"live\" epoch or a PSK from a dead one. iff live; iff dead. Now, there may be other sub-cases within live or dead; as noted, not all resumption PSK cases map clearly to or . But it seems premature to put in stub functionality without some taxonomy and requirements. I would propose we do for now, and if there turns out to be a need to be PSK use cases that really need segregating, it should be straightforward extension to add another PSKType.\nFair enough. I see that the trade-off is probably ok. I'd be ok with merging . The proposal should indeed ensure that the streams don't get crossed, so that all that remains would be ease of provability at that point. I still feel uncomfortable about not properly separating keys, though. 
Regarding the last two arguments: the \"for now\" solution is a bit moot, since with the WGLC at hand, it's almost certainly not going to change until then; and the extension argument goes the other way as well, as you could just as easily have an extension that implements and gets rid of the need for the distinction between resumption PSKs. If anything, we should opt to be conservative in the spec and allow users to loosen the key separation via an extension at their own risk.\nI understand that it's unclear how the linking of groups is going to take place in a practical setting, but I don't see that that's an argument to weaken the cryptographic basis of the mechanism. If we don't figure out how to do it in practice, then that's fine, as we can just ignore that part of the protocol. If we do, however, we will want to get proper security guarantees whenever we link two groups in this way. Again, I think is ok. But it's because we probably get good enough guarantees and not because the whole mechanism is a secondary concern due to there being unsolved higher-level engineering challenges.\nI have recently implemented some of this to get a better feel for it. While we already knew the \"reinit\" case was quite underspecified, I now also realized that there might be another issue with it: A member can shut down a group before a new one is created and there is no way to force the creation of the new group within the protocol. I think it would be safer to first create a new group (with whatever new parameters) and to tear down the old one in that process. Since this is too complex for the protocol it should be solved at the application layer. In other words, I think is indeed fine.\nCommenting on here, to keep discussion from splitting to the PR. There are a few issues here that could use clarifying, and some confusion in the interpretation of the proposals. First - as NAME pointed out, if anything is unclear/underspecified, then it should be clarified and cleaned up rather than removed by default. From the confusion above, it is clear that some pieces are not sufficiently specified. Second - There is a core distinction between how PSKs are used in MLS vs. TLS that we need to be careful of. In TLS, these are used strictly for the same two members communicating again after a finished session. They are not used again within the same session nor outside of the protocol (exporter keys are for that). Namely, the uses are:

- Group-internal use (resumption by an identical member set of the two parties)

In MLS, these assumptions change by the very fact that the group may have more than 2 members. For the protocol-external case we have exporter keys as well. For PSK use, we have the following:

- Group-internal use (resumption by an identical member set)
- Subgroup-internal use (resumption by a proper subset of members)
- Session-internal use (since continuous KE implies that keying material may be injected)

While the session-internal use is not the focus of this discussion, it has been previously (i.e. proving knowledge of prior group state). This contrasts with TLS, where the handshake takes place only once. Point 2 is notable in that it is impossible for a proper subset of TLS session members to use session information in a separate session (i.e. that would imply only one communication party with no partner). In MLS, this is possible, and we need to be careful about mis-aligning TLS for precedent. To avoid potential forks, etc., and for clarity of key source, any group-subset PSK use really ought to be denoted (currently a 'branch'). NAME is correct that there could be subtle attacks on this if key use is not properly separated. In addition to the above there seems to be confusion on what resumption is linked to (this is consequently something that needs more clarity in the spec). To be precise in alignment of terms, using a PSK in TLS for communication in a new session is aligned to using a PSK in MLS for a new group, i.e. it being a continuous session.
Thus we need to align discussion here between \"alive\" and \"dead\" sessions in TLS to \"alive\" and \"dead\" groups in MLS - not epochs. We can get a reinit PSK from a live group, i.e. if a member wants to force a restart for any reason and instantiates a new group with the reinit secret. This signals to all members joining that group that the prior group should be terminated and not considered secure/alive after the point of reinit use. It is an edge case, but possible. Similarly and more commonly, we could get a branch PSK from a dead group, i.e. if a new (sub-)group wants members to prove knowledge from some prior connection. Thus it follows that we can branch from both \"alive\" and \"dead\" groups. This may be helpful in terms of authentication. I am favorable to NAME 's PSKUsage suggestion, i.e. that if there are other desired uses apart from the above cases the application can provide for those. The \"bleeding\" of key separation across different uses is something that is concerning, and should involve closer scrutiny before inclusion in the specification. If anything, more separation is safer at this stage, vs consolidation.\nNAME Good point about the differences from the two-party case. I think your taxonomy is not quite complete, though. You could also imagine cases where the epoch into which the resumption PSK is injected has members that were not in the group in the epoch in which it was extracted (and these members were provided the PSK some other way). For instance, in the session-internal case where there have been some Adds in the meantime. 
To try to factor this a little differently, it seems like you have at least three independent columns in the truth table here comparing the extraction epoch to the injection epoch:

- Same group or different group (\"group\" is the term that matches TLS \"session\"; linear sequence of epochs)
- Some members have been removed
- Some members have been added

So you have at least 8 cases to cover to start with, assuming you care about both of those axes (same/different group and membership). It's not clear to me why those are the two axes that one cares about. On the one hand, why is it not important to distinguish ciphersuite changes as well, or changes in extensions? On the other hand, why do we need to agree on membership changes here, given that the key schedule already confirms that we agree on the membership? Basically, I don't understand (a) what cases need to be distinguished, (b) why these cases need to be distinguished, and (c) why the required separation properties aren't provided by some other mechanism already. I would also note that you don't get the benefits of key separation unless the recipients know how to verify that the usage is right for the injection context. (Otherwise, a malicious PSK sender can send the wrong value.) For the same/different group, that's easy enough to do based on key schedule continuity. But membership is problematic. For example, if you're changing the ciphersuite between extraction and injection, then all of the KeyPackages and Credentials will be different -- what do you compare to verify equality? I can update to add PSKUsage, but the cases we put there need to at least be enforceable, and ideally fit into some principled taxonomy (even if all the cases in the taxonomy aren't enforceable).\nFor the record, distinguishing on ciphersuite changes was brought up earlier in the WG - I am in support, if bringing it back to the discussion table is an option...
Separately though, for most other changes during the lifespan of a group, proposals basically perform a similar action for distinguishing between keys and separating changes from epoch to epoch. The difference with the PSK is that there is no \"one way\" to interpret a PSK's use (as compared to e.g. a ciphersuite change). This also means that it can be more challenging to provably separate out misuse. [BTW: I am assuming in this discussion you mean ciphersuite change in the context of the listed components, since reinit is actually used for ciphersuite key changes per line 1598.] You are quite correct that there is a group-external use, i.e. the \"different group\" case. However, anything outside of the group/session is an external/exporter key use. Namely, the MLS protocol spec is about a single session and anything that is used in other groups/sessions is an external key (similar to TLS export from session). The distinction between PSK usage and external/exporter keys (whether the latter is used for another application or injected into a different group entirely) is that PSK use is within the analysis bounds of the given session. To this point, there is a gray area on item 2 (subgroup-internal use / resumption by a proper subset of members). This is probably the only one I could see an argument for also pushing as an exporter key use. The issue of some members being added between PSK usages is really reducible to two cases: a) a member was already in the set and the PSK use appears as item 1 or 3 (group-internal use or session-internal use) or b) a member was added and the PSK offers no security beyond that of an exporter key injection. The potential value proposition is for existing members, there is no loss for a new member, and it is better to offer the value where possible than to deny it based on the weakest link. The issue of some members being removed between PSK usages is similarly reducible.
If the ejected member does not have the current group state, then knowing a PSK should not provide value. It is similar to knowing an exporter key but having no place to export it into. The goal of PSK usage is to strengthen an existing system vs. being the sole point of security for that. If our KDF is done correctly, then combining current group state and the PSK will not be problematic (former unknown but latter known to the ejected member).\nSo NAME and I had some offline discussion about this, and it led me to the conclusion that what I'm really grumpy about is a need for more clarity in the branch and reinit definitions. We don't need a full taxonomy of uses if (a) we can say \"use this PSK usage for this specific protocol feature\", and (b) we have some escape hatch that allows for usages of resumption PSKs outside those specific protocol features. I have filed reflecting this theory. Hopefully folks find it more agreeable.\nThat seems like a well-reasoned solution.\nFixed with and\nLooks good! Although I'm not sure in the branching keys we want to have the exact same key packages as in the main group.\nI think this looks fine generally (just one comment)", "new_text": "11.1.5. A ReInit proposal represents a request to reinitialize the group with different parameters, for example, to increase the version number or to change the ciphersuite. The reinitialization is done by creating a completely new group and shutting down the old one. A member of the group applies a ReInit proposal by waiting for the committer to send the Welcome message that matches the ReInit, according to the criteria in reinitialization. If a ReInit proposal is included in a Commit, it MUST be the only proposal referenced by the Commit. If other non-ReInit proposals"}
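The ReInit constraint quoted in the new_text above ("If a ReInit proposal is included in a Commit, it MUST be the only proposal referenced by the Commit") can be sketched as a simple validation check. This is an illustrative Python sketch, not spec text; the function name and the string proposal-type representation are assumptions for illustration.

```python
# Hypothetical sketch of the rule above: a ReInit proposal may only
# appear alone in a Commit. Proposal types are modeled as strings
# purely for illustration.

REINIT = "reinit"  # stand-in for the ReInit proposal type

def validate_commit_proposals(proposal_types):
    """Return True if the list of referenced proposal types is allowed.

    A ReInit must be the only proposal referenced by its Commit; any
    mix of ReInit with other proposals is invalid.
    """
    if REINIT in proposal_types:
        return proposal_types == [REINIT]
    return True
```

A receiver applying this check would reject a Commit such as `["reinit", "add"]` while accepting either `["reinit"]` alone or any list of non-ReInit proposals.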
{"id": "q-en-mls-protocol-e9b46d68509c3cab0914ca3ee67971eb4213f54b2b24598263f25e47c916e017", "old_text": "If a ReInit proposal was part of the Commit, the committer MUST create a new group with the parameters specified in the ReInit proposal, and with the same members as the original group. The Welcome message MUST include a \"PreSharedKeyID\" with \"psktype\" \"reinit\" and with \"psk_group_id\" and \"psk_epoch\" corresponding to the current group and the epoch after the commit was processed. A member of the group applies a Commit message by taking the following steps:", "comments": "This PR reflects further discussion on and replaces . The high-level impacts are:

- Add clearer, more prescriptive definitions of the re-init and branch operations
- Use a single syntax for all resumption PSKs, but also have a parameter to distinguish cases
- Allow an usage to cover any other cases defined by applications

Once lands, we should also add some text to the protocol overview to describe these operations. Pasting some text here for usage in that future PR...\nOne question regarding the diagram in the PR description. In the re-init case, shouldn't the \"B\" group start at epoch ? At least that's what the PR specifies in the spec.\nI think the re-init diagram is fine because the spec says The in the Welcome message MUST be 1. However, in the branch case (and application case) diagram I don't think there should be a ReInit proposal? (At least it is not said in the spec.) Also, it is not entirely clear how to construct a in the branch case (or application case): what values should have and ?\nAh, sorry I misread. You are right and the re-init diagram is fine the way it is.\nMinor typo in the text above: \"allow a entropy\" -> \"allow entropy\".\nFrom my reading, it looks like the creator of an epoch must send the exact same message to each new user.
However, in some systems, it is more efficient to send a different message to each user, only containing the intended for them, rather than the for all new members. As far as I can tell, no changes are required on the recipient side. I'm running some performance testing, trying to add thousands of members at a time, and the messages are hundreds of kilobytes large. In a federated environment, this could be copied to multiple servers, with the vast majority of that being unnecessary data.\nGood point NAME I filed . In the case, you would do something like send a Welcome per home server?\nThanks. With the way that Matrix currently works, it would probably be best to send a separate message per user, since the current endpoint for sending messages directly to devices requires specifying the message separately for each recipient, and implementations (AFAIK) currently don't do deduplication. In the future, it might be possible to do it differently.\ns currently have two usages in the spec:

- Pre-published key packages that you can download and put in a leaf when adding a new member
- leaf node content that you update when doing a full

Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as:

- and are useful in usage 1, but are parameters of the group in usage 2.
- is a good name in usage 1, but would better be named in usage 2.
- some data is important in usage 2 and has no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing adding the in to help prevent cross-group attacks)

How about using different structures for these?
// Essentially identical to KeyPackage
struct {
    ProtocolVersion version;
    CipherSuite ciphersuite;
    HPKEPublicKey hpke_init_key;
    opaque endpoint_id;
    Credential credential;
    Extension extensions;
    opaque signature;
} PrePublishedKeyPackage;

struct {
    HPKEPublicKey hpke_key;
    opaque endpoint_id;
    Credential credential;
    opaque parent_hash;
    opaque group_id;
    Extension extensions;
    opaque signature;
} LeafNodeContent;

enum {
    reserved(0),
    pre_published_key_package(1),
    leaf_node_content(2),
    (255)
} LeafNodeType;

struct LeafNode {
    // This is not included in the signature, but with the
    // "unambiguous signature" PR that's not a problem
    LeafNodeType type;
    select (URL) {
        case pre_published_key_package: PrePublishedKeyPackage data;
        case leaf_node_content: LeafNodeContent data;
    }
}

What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well, because is an example of an extension that is required and should be a field instead.\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion; this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh, somehow I missed that part in your original post. Makes sense.\nI like this general direction.
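The tagged-union shape sketched in the structs above can be illustrated with a minimal serialization sketch. This is a hedged Python illustration of the idea only: the one-byte tag, the function names, and the framing are assumptions, not the MLS wire format.

```python
# Minimal sketch of the LeafNodeType tagged union proposed above.
# The one-byte tag and opaque-body framing are illustrative assumptions.

PRE_PUBLISHED_KEY_PACKAGE = 1
LEAF_NODE_CONTENT = 2

_KNOWN_TYPES = (PRE_PUBLISHED_KEY_PACKAGE, LEAF_NODE_CONTENT)

def encode_leaf_node(node_type: int, body: bytes) -> bytes:
    """Prefix the serialized variant body with its LeafNodeType tag."""
    if node_type not in _KNOWN_TYPES:
        raise ValueError("unknown LeafNodeType")
    return bytes([node_type]) + body

def decode_leaf_node(data: bytes):
    """Split off the tag byte; the tag selects which struct the body is."""
    node_type, body = data[0], data[1:]
    if node_type not in _KNOWN_TYPES:
        raise ValueError("unknown LeafNodeType")
    return node_type, body
```

The point of the tag is exactly the `select` in the struct above: a decoder knows from the first byte whether to parse a PrePublishedKeyPackage or a LeafNodeContent.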
It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into a KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like the to be signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\".
And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU:

- When you first get the tree, you can't validate whether the group_id should be in a given leaf or not
- Once you're a member, you require that group_id is sent in Update and UpdatePath.leaf_key_package

I could live with a solution of that character, but given it's a bit complex, I might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway.
Proposal:

- Preparatory PR: Rationalize layout
  - Move "Cryptographic Objects" to before "Ratchet Trees"
  - Update "Ratchet Tree Nodes" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage
  - Move subsections of "Key Packages" to under "Ratchet Trees": Parent Hash, Tree Hashes, Update Paths
  - Move Group State to its own top-level section
- Main PR:
  - Change LeafNode to have the required content: HPKE key for TreeKEM, Credential, Parent hash, Capabilities, Extensions, Signature
  - Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome, Lifetime, Extensions, Signature
  - Add continues to send KeyPackage
  - Update and Commit send LeafNode instead of KeyPackage
  - Remove and Sender refer to LeafNodeRef instead of KeyPackageRef

We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call:

- General agreement on the approach
- Should sign group_id when possible
  - Include in TBS if we at least have a flag for pre-published / in-group
  - Or we can store it
  - Or we could have independent structs for pre-published / in-group leaves
- Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields

\nRight now, we treat the re-init and branch PSK cases as distinct cases, as opposed to just having a general \"export PSK from one group+epoch, insert it into another\" function. What I would call a \"resumption PSK\", following TLS. Even though we have the distinction, the content of the PSK itself is the same: (group ID, epoch, resumption secret). So the only difference is that the PreSharedKeyID has a bit that signals the intent of injecting this resumption PSK. Do we really need that bit? If there's not a compelling need, it would streamline the API and allow for more flexibility if we just had the notion of a resumption PSK.\nKey separation by domain is usually a good idea. If we derive a key, the purpose should be \"baked in\", i.e.
included in the derivation to avoid potential collision across protocol functionalities/domains. I'm not sure I understand how that makes the API more complex. If it's really bad, we should consider the trade-off, but if it's not too much trouble, I'd prefer we keep the key separation. Can you elaborate a bit on the API complexity?\nI'm mostly concerned about being too specific in the \"purpose\" definitions, and thus making applications make a choice that doesn't really make sense. For example, take the obvious case of TLS-like resumption, where I take a key from the last epoch of a prior instance of the group and use it as input to a new appearance of the group. Is that a re-init? There wasn't necessarily a ReInit proposal. Is it a branch? \"Branch\" isn't even defined in the spec. Better to just talk about \"resumption\" as the general act of exporting a secret from one group and inserting it into another.\nThe scenarios you describe seem to indicate problems in the spec that don't have anything to do with the derivation of the resumption secret. If you want to enable re-init secrets without a proposal, that's fine and we can define in the spec which key to use for that purpose (probably the re-init key). Similarly, if branch isn't well defined in the spec, we should probably define it better. In the end, for everything else that you want to do that's not (properly) defined in the spec, you can just use an exporter secret. I don't see that weakening the domain separation of the re-init and branch key derivation is the right way to solve these problems. Is the code complexity that bad?\nWe could go off and try to define all the possible uses of a resumption PSK, but (a) that's a lot more work, (b) it seems unlikely to succeed, (c) even if you do succeed, you'll end up with a more complicated API for applications (because N names for the same technical act) and (d) it's not clear to me what the benefit is.
When you talk about domain separation here, what is the failure case that this prevents?\nWhat we're trying to prevent is that participants end up with the same keys after executing a different part/functionality of the protocol. If party A thinks they are doing a branch while party B thinks they're doing a Re-Init, and they end up with the same key, then that's bad. If an implementer wants to do proprietary things, i.e. things that are not described in the spec, they have to use an exporter key and make sure their keys are properly separated.\nIt depends on what you mean by a \"different functionality of the protocol\". The only difference between the branch and reinit cases is that in the reinit case, you quit using the old group. You could also imagine injecting a resumption PSK into an independent group with its own history, whose membership overlaps with the group from which the PSK was drawn. Is the functionality \"linking one group/epoch to a prior group/epoch\" or is it \"reinit / branch / ...\"? The former seems more at the right level of granularity to me, and leads to the approach here and in . If you're really hard over here, how about a compromise: Unify the syntax, but add a distinguisher alongside the group ID and epoch, which has specified values for the branch and reinit cases, and which the application MAY set for other cases.\nI see where you're coming from and if you're convinced it is the more useful approach, we can make the trade-off and go with (although we might as well rename the key from \"resumption\" to something like \"group-link\"). However, I do want to note that we have a trade-off here. Previously, designating a PSK \"branch\" or \"re-init\" signaled intent on the side of the sender and arriving at the same PSK meant agreement on said intent. For example, using a re-init key would imply that the new group is meant to replace the old group.
As far as I understand that is no longer the case with .\nI understand the concern from a cryptographic standpoint; it's nice to have a type that indicates that the group should be shut down. From an engineering perspective, this all feels pretty vague. It's not clear at all how exactly a group will be shut down and a new one will be created; the protocol doesn't give any specific guidance. I don't think this is a huge concern, as long as we don't paint ourselves into a corner when we want to upgrade groups to newer MLS versions in the future.\nEven if we want to have everyone agree that the group should be shut down, the ReInit proposal provides that. Committing the ReInit puts it in the transcript, which means everyone agrees on it. So everyone who exports a PSK from that epoch should know it's a \"dead epoch\". In fact, the vs. distinction seems duplicative here, since you shouldn't be able to get a PSK from a \"live\" epoch or a PSK from a dead one. iff live; iff dead. Now, there may be other sub-cases within live or dead; as noted, not all resumption PSK cases map clearly to or . But it seems premature to put in stub functionality without some taxonomy and requirements. I would propose we do for now, and if there turn out to be PSK use cases that really need segregating, it should be a straightforward extension to add another PSKType.\nFair enough. I see that the trade-off is probably ok. I'd be ok with merging . The proposal should indeed ensure that the streams don't get crossed, so that all that remains would be ease of provability at that point. I still feel uncomfortable about not properly separating keys, though.
Regarding the last two arguments: the \"for now\" solution is a bit moot, since with the WGLC at hand, it's almost certainly not going to change until then; and the extension argument goes the other way as well, as you could just as easily have an extension that implements and gets rid of the need for the distinction between resumption PSKs. If anything, we should opt to be conservative in the spec and allow users to loosen the key separation via an extension at their own risk.\nI understand that it's unclear how the linking of groups is going to take place in a practical setting, but I don't see that that's an argument to weaken the cryptographic basis of the mechanism. If we don't figure out how to do it in practice, then that's fine, as we can just ignore that part of the protocol. If we do, however, we will want to get proper security guarantees whenever we link two groups in this way. Again, I think is ok. But it's because we probably get good enough guarantees and not because the whole mechanism is a secondary concern due to there being unsolved higher-level engineering challenges.\nI have recently implemented some of this to get a better feel for it. While we already knew the \"reinit\" case was quite underspecified, I now also realized that there might be another issue with it: A member can shut down a group before a new one is created and there is no way to force the creation of the new group within the protocol. I think it would be safer to first create a new group (with whatever new parameters) and to tear down the old one in that process. Since this is too complex for the protocol it should be solved at the application layer. In other words, I think is indeed fine.\nCommenting on here, to keep discussion from splitting to the PR. There are a few issues here that could use clarifying, and some confusion in the interpretation of the proposals. First - as NAME pointed out, if anything is unclear/underspecified, then it should be clarified and cleaned up rather than removed by default. From the confusion above, it is clear that some pieces are not sufficiently specified. Second - There is a core distinction between how PSKs are used in MLS vs. TLS that we need to be careful of. In TLS, these are used strictly for the same two members communicating again after a finished session. They are not used again within the same session nor outside of the protocol (exporter keys are for that). Namely, the uses are:

- Group-internal use (resumption by an identical member set of the two parties)

In MLS, these assumptions change by the very fact that the group may have more than 2 members. For the protocol-external case we have exporter keys as well. For PSK use, we have the following:

- Group-internal use (resumption by an identical member set)
- Subgroup-internal use (resumption by a proper subset of members)
- Session-internal use (since continuous KE implies that keying material may be injected)

While the session-internal use is not the focus of this discussion, it has been previously (i.e. proving knowledge of prior group state). This contrasts with TLS, where the handshake takes place only once. Point 2 is notable in that it is impossible for a proper subset of TLS session members to use session information in a separate session (i.e. that would imply only one communication party with no partner). In MLS, this is possible, and we need to be careful about mis-aligning TLS for precedent. To avoid potential forks, etc., and for clarity of key source, any group-subset PSK use really ought to be denoted (currently a 'branch'). NAME is correct that there could be subtle attacks on this if key use is not properly separated. In addition to the above there seems to be confusion on what resumption is linked to (this is consequently something that needs more clarity in the spec). To be precise in alignment of terms, using a PSK in TLS for communication in a new session is aligned to using a PSK in MLS for a new group, i.e. it being a continuous session.
Thus we need to align discussion here between \"alive\" and \"dead\" sessions in TLS to \"alive\" and \"dead\" groups in MLS - not epochs. We can get a reinit PSK from a live group, i.e. if a member wants to force a restart for any reason and instantiates a new group with the reinit secret. This signals to all members joining that group that the prior group should be terminated and not considered secure/alive after the point of reinit use. It is an edge case, but possible. Similarly and more commonly, we could get a branch PSK from a dead group, i.e. if a new (sub-)group wants members to prove knowledge from some prior connection. Thus it follows that we can branch from both \"alive\" and \"dead\" groups. This may be helpful in terms of authentication. I am favorable to NAME 's PSKUsage suggestion, i.e. that if there are other desired uses apart from the above cases the application can provide for those. The \"bleeding\" of key separation across different uses is something that is concerning, and should involve closer scrutiny before inclusion in the specification. If anything, more separation is safer at this stage, vs consolidation.\nNAME Good point about the differences from the two-party case. I think your taxonomy is not quite complete, though. You could also imagine cases where the epoch into which the resumption PSK is injected has members that were not in the group in the epoch in which it was extracted (and these members were provided the PSK some other way). For instance, in the session-internal case where there have been some Adds in the meantime. 
To try to factor this a little differently, it seems like you have at least three independent columns in the truth table here comparing the extraction epoch to the injection epoch:

- Same group or different group (\"group\" is the term that matches TLS \"session\"; linear sequence of epochs)
- Some members have been removed
- Some members have been added

So you have at least 8 cases to cover to start with, assuming you care about both of those axes (same/different group and membership). It's not clear to me why those are the two axes that one cares about. On the one hand, why is it not important to distinguish ciphersuite changes as well, or changes in extensions? On the other hand, why do we need to agree on membership changes here, given that the key schedule already confirms that we agree on the membership? Basically, I don't understand (a) what cases need to be distinguished, (b) why these cases need to be distinguished, and (c) why the required separation properties aren't provided by some other mechanism already. I would also note that you don't get the benefits of key separation unless the recipients know how to verify that the usage is right for the injection context. (Otherwise, a malicious PSK sender can send the wrong value.) For the same/different group, that's easy enough to do based on key schedule continuity. But membership is problematic. For example, if you're changing the ciphersuite between extraction and injection, then all of the KeyPackages and Credentials will be different -- what do you compare to verify equality? I can update to add PSKUsage, but the cases we put there need to at least be enforceable, and ideally fit into some principled taxonomy (even if all the cases in the taxonomy aren't enforceable).\nFor the record, distinguishing on ciphersuite changes was brought up earlier in the WG - I am in support, if bringing it back to the discussion table is an option...
Separately though, for most other changes during the lifespan of a group, proposals basically perform a similar action for distinguishing between keys and separating changes from epoch to epoch. The difference with the PSK is there is no \"one way\" to interpret a PSK's use (as compared to e.g. a ciphersuite change). This also means that it can be more challenging to provably separate out misuse. [BTW: I am assuming in this discussion you mean ciphersuite change in the context of the listed components, since reinit is actually used for ciphersuite key changes per line 1598.] You are quite correct that there is a group-external use, i.e. the \"different group\" case. However, anything outside of the group/session is an external/exporter key use. Namely, the MLS protocol spec is about a single session and anything that is used in other groups/sessions is an external key (similar to TLS export from session). The distinction between PSK usage and external/exporter keys (whether the latter is used for another application or injected into a different group entirely) is that PSK use is within the analysis bounds of the given session. To this point, there is a gray area on item 2 (subgroup-internal use / resumption by a proper subset of members). This is probably the only one I could see an argument for also pushing as an exporter key use. The issue of some members added between PSK usage is really reducible to two cases: a) a member was already in the set and the PSK use appears as item 1 or 3 (group-internal use or session-internal use) or b) a member was added and the PSK offers no security beyond that of an exporter key inject. The potential value proposition is for existing members, there is no loss for a new member, and it is better to offer the value where possible than to deny it based on the weakest link. The issue of some members being removed between PSK usage is similarly reducible.
If the ejected member does not have the current group state then knowing a PSK should not provide value. It is similar to knowing an exporter key but having no place to export it into. The goal of PSK usage is to strengthen an existing system vs. being the sole point of security for that. If our KDF is done correctly, then combining current group state and the PSK will not be problematic (former unknown but latter known to the ejected member).\nSo NAME and I had some offline discussion about this, and it led me to the conclusion that what I'm really grumpy about is a need for more clarity in the branch and reinit definitions. We don't need a full taxonomy of uses if (a) we can say \"use this PSK usage for this specific protocol feature\", and (b) we have some escape hatch that allows for usages of resumption PSKs outside those specific protocol features. I have filed reflecting this theory. Hopefully folks find it more agreeable.\nThat seems like a well-reasoned solution.\nFixed with and\nLooks good! Although I'm not sure in the branching keys we want to have the exact same key packages as in the main group. I think this looks fine generally (just one comment)", "new_text": "If a ReInit proposal was part of the Commit, the committer MUST create a new group with the parameters specified in the ReInit proposal, and with the same members as the original group. The Welcome message MUST include a \"PreSharedKeyID\" with the following parameters: \"psktype\": \"resumption\" \"usage\": \"reinit\" \"group_id\": The group ID for the current group \"epoch\": The epoch that the group will be in after this Commit A member of the group applies a Commit message by taking the following steps:"}
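The reinit Welcome check spelled out in the record above can be sketched as follows. This is a hedged illustration only: the field names mirror the prose ("psktype", "usage", "group_id", "epoch"), the helper names (reinit_psk_id, check_reinit_welcome) are invented for the sketch, and the real MLS wire encoding is not modeled.

```python
from dataclasses import dataclass

RESUMPTION = "resumption"  # psktype value from the text above
REINIT = "reinit"          # usage value from the text above

@dataclass(frozen=True)
class PreSharedKeyID:
    psktype: str
    usage: str
    group_id: bytes
    epoch: int

def reinit_psk_id(group_id: bytes, epoch_after_commit: int) -> PreSharedKeyID:
    """PSK identifier the committer places in the reinit Welcome."""
    return PreSharedKeyID(RESUMPTION, REINIT, group_id, epoch_after_commit)

def check_reinit_welcome(psk_ids, group_id: bytes, epoch_after_commit: int) -> bool:
    """A joiner accepts the Welcome only if it points at the old group's final epoch."""
    return any(
        p.psktype == RESUMPTION
        and p.usage == REINIT
        and p.group_id == group_id
        and p.epoch == epoch_after_commit
        for p in psk_ids
    )
```

A joiner applying this check would reject a Welcome whose PSK references the wrong group or a stale epoch, which is the agreement property the text is after.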
{"id": "q-en-mls-protocol-e9b46d68509c3cab0914ca3ee67971eb4213f54b2b24598263f25e47c916e017", "old_text": "If the Commit included a ReInit proposal, the client MUST NOT use the group to send messages anymore. Instead, it MUST wait for a Welcome message from the committer and check that The \"version\", \"cipher_suite\" and \"extensions\" fields of the new group corresponds to the ones in the \"ReInit\" proposal, and that the \"version\" is greater than or equal to that of the original group. The \"psks\" field in the Welcome message includes a \"PreSharedKeyID\" with \"psktype\" = \"reinit\", and \"psk_epoch\" and \"psk_group_id\" equal to the epoch and group ID of the original group after processing the Commit. The confirmation tag value confirms that the members of the group have arrived at the same state of the group:", "comments": "This PR reflects further discussion on and replaces . The high-level impacts are: Add clearer, more prescriptive definitions of the re-init and branch operations Use a single syntax for all resumption PSKs, but also have a parameter to distinguish cases Allow an usage to cover any other cases defined by applications Once lands, we should also add some text to the protocol overview to describe these operations. Pasting some text here for usage in that future PR...\nOne question regarding the diagram in the PR description. In the re-init case, shouldn't the \"B\" group start at epoch ? At least that's what the PR specifies in the spec.\nI think the re-init diagram is fine because the spec says The in the Welcome message MUST be 1. However in the branch case (and application case) diagram I don't think there should be a ReInit proposal? (At least it is not said in the spec). Also, it is not entirely clear how to construct a in the branch case (or application case): what values should have and ?\nAh, sorry I misread.
You are right and the re-init diagram is fine the way it is.\nMinor typo in the text above: \"allow a entropy\" -> \"allow entropy\".\nFrom my reading, it looks like the creator of an epoch must send the exact same message to each new user. However, in some systems, it is more efficient to send a different message to each user, only containing the intended for them, rather than the for all new members. As far as I can tell, no changes are required on the recipient side. I'm running some performance testing, trying to add thousands of members at a time, and the messages are hundreds of kilobytes large. In a federated environment, this could be copied to multiple servers, with the vast majority of that being unnecessary data.\nGood point NAME I filed . In the case, you would do something like send a Welcome per home server?\nThanks. With the way that Matrix currently works, it would probably be best to send a separate message per user, since the current endpoint for sending messages directly to devices requires specifying the message separately for each recipient, and implementations (AFAIK) currently don't do deduplication. In the future, it might be possible to do it differently.\ns currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. some data is important in usage 2 and have no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing adding the in to help prevent cross-group attacks) How about using different structures for these?
// Essentially identical to KeyPackage
struct {
    ProtocolVersion version;
    CipherSuite ciphersuite;
    HPKEPublicKey hpkeinitkey;
    opaque endpointid;
    Credential credential;
    Extension extensions;
    opaque signature;
} PrePublishedKeyPackage;

struct {
    HPKEPublicKey hpkekey;
    opaque endpointid;
    Credential credential;
    opaque parenthash;
    opaque groupid;
    Extension extensions;
    opaque signature;
} LeafNodeContent;

enum {
    reserved(0),
    prepublishedkeypackage(1),
    leafnodecontent(2),
    (255)
} LeafNodeType;

struct LeafNode {
    // This is not included in the signature, but with the \"unambiguous signature\" PR that's not a problem
    LeafNodeType type;
    select(URL) {
        case prepublishedkeypackage: PrePublishedKeyPackage data;
        case leafnode_content: LeafNodeContent data;
    }
}

What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well because is an example of an extension that is required and should be a field instead\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion, this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh somehow I missed that part in your original post. Makes sense\nI like this general direction.
It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into a KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\".
And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU: When you first get the tree, you can't validate whether the groupid should be in a given leaf or not. Once you're a member, you require that groupid is sent in Update and UpdatePath.leafkey_package. I could live with a solution of that character, but given it's a bit complex, I might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway.
Proposal:
Preparatory PR: Rationalize layout
- Move \"Cryptographic Objects\" to before \"Ratchet Trees\"
- Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage
- Move subsections of \"Key Packages\" to under \"Ratchet Trees\": Parent Hash, Tree Hashes, Update Paths
- Move Group State to its own top-level section
Main PR:
- Change LeafNode to have the required content: HPKE key for TreeKEM, Credential, Parent hash, Capabilities, Extensions, Signature
- Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome, Lifetime, Extensions, Signature
- Add continues to send KeyPackage
- Update and Commit send LeafNode instead of KeyPackage
- Remove and Sender refer to LeafNodeRef instead of KeyPackageRef
We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call:
- General agreement on the approach
- Should sign group_id when possible
- Include in TBS if we at least have a flag for pre-published / in-group
- Or we can store it
- Or we could have independent structs for pre-published / in-group leaves
- Do we need extensions at both levels?
- Capabilities / lifetime should probably be promoted to fields\nRight now, we treat the re-init and branch PSK cases as distinct cases, as opposed to just having a general \"export PSK from one group+epoch, insert it into another\" function. What I would call a \"resumption PSK\", following TLS. Even though we have the distinction, the content of the PSK itself is the same: (group ID, epoch, resumption secret). So the only difference is that the PreSharedKeyID has a bit that signals the intent of injecting this resumption PSK. Do we really need that bit? If there's not a compelling need, it would streamline the API and allow for more flexibility if we just had the notion of a resumption PSK.\nKey separation by domain is usually a good idea. If we derive a key, the purpose should be \"baked in\", i.e.
included in the derivation to avoid potential collision across protocol functionalities/domains. I'm not sure I understand how that makes the API more complex. If it's really bad, we should consider the trade-off, but if it's not too much trouble, I'd prefer we keep the key separation. Can you elaborate a bit on the API complexity?\nI'm mostly concerned about being too specific in the \"purpose\" definitions, and thus forcing applications to make a choice that doesn't really make sense. For example, take the obvious case of TLS-like resumption, where I take a key from the last epoch of a prior instance of the group and use it as input to a new appearance of the group. Is that a re-init? There wasn't necessarily a ReInit proposal. Is it a branch? \"Branch\" isn't even defined in the spec. Better to just talk about \"resumption\" as the general act of exporting a secret from one group and inserting it into another.\nThe scenarios you describe seem to indicate problems in the spec that don't have anything to do with the derivation of the resumption secret. If you want to enable re-init secrets without a proposal, that's fine and we can define in the spec which key to use for that purpose (probably the re-init key). Similarly, if branch isn't well defined in the spec, we should probably define it better. In the end, for everything else that you want to do that's not (properly) defined in the spec, you can just use an exporter secret. I don't see that weakening the domain separation of the re-init and branch key derivation is the right way to solve these problems. Is the code complexity that bad?\nWe could go off and try to define all the possible uses of a resumption PSK, but (a) that's a lot more work, (b) it seems unlikely to succeed, (c) even if you do succeed, you'll end up with a more complicated API for applications (because N names for the same technical act) and (d) it's not clear to me what the benefit is.
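The "bake the purpose in" argument above can be made concrete with a toy label-based expansion. This is a hedged sketch only: HMAC-SHA256 stands in for the HKDF-Expand of the real key schedule, and the label strings and framing bytes are placeholders, not the MLS spec's encoding. The point it demonstrates is that the same resumption secret expanded under different usage labels yields independent keys.

```python
import hashlib
import hmac

def expand_with_label(secret: bytes, label: str, context: bytes, length: int = 32) -> bytes:
    """Toy domain-separated derivation: the usage label is mixed into the info
    string, so a "reinit" PSK and a "branch" PSK derived from the same
    resumption secret can never collide. HMAC-SHA256 stands in for HKDF-Expand;
    the "sketch " prefix is an invented framing, not the spec's."""
    if length > 32:
        raise ValueError("single-block sketch only")
    info = b"sketch " + label.encode() + b"\x00" + context
    return hmac.new(secret, info + b"\x01", hashlib.sha256).digest()[:length]
```

Two parties that disagree on the intended usage derive different keys and fail to decrypt each other, rather than silently ending up with the same key, which is the failure case discussed above.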
When you talk about domain separation here, what is the failure case that this prevents?\nWhat we're trying to prevent is that participants end up with the same keys after executing a different part/functionality of the protocol. If party A thinks they are doing a branch while party B thinks they're doing a Re-Init, and they end up with the same key, then that's bad. If an implementer wants to do proprietary things, i.e. things that are not described in the spec, they have to use an exporter key and make sure their keys are properly separated.\nIt depends on what you mean by a \"different functionality of the protocol\". The only difference between the branch and reinit cases is that in the reinit case, you quit using the old group. You could also imagine injecting a resumption PSK into an independent group with its own history, whose membership overlaps with the group from which the PSK was drawn. Is the functionality \"linking one group/epoch to a prior group/epoch\" or is it \"reinit / branch / ...\"? The former seems more at the right level of granularity to me, and leads to the approach here and in . If you're really hard over here, how about a compromise: Unify the syntax, but add a distinguisher alongside the group ID and epoch, which has specified values for the branch and reinit cases, and which the application MAY set for other cases.\nI see where you're coming from and if you're convinced it is the more useful approach, we can make the trade-off and go with (although we might as well rename the key from \"resumption\" to something like \"group-link\"). However, I do want to note that we have a trade-off here. Previously, designating a PSK \"branch\" or \"re-init\" signaled intent on the side of the sender and arriving at the same PSK meant agreement on said intent. For example, using a re-init key would imply that the new group is meant to replace the old group.
As far as I understand that is no longer the case with .\nI understand the concern from a cryptographic standpoint, it's nice to have a type that indicates that the group should be shut down. From an engineering perspective, this all feels pretty vague. It's not clear at all how exactly a group will be shut down and a new one will be created; the protocol doesn't give any specific guidance. I don't think this is a huge concern, as long as we don't paint ourselves into a corner when we want to upgrade groups to newer MLS versions in the future.\nEven if we want to have everyone agree that the group should be shut down, the ReInit proposal provides that. Committing the ReInit puts it in the transcript, which means everyone agrees on it. So everyone who exports a PSK from that epoch should know it's a \"dead epoch\". In fact, the vs. distinction seems duplicative here, since you shouldn't be able to get a PSK from a \"live\" epoch or a PSK from a dead one. iff live; iff dead. Now, there may be other sub-cases within live or dead; as noted, not all resumption PSK cases map clearly to or . But it seems premature to put in stub functionality without some taxonomy and requirements. I would propose we do for now, and if there turn out to be PSK use cases that really need segregating, it should be a straightforward extension to add another PSKType.\nFair enough. I see that the trade-off is probably ok. I'd be ok with merging . The proposal should indeed ensure that the streams don't get crossed, so that all that remains would be ease of provability at that point. I still feel uncomfortable about not properly separating keys, though.
Regarding the last two arguments: the \"for now\" solution is a bit moot, since with the WGLC at hand, it's almost certainly not going to change until then; the extension argument goes the other way as well, as you could just as easily have an extension that implements and gets rid of the need for the distinction between resumption PSKs. If anything, we should opt to be conservative in the spec and allow users to loosen the key separation via an extension at their own risk.\nI understand that it's unclear how the linking of groups is going to take place in a practical setting, but I don't see that that's an argument to weaken the cryptographic basis of the mechanism. If we don't figure out how to do it in practice, then that's fine, as we can just ignore that part of the protocol. If we do, however, we will want to get proper security guarantees whenever we link two groups in this way. Again, I think is ok. But it's because we probably get good enough guarantees and not because the whole mechanism is a secondary concern due to there being unsolved higher-level engineering challenges.\nI have recently implemented some of this to get a better feel for it. While we already knew the \"reinit\" case was quite underspecified, I now also realized that there might be another issue with it: A member can shut down a group before a new one is created and there is no way to force the creation of the new group within the protocol. I think it would be safer to first create a new group (with whatever new parameters) and to tear down the old one in that process. Since this is too complex for the protocol, it should be solved at the application layer. In other words, I think is indeed fine.\nCommenting on here, to keep discussion from splitting to the PR. There are a few issues here that could use clarification, and some confusion in the interpretation of the proposals. First - as NAME pointed out, if anything is unclear/underspecified, then that should be clarified and cleaned up vs.
removal as the default. From the confusion above, it is clear that some pieces are not sufficiently specified. Second - There is a core distinction between how PSKs are used in MLS vs. TLS that we need to be careful of. In TLS, these are used strictly for the same two members communicating again after a finished session. It is not used again within the same session nor outside of the protocol (exporter keys are for that). Namely, the uses are: group-internal use (resumption by an identical member set of the two parties). In MLS, these assumptions change by the very fact that the group may have more than 2 members. For the protocol-external case we have exporter keys as well. For PSK use, we have the following: (1) group-internal use (resumption by an identical member set), (2) subgroup-internal use (resumption by a proper subset of members), and (3) session-internal use (since continuous KE implies that keying material may be injected). While the session-internal use is not the focus of this discussion, it has been previously (i.e. proving knowledge of prior group state). This contrasts against TLS, where the handshake takes place only once. Point 2 is notable in that it is impossible for a proper subset of TLS session members to use session information in a separate session (i.e. that would imply only one communication party with no partner). In MLS, this is possible, and we need to be careful about mis-aligning TLS for precedent. To avoid potential forks, etc., and for clarity of key source, any group-subset PSK use really ought to be denoted (currently a 'branch'). NAME is correct that there could be subtle attacks on this if key use is not properly separated. In addition to the above, there seems to be confusion on what resumption is linked to (this is consequently something that needs more clarity in the spec.). To be precise in alignment of terms, using a PSK in TLS for communication in a new session is aligned with using a PSK in MLS for a new group, i.e. it being a continuous session.
Thus we need to align discussion here between \"alive\" and \"dead\" sessions in TLS to \"alive\" and \"dead\" groups in MLS - not epochs. We can get a reinit PSK from a live group, i.e. if a member wants to force a restart for any reason and instantiates a new group with the reinit secret. This signals to all members joining that group that the prior group should be terminated and not considered secure/alive after the point of reinit use. It is an edge case, but possible. Similarly and more commonly, we could get a branch PSK from a dead group, i.e. if a new (sub-)group wants members to prove knowledge from some prior connection. Thus it follows that we can branch from both \"alive\" and \"dead\" groups. This may be helpful in terms of authentication. I am in favor of NAME 's PSKUsage suggestion, i.e. that if there are other desired uses apart from the above cases the application can provide for those. The \"bleeding\" of key separation across different uses is something that is concerning, and should involve closer scrutiny before inclusion in the specification. If anything, more separation is safer at this stage, vs. consolidation.\nNAME Good point about the differences from the two-party case. I think your taxonomy is not quite complete, though. You could also imagine cases where the epoch into which the resumption PSK is injected has members that were not in the group in the epoch in which it was extracted (and these members were provided the PSK some other way). For instance, in the session-internal case where there have been some Adds in the meantime.
To try to factor this a little differently, it seems like you have at least three independent columns in the truth table here comparing the extraction epoch to the injection epoch: (1) same group or different group (\"group\" is the term that matches TLS \"session\"; a linear sequence of epochs), (2) some members have been removed, (3) some members have been added. So you have at least 8 cases to cover to start with, assuming you care about both of those axes (same/different group and membership). It's not clear to me why those are the two axes that one cares about. On the one hand, why is it not important to distinguish ciphersuite changes as well, or changes in extensions? On the other hand, why do we need to agree on membership changes here, given that the key schedule already confirms that we agree on the membership? Basically, I don't understand (a) what cases need to be distinguished, (b) why these cases need to be distinguished, and (c) why the required separation properties aren't provided by some other mechanism already. I would also note that you don't get the benefits of key separation unless the recipients know how to verify that the usage is right for the injection context. (Otherwise, a malicious PSK sender can send the wrong value.) For the same/different group, that's easy enough to do based on key schedule continuity. But membership is problematic. For example, if you're changing the ciphersuite between extraction and injection, then all of the KeyPackages and Credentials will be different -- what do you compare to verify equality? I can update to add PSKUsage, but the cases we put there need to at least be enforceable, and ideally fit into some principled taxonomy (even if all the cases in the taxonomy aren't enforceable).\nFor the record, distinguishing on ciphersuite changes was brought up earlier in the WG - I am in support of bringing it back to the discussion table, if that is an option...
Separately though, for most other changes during the lifespan of a group, proposals basically perform a similar action for distinguishing between keys and separating changes from epoch to epoch. The difference with the PSK is there is no \"one way\" to interpret a PSK's use (as compared to e.g. a ciphersuite change). This also means that it can be more challenging to provably separate out misuse. [BTW: I am assuming in this discussion you mean ciphersuite change in the context of the listed components, since reinit is actually used for ciphersuite key changes per line 1598.] You are quite correct that there is a group-external use, i.e. the \"different group\" case. However, anything outside of the group/session is an external/exporter key use. Namely, the MLS protocol spec is about a single session and anything that is used in other groups/sessions is an external key (similar to TLS export from session). The distinction between PSK usage and external/exporter keys (whether the latter is used for another application or injected into a different group entirely) is that PSK use is within the analysis bounds of the given session. To this point, there is a gray area on item 2 (subgroup-internal use / resumption by a proper subset of members). This is probably the only one I could see an argument for also pushing as an exporter key use. The issue of some members added between PSK usage is really reducible to two cases: a) a member was already in the set and the PSK use appears as item 1 or 3 (group-internal use or session-internal use) or b) a member was added and the PSK offers no security beyond that of an exporter key inject. The potential value proposition is for existing members, there is no loss for a new member, and it is better to offer the value where possible than to deny it based on the weakest link. The issue of some members being removed between PSK usage is similarly reducible.
If the ejected member does not have the current group state then knowing a PSK should not provide value. It is similar to knowing an exporter key but having no place to export it into. The goal of PSK usage is to strengthen an existing system vs. being the sole point of security for that. If our KDF is done correctly, then combining current group state and the PSK will not be problematic (former unknown but latter known to the ejected member).\nSo NAME and I had some offline discussion about this, and it led me to the conclusion that what I'm really grumpy about is a need for more clarity in the branch and reinit definitions. We don't need a full taxonomy of uses if (a) we can say \"use this PSK usage for this specific protocol feature\", and (b) we have some escape hatch that allows for usages of resumption PSKs outside those specific protocol features. I have filed reflecting this theory. Hopefully folks find it more agreeable.\nThat seems like a well-reasoned solution.\nFixed with and\nLooks good! Although I'm not sure in the branching keys we want to have the exact same key packages as in the main group. I think this looks fine generally (just one comment)", "new_text": "If the Commit included a ReInit proposal, the client MUST NOT use the group to send messages anymore. Instead, it MUST wait for a Welcome message from the committer meeting the requirements of reinitialization. The confirmation tag value confirms that the members of the group have arrived at the same state of the group:"}
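The "combine with current group state" point in the thread above can be sketched in a few lines. This is a hedged illustration: HMAC-SHA256 stands in for the HKDF-Extract step of the actual key schedule, and the function name is invented. It shows that an ejected member who knows the PSK but not the current epoch secret cannot compute the combined result.

```python
import hashlib
import hmac

def inject_psk(epoch_secret: bytes, psk_secret: bytes) -> bytes:
    """Sketch: the injected PSK is extracted together with the current epoch
    secret, so holding the PSK alone (e.g. as an ejected member) reveals
    nothing about the output. HMAC-SHA256 stands in for HKDF-Extract."""
    return hmac.new(psk_secret, epoch_secret, hashlib.sha256).digest()
```

With a sound KDF, the former input (epoch secret) being unknown keeps the combined secret unknown even when the latter (PSK) is known, which is the property the comment relies on.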
{"id": "q-en-mls-protocol-43b9eebe04393b51e60d44b3e5f3f931b60a7c07d1f62494f9aee9edd1393917", "old_text": "in the credential field of the KeyPackage objects in the leaves of the tree (including the InitKeys used to add new members). The ciphersuites are defined in section mls-ciphersuites. 6.2.", "comments": "Signatures as specified in the MLS protocol document are ambiguous, in the sense that signatures produced for one purpose in the protocol can be (maliciously) used in a different place of the protocol. Indeed, we found that for example, a can have the same serialization as , which proves that one signature could be accepted for different structures. The collisions we have found so far are quite artificial, so it is not super clear how to exploit this, but at least it makes it impossible to have provable security. The solution is pretty simple, we simply prefix each signed value with a label, in a fashion similar to . That way, when calling you can give the structure name in the argument.\nSomething that is still bothering me is the structure:

struct {
    select (URL) {
        case member: GroupContext context;
        case preconfigured:
        case newmember: struct{};
    }
    WireFormat wireformat;
    opaque groupid;
    uint64 epoch;
    Sender sender;
    opaque authenticateddata;
    ContentType contenttype;
    select (MLSPlaintextTBS.contenttype) {
        case application: opaque applicationdata;
        case proposal: Proposal proposal;
        case commit: Commit commit;
    }
} MLSPlaintextTBS;

Since the first is done on a field defined after it, this structure is not parsable.
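The label-prefix fix described in the comment above can be sketched like this. It is a hedged illustration: HMAC stands in for the actual signature scheme, and the "MLS 1.0 " framing bytes are placeholders rather than the spec's exact encoding. It shows that the same content bytes signed under one struct's label will not verify under another struct's label.

```python
import hashlib
import hmac

def sign_with_label(key: bytes, label: str, content: bytes) -> bytes:
    """Prefix the to-be-signed value with a context label so that bytes signed
    as one structure can never be accepted as another. HMAC-SHA256 stands in
    for the signature algorithm; the framing is illustrative only."""
    framed = b"MLS 1.0 " + label.encode() + b"\x00" + content
    return hmac.new(key, framed, hashlib.sha256).digest()

def verify_with_label(key: bytes, label: str, content: bytes, sig: bytes) -> bool:
    """Verification recomputes the framed value, so the label must match."""
    return hmac.compare_digest(sign_with_label(key, label, content), sig)
```

Even if two structures happen to serialize to identical bytes, their differing labels make the signed messages distinct, which removes the cross-structure ambiguity described above.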
We could say that's not a problem since we only need to serialize it, but for security purposes I would recommend putting it at the end, like this: struct { WireFormat wireformat; opaque groupid; uint64 epoch; Sender sender; opaque authenticateddata; ContentType contenttype; select (MLSPlaintextTBS.contenttype) { case application: opaque applicationdata; case proposal: Proposal proposal; case commit: Commit commit; } select (URL) { case member: GroupContext context; case preconfigured: case newmember: struct{}; } } MLSPlaintextTBS; What do you think about it?\nNice one! Crafting collisions is probably not straightforward, but eliminating the risk altogether is the way to go.\nIt is actually quite easy to craft collisions, I made a script to find them: URL (I wanted to check whether there was accidentally a way to disambiguate the structures)\nIntroduced in .\ns currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. some data is important in usage 2 and has no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing adding the in to help prevent cross-group attacks) How about using different structures for these? 
// Essentially identical to KeyPackage struct { ProtocolVersion version; CipherSuite ciphersuite; HPKEPublicKey hpkeinitkey; opaque endpointid; Credential credential; Extension extensions; opaque signature; } PrePublishedKeyPackage; struct { HPKEPublicKey hpkekey; opaque endpointid; Credential credential; opaque parenthash; opaque groupid; Extension extensions; opaque signature; } LeafNodeContent; enum { reserved(0), prepublishedkeypackage(1), leafnodecontent(2), (255) } LeafNodeType; struct LeafNode { // This is not included in the signature, but with the \"unambiguous signature\" PR that's not a problem LeafNodeType type; select(URL) { case prepublishedkeypackage: PrePublishedKeyPackage data; case leafnode_content: LeafNodeContent data; } } What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well, because is an example of an extension that is required and should be a field instead.\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion; this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh, somehow I missed that part in your original post. Makes sense\nI like this general direction. 
It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note, that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\". 
And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU: When you first get the tree, you can't validate whether the groupid should be in a given leaf or not Once you're a member, you require that groupid is sent in Update and UpdatePath.leafkey_package I could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway. 
Proposal: Preparatory PR: Rationalize layout Move \"Cryptographic Objects\" to before \"Ratchet Trees\" Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage Move subsections of \"Key Packages\" to under \"Ratchet Trees\" Parent Hash Tree Hashes Update Paths Move Group State to its own top-level section Main PR: Change LeafNode to have the required content: HPKE key for TreeKEM Credential Parent hash Capabilities Extensions Signature Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome Lifetime Extensions Signature Add continues to send KeyPackage Update and Commit send LeafNode instead of KeyPackage Remove and Sender refer to LeafNodeRef instead of KeyPackageRef We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call: General agreement on the approach Should sign group_id when possible Include in TBS if we at least have a flag for pre-published / in-group Or we can store it Or we could have independent structs for pre-published / in-group leaves Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields\nFrom what I can see, there is no reason not to sign the context in the case of an external init. I'm not sure why it got taken out, but while implementing external inits, verification went through even when including the context.\nAs I commented in , it's not even clear that we need this context for members any more, given for MLSPlaintext and the AEAD of MLSCiphertext. So I would propose we just remove the context added to MLSPlaintextTBS altogether. NAME IIRC you were the one that proposed adding the context. WDYT?\nThis was fixed in", "new_text": "in the credential field of the KeyPackage objects in the leaves of the tree (including the InitKeys used to add new members). 
To disambiguate different signatures used in MLS, each signed value is prefixed by a label as shown below. Here, the functions \"Signature.Sign\" and \"Signature.Verify\" are defined by the signature algorithm. The ciphersuites are defined in section mls-ciphersuites. 6.2."}
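The label-prefixed signing described in the record above can be sketched as follows. This is a non-normative illustration: HMAC-SHA256 stands in for the pluggable signature algorithm (the record's "Signature.Sign" / "Signature.Verify"), and the "MLS 1.0 " label prefix and 2-byte length encoding are assumptions for the example, not the draft's exact wire format.

```python
import hashlib
import hmac

def _opaque(data: bytes) -> bytes:
    # Length-prefix each field so label and content cannot bleed into each
    # other (the draft uses TLS-style variable-length vectors for this).
    return len(data).to_bytes(2, "big") + data

def sign_with_label(key: bytes, label: bytes, content: bytes) -> bytes:
    # Domain separation: the label is bound into the signed input, so a
    # signature minted for one structure cannot be replayed for another.
    sign_content = _opaque(b"MLS 1.0 " + label) + _opaque(content)
    return hmac.new(key, sign_content, hashlib.sha256).digest()

def verify_with_label(key: bytes, label: bytes, content: bytes, sig: bytes) -> bool:
    expected = sign_with_label(key, label, content)
    return hmac.compare_digest(expected, sig)
```

With this in place, a value signed under the label "KeyPackage" fails verification under any other label, which is exactly the ambiguity the record set out to eliminate.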
{"id": "q-en-mls-protocol-43b9eebe04393b51e60d44b3e5f3f931b60a7c07d1f62494f9aee9edd1393917", "old_text": "The value for hpke_init_key MUST be a public key for the asymmetric encryption scheme defined by cipher_suite. The whole structure is signed using the client's signature key. A KeyPackage object with an invalid signature field MUST be considered malformed. The input to the signature computation comprises all of the fields except for the signature field. KeyPackage objects MUST contain at least two extensions, one of type \"capabilities\", and one of type \"lifetime\". The \"capabilities\"", "comments": "Signatures as specified in the MLS protocol document are ambiguous, in the sense that signatures produced for one purpose in the protocol can be (maliciously) used in a different place of the protocol. Indeed, we found that for example, a can have the same serialization as , which proves that one signature could be accepted for different structures. The collisions we have found so far are quite artificial, so it is not super clear how to exploit this, but at least it makes it impossible to have provable security. The solution is pretty simple, we simply prefix each signed value with a label, in a fashion similar to . That way, when calling you can give the structure name in the argument.\nSomething that is still bothering me is the structure: struct { select (URL) { case member: GroupContext context; case preconfigured: case newmember: struct{}; } WireFormat wireformat; opaque groupid; uint64 epoch; Sender sender; opaque authenticateddata; ContentType contenttype; select (MLSPlaintextTBS.contenttype) { case application: opaque applicationdata; case proposal: Proposal proposal; case commit: Commit commit; } } MLSPlaintextTBS; Since the first is done on a field defined after it, this structure is not parsable. 
We could say that's not a problem since we only need to serialize it, but for security purposes I would recommend putting it at the end, like this: struct { WireFormat wireformat; opaque groupid; uint64 epoch; Sender sender; opaque authenticateddata; ContentType contenttype; select (MLSPlaintextTBS.contenttype) { case application: opaque applicationdata; case proposal: Proposal proposal; case commit: Commit commit; } select (URL) { case member: GroupContext context; case preconfigured: case newmember: struct{}; } } MLSPlaintextTBS; What do you think about it?\nNice one! Crafting collisions is probably not straightforward, but eliminating the risk altogether is the way to go.\nIt is actually quite easy to craft collisions, I made a script to find them: URL (I wanted to check whether there was accidentally a way to disambiguate the structures)\nIntroduced in .\ns currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. some data is important in usage 2 and has no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing adding the in to help prevent cross-group attacks) How about using different structures for these? 
// Essentially identical to KeyPackage struct { ProtocolVersion version; CipherSuite ciphersuite; HPKEPublicKey hpkeinitkey; opaque endpointid; Credential credential; Extension extensions; opaque signature; } PrePublishedKeyPackage; struct { HPKEPublicKey hpkekey; opaque endpointid; Credential credential; opaque parenthash; opaque groupid; Extension extensions; opaque signature; } LeafNodeContent; enum { reserved(0), prepublishedkeypackage(1), leafnodecontent(2), (255) } LeafNodeType; struct LeafNode { // This is not included in the signature, but with the \"unambiguous signature\" PR that's not a problem LeafNodeType type; select(URL) { case prepublishedkeypackage: PrePublishedKeyPackage data; case leafnode_content: LeafNodeContent data; } } What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well, because is an example of an extension that is required and should be a field instead.\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion; this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh, somehow I missed that part in your original post. Makes sense\nI like this general direction. 
It solves a few things. Instead of having leaf nodes contain either a prepublished thing or a normal leaf node, how about having the prepublished thing wrap the leaf node thing? As a bonus, we get separation between the key pair used for Welcome encryption and the key pair that goes in the leaf for TreeKEM. Then when Alice adds Bob, she extracts his LeafNodeContent and sends that in the Add. That doesn't let the other members of the group verify that Alice did the right thing w.r.t. expiry, but I'm not heartbroken about that. We might also be able to remove the extensions field from the outer (KeyPackage) wrapping if we turn the extension into KeyPackage field. I expect we probably need leaf extensions, but would be open to making into a field.\nWhy not keep sending the KeyPackage in the Add? That wouldn't cost much and would allow everyone to validate the lifetime. Note, that the lifetime is crucial in achieving PCS for pre-published key material. If I compromise Bob and get hold of the private key material of a bunch of his KeyPackages, I can add \"Bob\" to any group I want as long as the Credential inside the LeafNodeContent remains valid and Bob doesn't revoke the LeafNodeContent somehow. Also, we'll have to do this if we want to enable any member to send the Welcome message as you proposed here: URL\nI agree that including inside is quite clean, but something is bothering me with this approach: I'd like that the is signed by the participant occupying that leaf, to protect against cross-group attacks. In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\". 
And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU: When you first get the tree, you can't validate whether the groupid should be in a given leaf or not Once you're a member, you require that groupid is sent in Update and UpdatePath.leafkey_package I could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway. 
Proposal: Preparatory PR: Rationalize layout Move \"Cryptographic Objects\" to before \"Ratchet Trees\" Update \"Ratchet Tree Nodes\" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage Move subsections of \"Key Packages\" to under \"Ratchet Trees\" Parent Hash Tree Hashes Update Paths Move Group State to its own top-level section Main PR: Change LeafNode to have the required content: HPKE key for TreeKEM Credential Parent hash Capabilities Extensions Signature Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome Lifetime Extensions Signature Add continues to send KeyPackage Update and Commit send LeafNode instead of KeyPackage Remove and Sender refer to LeafNodeRef instead of KeyPackageRef We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call: General agreement on the approach Should sign group_id when possible Include in TBS if we at least have a flag for pre-published / in-group Or we can store it Or we could have independent structs for pre-published / in-group leaves Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields\nFrom what I can see, there is no reason not to sign the context in the case of an external init. I'm not sure why it got taken out, but while implementing external inits, verification went through even when including the context.\nAs I commented in , it's not even clear that we need this context for members any more, given for MLSPlaintext and the AEAD of MLSCiphertext. So I would propose we just remove the context added to MLSPlaintextTBS altogether. NAME IIRC you were the one that proposed adding the context. WDYT?\nThis was fixed in", "new_text": "The value for hpke_init_key MUST be a public key for the asymmetric encryption scheme defined by cipher_suite. The whole structure is signed using the client's signature key. A KeyPackage object with an invalid signature field MUST be considered malformed. 
The signature is computed by the function \"SignWithLabel\" with the label \"KeyPackage\", over content comprising all of the fields except the signature field. KeyPackage objects MUST contain at least two extensions, one of type \"capabilities\", and one of type \"lifetime\". The \"capabilities\""}
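As a rough illustration of the rule in the record above (sign everything except the signature field, under the "KeyPackage" label), here is a hedged sketch. The field list and the length-prefixed serialization are invented for the example, and HMAC-SHA256 stands in for the real signature algorithm; the draft defines the normative KeyPackage layout.

```python
import hashlib
import hmac

def _opaque(data: bytes) -> bytes:
    # Length-prefix each field so adjacent fields cannot be confused.
    return len(data).to_bytes(2, "big") + data

def sign_with_label(key: bytes, label: bytes, content: bytes) -> bytes:
    # Stand-in for the draft's SignWithLabel (HMAC instead of a signature).
    return hmac.new(key, _opaque(b"MLS 1.0 " + label) + _opaque(content),
                    hashlib.sha256).digest()

# Illustrative field order only; signer and verifier must agree byte-for-byte.
_TBS_FIELDS = ("version", "cipher_suite", "hpke_init_key", "credential", "extensions")

def key_package_tbs(kp: dict) -> bytes:
    # Serialize every field except "signature", in a fixed order.
    return b"".join(_opaque(kp[f]) for f in _TBS_FIELDS)

def sign_key_package(key: bytes, kp: dict) -> dict:
    sig = sign_with_label(key, b"KeyPackage", key_package_tbs(kp))
    return dict(kp, signature=sig)

def verify_key_package(key: bytes, kp: dict) -> bool:
    expected = sign_with_label(key, b"KeyPackage", key_package_tbs(kp))
    return hmac.compare_digest(expected, kp["signature"])
```

Tampering with any signed field makes verification fail, which is the "MUST be considered malformed" check the record's text requires.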
{"id": "q-en-mls-protocol-43b9eebe04393b51e60d44b3e5f3f931b60a7c07d1f62494f9aee9edd1393917", "old_text": "The \"signature\" field in an MLSPlaintext object is computed using the signing private key corresponding to the public key, which was authenticated by the credential at the leaf of the tree indicated by the sender field. The signature covers the plaintext metadata and message content, which is all of MLSPlaintext except for the \"signature\", the \"confirmation_tag\" and \"membership_tag\" fields. If the sender is a member of the group, the signature also covers the GroupContext for the current epoch, so that signatures are specific to a given group and epoch. The \"membership_tag\" field in the MLSPlaintext object authenticates the sender's membership in the group. For an MLSPlaintext with a", "comments": "Signatures as specified in the MLS protocol document are ambiguous, in the sense that signatures produced for one purpose in the protocol can be (maliciously) used in a different place of the protocol. Indeed, we found that for example, a can have the same serialization as , which proves that one signature could be accepted for different structures. The collisions we have found so far are quite artificial, so it is not super clear how to exploit this, but at least it makes it impossible to have provable security. The solution is pretty simple, we simply prefix each signed value with a label, in a fashion similar to . 
That way, when calling you can give the structure name in the argument.\nSomething that is still bothering me is the structure: struct { select (URL) { case member: GroupContext context; case preconfigured: case newmember: struct{}; } WireFormat wireformat; opaque groupid; uint64 epoch; Sender sender; opaque authenticateddata; ContentType contenttype; select (MLSPlaintextTBS.contenttype) { case application: opaque applicationdata; case proposal: Proposal proposal; case commit: Commit commit; } } MLSPlaintextTBS; Since the first is done on a field defined after it, this structure is not parsable. We could say that's not a problem since we only need to serialize it, but for security purposes I would recommend putting it at the end, like this: struct { WireFormat wireformat; opaque groupid; uint64 epoch; Sender sender; opaque authenticateddata; ContentType contenttype; select (MLSPlaintextTBS.contenttype) { case application: opaque applicationdata; case proposal: Proposal proposal; case commit: Commit commit; } select (URL) { case member: GroupContext context; case preconfigured: case newmember: struct{}; } } MLSPlaintextTBS; What do you think about it?\nNice one! Crafting collisions is probably not straightforward, but eliminating the risk altogether is the way to go.\nIt is actually quite easy to craft collisions, I made a script to find them: URL (I wanted to check whether there was accidentally a way to disambiguate the structures)\nIntroduced in .\ns currently have two usages in the spec: Pre-published key packages that you can download and put in a leaf when adding a new member leaf node content that you update when doing a full Currently, things have been designed to use only one structure for both uses, leading to some weirdness such as: and are useful in usage 1, but are parameters of the group in usage 2. is a good name in usage 1, but would better be named in usage 2. 
some data is important in usage 2 and has no meaning in usage 1, hence they were designed as \"mandatory-when-updating extensions\" (such as , and we were also discussing adding the in to help prevent cross-group attacks) How about using different structures for these? // Essentially identical to KeyPackage struct { ProtocolVersion version; CipherSuite ciphersuite; HPKEPublicKey hpkeinitkey; opaque endpointid; Credential credential; Extension extensions; opaque signature; } PrePublishedKeyPackage; struct { HPKEPublicKey hpkekey; opaque endpointid; Credential credential; opaque parenthash; opaque groupid; Extension extensions; opaque signature; } LeafNodeContent; enum { reserved(0), prepublishedkeypackage(1), leafnodecontent(2), (255) } LeafNodeType; struct LeafNode { // This is not included in the signature, but with the \"unambiguous signature\" PR that's not a problem LeafNodeType type; select(URL) { case prepublishedkeypackage: PrePublishedKeyPackage data; case leafnode_content: LeafNodeContent data; } } What would you think about refactoring like this?\nIf we're doing this, I'd suggest including a dedicated public key to encrypt Welcome messages to for the . NAME suggested this at some point. It gives us better FS guarantees, because the receiver of the can delete the corresponding private key immediately after decryption, while they might have to keep the other key around for a bit until they can actually issue an update. This might also help clarify .\nTies into as well, because is an example of an extension that is required and should be a field instead.\nNAME one concern I have is how is that field in created?\nWhen sending a full , the participant includes a new in the (it replaces the field). It computes the signature with where: corresponds to all the fields of except (the usual thing) comes from\nWhat about the scenario where Alice pulls down Bob's key package to add him to the group? 
In that case Alice can't do the conversion between formats on Bob's behalf.\nAlice doesn't have to do the conversion; this is why either contain a (when adding someone to the group) or a (when updating your own key package).\nAh, somehow I missed that part in your original post. Makes sense\nI like this general direction. 
In the first message of this issue I did this by including in , but in fact we don't need to store it in the leaf, we just need to include it in the . In any case, if we want to include in the signature, we need a way to differentiate between leaves that triggered a commit and the ones that are \"fresh\". And for that I think using a tagged-union struct is the cleanest way to do it.\nIt always feels a bit lazy to use an extension (and implementations have to be careful to check for their presence), but we could use a extension in the same way that we use a extension. You will find both of them only on LeafContents that were updated at least once. Another result of including GroupId in the KeyPackage is that the signature will explicitly sign the and I'm not sure if that doesn't completely destroy any deniability guarantees we can still get from MLS.\nI don't think there's really a good way to get the into the leaf. Because of prepublication, it would have to be optional, but in order to prevent attacks, you need rules to decide whether it's required in a given situation. It seems like the best you can do is basically TOFU: When you first get the tree, you can't validate whether the groupid should be in a given leaf or not Once you're a member, you require that groupid is sent in Update and UpdatePath.leafkey_package I could live with a solution of that character, but given it's a bit complex, might prefer to leave it out unless we get a useful property from it. NAME - Re: Sending KeyPackage in Add - Yep, that makes sense. So you would send KeyPackage in Add, and LeafNode in Update.\nI tried to work through what would be necessary to implement this. It looks like it would require some fairly invasive changes to be made to rationalize the document's structure before making this change. But those changes should probably be made anyway. 
Proposal: Preparatory PR: Rationalize layout Move "Cryptographic Objects" to before "Ratchet Trees" Update "Ratchet Tree Nodes" to define ParentNode and LeafNode structs, with LeafNode initially just equal to KeyPackage Move subsections of "Key Packages" to under "Ratchet Trees" Parent Hash Tree Hashes Update Paths Move Group State to its own top-level section Main PR: Change LeafNode to have the required content: HPKE key for TreeKEM Credential Parent hash Capabilities Extensions Signature Change KeyPackage to refer to LeafNode for internals, plus: HPKE key for Welcome Lifetime Extensions Signature Add continues to send KeyPackage Update and Commit send LeafNode instead of KeyPackage Remove and Sender refer to LeafNodeRef instead of KeyPackageRef We should probably do the preparatory PR regardless of this issue.\nDiscussion on working call: General agreement on the approach Should sign group_id when possible Include in TBS if we at least have a flag for pre-published / in-group Or we can store it Or we could have independent structs for pre-published / in-group leaves Do we need extensions at both levels? Capabilities / lifetime should probably be promoted to fields\nFrom what I can see, there is no reason not to sign the context in the case of an external init. I'm not sure why it got taken out, but while implementing external inits, verification went through even when including the context.\nAs I commented in , it's not even clear that we need this context for members any more, given for MLSPlaintext and the AEAD of MLSCiphertext. So I would propose we just remove the context added to MLSPlaintextTBS altogether. NAME IIRC you were the one that proposed adding the context. WDYT?\nThis was fixed in", "new_text": "The \"signature\" field in an MLSPlaintext object is computed using the signing private key corresponding to the public key, which was authenticated by the credential at the leaf of the tree indicated by the sender field. 
The signature is computed using \"SignWithLabel\" with label \"\"MLSPlaintextTBS\"\" and with a content that covers the plaintext metadata and message content, which is all of MLSPlaintext except for the \"signature\", the \"confirmation_tag\" and \"membership_tag\" fields. If the sender is a member of the group, the content also covers the GroupContext for the current epoch, so that signatures are specific to a given group and epoch. The \"membership_tag\" field in the MLSPlaintext object authenticates the sender's membership in the group. For an MLSPlaintext with a"}
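The record above says the signature is computed with "SignWithLabel" over content covering the plaintext metadata, message content, and (for members) the GroupContext. The following sketch illustrates how such labeled signing content can be assembled; the "MLS 1.0 " label prefix and the exact TBS serialization are assumptions here, not quotes from the draft, and the actual signature operation (e.g. Ed25519) is left out of scope.

```python
# Sketch (not normative): assembling the byte string an MLS-style
# labeled signature covers. The "MLS 1.0 " prefix and TBS layout
# are assumptions; a real implementation follows the spec's structs.

def encode_varint(n):
    """Minimum-length QUIC-style variable-length integer (RFC 9000)."""
    for log, bits in ((0, 6), (1, 14), (2, 30), (3, 62)):
        if n < (1 << bits):
            length = 1 << log
            data = n.to_bytes(length, "big")
            return bytes([data[0] | (log << 6)]) + data[1:]
    raise ValueError("value too large for variable-length encoding")

def encode_vec(data):
    """A byte string prefixed with its variable-length size (opaque data<V>)."""
    return encode_varint(len(data)) + data

def sign_with_label_content(label, content):
    # SignWithLabel(key, label, content) would sign this serialized
    # structure with the group's signature scheme (out of scope here).
    return encode_vec(b"MLS 1.0 " + label.encode()) + encode_vec(content)

# Stand-in for the serialized MLSPlaintextTBS, including the GroupContext
# for the current epoch so signatures are bound to a group and epoch.
tbs = b"...serialized MLSPlaintextTBS, including the GroupContext..."
to_sign = sign_with_label_content("MLSPlaintextTBS", tbs)
```

Binding the GroupContext into `tbs` is what makes a signature from one group/epoch unusable in another.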
{"id": "q-en-mls-protocol-7b637b54646b463a263b8d61e348f8b0c50d9c62abf83ca42f506406920fc28c", "old_text": "actually allowed to evict other members; groups can enforce access control policies on top of these basic mechanism. 5. The protocol uses \"ratchet trees\" for deriving shared secrets among a", "comments": "Adds the overview text proposed in .\nThanks NAME !\nThanks for putting together the overview. The figures are great :-) Looks good to me with one proposition.", "new_text": "actually allowed to evict other members; groups can enforce access control policies on top of these basic mechanism. 4.3. A group comprises a single linear sequence of epochs and groups are generally independent of one another. However, it can sometimes be useful to link epochs cryptographically, either within a group or across groups. MLS derives a resumption pre-shared key (PSK) from each epoch to allow entropy extracted from one epoch to be injected into a future epoch. This link guarantees that members entering the new epoch agree on a key if and only if they were members of the group during the epoch from which the resumption key was extracted. MLS supports two ways to tie a new group to an existing group. Re-initialization closes one group and creates a new group comprising the same members with different parameters. Branching starts a new group with a subset of the original group's participants (with no effect on the original group). In both cases, the new group is linked to the old group via a resumption PSK. Applications may also choose to use resumption PSKs to link epochs in other ways. For example, the following figure shows a case where a resumption PSK from epoch \"n\" is injected into epoch \"n+k\". This demonstrates that the members of the group at epoch \"n+k\" were also members at epoch \"n\", irrespective of any changes to these members' keys due to Updates or Commits. 5. The protocol uses \"ratchet trees\" for deriving shared secrets among a"}
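The overview text in the record above describes deriving a resumption PSK from one epoch and injecting it into a later one. A minimal sketch of that idea follows; the label framing and the mixing step are simplified stand-ins (the real key schedule uses the spec's KDFLabel struct and PSK injection points), and all secret values here are illustrative.

```python
import hashlib
import hmac

# Illustrative sketch of linking epochs via a resumption PSK.
# The label framing and the mixing into the later epoch are
# simplified stand-ins, not the spec's key-schedule definitions.

def hkdf_expand(prk, info, length):
    """Plain HKDF-Expand (RFC 5869) over HMAC-SHA256."""
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def derive_secret(secret, label):
    # Stand-in for DeriveSecret(secret, label); real MLS wraps the
    # label in a serialized KDFLabel struct.
    return hkdf_expand(secret, b"MLS 1.0 " + label.encode(), 32)

# Epoch n extracts a resumption PSK from its epoch secret.
epoch_n_secret = hashlib.sha256(b"example epoch n secret").digest()
resumption_psk = derive_secret(epoch_n_secret, "resumption")

# Epoch n+k mixes that PSK into its own schedule: only clients that
# held epoch n's secret can compute the same linked value.
epoch_nk_input = hashlib.sha256(b"example commit secret for n+k").digest()
linked_secret = hmac.new(resumption_psk, epoch_nk_input, hashlib.sha256).digest()
```

The point of the construction is agreement: two clients land on the same `linked_secret` if and only if both can derive the epoch-n resumption PSK.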
{"id": "q-en-mls-protocol-462ede26f597860eaaddfbba565b537f21017bb4ec96013f937c79fc288f68d4", "old_text": "precisely two children, a _left_ child and a _right_ child. A node is the _root_ of a tree if it has no parents, and _intermediate_ if it has both children and parents. The _descendants_ of a node are that node, its children, and the descendants of its children, and we say a tree _contains_ a node if that node is a descendant of the root of the tree. Nodes are _siblings_ if they share the same parent. A _subtree_ of a tree is the tree given by the descendants of any node, the _head_ of the subtree. The _size_ of a tree or subtree is the number of leaf nodes it contains. For a given parent node, its _left subtree_ is the subtree with its left child as head (respectively _right subtree_).", "comments": "Make the definition of descendant match the common-sense definition (you are not your own descendant), adjust the text which uses the definition. Note: this allows for a much clearer definition of the tree invariant as well.", "new_text": "precisely two children, a _left_ child and a _right_ child. A node is the _root_ of a tree if it has no parents, and _intermediate_ if it has both children and parents. The _descendants_ of a node are that node's children, and the descendants of its children, and we say a tree _contains_ a node if that node is a descendant of the root of the tree, or if the node itself is the root of the tree. Nodes are _siblings_ if they share the same parent. A _subtree_ of a tree is the tree given by any node (the _head_ of the subtree) and its descendants. The _size_ of a tree or subtree is the number of leaf nodes it contains. For a given parent node, its _left subtree_ is the subtree with its left child as head (respectively _right subtree_)."}
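The corrected definitions in the record above (a node is not its own descendant; a tree contains its root and its root's descendants) can be modeled directly. This is a small illustrative sketch with our own node representation, not the spec's array-based tree encoding.

```python
# A small model of the tree-terminology definitions (sketch; the node
# representation here is ours, not the spec's encoding).

class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def descendants(node):
    """A node's descendants: its children and their descendants.
    Per the corrected definition, a node is not its own descendant."""
    out = []
    for child in (node.left, node.right):
        if child is not None:
            out.append(child)
            out.extend(descendants(child))
    return out

def contains(tree_root, node):
    """A tree contains a node if it is the root or a descendant of it."""
    return node is tree_root or node in descendants(tree_root)

def size(node):
    """Size of a (sub)tree = number of leaf nodes it contains."""
    if node.left is None and node.right is None:
        return 1
    return size(node.left) + size(node.right)

# Complete binary tree with 4 leaves and 3 parent nodes.
leaves = [Node() for _ in range(4)]
root = Node(Node(leaves[0], leaves[1]), Node(leaves[2], leaves[3]))
```

Note that `descendants(root)` excludes `root` itself, which is exactly the change the record makes to the old definition.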
{"id": "q-en-mls-protocol-b716a06b2df9a908a6ed27172a9f16a0067e2e3be37a400adacde4a51914427d", "old_text": "the GroupContext. This extension lists the extensions and proposal types that must be supported by all members of the group. For new members, it is enforced by existing members during the application of Add commits. Existing members should of course be in compliance already. In order to ensure this continues to be the case even as the group's extensions can be updated, a GroupContextExtensions proposal is invalid if it contains a \"required_capabilities\" extension that requires capabilities not supported by all current members. 10.2.", "comments": "cc NAME\nLooks good to me, thanks!\nComment by NAME Maybe add a sentence saying this means there is no repudiation of messages?\nI would prefer not to say anything like that here, because the degree to which signature implies non-repudiation depends on a bunch of context for how the signature keys are managed. Clearly if you have long term or certified keys, then yes, there's a pretty high degree of non-repudiation. But people are also interested in configuring MLS to be deniable, e.g., by exchanging signature keys over a deniable channel. So I would propose closing this with no action.\nAh, that is a very good point. In my head I was thinking signature means proof of key and key is proven to person. But that might certainly not be the case here.\nThe description of is clear that proposals defined in the RFC are not to be included because they are considered standard, however there is no such clarification around extensions. In general, extension requirements are not all that clear because in several places (such as Lifetime) it is mentioned that an extension must be included, however they are all listed as recommended rather than required in the table. 
It also raises a question in that if an extension is required to be implemented as part of the specification, and required in every KeyPackage, should it just be a field rather than an extension?\nEven that is not quite clear to me. At least to me it doesn't follow from Section 10.1. The last one at least, I can shed some light on, as we had this discussion before and I was arguing the same thing. From what I remember, the argument was that keeping it an extension rather than a field would force implementers to implement the Extension mechanism. I don't have a super strong opinion here, but I seem to remember that that's what it boiled down to. I generally agree, though, that some things around RequiredCapabilities (RC) and Capabilities in general are unclear. I discussed this with NAME and here are some examples/questions that came up: If a group doesn't have an RC extension, can we assume that all extensions/proposals in the spec are supported by everyone, or do we have to check all KeyPackages? Do we want all extensions/proposals in the spec to be MTI and thus implicitly supported by everyone? It might be useful to distinguish between basic (MTI) ones like Add, Remove, etc. and consider some optional such as ExternalInit or AppAck. Relatedly, should the extension in the KeyPackage list all the Spec proposals/extensions, too? Are they invalid if they do or if they don't? We might want to discuss the difference between RC and a potential group policy. The way I understand it, RC is not meant to be an allow list or a block list, but instead should specify a minimum requirement for all clients in the group. A question here would be: If it's not in the RC, but everyone's capabilities support it, can I use/send it?\nReacting to Konrad's points here: It seems like, to be sure, you always need to check. 
I would propose the answer to this be \"Yes, the proposals/extensions in the spec are MTI\" If the answer to the prior question is as I propose, then you don't need to list spec proposals/extensions in Capabilities The difference between RC and checking everyone supports it is whether we expect it to be supported in the future. For Proposals, that might not matter; for GroupContext extensions, it matters more. It also seems like there's a fair argument that RequiredCapabilities should just be removed from the spec and replaced by some application-layer coordinated policy. I would be OK with that. To NAME point about mandatory extensions -- I would probably be OK with converting mandatory extensions to fields, especially with the better division of labor proposed in .\nI kind of like the idea of RequiredCapabilities because it enforces the proper checks lower down. Application layer would need to proxy packets via a middleware type of layer which could potentially be messy.\nDiscussion on working call: Agreement to not list in-spec capabilities in RC/Capabilities, so all those things are MTI", "new_text": "the GroupContext. This extension lists the extensions and proposal types that must be supported by all members of the group. The \"default\" proposal and extension types defined in this document are assumed to be implemented by all clients, and need not be listed in RequiredCapabilities in order to be safely used. For new members, support for required capabilities is enforced by existing members during the application of Add commits. Existing members should of course be in compliance already. In order to ensure this continues to be the case even as the group's extensions can be updated, a GroupContextExtensions proposal is invalid if it contains a \"required_capabilities\" extension that requires non- default capabilities not supported by all current members. 10.2."}
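The validity rule in the record's new text — default types are implicitly supported, and a GroupContextExtensions proposal is invalid if its required_capabilities include a non-default capability some current member lacks — reduces to a simple set check. The capability names and default set below are illustrative stand-ins, not the spec's code points.

```python
# Sketch of the required_capabilities validity check described above.
# Capability names and the default (MTI) set are our stand-ins for the
# spec's numeric proposal/extension types.

DEFAULT_CAPABILITIES = {"add", "update", "remove", "psk"}  # assumed MTI set

def proposal_is_valid(required, member_capabilities):
    """required: set of capability names in the required_capabilities
    extension; member_capabilities: iterable of each member's set."""
    # Default types need not be listed or checked.
    to_check = set(required) - DEFAULT_CAPABILITIES
    return all(to_check <= set(caps) for caps in member_capabilities)

members = [
    {"add", "update", "remove", "psk", "app_ack"},
    {"add", "update", "remove", "psk"},
]
```

Listing a default type (here, `"add"`) is harmless: it is subtracted out before checking, matching the "need not be listed in order to be safely used" language.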
{"id": "q-en-mls-protocol-5763198fb352c67e7513d4ca8ed46f1edac0dcf31aafbf40106ba87e638f5b63", "old_text": "be perceived as creating a conflict of interest for a particular MLS DE, that MLS DE SHOULD defer to the judgment of the other MLS DEs. 19. References 19.1. URIs", "comments": "This is a minimal section for MIME type registration of message/mls. It assumes that PR include wire_format in all MLS messages (making a format parameter unnecessary), and that all messages have the MLS version number encoded (making a version parameter unnecessary). These parameters can be added if needed.\nWhen MLS messages are transmitted over a protocol which supports Content-Type / MIME Type negotiation, signal that the message is an MLS message. Because MLS messages are really a bag of formats today we should have a MIME type parameter which specifies the specific format included. message/mls;format={applicationgroupcontext}\nThis seems like a fine idea to me. Could go as a section in the protocol spec, or as a separate (small) spec.\nDiscussion on virtual interim: Section or separate doc? How do we handle versioning? Internal to the protocol (field in MLSMessage?) Field in the MIME type Different MIME type per version\nI would prefer a short section in the protocol document. I think Internal to the protocol is cleanest. Likewise the format parameter becomes unnecessary if we add Welcome, PublicGroupState, and KeyPackage to wire_format (see issue )\nNAME send a PR?\nAsking for friend: would we want to define (or use) a structured syntax suffix ala ?\nNAME If you mean for multiple formats, no, we have only one encoding. If someone wants to re-encode the protocol as XML or whatever, a MIME type will be the least of their troubles.\nFixed in", "new_text": "be perceived as creating a conflict of interest for a particular MLS DE, that MLS DE SHOULD defer to the judgment of the other MLS DEs. 18.6. 
This document registers the \"message/mls\" MIME media type in order to allow other protocols (e.g., HTTP RFC7540) to convey MLS messages. 19. References 19.1. URIs"}
{"id": "q-en-mls-protocol-c98d48e928f782b4bea10c845fd2fe7935c7d9676a8c7b0f98737b28b3329f5b", "old_text": "Terminology specific to tree computations is described in ratchet- tree-terminology. We use the TLS presentation language RFC8446 to describe the structure of protocol messages. 3.", "comments": "There are two changes in this PR: Pull the extension to the TLS syntax up into the Terminology section Add an extension that lets vectors have variable-length headers, allowing them to be efficient at small sizes while also accommodating large sizes All vectors are converted to the variable-length header representation. It seems like this should simplify implementation, since the encoder will no longer have to be told how many bytes of header to use for a given vector. This has been the source of several annoying interop bugs.\nLooks great! I only have a small objection: there are now several valid ways to serialize a length, as the example shows, both and are valid ways to encode 37. This means that if you parse a structure, and later want to re-serialize it to check a signature for example, you need to remember how many bits were used to store the length. In short, we lose the property that . The fix is easy: can we add that \"the length MUST be stored using the smallest amount of bits possible\"?\nGood point, updated!\nI wasn't clear under which circumstances you would use a longer length with a number that would fit in a smaller range. If never, then the table should be:\nDiscussion on virtual interim Some concern about being able to send gigantic stuff In adversarial cases, the 4-byte headers used for a lot of stuff might already be a problem Could also have length constraints decoupled from the syntax SIP experience shows that imposing arbitrary length limits isn't a good idea ... 
but applications should have a way to signal that things are too big TODO(NAME Advisory text \"Beware, things can get big; consider imposing limits and clear failure cases for this\" Consider streaming implementations, where you might not know that the whole message fits in memory Modulo the above TODO, clear to merge", "new_text": "Terminology specific to tree computations is described in ratchet- tree-terminology. 2.1. We use the TLS presentation language RFC8446 to describe the structure of protocol messages. In addition to the base syntax, we add two additional features, the ability for fields to be optional and the ability for vectors to have variable-size length headers. 2.1.1. An optional value is encoded with a presence-signaling octet, followed by the value itself if present. When decoding, a presence octet with a value other than 0 or 1 MUST be rejected as malformed. 2.1.2. In the TLS presentation language, vectors are encoded as a sequence of encoded elements prefixed with a length. The length field has a fixed size set by specifying the minimum and maximum lengths of the encoded sequence of elements. In MLS, there are several vectors whose sizes vary over significant ranges. So instead of using a fixed-size length field, we use a variable-size length using a variable-length integer encoding based on the one in Section 16 of RFC9000. (They differ only in that the one here requires a minimum-size encoding.) Instead of presenting min and max values, the vector description simply includes a \"V\". For example: Such a vector can represent values with length from 0 bytes to 2^62 bytes. The variable-length integer encoding reserves the two most significant bits of the first byte to encode the base 2 logarithm of the integer encoding length in bytes. The integer value is encoded on the remaining bits, in network byte order. The encoded value MUST use the smallest number of bits required to represent the value. 
When decoding, values using more bits than necessary MUST be treated as malformed. This means that integers are encoded on 1, 2, 4, or 8 bytes and can encode 6-, 14-, 30-, or 62-bit values respectively. For example, the eight-byte sequence c2 19 7c 5e ff 14 e8 8c (in hexadecimal) decodes to the decimal value 151288809941952652; the four byte sequence 9d 7f 3e 7d decodes to 494878333; the two byte sequence 7b bd decodes to 15293; and the single byte 25 decodes to 37 (as does the two byte sequence 40 25). The following figure adapts the pseudocode provided in RFC9000 to add a check for minimum-length encoding: The use of variable-size integers for vector lengths allows vectors to grow very large, up to 2^62 bytes. Implementations should take care not to allow vectors to overflow available storage. To facilitate debugging of potential interoperability problems, implementations should provide a clear error when such an overflow condition occurs. 3."}
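The decoding rules and worked examples quoted in this record translate directly into code. Here is a sketch of a decoder with the minimum-encoding check the new text adds on top of RFC 9000 (note that under that check the non-minimal two-byte encoding 40 25 of 37 must be rejected, even though plain QUIC accepts it).

```python
# Sketch of the variable-length integer decoding described above:
# the two high bits of the first byte give log2 of the encoded length;
# MLS additionally rejects non-minimal encodings.

def read_varint(data):
    prefix = data[0] >> 6
    length = 1 << prefix
    if len(data) != length:
        raise ValueError("wrong number of bytes for this prefix")
    value = data[0] & 0x3F
    for b in data[1:]:
        value = (value << 8) | b
    # Minimum-encoding check: the value must not fit in the next
    # smaller size (the 6-, 14-, or 30-bit boundaries).
    if prefix >= 1 and value < (1 << (8 * (length // 2) - 2)):
        raise ValueError("minimum encoding was not used")
    return value
```

Running it against the record's worked examples reproduces the stated values, and the non-minimal sequence 40 25 raises instead of decoding to 37.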
{"id": "q-en-mls-protocol-c98d48e928f782b4bea10c845fd2fe7935c7d9676a8c7b0f98737b28b3329f5b", "old_text": "The tree hash of a tree is the tree hash of its root node, which we define recursively, starting with the leaves. As some nodes may be blank while others contain data we use the following struct to include data if present. The tree hash of a leaf node is the hash of leaf's \"LeafNodeHashInput\" object which might include a \"LeafNode\" object depending on whether or not it is blank.", "comments": "There are two changes in this PR: Pull the extension to the TLS syntax up into the Terminology section Add an extension that lets vectors have variable-length headers, allowing them to be efficient at small sizes while also accommodating large sizes All vectors are converted to the variable-length header representation. It seems like this should simplify implementation, since the encoder will no longer have to be told how many bytes of header to use for a given vector. This has been the source of several annoying interop bugs.\nLooks great! I only have a small objection: there are now several valid ways to serialize a length, as the example shows, both and are valid ways to encode 37. This means that if you parse a structure, and later want to re-serialize it to check a signature for example, you need to remember how many bits were used to store the length. In short, we lose the property that . The fix is easy: can we add that \"the length MUST be stored using the smallest amount of bits possible\"?\nGood point, updated!\nI wasn't clear under which circumstances you would use a longer length with a number that would fit in a smaller range. 
If never, then the table should be:\nDiscussion on virtual interim Some concern about being able to send gigantic stuff In adversarial cases, the 4-byte headers used for a lot of stuff might already be a problem Could also have length constraints decoupled from the syntax SIP experience shows that imposing arbitrary length limits isn't a good idea ... but applications should have a way to signal that things are too big TODO(NAME Advisory text \"Beware, things can get big; consider imposing limits and clear failure cases for this\" Consider streaming implementations, where you might not know that the whole message fits in memory Modulo the above TODO, clear to merge", "new_text": "The tree hash of a tree is the tree hash of its root node, which we define recursively, starting with the leaves. The tree hash of a leaf node is the hash of leaf's \"LeafNodeHashInput\" object which might include a \"LeafNode\" object depending on whether or not it is blank."}
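The recursive tree-hash definition in this record (root hash computed bottom-up, with leaf hashes covering an optionally-present LeafNode) can be sketched as follows. The concrete serializations and domain-separation tags here are simplified stand-ins for the spec's LeafNodeHashInput/ParentNodeHashInput structs.

```python
import hashlib

# Recursive tree-hash sketch matching the description above. The
# HashInput serializations are simplified stand-ins for the spec's
# structs; blank nodes hash a "not present" marker.

def leaf_hash(leaf_node):
    # leaf_node is serialized LeafNode bytes, or None if the leaf is blank.
    data = b"\x00" if leaf_node is None else b"\x01" + leaf_node
    return hashlib.sha256(b"leaf" + data).digest()

def parent_hash(parent_node, left_hash, right_hash):
    # A parent's hash covers its own (optional) contents plus the tree
    # hashes of its left and right subtrees.
    data = b"\x00" if parent_node is None else b"\x01" + parent_node
    return hashlib.sha256(b"parent" + data + left_hash + right_hash).digest()

def tree_hash(node):
    # node is ("leaf", bytes_or_None) or
    # ("parent", bytes_or_None, left_subtree, right_subtree).
    if node[0] == "leaf":
        return leaf_hash(node[1])
    return parent_hash(node[1], tree_hash(node[2]), tree_hash(node[3]))

tree = ("parent", None, ("leaf", b"alice"), ("leaf", None))
root_hash = tree_hash(tree)
```

Because blank and occupied nodes hash distinct presence markers, a blank leaf can never collide with an occupied leaf carrying empty contents.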
{"id": "q-en-mls-protocol-c98d48e928f782b4bea10c845fd2fe7935c7d9676a8c7b0f98737b28b3329f5b", "old_text": "A GroupContextExtensions proposal is used to update the list of extensions in the GroupContext for the group. \"struct { Extension extensions<0..2^32-1>; } GroupContextExtensions; \" A member of the group applies a GroupContextExtensions proposal with the following steps:", "comments": "There are two changes in this PR: Pull the extension to the TLS syntax up into the Terminology section Add an extension that lets vectors have variable-length headers, allowing them to be efficient at small sizes while also accommodating large sizes All vectors are converted to the variable-length header representation. It seems like this should simplify implementation, since the encoder will no longer have to be told how many bytes of header to use for a given vector. This has been the source of several annoying interop bugs.\nLooks great! I only have a small objection: there are now several valid ways to serialize a length, as the example shows, both and are valid ways to encode 37. This means that if you parse a structure, and later want to re-serialize it to check a signature for example, you need to remember how many bits were used to store the length. In short, we lose the property that . The fix is easy: can we add that \"the length MUST be stored using the smallest amount of bits possible\"?\nGood point, updated!\nI wasn't clear under which circumstances you would use a longer length with a number that would fit in a smaller range. If never, then the table should be:\nDiscussion on virtual interim Some concern about being able to send gigantic stuff In adversarial cases, the 4-byte headers used for a lot of stuff might already be a problem Could also have length constraints decoupled from the syntax SIP experience shows that imposing arbitrary length limits isn't a good idea ... 
but applications should have a way to signal that things are too big TODO(NAME Advisory text \"Beware, things can get big; consider imposing limits and clear failure cases for this\" Consider streaming implementations, where you might not know that the whole message fits in memory Modulo the above TODO, clear to merge", "new_text": "A GroupContextExtensions proposal is used to update the list of extensions in the GroupContext for the group. \"struct { Extension extensions; } GroupContextExtensions; \" A member of the group applies a GroupContextExtensions proposal with the following steps:"}
{"id": "q-en-mls-protocol-5b1e46c511c7c6c0c175dc33c53c102422163008e9740c3b38f0f58785e61f97", "old_text": "Verify that the signature on the KeyPackage is valid using the public key in \"leaf_node.credential\". 11.2. Within MLS, a KeyPackage is identified by its hash (see, e.g.,", "comments": "This change ensures that the \"init\" key pair associated with the key package is one-time-use, at least to the degree that KeyPackages are one-time-use. The init key pair is used to encrypt/decrypt the Welcome message, then never again. The main drawback is requiring the KeyPackage sender to generate an additional key pair, which requires more computation and more entropy.", "new_text": "Verify that the signature on the KeyPackage is valid using the public key in \"leaf_node.credential\". Verify that the value of \"leaf_node.public_key\" is different from the value of the \"init_key\" field. 11.2. Within MLS, a KeyPackage is identified by its hash (see, e.g.,"}
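The validation step added in this record — the Welcome-encryption init key must differ from the leaf's TreeKEM key, keeping the init key pair one-time-use — is a one-line check. The dictionary layout below is an illustrative stand-in for a parsed KeyPackage; field names follow the structs quoted in the record.

```python
# Sketch of the added KeyPackage validation step: reject a KeyPackage
# whose init_key (Welcome encryption) equals leaf_node.public_key
# (TreeKEM). The dict layout stands in for a parsed KeyPackage.

def validate_key_package(key_package):
    if key_package["init_key"] == key_package["leaf_node"]["public_key"]:
        raise ValueError("init_key must differ from leaf_node.public_key")

kp_ok = {"init_key": b"\x01" * 32, "leaf_node": {"public_key": b"\x02" * 32}}
kp_bad = {"init_key": b"\x01" * 32, "leaf_node": {"public_key": b"\x01" * 32}}
```

A full validator would run this alongside the signature check against the public key in `leaf_node.credential`.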
{"id": "q-en-mls-protocol-ae9b605f569d90a71bf652993a28b28017e86c3d29b73372e5741087d4d6a161", "old_text": "Given these inputs, the derivation of secrets for an epoch proceeds as shown in the following diagram: A number of secrets are derived from the epoch secret for different purposes: The \"external secret\" is used to derive an HPKE key pair whose private key is held by the entire group: The public key \"external_pub\" can be published as part of the", "comments": "Looks good! I would maybe add 1 sentence explaining the rule for what is called a secret and what is called a key. The rule says that a secret will go through some key derivation to obtain a key/keys, is that right?\nNAME Yep, that's exactly right. Added a sentence. Thanks for the feedback everyone!\nReopening due to incomplete discussion and inaccuracy. The claims presented in the response by NAME are incorrect, demonstrate a misunderstanding of the roles of these keys and their uses, and constitute a change to the fundamental uses thereof. The authenticationsecret is NOT a public value nor was it ever introduced as such. If an application is using it as a public value, that onus is on the developers, and I advise against using Cisco's implementation as a standard for MLS. As shown in the name and all original discussion, the authenticationsecret is intended as a secret value (or else 'public' in the same way that the group secret is 'public'). It is used for further derivation to support authentication. This should be corrected, and such foundational changes without WG discussion should be avoided.\nClearly we have a disagreement here NAME but maybe let's hold off on attributing misunderstanding. It seems like you are claiming that in order for MLS to be secure, the needs to be protected to the same level as the . What is this based on? If that is true, then we should probably remove the altogether. 
The \"out-of-band\" protocol in which it would be confirmed would have to be part of the MLS security analysis, thus no longer out-of-band.\nNAME that is not the claim at all - different uses imply different compromise risks. This should not be a new concept to understand. As demonstrated in the terminology, the authenticationsecret is linked to authentication, not confidentiality. MLS, per specification, outsources authentication (the AS is not explicitly defined and all entity authentication within MLS relies on it). Even though it is definitionally out-of-scope and therefore out-of-band (certificates, authentication verification are entirely out-of-band of the protocol spec), we do not say that signature secret keys can be \"public\" - yet you are asserting that an authenticationsecret which can support epoch-level authentication would be public. The authenticationsecret is private in the same way that the exportervalue is: based on key schedule derivation, compromise of these keys should not undermine MLS, but secrecy of them can be leveraged. So, unless you are now proposing removing mention of certificates, etc. from the core protocol, there really isn't a logical basis for removing 'out-of-band' authentication mechanisms.\nWhat I'm trying to get at here is: What is the negative consequence if an is known by someone other than the members of a group? For example, leaking a signature private key would enable the attacker to impersonate the corresponding member. Leaking the would allow the attacker to decrypt all of the group's messages. What is the bad consequence of an becoming public?\nNAME I am also assuming that this push to use epoch-level authentication values as public codes is based on Cisco's implementation choice. I would be curious to know why there is such insistence that all implementations also go about using authentication in the same way. 
If an implementation chooses to use it as a secret value to then derive an authentication code based on yet other information, why is that a problem?\nPublishing the authenticationsecret as a public value has a similar effect as publishing the exportervalue as public - it is not an issue for MLS per protocol spec, but it would definitely be an issue for the application if it is then used for something secret thereafter. One reason not to specify these as public values is that they are derived following the conditions placed on keys and therefore could be used as secret values if desired by an application. Forcing or annotating them as immediately public, or as final 'codes', when such a use as a secret key is an out-of-scope potential is artificial and unnecessary.\nI'm not pushing anything, I'm just trying to get the protocol to be clear. If this thing needs to be secret, then it needs to be secret, and we need to document that so that people keep it secret. Thanks for elaborating more on your concerns. I see what you're saying -- basically the same point that NAME and I discussed . I think the core of the disagreement here is what we mean when we say that a value is \"public\" or \"secret\". I mean something very specific: That the security guarantees of MLS will not be harmed if a public value is provided to the attacker, and they will be harmed if a secret value is provided to the attacker. Importantly, \"public\" does not mean that the value must be published, or that the security guarantees of an application using MLS will not be harmed if you publish the value. It seems to me that in that specific taxonomy, the is \"public\" and not \"secret\". Do you disagree?\nBTW, the notion of something being \"public\" at one layer but \"secret\" at another is not new. For example, values exported from TLS aren't secret in the sense that publishing them would compromise TLS. 
You could authenticate your TLS connection by exporting a value and shouting it through megaphones across a crowd. But the main way people use them in practice is as secrets in some other protocol. Every WebRTC call uses SRTP keys that are exported from TLS, and many people would be very sad if these \"public\" values were published.\nIf it would help, I would be happy to add some text to the Security Considerations clarifying this distinction. Basically, draw a clear boundary around which values have to be kept confidential to the client in order to realize the security properties of the protocol. But I still think is a clearer name. Doesn't mean you have to use it in any particular way in your application; doesn't prevent it being used as a shared secret; but also doesn't imply that the MLS library has to worry about its secrecy.\nWithin the bounds of that particular taxonomy there is not an issue. However, while you may be following a given taxonomy, that is not a general one, one that would be widely understood by implementors, nor is it specified in the protocol spec. It is not an unreasonable taxonomy, but it is certainly not a general one. If you are looking for MLS usage spec to align to your taxonomy then I do think that there should be a clarification section added. BTW as a point case on more standard terminology use, the TLS 1.3 spec uses the exact phrasing for exporters as \"exportermastersecret\", from which the actual exporter may be derived. This aligns very closely to the authenticationsecret, from which an authentication code later may be derived. TLS is generic in 'exporters' after an exportermastersecret is derived. Thus there is alignment of the term 'secret' to things that can be used for keys (have sufficient entropy), not as an alignment to public/secret at different layers. I am strongly opposed to the use of authenticationcode, and see no clear definition for that in your customized taxonomy. 
As such, it would follow the more standard interpretations, such as message authentication codes, etc. which are indeed explicitly public values. Codes are normally an end-state in themselves, not keys. Perhaps it is a clearer name for the Cisco implementation design, but it is not a clear name for something that can/should be used for further derivations as may be more generally applicable. Following the TLS norm for exporters, the name would be \"authenticationmastersecret\". The shortened \"authenticationsecret\" was a good name. If you are going to provide a write-up for the custom taxonomy, then an adjustment to \"authenticationexporter\" could perhaps be made.\nThe notion of having a boundary to the protocol and values that may or may not safely be exported across the boundary is not exotic or customized. You see it in FIPS 140-2, PKCS, and WebCrypto, and in tools like ProVerif and Tamarin. The only choice I'm making is in the specific words, in particular reserving \"secret\" for \"something that should not be exposed outside the protocol implementation because it would compromise the protocol\". I hope you agree that is different from say in this regard. It would be OK for an MLS implementation to have an API for an application to access the , but it would not be OK to have an API to access the . I'm not wedded to , I just want something other than . If you want to align with the exported values (to which I agree it is similar), we could call it an .\nThe distinction between those documents and tools, and the MLS spec is that those tools clearly define the boundaries and terminology, vs. just anyone making terminology assumptions without them being specified in the document. 
Not to be pedantic, but if you want to assume specific meanings for terms relative to a threat model for MLS only, versus a general usage (\"exporter secrets\" being secrets used in other things), then you really do need to specify those assumptions within the specification itself and not various side discussions. Such specification is part of what a standard is meant to do. Within other documents this is clearly defined - i.e., what is known as a threat model - such as one sees in ProVerif and Tamarin. Assuming that you are going to add such a taxonomy, and that it follows that described by you in , then the correct term would be \"authenticationkey\". To quote you what you said there, which correctly aligns to a reasonable usage of the authenticationkey. pointed out that there is a lack of consistency in the usage of \"keys\" vs \"secrets\" throughout the document - something that still has not been addressed and should not be dropped from the issue for edits. Regardless of what taxonomy is being used or even assumed, there is no consistency in it in the spec. You seem to be commenting on adjustments only for the \"key\" and \"secret\" uses that appear as variables, but the commentary within the spec interchangeably use these terms for the same thing. Whatever is used should be applied consistently. It is not reasonable to assume that developers will correctly guess reasons for terminology separations particularly when they are inconsistently enforced.\nOn your latter point about usage of \"keys\" vs \"secrets\" throughout the document \u2014 Sorry if I misinterpreted your intent here. 
If you're seeing places where there are inconsistencies, it would be helpful if you could highlight those more specifically, or ideally send a PR to fix them.\nNAME had a call about this today and hashed out a couple of things: We should be clear in general about what we mean by secrets/keys, in that we generally use them interchangeably, but sometimes use one or the other to indicate how a specific value is used The need for secrecy of the value labeled depends on the context in which it is used. So we should provide some text to suggest that there are different ways to use it, and use a fairly neutral name. NAME suggested , which I like a lot. I have attempted to implement these in\nThe spec currently uses a mix of terminologies for keys/secrets. In the key schedule we have the following: senderdatasecret | \"sender data\" encryptionsecret | \"encryption\" exportersecret | \"exporter\" authenticationsecret | \"authentication\" externalsecret | \"external\" confirmationkey | \"confirm\" membershipkey | \"membership\" resumptionsecret | \"resumption\" It would be best to provide coherence in how we use these terms and check for consistency throughout, to avoid confusion for readers: A simple solution would be to use \"secret\" for all cases, i.e., changing membershipkey to membershipsecret. Another alternative, which may take more refining but could be useful to readers, is to separate out terms based on items that are for internal protocol use versus external protocol use. For example, encryption, confirmation, membership, etc., would use \"key\" and exporter, authentication, etc., could use \"secret\", or vice versa. --Break-- Ref the mailing list on confirmation vs authentication secrets, perhaps the naming of these two is a little misleading - authentication normally being within bounds of a protocol while confirmation is external to it. This confusion probably sparked some of that discussion to begin with. 
It could be worth considering switching those terms for clarity.\nI actually think we have a fairly coherent taxonomy here: \"secret\" is something that gets fed into HKDF to derive further things \"key\" is something that gets fed into another algorithm, e.g., HMAC Looking through the list, the ones that deviate from this pattern are: , which is exported to the application but expected to be kept private , which is exported to the application but not expected to be kept private Given that, I would propose we update the table in question to something like the following, using because that's how it's used, and to follow the \"security code\" terminology used by applications:\nThings are easier to understand like this! The \"Purpose\" column is quite valuable I think, I remember I was quite overwhelmed when I first read it and had no idea what all those secret / keys were used for. This helps to understand the big picture.", "new_text": "Given these inputs, the derivation of secrets for an epoch proceeds as shown in the following diagram: A number of values are derived from the epoch secret for different purposes: (In general, we use \"secret\" to refer to a value that is used to derive further secret values, and \"key\" to refer to a value that is used with an algorithm such as HMAC or an AEAD algorithm.) The \"external_secret\" is used to derive an HPKE key pair whose private key is held by the entire group: The public key \"external_pub\" can be published as part of the
{"id": "q-en-mls-protocol-ae9b605f569d90a71bf652993a28b28017e86c3d29b73372e5741087d4d6a161", "old_text": "GroupSecrets object with the \"psks\" field set, the receiving Client includes them in the key schedule in the order listed in the Commit, or in the \"psks\" field respectively. For resumption PSKs, the PSK is defined as the \"resumption_secret\" of the group and epoch specified in the \"PreSharedKeyID\" object. Specifically, \"psk_secret\" is computed as follows: Here \"0\" represents the all-zero vector of length \"KDF.Nh\". The \"index\" field in \"PSKLabel\" corresponds to the index of the PSK in", "comments": "Looks good! I would maybe add 1 sentence explaining the rule for what is called a secret and what is called a key. The rule says that a secret will go through some key derivation to obtain a key/keys, is that right?\nNAME Yep, that's exactly right. Added a sentence. Thanks for the feedback everyone!\nReopening due to incomplete discussion and inaccuracy. The claims presented in the response by NAME are incorrect, demonstrate a misunderstanding of the roles of these keys and uses, and constitute a change to the fundamental uses thereof. The authenticationsecret is NOT a public value nor was it ever introduced as such. If an application is using it as a public value, that onus is on the developers, and I advise against using Cisco's implementation as a standard for MLS. As shown in the name and all original discussion, the authenticationsecret is intended as a secret value (or else 'public' in the same way that the group secret is 'public'). It is used for further derivation to support authentication. This should be corrected, and such foundational changes without WG discussion should be avoided.\nClearly we have a disagreement here NAME but maybe let's hold off on attributing misunderstanding. It seems like you are claiming that in order for MLS to be secure, the needs to be protected to the same level as the . What is this based on? 
If that is true, then we should probably remove the altogether. The \"out-of-band\" protocol in which it would be confirmed would have to be part of the MLS security analysis, thus no longer out-of-band.\nNAME that is not the claim at all - different uses imply different compromise risks. This should not be a new concept to understand. As demonstrated in the terminology, the authenticationsecret is linked to authentication, not confidentiality. MLS, per specification, outsources authentication (the AS is not explicitly defined and all entity authentication within MLS relies on it). Even though it is definitionally out-of-scope and therefore out-of-band (certificates, authentication verification are entirely out-of-band of the protocol spec), we do not say that signature secret keys can be \"public\" - yet you are asserting that an authenticationsecret which can support epoch-level authentication would be public. The authenticationsecret is private in the same way that the exportervalue is: based on key schedule derivation, compromise of these keys should not undermine MLS, but secrecy of them can be leveraged. So, unless you are now proposing removing mention of certificates, etc. from the core protocol, there really isn't a logical basis for removing 'out-of-band' authentication mechanisms.\nWhat I'm trying to get at here is: What is the negative consequence if an is known by someone other than the members of a group? For example, leaking a signature private key would enable the attacker to impersonate the corresponding member. Leaking the would allow the attacker to decrypt all of the group's messages. What is the bad consequence of an becoming public?\nNAME I am also assuming that this push to use epoch-level authentication values as public codes is based on Cisco's implementation choice. I would be curious to know why there is such insistence that all implementations also go about using authentication in the same way. 
If an implementation chooses to use it as a secret value to then derive an authentication code based on yet other information, why is that a problem?\nPublishing the authenticationsecret as a public value has a similar effect as publishing the exportervalue as public - it is not an issue for MLS per protocol spec, but it would definitely be an issue for the application if it is then used for something secret thereafter. One reason not to specify these as public values is that they are derived following the conditions placed on keys and therefore could be used as secret values if desired by an application. Forcing or annotating them as immediately public, or as final 'codes', when such a use as a secret key is an out-of-scope potential is artificial and unnecessary.\nI'm not pushing anything, I'm just trying to get the protocol to be clear. If this thing needs to be secret, then it needs to be secret, and we need to document that so that people keep it secret. Thanks for elaborating more on your concerns. I see what you're saying -- basically the same point that NAME and I discussed . I think the core of the disagreement here is what we mean when we say that a value is \"public\" or \"secret\". I mean something very specific: That the security guarantees of MLS will not be harmed if a public value is provided to the attacker, and they will be harmed if a secret value is provided to the attacker. Importantly, \"public\" does not mean that the value must be published, or that the security guarantees of an application using MLS will not be harmed if you publish the value. It seems to me that in that specific taxonomy, the is \"public\" and not \"secret\". Do you disagree?\nBTW, the notion of something being \"public\" at one layer but \"secret\" at another is not new. For example, values exported from TLS aren't secret in the sense that publishing them would compromise TLS. 
You could authenticate your TLS connection by exporting a value and shouting it through megaphones across a crowd. But the main way people use them in practice is as secrets in some other protocol. Every WebRTC call uses SRTP keys that are exported from TLS, and many people would be very sad if these \"public\" values were published.\nIf it would help, I would be happy to add some text to the Security Considerations clarifying this distinction. Basically, draw a clear boundary around which values have to be kept confidential to the client in order to realize the security properties of the protocol. But I still think is a clearer name. Doesn't mean you have to use it in any particular way in your application; doesn't prevent it being used as a shared secret; but also doesn't imply that the MLS library has to worry about its secrecy.\nWithin the bounds of that particular taxonomy there is not an issue. However, while you may be following a given taxonomy, that is not a general one, one that would be widely understood by implementors, nor is it specified in the protocol spec. It is not an unreasonable taxonomy, but it is certainly not a general one. If you are looking for MLS usage spec to align to your taxonomy then I do think that there should be a clarification section added. BTW as a point case on more standard terminology use, the TLS 1.3 spec uses the exact phrasing for exporters as \"exportermastersecret\", from which the actual exporter may be derived. This aligns very closely to the authenticationsecret, from which an authentication code later may be derived. TLS is generic in 'exporters' after an exportermastersecret is derived. Thus there is alignment of the term 'secret' to things that can be used for keys (have sufficient entropy), not as an alignment to public/secret at different layers. I am strongly opposed to the use of authenticationcode, and see no clear definition for that in your customized taxonomy. 
As such, it would follow the more standard interpretations, such as message authentication codes, etc. which are indeed explicitly public values. Codes are normally an end-state in themselves, not keys. Perhaps it is a clearer name for the Cisco implementation design, but it is not a clear name for something that can/should be used for further derivations as may be more generally applicable. Following the TLS norm for exporters, the name would be \"authenticationmastersecret\". The shortened \"authenticationsecret\" was a good name. If you are going to provide a write-up for the custom taxonomy, then an adjustment to \"authenticationexporter\" could perhaps be made.\nThe notion of having a boundary to the protocol and values that may or may not safely be exported across the boundary is not exotic or customized. You see it in FIPS 140-2, PKCS, and WebCrypto, and in tools like ProVerif and Tamarin. The only choice I'm making is in the specific words, in particular reserving \"secret\" for \"something that should not be exposed outside the protocol implementation because it would compromise the protocol\". I hope you agree that is different from say in this regard. It would be OK for an MLS implementation to have an API for an application to access the , but it would not be OK to have an API to access the . I'm not wedded to , I just want something other than . If you want to align with the exported values (to which I agree it is similar), we could call it an .\nThe distinction between those documents and tools, and the MLS spec is that those tools clearly define the boundaries and terminology, vs. just anyone making terminology assumptions without them being specified in the document. 
Not to be pedantic, but if you want to assume specific meanings for terms relative to a threat model for MLS only, versus a general usage (\"exporter secrets\" being secrets used in other things), then you really do need to specify those assumptions within the specification itself and not various side discussions. Such specification is part of what a standard is meant to do. Within other documents this is clearly defined - i.e., what is known as a threat model - such as one sees in ProVerif and Tamarin. Assuming that you are going to add such a taxonomy, and that it follows that described by you in , then the correct term would be \"authenticationkey\". To quote you what you said there, which correctly aligns to a reasonable usage of the authenticationkey. pointed out that there is a lack of consistency in the usage of \"keys\" vs \"secrets\" throughout the document - something that still has not been addressed and should not be dropped from the issue for edits. Regardless of what taxonomy is being used or even assumed, there is no consistency in it in the spec. You seem to be commenting on adjustments only for the \"key\" and \"secret\" uses that appear as variables, but the commentary within the spec interchangeably use these terms for the same thing. Whatever is used should be applied consistently. It is not reasonable to assume that developers will correctly guess reasons for terminology separations particularly when they are inconsistently enforced.\nOn your latter point about usage of \"keys\" vs \"secrets\" throughout the document \u2014 Sorry if I misinterpreted your intent here. 
If you're seeing places where there are inconsistencies, it would be helpful if you could highlight those more specifically, or ideally send a PR to fix them.\nNAME had a call about this today and hashed out a couple of things: We should be clear in general about what we mean by secrets/keys, in that we generally use them interchangeably, but sometimes use one or the other to indicate how a specific value is used The need for secrecy of the value labeled depends on the context in which it is used. So we should provide some text to suggest that there are different ways to use it, and use a fairly neutral name. NAME suggested , which I like a lot. I have attempted to implement these in\nThe spec currently uses a mix of terminologies for keys/secrets. In the key schedule we have the following: senderdatasecret | \"sender data\" encryptionsecret | \"encryption\" exportersecret | \"exporter\" authenticationsecret | \"authentication\" externalsecret | \"external\" confirmationkey | \"confirm\" membershipkey | \"membership\" resumptionsecret | \"resumption\" It would be best to provide coherence in how we use these terms and check for consistency throughout, to avoid confusion for readers: A simple solution would be to use \"secret\" for all cases, i.e., changing membershipkey to membershipsecret. Another alternative, which may take more refining but could be useful to readers, is to separate out terms based on items that are for internal protocol use versus external protocol use. For example, encryption, confirmation, membership, etc., would use \"key\" and exporter, authentication, etc., could use \"secret\", or vice versa. --Break-- Ref the mailing list on confirmation vs authentication secrets, perhaps the naming of these two is a little misleading - authentication normally being within bounds of a protocol while confirmation is external to it. This confusion probably sparked some of that discussion to begin with. 
It could be worth considering switching those terms for clarity.\nI actually think we have a fairly coherent taxonomy here: \"secret\" is something that gets fed into HKDF to derive further things \"key\" is something that gets fed into another algorithm, e.g., HMAC Looking through the list, the ones that deviate from this pattern are: , which is exported to the application but expected to be kept private , which is exported to the application but not expected to be kept private Given that, I would propose we update the table in question to something like the following, using because that's how it's used, and to follow the \"security code\" terminology used by applications:\nThings are easier to understand like this! The \"Purpose\" column is quite valuable I think, I remember I was quite overwhelmed when I first read it and had no idea what all those secret / keys were used for. This helps to understand the big picture.", "new_text": "GroupSecrets object with the \"psks\" field set, the receiving Client includes them in the key schedule in the order listed in the Commit, or in the \"psks\" field respectively. For resumption PSKs, the PSK is defined as the \"resumption_psk\" of the group and epoch specified in the \"PreSharedKeyID\" object. Specifically, \"psk_secret\" is computed as follows: Here \"0\" represents the all-zero vector of length \"KDF.Nh\". The \"index\" field in \"PSKLabel\" corresponds to the index of the PSK in
{"id": "q-en-mls-protocol-ae9b605f569d90a71bf652993a28b28017e86c3d29b73372e5741087d4d6a161", "old_text": "9.6. The main MLS key schedule provides a \"resumption_secret\" that is used as a PSK to inject entropy from one epoch into another. This functionality is used in the reinitialization and branching processes described in reinitialization and sub-group-branching, but may be used by applications for other purposes. Some uses of resumption PSKs might call for the use of PSKs from historical epochs. The application SHOULD specify an upper limit on the number of past epochs for which the \"resumption_secret\" may be stored. 9.7. The main MLS key schedule provides a per-epoch \"authentication_secret\". If one of the parties is being actively impersonated by an attacker, their \"authentication_secret\" will differ from that of the other group members. Thus, members of a group MAY use their \"authentication_secrets\" within an out-of-band authentication protocol to ensure that they share the same view of the group. 10.", "comments": "Looks good! I would maybe add 1 sentence explaining the rule for what is called a secret and what is called a key. The rule says that a secret will go through some key derivation to obtain a key/keys, is that right?\nNAME Yep, that's exactly right. Added a sentence. Thanks for the feedback everyone!\nReopening due to incomplete discussion and inaccuracy. The claims presented in the response by NAME are incorrect, demonstrate a misunderstanding of the roles of these keys and uses, and constitute a change to the fundamental uses thereof. The authenticationsecret is NOT a public value nor was it ever introduced as such. If an application is using it as a public value, that onus is on the developers, and I advise against using Cisco's implementation as a standard for MLS. As shown in the name and all original discussion, the authenticationsecret is intended as a secret value (or else 'public' in the same way that the group secret is 'public'). 
It is used for further derivation to support authentication. This should be corrected, and such foundational changes without WG discussion should be avoided.\nClearly we have a disagreement here NAME but maybe let's hold off on attributing misunderstanding. It seems like you are claiming that in order for MLS to be secure, the needs to be protected to the same level as the . What is this based on? If that is true, then we should probably remove the altogether. The \"out-of-band\" protocol in which it would be confirmed would have to be part of the MLS security analysis, thus no longer out-of-band.\nNAME that is not the claim at all - different uses imply different compromise risks. This should not be a new concept to understand. As demonstrated in the terminology, the authenticationsecret is linked to authentication, not confidentiality. MLS, per specification, outsources authentication (the AS is not explicitly defined and all entity authentication within MLS relies on it). Even though it is definitionally out-of-scope and therefore out-of-band (certificates, authentication verification are entirely out-of-band of the protocol spec), we do not say that signature secret keys can be \"public\" - yet you are asserting that an authenticationsecret which can support epoch-level authentication would be public. The authenticationsecret is private in the same way that the exportervalue is: based on key schedule derivation, compromise of these keys should not undermine MLS, but secrecy of them can be leveraged. So, unless you are now proposing removing mention of certificates, etc. from the core protocol, there really isn't a logical basis for removing 'out-of-band' authentication mechanisms.\nWhat I'm trying to get at here is: What is the negative consequence if an is known by someone other than the members of a group? For example, leaking a signature private key would enable the attacker to impersonate the corresponding member. 
Leaking the would allow the attacker to decrypt all of the group's messages. What is the bad consequence of an becoming public?\nNAME I am also assuming that this push to use epoch-level authentication values as public codes is based on Cisco's implementation choice. I would be curious to know why there is such insistence that all implementations also go about using authentication in the same way. If an implementation chooses to use it as a secret value to then derive an authentication code based on yet other information, why is that a problem?\nPublishing the authenticationsecret as a public value has a similar effect as publishing the exportervalue as public - it is not an issue for MLS per protocol spec, but it would definitely be an issue for the application if it is then used for something secret thereafter. One reason not to specify these as public values is that they are derived following the conditions placed on keys and therefore could be used as secret values if desired by an application. Forcing or annotating them as immediately public, or as final 'codes', when such a use as a secret key is an out-of-scope potential is artificial and unnecessary.\nI'm not pushing anything, I'm just trying to get the protocol to be clear. If this thing needs to be secret, then it needs to be secret, and we need to document that so that people keep it secret. Thanks for elaborating more on your concerns. I see what you're saying -- basically the same point that NAME and I discussed . I think the core of the disagreement here is what we mean when we say that a value is \"public\" or \"secret\". I mean something very specific: That the security guarantees of MLS will not be harmed if a public value is provided to the attacker, and they will be harmed if a secret value is provided to the attacker. Importantly, \"public\" does not mean that the value must be published, or that the security guarantees of an application using MLS will not be harmed if you publish the value. 
It seems to me that in that specific taxonomy, the is \"public\" and not \"secret\". Do you disagree?\nBTW, the notion of something being \"public\" at one layer but \"secret\" at another is not new. For example, values exported from TLS aren't secret in the sense that publishing them would compromise TLS. You could authenticate your TLS connection by exporting a value and shouting it through megaphones across a crowd. But the main way people use them in practice is as secrets in some other protocol. Every WebRTC call uses SRTP keys that are exported from TLS, and many people would be very sad if these \"public\" values were published.\nIf it would help, I would be happy to add some text to the Security Considerations clarifying this distinction. Basically, draw a clear boundary around which values have to be kept confidential to the client in order to realize the security properties of the protocol. But I still think is a clearer name. Doesn't mean you have to use it in any particular way in your application; doesn't prevent it being used as a shared secret; but also doesn't imply that the MLS library has to worry about its secrecy.\nWithin the bounds of that particular taxonomy there is not an issue. However, while you may be following a given taxonomy, that is not a general one, one that would be widely understood by implementors, nor is it specified in the protocol spec. It is not an unreasonable taxonomy, but it is certainly not a general one. If you are looking for MLS usage spec to align to your taxonomy then I do think that there should be a clarification section added. BTW as a point case on more standard terminology use, the TLS 1.3 spec uses the exact phrasing for exporters as \"exportermastersecret\", from which the actual exporter may be derived. This aligns very closely to the authenticationsecret, from which an authentication code later may be derived. TLS is generic in 'exporters' after an exportermastersecret is derived. 
Thus there is alignment of the term 'secret' to things that can be used for keys (have sufficient entropy), not as an alignment to public/secret at different layers. I am strongly opposed to the use of authenticationcode, and see no clear definition for that in your customized taxonomy. As such, it would follow the more standard interpretations, such as message authentication codes, etc. which are indeed explicitly public values. Codes are normally an end-state in themselves, not keys. Perhaps it is a clearer name for the Cisco implementation design, but it is not a clear name for something that can/should be used for further derivations as may be more generally applicable. Following the TLS norm for exporters, the name would be \"authenticationmastersecret\". The shortened \"authenticationsecret\" was a good name. If you are going to provide a write-up for the custom taxonomy, then an adjustment to \"authenticationexporter\" could perhaps be made.\nThe notion of having a boundary to the protocol and values that may or may not safely be exported across the boundary is not exotic or customized. You see it in FIPS 140-2, PKCS, and WebCrypto, and in tools like ProVerif and Tamarin. The only choice I'm making is in the specific words, in particular reserving \"secret\" for \"something that should not be exposed outside the protocol implementation because it would compromise the protocol\". I hope you agree that is different from say in this regard. It would be OK for an MLS implementation to have an API for an application to access the , but it would not be OK to have an API to access the . I'm not wedded to , I just want something other than . If you want to align with the exported values (to which I agree it is similar), we could call it an .\nThe distinction between those documents and tools, and the MLS spec is that those tools clearly define the boundaries and terminology, vs. just anyone making terminology assumptions without them being specified in the document. 
Not to be pedantic, but if you want to assume specific meanings for terms relative to a threat model for MLS only, versus a general usage (\"exporter secrets\" being secrets used in other things), then you really do need to specify those assumptions within the specification itself and not various side discussions. Such specification is part of what a standard is meant to do. Within other documents this is clearly defined - i.e., what is known as a threat model - such as one sees in ProVerif and Tamarin. Assuming that you are going to add such a taxonomy, and that it follows that described by you in , then the correct term would be \"authentication_key\". To quote what you said there, which correctly aligns to a reasonable usage of the authentication_key. pointed out that there is a lack of consistency in the usage of \"keys\" vs \"secrets\" throughout the document - something that still has not been addressed and should not be dropped from the issue for edits. Regardless of what taxonomy is being used or even assumed, there is no consistency in it in the spec. You seem to be commenting on adjustments only for the \"key\" and \"secret\" uses that appear as variables, but the commentary within the spec interchangeably uses these terms for the same thing. Whatever is used should be applied consistently. It is not reasonable to assume that developers will correctly guess reasons for terminology separations, particularly when they are inconsistently enforced.\nOn your latter point about usage of \"keys\" vs \"secrets\" throughout the document \u2014 Sorry if I misinterpreted your intent here. 
If you're seeing places where there are inconsistencies, it would be helpful if you could highlight those more specifically, or ideally send a PR to fix them.\nNAME had a call about this today and hashed out a couple of things: We should be clear in general about what we mean by secrets/keys, in that we generally use them interchangeably, but sometimes use one or the other to indicate how a specific value is used The need for secrecy of the value labeled depends on the context in which it is used. So we should provide some text to suggest that there are different ways to use it, and use a fairly neutral name. NAME suggested , which I like a lot. I have attempted to implement these in\nThe spec currently uses a mix of terminologies for keys/secrets. In the key schedule we have the following: sender_data_secret | \"sender data\" encryption_secret | \"encryption\" exporter_secret | \"exporter\" authentication_secret | \"authentication\" external_secret | \"external\" confirmation_key | \"confirm\" membership_key | \"membership\" resumption_secret | \"resumption\" It would be best to provide coherency in how we use these terms and check for consistency throughout, to avoid confusion for readers: A simple solution would be to use \"secret\" for all cases, i.e., changing membership_key to membership_secret. Another alternative, which may take more refining but could be useful to readers, is to separate out terms based on items that are for internal protocol use versus external protocol use. For example, encryption, confirmation, membership, etc., would use \"key\" and exporter, authentication, etc., could use \"secret\", or vice versa. --Break-- Ref the mailing list on confirmation vs authentication secrets, perhaps the naming of these two is a little misleading - authentication normally being within bounds of a protocol while confirmation is external to it. This confusion probably sparked some of that discussion to begin with. 
It could be worth considering switching those terms for clarity.\nI actually think we have a fairly coherent taxonomy here: \"secret\" is something that gets fed into HKDF to derive further things \"key\" is something that gets fed into another algorithm, e.g., HMAC Looking through the list, the ones that deviate from this pattern are: , which is exported to the application but expected to be kept private , which is exported to the application but not expected to be kept private Given that, I would propose we update the table in question to something like the following, using because that's how it's used, and to follow the \"security code\" terminology used by applications:\nThings are easier to understand like this! The \"Purpose\" column is quite valuable I think, I remember I was quite overwhelmed when I first read it and had no idea what all those secret / keys were used for. This helps to understand the big picture.", "new_text": "9.6. The main MLS key schedule provides a \"resumption_psk\" that is used as a PSK to inject entropy from one epoch into another. This functionality is used in the reinitialization and branching processes described in reinitialization and sub-group-branching, but may be used by applications for other purposes. Some uses of resumption PSKs might call for the use of PSKs from historical epochs. The application SHOULD specify an upper limit on the number of past epochs for which the \"resumption_psk\" may be stored. 9.7. The main MLS key schedule provides a per-epoch \"authentication_code\". If one of the parties is being actively impersonated by an attacker, their \"authentication_code\" will differ from that of the other group members. Thus, members of a group MAY verify out of band that their \"authentication_code\" values are all the same, in order to ensure that they share the same view of the group. 10."}
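The secret/key taxonomy discussed above ("secrets" feed HKDF to derive further values, "keys" feed other algorithms) can be sketched with a toy derivation over the labels in the key-schedule table quoted in the thread. This is an illustration only: `hkdf_expand` is plain RFC 5869 HKDF-Expand, the `"mls10 "` label prefix and flat label encoding are simplifying assumptions, and real MLS wraps labels in a KDFLabel structure via ExpandWithLabel.

```python
import hmac, hashlib

HASH = hashlib.sha256  # assumed ciphersuite hash

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # RFC 5869 HKDF-Expand: T(i) = HMAC(PRK, T(i-1) || info || i)
    out, block, i = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([i]), HASH).digest()
        out += block
        i += 1
    return out[:length]

def derive_secret(epoch_secret: bytes, label: str) -> bytes:
    # Simplified stand-in for MLS's DeriveSecret/ExpandWithLabel;
    # the label encoding here is NOT the real wire encoding.
    return hkdf_expand(epoch_secret, b"mls10 " + label.encode(), HASH().digest_size)

epoch_secret = bytes(32)  # placeholder epoch secret
schedule = {label: derive_secret(epoch_secret, label)
            for label in ["sender data", "encryption", "exporter",
                          "authentication", "external", "confirm",
                          "membership", "resumption"]}
```

Distinct labels yield independent values from the same epoch secret, which is what lets the table assign each derived value a single purpose regardless of whether it is called a "secret" or a "key".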
{"id": "q-en-mls-protocol-9a7ecca5eafca9466aa04b1d10d46f9ff701c7cb0977cfb5105ab8abe403de00", "old_text": "The ciphertext content is encoded using the MLSCiphertextContent structure. In the MLS key schedule, the sender creates two distinct key ratchets for handshake and application messages for each member of the group. When encrypting a message, the sender looks at the ratchets it", "comments": "Also requires that padding be all-zero. Before, the sender was allowed to put arbitrary octets in there.\nLooks good to me. NAME I think this covers your issue on the mailing list\nDiscussed at 19 May interim. Decision to merge.\nAs discussed , we should refactor so that the padding is not length-delimited, something like: In the MLSCiphertextContent struct we have a field for padding represented using a variable-size vector which helps hide information about the length of content; notably application messages. But because of the way we encode variable-size vectors it turns out you can't actually pad the rest of the content with a certain number of bytes: namely 0, 65, 16386, 16387. (E.g. if you pad with a vector of 63 bytes the TLS encoding actually produces 64 bytes for padding: 63 bytes + 1 length byte. But if you pad with 64 bytes you end up with a 66 byte encoding as now you need 2 bytes for the length.) Not being able to pad with 0 is OK. But 65 is kind of annoying. E.g. if you want to pad to a target content length X bytes then if the rest of the content ends up being X-65 bytes long you are out of luck. (Assuming I'm right about how this all works) it seems a bit silly to me for a padding scheme to have somewhat arbitrary gaps like that; especially at a not unreasonable number of bytes to want to pad with. If I didn't miss something, and you all agree that it's better to avoid having gaps like that then how about representing padding as, say, a fixed length vector? 
Jo\u00ebl Amazon Development Center Austria GmbH Brueckenkopfgasse 1 8020 Graz Oesterreich Sitz in Graz Firmenbuchnummer: FN 439453 f Firmenbuchgericht: Landesgericht fuer Zivilrechtssachen Graz", "new_text": "The ciphertext content is encoded using the MLSCiphertextContent structure. The \"padding\" field is set by the sender, by first encoding the content (via the \"select\") and the \"auth\" field, then appending the chosen number of zero bytes. A receiver identifies the padding field in a plaintext decoded from \"MLSCiphertext.ciphertext\" by first decoding the content and the \"auth\" field; then the \"padding\" field comprises any remaining octets of plaintext. The \"padding\" field MUST be filled with all zero bytes. A receiver MUST verify that there are no non-zero bytes in the \"padding\" field, and if this check fails, the enclosing MLSCiphertext MUST be rejected as malformed. In the MLS key schedule, the sender creates two distinct key ratchets for handshake and application messages for each member of the group. When encrypting a message, the sender looks at the ratchets it"}
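The all-zero padding rule in the replacement text above is simple enough to sketch. These are hypothetical helper names (`pad`, `check_padding`); a real implementation would find the content/auth boundary by TLS-decoding the plaintext rather than taking a length argument.

```python
def pad(encoded_content_and_auth: bytes, pad_len: int) -> bytes:
    # Sender: encode the content and auth field first, then append
    # the chosen number of zero bytes.
    return encoded_content_and_auth + b"\x00" * pad_len

def check_padding(plaintext: bytes, content_and_auth_len: int) -> bytes:
    # Receiver: everything after the decoded content and auth field is
    # padding, and it MUST be all zero bytes; otherwise the enclosing
    # MLSCiphertext is rejected as malformed.
    padding = plaintext[content_and_auth_len:]
    if any(padding):
        raise ValueError("malformed MLSCiphertext: non-zero padding")
    return plaintext[:content_and_auth_len]
```

Because the padding carries no length prefix, any padding length (including zero) is representable, which closes the gaps discussed in the thread.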
{"id": "q-en-mls-protocol-9a7ecca5eafca9466aa04b1d10d46f9ff701c7cb0977cfb5105ab8abe403de00", "old_text": "The key and nonce provided to the AEAD are computed as the KDF of the first \"KDF.Nh\" bytes of the ciphertext generated in the previous section. If the length of the ciphertext is less than \"KDF.Nh\", the whole ciphertext is used without padding. In pseudocode, the key and nonce are derived as: The Additional Authenticated Data (AAD) for the SenderData ciphertext is the first three fields of MLSCiphertext:", "comments": "Also requires that padding be all-zero. Before, the sender was allowed to put arbitrary octets in there.\nLooks good to me. NAME I think this covers your issue on the mailing list\nDiscussed at 19 May interim. Decision to merge.\nAs discussed , we should refactor so that the padding is not length-delimited, something like: In the MLSCiphertextContent struct we have a field for padding represented using a variable-size vector which helps hide information about the length of content; notably application messages. But because of the way we encode variable-size vectors it turns out you can't actually pad the rest of the content with a certain number of bytes: namely 0, 65, 16386, 16387. (E.g. if you pad with a vector of 63 bytes the TLS encoding actually produces 64 bytes for padding: 63 bytes + 1 length byte. But if you pad with 64 bytes you end up with a 66 byte encoding as now you need 2 bytes for the length.) Not being able to pad with 0 is OK. But 65 is kind of annoying. E.g. if you want to pad to a target content length X bytes then if the rest of the content ends up being X-65 bytes long you are out of luck. (Assuming I'm right about how this all works) it seems a bit silly to me for a padding scheme to have somewhat arbitrary gaps like that; especially at a not unreasonable number of bytes to want to pad with. 
If I didn't miss something, and you all agree that it's better to avoid having gaps like that then how about representing padding as, say, a fixed length vector? Jo\u00ebl Amazon Development Center Austria GmbH Brueckenkopfgasse 1 8020 Graz Oesterreich Sitz in Graz Firmenbuchnummer: FN 439453 f Firmenbuchgericht: Landesgericht fuer Zivilrechtssachen Graz", "new_text": "The key and nonce provided to the AEAD are computed as the KDF of the first \"KDF.Nh\" bytes of the ciphertext generated in the previous section. If the length of the ciphertext is less than \"KDF.Nh\", the whole ciphertext is used. In pseudocode, the key and nonce are derived as: The Additional Authenticated Data (AAD) for the SenderData ciphertext is the first three fields of MLSCiphertext:"}
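The sender data key/nonce derivation described in the new text above can be sketched as follows. The sizes (SHA-256 for `KDF.Nh`, AES-128-GCM-style `Nk`/`Nn`) and the flat `"mls10 ..."` label encoding are assumptions for illustration; real MLS derives these with ExpandWithLabel and the ciphertext sample as context.

```python
import hmac, hashlib

HASH = hashlib.sha256
NH, NK, NN = 32, 16, 12  # assumed KDF.Nh, AEAD key size, AEAD nonce size

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # RFC 5869 HKDF-Expand
    out, block, i = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([i]), HASH).digest()
        out += block
        i += 1
    return out[:length]

def sender_data_key_nonce(sender_data_secret: bytes, ciphertext: bytes):
    # The sample is the first KDF.Nh bytes of the ciphertext; if the
    # ciphertext is shorter than KDF.Nh, the whole ciphertext is used.
    sample = ciphertext[:NH]
    # Simplified label encoding, not the real MLS wire format.
    key = hkdf_expand(sender_data_secret, b"mls10 key" + sample, NK)
    nonce = hkdf_expand(sender_data_secret, b"mls10 nonce" + sample, NN)
    return key, nonce
```

Binding the key and nonce to a sample of the ciphertext means the receiver can derive them before decrypting, without any per-message state for the sender data.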
{"id": "q-en-mls-protocol-9a7ecca5eafca9466aa04b1d10d46f9ff701c7cb0977cfb5105ab8abe403de00", "old_text": "find out the plaintext by correlation between the question and the length. The content and length of the \"padding\" field in \"MLSCiphertextContent\" can be chosen at the time of message encryption by the sender. It is recommended that padding data is comprised of zero-valued bytes and follows an established deterministic padding scheme. 16.2.", "comments": "Also requires that padding be all-zero. Before, the sender was allowed to put arbitrary octets in there.\nLooks good to me. NAME I think this covers your issue on the mailing list\nDiscussed at 19 May interim. Decision to merge.\nAs discussed , we should refactor so that the padding is not length-delimited, something like: In the MLSCiphertextContent struct we have a field for padding represented using a variable-size vector which helps hide information about the length of content; notably application messages. But because of the way we encode variable-size vectors it turns out you can't actually pad the rest of the content with a certain number of bytes: namely 0, 65, 16386, 16387. (E.g. if you pad with a vector of 63 bytes the TLS encoding actually produces 64 bytes for padding: 63 bytes + 1 length byte. But if you pad with 64 bytes you end up with a 66 byte encoding as now you need 2 bytes for the length.) Not being able to pad with 0 is OK. But 65 is kind of annoying. E.g. if you want to pad to a target content length X bytes then if the rest of the content ends up being X-65 bytes long you are out of luck. (Assuming I'm right about how this all works) it seems a bit silly to me for a padding scheme to have somewhat arbitrary gaps like that; especially at a not unreasonable number of bytes to want to pad with. If I didn't miss something, and you all agree that it's better to avoid having gaps like that then how about representing padding as, say, a fixed length vector? 
Jo\u00ebl Amazon Development Center Austria GmbH Brueckenkopfgasse 1 8020 Graz Oesterreich Sitz in Graz Firmenbuchnummer: FN 439453 f Firmenbuchgericht: Landesgericht fuer Zivilrechtssachen Graz", "new_text": "find out the plaintext by correlation between the question and the length. The length of the \"padding\" field in \"MLSCiphertextContent\" can be chosen at the time of message encryption by the sender. Senders may use padding to reduce the ability of attackers outside the group to infer the size of the encrypted content. 16.2."}
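The unreachable padding sizes called out in the thread (0, 65, 16386, 16387) fall directly out of the variable-length integer used for vector lengths. A quick check, assuming QUIC-style 1/2/4-byte length prefixes as used by the MLS wire format:

```python
def varint_len(n: int) -> int:
    # Size of the variable-length integer encoding of n
    # (QUIC-style 1/2/4/8-byte prefixes, an assumption here).
    if n < 0x40:
        return 1
    if n < 0x4000:
        return 2
    if n < 0x40000000:
        return 4
    return 8

# Total bytes occupied by a length-prefixed padding vector of n bytes
reachable = {n + varint_len(n) for n in range(20000)}
gaps = [t for t in range(20000) if t not in reachable]
```

The gaps sit exactly where the length prefix grows (1 to 2 bytes at 64, 2 to 4 bytes at 16384), which is why the final design appends raw zero bytes with no length prefix at all.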
{"id": "q-en-mls-protocol-d741e87e44eed64a1601ea9920c0a32e1af41e00d210085bd647c0f8355ddd98", "old_text": "13.1.7. An AppAck proposal is used to acknowledge receipt of application messages. Though this information implies no change to the group, it is structured as a Proposal message so that it is included in the group's transcript by being included in Commit messages. An AppAck proposal represents a set of messages received by the sender in the current epoch. Messages are represented by the \"sender\" and \"generation\" values in the MLSCiphertext for the message. Each MessageRange represents receipt of a span of messages whose \"generation\" values form a continuous range from \"first_generation\" to \"last_generation\", inclusive. AppAck proposals are sent as a guard against the Delivery Service dropping application messages. The sequential nature of the \"generation\" field provides a degree of loss detection, since gaps in the \"generation\" sequence indicate dropped messages. AppAck completes this story by addressing the scenario where the Delivery Service drops all messages after a certain point, so that a later generation is never observed. Obviously, there is a risk that AppAck messages could be suppressed as well, but their inclusion in the transcript means that if they are suppressed then the group cannot advance at all. The schedule on which sending AppAck proposals are sent is up to the application, and determines which cases of loss/suppression are detected. For example: The application might have the committer include an AppAck proposal whenever a Commit is sent, so that other members could know when one of their messages did not reach the committer. The application could have a client send an AppAck whenever an application message is sent, covering all messages received since its last AppAck. This would provide a complete view of any losses experienced by active members. 
The application could simply have clients send AppAck proposals on a timer, so that all participants' state would be known. An application using AppAck proposals to guard against loss/ suppression of application messages also needs to ensure that AppAck messages and the Commits that reference them are not dropped. One way to do this is to always encrypt Proposal and Commit messages, to make it more difficult for the Delivery Service to recognize which messages contain AppAcks. The application can also have clients enforce an AppAck schedule, reporting loss if an AppAck is not received at the expected time. 13.1.8. A GroupContextExtensions proposal is used to update the list of extensions in the GroupContext for the group.", "comments": "The way the spec is written now, clients need to implement all proposal types, and they must react appropriately to all valid proposals, including AppAck. AppAck in a large, busy group could result in a situation where clients with longer latency connections to the DS (perhaps in a different country, using satellite, or using some wireless networks) might be effectively prevented from sending any messages, or worse could be prevented from sending AppAcks because their proposals are always a few hundred milliseconds late. I think AppAcks still have a place in relatively small groups with low message volumes, but I have a few suggestions: 1) Describe this issue either in the AppAck proposal section or in the security considerations section 2) I think it is reasonable for a group to consider some proposal types invalid. If there is consensus with this position, I'd like to make this clear with an extra sentence or two. If that is not the consensus, then I will propose that only Add, Remove, and Update are assumed to be supported by each group, and other proposal types have to be explicitly added.\nMy team had similar comments. 
I agree that this shouldn't be a 100% necessary thing to support.\nI would differentiate between \"support\" in the sense of \"if you receive one, do something sensible with it\" vs. \"send this under some circumstances\". My understanding was that the requirement here was the former, and even that the definition of \"sensible\" was up to the implementation. The thinking being that if an application wanted to get some anti-suppression properties, it could send and verify these on some schedule, but when exactly they would be sent would be up to the application. Does that alleviate some of the concern here? If so, I would totally be open to clarifying that. To NAME point about latency, it seems like we could add an parameter to AppAck so that you could report on past epochs' messages in a future epoch. On considering proposal types invalid -- I think there has always been a notion that applications might have policies on which proposals are acceptable. Not just at the proposal type level, but at even finer granularities. Think about things like groups where only certain members are allowed to add/remove participants. I would also be fine laying this out more explicitly, though maybe it would be a more natural fit in the architecture document.\nSorry, meant to delete a comment, closed the issue instead.\nIn the past I've expressed reservations about AppAck being included in the spec. I don't think it's sufficiently general (it only handles one specific pattern of message loss and won't work for others, per NAME comment), and I don't like that we don't specify how to handle dropped messages in any way. The AppAck functionality is likely to just be re-invented at the application-level because of this. 
For example, AppAck can't be used to implement read receipts It doesn't help detect the suppression of handshake messages It doesn't handle cases where some messages must be delivered and others may be dropped So instead I would propose removing AppAck from the spec.\nI've opened a PR to that effect in .\nVirtual interim 2022-05-19 Agreement that the bar for support is quite low ... but you don't get a security property unless members actually do something more than the minimum That sort of implies that you want the group to opt in to this mechanism So we should have a new proposal type in a separate I-D\n(To be clear, AppAck should move from the main spec here to its own I-D.)\nOne comment that renumbering the groupcontextextensions proposal was probably a bad idea from a process perspective which we should avoid doing in the future. In this case it seems unlikely that many people have implemented it who could not make a breaking change.", "new_text": "13.1.7. A GroupContextExtensions proposal is used to update the list of extensions in the GroupContext for the group."}
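Although the thread above concludes by moving AppAck out of the core spec, the MessageRange idea in the removed text is easy to illustrate: per-sender "generation" values received in an epoch coalesce into inclusive [first_generation, last_generation] ranges, and gaps between ranges indicate dropped messages. `message_ranges` is a hypothetical helper, not part of any spec.

```python
def message_ranges(generations):
    # Coalesce received generation values into inclusive
    # (first_generation, last_generation) ranges, as in the
    # removed AppAck proposal's MessageRange.
    ranges = []
    for g in sorted(set(generations)):
        if ranges and g == ranges[-1][1] + 1:
            ranges[-1][1] = g  # extend the current contiguous run
        else:
            ranges.append([g, g])  # start a new range
    return [tuple(r) for r in ranges]
```

A receiver reporting `[(0, 2), (5, 6)]` is implicitly reporting that generations 3 and 4 never arrived, which is the loss-detection signal AppAck was meant to provide.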
{"id": "q-en-mls-protocol-d741e87e44eed64a1601ea9920c0a32e1af41e00d210085bd647c0f8355ddd98", "old_text": "confirmation_tag by way of the key schedule will confirm that all members of the group agree on the extensions in use. 13.1.9. Add and Remove proposals can be constructed and sent to the group by a party that is outside the group. For example, a Delivery Service", "comments": "The way the spec is written now, clients need to implement all proposal types, and they must react appropriately to all valid proposals, including AppAck. AppAck in a large, busy group could result in a situation where clients with longer latency connections to the DS (perhaps in a different country, using satellite, or using some wireless networks) might be effectively prevented from sending any messages, or worse could be prevented from sending AppAcks because their proposals are always a few hundred milliseconds late. I think AppAcks still have a place in relatively small groups with low message volumes, but I have a few suggestions: 1) Describe this issue either in the AppAck proposal section or in the security considerations section 2) I think it is reasonable for a group to consider some proposal types invalid. If there is consensus with this position, I'd like to make this clear with an extra sentence or two. If that is not the consensus, then I will propose that only Add, Remove, and Update are assumed to be supported by each group, and other proposal types have to be explicitly added.\nMy team had similar comments. I agree that this shouldn't be a 100% necessary thing to support.\nI would differentiate between \"support\" in the sense of \"if you receive one, do something sensible with it\" vs. \"send this under some circumstances\". My understanding was that the requirement here was the former, and even that the definition of \"sensible\" was up to the implementation. 
The thinking being that if an application wanted to get some anti-suppression properties, it could send and verify these on some schedule, but when exactly they would be sent would be up to the application. Does that alleviate some of the concern here? If so, I would totally be open to clarifying that. To NAME point about latency, it seems like we could add an parameter to AppAck so that you could report on past epochs' messages in a future epoch. On considering proposal types invalid -- I think there has always been a notion that applications might have policies on which proposals are acceptable. Not just at the proposal type level, but at even finer granularities. Think about things like groups where only certain members are allowed to add/remove participants. I would also be fine laying this out more explicitly, though maybe it would be a more natural fit in the architecture document.\nSorry, meant to delete a comment, closed the issue instead.\nIn the past I've expressed reservations about AppAck being included in the spec. I don't think it's sufficiently general (it only handles one specific pattern of message loss and won't work for others, per NAME comment), and I don't like that we don't specify how to handle dropped messages in any way. The AppAck functionality is likely to just be re-invented at the application-level because of this. For example, AppAck can't be used to implement read receipts It doesn't help detect the suppression of handshake messages It doesn't handle cases where some messages must be delivered and others may be dropped So instead I would propose removing AppAck from the spec.\nI've opened a PR to that effect in .\nVirtual interim 2022-05-19 Agreement that the bar for support is quite low ... 
but you don't get a security property unless members actually do something more than the minimum That sort of implies that you want the group to opt in to this mechanism So we should have a new proposal type in a separate I-D\n(To be clear, AppAck should move from the main spec here to its own I-D.)\nOne comment that renumbering the groupcontextextensions proposal was probably a bad idea from a process perspective which we should avoid doing in the future. In this case it seems unlikely that many people have implemented it who could not make a breaking change.", "new_text": "confirmation_tag by way of the key schedule will confirm that all members of the group agree on the extensions in use. 13.1.8. Add and Remove proposals can be constructed and sent to the group by a party that is outside the group. For example, a Delivery Service"}
{"id": "q-en-mls-protocol-d741e87e44eed64a1601ea9920c0a32e1af41e00d210085bd647c0f8355ddd98", "old_text": "the sender will not have the keys necessary to construct an MLSCiphertext object. 13.1.9.1. The \"external_senders\" extension is a group context extension that contains credentials of senders that are permitted to send external", "comments": "The way the spec is written now, clients need to implement all proposal types, and they must react appropriately to all valid proposals, including AppAck. AppAck in a large, busy group could result in a situation where clients with longer latency connections to the DS (perhaps in a different country, using satellite, or using some wireless networks) might be effectively prevented from sending any messages, or worse could be prevented from sending AppAcks because their proposals are always a few hundred milliseconds late. I think AppAcks still have a place in relatively small groups with low message volumes, but I have a few suggestions: 1) Describe this issue either in the AppAck proposal section or in the security considerations section 2) I think it is reasonable for a group to consider some proposal types invalid. If there is consensus with this position, I'd like to make this clear with an extra sentence or two. If that is not the consensus, then I will propose that only Add, Remove, and Update are assumed to be supported by each group, and other proposal types have to be explicitly added.\nMy team had similar comments. I agree that this shouldn't be a 100% necessary thing to support.\nI would differentiate between \"support\" in the sense of \"if you receive one, do something sensible with it\" vs. \"send this under some circumstances\". My understanding was that the requirement here was the former, and even that the definition of \"sensible\" was up to the implementation. 
The thinking being that if an application wanted to get some anti-suppression properties, it could send and verify these on some schedule, but when exactly they would be sent would be up to the application. Does that alleviate some of the concern here? If so, I would totally be open to clarifying that. To NAME point about latency, it seems like we could add an parameter to AppAck so that you could report on past epochs' messages in a future epoch. On considering proposal types invalid -- I think there has always been a notion that applications might have policies on which proposals are acceptable. Not just at the proposal type level, but at even finer granularities. Think about things like groups where only certain members are allowed to add/remove participants. I would also be fine laying this out more explicitly, though maybe it would be a more natural fit in the architecture document.\nSorry, meant to delete a comment, closed the issue instead.\nIn the past I've expressed reservations about AppAck being included in the spec. I don't think it's sufficiently general (it only handles one specific pattern of message loss and won't work for others, per NAME comment), and I don't like that we don't specify how to handle dropped messages in any way. The AppAck functionality is likely to just be re-invented at the application-level because of this. For example, AppAck can't be used to implement read receipts It doesn't help detect the suppression of handshake messages It doesn't handle cases where some messages must be delivered and others may be dropped So instead I would propose removing AppAck from the spec.\nI've opened a PR to that effect in .\nVirtual interim 2022-05-19 Agreement that the bar for support is quite low ... 
but you don't get a security property unless members actually do something more than the minimum That sort of implies that you want the group to opt in to this mechanism So we should have a new proposal type in a separate I-D\n(To be clear, AppAck should move from the main spec here to its own I-D.)\nOne comment that renumbering the groupcontextextensions proposal was probably a bad idea from a process perspective which we should avoid doing in the future. In this case it seems unlikely that many people have implemented it who could not make a breaking change.", "new_text": "the sender will not have the keys necessary to construct an MLSCiphertext object. 13.1.8.1. The \"external_senders\" extension is a group context extension that contains credentials of senders that are permitted to send external"}
{"id": "q-en-mls-protocol-d741e87e44eed64a1601ea9920c0a32e1af41e00d210085bd647c0f8355ddd98", "old_text": "\"psk\" \"app_ack\" \"reinit\" New proposal types MUST state whether they require a path. If any", "comments": "The way the spec is written now, clients need to implement all proposal types, and they must react appropriately to all valid proposals, including AppAck. AppAck in a large, busy group could result in a situation where clients with longer latency connections to the DS (perhaps in a different country, using satellite, or using some wireless networks) might be effectively prevented from sending any messages, or worse could be prevented from sending AppAcks because their proposals are always a few hundred milliseconds late. I think AppAcks still have a place in relatively small groups with low message volumes, but I have a few suggestions: 1) Describe this issue either in the AppAck proposal section or in the security considerations section 2) I think it is reasonable for a group to consider some proposal types invalid. If there is consensus with this position, I'd like to make this clear with an extra sentence or two. If that is not the consensus, then I will propose that only Add, Remove, and Update are assumed to be supported by each group, and other proposal types have to be explicitly added.\nMy team had similar comments. I agree that this shouldn't be a 100% necessary thing to support.\nI would differentiate between \"support\" in the sense of \"if you receive one, do something sensible with it\" vs. \"send this under some circumstances\". My understanding was that the requirement here was the former, and even that the definition of \"sensible\" was up to the implementation. The thinking being that if an application wanted to get some anti-suppression properties, it could send and verify these on some schedule, but when exactly they would be sent would be up to the application. Does that alleviate some of the concern here? 
If so, I would totally be open to clarifying that. To NAME point about latency, it seems like we could add an parameter to AppAck so that you could report on past epochs' messages in a future epoch. On considering proposal types invalid -- I think there has always been a notion that applications might have policies on which proposals are acceptable. Not just at the proposal type level, but at even finer granularities. Think about things like groups where only certain members are allowed to add/remove participants. I would also be fine laying this out more explicitly, though maybe it would be a more natural fit in the architecture document.\nSorry, meant to delete a comment, closed the issue instead.\nIn the past I've expressed reservations about AppAck being included in the spec. I don't think it's sufficiently general (it only handles one specific pattern of message loss and won't work for others, per NAME comment), and I don't like that we don't specify how to handle dropped messages in any way. The AppAck functionality is likely to just be re-invented at the application-level because of this. For example, AppAck can't be used to implement read receipts It doesn't help detect the suppression of handshake messages It doesn't handle cases where some messages must be delivered and others may be dropped So instead I would propose removing AppAck from the spec.\nI've opened a PR to that effect in .\nVirtual interim 2022-05-19 Agreement that the bar for support is quite low ... but you don't get a security property unless members actually do something more than the minimum That sort of implies that you want the group to opt in to this mechanism So we should have a new proposal type in a separate I-D\n(To be clear, AppAck should move from the main spec here to its own I-D.)\nOne comment that renumbering the groupcontextextensions proposal was probably a bad idea from a process perspective which we should avoid doing in the future. 
In this case it seems unlikely that many people have implemented it who could not make a breaking change.", "new_text": "\"psk\" \"reinit\" New proposal types MUST state whether they require a path. If any"}
{"id": "q-en-mls-protocol-a105a2eb813678c8cc1f6eed9c7665a07e1aadadbac013766327c289bde75ce8", "old_text": "For a KeyPackageRef, the \"value\" input is the encoded KeyPackage, and the ciphersuite specified in the KeyPackage determines the KDF used. For a LeafNodeRef, the \"value\" input is the LeafNode object for the leaf node in question. For a ProposalRef, the \"value\" input is the MLSMessageContentAuth carrying the proposal. In the latter two cases, the KDF is determined by the group's ciphersuite. 6.3.", "comments": "This seems to be a layer of indirection that doesn't add any value to implementation.\nThe Credentials section claims there is an epoch id field in key packages. I don't think that actually exists. My apologies if this is a duplicate issue.\nAddressed in but may need to be pulled out depending on the fate of that PR\ncontains all of the Proposals that were previously sent within the same epoch. This seems unnecessary, since all members should have them anyway. The list of in the actual message should be enough for members to parse and validate a message. We should discuss if we can de-duplicate this, since the list of Proposals might be quite long.\nThe current definition of is as follows; Maybe this issue is obsolete?\nSeems it is now obsolete indeed.\nI don't have a strong opinion on this question, but this change seems sensible to me. The use of LeafNodeRef requires either (a) a linear scan through the tree, which is time-expensive in large trees, or (b) keeping an index that maps LeafNodeRef values to indices, which is storage-intensive in large trees. IIRC, the change to LeafNodeRef was made in the context of NAME work to make the tree description less tied to the array representation. In general, that is the right thing to do, and it's good that we did it. But we still have leaf indices anyway, for . Since we still have the concept, we might as well use it where it makes sense. 
And it certainly seems to me like and make sense, since they are internal to the protocol. In case folks are worried about the application-facing API -- MLS stacks can certainly be more intelligent, e.g., exposing or or . But they can do that regardless of this PR.", "new_text": "For a KeyPackageRef, the \"value\" input is the encoded KeyPackage, and the ciphersuite specified in the KeyPackage determines the KDF used. For a ProposalRef, the \"value\" input is the MLSMessageContentAuth carrying the proposal. In the latter case, the KDF is determined by the group's ciphersuite. 6.3."}
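The new_text in the record above defines hash-based references (KeyPackageRef, ProposalRef) in terms of the ciphersuite's KDF. A minimal illustrative sketch of such a derivation is below; the HKDF construction, the label strings, and the 16-byte output length are assumptions for demonstration only, not the normative encoding from the draft:

```python
import hashlib
import hmac


def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # RFC 5869 HKDF-Extract instantiated with SHA-256.
    return hmac.new(salt, ikm, hashlib.sha256).digest()


def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # RFC 5869 HKDF-Expand instantiated with SHA-256.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]


def make_ref(label: bytes, value: bytes, length: int = 16) -> bytes:
    # Derive a short, deterministic reference to `value`, domain-separated
    # by `label` so a KeyPackage ref can never collide with a Proposal ref.
    return hkdf_expand(hkdf_extract(b"", value), label, length)
```

The point of the label is domain separation: the same encoded bytes hashed under different labels yield unrelated references.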
{"id": "q-en-mls-protocol-a105a2eb813678c8cc1f6eed9c7665a07e1aadadbac013766327c289bde75ce8", "old_text": "clients using some identifier. MLS clients have a few types of identifiers, with different operational properties. The Credentials presented by the clients in a group authenticate application-level identifiers for the clients. These identifiers may not uniquely identify clients. For example, if a user has multiple devices that are all present in an MLS group, then those devices' clients could all present the user's application-layer identifiers. Internally to the protocol, group members are uniquely identified by their leaves, expressed as LeafNodeRef objects. These identifiers are unstable: They change whenever the member sends a Commit, or whenever an Update proposal from the member is committed. MLS provides two unique client identifiers that are stable across epochs: The index of a client among the leaves of the tree The \"epoch_id\" field in the key package The application may also provide application-specific unique identifiers in the \"extensions\" field of KeyPackage or LeafNode objects. 7.", "comments": "This seems to be a layer of indirection that doesn't add any value to implementation.\nThe Credentials section claims there is an epoch id field in key packages. I don't think that actually exists. My apologies if this is a duplicate issue.\nAddressed in but may need to be pulled out depending on the fate of that PR\ncontains all of the Proposals that were previously sent within the same epoch. This seems unnecessary, since all members should have them anyway. The list of in the actual message should be enough for members to parse and validate a message. We should discuss if we can de-duplicate this, since the list of Proposals might be quite long.\nThe current definition of is as follows; Maybe this issue is obsolete?\nSeems it is now obsolete indeed.\nI don't have a strong opinion on this question, but this change seems sensible to me. 
The use of LeafNodeRef requires either (a) a linear scan through the tree, which is time-expensive in large trees, or (b) keeping an index that maps LeafNodeRef values to indices, which is storage-intensive in large trees. IIRC, the change to LeafNodeRef was made in the context of NAME work to make the tree description less tied to the array representation. In general, that is the right thing to do, and it's good that we did it. But we still have leaf indices anyway, for . Since we still have the concept, we might as well use it where it makes sense. And it certainly seems to me like and make sense, since they are internal to the protocol. In case folks are worried about the application-facing API -- MLS stacks can certainly be more intelligent, e.g., exposing or or . But they can do that regardless of this PR.", "new_text": "clients using some identifier. MLS clients have a few types of identifiers, with different operational properties. Internally to the protocol, group members are uniquely identified by their leaf index. However, a leaf index is only valid for referring to members in a given epoch. The same leaf index may represent a different member, or no member at all, in a subsequent epoch. The Credentials presented by the clients in a group authenticate application-level identifiers for the clients. However, these identifiers may not uniquely identify clients. For example, if a user has multiple devices that are all present in an MLS group, then those devices' clients could all present the user's application-layer identifiers. If needed, applications may add application-specific unique identifiers to the \"extensions\" field of KeyPackage or LeafNode objects. 7."}
{"id": "q-en-mls-protocol-a105a2eb813678c8cc1f6eed9c7665a07e1aadadbac013766327c289bde75ce8", "old_text": "signature key depending on the sender's \"sender_type\": \"member\": The signature key contained in the Credential at the leaf with the sender's \"LeafNodeRef\" \"external\": The signature key contained in the Credential at the index indicated by the \"sender_index\" in the \"external_senders\" group context extension (see Section external-senders-extension). In this case, the \"content_type\" of the message MUST NOT be \"commit\", since only members of the group or new joiners can send Commit messages. \"new_member\": The signature key depends on the \"content_type\":", "comments": "This seems to be a layer of indirection that doesn't add any value to implementation.\nThe Credentials section claims there is an epoch id field in key packages. I don't think that actually exists. My apologies if this is a duplicate issue.\nAddressed in but may need to be pulled out depending on the fate of that PR\ncontains all of the Proposals that were previously sent within the same epoch. This seems unnecessary, since all members should have them anyway. The list of in the actual message should be enough for members to parse and validate a message. We should discuss if we can de-duplicate this, since the list of Proposals might be quite long.\nThe current definition of is as follows; Maybe this issue is obsolete?\nSeems it is now obsolete indeed.\nI don't have a strong opinion on this question, but this change seems sensible to me. The use of LeafNodeRef requires either (a) a linear scan through the tree, which is time-expensive in large trees, or (b) keeping an index that maps LeafNodeRef values to indices, which is storage-intensive in large trees. IIRC, the change to LeafNodeRef was made in the context of NAME work to make the tree description less tied to the array representation. In general, that is the right thing to do, and it's good that we did it. 
But we still have leaf indices anyway, for . Since we still have the concept, we might as well use it where it makes sense. And it certainly seems to me like and make sense, since they are internal to the protocol. In case folks are worried about the application-facing API -- MLS stacks can certainly be more intelligent, e.g., exposing or or . But they can do that regardless of this PR.", "new_text": "signature key depending on the sender's \"sender_type\": \"member\": The signature key contained in the Credential at the index indicated by \"leaf_index\" in the ratchet tree. \"external\": The signature key contained in the Credential at the index indicated by \"sender_index\" in the \"external_senders\" group context extension (see Section external-senders-extension). In this case, the \"content_type\" of the message MUST NOT be \"commit\", since only members of the group or new joiners can send Commit messages. \"new_member\": The signature key depends on the \"content_type\":"}
{"id": "q-en-mls-protocol-a105a2eb813678c8cc1f6eed9c7665a07e1aadadbac013766327c289bde75ce8", "old_text": "content. Before being encrypted, the sender data is encoded as an object of the following form: MLSSenderData.sender is assumed to be a \"member\" sender type. When constructing an MLSSenderData from a Sender object, the sender MUST verify Sender.sender_type is \"member\" and use Sender.sender for MLSSenderData.sender. The \"reuse_guard\" field contains a fresh random value used to avoid nonce reuse in the case of state loss or corruption, as described in", "comments": "This seems to be a layer of indirection that doesn't add any value to implementation.\nThe Credentials section claims there is an epoch id field in key packages. I don't think that actually exists. My apologies if this is a duplicate issue.\nAddressed in but may need to be pulled out depending on the fate of that PR\ncontains all of the Proposals that were previously sent within the same epoch. This seems unnecessary, since all members should have them anyway. The list of in the actual message should be enough for members to parse and validate a message. We should discuss if we can de-duplicate this, since the list of Proposals might be quite long.\nThe current definition of is as follows; Maybe this issue is obsolete?\nSeems it is now obsolete indeed.\nI don't have a strong opinion on this question, but this change seems sensible to me. The use of LeafNodeRef requires either (a) a linear scan through the tree, which is time-expensive in large trees, or (b) keeping an index that maps LeafNodeRef values to indices, which is storage-intensive in large trees. IIRC, the change to LeafNodeRef was made in the context of NAME work to make the tree description less tied to the array representation. In general, that is the right thing to do, and it's good that we did it. But we still have leaf indices anyway, for . Since we still have the concept, we might as well use it where it makes sense. 
And it certainly seems to me like and make sense, since they are internal to the protocol. In case folks are worried about the application-facing API -- MLS stacks can certainly be more intelligent, e.g., exposing or or . But they can do that regardless of this PR.", "new_text": "content. Before being encrypted, the sender data is encoded as an object of the following form: When constructing an MLSSenderData from a Sender object, the sender MUST verify Sender.sender_type is \"member\" and use Sender.leaf_index for MLSSenderData.leaf_index. The \"reuse_guard\" field contains a fresh random value used to avoid nonce reuse in the case of state loss or corruption, as described in"}
{"id": "q-en-mls-protocol-a105a2eb813678c8cc1f6eed9c7665a07e1aadadbac013766327c289bde75ce8", "old_text": "is the first three fields of MLSCiphertext: When parsing a SenderData struct as part of message decryption, the recipient MUST verify that the LeafNodeRef indicated in the \"sender\" field identifies a member of the group. 8.", "comments": "This seems to be a layer of indirection that doesn't add any value to implementation.\nThe Credentials section claims there is an epoch id field in key packages. I don't think that actually exists. My apologies if this is a duplicate issue.\nAddressed in but may need to be pulled out depending on the fate of that PR\ncontains all of the Proposals that were previously sent within the same epoch. This seems unnecessary, since all members should have them anyway. The list of in the actual message should be enough for members to parse and validate a message. We should discuss if we can de-duplicate this, since the list of Proposals might be quite long.\nThe current definition of is as follows; Maybe this issue is obsolete?\nSeems it is now obsolete indeed.\nI don't have a strong opinion on this question, but this change seems sensible to me. The use of LeafNodeRef requires either (a) a linear scan through the tree, which is time-expensive in large trees, or (b) keeping an index that maps LeafNodeRef values to indices, which is storage-intensive in large trees. IIRC, the change to LeafNodeRef was made in the context of NAME work to make the tree description less tied to the array representation. In general, that is the right thing to do, and it's good that we did it. But we still have leaf indices anyway, for . Since we still have the concept, we might as well use it where it makes sense. And it certainly seems to me like and make sense, since they are internal to the protocol. In case folks are worried about the application-facing API -- MLS stacks can certainly be more intelligent, e.g., exposing or or . 
But they can do that regardless of this PR.", "new_text": "is the first three fields of MLSCiphertext: When parsing a SenderData struct as part of message decryption, the recipient MUST verify that the leaf index indicated in the \"leaf_index\" field identifies a non-blank node. 8."}
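The check required by the new_text above (that the decrypted `leaf_index` identifies a non-blank node) can be sketched against a hypothetical array-of-leaves representation, where blank leaves are `None`:

```python
def verify_sender_leaf(leaves: list, leaf_index: int) -> None:
    # Reject sender data whose leaf_index is out of range or points
    # at a blank (unoccupied) leaf in the ratchet tree.
    if leaf_index < 0 or leaf_index >= len(leaves):
        raise ValueError("leaf_index out of range")
    if leaves[leaf_index] is None:
        raise ValueError("leaf_index identifies a blank node")
```

For example, with `leaves = ["alice", None, "bob"]`, index 0 passes while index 1 is rejected.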
{"id": "q-en-mls-protocol-a105a2eb813678c8cc1f6eed9c7665a07e1aadadbac013766327c289bde75ce8", "old_text": "13.1.3. A Remove proposal requests that the member with LeafNodeRef \"removed\" be removed from the group. A member of the group applies a Remove message by taking the following steps:", "comments": "This seems to be a layer of indirection that doesn't add any value to implementation.\nThe Credentials section claims there is an epoch id field in key packages. I don't think that actually exists. My apologies if this is a duplicate issue.\nAddressed in but may need to be pulled out depending on the fate of that PR\ncontains all of the Proposals that were previously sent within the same epoch. This seems unnecessary, since all members should have them anyway. The list of in the actual message should be enough for members to parse and validate a message. We should discuss if we can de-duplicate this, since the list of Proposals might be quite long.\nThe current definition of is as follows; Maybe this issue is obsolete?\nSeems it is now obsolete indeed.\nI don't have a strong opinion on this question, but this change seems sensible to me. The use of LeafNodeRef requires either (a) a linear scan through the tree, which is time-expensive in large trees, or (b) keeping an index that maps LeafNodeRef values to indices, which is storage-intensive in large trees. IIRC, the change to LeafNodeRef was made in the context of NAME work to make the tree description less tied to the array representation. In general, that is the right thing to do, and it's good that we did it. But we still have leaf indices anyway, for . Since we still have the concept, we might as well use it where it makes sense. And it certainly seems to me like and make sense, since they are internal to the protocol. In case folks are worried about the application-facing API -- MLS stacks can certainly be more intelligent, e.g., exposing or or . 
But they can do that regardless of this PR.", "new_text": "13.1.3. A Remove proposal requests that the member with the leaf index \"removed\" be removed from the group. A member of the group applies a Remove message by taking the following steps:"}
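The Remove semantics in the new_text above (removing the member at leaf index `removed`) can be sketched on an array-of-leaves model. Blanking the leaf is the core step; truncating trailing blanks afterward is an assumption about tree maintenance, not quoted from the record:

```python
def apply_remove(leaves: list, removed: int) -> list:
    # Blank the removed member's leaf; then drop trailing blank leaves
    # so the tree shrinks when the rightmost members depart.
    if removed < 0 or removed >= len(leaves) or leaves[removed] is None:
        raise ValueError("removed index does not identify a member")
    leaves = list(leaves)
    leaves[removed] = None
    while leaves and leaves[-1] is None:
        leaves.pop()
    return leaves
```

Note that removing an interior member leaves a blank in place rather than shifting indices, which is what keeps other members' leaf indices stable within the epoch.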
{"id": "q-en-mls-protocol-a105a2eb813678c8cc1f6eed9c7665a07e1aadadbac013766327c289bde75ce8", "old_text": "local group state. New members MUST verify the \"signature\" using the public key taken from the credential in the leaf node of the member with LeafNodeRef \"signer\". The signature covers the following structure, comprising all the fields in the GroupInfo above \"signature\": 13.2.3.1.", "comments": "This seems to be a layer of indirection that doesn't add any value to implementation.\nThe Credentials section claims there is an epoch id field in key packages. I don't think that actually exists. My apologies if this is a duplicate issue.\nAddressed in but may need to be pulled out depending on the fate of that PR\ncontains all of the Proposals that were previously sent within the same epoch. This seems unnecessary, since all members should have them anyway. The list of in the actual message should be enough for members to parse and validate a message. We should discuss if we can de-duplicate this, since the list of Proposals might be quite long.\nThe current definition of is as follows; Maybe this issue is obsolete?\nSeems it is now obsolete indeed.\nI don't have a strong opinion on this question, but this change seems sensible to me. The use of LeafNodeRef requires either (a) a linear scan through the tree, which is time-expensive in large trees, or (b) keeping an index that maps LeafNodeRef values to indices, which is storage-intensive in large trees. IIRC, the change to LeafNodeRef was made in the context of NAME work to make the tree description less tied to the array representation. In general, that is the right thing to do, and it's good that we did it. But we still have leaf indices anyway, for . Since we still have the concept, we might as well use it where it makes sense. And it certainly seems to me like and make sense, since they are internal to the protocol. 
In case folks are worried about the application-facing API -- MLS stacks can certainly be more intelligent, e.g., exposing or or . But they can do that regardless of this PR.", "new_text": "local group state. New members MUST verify the \"signature\" using the public key taken from the credential in the leaf node of the ratchet tree with leaf index \"signer\". The signature covers the following structure, comprising all the fields in the GroupInfo above \"signature\": 13.2.3.1."}
{"id": "q-en-mls-protocol-a105a2eb813678c8cc1f6eed9c7665a07e1aadadbac013766327c289bde75ce8", "old_text": "Verify the signature on the GroupInfo object. The signature input comprises all of the fields in the GroupInfo object except the signature field. The public key and algorithm are taken from the credential in the leaf node of the member with LeafNodeRef \"signer\". If there is no matching leaf node, or if signature verification fails, return an error. Verify the integrity of the ratchet tree.", "comments": "This seems to be a layer of indirection that doesn't add any value to implementation.\nThe Credentials section claims there is an epoch id field in key packages. I don't think that actually exists. My apologies if this is a duplicate issue.\nAddressed in but may need to be pulled out depending on the fate of that PR\ncontains all of the Proposals that were previously sent within the same epoch. This seems unnecessary, since all members should have them anyway. The list of in the actual message should be enough for members to parse and validate a message. We should discuss if we can de-duplicate this, since the list of Proposals might be quite long.\nThe current definition of is as follows; Maybe this issue is obsolete?\nSeems it is now obsolete indeed.\nI don't have a strong opinion on this question, but this change seems sensible to me. The use of LeafNodeRef requires either (a) a linear scan through the tree, which is time-expensive in large trees, or (b) keeping an index that maps LeafNodeRef values to indices, which is storage-intensive in large trees. IIRC, the change to LeafNodeRef was made in the context of NAME work to make the tree description less tied to the array representation. In general, that is the right thing to do, and it's good that we did it. But we still have leaf indices anyway, for . Since we still have the concept, we might as well use it where it makes sense. 
And it certainly seems to me like and make sense, since they are internal to the protocol. In case folks are worried about the application-facing API -- MLS stacks can certainly be more intelligent, e.g., exposing or or . But they can do that regardless of this PR.", "new_text": "Verify the signature on the GroupInfo object. The signature input comprises all of the fields in the GroupInfo object except the signature field. The public key and algorithm are taken from the credential in the leaf node of the ratchet tree with leaf index \"signer\". If the node is blank or if signature verification fails, return an error. Verify the integrity of the ratchet tree."}
{"id": "q-en-mls-protocol-a105a2eb813678c8cc1f6eed9c7665a07e1aadadbac013766327c289bde75ce8", "old_text": "If the \"path_secret\" value is set in the GroupSecrets object: Identify the lowest common ancestor of the leaf node \"my_leaf\" and of the node of the member with LeafNodeRef \"GroupInfo.signer\". Set the private key for this node to the private key derived from the \"path_secret\".", "comments": "This seems to be a layer of indirection that doesn't add any value to implementation.\nThe Credentials section claims there is an epoch id field in key packages. I don't think that actually exists. My apologies if this is a duplicate issue.\nAddressed in but may need to be pulled out depending on the fate of that PR\ncontains all of the Proposals that were previously sent within the same epoch. This seems unnecessary, since all members should have them anyway. The list of in the actual message should be enough for members to parse and validate a message. We should discuss if we can de-duplicate this, since the list of Proposals might be quite long.\nThe current definition of is as follows; Maybe this issue is obsolete?\nSeems it is now obsolete indeed.\nI don't have a strong opinion on this question, but this change seems sensible to me. The use of LeafNodeRef requires either (a) a linear scan through the tree, which is time-expensive in large trees, or (b) keeping an index that maps LeafNodeRef values to indices, which is storage-intensive in large trees. IIRC, the change to LeafNodeRef was made in the context of NAME work to make the tree description less tied to the array representation. In general, that is the right thing to do, and it's good that we did it. But we still have leaf indices anyway, for . Since we still have the concept, we might as well use it where it makes sense. And it certainly seems to me like and make sense, since they are internal to the protocol. 
In case folks are worried about the application-facing API -- MLS stacks can certainly be more intelligent, e.g., exposing or or . But they can do that regardless of this PR.", "new_text": "If the \"path_secret\" value is set in the GroupSecrets object: Identify the lowest common ancestor of the leaf node \"my_leaf\" and of the node of the member with leaf index \"GroupInfo.signer\". Set the private key for this node to the private key derived from the \"path_secret\"."}
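The lowest-common-ancestor step in the new_text above can be sketched with the array-based tree math commonly used by MLS implementations, where leaf i sits at node index 2*i and a node's level is the number of trailing ones in its index. This is an illustrative sketch of that convention, not normative text from the record:

```python
def level(x: int) -> int:
    # Level of a node in the array representation: the number of
    # trailing one bits in its node index (leaves are level 0).
    k = 0
    while (x >> k) & 1 == 1:
        k += 1
    return k


def leaf_to_node(leaf_index: int) -> int:
    # Leaf i sits at node index 2*i in the array representation.
    return 2 * leaf_index


def common_ancestor(x: int, y: int) -> int:
    # Lowest common ancestor of node indices x and y.
    # First handle the case where one node is an ancestor of the other.
    lx, ly = level(x) + 1, level(y) + 1
    if lx <= ly and (x >> ly) == (y >> ly):
        return y
    if ly <= lx and (x >> lx) == (y >> lx):
        return x
    # Otherwise strip low bits until the two paths converge.
    k = 0
    while x != y:
        x, y = x >> 1, y >> 1
        k += 1
    return (x << k) + (1 << (k - 1)) - 1
```

For example, leaves 0 and 1 (nodes 0 and 2) meet at node 1, and leaves 0 and 3 (nodes 0 and 6) meet at node 3.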
{"id": "q-en-mls-protocol-3280ecb8060e0eb2f7ebffbb78a94c6cae848dfe42efcbdcdaed74339b1aeaff", "old_text": "Transmit the Welcome message to the other new members The recipient of a Welcome message processes it as described in joining-via-welcome-message. If application context informs the recipient that the Welcome should reflect the creation of a new group", "comments": "Brendan to revise based on Rohan suggestion.\nInterim 2022-05-26: Concerns around making the group_id totally random Malicious group creator can choose it malicious Federated systems get worse randomness Using AEAD.Nk depends on ciphersuite, could be awkward NAME will revise to say: Here are the requirements that group ID must satisfy If you don\u2019t have an alternative plan, do it randomly\nGroup ids are signed to bind certain information to specific groups. But I can imagine a case where a user mis-uses the group id to be something like a \"room title\" which lets an adversary control the group id and introduces potential problems. If we can prove that it's fine for a group id to be reused, then should we get rid of group ids? If not (as I expect), then group ids need to be guaranteed to be unique [globally per signing key] and we need to specify that.\nDiscussed at 19 May interim. NAME to generate PR documenting fresh random value approach.\nVirtual interim 2022-05-19: Agreement that some sort of uniqueness requirement is worthwhile Main question is what scope of uniqueness should be Globally? Within scope of DS? Fresh random value? 
Simple, but precludes application usage of the field Also could help avoid metadata leakage Client / DS should probably still check for collisions NAME to write up a PR on the fresh random value idea\nGroup IDs must be generated randomly by creator Clients and DS must check that group ids are unique as far as they know", "new_text": "Transmit the Welcome message to the other new members Group IDs SHOULD be constructed in such a way that there's an overwhelmingly low probability of honest group creators generating the same group ID, even without assistance from the Delivery Service. For example, by making the group ID a freshly generated random value of size \"KDF.Nh\". The Delivery Service MAY attempt to ensure that group IDs are globally unique by rejecting the creation of new groups with a previously used ID. The recipient of a Welcome message processes it as described in joining-via-welcome-message. If application context informs the recipient that the Welcome should reflect the creation of a new group"}
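The freshly-random group ID recommended in the new_text above is simple to sketch. The value `KDF_NH = 32` is an assumption corresponding to a SHA-256-based KDF; the actual size follows the group's ciphersuite:

```python
import secrets

KDF_NH = 32  # assumed KDF output size, e.g. for a SHA-256-based suite


def generate_group_id() -> bytes:
    # A freshly generated random value of KDF.Nh bytes makes accidental
    # collisions between honest group creators overwhelmingly unlikely,
    # without any coordination through the Delivery Service.
    return secrets.token_bytes(KDF_NH)
```

Using `secrets` (rather than `random`) matters here: the group ID should come from a cryptographically strong source so it is also unpredictable, not merely unique.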
{"id": "q-en-mls-protocol-3280ecb8060e0eb2f7ebffbb78a94c6cae848dfe42efcbdcdaed74339b1aeaff", "old_text": "In both cases, the new members need information to bootstrap their local group state. New members MUST verify the \"signature\" using the public key taken from the credential in the leaf node of the ratchet tree with leaf index \"signer\". The signature covers the following structure, comprising all the fields in the GroupInfo above \"signature\": 13.2.3.1.", "comments": "Brendan to revise based on Rohan suggestion.\nInterim 2022-05-26: Concerns around making the group_id totally random Malicious group creator can choose it malicious Federated systems get worse randomness Using AEAD.Nk depends on ciphersuite, could be awkward NAME will revise to say: Here are the requirements that group ID must satisfy If you don\u2019t have an alternative plan, do it randomly\nGroup ids are signed to bind certain information to specific groups. But I can imagine a case where a user mis-uses the group id to be something like a \"room title\" which lets an adversary control the group id and introduces potential problems. If we can prove that it's fine for a group id to be reused, then should we get rid of group ids? If not (as I expect), then group ids need to be guaranteed to be unique [globally per signing key] and we need to specify that.\nDiscussed at 19 May interim. NAME to generate PR documenting fresh random value approach.\nVirtual interim 2022-05-19: Agreement that some sort of uniqueness requirement is worthwhile Main question is what scope of uniqueness should be Globally? Within scope of DS? Fresh random value? 
Simple, but precludes application usage of the field Also could help avoid metadata leakage Client / DS should probably still check for collisions NAME to write up a PR on the fresh random value idea\nGroup IDs must be generated randomly by creator Clients and DS must check that group ids are unique as far as they know", "new_text": "In both cases, the new members need information to bootstrap their local group state. New members MUST verify that \"group_id\" is unique among the groups they're currently participating in. New members also MUST verify the \"signature\" using the public key taken from the leaf node of the ratchet tree with leaf index \"signer\". The signature covers the following structure, comprising all the fields in the GroupInfo above \"signature\": 13.2.3.1."}
{"id": "q-en-mls-protocol-3280ecb8060e0eb2f7ebffbb78a94c6cae848dfe42efcbdcdaed74339b1aeaff", "old_text": "\"signer\". If the node is blank or if signature verification fails, return an error. Verify the integrity of the ratchet tree. Verify that the tree hash of the ratchet tree matches the", "comments": "Brendan to revise based on Rohan suggestion.\nInterim 2022-05-26: Concerns around making the group_id totally random Malicious group creator can choose it malicious Federated systems get worse randomness Using AEAD.Nk depends on ciphersuite, could be awkward NAME will revise to say: Here are the requirements that group ID must satisfy If you don\u2019t have an alternative plan, do it randomly\nGroup ids are signed to bind certain information to specific groups. But I can imagine a case where a user mis-uses the group id to be something like a \"room title\" which lets an adversary control the group id and introduces potential problems. If we can prove that it's fine for a group id to be reused, then should we get rid of group ids? If not (as I expect), then group ids need to be guaranteed to be unique [globally per signing key] and we need to specify that.\nDiscussed at 19 May interim. NAME to generate PR documenting fresh random value approach.\nVirtual interim 2022-05-19: Agreement that some sort of uniqueness requirement is worthwhile Main question is what scope of uniqueness should be Globally? Within scope of DS? Fresh random value? Simple, but precludes application usage of the field Also could help avoid metadata leakage Client / DS should probably still check for collisions NAME to write up a PR on the fresh random value idea\nGroup IDs must be generated randomly by creator Clients and DS must check that group ids are unique as far as they know", "new_text": "\"signer\". If the node is blank or if signature verification fails, return an error. Verify that the \"group_id\" is unique among the groups that the client is currently participating in. 
Verify the integrity of the ratchet tree. Verify that the tree hash of the ratchet tree matches the"}
{"id": "q-en-mls-protocol-87b452c307d38f47ec9b624e8bf742e8cdb5ca1f97459b28faa17ad6246d68b4", "old_text": "identical to the \"signature_key\" in the LeafNode containing this credential. Each new credential that has not already been validated by the application MUST be validated against the Authentication Service. Applications SHOULD require that a client present the same set of identifiers throughout its presence in the group, even if its Credential is changed in a Commit or Update. If an application allows clients to change identifiers over time, then each time the client presents a new credential, the application MUST verify that the set of identifiers in the credential is acceptable to the application for this client. 6.3.1. MLS implementations will presumably provide applications with a way to request protocol operations with regard to other clients (e.g., removing clients). Such functions will need to refer to the other", "comments": "Replaces This PR attempts to add precision to the credential validation requirements in the document, along the lines NAME suggests in , in the spirit of what NAME wrote up in and drawing inspiration from what RFC 6125 does in the context of TLS. It seems like this gives a clearer definition of (a) what the AS's job is and (b) when it needs to be done.\nWhile this is spread out throughout the document, it would be beneficial to spell out in which scenarios a Credential MUST be verified (with the AS): When fetching a KeyPackage to add a new member When discovering a KeyPackage in an AddProposal from another member When joining a new group (verify all other members' Credentials), both when receiving a Welcome message and when joining through an External Commit When a member updates its leaf ... anywhere else?\nThanks for picking it up!", "new_text": "identical to the \"signature_key\" in the LeafNode containing this credential. 6.3.1. 
The application using MLS is responsible for specifying which identifiers it finds acceptable for each member in a group. In other words, following the model that RFC6125 describes for TLS, the application maintains a list of \"reference identifiers\" for the members of a group, and the credentials provide \"presented identifiers\". A member of a group is authenticated by first validating that the member's credential legitimately represents some presented identifiers, and then ensuring that the reference identifiers for the member are authenticated by those presented identifiers. The parts of the system that perform these functions are collectively referred to as the Authentication Service (AS) I-D.ietf-mls-architecture. A member's credential is said to be _validated with the AS_ when the AS verifies the credential's presented identifiers, and verifies that those identifiers match the reference identifiers for the member. Whenever a new credential is introduced in the group, it MUST be validated with the AS. In particular, at the following events in the protocol: When a member receives a KeyPackage that it will use in an Add proposal to add a new member to the group. When a member receives a GroupInfo object that it will use to join a group, either via a Welcome or via an External Commit. When a member receives an Add proposal adding a member to the group. When a member receives an Update proposal whose LeafNode has a new credential for the member. When a member receives a Commit with an UpdatePath whose LeafNode has a new credential for the committer. When an \"external_senders\" extension is added to the group, or an existing \"external_senders\" extension is updated. In cases where a member's credential is being replaced, such as Update and Commit cases above, the AS MUST also verify that the set of presented identifiers in the new credential is valid as a successor to the set of presented identifiers in the old credential, according to the application's policy. 
6.3.2. MLS implementations will presumably provide applications with a way to request protocol operations with regard to other clients (e.g., removing clients). Such functions will need to refer to the other"}
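As a concrete illustration of the validation model in the record above (reference identifiers maintained by the application, presented identifiers carried in credentials, and a successor check when a credential is replaced), here is a minimal sketch. The function name and the "no identifier is lost" successor policy are hypothetical, not from the draft:

```python
def validate_with_as(presented_identifiers, reference_identifiers,
                     old_presented=None):
    """Illustrative Authentication Service check (names hypothetical)."""
    # A member authenticates only if every reference identifier the
    # application expects is covered by a presented identifier.
    if not set(reference_identifiers) <= set(presented_identifiers):
        return False
    # When a credential is replaced (Update / Commit), the new set of
    # presented identifiers must be an acceptable successor to the old
    # set; this sketch uses a simple policy: no identifier is dropped.
    if old_presented is not None:
        return set(old_presented) <= set(presented_identifiers)
    return True
```

A real AS would additionally verify that the credential legitimately attests the presented identifiers (e.g., by validating an X.509 chain) before this comparison.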
{"id": "q-en-mls-protocol-87b452c307d38f47ec9b624e8bf742e8cdb5ca1f97459b28faa17ad6246d68b4", "old_text": "The client verifies the validity of a LeafNode using the following steps: Verify that the credential in the LeafNode is valid according to the authentication service and the client's local policy. These actions MUST be the same regardless of at what point in the protocol the LeafNode is being verified with the following exception: If the LeafNode is an update to another LeafNode, the authentication service MUST additionally validate that the set of identities attested by the credential in the new LeafNode is acceptable relative to the identities attested by the old credential. For example: An Update proposal updates the sender's old LeafNode to a new one A \"resync\" external commit removes the joiner's old LeafNode via a Remove proposal and replaces it with a new one Verify that the signature on the LeafNode is valid using \"signature_key\".", "comments": "Replaces This PR attempts to add precision to the credential validation requirements in the document, along the lines NAME suggests in , in the spirit of what NAME wrote up in and drawing inspiration from what RFC 6125 does in the context of TLS. It seems like this gives a clearer definition of (a) what the AS's job is and (b) when it needs to be done.\nWhile this is spread out throughout the document, it would be beneficial to spell out in which scenarios a Credential MUST be verified (with the AS): When fetching a KeyPackage to add a new member When discovering a KeyPackage in an AddProposal from another member When joining a new group (verify all other members' Credentials), both when receiving a Welcome message and when joining through an External Commit When a member updates its leaf ... 
anywhere else?\nThanks for picking it up!", "new_text": "The client verifies the validity of a LeafNode using the following steps: Verify that the credential in the LeafNode is valid as described in credential-validation. Verify that the signature on the LeafNode is valid using \"signature_key\"."}
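The two-step LeafNode verification in the new_text above (credential validation, then signature check) can be sketched as follows; the dictionary layout and the two callables are stand-ins for the draft's structures and are not an actual MLS API:

```python
def verify_leaf_node(leaf_node, as_validate, verify_signature):
    """Illustrative two-step LeafNode check (structures hypothetical)."""
    # Step 1: validate the credential with the Authentication Service.
    if not as_validate(leaf_node["credential"]):
        return False
    # Step 2: verify the LeafNode signature using its signature_key.
    return verify_signature(leaf_node["signature_key"],
                            leaf_node["to_be_signed"],
                            leaf_node["signature"])
```

Both checks must pass; failing either one rejects the LeafNode regardless of where in the protocol it appears.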
{"id": "q-en-mls-protocol-d64ef1dda64f2f350de84c45388177658a0edfd93659c4f75640e7923d582e91", "old_text": "17.6. This document registers the \"message/mls\" MIME media type in order to allow other protocols (ex: HTTP RFC7540) to convey MLS messages. message", "comments": "As NAME pointed out on the mailing list, this reference does not need to be normative.", "new_text": "17.6. This document registers the \"message/mls\" MIME media type in order to allow other protocols (e.g., HTTP RFC7540) to convey MLS messages. message"}
{"id": "q-en-mls-protocol-e35d005193d8fb3b1d8a423bc65c7379e379c017d47ba249349df8b67b37e24a", "old_text": "for key exchange, AES-128-GCM for HPKE, HKDF over SHA2-256, and Ed25519 for signatures. Values with the first byte 255 (decimal) are reserved for Private Use. New ciphersuite values are assigned by IANA as described in iana-considerations.", "comments": "NAME thanks, good catch. should be fixed now.\nComment by NAME Algorithm entries use two octets (65535) but only allow 255 Private Use entries. I think that might be a little low. Maybe allow 0xf000 - 0xffff or something ?\nI agree this should be changed. CCing NAME who I believe initially proposed this.", "new_text": "for key exchange, AES-128-GCM for HPKE, HKDF over SHA2-256, and Ed25519 for signatures. New ciphersuite values are assigned by IANA as described in iana-considerations."}
{"id": "q-en-mls-protocol-cfdb0fc5a6861b8ea98b406a6c4e96fcba644db1f2e17094f052d6d10a0bf8c5", "old_text": "produces a single ciphertext output from AEAD encryption (aligning with RFC5116), as opposed to a separate ciphertext and tag. Ciphersuites are represented with the CipherSuite type. HPKE public keys are opaque values in a format defined by the underlying protocol (see the Cryptographic Dependencies section of the HPKE specification for more information). The signature algorithm specified in the ciphersuite is the mandatory algorithm to be used for signatures in FramedContentAuthData and the tree signatures. It MUST be the same as the signature algorithm specified in the credentials in the leaves of the tree (including the leaf node information in KeyPackages used to add new members). Like HPKE public keys, signature public keys are represented as opaque values in a format defined by the ciphersuite's signature scheme. For ciphersuites using Ed25519 or Ed448 signature schemes, the public key is in the format specified in RFC8032. For ciphersuites using", "comments": "As discussed in the virtual interim 2022-12-16, it is possible that extensions might be tempted to re-use HPKE public keys created for MLS. Even within MLS, we have two different public-key encryption cases, Welcome and Commit (though these currently use different keys). Currently, for Commit encryption, the HPKE parameter is set to the group context. The content of is actually undefined for Welcome encryption (!) 
This PR makes the following changes: Define general and functions that populate the HPKE parameter with an encoded label and context Apply these to Commit and Welcome encryption: Commit: label=\"UpdatePathNode\", context= Welcome: label=\"Welcome\", context= Add an IANA registry of public-key encryption labels (as for signing) There is also a bit of reorganization of the Ciphersuites section to split it into three subsections.\nNote that any previous analysis of MLS had to take this particular reuse into account. This can't be the case for any arbitrary extensions reusing the key. So if MLS uses an extension that reuses the key, none of the existing analysis applies, which is a much bigger issue. This said, this reuse did decrease security -- it weakened FS in that a joiner had to hold on to their HPKE leaf key, which also allowed to decrypt the init secret from the welcome message, long after joining.\nI think this captures what we discussed during the interim. FWIW, I'm wondering if we didn't have a domain separation issue in the past when we only had one HPKE public key in KeyPackages and used it for both the Welcome and subsequently for UpdatePath as well. Either way, I think this is a good defense-in-depth mechanism.", "new_text": "produces a single ciphertext output from AEAD encryption (aligning with RFC5116), as opposed to a separate ciphertext and tag. Ciphersuites are represented with the CipherSuite type. The ciphersuites are defined in mls-ciphersuites. 5.1.1. HPKE public keys are opaque values in a format defined by the underlying protocol (see the Cryptographic Dependencies section of the HPKE specification for more information). Signature public keys are likewise represented as opaque values in a format defined by the ciphersuite's signature scheme. For ciphersuites using Ed25519 or Ed448 signature schemes, the public key is in the format specified in RFC8032. For ciphersuites using"}
{"id": "q-en-mls-protocol-cfdb0fc5a6861b8ea98b406a6c4e96fcba644db1f2e17094f052d6d10a0bf8c5", "old_text": "is represented as an encoded UncompressedPointRepresentation struct, as defined in RFC8446. The signatures used in this document are encoded as specified in RFC8446. In particular, ECDSA signatures are DER-encoded and EdDSA signatures are defined as the concatenation of \"r\" and \"s\" as", "comments": "As discussed in the virtual interim 2022-12-16, it is possible that extensions might be tempted to re-use HPKE public keys created for MLS. Even within MLS, we have two different public-key encryption cases, Welcome and Commit (though these currently use different keys). Currently, for Commit encryption, the HPKE parameter is set to the group context. The content of is actually undefined for Welcome encryption (!) This PR makes the following changes: Define general and functions that populte the HPKE parameter with an encoded label and context Apply these to Commit and Welcome encryption: Commit: label=\"UpdatePathNode\", context= Welcome: label=\"Welcome\", context= Add an IANA registry of public-key encryption labels (as for signing) There is also a bit of reorganization of the Ciphersuites section to split it into three subsections.\nNote that any previous analysis of MLS had to take this particular reuse into account. This can't be the case for any arbitrary extensions reusing the key. So if MLS uses an extension that reuses the key, none of the existing analysis applies, which is a much bigger issue. This said, this reuse did decrease security -- it weakened FS in that a joiner had to hold on to their HPKE leaf key, which also allowed to decrypt the init secret from the welcome message, long after joining.\nI think this captures what we discussed during the interim. 
FWIW, I'm wondering if we didn't have a domain separation issue in the past when we only had one HPKE public key in KeyPackages and used it for both the Welcome subsequently for UpdatePath as well. Either way, I think this is a good defense-in-depth mechanism.", "new_text": "is represented as an encoded UncompressedPointRepresentation struct, as defined in RFC8446. 5.1.2. The signature algorithm specified in the ciphersuite is the mandatory algorithm to be used for signatures in FramedContentAuthData and the tree signatures. It MUST be the same as the signature algorithm specified in the credentials in the leaves of the tree (including the leaf node information in KeyPackages used to add new members). The signatures used in this document are encoded as specified in RFC8446. In particular, ECDSA signatures are DER-encoded and EdDSA signatures are defined as the concatenation of \"r\" and \"s\" as"}
{"id": "q-en-mls-protocol-cfdb0fc5a6861b8ea98b406a6c4e96fcba644db1f2e17094f052d6d10a0bf8c5", "old_text": "construction, using a distinct label. To avoid collisions in these labels, an IANA registry is defined in mls-signature-labels. The ciphersuites are defined in section mls-ciphersuites. 5.2.", "comments": "As discussed in the virtual interim 2022-12-16, it is possible that extensions might be tempted to re-use HPKE public keys created for MLS. Even within MLS, we have two different public-key encryption cases, Welcome and Commit (though these currently use different keys). Currently, for Commit encryption, the HPKE parameter is set to the group context. The content of is actually undefined for Welcome encryption (!) This PR makes the following changes: Define general and functions that populte the HPKE parameter with an encoded label and context Apply these to Commit and Welcome encryption: Commit: label=\"UpdatePathNode\", context= Welcome: label=\"Welcome\", context= Add an IANA registry of public-key encryption labels (as for signing) There is also a bit of reorganization of the Ciphersuites section to split it into three subsections.\nNote that any previous analysis of MLS had to take this particular reuse into account. This can't be the case for any arbitrary extensions reusing the key. So if MLS uses an extension that reuses the key, none of the existing analysis applies, which is a much bigger issue. This said, this reuse did decrease security -- it weakened FS in that a joiner had to hold on to their HPKE leaf key, which also allowed to decrypt the init secret from the welcome message, long after joining.\nI think this captures what we discussed during the interim. FWIW, I'm wondering if we didn't have a domain separation issue in the past when we only had one HPKE public key in KeyPackages and used it for both the Welcome subsequently for UpdatePath as well. 
Either way, I think this is a good defense-in-depth mechanism.", "new_text": "construction, using a distinct label. To avoid collisions in these labels, an IANA registry is defined in mls-signature-labels. 5.1.3. As with signing, MLS includes a label and context in encryption operations to avoid confusion between ciphertexts produced for different purposes. Encryption and decryption including this label and context are done as follows: Where EncryptContext is specified as: And its fields set to: Here, the functions \"SealBase\" and \"OpenBase\" are defined in RFC9180, using the HPKE algorithms specified by the group's ciphersuite. If MLS extensions require HPKE encryption operations, they should re-use the EncryptWithLabel construction, using a distinct label. To avoid collisions in these labels, an IANA registry is defined in mls-public-key-encryption-labels. 5.2."}
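The EncryptContext serialization referenced in the record above can be sketched as follows, assuming the draft's QUIC-style variable-length vector encoding and the "MLS 1.0 " label prefix used for domain separation; this is an illustrative sketch, not a normative encoder:

```python
def encode_length(n: int) -> bytes:
    # Variable-length length prefix: the two high bits of the first
    # byte select a 1-, 2-, or 4-byte encoding (QUIC-style).
    if n < 0x40:
        return n.to_bytes(1, "big")
    if n < 0x4000:
        return (0x4000 | n).to_bytes(2, "big")
    if n < 0x40000000:
        return (0x80000000 | n).to_bytes(4, "big")
    raise ValueError("vector too long")

def encode_opaque(data: bytes) -> bytes:
    # opaque value<V>: length prefix followed by the raw bytes
    return encode_length(len(data)) + data

def encrypt_context(label: bytes, context: bytes) -> bytes:
    # EncryptContext = { opaque label<V>; opaque context<V>; },
    # with the label prefixed by "MLS 1.0 " for domain separation.
    return encode_opaque(b"MLS 1.0 " + label) + encode_opaque(context)
```

The resulting byte string is what EncryptWithLabel would pass as the HPKE `info` input, so ciphertexts produced under different labels (e.g., "UpdatePathNode" vs. "Welcome") cannot be confused.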
{"id": "q-en-mls-protocol-cfdb0fc5a6861b8ea98b406a6c4e96fcba644db1f2e17094f052d6d10a0bf8c5", "old_text": "new leaf nodes), with each ciphertext being the encryption to the respective resolution node. The HPKECiphertext values are computed as where \"node_public_key\" is the public key of the node that the path secret is being encrypted for, group_context is the provisional GroupContext object for the group, and the functions \"SetupBaseS\" and \"Seal\" are defined according to RFC9180. Decryption is performed in the corresponding way, using the private key of the resolution node. 7.7.", "comments": "As discussed in the virtual interim 2022-12-16, it is possible that extensions might be tempted to re-use HPKE public keys created for MLS. Even within MLS, we have two different public-key encryption cases, Welcome and Commit (though these currently use different keys). Currently, for Commit encryption, the HPKE parameter is set to the group context. The content of is actually undefined for Welcome encryption (!) This PR makes the following changes: Define general and functions that populte the HPKE parameter with an encoded label and context Apply these to Commit and Welcome encryption: Commit: label=\"UpdatePathNode\", context= Welcome: label=\"Welcome\", context= Add an IANA registry of public-key encryption labels (as for signing) There is also a bit of reorganization of the Ciphersuites section to split it into three subsections.\nNote that any previous analysis of MLS had to take this particular reuse into account. This can't be the case for any arbitrary extensions reusing the key. So if MLS uses an extension that reuses the key, none of the existing analysis applies, which is a much bigger issue. 
This said, this reuse did decrease security -- it weakened FS in that a joiner had to hold on to their HPKE leaf key, which also allowed to decrypt the init secret from the welcome message, long after joining.\nI think this captures what we discussed during the interim. FWIW, I'm wondering if we didn't have a domain separation issue in the past when we only had one HPKE public key in KeyPackages and used it for both the Welcome subsequently for UpdatePath as well. Either way, I think this is a good defense-in-depth mechanism.", "new_text": "new leaf nodes), with each ciphertext being the encryption to the respective resolution node. The HPKECiphertext values are encrypted and decrypted as follows: Here \"node_public_key\" is the public key of the node for which the path secret is encrypted, \"group_context\" is the provisional GroupContext object for the group, and the \"EncryptWithLabel\" function is as defined in public-key-encryption. 7.7."}
{"id": "q-en-mls-protocol-cfdb0fc5a6861b8ea98b406a6c4e96fcba644db1f2e17094f052d6d10a0bf8c5", "old_text": "match the one in the Welcome message, return an error. Decrypt the \"encrypted_group_secrets\" value with the algorithms indicated by the ciphersuite and the private key corresponding to \"init_key\" in the referenced KeyPackage. If a \"PreSharedKeyID\" is part of the GroupSecrets and the client is not in possession of the corresponding PSK, return an error.", "comments": "As discussed in the virtual interim 2022-12-16, it is possible that extensions might be tempted to re-use HPKE public keys created for MLS. Even within MLS, we have two different public-key encryption cases, Welcome and Commit (though these currently use different keys). Currently, for Commit encryption, the HPKE parameter is set to the group context. The content of is actually undefined for Welcome encryption (!) This PR makes the following changes: Define general and functions that populte the HPKE parameter with an encoded label and context Apply these to Commit and Welcome encryption: Commit: label=\"UpdatePathNode\", context= Welcome: label=\"Welcome\", context= Add an IANA registry of public-key encryption labels (as for signing) There is also a bit of reorganization of the Ciphersuites section to split it into three subsections.\nNote that any previous analysis of MLS had to take this particular reuse into account. This can't be the case for any arbitrary extensions reusing the key. So if MLS uses an extension that reuses the key, none of the existing analysis applies, which is a much bigger issue. This said, this reuse did decrease security -- it weakened FS in that a joiner had to hold on to their HPKE leaf key, which also allowed to decrypt the init secret from the welcome message, long after joining.\nI think this captures what we discussed during the interim. 
FWIW, I'm wondering if we didn't have a domain separation issue in the past when we only had one HPKE public key in KeyPackages and used it for both the Welcome subsequently for UpdatePath as well. Either way, I think this is a good defense-in-depth mechanism.", "new_text": "match the one in the Welcome message, return an error. Decrypt the \"encrypted_group_secrets\" value with the algorithms indicated by the ciphersuite and the private key \"init_key_priv\" corresponding to \"init_key\" in the referenced KeyPackage. If a \"PreSharedKeyID\" is part of the GroupSecrets and the client is not in possession of the corresponding PSK, return an error."}
{"id": "q-en-mls-protocol-cfdb0fc5a6861b8ea98b406a6c4e96fcba644db1f2e17094f052d6d10a0bf8c5", "old_text": "MLS Signature Labels (mls-signature-labels) MLS Exporter Labels (mls-exporter-labels) All of these registries should be under a heading of \"Messaging Layer", "comments": "As discussed in the virtual interim 2022-12-16, it is possible that extensions might be tempted to re-use HPKE public keys created for MLS. Even within MLS, we have two different public-key encryption cases, Welcome and Commit (though these currently use different keys). Currently, for Commit encryption, the HPKE parameter is set to the group context. The content of is actually undefined for Welcome encryption (!) This PR makes the following changes: Define general and functions that populte the HPKE parameter with an encoded label and context Apply these to Commit and Welcome encryption: Commit: label=\"UpdatePathNode\", context= Welcome: label=\"Welcome\", context= Add an IANA registry of public-key encryption labels (as for signing) There is also a bit of reorganization of the Ciphersuites section to split it into three subsections.\nNote that any previous analysis of MLS had to take this particular reuse into account. This can't be the case for any arbitrary extensions reusing the key. So if MLS uses an extension that reuses the key, none of the existing analysis applies, which is a much bigger issue. This said, this reuse did decrease security -- it weakened FS in that a joiner had to hold on to their HPKE leaf key, which also allowed to decrypt the init secret from the welcome message, long after joining.\nI think this captures what we discussed during the interim. FWIW, I'm wondering if we didn't have a domain separation issue in the past when we only had one HPKE public key in KeyPackages and used it for both the Welcome subsequently for UpdatePath as well. 
Either way, I think this is a good defense-in-depth mechanism.", "new_text": "MLS Signature Labels (mls-signature-labels) MLS Public Key Encryption Labels (mls-public-key-encryption-labels) MLS Exporter Labels (mls-exporter-labels) All of these registries should be under a heading of \"Messaging Layer
{"id": "q-en-mls-protocol-cfdb0fc5a6861b8ea98b406a6c4e96fcba644db1f2e17094f052d6d10a0bf8c5", "old_text": "17.6. The \"SignWithLabel\" function defined in ciphersuites avoids the risk of confusion between signatures in different contexts. Each context is assigned a distinct label that is incorporated into the signature. This registry records the labels defined in this document, and allows additional labels to be registered in case extensions add other types of signature using the same signature keys used elsewhere in MLS.", "comments": "As discussed in the virtual interim 2022-12-16, it is possible that extensions might be tempted to re-use HPKE public keys created for MLS. Even within MLS, we have two different public-key encryption cases, Welcome and Commit (though these currently use different keys). Currently, for Commit encryption, the HPKE parameter is set to the group context. The content of is actually undefined for Welcome encryption (!) This PR makes the following changes: Define general and functions that populte the HPKE parameter with an encoded label and context Apply these to Commit and Welcome encryption: Commit: label=\"UpdatePathNode\", context= Welcome: label=\"Welcome\", context= Add an IANA registry of public-key encryption labels (as for signing) There is also a bit of reorganization of the Ciphersuites section to split it into three subsections.\nNote that any previous analysis of MLS had to take this particular reuse into account. This can't be the case for any arbitrary extensions reusing the key. So if MLS uses an extension that reuses the key, none of the existing analysis applies, which is a much bigger issue. This said, this reuse did decrease security -- it weakened FS in that a joiner had to hold on to their HPKE leaf key, which also allowed to decrypt the init secret from the welcome message, long after joining.\nI think this captures what we discussed during the interim. 
FWIW, I'm wondering if we didn't have a domain separation issue in the past when we only had one HPKE public key in KeyPackages and used it for both the Welcome subsequently for UpdatePath as well. Either way, I think this is a good defense-in-depth mechanism.", "new_text": "17.6. The \"SignWithLabel\" function defined in signing avoids the risk of confusion between signatures in different contexts. Each context is assigned a distinct label that is incorporated into the signature. This registry records the labels defined in this document, and allows additional labels to be registered in case extensions add other types of signature using the same signature keys used elsewhere in MLS."}
{"id": "q-en-mls-protocol-cfdb0fc5a6861b8ea98b406a6c4e96fcba644db1f2e17094f052d6d10a0bf8c5", "old_text": "17.7. The exporter function defined in exporters allows applications to derive key material from the MLS key schedule. Like the TLS exporter RFC8446, the MLS exporter uses a label to distinguish between", "comments": "As discussed in the virtual interim 2022-12-16, it is possible that extensions might be tempted to re-use HPKE public keys created for MLS. Even within MLS, we have two different public-key encryption cases, Welcome and Commit (though these currently use different keys). Currently, for Commit encryption, the HPKE parameter is set to the group context. The content of is actually undefined for Welcome encryption (!) This PR makes the following changes: Define general and functions that populte the HPKE parameter with an encoded label and context Apply these to Commit and Welcome encryption: Commit: label=\"UpdatePathNode\", context= Welcome: label=\"Welcome\", context= Add an IANA registry of public-key encryption labels (as for signing) There is also a bit of reorganization of the Ciphersuites section to split it into three subsections.\nNote that any previous analysis of MLS had to take this particular reuse into account. This can't be the case for any arbitrary extensions reusing the key. So if MLS uses an extension that reuses the key, none of the existing analysis applies, which is a much bigger issue. This said, this reuse did decrease security -- it weakened FS in that a joiner had to hold on to their HPKE leaf key, which also allowed to decrypt the init secret from the welcome message, long after joining.\nI think this captures what we discussed during the interim. FWIW, I'm wondering if we didn't have a domain separation issue in the past when we only had one HPKE public key in KeyPackages and used it for both the Welcome subsequently for UpdatePath as well. Either way, I think this is a good defense-in-depth mechanism.", "new_text": "17.7. 
The \"EncryptWithLabel\" function defined in public-key-encryption avoids the risk of confusion between ciphertexts produced for different purposes in different contexts. Each context is assigned a distinct label that is incorporated into the encryption. This registry records the labels defined in this document, and allows additional labels to be registered in case extensions add other types of public-key encryption using the same HPKE keys used elsewhere in MLS. Template: Label: The string to be used as the \"Label\" parameter to \"EncryptWithLabel\" Recommended: Same as in Reference: The document where this label is defined Initial contents: 17.8. The exporter function defined in exporters allows applications to derive key material from the MLS key schedule. Like the TLS exporter RFC8446, the MLS exporter uses a label to distinguish between"}
{"id": "q-en-mls-protocol-cfdb0fc5a6861b8ea98b406a6c4e96fcba644db1f2e17094f052d6d10a0bf8c5", "old_text": "by applications, not the core protocol. The table below is intended only to show the column layout of the registry. 17.8. Specification Required RFC8126 registry requests are registered after a three-week review period on the MLS DEs' mailing list: mls-reg-", "comments": "As discussed in the virtual interim 2022-12-16, it is possible that extensions might be tempted to re-use HPKE public keys created for MLS. Even within MLS, we have two different public-key encryption cases, Welcome and Commit (though these currently use different keys). Currently, for Commit encryption, the HPKE parameter is set to the group context. The content of is actually undefined for Welcome encryption (!) This PR makes the following changes: Define general and functions that populte the HPKE parameter with an encoded label and context Apply these to Commit and Welcome encryption: Commit: label=\"UpdatePathNode\", context= Welcome: label=\"Welcome\", context= Add an IANA registry of public-key encryption labels (as for signing) There is also a bit of reorganization of the Ciphersuites section to split it into three subsections.\nNote that any previous analysis of MLS had to take this particular reuse into account. This can't be the case for any arbitrary extensions reusing the key. So if MLS uses an extension that reuses the key, none of the existing analysis applies, which is a much bigger issue. This said, this reuse did decrease security -- it weakened FS in that a joiner had to hold on to their HPKE leaf key, which also allowed to decrypt the init secret from the welcome message, long after joining.\nI think this captures what we discussed during the interim. FWIW, I'm wondering if we didn't have a domain separation issue in the past when we only had one HPKE public key in KeyPackages and used it for both the Welcome subsequently for UpdatePath as well. 
Either way, I think this is a good defense-in-depth mechanism.", "new_text": "by applications, not the core protocol. The table below is intended only to show the column layout of the registry. 17.9. Specification Required RFC8126 registry requests are registered after a three-week review period on the MLS DEs' mailing list: mls-reg-"}
{"id": "q-en-mls-protocol-cfdb0fc5a6861b8ea98b406a6c4e96fcba644db1f2e17094f052d6d10a0bf8c5", "old_text": "be perceived as creating a conflict of interest for a particular MLS DE, that MLS DE SHOULD defer to the judgment of the other MLS DEs. 17.9. This document registers the \"message/mls\" MIME media type in order to allow other protocols (e.g., HTTP RFC7540) to convey MLS messages.", "comments": "As discussed in the virtual interim 2022-12-16, it is possible that extensions might be tempted to re-use HPKE public keys created for MLS. Even within MLS, we have two different public-key encryption cases, Welcome and Commit (though these currently use different keys). Currently, for Commit encryption, the HPKE parameter is set to the group context. The content of is actually undefined for Welcome encryption (!) This PR makes the following changes: Define general and functions that populte the HPKE parameter with an encoded label and context Apply these to Commit and Welcome encryption: Commit: label=\"UpdatePathNode\", context= Welcome: label=\"Welcome\", context= Add an IANA registry of public-key encryption labels (as for signing) There is also a bit of reorganization of the Ciphersuites section to split it into three subsections.\nNote that any previous analysis of MLS had to take this particular reuse into account. This can't be the case for any arbitrary extensions reusing the key. So if MLS uses an extension that reuses the key, none of the existing analysis applies, which is a much bigger issue. This said, this reuse did decrease security -- it weakened FS in that a joiner had to hold on to their HPKE leaf key, which also allowed to decrypt the init secret from the welcome message, long after joining.\nI think this captures what we discussed during the interim. FWIW, I'm wondering if we didn't have a domain separation issue in the past when we only had one HPKE public key in KeyPackages and used it for both the Welcome subsequently for UpdatePath as well. 
Either way, I think this is a good defense-in-depth mechanism.", "new_text": "be perceived as creating a conflict of interest for a particular MLS DE, that MLS DE SHOULD defer to the judgment of the other MLS DEs. 17.10. This document registers the \"message/mls\" MIME media type in order to allow other protocols (e.g., HTTP RFC7540) to convey MLS messages."}
{"id": "q-en-mls-protocol-7555d85cf35b20dc253ccb4cd49a9319a2c7047ad2c27afae2c6923871e4d6f5", "old_text": "that the epoch secret is confidential to the members in the current epoch. For each Commit that adds one or more members to the group, there is a single corresponding Welcome message. The Welcome message provides all the new members with the information they need to initialize their views of the key schedule and ratchet tree, so that these views align with the views held by other members of the group in this epoch.", "comments": "This updates the language around how certificate chains are constructed: We allow certificates to be omitted if the client knows any recipients will already possess the omitted certificate. We allow certificate chains to be out-of-order and potentially contain unused certificates. This is nice because it means less data needs to be sent on the wire, and also provides a method for root certificates to be rotated gracefully (as clients can provide chains for both old and new roots).\nNAME it might be useful to also add this language from RFC 8446. It would make the process exactly the same as TLS which makes implementation easier. >The sender's certificate MUST come in the first >CertificateEntry in the list. Each following certificate SHOULD >directly certify the one immediately preceding it. Because >certificate validation requires that trust anchors be distributed >independently, a certificate that specifies a trust anchor MAY be >omitted from the chain, provided that supported peers are known to >possess any omitted certificates.\nI think that is fine and is consistent with RFC8446. Don't we already have a better mechanism for rotating certificates? As soon as a member has a new certificate chain, it can update its LeafNode in an UpdatePath. This semantics of X.509 certificate bundles are already a bit fuzzy. 
Such a change will make validation more complicated and more error prone, when this seems completely unnecessary.\nThe scenario that this would address is: say that a root CA is compromised and needs to be replaced. Distributing the new CA certificate likely requires a software release, which not everyone will install simultaneously. During the rollout, some users will know only the old compromised CA and some users will know only the new CA and not the compromised one. To preserve interop during the rollout, users need to include two intermediates in their certificate chain: an intermediate signed by the old root that signs their leaf, and an intermediate signed by the new root that signs their leaf. (In the exact scenario I'm thinking of, the intermediate private key is held by the user, so it would not be compromised with the root. Though there may be other arrangements where this flexibility helps.) In terms of implementation complexity: the X509 chain validation libraries I'm aware of already support path building because it's necessary for TLS, so I would expect those could be used\nThis is actually the reason why I think it makes sense to adopt the same mechanism. For example, if you use OpenSSL to validate the chain it does not take a chain as input. It takes a collection of certificates, a set of trusted root certs, and the leaf you want to validate. I believe not making this change has a higher complexity.\nI'm sorry I missed this previously, but this requirement was a source of trouble in TLS and was removed in RFC 8446, which allows extra certificates in the middle. Was there a conscious decision to be more strict here.\nI don't recall this ever being discussed. I think it was just added and nobody ever objected.\nNAME can you elaborate a little bit? The way I read it the requirement is just that there is a valid signature chain, i.e. 
that one certificate signs the next and so on.\nNAME - I deliberately reverted to the original TLS level of strictness here, in hopes that in starting over with a new ecosystem we could reset things to a cleaner state. Do you think there are legitimate reasons for violating the strict rule, or is it just a common misconfiguration in TLS servers? I would not really be inclined to accommodate the latter. NAME - I believe what NAME is envisioning are cases where there are multiple valid chains, say via old and new versions of an intermediate certificate. So you might do something like provide a \"chain\" of the form (leaf, oldintermediate, newintermediate), with the idea that the client could verify using whichever one makes it happy.\nNAME take a look at the comment I left on the PR. I can see your point about starting over as being valid, but one thing that also caught my eye in the TLS RFC was language around allowing a root CA to be left out of the chain since those could potentially already exist in a local trust store.\nI see. Thanks NAME for the clarification. Maybe we can phrase it in such a way that whatever the credential includes (added intermediary certificates or left-out root certificates) in the end the local client MUST be able to create a continuous certificate chain from the leaf to the root. Do we need to define how the client does that? Since we're already pushing everything else related to authentication into the AS, I'm inclined to say no.", "new_text": "that the epoch secret is confidential to the members in the current epoch. For each Commit that adds one or more members to the group, there are one or more corresponding Welcome messages. Each Welcome message provides new members with the information they need to initialize their views of the key schedule and ratchet tree, so that these views align with the views held by other members of the group in this epoch."}
{"id": "q-en-mls-protocol-7555d85cf35b20dc253ccb4cd49a9319a2c7047ad2c27afae2c6923871e4d6f5", "old_text": "additional information. The format of the encoded identity is defined by the application. For an X.509 credential, each entry in the chain represents a single DER-encoded X.509 certificate. The chain is ordered such that the first entry (chain[0]) is the end-entity certificate and each subsequent certificate in the chain MUST be the issuer of the previous certificate. The public key encoded in the \"subjectPublicKeyInfo\" of the end-entity certificate MUST be identical to the \"signature_key\" in the LeafNode containing this credential. 5.3.1.", "comments": "This updates the language around how certificate chains are constructed: We allow certificates to be omitted if the client knows any recipients will already possess the omitted certificate. We allow certificate chains to be out-of-order and potentially contain unused certificates. This is nice because it means less data needs to be sent on the wire, and also provides a method for root certificates to be rotated gracefully (as clients can provide chains for both old and new roots).\nNAME it might be useful to also add this language from RFC 8446. It would make the process exactly the same as TLS which makes implementation easier. >The sender's certificate MUST come in the first >CertificateEntry in the list. Each following certificate SHOULD >directly certify the one immediately preceding it. Because >certificate validation requires that trust anchors be distributed >independently, a certificate that specifies a trust anchor MAY be >omitted from the chain, provided that supported peers are known to >possess any omitted certificates.\nI think that is fine and is consistent with RFC8446. Don't we already have a better mechanism for rotating certificates? As soon as a member has a new certificate chain, it can update its LeafNode in an UpdatePath. 
This semantics of X.509 certificate bundles are already a bit fuzzy. Such a change will make validation more complicated and more error prone, when this seems completely unnecessary.\nThe scenario that this would address is: say that a root CA is compromised and needs to be replaced. Distributing the new CA certificate likely requires a software release, which not everyone will install simultaneously. During the rollout, some users will know only the old compromised CA and some users will know only the new CA and not the compromised one. To preserve interop during the rollout, users need to include two intermediates in their certificate chain: an intermediate signed by the old root that signs their leaf, and an intermediate signed by the new root that signs their leaf. (In the exact scenario I'm thinking of, the intermediate private key is held by the user, so it would not be compromised with the root. Though there may be other arrangements where this flexibility helps.) In terms of implementation complexity: the X509 chain validation libraries I'm aware of already support path building because it's necessary for TLS, so I would expect those could be used\nThis is actually the reason why I think it makes sense to adopt the same mechanism. For example, if you use OpenSSL to validate the chain it does not take a chain as input. It takes a collection of certificates, a set of trusted root certs, and the leaf you want to validate. I believe not making this change has a higher complexity.\nI'm sorry I missed this previously, but this requirement was a source of trouble in TLS and was removed in RFC 8446, which allows extra certificates in the middle. Was there a conscious decision to be more strict here.\nI don't recall this ever being discussed. I think it was just added and nobody ever objected.\nNAME can you elaborate a little bit? The way I read it the requirement is just that there is a valid signature chain, i.e. 
that one certificate signs the next and so on.\nNAME - I deliberately reverted to the original TLS level of strictness here, in hopes that in starting over with a new ecosystem we could reset things to a cleaner state. Do you think there are legitimate reasons for violating the strict rule, or is it just a common misconfiguration in TLS servers? I would not really be inclined to accommodate the latter. NAME - I believe what NAME is envisioning are cases where there are multiple valid chains, say via old and new versions of an intermediate certificate. So you might do something like provide a \"chain\" of the form (leaf, oldintermediate, newintermediate), with the idea that the client could verify using whichever one makes it happy.\nNAME take a look at the comment I left on the PR. I can see your point about starting over as being valid, but one thing that also caught my eye in the TLS RFC was language around allowing a root CA to be left out of the chain since those could potentially already exist in a local trust store.\nI see. Thanks NAME for the clarification. Maybe we can phrase it in such a way that whatever the credential includes (added intermediary certificates or left-out root certificates) in the end the local client MUST be able to create a continuous certificate chain from the leaf to the root. Do we need to define how the client does that? Since we're already pushing everything else related to authentication into the AS, I'm inclined to say no.", "new_text": "additional information. The format of the encoded identity is defined by the application. For an X.509 credential, each entry in the \"certificates\" field represents a single DER-encoded X.509 certificate. The chain is ordered such that the first entry (chain[0]) is the end-entity certificate. The public key encoded in the \"subjectPublicKeyInfo\" of the end-entity certificate MUST be identical to the \"signature_key\" in the LeafNode containing this credential. 
A chain MAY omit any non-leaf certificates that supported peers are known to already possess. 5.3.1."}
{"id": "q-en-multipart-ct-f3041d71e0a20de680339b090ae14c354ac857d0166e32fb533bf6c9aeff7eb0", "old_text": "on multiple already existing media types. Applications using the application/multipart-core Content-Format define the internal structure of the application/multipart-core representation. For example, one way to structure the sub-types specific to an application/multipart-core container is to always include them at the", "comments": "\u2026oundaries defined by the CDDL\nWe should add something like: \" within the syntactical boundaries defined by the CDDL in Figure 1.\"", "new_text": "on multiple already existing media types. Applications using the application/multipart-core Content-Format define the semantics as well as the internal structure of the application/multipart-core representation according to the syntax described by the CDDL in mct-cddl. For example, one way to structure the sub-types specific to an application/multipart-core container is to always include them at the"}
{"id": "q-en-multipath-fa676345baae2c66e20b61d5eed74518130e63633b8c531bbd5aba681d6e4490", "old_text": "requires negotiation between the two endpoints using a new transport parameter, as specified in nego. This proposal supports the negotiation of either the use of one packet number space for all paths or the use of separate packet number spaces per path. While both approaches are supported by the specification in this version of the document, the intention for the final publication of a multipath extension for QUIC is to choose one option in order to avoid incompatibility. More evaluation and implementation experience is needed to select one approach before final publication. Some discussion about pros and cons can be found here: https://github.com/mirjak/draft-lmbdhk-quic- multipath/blob/master/presentations/PacketNumberSpace_s.pdf As currently defined in this version of the draft the use of multiple packet number spaces requires the use of connection IDs is both directions. Today's deployments often only use destination connection ID when sending packets from the client to the server as this addresses the most important use cases for migration, like NAT rebinding or mobility events. Further discussion and work is required to evaluate if the use of multiple packet number spaces could be supported as well when the connection ID is only present in one direction. This proposal does not cover address discovery and management. Addresses and the actual decision process to setup or tear down paths", "comments": "This derives from the discussions on issue\nThe commit applies the changes suggested in the previous reviews. With those changes, I think we have a good basis for discussing a unified solution in the working group.\nThanks NAME for the review. I have applied the editorial changes. You are asking for something more regarding the \"specific logic\" required when supporting zero-length CID. 
There are in fact two pieces to that: logic at the receiver, i.e., the node that chose to receive packets with zero-length CID; and logic at the sender, i.e., the node that accepts to send data on multiple paths towards a node that uses zero-length CID. This is explained in detail in the section \"Using Zero-Length connection ID\", so I guess what we need is a reference to that section.\nComment by NAME (on PR ): I think we should expand this section a bit as we found getting the delay right was non-trivial. QUIC time-stamp would work for sure. But if one decides not to use QUIC timestamp, we probably want to explain what is needed to be done here.\nI have submitted PR for this one.\nClosing as got merged.\nIn section 3.1, we wrote When the multipath option is negotiated, clients that want to use an additional path MUST first initiate the Address Validation procedure with PATHCHALLENGE and PATHRESPONSE frames described in of ]. After receiving packets from the client on the new paths, the servers MAY in turn attempt to validate these paths using the same mechanisms. With multipath, we cannot simply write that the server MAY validate the new path. There is a risk of a malicious client that operates as follows: client creates path from its own address, A. Path is validated by server client starts long download from server client creates a second path to server using a spoofed address, B client starts address validation for B but does not care about the response from the server client stops acknowledging packets over the first path to force the server to move packets to the second one and then opportunistically acknowledges packets that should have been sent by the server over that second path (SPNS would probably make such an attack simpler than MPNS) We probably need to start working on the security considerations section of the draft.\nThis is already covered in the QUIC threat model. 
The server is supposed to validate new paths by sending a \"path challenge\" on that path, then waiting for a challenge response. If the client spoofs a path, it will never receive the challenge, and thus the server will never receive the response and will not validate the path. See section 8.2 of RFC 9000, path validation. Security section 21.1.3. Connection Migration states that \"Path validation () establishes that a peer is both willing and able to receive packets sent on a particular path.\"\nSo yes, the current text should be \"the servers MUST in turn attempt to validate these paths using the same mechanisms, as specified in {{ Section 8.2 of RFC9000 }}.\"\nNo, this should not be a MUST. It is true that the path MUST be validated before it can be used. However, just because the server receives a new packet on a path, that doesn't automatically mean that the server has to start path validation. It could e.g. also decide to simply withdraw that packet.\nActually, this is also not right. This is what RFC9000 says (section 9): So I guess we should maybe not use normative language here but refer to section 9 in RFC9000 instead...?\nJust submitted PR for this issue.\nThis draft initially originates from a merging effort of previous Multipath proposals. While many points were found to be common between them, there remains one design point that still requires consensus: the number of packet number spaces that a Multipath QUIC connection should support (i.e., one for the whole connection vs. one per path). The current draft enables experimentation with both variants, but in the final version we will certainly need to choose between one of the versions.\nThe main issues mentioned so far: Efficiency. One number space per path should be more efficient, because implementations can directly reuse the loss-recovery logic specified for QUIC ACK size. Single space leads to lots of out of order delivery, which causes ACK sizes to grow Simplicity of code. 
Single space can be implemented without adding many new code paths to uni-path QUIC Shorter header. Single space works well with NULL length CID The picoquic implementation shows that \"efficiency\" and \"ack size\" issues of single space implementations can be mitigated. However, that required significant improvements in the code: to obtain good loss efficiency, picoquic remembers not just the path on which a packet was sent, but also the order of the packet on this path, i.e. a \"virtual sequence number\". The loss detection logic then operates on that virtual sequence number. to contain ACK size, picoquic implements a prioritization logic to select the most important ranges in an ACK, avoid acking the same range more than 3 or 4 times, and keep knowledge of already acknowledged ranges so range coalescing works. I think these improvements are good in general, and I will keep them in the implementations whether we go for single space or not. The virtual sequence number is for example useful if the CID changes for reasons not related to path changes in multiple number space variants. It is also useful in unipath variants to avoid interference between sequence numbers used in probes and the RACK logic. The ACK size improvements do reduce the size of ACKs in presence of out of order delivery, e.g., if the network is doing some kind of internal load balancing. On the other hand, the improvements are somewhat complex, would need to be described in separate drafts, and pretty much contradict the \"simplicity of code\" argument. So we are left with the \"Null length CID\" issue. I see four cases: Client CID Priority ----------------------- long Used by many implementations NULL Preferred configuration of many big deployments long Rarely used, server load balancing does not work NULL Only mentioned in some P2P deployments The point here is that it is somewhat hard to deploy a large server with NULL CID and use server load balancing. 
This configuration is mostly favored by planned P2P deployments.\nThe big debate is for the configuration with NULL CID on client, long CID on server. The packets from server to client do not carry a CID, and only the last bytes of the sequence number. The client will need some logic to infer the full sequence number before decrypting the packet. The client could maybe use the incoming 5 tuple as part of the logic, but it is not obvious. It is much simpler to assume a direct map from destination CID to number space. That means, if a peer uses a NULL CID, all packets sent to that peer are in the same number space.\nRevised table: Client CID What -- -- long Multiple number space NULL Multiple number spaces on client side (one per CID), single space on server side long Multiple number spaces on server side (one per CID), single space on client side NULL single number space on each side If a node advertises both NULL CID and multipath support, they SHOULD have logic to contain the size of ACK. If a node engages in multipath with a NULL CID peer, they SHOULD have special logic to make loss recovery work well.\nI think the above points to a possible \"unified solution\": if multipath negotiated, tie number spaces to destination CID, use default (N=0) if NULL CID. use multipath ack; in multipath ACK, identify number space by sequence number of DCID in received packets. Default that number to zero if received with NULL CID. in ABANDON path, identify path either by DCID of received packets, SCID of sent packet, or \"this path\" (sending path) if CID is NULL.\nI like the proposal of the \"unified solution\". I think the elegance lies in the fact that it allows us to automatically cover all four cases listed above. The previous dilemma for me was that on one hand we have some use cases where we need to support more than two paths and separate PN makes the job easier, but on the other hand, I think we should not ignore the NULL CID use cases as it is also important. 
Now, with this proposal, a big part of the problem is solved. The rest of the challenge is to make sure single PN remains efficient in terms of ACK and loss recovery. On that part, we plan to do an A/B test and would love to share the results when we get them. There is one more problem as pointed out in issue , when we want to take hardware offloads into account. In such a case, we may still need single PN for long server CID. However, if hardware supports nonce modification, this problem can be addressed with the proposed \"unified solution\".\nIf the API does not support 96 bit sequence numbers, it should always be possible to create an encryption context per number space, using Key=current key and ID = current-ID + CID sequence number. Of course, that forces creation of multiple contexts, and that makes key rotation a bit harder to manage. But still, that should be possible.\nThanks for the summary NAME I think one point is missing in your list which is related to issue . Use of a single packet number space might not support ECN. Regarding the unified solution: I think what you actually say is that we would specify both solutions and everybody would need to implement both logics. At least regarding the \"simplicity of code\" argument, that would be the worst choice. If we can make the multiple packet number spaces solution work with one-sided CIDs, I'm tending toward such an approach. Use of multiple packet number spaces avoids ambiguity/needed \"smartness\" in ACK handling and packet scheduling which, as you say above, can make the implementation more complex and, moreover, wrong decisions may have large impact on efficiency (both local processing and on-the-wire). I don't think we want to leave these things to each implementation individually.\nThe summary Christian made above about design comparison sounds indeed quite accurate. 
Besides ECN, my other concern about single packet number is that it requires cleverness from the receiver side if you want to be performant in some scenarios. At the sender-side, you need to consider a path-relative packet number threshold instead of an absolute one to avoid spurious losses. Just a point I think we did not mention yet is that there can be some interactions between Multipath out-of-order number delivery and incoming packet number duplicate detection. This requires maintaining some state at the receiver side, as described by URL With single packet number space, the receiver should take extra care when updating the \"minimum packet number below which all packets are immediately dropped\". Otherwise, in the presence of paths with very different latencies, the receiver might end up discarding packets from a (slower) path. I'm also preferring the multiple packet number spaces solution for the above reasons. I'm not against thinking about a \"unified\" solution (the proposal sounds interesting), but I wonder how much complexity this would add compared to requiring end hosts to use one-byte CIDs.\nI think the issue is not really so much \"single vs multiple number space\" as \"support for multipath and NULL CID\". As noted by Mirja, there is an implementation cost there. My take is: If a node uses NULL CID and proposes support of multipath, then that node MUST implement code to deal with size of acknowledgements. If a node sees a peer using NULL CID and supporting multipath, then that node MUST either use only one path at a time or implement code to deal with impact on loss detection due to out of order arrivals, and impact on congestion control including ambiguity of ECN signals. Then add sections on what it means to deal with the size of acknowledgements, out of order arrivals, and congestion control. I think this approach ends up with the correct compromises: It keeps the main case simple. 
Nodes that rely on multipath just use long CID and one number space per CID. Nodes that use long CID never have to implement the ACK length minimization algorithms. Nodes that see a peer using NULL CID have a variety of implementation choices, e.g., sending on just one path at a time, sending on mostly one path at a time in \"make or break\" transitions, or implementing sophisticated algorithms. If we do that, we can get rid of the single vs multiple space discussion, and end up with a single solution addressing all use cases.\nLooking at all the discussions here and in other issues such as ECN, I think that we should try to write two different versions of section 7: one covering all the aspects of single packet number space (RTT estimation, retransmissions, ECN, handling of CIDs, ...) one covering all the aspects of multiple packet number space (RTT estimation, retransmissions, ECN, handling of CIDs, ...) There would be some overlap between these two sections and also some differences that would become clear as we specify them. At the end of this writing, we'll know whether it is possible to support/unify both or we need to recommend a single one. The other parts of the document are almost independent of that and can evolve in parallel with these two sections. However, I don't think that such a change would be possible by Monday.\nI don't think we should rush changes before we have agreed on the final vision.\nIt might be helpful to have these options as PRs (without merging them), so people can understand all details of each approach.\nI agree with NAME that supporting multi-path with null CIDs is a more fundamental issue than the efficiency comparison between single PN and separate PN, as it would ultimately impact the application scope of multipath QUIC. 
But indeed, we might want to implement the proposed solution first and then decide if we want to adopt such a unified approach.\nSince NAME prodded me, we now have a PR for the \"unified\" proposal.\nI totally agree with Christian that the issue is about \"support for multipath and NULL CID\", and the solution that Christian suggested looks really great! It both takes advantage of multiple spaces and supports NULL CID users without affecting the efficiency of ACK arrangements. Besides, the solution is more convenient for implementations, because if both endpoints use non-zero-length CIDs, endpoints only need to support multiple spaces, and if one of the endpoints uses NULL CID, it could use a single PN space in one direction and could support NULL CID and multipath at the same time.\nThe very high level summary of the IETF-113 discussion is that there is interest and likely support for the unified solution (review minutes for further details).", "new_text": "requires negotiation between the two endpoints using a new transport parameter, as specified in nego. This extension uses multiple packet number spaces. When multipath is negotiated, each destination connection ID is linked to a separate packet number space. Using multiple packet number spaces enables direct use of the loss recovery and congestion control mechanisms defined in QUIC-RECOVERY. Some deployments of QUIC use zero-length connection IDs. When a node selects to use zero-length connection IDs, it is not possible to use different connection IDs for distinguishing packets sent to that node over different paths. This extension also specifies a way to use zero-length CID by using the same packet number space on all paths. However, when using the same packet number space on multiple paths, out of order delivery is likely. This causes inflation of the number of acknowledgement ranges and therefore of the size of ACK frames. 
Senders that accept to use a single number space on multiple paths when sending to a node using zero-length CID need to take special care to minimize the impact of multipath delivery on loss detection, congestion control, and ECN handling. This proposal specifies algorithms for controlling the size of acknowledgement packets and ECN handling in Sections using-zero-length and ecn-handling. This proposal does not cover address discovery and management. Addresses and the actual decision process to setup or tear down paths
{"id": "q-en-multipath-fa676345baae2c66e20b61d5eed74518130e63633b8c531bbd5aba681d6e4490", "old_text": "param_value_definition : If for any one of the endpoints the parameter is absent or set to 0, or if the two endpoints select incompatible values, one proposing 0x1 and the other proposing 0x2, the endpoints MUST fallback to QUIC- TRANSPORT with single path and MUST NOT use any frame or mechanism defined in this document. If an endpoint proposes the value 0x3, the value proposed by the other is accepted. If both endpoints propose the value 0x3, the value 0x2 is negotiated. If endpoint receives unexpected value for the transport parameter \"enable_multipath\", it MUST treat this as a connection error of type", "comments": "This derives from the discussions on issue\nThe commit applies the changes suggested in the previous reviews. With those changes, I think we have a good basis for discussing a unified solution in the working group.\nThanks NAME for the review. I have applied the editorial changes. You are asking for something more regarding the \"specific logic\" required when supporting zero-length CID. There are in fact two pieces to that: logic at the receiver, i.e., the node that chose to receive packets with zero-length CID; and logic at the sender, i.e., the node that accepts to send data on multiple paths towards a node that uses zero-length CID. This is explained in details in the section \"Using Zero-Length connection ID\", so I guess what we need is a reference to that section.\nComment by NAME (on PR ): I think we expand this section a bit as we found getting the delay right was non-trivial. QUIC time-stamp would work for sure. 
But if one decides not to use QUIC timestamp, we probably want to explain what needs to be done here.\nI have submitted PR for this one.\nClosing as got merged.\nIn section 3.1, we wrote When the multipath option is negotiated, clients that want to use an additional path MUST first initiate the Address Validation procedure with PATHCHALLENGE and PATHRESPONSE frames described in of ]. After receiving packets from the client on the new paths, the servers MAY in turn attempt to validate these paths using the same mechanisms. With multipath, we cannot simply write that the server MAY validate the new path. There is a risk of a malicious client that operates as follows: client creates path from its own address, A. Path is validated by the server client starts long download from server client creates a second path to server using a spoofed address, B client starts address validation for B but does not care about the response from the server client stops acknowledging packets over the first path to force the server to move packets to the second one and then opportunistically acknowledges packets that should have been sent by the server over that second path (SPNS would probably make such an attack simpler than MPNS) We probably need to start working on the security considerations section of the draft.\nThis is already covered in the QUIC threat model. The server is supposed to validate new paths by sending a "path challenge" on that path, then waiting for a challenge response. If the client spoofs a path, it will never receive the challenge, and thus the server will never receive the response and will not validate the path. See section 8.2 of RFC 9000, path validation. Security section 21.1.3. 
Connection Migration states that "Path validation () establishes that a peer is both willing and able to receive packets sent on a particular path."\nSo yes, the current text should be \"the servers MUST in turn attempt to validate these paths using the same mechanisms, as specified in {{ Section 8.2 of RFC9000 }}.\"\nNo, this should not be a MUST. It is true that the path MUST be validated before it can be used. However, just because the server receives a new packet on a path, that doesn't automatically mean that the server has to start path validation. It could e.g. also decide to simply withdraw that packet.\nActually, this is also not right. This is what RFC9000 says (section 9): So I guess we should maybe not use normative language here but refer to section 9 in RFC9000 instead...?\nJust submitted PR for this issue.\nThis draft initially originates from a merging effort of previous Multipath proposals. While many points were found to be common between them, there remains one design point that still requires consensus: the number of packet number spaces that a Multipath QUIC connection should support (i.e., one for the whole connection vs. one per path). The current draft enables experimentation with both variants, but in the final version we will certainly need to choose one of the versions.\nThe main issues mentioned so far: Efficiency. One number space per path should be more efficient, because implementations can directly reuse the loss-recovery logic specified for QUIC ACK size. Single space leads to lots of out of order delivery, which causes ACK sizes to grow Simplicity of code. Single space can be implemented without adding many new code paths to uni-path QUIC Shorter header. Single space works well with NULL length CID The picoquic implementation shows that \"efficiency\" and \"ack size\" issues of single space implementations can be mitigated. 
However, that required significant improvements in the code: to obtain good loss efficiency, picoquic remembers not just the path on which a packet was sent, but also the order of the packet on this path, i.e. a \"virtual sequence number\". The loss detection logic then operates on that virtual sequence number. to contain ACK size, picoquic implements a prioritization logic to select the most important ranges in an ACK, avoid acking the same range more than 3 or 4 times, and keep knowledge of already acknowledged ranges so range coalescing works. I think these improvements are good in general, and I will keep them in the implementations whether we go for single space or not. The virtual sequence number is for example useful if the CID changes for reasons not related to path changes in multiple number space variants. It is also useful in unipath variants to avoid interference between sequence numbers used in probes and the RACK logic. The ACK size improvements do reduce the size of ACKs in the presence of out of order delivery, e.g., if the network is doing some kind of internal load balancing. On the other hand, the improvements are somewhat complex, would need to be described in separate drafts, and pretty much contradict the \"simplicity of code\" argument. So we are left with the \"Null length CID\" issue. I see four cases: Client CID Priority ----------------------- long Used by many implementations NULL Preferred configuration of many big deployments long Rarely used, server load balancing does not work NULL Only mentioned in some P2P deployments The point here is that it is somewhat hard to deploy a large server with NULL CID and use server load balancing. This configuration is mostly favored by planned P2P deployments.\nThe big debate is for the configuration with NULL CID on client, long CID on server. The packets from server to client do not carry a CID, and only the last bytes of the sequence number. 
The client will need some logic to infer the full sequence number before decrypting the packet. The client could maybe use the incoming 5 tuple as part of the logic, but it is not obvious. It is much simpler to assume a direct map from destination CID to number space. That means, if a peer uses a NULL CID, all packets sent to that peer are in the same number space.\nRevised table: Client CID What -- -- long Multiple number space NULL Multiple number spaces on client side (one per CID), single space on server side long Multiple number spaces on server side (one per CID), single space on client side NULL single number space on each side If a node advertises both NULL CID and multipath support, it SHOULD have logic to contain the size of ACK. If a node engages in multipath with a NULL CID peer, it SHOULD have special logic to make loss recovery work well.\nI think the above points to a possible \"unified solution\": if multipath negotiated, tie number spaces to destination CID, use default (N=0) if NULL CID. use multipath ack; in multipath ACK, identify number space by sequence number of DCID in received packets. Default that number to zero if received with NULL CID. in ABANDON path, identify path either by DCID of received packets, SCID of sent packet, or \"this path\" (sending path) if CID is NULL.\nI like the proposal of the \"unified solution\". I think the elegance lies in the fact that it allows us to automatically cover all four cases listed above. The previous dilemma for me was that on one hand we have some use cases where we need to support more than two paths and separate PN makes the job easier, but on the other hand, I think we should not ignore the NULL CID use cases as it is also important. Now, with this proposal, a big part of the problem is solved. The rest of the challenge is to make sure single PN remains efficient in terms of ACK and loss recovery. On that part, we plan to do an A/B test and would love to share the results when we get them. 
There is one more problem as pointed out in issue , when we want to take hardware offloads into account. In such a case, we may still need single PN for long server CID. However, if hardware supports nonce modification, this problem can be addressed with the proposed \"unified solution\".\nIf the API does not support 96 bit sequence numbers, it should always be possible to create an encryption context per number space, using Key=current key and ID = current-ID + CID sequence number. Of course, that forces creation of multiple contexts, and that makes key rotation a bit harder to manage. But still, that should be possible.\nThanks for the summary NAME I think one point is missing in your list which is related to issue . Use of a single packet number space might not support ECN. Regarding the unified solution: I think what you actually say is that we would specify both solutions and everybody would need to implement both logics. At least regarding the \"simplicity of code\" argument, that would be the worst choice. If we can make the multiple packet number spaces solution work with one-sided CIDs, I'm tending toward such an approach. Use of multiple packet number spaces avoids ambiguity/needed \"smartness\" in ACK handling and packet scheduling which, as you say above, can make the implementation more complex and, moreover, wrong decisions may have large impact on efficiency (both local processing and on-the-wire). I don't think we want to leave these things to each implementation individually.\nThe summary Christian made above about design comparison sounds indeed quite accurate. Besides ECN, my other concern about single packet number is that it requires cleverness from the receiver side if you want to be performant in some scenarios. At the sender-side, you need to consider a path-relative packet number threshold instead of an absolute one to avoid spurious losses. 
Just a point I think we did not mention yet is that there can be some interactions between Multipath out-of-order number delivery and incoming packet number duplicate detection. This requires maintaining some state at the receiver side, as described by URL With single packet number space, the receiver should take extra care when updating the \"minimum packet number below which all packets are immediately dropped\". Otherwise, in the presence of paths with very different latencies, the receiver might end up discarding packets from a (slower) path. I'm also preferring the multiple packet number spaces solution for the above reasons.\nI'm not against thinking about a \"unified\" solution (the proposal sounds interesting), but I wonder how much complexity this would add compared to requiring end hosts to use one-byte CIDs.\nI think the issue is not really so much \"single vs multiple number space\" as \"support for multipath and NULL CID\". As noted by Mirja, there is an implementation cost there. My take is: If a node uses NULL CID and proposes support of multipath, then that node MUST implement code to deal with size of acknowledgements. If a node sees a peer using NULL CID and supporting multipath, then that node MUST either use only one path at a time or implement code to deal with impact on loss detection due to out of order arrivals, and impact on congestion control including ambiguity of ECN signals. Then add sections on what it means to deal with the size of acknowledgements, out of order arrivals, and congestion control. I think this approach ends up with the correct compromises: It keeps the main case simple. Nodes that rely on multipath just use long CID and one number space per CID. Nodes that use long CID never have to implement the ACK length minimization algorithms. 
Nodes that see a peer using NULL CID have a variety of implementation choices, e.g., sending on just one path at a time, sending on mostly one path at a time in \"make or break\" transitions, or implementing sophisticated algorithms. If we do that, we can get rid of the single vs multiple space discussion, and end up with a single solution addressing all use cases.\nLooking at all the discussions here and in other issues such as ECN, I think that we should try to write two different versions of section 7: one covering all the aspects of single packet number space (RTT estimation, retransmissions, ECN, handling of CIDs, ...) one covering all the aspects of multiple packet number space (RTT estimation, retransmissions, ECN, handling of CIDs, ...) There would be some overlap between these two sections and also some differences that would become clear as we specify them. At the end of this writing, we'll know whether it is possible to support/unify both or we need to recommend a single one. The other parts of the document are almost independent of that and can evolve in parallel with these two sections. However, I don't think that such a change would be possible by Monday.\nI don't think we should rush changes before we have agreed on the final vision.\nIt might be helpful to have these options as PRs (without merging them), so people can understand all details of each approach.\nI agree with NAME that supporting multi-path with null CIDs is a more fundamental issue than the efficiency comparison between single PN and separate PN, as it would ultimately impact the application scope of multipath QUIC. But indeed, we might want to implement the proposed solution first and then decide if we want to adopt such a unified approach.\nSince NAME prodded me, we now have a PR for the \"unified\" proposal.\nI totally agree with Christian that the issue is about \"support for multipath and NULL CID\", and the solution that Christian suggested looks really great! 
It both takes advantage of multiple spaces, and supports NULL CID users without affecting the efficiency of ACK arrangements. Besides, the solution is more convenient for implementations, because if both endpoints use non-zero-length CIDs, endpoints only need to support multiple spaces, and if one of the endpoints uses NULL CID, it could use a single PN space in one direction and could support NULL CID and multipath at the same time.\nA very high level summary of the IETF-113 discussion is that there is interest and likely support for the unified solution (review minutes for further details).", "new_text": "param_value_definition : If for any one of the endpoints the parameter is absent or set to 0, the endpoints MUST fall back to QUIC-TRANSPORT with a single active path and MUST NOT use any frame or mechanism defined in this document. If an endpoint receives an unexpected value for the transport parameter \"enable_multipath\", it MUST treat this as a connection error of type"}
{"id": "q-en-multipath-fa676345baae2c66e20b61d5eed74518130e63633b8c531bbd5aba681d6e4490", "old_text": "multipath negotiation is ambiguous, they MUST be interpreted as acknowledging packets sent on path 0. Endpoints negotiate the use of one packet number space for all paths or separate packet number spaces per path during the connection handshake nego. While separate packet number spaces allow for more efficient ACK encoding, especially when paths have highly different latencies, this approach requires the use of a connection ID. Therefore, use of a single number space can be beneficial when endpoints use a zero-length connection ID for less overhead. 7.1. If the multipath option is negotiated to use one packet number space for all paths, the packet sequence numbers are allocated from the common number space, so that, for example, packet number N could be sent on one path and packet number N+1 on another. ACK frames report the numbers of packets that have been received so far, regardless of the path on which they have been received. That means the sender needs to maintain an association between sent packet numbers and the path over which these packets were sent. This is necessary to implement per path congestion control. When a packet is acknowledged, the state of the congestion control MUST be updated for the path where the acknowledged packet was originally sent. The RTT is calculated based on the delay between the transmission of that packet and its first acknowledgement (see compute-rtt) and is used to update the RTT statistics for the sending path. Also loss detection MUST be adapted to allow for different RTTs on different paths. For example, timer computations should take into account the RTT of the path on which a packet was sent. Detections based on packet numbers shall compare a given packet number to the highest packet number received for that path. 7.1.1. 
If senders decide to send packets on paths with different transmission delays, some packets will very likely be received out of order. This will cause the ACK frames to carry multiple ranges of", "comments": "This derives from the discussions on issue\nThe commit applies the changes suggested in the previous reviews. With those changes, I think we have a good basis for discussing a unified solution in the working group.\nThanks NAME for the review. I have applied the editorial changes. You are asking for something more regarding the \"specific logic\" required when supporting zero-length CID. There are in fact two pieces to that: logic at the receiver, i.e., the node that chose to receive packets with zero-length CID; and logic at the sender, i.e., the node that accepts to send data on multiple paths towards a node that uses zero-length CID. This is explained in detail in the section \"Using Zero-Length connection ID\", so I guess what we need is a reference to that section.\nComment by NAME (on PR ): I think we should expand this section a bit as we found getting the delay right was non-trivial. QUIC time-stamp would work for sure. But if one decides not to use QUIC timestamp, we probably want to explain what needs to be done here.\nI have submitted PR for this one.\nClosing as got merged.\nIn section 3.1, we wrote When the multipath option is negotiated, clients that want to use an additional path MUST first initiate the Address Validation procedure with PATHCHALLENGE and PATHRESPONSE frames described in of ]. After receiving packets from the client on the new paths, the servers MAY in turn attempt to validate these paths using the same mechanisms. With multipath, we cannot simply write that the server MAY validate the new path. There is a risk of a malicious client that operates as follows: client creates path from its own address, A. 
Path is validated by the server client starts long download from server client creates a second path to server using a spoofed address, B client starts address validation for B but does not care about the response from the server client stops acknowledging packets over the first path to force the server to move packets to the second one and then opportunistically acknowledges packets that should have been sent by the server over that second path (SPNS would probably make such an attack simpler than MPNS) We probably need to start working on the security considerations section of the draft.\nThis is already covered in the QUIC threat model. The server is supposed to validate new paths by sending a \"path challenge\" on that path, then waiting for a challenge response. If the client spoofs a path, it will never receive the challenge, and thus the server will never receive the response and will not validate the path. See section 8.2 of RFC 9000, path validation. Security section 21.1.3. Connection Migration states that \"Path validation () establishes that a peer is both willing and able to receive packets sent on a particular path.\"\nSo yes, the current text should be \"the servers MUST in turn attempt to validate these paths using the same mechanisms, as specified in {{ Section 8.2 of RFC9000 }}.\"\nNo, this should not be a MUST. It is true that the path MUST be validated before it can be used. However, just because the server receives a new packet on a path, that doesn't automatically mean that the server has to start path validation. It could e.g. also decide to simply withdraw that packet.\nActually, this is also not right. This is what RFC9000 says (section 9): So I guess we should maybe not use normative language here but refer to section 9 in RFC9000 instead...?\nJust submitted PR for this issue.\nThis draft initially originates from a merging effort of previous Multipath proposals. 
While many points were found to be common between them, there remains one design point that still requires consensus: the number of packet number spaces that a Multipath QUIC connection should support (i.e., one for the whole connection vs. one per path). The current draft enables experimentation with both variants, but in the final version we will certainly need to choose one of the versions.\nThe main issues mentioned so far: Efficiency. One number space per path should be more efficient, because implementations can directly reuse the loss-recovery logic specified for QUIC ACK size. Single space leads to lots of out of order delivery, which causes ACK sizes to grow Simplicity of code. Single space can be implemented without adding many new code paths to uni-path QUIC Shorter header. Single space works well with NULL length CID The picoquic implementation shows that \"efficiency\" and \"ack size\" issues of single space implementations can be mitigated. However, that required significant improvements in the code: to obtain good loss efficiency, picoquic remembers not just the path on which a packet was sent, but also the order of the packet on this path, i.e. a \"virtual sequence number\". The loss detection logic then operates on that virtual sequence number. to contain ACK size, picoquic implements a prioritization logic to select the most important ranges in an ACK, avoid acking the same range more than 3 or 4 times, and keep knowledge of already acknowledged ranges so range coalescing works. I think these improvements are good in general, and I will keep them in the implementations whether we go for single space or not. The virtual sequence number is for example useful if the CID changes for reasons not related to path changes in multiple number space variants. It is also useful in unipath variants to avoid interference between sequence numbers used in probes and the RACK logic. 
The ACK size improvements do reduce the size of ACKs in the presence of out of order delivery, e.g., if the network is doing some kind of internal load balancing. On the other hand, the improvements are somewhat complex, would need to be described in separate drafts, and pretty much contradict the \"simplicity of code\" argument. So we are left with the \"Null length CID\" issue. I see four cases: Client CID Priority ----------------------- long Used by many implementations NULL Preferred configuration of many big deployments long Rarely used, server load balancing does not work NULL Only mentioned in some P2P deployments The point here is that it is somewhat hard to deploy a large server with NULL CID and use server load balancing. This configuration is mostly favored by planned P2P deployments.\nThe big debate is for the configuration with NULL CID on client, long CID on server. The packets from server to client do not carry a CID, and only the last bytes of the sequence number. The client will need some logic to infer the full sequence number before decrypting the packet. The client could maybe use the incoming 5 tuple as part of the logic, but it is not obvious. It is much simpler to assume a direct map from destination CID to number space. That means, if a peer uses a NULL CID, all packets sent to that peer are in the same number space.\nRevised table: Client CID What -- -- long Multiple number space NULL Multiple number spaces on client side (one per CID), single space on server side long Multiple number spaces on server side (one per CID), single space on client side NULL single number space on each side If a node advertises both NULL CID and multipath support, it SHOULD have logic to contain the size of ACK. 
If a node engages in multipath with a NULL CID peer, it SHOULD have special logic to make loss recovery work well.\nI think the above points to a possible \"unified solution\": if multipath negotiated, tie number spaces to destination CID, use default (N=0) if NULL CID. use multipath ack; in multipath ACK, identify number space by sequence number of DCID in received packets. Default that number to zero if received with NULL CID. in ABANDON path, identify path either by DCID of received packets, SCID of sent packet, or \"this path\" (sending path) if CID is NULL.\nI like the proposal of the \"unified solution\". I think the elegance lies in the fact that it allows us to automatically cover all four cases listed above. The previous dilemma for me was that on one hand we have some use cases where we need to support more than two paths and separate PN makes the job easier, but on the other hand, I think we should not ignore the NULL CID use cases as it is also important. Now, with this proposal, a big part of the problem is solved. The rest of the challenge is to make sure single PN remains efficient in terms of ACK and loss recovery. On that part, we plan to do an A/B test and would love to share the results when we get them. There is one more problem as pointed out in issue , when we want to take hardware offloads into account. In such a case, we may still need single PN for long server CID. However, if hardware supports nonce modification, this problem can be addressed with the proposed \"unified solution\".\nIf the API does not support 96 bit sequence numbers, it should always be possible to create an encryption context per number space, using Key=current key and ID = current-ID + CID sequence number. Of course, that forces creation of multiple contexts, and that makes key rotation a bit harder to manage. But still, that should be possible.\nThanks for the summary NAME I think one point is missing in your list which is related to issue . 
Use of a single packet number space might not support ECN. Regarding the unified solution: I think what you actually say is that we would specify both solutions and everybody would need to implement both logics. At least regarding the \"simplicity of code\" argument, that would be the worst choice. If we can make the multiple packet number spaces solution work with one-sided CIDs, I'm tending toward such an approach. Use of multiple packet number spaces avoids ambiguity/needed \"smartness\" in ACK handling and packet scheduling which, as you say above, can make the implementation more complex and, moreover, wrong decisions may have large impact on efficiency (both local processing and on-the-wire). I don't think we want to leave these things to each implementation individually.\nThe summary Christian made above about design comparison sounds indeed quite accurate. Besides ECN, my other concern about single packet number is that it requires cleverness from the receiver side if you want to be performant in some scenarios. At the sender-side, you need to consider a path-relative packet number threshold instead of an absolute one to avoid spurious losses. Just a point I think we did not mention yet is that there can be some interactions between Multipath out-of-order number delivery and incoming packet number duplicate detection. This requires maintaining some state at the receiver side, as described by URL With single packet number space, the receiver should take extra care when updating the \"minimum packet number below which all packets are immediately dropped\". Otherwise, in the presence of paths with very different latencies, the receiver might end up discarding packets from a (slower) path. I'm also preferring the multiple packet number spaces solution for the above reasons. 
I'm not against thinking about a \"unified\" solution (the proposal sounds interesting), but I wonder how much complexity this would add compared to requiring end hosts to use one-byte CIDs.\nI think the issue is not really so much \"single vs multiple number space\" as \"support for multipath and NULL CID\". As noted by Mirja, there is an implementation cost there. My take is: If a node uses NULL CID and proposes support of multipath, then that node MUST implement code to deal with size of acknowledgements. If a node sees a peer using NULL CID and supporting multipath, then that node MUST either use only one path at a time or implement code to deal with impact on loss detection due to out of order arrivals, and impact on congestion control including ambiguity of ECN signals. Then add sections on what it means to deal with the size of acknowledgements, out of order arrivals, and congestion control. I think this approach ends up with the correct compromises: It keeps the main case simple. Nodes that rely on multipath just use long CID and one number space per CID. Nodes that use long CID never have to implement the ACK length minimization algorithms. Nodes that see a peer using NULL CID have a variety of implementation choices, e.g., sending on just one path at a time, sending on mostly one path at a time in \"make or break\" transitions, or implementing sophisticated algorithms. If we do that, we can get rid of the single vs multiple space discussion, and end up with a single solution addressing all use cases.\nLooking at all the discussions here and in other issues such as ECN, I think that we should try to write two different versions of section 7: one covering all the aspects of single packet number space (RTT estimation, retransmissions, ECN, handling of CIDs, ...) one covering all the aspects of multiple packet number space (RTT estimation, retransmissions, ECN, handling of CIDs, ...) 
There would be some overlap between these two sections and also some differences that would become clear as we specify them. At the end of this writing, we'll know whether it is possible to support/unify both or we need to recommend a single one. The other parts of the document are almost independent of that and can evolve in parallel with these two sections. However, I don't think that such a change would be possible by Monday.\nI don't think we should rush changes before we have agreed on the final vision.\nIt might be helpful to have these options as PRs (without merging them), so people can understand all details of each approach.\nI agree with NAME that supporting multi-path with null CIDs is a more fundamental issue than the efficiency comparison between single PN and separate PN, as it would ultimately impact the application scope of multipath QUIC. But indeed, we might want to implement the proposed solution first and then decide if we want to adopt such a unified approach.\nSince NAME prodded me, we now have a PR for the \"unified\" proposal.\nI totally agree with Christian that the issue is about \"support for multipath and NULL CID\", and the solution that Christian suggested looks really great! 
If a zero-length connection ID is used, one packet number space for all paths. That means the packet sequence numbers are allocated from the common number space, so that, for example, packet number N could be sent on one path and packet number N+1 on another. In this case, ACK frames report the numbers of packets that have been received so far, regardless of the path on which they have been received. That means the senders needs to maintain an association between sent packet numbers and the path over which these packets were sent. This is necessary to implement per path congestion control, as explained in zero-length-cid-loss-and-congestion. Further, the receiver of packets with zero-length connection IDs should implement handling of acknowledgements as defined in sending- acknowledgements-and-handling-ranges. ECN handing is specified in ecn-handling, and mitigation of the RTT measurement is further explained in ack-delay-and-zero-length-cid- considerations. If a node does not want to implement this logic, it MAY instead limit its use of multiple paths as explained in restricted-sending-to-zero- length-cid-peer. 7.1.1. If zero-length CID and therefore also a single packet number space is used by the sender, the receiver MAY send ACK frames instead of ACK_MP frames to reduce overhead as the additional path ID field will anyway always carry the same value. If senders decide to send packets on paths with different transmission delays, some packets will very likely be received out of order. This will cause the ACK frames to carry multiple ranges of"}
{"id": "q-en-multipath-fa676345baae2c66e20b61d5eed74518130e63633b8c531bbd5aba681d6e4490", "old_text": "7.1.2. The ACK Delay field of the ACK frame is relative to the largest acknowledged packet number (see Section 13.2.5 of QUIC-TRANSPORT). When using paths with different transmission delays, the reported", "comments": "This derives from the discussions on issue\nThe commit applies the changes suggested in the previous reviews. With those changes, I think we have a good basis for discussing a unified solution in the working group.\nThanks NAME for the review. I have applied the editorial changes. You are asking for something more regarding the \"specific logic\" required when supporting zero-length CID. There are in fact two pieces to that: logic at the receiver, i.e., the node that chose to receive packets with zero-length CID; and logic at the sender, i.e., the node that accepts to send data on multiple paths towards a node that uses zero-length CID. This is explained in details in the section \"Using Zero-Length connection ID\", so I guess what we need is a reference to that section.\nComment by NAME (on PR ): I think we expand this section a bit as we found getting the delay right was non-trivial. QUIC time-stamp would work for sure. But if one decides not to use QUIC timestamp, we probably want to explain what is needed to be done here.\nI have submitted PR for this one.\nClosing as got merged.\nIn section 3.1, we wrote When the multipath option is negotiated, clients that want to use an additional path MUST first initiate the Address Validation procedure with PATHCHALLENGE and PATHRESPONSE frames described in of ]. After receiving packets from the client on the new paths, the servers MAY in turn attempt to validate these paths using the same mechanisms. With multipath, we cannot simply write that the server MAY validate the new path. There is a risk of a malicious client that operates as follows: client creates path from its own address, A. 
Path is validated by the server; the client starts a long download from the server; the client creates a second path to the server using a spoofed address, B; the client starts address validation for B but does not care about the response from the server; the client stops acknowledging packets over the first path to force the server to move packets to the second one, and then opportunistically acknowledges packets that should have been sent by the server over that second path (SPNS would probably make such an attack simpler than MPNS). We probably need to start working on the security considerations section of the draft.\nThis is already covered in the QUIC threat model. The server is supposed to validate new paths by sending a \"path challenge\" on that path, then waiting for a challenge response. If the client spoofs a path, it will never receive the challenge, and thus the server will never receive the response and will not validate the path. See section 8.2 of RFC 9000, path validation. Security section 21.1.3. Connection Migration states that \"Path validation () establishes that a peer is both willing and able to receive packets sent on a particular path.\"\nSo yes, the current text should be \"the servers MUST in turn attempt to validate these paths using the same mechanisms, as specified in {{ Section 8.2 of RFC9000 }}.\"\nNo, this should not be a MUST. It is true that the path MUST be validated before it can be used. However, just because the server receives a new packet on a path, that doesn't automatically mean that the server has to start path validation. It could, e.g., also decide to simply drop that packet.\nActually, this is also not right. This is what RFC9000 says (section 9): So I guess we should maybe not use normative language here but refer to section 9 in RFC9000 instead...?\nJust submitted PR for this issue.\nThis draft initially originates from a merging effort of previous Multipath proposals.
While many points were found to be common between them, there remains one design point that still requires consensus: the number of packet number spaces that a Multipath QUIC connection should support (i.e., one for the whole connection vs. one per path). The current draft enables experimentation with both variants, but in the final version we will certainly need to choose one of them.\nThe main issues mentioned so far: (1) Efficiency: one number space per path should be more efficient, because implementations can directly reuse the loss-recovery logic specified for QUIC. (2) ACK size: a single space leads to lots of out-of-order delivery, which causes ACK sizes to grow. (3) Simplicity of code: a single space can be implemented without adding many new code paths to uni-path QUIC. (4) Shorter header: a single space works well with NULL length CID. The picoquic implementation shows that \"efficiency\" and \"ack size\" issues of single space implementations can be mitigated. However, that required significant improvements in the code: to obtain good loss efficiency, picoquic remembers not just the path on which a packet was sent, but also the order of the packet on this path, i.e. a \"virtual sequence number\". The loss detection logic then operates on that virtual sequence number. To contain ACK size, picoquic implements prioritization logic to select the most important ranges in an ACK, avoid acking the same range more than 3 or 4 times, and keep knowledge of already acknowledged ranges so range coalescing works. I think these improvements are good in general, and I will keep them in the implementations whether we go for single space or not. The virtual sequence number is for example useful if the CID changes for reasons not related to path changes in multiple number space variants. It is also useful in unipath variants to avoid interference between sequence numbers used in probes and the RACK logic.
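The per-path "virtual sequence number" idea described above can be sketched as follows. This is an illustrative toy model (class and variable names are invented, not picoquic's actual code): all paths share one packet number space, but each path keeps its own send counter, so the packet-threshold loss test of RFC 9002 can run per path despite cross-path reordering.

```python
# Toy model of per-path "virtual sequence numbers" (invented names, not
# picoquic's actual code): all paths share one packet number space, but
# each path keeps its own send counter so the packet-threshold loss test
# (RFC 9002, Section 6.1.1) can run per path despite cross-path reordering.
KPACKET_THRESHOLD = 3

class SinglePnSpaceSender:
    def __init__(self):
        self.next_vseq = {}   # path_id -> next virtual sequence number
        self.sent = {}        # packet_number -> [path_id, vseq, acked]

    def on_packet_sent(self, packet_number, path_id):
        vseq = self.next_vseq.get(path_id, 0)
        self.next_vseq[path_id] = vseq + 1
        self.sent[packet_number] = [path_id, vseq, False]

    def on_ack_received(self, acked_packet_numbers):
        for pn in acked_packet_numbers:
            if pn in self.sent:
                self.sent[pn][2] = True
        # Highest acknowledged virtual sequence number seen on each path.
        highest = {}
        for path_id, vseq, acked in self.sent.values():
            if acked:
                highest[path_id] = max(highest.get(path_id, -1), vseq)
        # A packet is declared lost only if enough LATER packets on the
        # SAME path were acknowledged; gaps caused by other paths are ignored.
        lost = []
        for pn, (path_id, vseq, acked) in list(self.sent.items()):
            if not acked and highest.get(path_id, -1) - vseq >= KPACKET_THRESHOLD:
                lost.append(pn)
                del self.sent[pn]
        return lost
```

Note that a plain packet-number threshold on the shared space would spuriously declare slower-path packets lost; keying the threshold to the per-path virtual sequence numbers avoids that.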
The ACK size improvements do reduce the size of ACKs in presence of out of order delivery, e.g., if the network is doing some kind of internal load balancing. On the other hand, the improvements are somewhat complex, would need to be described in separate drafts, and pretty much contradict the \"simplicity of code\" argument. So we are left with the \"Null length CID\" issue. I see four cases (Client CID / Server CID -- Priority): long/long -- used by many implementations; NULL/long -- preferred configuration of many big deployments; long/NULL -- rarely used, server load balancing does not work; NULL/NULL -- only mentioned in some P2P deployments. The point here is that it is somewhat hard to deploy a large server with NULL CID and use server load balancing. This configuration is mostly favored by planned P2P deployments.\nThe big debate is for the configuration with NULL CID on client, long CID on server. The packets from server to client do not carry a CID, and only the last bytes of the sequence number. The client will need some logic to infer the full sequence number before decrypting the packet. The client could maybe use the incoming 5-tuple as part of the logic, but it is not obvious. It is much simpler to assume a direct map from destination CID to number space. That means, if a peer uses a NULL CID, all packets sent to that peer are in the same number space.\nRevised table (Client CID / Server CID -- What): long/long -- multiple number spaces; NULL/long -- multiple number spaces on client side (one per CID), single space on server side; long/NULL -- multiple number spaces on server side (one per CID), single space on client side; NULL/NULL -- single number space on each side. If a node advertises both NULL CID and multipath support, it SHOULD have logic to contain the size of ACKs. If a node engages in multipath with a NULL CID peer, it SHOULD have special logic to make loss recovery work well.\nI think the above points to a possible \"unified solution\": if multipath is negotiated, tie number spaces to the destination CID, using the default (N=0) if the CID is NULL; use the multipath ACK; in the multipath ACK, identify the number space by the sequence number of the DCID in received packets, defaulting that number to zero if received with a NULL CID; in PATH_ABANDON, identify the path either by the DCID of received packets, the SCID of sent packets, or \"this path\" (the sending path) if the CID is NULL.\nI like the proposal of the \"unified solution\". I think the elegance lies in the fact that it allows us to automatically cover all four cases listed above. The previous dilemma for me was that on one hand we have some use cases where we need to support more than two paths and separate PN makes the job easier, but on the other hand, I think we should not ignore the NULL CID use cases as they are also important. Now, with this proposal, a big part of the problem is solved. The rest of the challenge is to make sure single PN remains efficient in terms of ACK and loss recovery. On that part, we plan to do an A/B test and would love to share the results when we get them. There is one more problem, as pointed out in issue , when we want to take hardware offloads into account. In such a case, we may still need single PN for long server CID. However, if hardware supports nonce modification, this problem can be addressed with the proposed \"unified solution\".\nIf the API does not support 96-bit sequence numbers, it should always be possible to create an encryption context per number space, using Key=current key and ID = current-ID + CID sequence number. Of course, that forces creation of multiple contexts, and that makes key rotation a bit harder to manage. But still, that should be possible.\nThanks for the summary, NAME. I think one point is missing in your list, which is related to issue .
Use of a single packet number space might not support ECN. Regarding the unified solution: I think what you actually say is that we would specify both solutions and everybody would need to implement both logics. At least regarding the \"simplicity of code\" argument, that would be the worst choice. If we can make the multiple packet number spaces solution work with one-sided CIDs, I'm tending toward such an approach. Use of multiple packet number spaces avoids ambiguity/needed \"smartness\" in ACK handling and packet scheduling which, as you say above, can make the implementation more complex and, moreover, wrong decisions may have a large impact on efficiency (both local processing and on-the-wire). I don't think we want to leave these things to each implementation individually.\nThe summary Christian made above about the design comparison sounds indeed quite accurate. Besides ECN, my other concern about a single packet number space is that it requires cleverness from the receiver side if you want to be performant in some scenarios. At the sender side, you need to consider a path-relative packet number threshold instead of an absolute one to avoid spurious losses. Just a point I think we did not mention yet is that there can be some interactions between Multipath out-of-order number delivery and incoming packet number duplicate detection. This requires maintaining some state at the receiver side, as described by URL. With a single packet number space, the receiver should take extra care when updating the \"minimum packet number below which all packets are immediately dropped\". Otherwise, in presence of paths with very different latencies, the receiver might end up discarding packets from a (slower) path. I'm also preferring the multiple packet number spaces solution for the above reasons.
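The duplicate-detection concern raised above can be illustrated with a small sketch (hypothetical names, not from any implementation): with one packet number space shared by paths of different latencies, the floor below which packets are rejected has to advance conservatively, or packets still in flight on a slower path are discarded as too old.

```python
# Hypothetical sketch (names invented) of the receiver-side concern: with
# one packet number space shared by paths of different latencies, the
# "minimum packet number below which all packets are immediately dropped"
# must advance conservatively, or packets still in flight on a slower
# path are rejected as too old instead of being decrypted and processed.
class DuplicateDetector:
    def __init__(self, window=1024):
        self.window = window  # how much cross-path reordering we tolerate
        self.floor = 0        # packet numbers below this are rejected
        self.seen = set()     # received packet numbers at or above the floor

    def accept(self, pn):
        """Return True if pn is new; False if duplicate or below the floor."""
        if pn < self.floor or pn in self.seen:
            return False
        self.seen.add(pn)
        # Advance the floor only as far as the window forces us to.
        top = max(self.seen)
        if top - self.floor >= self.window:
            self.floor = top - self.window + 1
            self.seen = {n for n in self.seen if n >= self.floor}
        return True
```

The window size bounds the state kept at the receiver; too small a window, relative to the bandwidth-delay difference between paths, causes exactly the slower-path discards the comment warns about.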
I'm not against thinking about a \"unified\" solution (the proposal sounds interesting), but I wonder how much complexity this would add compared to requiring end hosts to use one-byte CIDs.\nI think the issue is not really so much \"single vs multiple number space\" as \"support for multipath and NULL CID\". As noted by Mirja, there is an implementation cost there. My take is: If a node uses NULL CID and proposes support of multipath, then that node MUST implement code to deal with the size of acknowledgements. If a node sees a peer using NULL CID and supporting multipath, then that node MUST either use only one path at a time or implement code to deal with the impact on loss detection due to out of order arrivals, and the impact on congestion control including ambiguity of ECN signals. Then add sections on what it means to deal with the size of acknowledgements, out of order arrivals, and congestion control. I think this approach ends up with the correct compromises: It keeps the main case simple. Nodes that rely on multipath just use long CID and one number space per CID. Nodes that use long CID never have to implement the ACK length minimization algorithms. Nodes that see a peer using NULL CID have a variety of implementation choices, e.g., sending on just one path at a time, sending on mostly one path at a time in \"make or break\" transitions, or implementing sophisticated algorithms. If we do that, we can get rid of the single vs multiple space discussion, and end up with a single solution addressing all use cases.\nLooking at all the discussions here and in other issues such as ECN, I think that we should try to write two different versions of section 7: one covering all the aspects of single packet number space (RTT estimation, retransmissions, ECN, handling of CIDs, ...) and one covering all the aspects of multiple packet number spaces (RTT estimation, retransmissions, ECN, handling of CIDs, ...)
There would be some overlap between these two sections and also some differences that would become clear as we specify them. At the end of this writing, we'll know whether it is possible to support/unify both or we need to recommend a single one. The other parts of the document are almost independent of that and can evolve in parallel with these two sections. However, I don't think that such a change would be possible by Monday.\nI don't think we should rush changes before we have agreed on the final vision.\nIt might be helpful to have these options as PRs (without merging them), so people can understand all the details of each approach.\nI agree with NAME that supporting multi-path with null CIDs is a more fundamental issue than the efficiency comparison between single PN and separate PN, as it would ultimately impact the application scope of multipath QUIC. But indeed, we might want to implement the proposed solution first and then decide if we want to adopt such a unified approach.\nSince NAME prodded me, we now have a PR for the \"unified\" proposal.\nI totally agree with Christian that the issue is about \"support for multipath and NULL CID\", and the solution that Christian suggested looks really great! It both takes advantage of multiple spaces and supports NULL CID users without affecting the efficiency of ACK arrangements. Besides, the solution is more convenient for implementations: if both endpoints use non-zero-length CIDs, endpoints only need to support multiple spaces, and if one of the endpoints uses a NULL CID, it can use a single PN space in one direction and thus support NULL CID and multipath at the same time.\nA very high level summary of the IETF-113 discussion is that there is interest and likely support for the unified solution (review minutes for further details).", "new_text": "7.1.2. When sending to a zero-length CID receiver, senders may receive acknowledgements that combine packet numbers received over multiple paths.
However, even if one packet number space is used on multiple paths, the sender MUST maintain separate congestion control state for each path. Therefore, senders MUST be able to infer the sending path from the acknowledged packet numbers, for example by remembering which packet was sent on what path. Senders MUST use that information to perform congestion control on the relevant paths, and to correctly estimate the transmission delays on each path. (See ack-delay-and-zero-length-cid-considerations for specific considerations about using the ACK Delay field of ACK frames, and ecn-handling for issues on using ECN marks.) Loss detection as specified in QUIC-RECOVERY uses algorithms based on timers and on sequence numbers. When packets are sent over multiple paths, loss detection must be adapted to allow for different RTTs on different paths. When sending to zero-length CID receivers, packets sent on different paths may be received out of order. Therefore, senders cannot directly use the packet sequence numbers to compute the Packet Thresholds defined in Section 6.1.1 of QUIC-RECOVERY. Relying only on Time Thresholds produces correct results, but is somewhat suboptimal. Some implementations have been getting good results by not just remembering the path over which a packet was sent, but also maintaining an ordered list of packets sent on each path. That ordered list can then be used to compute acknowledgement gaps per path in Packet Threshold tests. 7.1.3. The ACK Delay field of the ACK frame is relative to the largest acknowledged packet number (see Section 13.2.5 of QUIC-TRANSPORT). When using paths with different transmission delays, the reported
{"id": "q-en-multipath-fa676345baae2c66e20b61d5eed74518130e63633b8c531bbd5aba681d6e4490", "old_text": "latency. To collect ACK delays on all the paths, hosts should rely on time stamps as described in QUIC-Timestamp. 7.2. If the multipath option is enabled with a value of 2, each path has its own packet number space for transmitting 1-RTT packets and a new ACK frame format is used as specified in ack-mp-frame. Compared to the QUIC version 1 ACK frame, the ACK_MP frames additionally contains a Packet Number Space Identifier (PN Space ID). The PN Space ID used to distinguish packet number spaces for different paths and is simply derived from the sequence number of Destination Connection ID. Therefore, the packet number space for 1-RTT packets can be identified based on the Destination Connection ID in each packets.", "comments": "This derives from the discussions on issue\nThe commit applies the changes suggested in the previous reviews. With those changes, I think we have a good basis for discussing a unified solution in the working group.\nThanks NAME for the review. I have applied the editorial changes. You are asking for something more regarding the \"specific logic\" required when supporting zero-length CID. There are in fact two pieces to that: logic at the receiver, i.e., the node that chose to receive packets with zero-length CID; and logic at the sender, i.e., the node that accepts to send data on multiple paths towards a node that uses zero-length CID. This is explained in details in the section \"Using Zero-Length connection ID\", so I guess what we need is a reference to that section.\nComment by NAME (on PR ): I think we expand this section a bit as we found getting the delay right was non-trivial. QUIC time-stamp would work for sure. 
But if one decides not to use QUIC timestamp, we probably want to explain what is needed to be done here.\nI have submitted PR for this one.\nClosing as got merged.\nIn section 3.1, we wrote When the multipath option is negotiated, clients that want to use an additional path MUST first initiate the Address Validation procedure with PATHCHALLENGE and PATHRESPONSE frames described in of ]. After receiving packets from the client on the new paths, the servers MAY in turn attempt to validate these paths using the same mechanisms. With multipath, we cannot simply write that the server MAY validate the new path. There is a risk of a malicious client that operates as follows: client creates a path from its own address, A. Path is validated by the server; the client starts a long download from the server; the client creates a second path to the server using a spoofed address, B; the client starts address validation for B but does not care about the response from the server; the client stops acknowledging packets over the first path to force the server to move packets to the second one, and then opportunistically acknowledges packets that should have been sent by the server over that second path (SPNS would probably make such an attack simpler than MPNS). We probably need to start working on the security considerations section of the draft.\nThis is already covered in the QUIC threat model. The server is supposed to validate new paths by sending a \"path challenge\" on that path, then waiting for a challenge response. If the client spoofs a path, it will never receive the challenge, and thus the server will never receive the response and will not validate the path. See section 8.2 of RFC 9000, path validation. Security section 21.1.3. Connection Migration states that \"Path validation () establishes that a peer is both willing and able to receive packets sent on a particular path.\"\nSo yes, the current text should be \"the servers MUST in turn attempt to validate these paths using the same mechanisms, as specified in {{ Section 8.2 of RFC9000 }}.\"\nNo, this should not be a MUST. It is true that the path MUST be validated before it can be used. However, just because the server receives a new packet on a path, that doesn't automatically mean that the server has to start path validation. It could, e.g., also decide to simply drop that packet.\nActually, this is also not right. This is what RFC9000 says (section 9): So I guess we should maybe not use normative language here but refer to section 9 in RFC9000 instead...?\nJust submitted PR for this issue.\nThis draft initially originates from a merging effort of previous Multipath proposals. While many points were found to be common between them, there remains one design point that still requires consensus: the number of packet number spaces that a Multipath QUIC connection should support (i.e., one for the whole connection vs. one per path). The current draft enables experimentation with both variants, but in the final version we will certainly need to choose one of them.\nThe main issues mentioned so far: (1) Efficiency: one number space per path should be more efficient, because implementations can directly reuse the loss-recovery logic specified for QUIC. (2) ACK size: a single space leads to lots of out-of-order delivery, which causes ACK sizes to grow. (3) Simplicity of code: a single space can be implemented without adding many new code paths to uni-path QUIC. (4) Shorter header: a single space works well with NULL length CID. The picoquic implementation shows that \"efficiency\" and \"ack size\" issues of single space implementations can be mitigated.
However, that required significant improvements in the code: to obtain good loss efficiency, picoquic remembers not just the path on which a packet was sent, but also the order of the packet on this path, i.e. a \"virtual sequence number\". The loss detection logic then operates on that virtual sequence number. To contain ACK size, picoquic implements prioritization logic to select the most important ranges in an ACK, avoid acking the same range more than 3 or 4 times, and keep knowledge of already acknowledged ranges so range coalescing works. I think these improvements are good in general, and I will keep them in the implementations whether we go for single space or not. The virtual sequence number is for example useful if the CID changes for reasons not related to path changes in multiple number space variants. It is also useful in unipath variants to avoid interference between sequence numbers used in probes and the RACK logic. The ACK size improvements do reduce the size of ACKs in presence of out of order delivery, e.g., if the network is doing some kind of internal load balancing. On the other hand, the improvements are somewhat complex, would need to be described in separate drafts, and pretty much contradict the \"simplicity of code\" argument. So we are left with the \"Null length CID\" issue. I see four cases (Client CID / Server CID -- Priority): long/long -- used by many implementations; NULL/long -- preferred configuration of many big deployments; long/NULL -- rarely used, server load balancing does not work; NULL/NULL -- only mentioned in some P2P deployments. The point here is that it is somewhat hard to deploy a large server with NULL CID and use server load balancing. This configuration is mostly favored by planned P2P deployments.\nThe big debate is for the configuration with NULL CID on client, long CID on server. The packets from server to client do not carry a CID, and only the last bytes of the sequence number. The client will need some logic to infer the full sequence number before decrypting the packet. The client could maybe use the incoming 5-tuple as part of the logic, but it is not obvious. It is much simpler to assume a direct map from destination CID to number space. That means, if a peer uses a NULL CID, all packets sent to that peer are in the same number space.\nRevised table (Client CID / Server CID -- What): long/long -- multiple number spaces; NULL/long -- multiple number spaces on client side (one per CID), single space on server side; long/NULL -- multiple number spaces on server side (one per CID), single space on client side; NULL/NULL -- single number space on each side. If a node advertises both NULL CID and multipath support, it SHOULD have logic to contain the size of ACKs. If a node engages in multipath with a NULL CID peer, it SHOULD have special logic to make loss recovery work well.\nI think the above points to a possible \"unified solution\": if multipath is negotiated, tie number spaces to the destination CID, using the default (N=0) if the CID is NULL; use the multipath ACK; in the multipath ACK, identify the number space by the sequence number of the DCID in received packets, defaulting that number to zero if received with a NULL CID; in PATH_ABANDON, identify the path either by the DCID of received packets, the SCID of sent packets, or \"this path\" (the sending path) if the CID is NULL.\nI like the proposal of the \"unified solution\". I think the elegance lies in the fact that it allows us to automatically cover all four cases listed above. The previous dilemma for me was that on one hand we have some use cases where we need to support more than two paths and separate PN makes the job easier, but on the other hand, I think we should not ignore the NULL CID use cases as they are also important. Now, with this proposal, a big part of the problem is solved. The rest of the challenge is to make sure single PN remains efficient in terms of ACK and loss recovery. On that part, we plan to do an A/B test and would love to share the results when we get them.
There is one more problem, as pointed out in issue , when we want to take hardware offloads into account. In such a case, we may still need single PN for long server CID. However, if hardware supports nonce modification, this problem can be addressed with the proposed \"unified solution\".\nIf the API does not support 96-bit sequence numbers, it should always be possible to create an encryption context per number space, using Key=current key and ID = current-ID + CID sequence number. Of course, that forces creation of multiple contexts, and that makes key rotation a bit harder to manage. But still, that should be possible.\nThanks for the summary, NAME. I think one point is missing in your list, which is related to issue . Use of a single packet number space might not support ECN. Regarding the unified solution: I think what you actually say is that we would specify both solutions and everybody would need to implement both logics. At least regarding the \"simplicity of code\" argument, that would be the worst choice. If we can make the multiple packet number spaces solution work with one-sided CIDs, I'm tending toward such an approach. Use of multiple packet number spaces avoids ambiguity/needed \"smartness\" in ACK handling and packet scheduling which, as you say above, can make the implementation more complex and, moreover, wrong decisions may have a large impact on efficiency (both local processing and on-the-wire). I don't think we want to leave these things to each implementation individually.\nThe summary Christian made above about the design comparison sounds indeed quite accurate. Besides ECN, my other concern about a single packet number space is that it requires cleverness from the receiver side if you want to be performant in some scenarios. At the sender side, you need to consider a path-relative packet number threshold instead of an absolute one to avoid spurious losses.
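The per-number-space encryption context idea mentioned in the comment above could look roughly like this. This is only an illustration of that suggestion, not a normative nonce construction from any draft: the CID sequence number is mixed into the high bits of the 96-bit AEAD nonce so that a (key, nonce) pair is never reused across packet number spaces.

```python
# Illustration of the comment's suggestion (not a normative construction
# from any draft): derive a distinct AEAD nonce per number space by mixing
# the CID sequence number into the high bits of the 96-bit nonce, so that
# a (key, nonce) pair is never reused across packet number spaces.
def multipath_nonce(iv: bytes, cid_seq: int, packet_number: int) -> bytes:
    assert len(iv) == 12              # QUIC AEAD IVs are 96 bits
    assert packet_number < 2 ** 64    # packet numbers fit in 64 bits
    # High 32 bits: CID sequence number; low 64 bits: packet number.
    composed = (cid_seq << 64) | packet_number
    return bytes(a ^ b for a, b in zip(iv, composed.to_bytes(12, "big")))
```

If the crypto API only accepts 64-bit sequence numbers, the alternative the comment describes is to keep one context per space with a per-space IV instead of widening the nonce input.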
Just a point I think we did not mention yet is that there can be some interactions between Multipath out-of-order number delivery and incoming packet number duplicate detection. This requires maintaining some state at the receiver side, as described by URL. With a single packet number space, the receiver should take extra care when updating the \"minimum packet number below which all packets are immediately dropped\". Otherwise, in presence of paths with very different latencies, the receiver might end up discarding packets from a (slower) path. I'm also preferring the multiple packet number spaces solution for the above reasons. I'm not against thinking about a \"unified\" solution (the proposal sounds interesting), but I wonder how much complexity this would add compared to requiring end hosts to use one-byte CIDs.\nI think the issue is not really so much \"single vs multiple number space\" as \"support for multipath and NULL CID\". As noted by Mirja, there is an implementation cost there. My take is: If a node uses NULL CID and proposes support of multipath, then that node MUST implement code to deal with the size of acknowledgements. If a node sees a peer using NULL CID and supporting multipath, then that node MUST either use only one path at a time or implement code to deal with the impact on loss detection due to out of order arrivals, and the impact on congestion control including ambiguity of ECN signals. Then add sections on what it means to deal with the size of acknowledgements, out of order arrivals, and congestion control. I think this approach ends up with the correct compromises: It keeps the main case simple. Nodes that rely on multipath just use long CID and one number space per CID. Nodes that use long CID never have to implement the ACK length minimization algorithms. Nodes that see a peer using NULL CID have a variety of implementation choices, e.g., sending on just one path at a time, sending on mostly one path at a time in \"make or break\" transitions, or implementing sophisticated algorithms. If we do that, we can get rid of the single vs multiple space discussion, and end up with a single solution addressing all use cases.\nLooking at all the discussions here and in other issues such as ECN, I think that we should try to write two different versions of section 7: one covering all the aspects of single packet number space (RTT estimation, retransmissions, ECN, handling of CIDs, ...) and one covering all the aspects of multiple packet number spaces (RTT estimation, retransmissions, ECN, handling of CIDs, ...) There would be some overlap between these two sections and also some differences that would become clear as we specify them. At the end of this writing, we'll know whether it is possible to support/unify both or we need to recommend a single one. The other parts of the document are almost independent of that and can evolve in parallel with these two sections. However, I don't think that such a change would be possible by Monday.\nI don't think we should rush changes before we have agreed on the final vision.\nIt might be helpful to have these options as PRs (without merging them), so people can understand all the details of each approach.\nI agree with NAME that supporting multi-path with null CIDs is a more fundamental issue than the efficiency comparison between single PN and separate PN, as it would ultimately impact the application scope of multipath QUIC. But indeed, we might want to implement the proposed solution first and then decide if we want to adopt such a unified approach.\nSince NAME prodded me, we now have a PR for the \"unified\" proposal.\nI totally agree with Christian that the issue is about \"support for multipath and NULL CID\", and the solution that Christian suggested looks really great!
It both takes advantage of multiple spaces and supports NULL CID users without affecting the efficiency of ACK arrangements. Besides, the solution is more convenient for implementations: if both endpoints use non-zero-length CIDs, endpoints only need to support multiple spaces, and if one of the endpoints uses a NULL CID, it can use a single PN space in one direction and thus support NULL CID and multipath at the same time.\nA very high level summary of the IETF-113 discussion is that there is interest and likely support for the unified solution (review minutes for further details).", "new_text": "latency. To collect ACK delays on all the paths, hosts should rely on time stamps as described in QUIC-Timestamp. 7.1.4. ECN feedback in QUIC is provided based on counters in the ACK frame (see Section 19.3.2 of QUIC-TRANSPORT). That means that if an ACK frame acknowledges multiple packets, the ECN feedback cannot be attributed to a specific packet. There are separate counters for each packet number space. However, when sending to zero-length CID receivers, the same number space is used for multiple paths. Likewise, if an ACK frame acknowledges multiple packets from different paths, the ECN feedback cannot unambiguously be assigned to a path. If the sender marks its packets with the ECN capable flags, the network will expect standard reactions to ECN marks, such as slowing down transmission on the sending path. If a zero-length CID is used, the sending path is however ambiguous. Therefore, the sender MUST treat a CE marking as a congestion signal on all sending paths that were used by a packet acknowledged in the ACK frame signaling the CE counter increase. A host that is sending over multiple paths to a zero-length CID receiver MAY disable ECN marking and send all subsequent packets as Not-ECN capable. 7.1.5. 
Hosts that are designed to support multipath using multiple number spaces MAY adopt a conservative posture after negotiating multipath support with a peer using zero-length CID. The simplest posture is to only send data on one path at a time, while accepting packets on all acceptable paths. In that case: the attribution of packets to path discussed in zero-length-cid- loss-and-congestion are easy to solve because packets are sent on a single path, the ACK Delays are correct, the vast majority of ECN marks relate to the current sending path. Of course, the hosts will only take limited advantage from the multipath capability in these scenarios. Support for \"make before break\" migrations will improve, but load sharing between multiple paths will not work. 7.2. If packets contain a non-zero CID, each path has its own packet number space for transmitting 1-RTT packets and a new ACK frame format is used as specified in ack-mp-frame. Compared to the QUIC version 1 ACK frame, the ACK_MP frames additionally contains a Packet Number Space Identifier (PN Space ID). The PN Space ID used to distinguish packet number spaces for different paths and is simply derived from the sequence number of Destination Connection ID. Therefore, the packet number space for 1-RTT packets can be identified based on the Destination Connection ID in each packets."}
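The CE-attribution rule above (a CE counter increase must be treated as a congestion signal on every path that sent an acknowledged packet, because a shared number space makes the sending path ambiguous) can be sketched as follows. The `Path` class and the packet-to-path bookkeeping are hypothetical illustration, not structures from the draft.

```python
# Hypothetical sketch of the CE-attribution rule for zero-length CID peers:
# with a shared packet number space, a CE counter increase in an ACK frame
# cannot be pinned to one path, so every path that sent an acknowledged
# packet is treated as congested.

class Path:
    def __init__(self, name):
        self.name = name
        self.congestion_signalled = False

def handle_ce_increase(acked_packet_numbers, packet_to_path):
    """Apply a CE counter increase from an ACK frame: signal congestion on
    all paths that sent any of the acknowledged packets."""
    affected = {packet_to_path[pn] for pn in acked_packet_numbers
                if pn in packet_to_path}
    for path in affected:
        path.congestion_signalled = True
    return affected

# Packets 1 and 3 were sent on path A, packet 2 on path B; an ACK covering
# all three reports a CE increase, so both paths receive the signal.
a, b = Path("A"), Path("B")
affected = handle_ce_increase([1, 2, 3], {1: a, 2: b, 3: a})
```

This over-signalling is the conservative price of the shared number space; the alternative mentioned in the text is simply to send Not-ECN-capable packets.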
{"id": "q-en-multipath-b626e5afe475cf20c3f314ade864d8a56b9173526d859beee71904b5ceea66b1", "old_text": "threshold, or the quality of RTT or loss rate is becoming worse) and wants to close the initial path. In Figure fig-example-path-close the server's 1-RTT packets use DCID C1, which has a sequence number of 1, for the first path; the client's 1-RTT packets use DCID S2, which has a sequence number of 2. For the second path, the server's 1-RTT packets use DCID C2, which has a sequence number of 2; the client's 1-RTT packets use CID S3, which has a sequence number of 3. Note that two paths use different packet number space. The client initiates the path closure for the path with ID 1 by sending a packet with an PATH_ABANDON frame. When the server received the PATH_ABANDON frame, it also sends an PATH_ABANDON frame in the next packet. Afterwards the connection IDs in both directions can be retired using the RETIRE_CONNECTION_ID frame. 11.", "comments": "This PR is to resolve issue . The description of the path-closing figure is rephrased for better clarity. The path_id_type field in the PATH_ABANDON frame is also added in the figure to give an idea of how the type is used. Moreover, one case for single PN is added (where the client receives zero-length CID while the server receives long CID). According to previous discussion, in the single PN case, only the client needs to send out a PATH_ABANDON. The server SHOULD stop sending new data on the path indicated by the PATH_ABANDON frame after receiving it. However, the client may want to repeat the PATH_ABANDON frame if it sees the server continuing to send data. It is optional for the server to respond with a PATH_ABANDON after it receives a PATH_ABANDON frame from the client.\nThere is one part I am afraid people might feel a little bit hard to follow: the three types of path identifiers in the PATH_STATUS frame. 
Those three types of path identifiers were introduced in order to be compatible with zero-length CID use cases, so I think we might need two additional example figures (like figure 3 in the current draft), one for (non-zero-length CID, zero-length CID) and the other one for (zero-length CID & zero-length CID). Do you think that is a good idea? I can work on those figures if it doesn't complicate the flow of the draft.\nThis might be indeed useful in the \"Example\" section.\nYes, please create a PR!\nNAME I am editing the path closing figures with zero-length CID and single PN. Is it similar to what you have in mind? {: #fig-example-path-close2 title=\"Example of closing a path when the server chooses to use zero-length CID\"} {: #fig-example-path-close3 title=\"Example of closing a path when both the client and server choose to use zero-length CIDs.\"}\nIn the case of single number space, we use ACK, not ACK_MP. But in fact you do not need to draw the ACKs there; there is no special processing of ACKs required. Also, there is no need to wait for an idle timeout, which typically is 10 to 30 seconds. That's too large. Do we really need to send \"abandon path\" in both directions? If I think about it, the real concern is that the client will send a PATH_ABANDON message, but some time later, due to differences in delay, etc., the server receives data on the same 4-tuple. How does the server know whether it should create a new path or not? In the single space case, the logical test may be to check the sequence numbers of the packet. If the sequence number of the new packet is greater than that of the path_abandon, then the client is reactivating a path that was abandoned previously. Maybe the client connected again to the same network, after some disconnected interval. The server can start sending on the reactivated path. If the packet number is lower than that of the path abandon, then the server should not reactivate the path. 
In the multiple space case, if the CID is defined, there is no ambiguity.\nACK works for me. I think we might also want to add a sentence in the Sec. of \"one packet number space\" explicitly specifying that \"for single PN, use ACK instead of ACK_MP\", and change some text in the ACK_MP frame section that is related to zero-length CID (the current draft says \"if the endpoint receives 1-RTT packets with 0-length Connection ID, it SHOULD use Packet Number Space Identifier 0 in ACK_MP frames\"). I think in single PN, sending \"abandon path\" in both directions is helpful. When the client sends \"PATH_ABANDON\", it promises not to send non-probing packets in the current activity cycle. It could send probing packets to reactivate the path a moment later if the client thinks the network environment has changed. After sending out \"PATH_ABANDON\", the client should still be able to receive packets from the server for a while on that path because there may still be the server's in-flight packets on that path. Only when the client sees the server's \"PATH_ABANDON\" does it know that no more data from the server is to be expected except for some out-of-order packets. So after receiving \"PATH_ABANDON\", the endpoint can start preparing to free memory and clear path stats allocated for that path. The time interval from seeing \"PATH_ABANDON\" to actually releasing relevant resources is going to be an important parameter, and yes, 10-30 seconds is too long. What would be a more appropriate number for this interval? Does it imply that we need to track the packet number of the PATH_ABANDON for a path that is closed? When a path is in a closed state, the server may reject any newly received packets unless it is a probing packet (back to our path initialization process). In this way, it should be OK to forget any info related to that path in the last activity cycle.\nIn most similar situations, RFC 9000 uses a delay of 3xRTO. 
Note that in the \"simple\" case, the client can receive packets from the server on any tuple. There are no resources allocated per path for receiving packets. There will be local resources allocated for sending packets, congestion context for example. But none for receiving: all packets can be decrypted and processed. If the client knows that it will not send, it can discard the resources allocated for sending without risking a side effect.\nI changed the timeout to 3xRTO. As the problem is similar to FIN-ACK&FIN-ACK when TCP closes, I kept the ACK in the graph (because the path_id_type=2 refers to the path where the packet is traveling, we want to make sure it is acked before we close the path). Regarding the resources, I think if we keep a unified receive record, which is not per-path, yes, we can discard the sending resources. But if we have some path stats (e.g. how many packets received in total in this current activity cycle), we need to keep these data structures until we are sure that the peer is not going to send more packets on that path. Indeed, it depends on the implementation, but I feel it is generally helpful to have PATH_ABANDON sent in both directions. What do you think? {: #fig-example-path-close2 title=\"Example of closing a path when the server chooses to use zero-length CID\"} {: #fig-example-path-close3 title=\"Example of closing a path when both client and server choose to use zero-length CIDs.\"}\nI submitted PR . Let's first have something that we can start modifying.\naddressed by PR\nI think there is still a mismatch between the proposed examples and the definition of the path_id_type in Section 11.1 (definition of PATH_ABANDON frame).", "new_text": "threshold, or the quality of RTT or loss rate is becoming worse) and wants to close the initial path. fig-example-path-close1 illustrates an example of path closing when both the client and the server use non-zero-length CIDs. 
For the first path, the server's 1-RTT packets use DCID C1, which has a sequence number of 1; the client's 1-RTT packets use DCID S2, which has a sequence number of 2. For the second path, the server's 1-RTT packets use DCID C2, which has a sequence number of 2; the client's 1-RTT packets use DCID S3, which has a sequence number of 3. Note that the paths use different packet number spaces. In this case, the client is going to close the first path. It identifies the path by the sequence number of the received packet's DCID over that path (path identifier type 0x00), hence using the path_id 1. Optionally, the server confirms the path closure by sending a PATH_ABANDON frame using the sequence number of the received packet's DCID over that path (path identifier type 0x00) as path identifier, which corresponds to the path_id 2. Both the client and the server can close the path after receiving the RETIRE_CONNECTION_ID frame for that path. fig-example-path-close2 illustrates an example of path closing when the client chooses to receive zero-length CIDs while the server chooses to receive non-zero-length CIDs. Because there is a zero-length CID in one direction, a single packet number space is used. For the first path, the client's 1-RTT packets use DCID S2, which has a sequence number of 2. For the second path, the client's 1-RTT packets use DCID S3, which has a sequence number of 3. Again, in this case, the client is going to close the first path. Because the client now receives zero-length CID packets, it needs to use path identifier type 0x01, which identifies a path by the DCID sequence number of the packets it sends over that path, and hence, it uses a path_id 2 in its PATH_ABANDON frame. The server SHOULD stop sending new data on the path indicated by the PATH_ABANDON frame after receiving it. However, the client may want to repeat the PATH_ABANDON frame if it sees the server continuing to send data. 
When the client's PATH_ABANDON frame is acknowledged, it sends out a RETIRE_CONNECTION_ID frame for the CID used on the first path. The server can readily close the first path when it receives the RETIRE_CONNECTION_ID frame from the client. However, since the client will not receive a RETIRE_CONNECTION_ID frame, after sending out the RETIRE_CONNECTION_ID frame, the client waits for 3 RTO before closing the path. 11."}
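The 3-RTO wait described above (the zero-length-CID client receives no RETIRE_CONNECTION_ID back, so it keeps the path state for a bounded time) can be sketched as a simple timer computation. The PTO-style formula and the parameter names are assumptions borrowed from RFC 9002-style loss recovery; the record only says "3 RTO".

```python
# Hypothetical sketch: a zero-length-CID client sends RETIRE_CONNECTION_ID
# but will not receive one back, so it keeps per-path state for 3 RTOs
# before freeing it. The RTO is computed here like QUIC's PTO (an
# assumption for illustration).

def path_close_time(now, smoothed_rtt, rtt_var, max_ack_delay):
    """Return the earliest time (seconds) at which the abandoned path's
    state may be released, i.e. now + 3 * RTO."""
    rto = smoothed_rtt + max(4 * rtt_var, 0.001) + max_ack_delay
    return now + 3 * rto
```

With smoothed_rtt = 100 ms, rtt_var = 10 ms and max_ack_delay = 25 ms, the path state is kept for 3 x 165 ms = 495 ms after the RETIRE_CONNECTION_ID frame is sent, far less than a typical 10-30 s idle timeout.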
{"id": "q-en-multipath-2bd6800311ef13ef411fd64953778859f4f44e724ab99673d680ed426008162b", "old_text": "When the multipath option is negotiated, clients that want to use an additional path MUST first initiate the Address Validation procedure with PATH_CHALLENGE and PATH_RESPONSE frames described in Section 8 of QUIC-TRANSPORT. After receiving packets from the client on the new paths, the servers MAY in turn attempt to validate these paths using the same mechanisms. If validation succeed, the client can send non-probing, 1-RTT packets on the new paths. In contrast with the specification in Section 9 of", "comments": "Fix issue\nIn section 3.1, we wrote When the multipath option is negotiated, clients that want to use an additional path MUST first initiate the Address Validation procedure with PATH_CHALLENGE and PATH_RESPONSE frames described in Section 8 of QUIC-TRANSPORT. After receiving packets from the client on the new paths, the servers MAY in turn attempt to validate these paths using the same mechanisms. With multipath, we cannot simply write that the server MAY validate the new path. There is a risk of a malicious client that operates as follows: the client creates a path from its own address, A, which is validated by the server. The client starts a long download from the server, then creates a second path to the server using a spoofed address, B, and starts address validation for B without caring about the response from the server. The client stops acknowledging packets over the first path to force the server to move packets to the second one, and then opportunistically acknowledges packets that should have been sent by the server over that second path. (SPNS would probably make such an attack simpler than MPNS.) We probably need to start working on the security considerations section of the draft.\nThis is already covered in the QUIC threat model. The server is supposed to validate new paths by sending a \"path challenge\" on that path, then waiting for a challenge response. 
If the client spoofs a path, it will never receive the challenge, and thus the server will never receive the response and will not validate the path. See section 8.2 of RFC 9000, path validation. Security section 21.1.3. Connection Migration states that \"Path validation () establishes that a peer is both willing and able to receive packets sent on a particular path.\"\nSo yes, the current text should be \"the servers MUST in turn attempt to validate these paths using the same mechanisms, as specified in {{ Section 8.2 of RFC9000 }}.\"\nNo, this should not be a MUST. It is true that the path MUST be validated before it can be used. However, just because the server receives a new packet on a path, that doesn't automatically mean that the server has to start path validation. It could e.g. also decide to simply discard that packet.\nActually, this is also not right. This is what RFC9000 says (section 9): So I guess we should maybe not use normative language here but refer to section 9 in RFC9000 instead...?\nJust submitted PR for this issue.", "new_text": "When the multipath option is negotiated, clients that want to use an additional path MUST first initiate the Address Validation procedure with PATH_CHALLENGE and PATH_RESPONSE frames described in Section 8.2 of QUIC-TRANSPORT. After receiving packets from the client on a new path, if the server decides to use the new path, the server MUST perform path validation (Section 8.2 of QUIC-TRANSPORT) unless it has previously validated that address. If validation succeeds, the client can send non-probing, 1-RTT packets on the new paths. In contrast with the specification in Section 9 of"}
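The server-side rule above (validate a new path with PATH_CHALLENGE/PATH_RESPONSE unless the address was already validated) might be tracked with bookkeeping like the following sketch; the `PathValidator` class and its method names are hypothetical, not from the draft.

```python
# Hypothetical server-side bookkeeping for path validation: a new client
# address triggers a PATH_CHALLENGE unless it was validated earlier in the
# connection. A spoofed address never sees the challenge, so no matching
# response arrives and the path stays unvalidated.

import secrets

class PathValidator:
    def __init__(self):
        self.validated = set()   # addresses already validated
        self.pending = {}        # address -> outstanding challenge data

    def on_new_path(self, addr):
        """Return an 8-byte PATH_CHALLENGE payload for an unvalidated
        address, or None if no (further) challenge is needed."""
        if addr in self.validated or addr in self.pending:
            return None
        data = secrets.token_bytes(8)
        self.pending[addr] = data
        return data

    def on_path_response(self, addr, data):
        """The path is validated only if the echoed data matches the
        unpredictable challenge payload."""
        if self.pending.get(addr) == data:
            self.pending.pop(addr)
            self.validated.add(addr)
            return True
        return False
```

This mirrors the threat-model point in the discussion: the spoofed-address attack fails at `on_path_response`, because the attacker cannot echo a challenge it never received.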
{"id": "q-en-multipath-a5c441deed2da8d23cb0197fe0ca7158f506db44715d8e8518c5a44fde67abff", "old_text": "provides several general-purpose packet schedulers depending on the application goals. 8. Simultaneous use of multiple paths enables different retransmission", "comments": "I believe we can merge this now as it is. If we want to add more guidance later we should open a separate issue and another PR.\nThis issue came up during the discussion of issue . We say that ACK or ACK_MP frames can be sent over any path; however, maybe we should discuss some guidance and trade-offs. E.g. sending on the same path as the packets received also for RTT measurements; using the shortest delay path might have some performance benefits...?\nI agree. We should add some guidance here. We did some experiments regarding this issue. Our finding is that returning ACKs on the shortest path is helpful, but the improvement is only significant when the path RTT ratio is larger than 4:1.", "new_text": "provides several general-purpose packet schedulers depending on the application goals. Note that the receiver could use a different scheduling strategy to send ACK(_MP) frames. The recommended default behaviour consists in sending ACK(_MP) frames on the path whose packets they acknowledge. Other scheduling strategies, such as sending ACK(_MP) frames on the lowest latency path, might be considered, but they could impact the sender with side effects on, e.g., the RTT estimation or the congestion control scheme. When adopting such asymmetrical acknowledgment scheduling, the receiver should at least ensure that the sender negotiated a one-way delay calculation mechanism (e.g., QUIC-Timestamp). 8. Simultaneous use of multiple paths enables different retransmission"}
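The ACK-scheduling guidance in the record above (default to acknowledging on the acknowledged path; only divert ACKs to a faster path if a one-way delay mechanism was negotiated) can be sketched as a tiny helper. The function name and parameters are illustrative assumptions.

```python
# Hypothetical sketch of ACK(_MP) path scheduling: by default acknowledge
# on the path whose packets are being acknowledged; only divert ACKs to a
# lower-latency path if a one-way delay mechanism (e.g. timestamps) was
# negotiated, since asymmetric ACK paths otherwise skew RTT estimation.

def ack_path(received_on, prefer_lowest_latency=False,
             lowest_latency_path=None, one_way_delay_negotiated=False):
    if (prefer_lowest_latency and one_way_delay_negotiated
            and lowest_latency_path is not None):
        return lowest_latency_path
    return received_on
```

Per the experiment mentioned in the comments, diverting ACKs to the shortest path only pays off noticeably when the path RTT ratio exceeds roughly 4:1, so the conservative default is usually adequate.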
{"id": "q-en-multipath-bc6d58ca5ba771a100e9a6761f1d0f9f18812b6485a82b97cb8069869d5dd61d", "old_text": "If endpoint receives an unexpected value for the transport parameter \"enable_multipath\", it MUST treat this as a connection error of type MP_CONNECTION_ERROR and close the connection. This extension does not change the definition of any transport parameter defined in Section 18.2. of QUIC-TRANSPORT.", "comments": "Fix issue\nIn section 3, we have However, the error code does not exist (anymore), and I'm not sure we need such a specific error code anyway. Why not raise a TRANSPORT_PARAMETER_ERROR in such a case?\nIt's reasonable to use TRANSPORT_PARAMETER_ERROR. Submitted PR for this issue.", "new_text": "If an endpoint receives an unexpected value for the transport parameter \"enable_multipath\", it MUST treat this as a connection error of type TRANSPORT_PARAMETER_ERROR (specified in Section 20.1 of QUIC-TRANSPORT) and close the connection. This extension does not change the definition of any transport parameter defined in Section 18.2. of QUIC-TRANSPORT."}
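The agreed behaviour in the record above (close the connection with TRANSPORT_PARAMETER_ERROR on an unexpected enable_multipath value) might look like the following sketch. The accepted value set is an assumption for illustration; the error code 0x08 is the one assigned in Section 20.1 of RFC 9000.

```python
# Hypothetical transport parameter check: an unexpected enable_multipath
# value is a connection error of type TRANSPORT_PARAMETER_ERROR (0x08,
# Section 20.1 of RFC 9000). The accepted value set below is an assumption.

TRANSPORT_PARAMETER_ERROR = 0x08

class QuicConnectionError(Exception):
    """Named to avoid shadowing Python's builtin ConnectionError."""
    def __init__(self, code, reason):
        super().__init__(reason)
        self.code = code

def check_enable_multipath(value, accepted=frozenset({0, 1, 2, 3})):
    if value not in accepted:
        raise QuicConnectionError(TRANSPORT_PARAMETER_ERROR,
                                  "unexpected enable_multipath value")
    return value
```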
{"id": "q-en-multipath-85775c2c1bb65462a4df1b4e1b168e0edfcb8440ee22d17cc0ff3f52275ec872", "old_text": "7.1. Senders MUST manage per-path congestion status, and MUST NOT send more data on a given path than congestion control on that path allows. This is already a requirement of QUIC-TRANSPORT. When a Multipath QUIC connection uses two or more paths, there is no guarantee that these paths are fully disjoint. When two (or more", "comments": "as this is already stated normatively in RFC9000.\nCurrently the draft says: However, we should not use normative language in this draft but just point to RFC9000 instead as things are defined normatively there already. Btw. this reference should point to section 13.3 (\"Upon detecting losses, a sender MUST take appropriate congestion control action.\") and 9.4 (\"Packets sent on the old path MUST NOT contribute to congestion control or RTT estimation for the new path.\") of RFC9000 specifically and maybe also to RFC9002 directly.\nLGTM", "new_text": "7.1. When the QUIC multipath extension is used, senders manage per-path congestion status as required in Section 9.4 of QUIC-TRANSPORT. However, in QUIC-TRANSPORT only one active path is assumed, and as such the requirement is to reset the congestion control status on path migration. With the multipath extension, multiple paths can be used simultaneously; therefore, separate congestion control state is maintained for each path. This means a sender is not allowed to send more data on a given path than congestion control for that path indicates. When a Multipath QUIC connection uses two or more paths, there is no guarantee that these paths are fully disjoint. When two (or more"}
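The per-path congestion state the record above describes can be sketched minimally: each path keeps its own window, and a sender never puts more bytes in flight on a path than that path's controller allows. The class and its byte accounting are illustrative, not a real congestion controller.

```python
# Hypothetical per-path congestion state: with the multipath extension each
# path keeps its own controller, and a sender never puts more bytes in
# flight on a path than that path's window allows. Windows on different
# paths are independent.

class PathCongestion:
    def __init__(self, cwnd_bytes):
        self.cwnd = cwnd_bytes
        self.in_flight = 0

    def can_send(self, size):
        return self.in_flight + size <= self.cwnd

    def on_sent(self, size):
        if not self.can_send(size):
            raise RuntimeError("blocked by this path's congestion window")
        self.in_flight += size

    def on_acked(self, size):
        self.in_flight -= size

# Two paths with independent windows: filling one does not block the other.
p1, p2 = PathCongestion(2400), PathCongestion(12000)
p1.on_sent(1200)
p1.on_sent(1200)
```

In a full implementation each `PathCongestion` would also grow and shrink its window (e.g. per RFC 9002), which is omitted here.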
{"id": "q-en-multipath-86ee771a8ecf97de1663331f8b9b92b883023bc0588ac8c3999df849b4a72935", "old_text": "This proposal supports the negotiation of either the use of one packet number space for all paths or the use of separate packet number spaces per path. While separate packet number spaces allow for more efficient ACK encoding, especially when paths have highly different latencies, this approach requires the use of a connection ID. Therefore use of a single number space can be beneficial in highly constrained networks that do not benefit from exposing the connection ID in the header. While both approaches are supported by the specification in this version of the document, the intention for the final publication of a multipath extension for QUIC is to choose one option in order to avoid incompatibility. More evaluation and implementation experience is needed to select one approach before final publication. Some discussion about pros and cons can be found here: https://github.com/mirjak/draft-lmbdhk-quic-", "comments": "Not sure to understand why the last part of the sentence is an issue. The use of zero-length connection IDs is problematic anyway in setups with NATs, for example.\nI agree that in many deployments you want at least one CID from the client to server. Multiple packet numbers as currently defined however requires CIDs in both direction; there is separate issue to discuss that. The next sentence also further explains that there might be other deployments though, e.g. in constraint networks, that might also benefit from saving the bits of the CID. This one sentence is of course only a very brief summary of the pros and cons of the different approach and more discussion is needed in the group. So I don't expect to have this part in the draft like this when we finally publish it as RFC. Is there anything in the issue that we need to address now? Or can we close this issue?\nThanks, Mirja. 
That's what I expected, but I do still think more elaboration is needed for this text to be useful. I would suggest to make this modification and leave the discussion of pros/cons to be adequately documented in Section 7: OLD: This proposal supports the negotiation of either the use of one packet number space for all paths or the use of separate packet number spaces per path. While separate packet number spaces allow for more efficient ACK encoding, especially when paths have highly different latencies, this approach requires the use of a connection ID. Therefore use of a single number space can be beneficial in highly constrained networks that do not benefit from exposing the connection ID in the header. While both approaches are supported by the specification in this version of the document, the intention for the final publication of a multipath extension for QUIC is to choose one option in order to avoid incompatibility. More evaluation and implementation experience is needed to select one approach before final publication. Some discussion about pros and cons can be found here: URL URL NEW: This proposal supports the negotiation of either the use of one packet number space for all paths or the use of separate packet number spaces per path. While both approaches are supported by the specification in this version of the document, the intention for the final publication of a multipath extension for QUIC is to choose one option in order to avoid incompatibility. More evaluation and implementation experience is needed to select one approach before final publication. Some discussion about pros and cons can be found here: URL URL\nYou are suggesting to strike out the sentences that try to summarize the differences, \"While separate packet number spaces allow for more efficient ACK encoding, especially when paths have highly different latencies, this approach requires the use of a connectionID. 
Therefore use of a single number space can be beneficial in highly constrained networks that do not benefit from exposing the connection ID in the header.\" I think that's a good idea. The sentences are a short summary of the arguments presented in the powerpoint, which are not limited to \"highly constrained networks\". For example, the Chrome browser, by default, uses NULL CID. I may be placing words in the mouth of Google developers, but I believe their goal is to reduce the overhead of the QUIC header compared to TCP/TLS. This is related to a general quest for performance, not to network constraints.", "new_text": "This proposal supports the negotiation of either the use of one packet number space for all paths or the use of separate packet number spaces per path. While both approaches are supported by the specification in this version of the document, the intention for the final publication of a multipath extension for QUIC is to choose one option in order to avoid incompatibility. More evaluation and implementation experience is needed to select one approach before final publication. Some discussion about pros and cons can be found here: https://github.com/mirjak/draft-lmbdhk-quic-"}
{"id": "q-en-multipath-86ee771a8ecf97de1663331f8b9b92b883023bc0588ac8c3999df849b4a72935", "old_text": "multipath negotiation is ambiguous, they MUST be interpreted as acknowledging packets sent on path 0. 7.1. If the multipath option is negotiated to use one packet number space", "comments": "Not sure to understand why the last part of the sentence is an issue. The use of zero-length connection IDs is problematic anyway in setups with NATs, for example.\nI agree that in many deployments you want at least one CID from the client to server. Multiple packet numbers as currently defined however requires CIDs in both direction; there is separate issue to discuss that. The next sentence also further explains that there might be other deployments though, e.g. in constraint networks, that might also benefit from saving the bits of the CID. This one sentence is of course only a very brief summary of the pros and cons of the different approach and more discussion is needed in the group. So I don't expect to have this part in the draft like this when we finally publish it as RFC. Is there anything in the issue that we need to address now? Or can we close this issue?\nThanks, Mirja. That's what I expected, but I do still think more elaboration is needed for this text to be useful. I would suggest to make this modification and leave the discussion of pros/cons to be adequately documented in Section 7: OLD: This proposal supports the negotiation of either the use of one packet number space for all paths or the use of separate packet number spaces per path. While separate packet number spaces allow for more efficient ACK encoding, especially when paths have highly different latencies, this approach requires the use of a connection ID. Therefore use of a single number space can be beneficial in highly constrained networks that do not benefit from exposing the connection ID in the header. 
While both approaches are supported by the specification in this version of the document, the intention for the final publication of a multipath extension for QUIC is to choose one option in order to avoid incompatibility. More evaluation and implementation experience is needed to select one approach before final publication. Some discussion about pros and cons can be found here: URL URL NEW: This proposal supports the negotiation of either the use of one packet number space for all paths or the use of separate packet number spaces per path. While both approaches are supported by the specification in this version of the document, the intention for the final publication of a multipath extension for QUIC is to choose one option in order to avoid incompatibility. More evaluation and implementation experience is needed to select one approach before final publication. Some discussion about pros and cons can be found here: URL URL\nYou are suggesting to strike out the sentences that try to summarize the differences, \"While separate packet number spaces allow for more efficient ACK encoding, especially when paths have highly different latencies, this approach requires the use of a connectionID. Therefore use of a single number space can be beneficial in highly constrained networks that do not benefit from exposing the connection ID in the header.\" I think that's a good idea. The sentences are a short summary of the arguments presented in the powerpoint, which are not limited to \"highly constrained networks\". For example, the Chrome browser, by default, uses NULL CID. I may be placing words in the mouth of Google developers, but I believe their goal is to reduce the overhead of the QUIC header compared to TCP/TLS. This is related to a general quest for performance, not to network constraints.", "new_text": "multipath negotiation is ambiguous, they MUST be interpreted as acknowledging packets sent on path 0. 
Endpoints negotiate the use of one packet number space for all paths or separate packet number spaces per path during the connection handshake (see nego). While separate packet number spaces allow for more efficient ACK encoding, especially when paths have highly different latencies, this approach requires the use of a connection ID. Therefore, use of a single number space can be beneficial when endpoints use zero-length connection IDs to reduce overhead. 7.1. If the multipath option is negotiated to use one packet number space"}
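One reading of the trade-off discussed in this record can be sketched as a small helper: per-path 1-RTT packet number spaces (and ACK_MP frames, with the space keyed by the DCID sequence number) require non-zero-length CIDs, otherwise a single space with plain ACK frames is used. This mapping is an interpretation of the draft discussion for illustration, not normative text.

```python
# Hypothetical helper reflecting the negotiation trade-off: per-path 1-RTT
# packet number spaces need non-zero-length CIDs in both directions (the
# space is keyed by the DCID sequence number); with a zero-length CID a
# single shared space and plain ACK frames are used. Illustration only.

def pn_space(local_cid_len, peer_cid_len, dcid_seq):
    """Return (ack_frame_type, pn_space_id) for a 1-RTT packet."""
    if local_cid_len > 0 and peer_cid_len > 0:
        return ("ACK_MP", dcid_seq)   # one space per DCID sequence number
    return ("ACK", 0)                 # shared space, identifier 0
```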
{"id": "q-en-multipath-9516913c9bf0a5ce1091b37af29bb82786ff0eaca6a5e9f77be3ffd7826091bd", "old_text": "this limit even applies when no connection ID is exposed in the QUIC header. 3. After completing the handshake, endpoints have agreed to enable", "comments": "NAME you wrote: That sounds like you want to restrict the multipath extension to only one server address (but potentially different paths from different client addresses)...? I think this restriction is unnecessary. Yes, we don't specify a way to learn server addresses, but that doesn't mean that the client doesn't know (e.g. based on higher layer information) multiple addresses of the server. Why should we restrict that?\nHowever, can we maybe merge this PR as it is for now and you open a separate issue for your additional point NAME ?\nMirja, That was my understanding of the sentences in the PR. Maybe I got it wrong. I think that we should discuss the role of preferred address in section 3 where we explain how paths can be opened.\nFine, let's do it this way.\nRFC9000 defines a list of transport parameters (0x01 through 0x10) [QUICWG]. These transport parameters apply to all the paths that compose a Multipath QUIC connection. We should probably confirm this in the mpquic draft to avoid any misinterpretation (e.g. for max_idle_timeout or ack delays). There are two transport parameters that could warrant some discussion: disable_active_migration: This transport parameter is incompatible with enable_multipath. An endpoint that receives both this transport parameter and a non-zero enable_multipath parameter MUST treat this as a connection error of type MP_CONNECTION_ERROR and close the connection. preferred_address: RFC9000, section 9.2.1 contains the following: Once the handshake is confirmed, the client SHOULD select one of the two addresses provided by the server and initiate path validation (see Section 8.2). 
A client constructs packets using any previously unused active connection ID, taken from either the preferred_address transport parameter or a NEW_CONNECTION_ID frame. As soon as path validation succeeds, the client SHOULD begin sending all future packets to the new server address using the new connection ID and discontinue use of the old server address. If path validation fails, the client MUST continue sending all future packets to the server's original IP address. For multipath, do we want to be stronger than SHOULD select one? Can a client create new paths towards the initial address or only to one of the preferred addresses? Should the client use both addresses (IPv4 and IPv6) if non-empty?\nGood point about \"disable_active_migration\". I would read it as also disabling multipath. We do not need to say that \"transport parameters apply to all the paths that compose a Multipath QUIC connection\". We are already inheriting that property from compatibility with path migration in RFC 9000. Multipath support is an extension to QUIC v1, not a new protocol, so we do not need to specify all the defaults. Same goes for the preferred address -- meaning does not change just because the nodes negotiate multipath. I would avoid debating \"which server addresses the client can try\" in the base multipath document. If we want the server to signal plausible addresses to the client, let's have a specific document for that.\nI would maybe add a sentence that this draft does not change the meaning of preferred_address and whether that address could also be used for multipath is out of scope for this extension.\nRegarding disable_active_migration: this cannot be a connection error, as you can also send a preferred address, and after migration to the preferred address you can still use migration or multipath (if negotiated). I think the definition in RFC9000 is still valid; it means you can't send any packets from another address than the one used in the handshake. 
Which effectively means disabling multipath as well, as Christian noted (if not moved to another preferred address). I guess we could explicitly note that this draft does not change the definition of disableactivemigration. For your reference this is what RFC9000 says:\nI created a PR in . Is this sufficient to address this issue or do we need to say more? NAME ?\nThe proposed PR sounds good to me, with the slight addition I mentioned.", "new_text": "this limit even applies when no connection ID is exposed in the QUIC header. This extension does not change the definition of any transport parameter defined in Section 18.2. of QUIC-TRANSPORT. In line with the definition in QUIC-TRANSPORT, disable_active_migration also disables multipath support, except \"after a client has acted on a preferred_address transport parameter\" Section 18.2. of QUIC- TRANSPORT. Further, it is out of scope for this document to specify if the old address can also be used for multipath after a client has migrated to the address provided in the preferred_address transport parameter. However, it SHOULD NOT be assumed that it is possible to use both addresses simultaneously without further confirmation from the other host. 3. After completing the handshake, endpoints have agreed to enable"}
{"id": "q-en-oblivious-http-95a72363743c12351f6cdae762e12bfa22cf49bf02e3ff950e03891887158fdc", "old_text": "A relay MAY add information to requests if the client is aware of the nature of the information that could be added. The client does not need to be aware of the exact value added for each request, but needs to know the range of possible values the relay might use. It is important to note that information added by the relay can reduce the size of the anonymity set of clients at a gateway. Moreover, relays MAY apply differential treatment to clients that", "comments": "This seems fine to me, but doesn't seem to include URL\nNAME after trying to work that text in it became clear (to me) that a better outcome was modify the existing text we have about anonymity set reduction. So I went with that approach rather than open the can of consistency worms that NAME alluded to.\nLet's say HPKE public key A was valid from 2022-07-01 to 2022-07-02. The client encapsulates a request X using pubkey A, but buffers it because of network issues, in order to retry later. Now it rotates and public key B is valid as of 2022-07-03. X encapsulated using pubkey A would not be decryptable by pubkey B. So if you receive X on 2022-07-03, then you know that X was generated between 2022-07-01 and 2022-07-02.\nI'm not inclined to add this. \"Don't do that\" seems pretty obvious to me, especially when we recommend that requests include a Date.\nWhile this advice seems fine, I agree that it's not really necessary. It also brings up more questions to me \u2014 what stops any layer below the OHTTP layer or an intermediary from arbitrarily buffering messages? Maybe just have a note that the key config that's used as well as explicit date markings can leak the time that the request was generated?\nThis seems like a reasonable way to deal with the problem. Liveness information can leak in a number of places, including through noting which public key was used. It might also leak via the date. 
Maybe this can become part of the differential treatment section?\nThat sounds good to me\nI think that while this text is true, I believe that it is also unnecessary. Discussions of key liveness are dealt with in the key-consistency draft, so this is really about requests that are transmitted well after they are constructed. If we start to document that, there are lots of other things that start to come up.\nI agree, but the content I'm proposing here relates to how liveness affects differential treatment. We've already noted a few key ways in which this can happen, so it seems pretty harmless to me to include one more. I don't think we need to say anything more about how to deal with liveness or how it pertains to consistency. I'll propose some text to make this more clear. No more than one sentence.", "new_text": "A relay MAY add information to requests if the client is aware of the nature of the information that could be added. The client does not need to be aware of the exact value added for each request, but needs to know the range of possible values the relay might use. Importantly, information added by the relay - beyond what is already revealed through encapsulated requests from clients - can reduce the size of the anonymity set of clients at a gateway. Moreover, relays MAY apply differential treatment to clients that"}
{"id": "q-en-oblivious-http-95a72363743c12351f6cdae762e12bfa22cf49bf02e3ff950e03891887158fdc", "old_text": "request that is made to the target. This means that only the gateway or target are in a position to identify abuse. A gateway MAY send signals toward the relay to provide feedback about specific requests. A relay that acts on this feedback could - either inadvertently or by design - lead to clients being deanonymized. 7.2.2.", "comments": "This seems fine to me, but doesn't seem to include URL\nNAME after trying to work that text in it became clear (to me) that a better outcome was modify the existing text we have about anonymity set reduction. So I went with that approach rather than open the can of consistency worms that NAME alluded to.\nLet's say HPKE public key A was valid from 2022-07-01 to 2022-07-02. The client encapsulates a request X using pubkey A, but buffers it because of network issues, in order to retry later. Now it rotates and public key B is valid as of 2022-07-03. X encapsulated using pubkey A would not be decryptable by pubkey B. So if you receive X on 2022-07-03, then you know that X was generated between 2022-07-01 and 2022-07-02.\nI'm not inclined to add this. \"Don't do that\" seems pretty obvious to me, especially when we recommend that requests include a Date.\nWhile this advice seems fine, I agree that it's not really necessary. It also brings up more questions to me \u2014 what stops any layer below the OHTTP layer or an intermediary from arbitrarily buffering messages? Maybe just have a note that the key config that's used as well as explicit date markings can leak the time that the request was generated?\nThis seems like a reasonable way to deal with the problem. Liveness information can leak in a number of places, including through noting which public key was used. It might also leak via the date. 
Maybe this can become part of the differential treatment section?\nThat sounds good to me\nI think that while this text is true, I believe that it is also unnecessary. Discussions of key liveness are dealt with in the key-consistency draft, so this is really about requests that are transmitted well after they are constructed. If we start to document that, there are lots of other things that start to come up.\nI agree, but the content I'm proposing here relates to how liveness affects differential treatment. We've already noted a few key ways in which this can happen, so it seems pretty harmless to me to include one more. I don't think we need to say anything more about how to deal with liveness or how it pertains to consistency. I'll propose some text to make this more clear. No more than one sentence.", "new_text": "request that is made to the target. This means that only the gateway or target are in a position to identify abuse. A gateway MAY send signals toward the relay to provide feedback about specific requests. For example, a gateway could respond differently to requests it cannot decapsulate, as mentioned in errors. A relay that acts on this feedback could - either inadvertently or by design - lead to client deanonymization. 7.2.2."}
{"id": "q-en-oblivious-http-2fd5aa307b28b28e9738d0b9f7b5bfe8fcaad5f9ab3aeccb9835024ed7fec7bd", "old_text": "Oblivious Relay Resource also needs to observe the guidance in relay- responsibilities. An Oblivious Gateway Resource, if it receives any response from the Target Resource, sends a single 200 response containing the encapsulated response. Like the request from the client, this", "comments": "Section 5 details HTTP usage. It seems it's technically correct but the section closes with a sequence of paragraphs that jump back and forth between statements about how an Oblivious Gateway Resource treats requests or responses. Perhaps these could be consolidated to present requests first consistently. I.e. swap the final two paragraphs around", "new_text": "Oblivious Relay Resource also needs to observe the guidance in relay- responsibilities. An Oblivious Gateway Resource acts as a gateway for requests to the Target Resource (see HTTP). The one exception is that any information it might forward in a response MUST be encapsulated, unless it is responding to errors it detects before removing encapsulation of the request; see errors. An Oblivious Gateway Resource, if it receives any response from the Target Resource, sends a single 200 response containing the encapsulated response. Like the request from the client, this"}
{"id": "q-en-oblivious-http-2fd5aa307b28b28e9738d0b9f7b5bfe8fcaad5f9ab3aeccb9835024ed7fec7bd", "old_text": "information that does not reveal information about the encapsulated response. An Oblivious Gateway Resource acts as a gateway for requests to the Target Resource (see HTTP). The one exception is that any information it might forward in a response MUST be encapsulated, unless it is responding to errors it detects before removing encapsulation of the request; see errors. In order to achieve the privacy and security goals of the protocol an Oblivious Gateway Resource also needs to observe the guidance in server-responsibilities.", "comments": "Section 5 details HTTP usage. It seems it's technically correct but the section closes with a sequence of paragraphs that jump back and forth between statements about how an Oblivious Gateway Resource treats requests or responses. Perhaps these could be consolidated to present requests first consistently. I.e. swap the final two paragraphs around", "new_text": "information that does not reveal information about the encapsulated response. In order to achieve the privacy and security goals of the protocol an Oblivious Gateway Resource also needs to observe the guidance in server-responsibilities."}
{"id": "q-en-oblivious-http-f0788ff541648fd591d282c456b482b0a979d4e95072ff19451e3a4a9b22d26d", "old_text": "indicate that the client configuration used to construct the request is incorrect or out of date. 6. In this design, a wishes to make a request of a server that is", "comments": "We debated this at some length. This defines an optional-to-use signaling method for the gateway so that it can inform the client when the key configuration is not acceptable. Clients can use this as a prompt to refresh their configuration.\n(Originally posted by NAME in URL): Any lifetime information that comes with a key is helpful, but in practice is insufficient for clients to drive key updates. Key rotation may not occur on a fixed schedule and it may not occur synchronously (some gateway instances might not get the key at the right time. This means that clients cannot distinguish between failures due to configuration mismatch or request tampering (or whatever else might cause the gateway to reply without an encapsulated response). If the client can't distinguish then the client's recovery implementation is a lot more complicated. Does it always try to forcibly update its configuration? Does it implement some sort of backoff? Can a relay abuse the client's recovery implementation to somehow put the client into a state where it is stuck with an old configuration? It seems vastly simpler to just be clear that the configuration is wrong as a signal to the client that it's time to update. This was done in ODoH with a 401 response code. proposed a new problem statement, which I think is the right way to deal with this. Yes, there are potential privacy problems based on automated updates, but those depend on the client's consistency story and is therefore not something new a signal would introduce.\nThis looks great -- thanks!", "new_text": "indicate that the client configuration used to construct the request is incorrect or out of date. 5.3. 
The problem type PROBLEM of \"https://iana.org/assignments/http- problem-types#ohttp-key\" is defined. An MAY use this problem type in a response to indicate that an used an outdated or incorrect . fig-key-problem shows an example response in HTTP/1.1 format. As this response cannot be encrypted, it might not reach the . A cannot rely on the using this problem type. A might also be configured to disregard responses that are not encapsulated on the basis that they might be subject to observation or modification by an . A might manage the risk of an outdated using a heuristic approach whereby it periodically refreshes its if it receives a response with an error status code that has not been encapsulated. 6. In this design, a wishes to make a request of a server that is"}
{"id": "q-en-oblivious-http-f0788ff541648fd591d282c456b482b0a979d4e95072ff19451e3a4a9b22d26d", "old_text": "9. Please update the \"Media Types\" registry at https://www.iana.org/assignments/media-types [1] for the media types \"application/ohttp-keys\" (iana-keys), \"message/ohttp-req\" (iana-req), and \"message/ohttp-res\" (iana-res). 9.1. The \"application/ohttp-keys\" media type identifies a used by", "comments": "We debated this at some length. This defines an optional-to-use signaling method for the gateway so that it can inform the client when the key configuration is not acceptable. Clients can use this as a prompt to refresh their configuration.\n(Originally posted by NAME in URL): Any lifetime information that comes with a key is helpful, but in practice is insufficient for clients to drive key updates. Key rotation may not occur on a fixed schedule and it may not occur synchronously (some gateway instances might not get the key at the right time. This means that clients cannot distinguish between failures due to configuration mismatch or request tampering (or whatever else might cause the gateway to reply without an encapsulated response). If the client can't distinguish then the client's recovery implementation is a lot more complicated. Does it always try to forcibly update its configuration? Does it implement some sort of backoff? Can a relay abuse the client's recovery implementation to somehow put the client into a state where it is stuck with an old configuration? It seems vastly simpler to just be clear that the configuration is wrong as a signal to the client that it's time to update. This was done in ODoH with a 401 response code. proposed a new problem statement, which I think is the right way to deal with this. Yes, there are potential privacy problems based on automated updates, but those depend on the client's consistency story and is therefore not something new a signal would introduce.\nThis looks great -- thanks!", "new_text": "9. 
Please update the \"Media Types\" registry at https://iana.org/assignments/media-types [1] for the media types \"application/ohttp-keys\" (iana-keys), \"message/ohttp-req\" (iana-req), and \"message/ohttp-res\" (iana-res). Please update the \"HTTP Problem Types\" registry at https://iana.org/assignments/http-problem-types [2] for the types \"date\" (iana-problem-date) and \"ohttp-key\" (iana-problem-ohttp-key). 9.1. The \"application/ohttp-keys\" media type identifies a used by"}
{"id": "q-en-oblivious-http-f0788ff541648fd591d282c456b482b0a979d4e95072ff19451e3a4a9b22d26d", "old_text": "IANA are requested to create a new entry in the \"HTTP Problem Type\" registry established by PROBLEM. 10. References 10.1. URIs [1] https://www.iana.org/assignments/media-types Index", "comments": "We debated this at some length. This defines an optional-to-use signaling method for the gateway so that it can inform the client when the key configuration is not acceptable. Clients can use this as a prompt to refresh their configuration.\n(Originally posted by NAME in URL): Any lifetime information that comes with a key is helpful, but in practice is insufficient for clients to drive key updates. Key rotation may not occur on a fixed schedule and it may not occur synchronously (some gateway instances might not get the key at the right time. This means that clients cannot distinguish between failures due to configuration mismatch or request tampering (or whatever else might cause the gateway to reply without an encapsulated response). If the client can't distinguish then the client's recovery implementation is a lot more complicated. Does it always try to forcibly update its configuration? Does it implement some sort of backoff? Can a relay abuse the client's recovery implementation to somehow put the client into a state where it is stuck with an old configuration? It seems vastly simpler to just be clear that the configuration is wrong as a signal to the client that it's time to update. This was done in ODoH with a 401 response code. proposed a new problem statement, which I think is the right way to deal with this. Yes, there are potential privacy problems based on automated updates, but those depend on the client's consistency story and is therefore not something new a signal would introduce.\nThis looks great -- thanks!", "new_text": "IANA are requested to create a new entry in the \"HTTP Problem Type\" registry established by PROBLEM. 9.5. 
IANA are requested to create a new entry in the \"HTTP Problem Type\" registry established by PROBLEM. 10. References 10.1. URIs [1] https://iana.org/assignments/media-types [2] https://iana.org/assignments/http-problem-types Index"}
{"id": "q-en-oblivious-http-ba19c9ba9773c7d2771e651cd004bcd2c3328b5f241051c9a0043f49892c1a48", "old_text": "constructing and processing an . The Nenc parameter corresponding to the KEM used in HPKE can be found in HPKE. Nenc refers to the size of the encapsulated KEM shared secret, in bytes. 4.2.", "comments": "From NAME Can we (additionally or in replacement) point to the \"HPKE KEM Identifiers\" IANA registry created from this table instead? Same for this and \"HPKE AEAD Identifiers\" registry.", "new_text": "constructing and processing an . The Nenc parameter corresponding to the KEM used in HPKE can be found in HPKE or the HPKE KEM IANA registry [1]. Nenc refers to the size of the encapsulated KEM shared secret, in bytes. 4.2."}
{"id": "q-en-oblivious-http-ba19c9ba9773c7d2771e651cd004bcd2c3328b5f241051c9a0043f49892c1a48", "old_text": "constructing and processing an . The Nn and Nk values correspond to parameters of the AEAD used in HPKE, which is defined in HPKE. Nn and Nk refer to the size of the AEAD nonce and key respectively, in bytes. The nonce length is set to the larger of these two lengths, i.e., max(Nn, Nk). 4.3.", "comments": "From NAME Can we (additionally or in replacement) point to the \"HPKE KEM Identifiers\" IANA registry created from this table instead? Same for this and \"HPKE AEAD Identifiers\" registry.", "new_text": "constructing and processing an . The Nn and Nk values correspond to parameters of the AEAD used in HPKE, which is defined in HPKE or the HPKE AEAD IANA registry [2]. Nn and Nk refer to the size of the AEAD nonce and key respectively, in bytes. The nonce length is set to the larger of these two lengths, i.e., max(Nn, Nk). 4.3."}
{"id": "q-en-oblivious-http-ba19c9ba9773c7d2771e651cd004bcd2c3328b5f241051c9a0043f49892c1a48", "old_text": "9. Please update the \"Media Types\" registry at https://iana.org/assignments/media-types [1] for the media types \"application/ohttp-keys\" (iana-keys), \"message/ohttp-req\" (iana-req), and \"message/ohttp-res\" (iana-res). Please update the \"HTTP Problem Types\" registry at https://iana.org/assignments/http-problem-types [2] for the types \"date\" (iana-problem-date) and \"ohttp-key\" (iana-problem-ohttp-key). 9.1.", "comments": "From NAME Can we (additionally or in replacement) point to the \"HPKE KEM Identifiers\" IANA registry created from this table instead? Same for this and \"HPKE AEAD Identifiers\" registry.", "new_text": "9. Please update the \"Media Types\" registry at https://iana.org/assignments/media-types [3] for the media types \"application/ohttp-keys\" (iana-keys), \"message/ohttp-req\" (iana-req), and \"message/ohttp-res\" (iana-res). Please update the \"HTTP Problem Types\" registry at https://iana.org/assignments/http-problem-types [4] for the types \"date\" (iana-problem-date) and \"ohttp-key\" (iana-problem-ohttp-key). 9.1."}
{"id": "q-en-oblivious-http-ba19c9ba9773c7d2771e651cd004bcd2c3328b5f241051c9a0043f49892c1a48", "old_text": "10.1. URIs [1] https://iana.org/assignments/media-types [2] https://iana.org/assignments/http-problem-types Index", "comments": "From NAME Can we (additionally or in replacement) point to the \"HPKE KEM Identifiers\" IANA registry created from this table instead? Same for this and \"HPKE AEAD Identifiers\" registry.", "new_text": "10.1. URIs [1] https://www.iana.org/assignments/hpke/hpke.xhtml#hpke-kem-ids [2] https://www.iana.org/assignments/hpke/hpke.xhtml#hpke-aead-ids [3] https://iana.org/assignments/media-types [4] https://iana.org/assignments/http-problem-types Index"}
{"id": "q-en-oblivious-http-19e4f0b48540567450a759b43e8ce0232186314227e2b2a186ad282641f10944", "old_text": "identified by \"aead_id\". The then constructs an , \"enc_request\", from a binary encoded HTTP request, \"request\", as follows: Construct a message header, \"hdr\", by concatenating the values of \"key_id\", \"kem_id\", \"kdf_id\", and \"aead_id\", as one 8-bit integer", "comments": "Why is it that media type registration is a journey of discovery each and every time? It's like the coastline paradox: the closer you get, the more detail resolves. Also, each requirement makes sense in isolation, but if you need an entire RFC to explain it, maybe something has gone wrong.\nno strong feelings on this one.", "new_text": "identified by \"aead_id\". The then constructs an , \"enc_request\", from a binary encoded HTTP request BINARY, \"request\", as follows: Construct a message header, \"hdr\", by concatenating the values of \"key_id\", \"kem_id\", \"kdf_id\", and \"aead_id\", as one 8-bit integer"}
{"id": "q-en-oblivious-http-19e4f0b48540567450a759b43e8ce0232186314227e2b2a186ad282641f10944", "old_text": "bhttp request\", a zero byte, \"key_id\" as an 8-bit integer, plus \"kem_id\", \"kdf_id\", and \"aead_id\" as three 16-bit integers. Create a receiving HPKE context by invoking \"SetupBaseR()\" (HPKE) with \"skR\", \"enc\", and \"info\". This produces a context \"rctxt\". Decrypt \"ct\" by invoking the \"Open()\" method on \"rctxt\" (HPKE), with an empty associated data \"aad\", yielding \"request\" or an", "comments": "Why is it that media type registration is a journey of discovery each and every time? It's like the coastline paradox: the closer you get, the more detail resolves. Also, each requirement makes sense in isolation, but if you need an entire RFC to explain it, maybe something has gone wrong.\nno strong feelings on this one.", "new_text": "bhttp request\", a zero byte, \"key_id\" as an 8-bit integer, plus \"kem_id\", \"kdf_id\", and \"aead_id\" as three 16-bit integers. Create a receiving HPKE context, \"rctxt\", by invoking \"SetupBaseR()\" (HPKE) with \"skR\", \"enc\", and \"info\". Decrypt \"ct\" by invoking the \"Open()\" method on \"rctxt\" (HPKE), with an empty associated data \"aad\", yielding \"request\" or an"}
{"id": "q-en-oblivious-http-19e4f0b48540567450a759b43e8ce0232186314227e2b2a186ad282641f10944", "old_text": "In pseudocode, this procedure is as follows: 4.4. Given an HPKE context, \"context\"; a request message, \"request\"; and a response, \"response\", generate an , \"enc_response\", as follows: Export a secret, \"secret\", from \"context\", using the string \"message/bhttp response\" as the \"exporter_context\" parameter to", "comments": "Why is it that media type registration is a journey of discovery each and every time? It's like the coastline paradox: the closer you get, the more detail resolves. Also, each requirement makes sense in isolation, but if you need an entire RFC to explain it, maybe something has gone wrong.\nno strong feelings on this one.", "new_text": "In pseudocode, this procedure is as follows: The retains the HPKE context, \"rctxt\", so that it can encapsulate a response. 4.4. generate an , \"enc_response\", from a binary encoded HTTP response BINARY, \"response\". The uses the HPKE receiver context, \"rctxt\", as the HPKE context, \"context\", as follows: Export a secret, \"secret\", from \"context\", using the string \"message/bhttp response\" as the \"exporter_context\" parameter to"}
{"id": "q-en-oblivious-http-19e4f0b48540567450a759b43e8ce0232186314227e2b2a186ad282641f10944", "old_text": "In pseudocode, this procedure is as follows: decrypt an by reversing this process. That is, they first parse \"enc_response\" into \"response_nonce\" and \"ct\". They then follow the same process to derive values for \"aead_key\" and \"aead_nonce\". The uses these values to decrypt \"ct\" using the Open function provided by the AEAD. Decrypting might produce an error, as follows:", "comments": "Why is it that media type registration is a journey of discovery each and every time? It's like the coastline paradox: the closer you get, the more detail resolves. Also, each requirement makes sense in isolation, but if you need an entire RFC to explain it, maybe something has gone wrong.\nno strong feelings on this one.", "new_text": "In pseudocode, this procedure is as follows: decrypt an by reversing this process. That is, first parse \"enc_response\" into \"response_nonce\" and \"ct\". They then follow the same process to derive values for \"aead_key\" and \"aead_nonce\", using their sending HPKE context, \"sctxt\", as the HPKE context, \"context\". The uses these values to decrypt \"ct\" using the Open function provided by the AEAD. Decrypting might produce an error, as follows:"}
{"id": "q-en-oblivious-http-19e4f0b48540567450a759b43e8ce0232186314227e2b2a186ad282641f10944", "old_text": "respectively. In order to achieve the privacy and security goals of the protocol a also needs to observe the guidance in client-responsibilities. The interacts with the as an HTTP client by constructing a request using the same restrictions as the request, except that the target", "comments": "Why is it that media type registration is a journey of discovery each and every time? It's like the coastline paradox: the closer you get, the more detail resolves. Also, each requirement makes sense in isolation, but if you need an entire RFC to explain it, maybe something has gone wrong.\nno strong feelings on this one.", "new_text": "respectively. In order to achieve the privacy and security goals of the protocol a also needs to observe the guidance in sec-client. The interacts with the as an HTTP client by constructing a request using the same restrictions as the request, except that the target"}
{"id": "q-en-oblivious-http-19e4f0b48540567450a759b43e8ce0232186314227e2b2a186ad282641f10944", "old_text": "requests over time. However, this increases the risk that their request is rejected as outside the acceptable window. 7. One goal of this design is that independent requests are only", "comments": "Why is it that media type registration is a journey of discovery each and every time? It's like the coastline paradox: the closer you get, the more detail resolves. Also, each requirement makes sense in isolation, but if you need an entire RFC to explain it, maybe something has gone wrong.\nno strong feelings on this one.", "new_text": "requests over time. However, this increases the risk that their request is rejected as outside the acceptable window. 6.9. The media type defined in ohttp-keys represents keying material. The content of this media type is not active (see RFC6838), but it governs how a might interact with an . The security implications of processing it are described in sec-client; privacy implications are described in privacy. The security implications of handling the message media types defined in req-res-media is covered in other parts of this section in more detail. However, these message media types are also encrypted encapsulations of HTTP requests and responses. HTTP messages contain content, which can use any media type. In particular, requests are processed by an Oblivious , which - as an HTTP resource - defines how content is processed; see HTTP. HTTP clients can also use resource identity and response content to determine how content is processed. Consequently, the security considerations of HTTP also apply to the handling of the content of these media types. 7. One goal of this design is that independent requests are only"}
{"id": "q-en-oblivious-http-19e4f0b48540567450a759b43e8ce0232186314227e2b2a186ad282641f10944", "old_text": "Please update the \"Media Types\" registry at https://iana.org/assignments/media-types [2] for the media types \"application/ohttp-keys\" (iana-keys), \"message/ohttp-req\" (iana-req), and \"message/ohttp-res\" (iana-res). Please update the \"HTTP Problem Types\" registry at https://iana.org/assignments/http-problem-types [3] for the types", "comments": "Why is it that media type registration is a journey of discovery each and every time? It's like the coastline paradox: the closer you get, the more detail resolves. Also, each requirement makes sense in isolation, but if you need an entire RFC to explain it, maybe something has gone wrong.\nno strong feelings on this one.", "new_text": "Please update the \"Media Types\" registry at https://iana.org/assignments/media-types [2] for the media types \"application/ohttp-keys\" (iana-keys), \"message/ohttp-req\" (iana-req), and \"message/ohttp-res\" (iana-res), following the procedures of RFC6838. Please update the \"HTTP Problem Types\" registry at https://iana.org/assignments/http-problem-types [3] for the types"}
{"id": "q-en-oblivious-http-3895703f97207cc302790d5c19baac49d65052edd842f3fb42076a5ac5d8fe3c", "old_text": "Including a \"Date\" header field in the response allows the to correct clock errors by retrying the same request using the value of the \"Date\" field provided by the . The value of the \"Date\" field can be copied if the request is fresh, with an adjustment based on the \"Age\" field otherwise. When retrying a request, the MUST create a fresh encryption of the modified request, using a new HPKE context. Intermediaries can sometimes rewrite the \"Date\" field when forwarding responses. This might cause problems if the and intermediary clocks", "comments": "Corrected to mention response freshness, with a new citation Added note about timing of a retry revealing RTT Corrected figure title grammar\nNice addition on RTT timing", "new_text": "Including a \"Date\" header field in the response allows the to correct clock errors by retrying the same request using the value of the \"Date\" field provided by the . The value of the \"Date\" field can be copied if the response is fresh, with an adjustment based on the \"Age\" field otherwise; see HTTP-CACHING. When retrying a request, the MUST create a fresh encryption of the modified request, using a new HPKE context. Retrying immediately allows the to measure the round trip time to the . The observed delay might reveal something about the location of the . could delay retries to add some uncertainty to any observed delay. Intermediaries can sometimes rewrite the \"Date\" field when forwarding responses. This might cause problems if the and intermediary clocks"}
{"id": "q-en-oblivious-http-0225787543fa024ccebce5994e71de1b206ea268ef6fd6a23a592542c9ef75f6", "old_text": "the public key from the configuration, \"pkR\", and a selected combination of KDF, identified by \"kdf_id\", and AEAD, identified by \"aead_id\". The then constructs an , \"enc_request\", from a binary encoded HTTP request BINARY, \"request\", as follows:", "comments": "Following up from URL It seems like the key configuration lists a set of potential HPKE algorithms that the client can use. The section URL does say \"a selected combination of KDF, identified by kdfid, and AEAD, identified by aeadid\", but it's easy to miss the selected word and even then its not apparent as to what the selection is from. I think it could be made clearer that the client can choose from a list of HPKE algorithms specified in the Key Configuration.", "new_text": "the public key from the configuration, \"pkR\", and a combination of KDF, identified by \"kdf_id\", and AEAD, identified by \"aead_id\", that the selects from those in the . The then constructs an , \"enc_request\", from a binary encoded HTTP request BINARY, \"request\", as follows:"}
{"id": "q-en-oblivious-http-fe7f55ad63416ddb90e8b43556d024dadbe9e06727acbbe95e69a0dbd1458417", "old_text": "The types HpkeKemId, HpkeKdfId, and HpkeAeadId identify a KEM, KDF, and AEAD respectively. The definitions for these identifiers and the semantics of the algorithms they identify can be found in HPKE. 4.2.", "comments": "and . Given (either assuming a fixed key ID, or looking up the KEM based on key ID), I think this is a nice simplification. It also authenticates more, even if redundant. (The kemID is already part of the HPKE key schedule.) I implemented this change and it's straightforward.\nCurrently, an encapsulated request carries a key ID which maps to a KEM key that determines the size of the encapsulated public key in a request. This means that code parsing a request needs to know the ID -> KEM mapping in order to parse things correctly, which is somewhat annoying. I propose we add KEMID to the request so that this mapping isn't needed to just parse the message. As a bonus, that would also fold in the KEMID explicitly into the AAD alongside the KDFID and AEADID.\nWe currently length-prefix it, but we don't have to. Think about this a little.\nBoth the public key and encapsulated secret are fixed length. I think making these fixed length on the wire is good. However, it does require the parsing code to know the corresponding size, which depends on the KEM. This requires there to be a function to map the kem_id to the corresponding size(s). NAME do you have such an API in your implementation? I have one in mine, which would make this pretty straightforward to implement.", "new_text": "The types HpkeKemId, HpkeKdfId, and HpkeAeadId identify a KEM, KDF, and AEAD respectively. The definitions for these identifiers and the semantics of the algorithms they identify can be found in HPKE. The Npk parameter corresponding to the HpkeKdfId can be found in HPKE. 4.2."}
{"id": "q-en-oblivious-http-fe7f55ad63416ddb90e8b43556d024dadbe9e06727acbbe95e69a0dbd1458417", "old_text": "enc-request. request describes the process for constructing and processing an Encapsulated Request. Responses are bound to responses and so consist only of AEAD- protected content. response describes the process for constructing and processing an Encapsulated Response.", "comments": "and . Given (either assuming a fixed key ID, or looking up the KEM based on key ID), I think this is a nice simplification. It also authenticates more, even if redundant. (The kemID is already part of the HPKE key schedule.) I implemented this change and it's straightforward.\nCurrently, an encapsulated request carries a key ID which maps to a KEM key that determines the size of the encapsulated public key in a request. This means that code parsing a request needs to know the ID -> KEM mapping in order to parse things correctly, which is somewhat annoying. I propose we add KEMID to the request so that this mapping isn't needed to just parse the message. As a bonus, that would also fold in the KEMID explicitly into the AAD alongside the KDFID and AEADID.\nWe currently length-prefix it, but we don't have to. Think about this a little.\nBoth the public key and encapsulated secret are fixed length. I think making these fixed length on the wire is good. However, it does require the parsing code to know the corresponding size, which depends on the KEM. This requires there to be a function to map the kem_id to the corresponding size(s). NAME do you have such an API in your implementation? I have one in mine, which would make this pretty straightforward to implement.", "new_text": "enc-request. request describes the process for constructing and processing an Encapsulated Request. The Nenc parameter corresponding to the HpkeKdfId can be found in HPKE. Responses are bound to responses and so consist only of AEAD- protected content. 
response describes the process for constructing and processing an Encapsulated Response."}
{"id": "q-en-oblivious-http-fe7f55ad63416ddb90e8b43556d024dadbe9e06727acbbe95e69a0dbd1458417", "old_text": "Clients encapsulate a request \"request\" using values from a key configuration: the key identifier from the configuration, \"keyID\", the public key from the configuration, \"pkR\", and", "comments": "and . Given (either assuming a fixed key ID, or looking up the KEM based on key ID), I think this is a nice simplification. It also authenticates more, even if redundant. (The kemID is already part of the HPKE key schedule.) I implemented this change and it's straightforward.\nCurrently, an encapsulated request carries a key ID which maps to a KEM key that determines the size of the encapsulated public key in a request. This means that code parsing a request needs to know the ID -> KEM mapping in order to parse things correctly, which is somewhat annoying. I propose we add KEMID to the request so that this mapping isn't needed to just parse the message. As a bonus, that would also fold in the KEMID explicitly into the AAD alongside the KDFID and AEADID.\nWe currently length-prefix it, but we don't have to. Think about this a little.\nBoth the public key and encapsulated secret are fixed length. I think making these fixed length on the wire is good. However, it does require the parsing code to know the corresponding size, which depends on the KEM. This requires there to be a function to map the kem_id to the corresponding size(s). NAME do you have such an API in your implementation? I have one in mine, which would make this pretty straightforward to implement.", "new_text": "Clients encapsulate a request \"request\" using values from a key configuration: the key identifier from the configuration, \"keyID\", with the corresponding KEM identified by \"kemID\", the public key from the configuration, \"pkR\", and"}
{"id": "q-en-oblivious-http-fe7f55ad63416ddb90e8b43556d024dadbe9e06727acbbe95e69a0dbd1458417", "old_text": "encapsulation key \"enc\". Construct associated data, \"aad\", by concatenating the values of \"keyID\", \"kdfID\", and \"aeadID\", as 8-, 16- and 16-bit integers respectively, each in network byte order. Encrypt (seal) \"request\" with \"aad\" as associated data using \"context\", yielding ciphertext \"ct\".", "comments": "and . Given (either assuming a fixed key ID, or looking up the KEM based on key ID), I think this is a nice simplification. It also authenticates more, even if redundant. (The kemID is already part of the HPKE key schedule.) I implemented this change and it's straightforward.\nCurrently, an encapsulated request carries a key ID which maps to a KEM key that determines the size of the encapsulated public key in a request. This means that code parsing a request needs to know the ID -> KEM mapping in order to parse things correctly, which is somewhat annoying. I propose we add KEMID to the request so that this mapping isn't needed to just parse the message. As a bonus, that would also fold in the KEMID explicitly into the AAD alongside the KDFID and AEADID.\nWe currently length-prefix it, but we don't have to. Think about this a little.\nBoth the public key and encapsulated secret are fixed length. I think making these fixed length on the wire is good. However, it does require the parsing code to know the corresponding size, which depends on the KEM. This requires there to be a function to map the kem_id to the corresponding size(s). NAME do you have such an API in your implementation? I have one in mine, which would make this pretty straightforward to implement.", "new_text": "encapsulation key \"enc\". Construct associated data, \"aad\", by concatenating the values of \"keyID\", \"kemID\", \"kdfID\", and \"aeadID\", as one 8-bit integer and three 16-bit integers, respectively, each in network byte order. 
Encrypt (seal) \"request\" with \"aad\" as associated data using \"context\", yielding ciphertext \"ct\"."}
{"id": "q-en-oblivious-http-fe7f55ad63416ddb90e8b43556d024dadbe9e06727acbbe95e69a0dbd1458417", "old_text": "Servers decrypt an Encapsulated Request by reversing this process. Given an Encapsulated Request \"enc_request\", a server: Parses \"enc_request\" into \"keyID\", \"kdfID\", \"aeadID\", \"enc\", and \"ct\" (indicated using the function \"parse()\" in pseudocode). The server is then able to find the HPKE private key, \"skR\", corresponding to \"keyID\". a. If \"keyID\" does not identify a key, the server returns an error. b. If \"kdfID\" and \"aeadID\" identify a combination of KDF and AEAD that the server is unwilling to use with \"skR\", the server returns", "comments": "and . Given (either assuming a fixed key ID, or looking up the KEM based on key ID), I think this is a nice simplification. It also authenticates more, even if redundant. (The kemID is already part of the HPKE key schedule.) I implemented this change and it's straightforward.\nCurrently, an encapsulated request carries a key ID which maps to a KEM key that determines the size of the encapsulated public key in a request. This means that code parsing a request needs to know the ID -> KEM mapping in order to parse things correctly, which is somewhat annoying. I propose we add KEMID to the request so that this mapping isn't needed to just parse the message. As a bonus, that would also fold in the KEMID explicitly into the AAD alongside the KDFID and AEADID.\nWe currently length-prefix it, but we don't have to. Think about this a little.\nBoth the public key and encapsulated secret are fixed length. I think making these fixed length on the wire is good. However, it does require the parsing code to know the corresponding size, which depends on the KEM. This requires there to be a function to map the kem_id to the corresponding size(s). NAME do you have such an API in your implementation? 
I have one in mine, which would make this pretty straightforward to implement.", "new_text": "Servers decrypt an Encapsulated Request by reversing this process. Given an Encapsulated Request \"enc_request\", a server: Parses \"enc_request\" into \"keyID\", \"kemID\", \"kdfID\", \"aeadID\", \"enc\", and \"ct\" (indicated using the function \"parse()\" in pseudocode). The server is then able to find the HPKE private key, \"skR\", corresponding to \"keyID\". a. If \"keyID\" does not identify a key matching the type of \"kemID\", the server returns an error. b. If \"kdfID\" and \"aeadID\" identify a combination of KDF and AEAD that the server is unwilling to use with \"skR\", the server returns"}
{"id": "q-en-ops-drafts-d2093e6932b04d0a8aa9091df9f2ceef5af3b678e55782d7101c8c08b35c8ebd", "old_text": "the QUIC header, if present. This supports cases where address information changes, such as NAT rebinding, intentional change of the local interface, or based on an indication in the handshake of the server for a preferred address to be used. Currently QUIC only supports failover cases. Only one \"path\" can be used at a time, and only when the new path is validated all traffic", "comments": "note: when merged.\nWe should probably make it clear what the application can do here (SPA for servers; maybe intentional migration for clients?) I'm not sure if migration is transparent to client applications or not.\nThe client application might have some knowledge if it's behind a NAT or not and as such might want to indicate to use Conn ID or not. We can make that more explicit in the text.\nI did some small edits on the text in the connection migration section but NAME maybe you had something more in mind and want to propose more text?\nThis whole section should then also be moved to after the Conn ID section.\nI like the move of text to the manageability draft. If it were me, I'd move \"Server-Generated Connection ID\" as well, as I don't think apps need to worry about it, but it's not a big deal. I think adding text about server preferred address is probably more important for this draft.\nAdded some more text in\nNAME we think the current state of the draft addresses this issue; could you check and close this issue if so?", "new_text": "the QUIC header, if present. This supports cases where address information changes, such as NAT rebinding, intentional change of the local interface, or based on an indication in the handshake of the server for a preferred address to be used. As such if the client is known or likely to sit behind a NAT, use of a connection ID for the server is strongly recommended. 
A non-empty connection ID for the server is also strongly recommended when migration is supported. Currently QUIC only supports failover cases. Only one \"path\" can be used at a time, and only when the new path is validated all traffic"}
{"id": "q-en-ops-drafts-d2093e6932b04d0a8aa9091df9f2ceef5af3b678e55782d7101c8c08b35c8ebd", "old_text": "characteristics as input for the switching decision or the congestion controller on the new path. 8. QUIC connections are closed either by expiration of an idle timeout,", "comments": "note: when merged.\nWe should probably make it clear what the application can do here (SPA for servers; maybe intentional migration for clients?) I'm not sure if migration is transparent to client applications or not.\nThe client application might have some knowledge if it's behind a NAT or not and as such might want to indicate to use Conn ID or not. We can make that more explicit in the text.\nI did some small edits on the text in the connection migration section but NAME maybe you had something more in mind and want to propose more text?\nThis whole section should then also be moved to after the Conn ID section.\nI like the move of text to the manageability draft. If it were me, I'd move \"Server-Generated Connection ID\" as well, as I don't think apps need to worry about it, but it's not a big deal. I think adding text about server preferred address is probably more important for this draft.\nAdded some more text in\nNAME we think the current state of the draft addresses this issue; could you check and close this issue if so?", "new_text": "characteristics as input for the switching decision or the congestion controller on the new path. Only the client can actively migrate. However, servers can indicate during the handshake that they prefer to transfer the connection to a different address after the handshake, e.g. to move from an address that is shared by multiple servers to an address that is unique to the server instance. The server can provide an IPv4 and an IPv6 address in a transport parameter during the TLS handshake and the client can select between the two if both are provided. See also Section 9.6 of QUIC. 8. 
QUIC connections are closed either by expiration of an idle timeout,"}
{"id": "q-en-ops-drafts-e62cda345f327488c697e2ac05272ccb9896a04c3ca1c89c414fe4dff6296e0b", "old_text": "changes under discussion, via issues and pull requests in GitHub current as of the time of writing. 1.1. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. 2. In this section, we discusses those aspects of the QUIC transport", "comments": "note: don't merge yet, still need to make the pass on applicability but working on the CI issue at the moment\ndone. mergeable if ok.\nThere are a few instances that might be read as normative statements. Is that really wise? Examples: cannot? Maybe this isn't something that this document needs to say. It could just say that fallback to unencrypted TCP might not be a desirable outcome. There is also a bunch of \"MUST\"-level statements in the 0-RTT section, but I've proposed a rewrite that (I hope) eliminates them.\nFixed in : removed all 2119 language.\nNot clear if these documents should use normative language at all. The applicability statement also sometimes uses normative language to recommend the implementation of certain interfaces; I don't think this is the right document for that but as we also don't have another document, we should decide if we want to do that or not!\nManageability is a user's guide to the wire image, as such there's no sense to normative language (though I suppose some might like to add MUST NOT observe packets). Since applicability is in part a set of recommendations to application designers, normative language might make sense there... but the document is informational, so normative language here has to reference or be trivially derived from normative language in one of the base drafts. 
To simplify things, I'd just scrub all the SHOULDs, MAYs, and MUSTs.", "new_text": "changes under discussion, via issues and pull requests in GitHub current as of the time of writing. 2. In this section, we discusses those aspects of the QUIC transport"}
{"id": "q-en-ops-drafts-e62cda345f327488c697e2ac05272ccb9896a04c3ca1c89c414fe4dff6296e0b", "old_text": "header, is integrity protected. Further, information that was sent and exposed in handshake packets sent before the cryptographic context was established are validated later during the cryptographic handshake. Therefore, devices on path MUST NOT change any information or bits in QUIC packet headers, since alteration of header information will lead to a failed integrity check at the receiver, and can even lead to connection termination. 2.6.", "comments": "note: don't merge yet, still need to make the pass on applicability but working on the CI issue at the moment\ndone. mergeable if ok.\nThere are a few instances that might be read as normative statements. Is that really wise? Examples: cannot? Maybe this isn't something that this document needs to say. It could just say that fallback to unencrypted TCP might not be a desirable outcome. There is also a bunch of \"MUST\"-level statements in the 0-RTT section, but I've proposed a rewrite that (I hope) eliminates them.\nFixed in : removed all 2119 language.\nNot clear if these documents should use normative language at all. The applicability statement also sometimes uses normative language to recommend the implementation of certain interfaces; I don't think this is the right document for that but as we also don't have another document, we should decide if we want to do that or not!\nManageability is a user's guide to the wire image, as such there's no sense to normative language (though I suppose some might like to add MUST NOT observe packets). Since applicability is in part a set of recommendations to application designers, normative language might make sense there... but the document is informational, so normative language here has to reference or be trivially derived from normative language in one of the base drafts. 
To simplify things, I'd just scrub all the SHOULDs, MAYs, and MUSTs.", "new_text": "header, is integrity protected. Further, information that was sent and exposed in handshake packets sent before the cryptographic context was established are validated later during the cryptographic handshake. Therefore, devices on path cannot alter any information or bits in QUIC packet headers, since alteration of header information will lead to a failed integrity check at the receiver, and can even lead to connection termination. 2.6."}
{"id": "q-en-ops-drafts-e62cda345f327488c697e2ac05272ccb9896a04c3ca1c89c414fe4dff6296e0b", "old_text": "given that establishing a new connection using 0-RTT support is cheap and fast. QoS mechanisms in the network MAY also use the connection ID for service differentiation, as a change of connection ID is bound to a change of address which anyway is likely to lead to a re-route on a different path with different network characteristics.", "comments": "note: don't merge yet, still need to make the pass on applicability but working on the CI issue at the moment\ndone. mergeable if ok.\nThere are a few instances that might be read as normative statements. Is that really wise? Examples: cannot? Maybe this isn't something that this document needs to say. It could just say that fallback to unencrypted TCP might not be a desirable outcome. There is also a bunch of \"MUST\"-level statements in the 0-RTT section, but I've proposed a rewrite that (I hope) eliminates them.\nFixed in : removed all 2119 language.\nNot clear if these documents should use normative language at all. The applicability statement also sometimes uses normative language to recommend the implementation of certain interfaces; I don't think this is the right document for that but as we also don't have another document, we should decide if we want to do that or not!\nManageability is a user's guide to the wire image, as such there's no sense to normative language (though I suppose some might like to add MUST NOT observe packets). Since applicability is in part a set of recommendations to application designers, normative language might make sense there... but the document is informational, so normative language here has to reference or be trivially derived from normative language in one of the base drafts. To simplify things, I'd just scrub all the SHOULDs, MAYs, and MUSTs.", "new_text": "given that establishing a new connection using 0-RTT support is cheap and fast. 
QoS mechanisms in the network could also use the connection ID for service differentiation, as a change of connection ID is bound to a change of address which anyway is likely to lead to a re-route on a different path with different network characteristics."}
{"id": "q-en-ops-drafts-e62cda345f327488c697e2ac05272ccb9896a04c3ca1c89c414fe4dff6296e0b", "old_text": "applicability, and issues that application developers must consider when using QUIC as a transport for their application. 1.1. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" in this document are to be interpreted as described in BCP 14 RFC2119 RFC8174 when, and only when, they appear in all capitals, as shown here. 2. QUIC uses UDP as a substrate. This enables both userspace", "comments": "note: don't merge yet, still need to make the pass on applicability but working on the CI issue at the moment\ndone. mergeable if ok.\nThere are a few instances that might be read as normative statements. Is that really wise? Examples: cannot? Maybe this isn't something that this document needs to say. It could just say that fallback to unencrypted TCP might not be a desirable outcome. There is also a bunch of \"MUST\"-level statements in the 0-RTT section, but I've proposed a rewrite that (I hope) eliminates them.\nFixed in : removed all 2119 language.\nNot clear if these documents should use normative language at all. The applicability statement also sometimes uses normative language to recommend the implementation of certain interfaces; I don't think this is the right document for that but as we also don't have another document, we should decide if we want to do that or not!\nManageability is a user's guide to the wire image, as such there's no sense to normative language (though I suppose some might like to add MUST NOT observe packets). Since applicability is in part a set of recommendations to application designers, normative language might make sense there... but the document is informational, so normative language here has to reference or be trivially derived from normative language in one of the base drafts. 
To simplify things, I'd just scrub all the SHOULDs, MAYs, and MUSTs.", "new_text": "applicability, and issues that application developers must consider when using QUIC as a transport for their application. 2. QUIC uses UDP as a substrate. This enables both userspace"}
{"id": "q-en-ops-drafts-e62cda345f327488c697e2ac05272ccb9896a04c3ca1c89c414fe4dff6296e0b", "old_text": "even if TFO was successfully negotiated (see PaaschNanog). Any fallback mechanism is likely to impose a degradation of performance; however, fallback MUST not silently violate the application's expectation of confidentiality or integrity of its payload data.", "comments": "note: don't merge yet, still need to make the pass on applicability but working on the CI issue at the moment\ndone. mergeable if ok.\nThere are a few instances that might be read as normative statements. Is that really wise? Examples: cannot? Maybe this isn't something that this document needs to say. It could just say that fallback to unencrypted TCP might not be a desirable outcome. There is also a bunch of \"MUST\"-level statements in the 0-RTT section, but I've proposed a rewrite that (I hope) eliminates them.\nFixed in : removed all 2119 language.\nNot clear if these documents should use normative language at all. The applicability statement also sometimes uses normative language to recommend the implementation of certain interfaces; I don't think this is the right document for that but as we also don't have another document, we should decide if we want to do that or not!\nManageability is a user's guide to the wire image, as such there's no sense to normative language (though I suppose some might like to add MUST NOT observe packets). Since applicability is in part a set of recommendations to application designers, normative language might make sense there... but the document is informational, so normative language here has to reference or be trivially derived from normative language in one of the base drafts. To simplify things, I'd just scrub all the SHOULDs, MAYs, and MUSTs.", "new_text": "even if TFO was successfully negotiated (see PaaschNanog). 
Any fallback mechanism is likely to impose a degradation of performance; however, fallback must not silently violate the application's expectation of confidentiality or integrity of its payload data."}
{"id": "q-en-ops-drafts-e62cda345f327488c697e2ac05272ccb9896a04c3ca1c89c414fe4dff6296e0b", "old_text": "packet is visible to the network. Therefore stream multiplexing is not intended to be used for differentiating streams in terms of network treatment. Application traffic requiring different network treatment SHOULD therefore be carried over different five-tuples (i.e. multiple QUIC connections). Given QUIC's ability to send application data in the first RTT of a connection (if a previous connection to the same host has been successfully established to", "comments": "note: don't merge yet, still need to make the pass on applicability but working on the CI issue at the moment\ndone. mergeable if ok.\nThere are a few instances that might be read as normative statements. Is that really wise? Examples: cannot? Maybe this isn't something that this document needs to say. It could just say that fallback to unencrypted TCP might not be a desirable outcome. There is also a bunch of \"MUST\"-level statements in the 0-RTT section, but I've proposed a rewrite that (I hope) eliminates them.\nFixed in : removed all 2119 language.\nNot clear if these documents should use normative language at all. The applicability statement also sometimes uses normative language to recommend the implementation of certain interfaces; I don't think this is the right document for that but as we also don't have another document, we should decide if we want to do that or not!\nManageability is a user's guide to the wire image, as such there's no sense to normative language (though I suppose some might like to add MUST NOT observe packets). Since applicability is in part a set of recommendations to application designers, normative language might make sense there... but the document is informational, so normative language here has to reference or be trivially derived from normative language in one of the base drafts. 
To simplify things, I'd just scrub all the SHOULDs, MAYs, and MUSTs.", "new_text": "packet is visible to the network. Therefore stream multiplexing is not intended to be used for differentiating streams in terms of network treatment. Application traffic requiring different network treatment should therefore be carried over different five-tuples (i.e. multiple QUIC connections). Given QUIC's ability to send application data in the first RTT of a connection (if a previous connection to the same host has been successfully established to"}
{"id": "q-en-ops-drafts-e62cda345f327488c697e2ac05272ccb9896a04c3ca1c89c414fe4dff6296e0b", "old_text": "example, the default port for HTTP/3 QUIC-HTTP is UDP port 443, analogous to HTTP/1.1 or HTTP/2 over TLS over TCP. Applications SHOULD define an alternate endpoint discovery mechanism to allow the usage of ports other than the default. For example, HTTP/3 (QUIC-HTTP sections 3.2 and 3.3) specifies the use of ALPN RFC7301 for service discovery which allows the server to use and", "comments": "note: don't merge yet, still need to make the pass on applicability but working on the CI issue at the moment\ndone. mergeable if ok.\nThere are a few instances that might be read as normative statements. Is that really wise? Examples: cannot? Maybe this isn't something that this document needs to say. It could just say that fallback to unencrypted TCP might not be a desirable outcome. There is also a bunch of \"MUST\"-level statements in the 0-RTT section, but I've proposed a rewrite that (I hope) eliminates them.\nFixed in : removed all 2119 language.\nNot clear if these documents should use normative language at all. The applicability statement also sometimes uses normative language to recommend the implementation of certain interfaces; I don't think this is the right document for that but as we also don't have another document, we should decide if we want to do that or not!\nManageability is a user's guide to the wire image, as such there's no sense to normative language (though I suppose some might like to add MUST NOT observe packets). Since applicability is in part a set of recommendations to application designers, normative language might make sense there... but the document is informational, so normative language here has to reference or be trivially derived from normative language in one of the base drafts. 
To simplify things, I'd just scrub all the SHOULDs, MAYs, and MUSTs.", "new_text": "example, the default port for HTTP/3 QUIC-HTTP is UDP port 443, analogous to HTTP/1.1 or HTTP/2 over TLS over TCP. Applications should define an alternate endpoint discovery mechanism to allow the usage of ports other than the default. For example, HTTP/3 (QUIC-HTTP sections 3.2 and 3.3) specifies the use of ALPN RFC7301 for service discovery which allows the server to use and"}
{"id": "q-en-ops-drafts-e62cda345f327488c697e2ac05272ccb9896a04c3ca1c89c414fe4dff6296e0b", "old_text": "applications using QUIC, as well. Application developers should note that any fallback they use when QUIC cannot be used due to network blocking of UDP SHOULD guarantee the same security properties as QUIC; if this is not possible, the connection SHOULD fail to allow the application to explicitly handle fallback to a less-secure alternative. See fallback. 14.", "comments": "note: don't merge yet, still need to make the pass on applicability but working on the CI issue at the moment\ndone. mergeable if ok.\nThere are a few instances that might be read as normative statements. Is that really wise? Examples: cannot? Maybe this isn't something that this document needs to say. It could just say that fallback to unencrypted TCP might not be a desirable outcome. There is also a bunch of \"MUST\"-level statements in the 0-RTT section, but I've proposed a rewrite that (I hope) eliminates them.\nFixed in : removed all 2119 language.\nNot clear if these documents should use normative language at all. The applicability statement also sometimes uses normative language to recommend the implementation of certain interfaces; I don't think this is the right document for that but as we also don't have another document, we should decide if we want to do that or not!\nManageability is a user's guide to the wire image, as such there's no sense to normative language (though I suppose some might like to add MUST NOT observe packets). Since applicability is in part a set of recommendations to application designers, normative language might make sense there... but the document is informational, so normative language here has to reference or be trivially derived from normative language in one of the base drafts. To simplify things, I'd just scrub all the SHOULDs, MAYs, and MUSTs.", "new_text": "applications using QUIC, as well. 
Application developers should note that any fallback they use when QUIC cannot be used due to network blocking of UDP should guarantee the same security properties as QUIC; if this is not possible, the connection should fail to allow the application to explicitly handle fallback to a less-secure alternative. See fallback. 14."}
{"id": "q-en-ops-drafts-1704add8c4f3233c6e481ab51b545c12fbf6bb03539f7c2e09d5fd593ee0caa3", "old_text": "handshake packets sent before the cryptographic context was established are validated later during the cryptographic handshake. Therefore, devices on path cannot alter any information or bits in QUIC packet headers, since alteration of header information will lead to a failed integrity check at the receiver, and can even lead to connection termination. 2.6.", "comments": "addresses\n\"QUIC's linkability resistance ensures that a deliberate connection migration is accompanied by a change in the connection ID and necessitate that connection ID aware DDoS defense system must have the same information about connection IDs as the load balancer {{?I-D.ietf-quic-load-balancers}}. This may be complicated where mitigation and load balancing environments are logically separate.\" This is insufficient. Even the load balancer is only able to identify the end server, not the connection.\nOn a related note, the suggestion to use the \"first 8 bytes\" of a server's connection ID is somewhat strange. Why not observe the CIDL it's using in long headers, or just have common configuration?\nHow about if we just remove any wording about load balancers here? Like this maybe:\nOn the 8 bytes: I actually not sure where this comes from, however, I assume that the thinking was that using the first 8 bytes might be good enough to avoid collisions if you don't know the connection ID length. But I guess we can remove it or add some more explanation... any thoughts?\nIf you read the Retry Services section of QUIC-LB, you'll see that there is no dependence on the Connection ID encoding at all. It is a common format for Retry Tokens that allows servers to (at a minimum) extract the original CIDs necessary for verification. Certainly, no device should drop packets simply because the CID has changed, which is maybe what you're trying to say here? 
But I'm not sure that this is a DDoS thing.\nThe point is when a DDoS attack is on-going you want to make sure that on-going connections keep running while new connections are handled more carefully. So you have to be flow aware. However, maybe it's acceptable if connectivity breaks during a DDoS when the 5 tuple changes...? Otherwise you would need to be CID aware.\nThe VN section suggests that observers not use the version number to detect QUIC. Would it be futile to ask them to admit QUIC versions they don't recognize rather than drop them?\nProbably.\nNot sure. Depends on the function you implement. I guess there could be security functions that decide to only allow certain versions... not sure if that is justified but I don't think we can stop it.\nAny further opinions here?\nWe can stop it with greasing, I guess. IMO the draft should specify best practices that don't break extensibility and if people don't follow it it's on them. That's the approach I took in QUIC-LB.\nThe best practice is to not block any unknown traffic as blocking hinders evolution. However, I guess there might be security reasons why some will not do that. I guess allowing for certain versions of QUIC is better than blocking UDP entirely...?\n+1... I don't think that putting text in this document saying \"don't wholesale block versions of QUIC you don't understand\" will actually change the behavior of the boxes that will ship this behavior, since drop-unknown is taken as a security best practice. But I still think it's worth putting that advice in the document nonetheless, if only to have \"I told you so\" in an RFC. 
(I do still tend to think that we'll see only two behaviors emerge in the wild: (1) permit anything with a QUIC wire image since there's not much to grab on to there (and rely on the LB implementation to eat DoS hiding behind that wire image) and (2) block anything that smells like QUIC (because it is easy to implement in the default case and it forces fallback to TCP, which already has lots of running security monitoring code).)\nwill work up a PR for us to discuss\nIn manageability: \"Integrity Protection of the Wire Image {#wire-integrity} As soon as the cryptographic context is established, all information in the QUIC header, including information exposed in the packet header, is integrity protected. Further, information that was sent and exposed in handshake packets sent before the cryptographic context was established are validated later during the cryptographic handshake. Therefore, devices on path cannot alter any information or bits in QUIC packet headers, since alteration of header information will lead to a failed integrity check at the receiver, and can even lead to connection termination.\" This isn't precise. The Initial packet is not meaningfully integrity-protected, but the contents of the CRYPTO frame (and the connection IDs) are. An on-path attacker could, for instance, change the ClientHello's Packet Number from '0' to '1' and then adjust the ACK frame from '1' to '0'. (Why would one do this?)\nIs the ACK frame in the case not encrypted...?\nThe Initial is decryptable by anyone, and so could be rewritten.\nOkay. What do we want to write here now?\nhow about \"cannot alter any information or bits in QUIC packet headers, since alteration...\" changing to: \"cannot alter any information or bits in QUIC packet headers, except specific parts of Initial packets, since alteration...\"\nWFM. 
I can change that as soon as your PR landed (or you add it to your PR).", "new_text": "handshake packets sent before the cryptographic context was established are validated later during the cryptographic handshake. Therefore, devices on path cannot alter any information or bits in QUIC packet headers, except specific parts of Initial packets, since alteration of header information will lead to a failed integrity check at the receiver, and can even lead to connection termination. 2.6."}
{"id": "q-en-ops-drafts-7cabb60fae38a9366cf48434bc578cae1ff5b314653e9736bb65f524be9bda30", "old_text": "renders 5-tuple based filtering insufficient and requires more state to be maintained by DDoS defense systems. For the common case of NAT rebinding, DDoS defense systems can detect a change in the client's endpoint address by linking flows based on the first 8 bytes of the server's connection IDs, provided the server is using at least 8- bytes-long connection IDs. QUIC's linkability resistance ensures that a deliberate connection migration is accompanied by a change in the connection ID and necessitates that connection ID-aware DDoS defense system must have the same information about connection IDs as the load balancer I-D.ietf-quic-load-balancers. This may be complicated where mitigation and load balancing environments are logically separate. It is questionable whether connection migrations must be supported during a DDoS attack. If the connection migration is not visible to", "comments": "Merged... with the neurons I have left on 22 December, this seems broadly true. I'm not sure we should be too prescriptive about load balancing in this doc, though -- in part because I suspect the challenge of handling DDoS attacks using QUIC traffic will lead to divergent innovation in the LB space in the near term...\nSo if the 5-tuple changes because of NAT, the connection ID might actually not change. So in case of DDoS you might want to look at the connection ID to at cover those cases. For DDoS you usually don't need a hard guarantee to track all flows (if you block a flow incorrectly, it has to reconnect), but where you can track the flow when the 5-tuple changes, it might actually help. 
That is what this paragraph is/was about.\n\"QUIC's linkability resistance ensures that a deliberate connection migration is accompanied by a change in the connection ID and necessitate that connection ID aware DDoS defense system must have the same information about connection IDs as the load balancer {{?I-D.ietf-quic-load-balancers}}. This may be complicated where mitigation and load balancing environments are logically separate.\" This is insufficient. Even the load balancer is only able to identify the end server, not the connection.\nOn a related note, the suggestion to use the \"first 8 bytes\" of a server's connection ID is somewhat strange. Why not observe the CIDL it's using in long headers, or just have common configuration?\nHow about if we just remove any wording about load balancers here? Like this maybe:\nOn the 8 bytes: I actually not sure where this comes from, however, I assume that the thinking was that using the first 8 bytes might be good enough to avoid collisions if you don't know the connection ID length. But I guess we can remove it or add some more explanation... any thoughts?\nIf you read the Retry Services section of QUIC-LB, you'll see that there is no dependence on the Connection ID encoding at all. It is a common format for Retry Tokens that allows servers to (at a minimum) extract the original CIDs necessary for verification. Certainly, no device should drop packets simply because the CID has changed, which is maybe what you're trying to say here? But I'm not sure that this is a DDoS thing.\nThe point is when a DDoS attack is on-going you want to make sure that on-going connections keep running while new connections are handled more carefully. So you have to be flow aware. However, maybe it's acceptable if connectivity breaks during a DDoS when the 5 tuple changes...? Otherwise you would need to be CID aware.\nThis PR removes the problems I cited in the issue. Ship it. 
I wonder if we should hit the point in the last sentence a little harder. If you want to support connection migration, there's nothing you can do in this space except the Retry Service stuff in QUIC-LB, is there?", "new_text": "renders 5-tuple based filtering insufficient and requires more state to be maintained by DDoS defense systems. For the common case of NAT rebinding, DDoS defense systems can detect a change in the client's endpoint address by linking flows based on the server's connection IDs. QUIC's linkability resistance ensures that a deliberate connection migration is accompanied by a change in the connection ID. It is questionable whether connection migrations must be supported during a DDoS attack. If the connection migration is not visible to"}
{"id": "q-en-ops-drafts-ebb4ea8593cf83078a0e57eaf0b9eeaf0efafcea9e9efe8bfb2988b48b64c840", "old_text": "stream by instructing QUIC to send a FIN bit in a STREAM frame. It cannot gracefully close the ingress direction without a peer- generated FIN, much like in TCP. However, an endpoint can abruptly close either the ingress or egress direction; these actions are fully independent of each other. QUIC does not provide an interface for exceptional handling of any stream. If a stream that is critical for an application is closed,", "comments": "This text in is incorrect. An endpoint can abruptly close (reset) egress data, but it can only request that its peer abruptly close ingress (using STOP_SENDING).\nis there a missing \"not\"? Do you mean: s/An endpoint can abruptly close (reset) egress data/An endpoint cannot abruptly close (reset) egress data/ ? Anyway, yes, we need to fix that!", "new_text": "stream by instructing QUIC to send a FIN bit in a STREAM frame. It cannot gracefully close the ingress direction without a peer- generated FIN, much like in TCP. However, an endpoint can abruptly close the egress direction or request that its peer abruptly close the ingress direction; these actions are fully independent of each other. QUIC does not provide an interface for exceptional handling of any stream. If a stream that is critical for an application is closed,"}
{"id": "q-en-ops-drafts-e13897dffc7fc0b36603c029f17f9030c12f6a2129ebe98fe10d950211a44f3e", "old_text": "As QUIC is a general purpose transport protocol, there are no requirements that servers use a particular UDP port for QUIC. For applications with a fallback to TCP that do not already have an alternate mapping to UDP, the registration (if necessary) and use of the UDP port number corresponding to the TCP port already registered for the application is RECOMMENDED. For example, the default port for HTTP/3 QUIC-HTTP is UDP port 443, analogous to HTTP/1.1 or HTTP/2 over TLS over TCP. Applications could define an alternate endpoint discovery mechanism to allow the usage of ports other than the default. For example,", "comments": "Issue: \u201cand use of the UDP port number corresponding to the TCP port already registered for the application is RECOMMENDED\u201d. I agree with the intent, but here an RFC2119 keyword is used, but is this intended, and can this please cite the RFC that says this rather than just stating this.\nSimilarly noted in\nPR removes normative language and says \"usually ... appropriate\" instead. Is that okay? Or should the wording be stronger?", "new_text": "As QUIC is a general purpose transport protocol, there are no requirements that servers use a particular UDP port for QUIC. For applications with a fallback to TCP that do not already have an alternate mapping to UDP, usually the registration (if necessary) and use of the UDP port number corresponding to the TCP port already registered for the application is appropriate. For example, the default port for HTTP/3 QUIC-HTTP is UDP port 443, analogous to HTTP/1.1 or HTTP/2 over TLS over TCP. Applications could define an alternate endpoint discovery mechanism to allow the usage of ports other than the default. For example,"}
{"id": "q-en-ops-drafts-38ef3cfbcaf9f4e1dac88b5c507bbf1573e39a5948be4689164337e81ec28494", "old_text": "Given the prevalence of the assumption in network management practice that a port number maps unambiguously to an application, the use of ports that cannot easily be mapped to a registered service name might lead to blocking or other interference by network elements such as firewalls that rely on the port number for application identification. 7.", "comments": "The proposed text looks OK to me.\nThe ID said: \u201cblocking or other interference by network elements such as firewalls\u201d I think this language could be seen as having a little attitude that could be avoided. I suggest replacing \u201cinterference\u201d, and instead say that this can change the forwarding policy. This also extends it to include action from QoS policies and service level agreements. \u201cby network elements such as firewalls that rely on the port number for application identification.\u201d I\u2019d prefer \u201cuse\u201d to \u201crely\u201d \u2026 because they might well use other heuristics in future if they do prove more useful to classify sessions/packets.\nIs the wording in better/good?", "new_text": "Given the prevalence of the assumption in network management practice that a port number maps unambiguously to an application, the use of ports that cannot easily be mapped to a registered service name might lead to blocking or other changes to the forwarding behavior by network elements such as firewalls that use the port number for application identification. 7."}
{"id": "q-en-ops-drafts-14f5190dd6065ea1e1f6fd45a8fe71caf828803373ad9e8f10c847cb6eb016a6", "old_text": "13. QUIC provides integrity protection for its version negotiation process. This process assumes that the set of versions that a server supports is fixed. This complicates the process for deploying new QUIC versions or disabling old versions when servers operate in clusters. A server that rolls out a new version of QUIC can do so in three", "comments": "process assumes that the set of versions that a server supports is fixed. This complicates the process for deploying new QUIC versions or disabling old versions when servers operate in clusters. This functionality was removed from -transport, and appears to refer to text in draft-ietf-quic-version-negotiation, but that draft isn't referenced at all.\nI can add a reference to draft-ietf-quic-version-negotiation and make clear that this is an extension and not part of the base spec.", "new_text": "13. QUIC version 1 does not specify a version negotation mechanism in the base spec but I-D.draft-ietf-quic-version-negotiation proposes an extension. This process assumes that the set of versions that a server supports is fixed. This complicates the process for deploying new QUIC versions or disabling old versions when servers operate in clusters. A server that rolls out a new version of QUIC can do so in three"}
{"id": "q-en-ops-drafts-edd056268a86b242f0d6f95c682614fa8e112ce789285b85f1e752dd317b2bff", "old_text": "a solution to this problem. However, hiding information about the change of the IP address or port conceals important and security- relevant information from QUIC endpoints and as such would facilitate amplification attacks (see section 9 of QUIC-TRANSPORT). An NAT function that hides peer address changes prevents the other end from detecting and mitigating attacks as the endpoint cannot verify connectivity to the new address using QUIC PATH_CHALLENGE and", "comments": "Some section refs weren't linked to the sections; some objects were defined multiple ways; one reference wasn't actually cited in the text.\nThanks a lot!", "new_text": "a solution to this problem. However, hiding information about the change of the IP address or port conceals important and security- relevant information from QUIC endpoints and as such would facilitate amplification attacks (see Section 9 of QUIC-TRANSPORT). A NAT function that hides peer address changes prevents the other end from detecting and mitigating attacks as the endpoint cannot verify connectivity to the new address using QUIC PATH_CHALLENGE and"}
{"id": "q-en-ops-drafts-edd056268a86b242f0d6f95c682614fa8e112ce789285b85f1e752dd317b2bff", "old_text": "to use smaller packets, so that the limited reassembly capacity is not exceeded. For TCP, MSS clamping (Section 3.2 of {?RFC4459}}) is often used to change the sender's maximum TCP segment size, but QUIC requires a different approach. Section 14 of QUIC-TRANSPORT advises senders to probe larger sizes using Datagram Packetization Layer PMTU Discovery", "comments": "Some section refs weren't linked to the sections; some objects were defined multiple ways; one reference wasn't actually cited in the text.\nThanks a lot!", "new_text": "to use smaller packets, so that the limited reassembly capacity is not exceeded. For TCP, MSS clamping (Section 3.2 of RFC4459) is often used to change the sender's maximum TCP segment size, but QUIC requires a different approach. Section 14 of QUIC-TRANSPORT advises senders to probe larger sizes using Datagram Packetization Layer PMTU Discovery"}
{"id": "q-en-ops-drafts-edd056268a86b242f0d6f95c682614fa8e112ce789285b85f1e752dd317b2bff", "old_text": "14. I-D.draft-ietf-quic-datagram specifies a QUIC extension to enable sending and receiving unreliable datagrams over QUIC. Unlike operating directly over UDP, applications that use the QUIC datagram service do not need to implement their own congestion control, per RFC8085, as QUIC datagrams are congestion controlled. QUIC datagrams are not flow-controlled, and as such data chunks may be dropped if the receiver is overloaded. While the reliable", "comments": "Some section refs weren't linked to the sections; some objects were defined multiple ways; one reference wasn't actually cited in the text.\nThanks a lot!", "new_text": "14. I-D.ietf-quic-datagram specifies a QUIC extension to enable sending and receiving unreliable datagrams over QUIC. Unlike operating directly over UDP, applications that use the QUIC datagram service do not need to implement their own congestion control, per RFC8085, as QUIC datagrams are congestion controlled. QUIC datagrams are not flow-controlled, and as such data chunks may be dropped if the receiver is overloaded. While the reliable"}
{"id": "q-en-ops-drafts-c872c54cd98c006e1ff323190d891212f0db68c9387c49f477022f947c09ff9f", "old_text": "cause long timeout-based delays before this problem is detected by the endpoints and the connection can potentially be re-established. QUIC endpoints support address migration and a QUIC connection can survive a change of the IP address or ports by mapping the connection ID, if present, to an existing connection. Ideally a new connection ID is used at the same time when the address/port changes to avoid linkability. As new connection IDs belonging to the same connection are not known to on-path devices, network devices are not able to map QUIC connections after a 4-tuple change. As such, when the 4-tuple changes, stateful devices lose their state and break connectivity if state is required for forwarding, while the endpoints would otherwise survive such a change. Use of connection IDs is specifically discouraged for NAT applications. If a NAT hits an operational limit, it is recommended to rather drop the initial packets of a flow (see also sec-", "comments": "This paragraph is redundant with the previous paragraph, however, not sure I removed the right one now...", "new_text": "cause long timeout-based delays before this problem is detected by the endpoints and the connection can potentially be re-established. Use of connection IDs is specifically discouraged for NAT applications. If a NAT hits an operational limit, it is recommended to rather drop the initial packets of a flow (see also sec-"}
{"id": "q-en-ops-drafts-356bc3dd8152a33a0adb1fa5d208079eef979aa4b7e933f7229bb683c116f769", "old_text": "impacted by QUIC. QUIC is an end-to-end transport protocol. No information in the protocol header, even that which can be inspected, is meant to be mutable by the network. This is achieved through integrity protection of the wire image WIRE-IMAGE. Encryption of most control signaling means that less information is visible to the network than is the case with TCP. Integrity protection can also simplify troubleshooting, because none of the nodes on the network path can modify transport layer information. However, it does imply that in-network operations that depend on modification of data are not possible without the cooperation of a QUIC endpoint. This might be possible with the introduction of a proxy which authenticates as an endpoint. Proxy operations are not in scope for this document. Network management is not a one-size-fits-all endeavour: practices considered necessary or even mandatory within enterprise networks", "comments": "This goes to the end of \"Server Name Indication\" Some changes are attempts at clarifications, but I could have incorrectly clarified something, so feel free to reject changes.", "new_text": "impacted by QUIC. QUIC is an end-to-end transport protocol. No information in the protocol header, even that which can be inspected, is mutable by the network. This is achieved through integrity protection of the wire image WIRE-IMAGE. Encryption of most control signaling means that less information is visible to the network than is the case with TCP. Integrity protection can also simplify troubleshooting, because none of the nodes on the network path can modify transport layer information. However, it means in-network operations that depend on modification of data are not possible without the cooperation of an QUIC endpoint. This might be possible with the introduction of a proxy which authenticates as an endpoint. 
Proxy operations are not in scope for this document. Network management is not a one-size-fits-all endeavour: practices considered necessary or even mandatory within enterprise networks"}
{"id": "q-en-ops-drafts-356bc3dd8152a33a0adb1fa5d208079eef979aa4b7e933f7229bb683c116f769", "old_text": "which type of header is present. The purpose of this bit is invariant across QUIC versions. The long header exposes more information. In version 1 of QUIC, it is used during connection establishment, including version negotiation, retry, and 0-RTT data. It contains a version number, as well as source and destination connection IDs for grouping packets belonging to the same flow. The definition and location of these fields in the QUIC long header are invariant for future versions of QUIC, although future versions of QUIC may provide additional fields in the long header QUIC-INVARIANTS. Short headers contain only an optional destination connection ID and the spin bit for RTT measurement. In version 1 of QUIC, they are", "comments": "This goes to the end of \"Server Name Indication\" Some changes are attempts at clarifications, but I could have incorrectly clarified something, so feel free to reject changes.", "new_text": "which type of header is present. The purpose of this bit is invariant across QUIC versions. The long header exposes more information. It contains a version number, as well as source and destination connection IDs for associating packets with a QUIC connection. The definition and location of these fields in the QUIC long header are invariant for future versions of QUIC, although future versions of QUIC may provide additional fields in the long header QUIC-INVARIANTS. In version 1 of QUIC, the long header is used during connection establishment to transmit crypto handshake data, perform version negotiation, retry, and send 0-RTT data. Short headers contain only an optional destination connection ID and the spin bit for RTT measurement. In version 1 of QUIC, they are"}
{"id": "q-en-ops-drafts-356bc3dd8152a33a0adb1fa5d208079eef979aa4b7e933f7229bb683c116f769", "old_text": "identifies the packet as a Version Negotiation packet. QUIC version 1 uses version 0x00000001. Operators should expect to observe packets with other version numbers as a result of various Internet experiments, future standards, and greasing. All deployed versions are maintained in an IANA registry (see Section 22.2 of QUIC-TRANSPORT). source and destination connection ID: short and long packet", "comments": "This goes to the end of \"Server Name Indication\" Some changes are attempts at clarifications, but I could have incorrectly clarified something, so feel free to reject changes.", "new_text": "identifies the packet as a Version Negotiation packet. QUIC version 1 uses version 0x00000001. Operators should expect to observe packets with other version numbers as a result of various Internet experiments, future standards, and greasing (RFC7801). All deployed versions are maintained in an IANA registry (see Section 22.2 of QUIC-TRANSPORT). source and destination connection ID: short and long packet"}
{"id": "q-en-ops-drafts-356bc3dd8152a33a0adb1fa5d208079eef979aa4b7e933f7229bb683c116f769", "old_text": "packet number: All packets except Version Negotiation and Retry packets have an associated packet number; however, this packet number is encrypted, and therefore not of use to on-path observers. The offset of the packet number is encoded in long headers, while it is implicit (depending on destination connection ID length) in short headers. The length of the packet number is cryptographically obfuscated.", "comments": "This goes to the end of \"Server Name Indication\" Some changes are attempts at clarifications, but I could have incorrectly clarified something, so feel free to reject changes.", "new_text": "packet number: All packets except Version Negotiation and Retry packets have an associated packet number; however, this packet number is encrypted, and therefore not of use to on-path observers. The offset of the packet number can be decoded in long headers, while it is implicit (depending on destination connection ID length) in short headers. The length of the packet number is cryptographically obfuscated."}
{"id": "q-en-ops-drafts-356bc3dd8152a33a0adb1fa5d208079eef979aa4b7e933f7229bb683c116f769", "old_text": "Server Initial datagram as shown in fig-server-initial typically containing three packets: an Initial packet with the beginning of the server's side of the TLS handshake, a Handshake packet with the rest of the server's side of the TLS handshake, and initial 1-RTT data, if present. The Client Completion datagram contains at least one Handshake packet", "comments": "This goes to the end of \"Server Name Indication\" Some changes are attempts at clarifications, but I could have incorrectly clarified something, so feel free to reject changes.", "new_text": "Server Initial datagram as shown in fig-server-initial typically containing three packets: an Initial packet with the beginning of the server's side of the TLS handshake, a Handshake packet with the rest of the server's side of the TLS handshake, and 1-RTT data, if present. The Client Completion datagram contains at least one Handshake packet"}
{"id": "q-en-ops-drafts-356bc3dd8152a33a0adb1fa5d208079eef979aa4b7e933f7229bb683c116f769", "old_text": "Even though transport parameters transmitted in the client's Initial packet are observable by the network, they cannot be modified by the network without risking connection failure. Further, the reply from the server cannot be observed, so observers on the network cannot know which parameters are actually in use.", "comments": "This goes to the end of \"Server Name Indication\" Some changes are attempts at clarifications, but I could have incorrectly clarified something, so feel free to reject changes.", "new_text": "Even though transport parameters transmitted in the client's Initial packet are observable by the network, they cannot be modified by the network without causing connection failure. Further, the reply from the server cannot be observed, so observers on the network cannot know which parameters are actually in use."}
{"id": "q-en-ops-drafts-356bc3dd8152a33a0adb1fa5d208079eef979aa4b7e933f7229bb683c116f769", "old_text": "3.2. This document focuses on QUIC version 1, and this section applies only to packets belonging to QUIC version 1 flows; for purposes of on-path observation, it assumes that these packets have been identified as such through the observation of a version number exchange as described above. Connection establishment uses Initial and Handshake packets containing a TLS handshake, and Retry packets that do not contain", "comments": "This goes to the end of \"Server Name Indication\" Some changes are attempts at clarifications, but I could have incorrectly clarified something, so feel free to reject changes.", "new_text": "3.2. This document focuses on QUIC version 1, and this Connection Confirmation section applies only to packets belonging to QUIC version 1 flows; for purposes of on-path observation, it assumes that these packets have been identified as such through the observation of a version number exchange as described above. Connection establishment uses Initial and Handshake packets containing a TLS handshake, and Retry packets that do not contain"}
{"id": "q-en-ops-drafts-356bc3dd8152a33a0adb1fa5d208079eef979aa4b7e933f7229bb683c116f769", "old_text": "detected using heuristics similar to those used to detect TLS over TCP. A client initiating a connection may also send data in 0-RTT packets directly after the Initial packet containing the TLS Client Hello. Since these packets may be reordered in the network, 0-RTT packets could be seen before the Initial packet. Note that in this version of QUIC, clients send Initial packets", "comments": "This goes to the end of \"Server Name Indication\" Some changes are attempts at clarifications, but I could have incorrectly clarified something, so feel free to reject changes.", "new_text": "detected using heuristics similar to those used to detect TLS over TCP. A client initiating a connection may also send data in 0-RTT packets directly after the Initial packet containing the TLS Client Hello. Since packets may be reordered or lost in the network, 0-RTT packets could be seen before the Initial packet. Note that in this version of QUIC, clients send Initial packets"}
{"id": "q-en-ops-drafts-356bc3dd8152a33a0adb1fa5d208079eef979aa4b7e933f7229bb683c116f769", "old_text": "3.4.1. If the ClientHello is not encrypted, it can be derived from the client's Initial packet by calculating the Initial secret to decrypt the packet payload and parsing the QUIC CRYPTO Frame containing the TLS ClientHello.", "comments": "This goes to the end of \"Server Name Indication\" Some changes are attempts at clarifications, but I could have incorrectly clarified something, so feel free to reject changes.", "new_text": "3.4.1. If the ClientHello is not encrypted, SNI can be derived from the client's Initial packet by calculating the Initial secret to decrypt the packet payload and parsing the QUIC CRYPTO Frame containing the TLS ClientHello."}
{"id": "q-en-ops-drafts-93c3288dc75d65e61b0a74c51928e86983020b39c6b37f5608cf42b11ea21b1d", "old_text": "An application that implements fallback needs to consider the security consequences. A fallback to TCP and TLS exposes control information to modification and manipulation in the network. Further downgrades to older TLS versions than used in QUIC, which is 1.3, might result in significantly weaker cryptographic protection. For example, the results of protocol negotiation RFC7301 only have confidentiality protection if TLS 1.3 is used. These applications must operate, perhaps with impaired functionality, in the absence of features provided by QUIC not present in the", "comments": "From PR : I'd also like to see something like : /In this section, we discuss those aspects of the QUIC transport protocol that/This section discusses those aspects of the QUIC transport protocol that/ /Here, we are concerned primarily with the unencrypted part of QUIC's wire image {{WIRE-IMAGE}}, which we define as the information available in the packet/This section is concerned primarily with the unencrypted part of QUIC's wire image {{WIRE-IMAGE}}, defined as the information available in the packet/ and /least four datagrams we'll call \"Client Initial\",/least four datagrams, labelled \"Client Initial\",/ and / 0-RTT data may also be seen in the Client Initial datagram, as shown in /When the client uses 0-RTT connection resumption, the Client Initial datagram can also carry 0-RTT data, as shown in/ and /typically around 10 packets/typically to around 10 packets/ and /of a connection after one of one endpoint/of a connection after an endpoint/ I'd prefer more like: /Though the guidance there holds, a particularly unwise behavior admits a handful of UDP packets and then makes a decision as to whether or not to filter later packets in the same connection. QUIC applications are encouraged to fail over to TCP if early packets do/ ... 
and I'll close PR .\n\u201cQUIC exposes some information to the network in the unencrypted part of the header, either before the encryption context is established or because the information is intended to be used by the network.\u201d Could this benefit form a cross-reference to manageability, where this topic is also discussed from a network perspective?\nThree places where the text might avoid confusion when read in future years: 1) \u201cFurther downgrades to older TLS versions than used in QUIC, which is 1.3, \u201c \u2026 will it always be 1.3, or should this be \u201cin QUIC [REF] is TLS 1.3\u201d 2) \u201cCurrently, QUIC only provides fully reliable stream transmission, which means that prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window. \u201c I'm not so happy with currently here: Is this the best way to say this, might it be better to say this rather than even hinting that future QUIC might only be reliable: \u201cWhen a QUIC endpoint provides provides fully reliable stream transmission, prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window. \u201c 3) I\u2019d prefer to see a ref here, since this something that the WG might later decide to work upon: \u201cCurrently QUIC only supports failover cases [Quic-Transport].\u201d\nThat mega paragraph has a bunch of good concepts, but it's all run together into a single blob, which makes it a bit hard to follow.", "new_text": "An application that implements fallback needs to consider the security consequences. A fallback to TCP and TLS exposes control information to modification and manipulation in the network. Further downgrades to TLS versions older than 1.3, which is the version used in QUIC version 1, might result in significantly weaker cryptographic protection. For example, the results of protocol negotiation RFC7301 only have confidentiality protection if TLS 1.3 is used. 
These applications must operate, perhaps with impaired functionality, in the absence of features provided by QUIC not present in the"}
{"id": "q-en-ops-drafts-93c3288dc75d65e61b0a74c51928e86983020b39c6b37f5608cf42b11ea21b1d", "old_text": "Priority handling of retransmissions can be implemented by the sender in the transport layer. QUIC recommends retransmitting lost data before new data, unless indicated differently by the application. Currently, QUIC only provides fully reliable stream transmission, which means that prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window. For partially reliable or unreliable streams, priority scheduling of retransmissions over data of higher-priority streams might not be desirable. For such streams, QUIC could either provide an explicit interface to control prioritization, or derive the prioritization decision from the reliability level of the stream. 4.3.", "comments": "From PR : I'd also like to see something like : /In this section, we discuss those aspects of the QUIC transport protocol that/This section discusses those aspects of the QUIC transport protocol that/ /Here, we are concerned primarily with the unencrypted part of QUIC's wire image {{WIRE-IMAGE}}, which we define as the information available in the packet/This section is concerned primarily with the unencrypted part of QUIC's wire image {{WIRE-IMAGE}}, defined as the information available in the packet/ and /least four datagrams we'll call \"Client Initial\",/least four datagrams, labelled \"Client Initial\",/ and / 0-RTT data may also be seen in the Client Initial datagram, as shown in /When the client uses 0-RTT connection resumption, the Client Initial datagram can also carry 0-RTT data, as shown in/ and /typically around 10 packets/typically to around 10 packets/ and /of a connection after one of one endpoint/of a connection after an endpoint/ I'd prefer more like: /Though the guidance there holds, a particularly unwise behavior admits a handful of UDP packets and then makes a decision as to whether or not to filter later 
packets in the same connection. QUIC applications are encouraged to fail over to TCP if early packets do/ ... and I'll close PR .\n\u201cQUIC exposes some information to the network in the unencrypted part of the header, either before the encryption context is established or because the information is intended to be used by the network.\u201d Could this benefit form a cross-reference to manageability, where this topic is also discussed from a network perspective?\nThree places where the text might avoid confusion when read in future years: 1) \u201cFurther downgrades to older TLS versions than used in QUIC, which is 1.3, \u201c \u2026 will it always be 1.3, or should this be \u201cin QUIC [REF] is TLS 1.3\u201d 2) \u201cCurrently, QUIC only provides fully reliable stream transmission, which means that prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window. \u201c I'm not so happy with currently here: Is this the best way to say this, might it be better to say this rather than even hinting that future QUIC might only be reliable: \u201cWhen a QUIC endpoint provides provides fully reliable stream transmission, prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window. \u201c 3) I\u2019d prefer to see a ref here, since this something that the WG might later decide to work upon: \u201cCurrently QUIC only supports failover cases [Quic-Transport].\u201d\nThat mega paragraph has a bunch of good concepts, but it's all run together into a single blob, which makes it a bit hard to follow.", "new_text": "Priority handling of retransmissions can be implemented by the sender in the transport layer. QUIC recommends retransmitting lost data before new data, unless indicated differently by the application. 
When a QUIC endpoint uses fully reliable streams for transmission, prioritization of retransmissions will be beneficial in most cases, filling in gaps and freeing up the flow control window. For partially reliable or unreliable streams, priority scheduling of retransmissions over data of higher-priority streams might not be desirable. For such streams, QUIC could either provide an explicit interface to control prioritization, or derive the prioritization decision from the reliability level of the stream. 4.3."}
{"id": "q-en-ops-drafts-93c3288dc75d65e61b0a74c51928e86983020b39c6b37f5608cf42b11ea21b1d", "old_text": "making further progress. At this stage connections can be terminated via idle timeout or explicit close; see sec-termination). An application that uses QUIC might communicate a cumulative stream limit but require the connection to be closed before the limit is reached. For example, to stop the server to perform scheduled maintenance. Immediate connection close causes abrupt closure of actively used streams. Depending on how an application uses QUIC streams, this could be undesirable or detrimental to behavior or performance. A more graceful closure technique is to stop sending increases to stream limits and allow the connection to naturally terminate once remaining streams are consumed. However, the period of time it takes to do so is dependent on the client and an unpredictable closing period might not fit application or operational needs. Applications using QUIC can be conservative with open stream limits in order to reduce the commitment and indeterminism. However, being overly conservative with stream limits affects stream concurrency. Balancing these aspects can be specific to applications and their deployments. Instead of relying on stream limits to avoid abrupt closure, an application-layer graceful close mechanism can be used to communicate the intention to explicitly close the connection at some future point. HTTP/3 provides such a mechanism using the GOAWAY frame. In HTTP/3, when the GOAWAY frame is received by a client, it stops opening new streams even if the cumulative stream limit would allow. Instead the client would create a new connection on which to open further streams. Once all streams are closed on the old connection, it can be terminated safely by a connection close or after expiration of the idle time out (see also sec-termination). 
5.", "comments": "From PR : I'd also like to see something like : /In this section, we discuss those aspects of the QUIC transport protocol that/This section discusses those aspects of the QUIC transport protocol that/ /Here, we are concerned primarily with the unencrypted part of QUIC's wire image {{WIRE-IMAGE}}, which we define as the information available in the packet/This section is concerned primarily with the unencrypted part of QUIC's wire image {{WIRE-IMAGE}}, defined as the information available in the packet/ and /least four datagrams we'll call \"Client Initial\",/least four datagrams, labelled \"Client Initial\",/ and / 0-RTT data may also be seen in the Client Initial datagram, as shown in /When the client uses 0-RTT connection resumption, the Client Initial datagram can also carry 0-RTT data, as shown in/ and /typically around 10 packets/typically to around 10 packets/ and /of a connection after one of one endpoint/of a connection after an endpoint/ I'd prefer more like: /Though the guidance there holds, a particularly unwise behavior admits a handful of UDP packets and then makes a decision as to whether or not to filter later packets in the same connection. QUIC applications are encouraged to fail over to TCP if early packets do/ ... 
and I'll close PR .\n\u201cQUIC exposes some information to the network in the unencrypted part of the header, either before the encryption context is established or because the information is intended to be used by the network.\u201d Could this benefit form a cross-reference to manageability, where this topic is also discussed from a network perspective?\nThree places where the text might avoid confusion when read in future years: 1) \u201cFurther downgrades to older TLS versions than used in QUIC, which is 1.3, \u201c \u2026 will it always be 1.3, or should this be \u201cin QUIC [REF] is TLS 1.3\u201d 2) \u201cCurrently, QUIC only provides fully reliable stream transmission, which means that prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window. \u201c I'm not so happy with currently here: Is this the best way to say this, might it be better to say this rather than even hinting that future QUIC might only be reliable: \u201cWhen a QUIC endpoint provides provides fully reliable stream transmission, prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window. \u201c 3) I\u2019d prefer to see a ref here, since this something that the WG might later decide to work upon: \u201cCurrently QUIC only supports failover cases [Quic-Transport].\u201d\nThat mega paragraph has a bunch of good concepts, but it's all run together into a single blob, which makes it a bit hard to follow.", "new_text": "making further progress. At this stage, connections can be terminated via idle timeout or explicit close (see sec-termination). An application that uses QUIC and has communicated a cumulative stream limit might require the connection to be closed before the limit is reached. For example, to allow the server to perform scheduled maintenance. Immediate connection close causes abrupt closure of actively used streams. 
Depending on how an application uses QUIC streams, this could be undesirable or detrimental to behavior or performance. A more graceful closure technique is to stop sending increases to stream limits and allow the connection to naturally terminate once remaining streams are consumed. However, the period of time it takes to do so is dependent on the peer, and an unpredictable closing period might not fit application or operational needs. Applications using QUIC can be conservative with open stream limits in order to reduce the commitment and indeterminism. However, being overly conservative with stream limits affects stream concurrency. Balancing these aspects can be specific to applications and their deployments. Instead of relying on stream limits to avoid abrupt closure, an application-layer graceful close mechanism can be used to communicate the intention to explicitly close the connection at some future point. HTTP/3 provides such a mechanism using the GOAWAY frame. In HTTP/3, when the GOAWAY frame is received by a client, it stops opening new streams even if the cumulative stream limit would allow it. Instead, the client would create a new connection on which to open further streams. Once all streams are closed on the old connection, it can be terminated safely by a connection close or after expiration of the idle timeout (see also sec-termination). 5."}
{"id": "q-en-ops-drafts-93c3288dc75d65e61b0a74c51928e86983020b39c6b37f5608cf42b11ea21b1d", "old_text": "length connection ID is also strongly recommended when migration is supported. Currently QUIC only supports the use of a single network path at a time, which enables failover use cases. Path validation is required so that endpoints validate paths before use to avoid address spoofing attacks. Path validation takes at least one RTT and congestion control will also be reset after path migration. Therefore migration usually has a performance impact. QUIC probing packets, which can be sent on multiple paths at once, are used to perform address validation as well as measure path", "comments": "From PR : I'd also like to see something like : /In this section, we discuss those aspects of the QUIC transport protocol that/This section discusses those aspects of the QUIC transport protocol that/ /Here, we are concerned primarily with the unencrypted part of QUIC's wire image {{WIRE-IMAGE}}, which we define as the information available in the packet/This section is concerned primarily with the unencrypted part of QUIC's wire image {{WIRE-IMAGE}}, defined as the information available in the packet/ and /least four datagrams we'll call \"Client Initial\",/least four datagrams, labelled \"Client Initial\",/ and / 0-RTT data may also be seen in the Client Initial datagram, as shown in /When the client uses 0-RTT connection resumption, the Client Initial datagram can also carry 0-RTT data, as shown in/ and /typically around 10 packets/typically to around 10 packets/ and /of a connection after one of one endpoint/of a connection after an endpoint/ I'd prefer more like: /Though the guidance there holds, a particularly unwise behavior admits a handful of UDP packets and then makes a decision as to whether or not to filter later packets in the same connection. QUIC applications are encouraged to fail over to TCP if early packets do/ ... 
and I'll close PR .\n\u201cQUIC exposes some information to the network in the unencrypted part of the header, either before the encryption context is established or because the information is intended to be used by the network.\u201d Could this benefit form a cross-reference to manageability, where this topic is also discussed from a network perspective?\nThree places where the text might avoid confusion when read in future years: 1) \u201cFurther downgrades to older TLS versions than used in QUIC, which is 1.3, \u201c \u2026 will it always be 1.3, or should this be \u201cin QUIC [REF] is TLS 1.3\u201d 2) \u201cCurrently, QUIC only provides fully reliable stream transmission, which means that prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window. \u201c I'm not so happy with currently here: Is this the best way to say this, might it be better to say this rather than even hinting that future QUIC might only be reliable: \u201cWhen a QUIC endpoint provides provides fully reliable stream transmission, prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window. \u201c 3) I\u2019d prefer to see a ref here, since this something that the WG might later decide to work upon: \u201cCurrently QUIC only supports failover cases [Quic-Transport].\u201d\nThat mega paragraph has a bunch of good concepts, but it's all run together into a single blob, which makes it a bit hard to follow.", "new_text": "length connection ID is also strongly recommended when migration is supported. The base specification of QUIC version 1 only supports the use of a single network path at a time, which enables failover use cases. Path validation is required so that endpoints validate paths before use to avoid address spoofing attacks. Path validation takes at least one RTT and congestion control will also be reset after path migration. 
Therefore, migration usually has a performance impact. QUIC probing packets, which can be sent on multiple paths at once, are used to perform address validation as well as measure path"}
{"id": "q-en-ops-drafts-93c3288dc75d65e61b0a74c51928e86983020b39c6b37f5608cf42b11ea21b1d", "old_text": "QUIC exposes some information to the network in the unencrypted part of the header, either before the encryption context is established or because the information is intended to be used by the network. QUIC has a long header that exposes some additional information (the version and the source connection ID), while the short header exposes only the destination connection ID. In QUIC version 1, the long header is used during connection establishment, while the short header is used for data transmission in an established connection. The connection ID can be zero length. Zero length connection IDs can be chosen on each endpoint individually, on any packet except the", "comments": "From PR : I'd also like to see something like : /In this section, we discuss those aspects of the QUIC transport protocol that/This section discusses those aspects of the QUIC transport protocol that/ /Here, we are concerned primarily with the unencrypted part of QUIC's wire image {{WIRE-IMAGE}}, which we define as the information available in the packet/This section is concerned primarily with the unencrypted part of QUIC's wire image {{WIRE-IMAGE}}, defined as the information available in the packet/ and /least four datagrams we'll call \"Client Initial\",/least four datagrams, labelled \"Client Initial\",/ and / 0-RTT data may also be seen in the Client Initial datagram, as shown in /When the client uses 0-RTT connection resumption, the Client Initial datagram can also carry 0-RTT data, as shown in/ and /typically around 10 packets/typically to around 10 packets/ and /of a connection after one of one endpoint/of a connection after an endpoint/ I'd prefer more like: /Though the guidance there holds, a particularly unwise behavior admits a handful of UDP packets and then makes a decision as to whether or not to filter later packets in the same connection. 
QUIC applications are encouraged to fail over to TCP if early packets do/ ... and I'll close PR .\n\u201cQUIC exposes some information to the network in the unencrypted part of the header, either before the encryption context is established or because the information is intended to be used by the network.\u201d Could this benefit form a cross-reference to manageability, where this topic is also discussed from a network perspective?\nThree places where the text might avoid confusion when read in future years: 1) \u201cFurther downgrades to older TLS versions than used in QUIC, which is 1.3, \u201c \u2026 will it always be 1.3, or should this be \u201cin QUIC [REF] is TLS 1.3\u201d 2) \u201cCurrently, QUIC only provides fully reliable stream transmission, which means that prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window. \u201c I'm not so happy with currently here: Is this the best way to say this, might it be better to say this rather than even hinting that future QUIC might only be reliable: \u201cWhen a QUIC endpoint provides provides fully reliable stream transmission, prioritization of retransmissions will be beneficial in most cases, by filling in gaps and freeing up the flow control window. \u201c 3) I\u2019d prefer to see a ref here, since this something that the WG might later decide to work upon: \u201cCurrently QUIC only supports failover cases [Quic-Transport].\u201d\nThat mega paragraph has a bunch of good concepts, but it's all run together into a single blob, which makes it a bit hard to follow.", "new_text": "QUIC exposes some information to the network in the unencrypted part of the header, either before the encryption context is established or because the information is intended to be used by the network. For more information on the manageability of QUIC, see also I-D.ietf-quic-manageability. 
QUIC has a long header that exposes some additional information (the version and the source connection ID), while the short header exposes only the destination connection ID. In QUIC version 1, the long header is used during connection establishment, while the short header is used for data transmission in an established connection. The connection ID can be zero length. Zero length connection IDs can be chosen on each endpoint individually, on any packet except the"}
{"id": "q-en-ops-drafts-ac2fe523a1c99edc3d713b48d9448651b0b94f8cd2d01f07b16201e890fd8428", "old_text": "default port for HTTP/3 QUIC-HTTP is UDP port 443, analogous to HTTP/1.1 or HTTP/2 over TLS over TCP. Additionally, Application-Layer Version Negotiation RFC7301 permits the client and server to negotiate which of several protocols will be used on a given connection. Therefore, multiple applications might be supported on a single UDP port based on the ALPN token offered. Applications using QUIC should register an ALPN token for use in the TLS handshake. Applications could define an alternate endpoint discovery mechanism to allow the usage of ports other than the default. For example, HTTP/3 (Sections 3.2 and 3.3 of QUIC-HTTP) specifies the use of HTTP Alternative Services RFC7838 for an HTTP origin to advertise the availability of an equivalent HTTP/3 endpoint on a certain UDP port by using the \"h3\" ALPN token. Note that HTTP/3's ALPN token (\"h3\") identifies not only the version of the application protocol, but also the version of QUIC itself; this approach allows unambiguous agreement between the endpoints on the protocol stack in use. Given the prevalence of the assumption in network management practice that a port number maps unambiguously to an application, the use of ports that cannot easily be mapped to a registered service name might", "comments": "Okay I made another pass on the text based on NAME comments, however, it didn't get any short but maybe it's slight more clear now. Please re-review! (Also I moved some paragraphs around to improve the text flow; so the PR now looks bigger than it is. 
So we also need to update the section title now, which is \"Port Selection and Application Endpoint Discovery\"?)\nApplicability Sec 8: >Note that HTTP/3's ALPN token (\"h3\") identifies not only the version of the application protocol, but also the version of QUIC >itself; this approach allows unambiguous agreement between the endpoints on the protocol stack in use. I was going to start a discussion about this at IETF 111. I think we should think hard about this; if there are a lot of versions with small differences to the application, this could result in a ridiculous cloud of RFCs.\nIs there a change to make to this document in WGLC, or is this a note to be careful in the future?\nI personally don't like this linkage, and think the ALPN should just define the application. While linking the version does smooth out version negotiation, it can result in an explosion in the ALPN registry. I'll start a thread on the list about this, since I suppose it can't wait for the discussion at 111 that I was planning.\nThe point this was discussed already at length and the manageability document is just reflecting what has been decided for h3. So if at all you'd need to raise an issue on the h3 draft but I guess that's also too late.\nI disagree that the text of the H3 draft mandates this: That is a long way from saying \"roll a new ALPN for each QUIC version\". Maybe everyone tacitly agrees we should roll a new ALPN, and I'm in the rough. But I'd like to confirm that.\nSee quicwg/base-drafts\nHow about rewording the sentence to reflect reality, roughly Since QUIC v1 (RFC9000) deferred defining a version negotiation mechanism. HTTP/3 (RFC-to-be-whatever) deferred deciding how it would map to other QUIC versions. HTTP/3 requires QUIC v1 and defines the ALPN token (\"h3\") to only apply to that version. This solves a versioning problem at the cost of future flexibility. 
Application protocol mappings written for QUIC v1 or other QUIC versions may be able to benefit from version negotiation mechanisms that did not exist during the development of HTTP/3 (RFC-to-be-whatever). Coupling QUIC version to ALPN tokens is one approach but not the only approach.\nI believe the text already says that this doesn't have to be applied by other application. The cited text is really only meant to described what was decided from h3 as an example. We can definitely reword this for reflect the consensus more clearly as proposed by Lucas.\nLucas, I would welcome your proposal as kicking the can down the road and not enshrining an approach that I think is bad. I would also like to continue the ALPN/versioning discussion, but it doesn't have to block the applicability draft.\nI can make it shorter. Maybe not as short as NAME would have it, but still shorter :)", "new_text": "default port for HTTP/3 QUIC-HTTP is UDP port 443, analogous to HTTP/1.1 or HTTP/2 over TLS over TCP. Given the prevalence of the assumption in network management practice that a port number maps unambiguously to an application, the use of ports that cannot easily be mapped to a registered service name might"}
{"id": "q-en-ops-drafts-ac2fe523a1c99edc3d713b48d9448651b0b94f8cd2d01f07b16201e890fd8428", "old_text": "network elements such as firewalls that use the port number for application identification. 9. QUIC supports connection migration by the client. If an IP address", "comments": "Okay I made another pass on the text based on NAME comments, however, it didn't get any short but maybe it's slight more clear now. Please re-review! (Also I moved some paragraphs around to improve the text flow; so the PR now looks bigger than it is. So we also need to update the section title now, which is \"Port Selection and Application Endpoint Discovery\"?)\nApplicability Sec 8: >Note that HTTP/3's ALPN token (\"h3\") identifies not only the version of the application protocol, but also the version of QUIC >itself; this approach allows unambiguous agreement between the endpoints on the protocol stack in use. I was going to start a discussion about this at IETF 111. I think we should think hard about this; if there are a lot of versions with small differences to the application, this could result in a ridiculous cloud of RFCs.\nIs there a change to make to this document in WGLC, or is this a note to be careful in the future?\nI personally don't like this linkage, and think the ALPN should just define the application. While linking the version does smooth out version negotiation, it can result in an explosion in the ALPN registry. I'll start a thread on the list about this, since I suppose it can't wait for the discussion at 111 that I was planning.\nThe point this was discussed already at length and the manageability document is just reflecting what has been decided for h3. So if at all you'd need to raise an issue on the h3 draft but I guess that's also too late.\nI disagree that the text of the H3 draft mandates this: That is a long way from saying \"roll a new ALPN for each QUIC version\". Maybe everyone tacitly agrees we should roll a new ALPN, and I'm in the rough. 
But I'd like to confirm that.\nSee quicwg/base-drafts\nHow about rewording the sentence to reflect reality, roughly Since QUIC v1 (RFC9000) deferred defining a version negotiation mechanism. HTTP/3 (RFC-to-be-whatever) deferred deciding how it would map to other QUIC versions. HTTP/3 requires QUIC v1 and defines the ALPN token (\"h3\") to only apply to that version. This solves a versioning problem at the cost of future flexibility. Application protocol mappings written for QUIC v1 or other QUIC versions may be able to benefit from version negotiation mechanisms that did not exist during the development of HTTP/3 (RFC-to-be-whatever). Coupling QUIC version to ALPN tokens is one approach but not the only approach.\nI believe the text already says that this doesn't have to be applied by other application. The cited text is really only meant to described what was decided from h3 as an example. We can definitely reword this for reflect the consensus more clearly as proposed by Lucas.\nLucas, I would welcome your proposal as kicking the can down the road and not enshrining an approach that I think is bad. I would also like to continue the ALPN/versioning discussion, but it doesn't have to block the applicability draft.\nI can make it shorter. Maybe not as short as NAME would have it, but still shorter :)", "new_text": "network elements such as firewalls that use the port number for application identification. Applications could define an alternate endpoint discovery mechanism to allow the usage of ports other than the default. For example, HTTP/3 (Sections 3.2 and 3.3 of QUIC-HTTP) specifies the use of HTTP Alternative Services RFC7838 for an HTTP origin to advertise the availability of an equivalent HTTP/3 endpoint on a certain UDP port by using the \"h3\" Application-Layer Protocol Negotiation (ALPN) RFC7301 token. ALPN permits the client and server to negotiate which of several protocols will be used on a given connection. 
Therefore, multiple applications might be supported on a single UDP port based on the ALPN token offered. Applications using QUIC are required to register an ALPN token for use in the TLS handshake. As QUIC version 1 deferred defining a complete version negotiation mechanism, HTTP/3 requires QUIC version 1 and defines the ALPN token (\"h3\") to only apply to that version. So far, no single approach has been selected for managing the use of different QUIC versions, either in HTTP/3 or in general. Application protocols that use QUIC need to consider how the protocol will manage different QUIC versions. Decisions for those protocols might be informed by choices made by other protocols, like HTTP/3. 9. QUIC supports connection migration by the client. If an IP address"}
{"id": "q-en-ops-drafts-e6f634b52e4772c2220f0ed7122529e50bdc5a7d38f7521bfe658e99a4f335b5", "old_text": "5. [EDITOR'S NOTE: give some guidance here about the steps an application should take; however this is still work in progress] 6. QUIC exposes some information to the network in the unencrypted part of the header, either before the encryption context is established,", "comments": "hopefully addresses issue\nI don't think \"the same port number as would be used for the same application over TCP\" is necessary general-purpose guidance. I would argue for something more like:\nCopying from URL There's no requirement that servers use a particular UDP port for HTTP/QUIC or any other QUIC traffic. Using Alt-Svc, the server is able to pick and advertise any port number and a compliant client will handle it just fine. That's already the case, and isn't part of this issue. updates the HTTP draft to highlight this, increasing the odds that implementations will test and support that case. This issue is to track that it might actually be desirable from a privacy standpoint for servers to pick arbitrary port numbers and perhaps even rotate them periodically (though that requires coordination with their Alt-Svc advertisements and lifetimes, which could be challenging) in order to make it more difficult for a network observer to classify traffic (and therefore more difficult to ossify). On the other hand, as we're wrestling with in each of these privacy/manageability debates, removing easy network visibility into the likely protocol by using arbitrary port numbers means that middleboxes will probably resort to other means of attempting to identify protocols and potentially doing it badly, which could result in even worse ossification. (E.g. indexing into the TLS ClientHello to find the ALPN list, then panicking on a different handshake protocol.) 
There's further discussion on the original issue, but this belongs in the ops drafts, not the protocol drafts.\nthis kind-of-but-not-really-duplicates , which I'll go ahead and close.\nActually, it's an exact duplicate, given that it points to the same issue. ;-) My bad.\nNAME can you check if PR addresses this issue sufficiently? At least for the applicability doc...\nNow also PR for the manageability statement.\nand are a (ready for -03) treatment of this; however, we should consider if we want to say more, e.g. such as general guidance / recommendations for using ALPN in QUIC mappings, for instance? Keeping this open to track.\ncreated new issue for ALPN", "new_text": "5. As QUIC is a general purpose transport protocol, there is no requirement that servers use a particular UDP port for QUIC. Instead, the same port number is used as would be used for the same application over TCP. In the case of HTTP, the expectation is that port 443 is used, which has already been registered for \"http protocol over TLS/SSL\". However, QUIC-HTTP also specifies the use of Alt-Svc for HTTP/QUIC discovery, which allows the server to use and announce a different port number. In general, port numbers serve two purposes: \"first, they provide a demultiplexing identifier to differentiate transport sessions between the same pair of endpoints, and second, they may also identify the application protocol and associated service to which processes connect\" RFC6335. Note that the assumption that an application can be identified in the network based on the port number is less true today, due to encapsulation, mechanisms for dynamic port assignment, and NATs. However, whenever a non-standard port is used that does not enable easy mapping to a registered service name, this can lead to blocking by network elements such as firewalls that rely on the port number as a first order of filtering. 6. 
[EDITOR'S NOTE: give some guidance here about the steps an application should take; however this is still work in progress] 7. QUIC exposes some information to the network in the unencrypted part of the header, either before the encryption context is established,"}
{"id": "q-en-ops-drafts-e6f634b52e4772c2220f0ed7122529e50bdc5a7d38f7521bfe658e99a4f335b5", "old_text": "Connection IDs), the short header exposes at most only a single Connection ID. 6.1. QUIC supports a server-generated Connection ID, transmitted to the client during connection establishment (see Section 6.1 of QUIC).", "comments": "hopefully addresses issue\nI don't think \"the same port number as would be used for the same application over TCP\" is necessary general-purpose guidance. I would argue for something more like:\nCopying from URL There's no requirement that servers use a particular UDP port for HTTP/QUIC or any other QUIC traffic. Using Alt-Svc, the server is able to pick and advertise any port number and a compliant client will handle it just fine. That's already the case, and isn't part of this issue. updates the HTTP draft to highlight this, increasing the odds that implementations will test and support that case. This issue is to track that it might actually be desirable from a privacy standpoint for servers to pick arbitrary port numbers and perhaps even rotate them periodically (though that requires coordination with their Alt-Svc advertisements and lifetimes, which could be challenging) in order to make it more difficult for a network observer to classify traffic (and therefore more difficult to ossify). On the other hand, as we're wrestling with in each of these privacy/manageability debates, removing easy network visibility into the likely protocol by using arbitrary port numbers means that middleboxes will probably resort to other means of attempting to identify protocols and potentially doing it badly, which could result in even worse ossification. (E.g. indexing into the TLS ClientHello to find the ALPN list, then panicking on a different handshake protocol.) 
There's further discussion on the original issue, but this belongs in the ops drafts, not the protocol drafts.\nthis kind-of-but-not-really-duplicates , which I'll go ahead and close.\nActually, it's an exact duplicate, given that it points to the same issue. ;-) My bad.\nNAME can you check if PR addresses this issue sufficiently? At least for the applicability doc...\nNow also PR for the manageability statement.\nand are a (ready for -03) treatment of this; however, we should consider if we want to say more, e.g. such as general guidance / recommendations for using ALPN in QUIC mappings, for instance? Keeping this open to track.\ncreated new issue for ALPN", "new_text": "Connection IDs), the short header exposes at most only a single Connection ID. 7.1. QUIC supports a server-generated Connection ID, transmitted to the client during connection establishment (see Section 6.1 of QUIC)."}
{"id": "q-en-ops-drafts-e6f634b52e4772c2220f0ed7122529e50bdc5a7d38f7521bfe658e99a4f335b5", "old_text": "The best way to obscure an encoding is to appear random to observers, which is most rigorously achieved with encryption. 6.2. While sufficiently robust connection ID generation schemes will mitigate linkability issues, they do not provide full protection.", "comments": "hopefully addresses issue\nI don't think \"the same port number as would be used for the same application over TCP\" is necessary general-purpose guidance. I would argue for something more like:\nCopying from URL There's no requirement that servers use a particular UDP port for HTTP/QUIC or any other QUIC traffic. Using Alt-Svc, the server is able to pick and advertise any port number and a compliant client will handle it just fine. That's already the case, and isn't part of this issue. updates the HTTP draft to highlight this, increasing the odds that implementations will test and support that case. This issue is to track that it might actually be desirable from a privacy standpoint for servers to pick arbitrary port numbers and perhaps even rotate them periodically (though that requires coordination with their Alt-Svc advertisements and lifetimes, which could be challenging) in order to make it more difficult for a network observer to classify traffic (and therefore more difficult to ossify). On the other hand, as we're wrestling with in each of these privacy/manageability debates, removing easy network visibility into the likely protocol by using arbitrary port numbers means that middleboxes will probably resort to other means of attempting to identify protocols and potentially doing it badly, which could result in even worse ossification. (E.g. indexing into the TLS ClientHello to find the ALPN list, then panicking on a different handshake protocol.) 
There's further discussion on the original issue, but this belongs in the ops drafts, not the protocol drafts.\nthis kind-of-but-not-really-duplicates , which I'll go ahead and close.\nActually, it's an exact duplicate, given that it points to the same issue. ;-) My bad.\nNAME can you check if PR addresses this issue sufficiently? At least for the applicability doc...\nNow also PR for the manageability statement.\nand are a (ready for -03) treatment of this; however, we should consider if we want to say more, e.g. such as general guidance / recommendations for using ALPN in QUIC mappings, for instance? Keeping this open to track.\ncreated new issue for ALPN", "new_text": "The best way to obscure an encoding is to appear random to observers, which is most rigorously achieved with encryption. 7.2. While sufficiently robust connection ID generation schemes will mitigate linkability issues, they do not provide full protection."}
{"id": "q-en-ops-drafts-e6f634b52e4772c2220f0ed7122529e50bdc5a7d38f7521bfe658e99a4f335b5", "old_text": "migrations to attempt to increase the number of simultaneous migrations at a given time, or through other means. 6.3. QUIC provides a Server Retry packet that can be sent by a server in response to the Client Initial packet. The server may choose a new", "comments": "hopefully addresses issue\nI don't think \"the same port number as would be used for the same application over TCP\" is necessary general-purpose guidance. I would argue for something more like:\nCopying from URL There's no requirement that servers use a particular UDP port for HTTP/QUIC or any other QUIC traffic. Using Alt-Svc, the server is able to pick and advertise any port number and a compliant client will handle it just fine. That's already the case, and isn't part of this issue. updates the HTTP draft to highlight this, increasing the odds that implementations will test and support that case. This issue is to track that it might actually be desirable from a privacy standpoint for servers to pick arbitrary port numbers and perhaps even rotate them periodically (though that requires coordination with their Alt-Svc advertisements and lifetimes, which could be challenging) in order to make it more difficult for a network observer to classify traffic (and therefore more difficult to ossify). On the other hand, as we're wrestling with in each of these privacy/manageability debates, removing easy network visibility into the likely protocol by using arbitrary port numbers means that middleboxes will probably resort to other means of attempting to identify protocols and potentially doing it badly, which could result in even worse ossification. (E.g. indexing into the TLS ClientHello to find the ALPN list, then panicking on a different handshake protocol.) 
There's further discussion on the original issue, but this belongs in the ops drafts, not the protocol drafts.\nthis kind-of-but-not-really-duplicates , which I'll go ahead and close.\nActually, it's an exact duplicate, given that it points to the same issue. ;-) My bad.\nNAME can you check if PR addresses this issue sufficiently? At least for the applicability doc...\nNow also PR for the manageability statement.\nand are a (ready for -03) treatment of this; however, we should consider if we want to say more, e.g. such as general guidance / recommendations for using ALPN in QUIC mappings, for instance? Keeping this open to track.\ncreated new issue for ALPN", "new_text": "migrations to attempt to increase the number of simultaneous migrations at a given time, or through other means. 7.3. QUIC provides a Server Retry packet that can be sent by a server in response to the Client Initial packet. The server may choose a new"}
{"id": "q-en-ops-drafts-e6f634b52e4772c2220f0ed7122529e50bdc5a7d38f7521bfe658e99a4f335b5", "old_text": "that the load balancer will redirect the next Client Initial packet to a different server in that pool. 7. Versioning in QUIC may change the protocol's behavior completely, except for the meaning of a few header fields that have been declared", "comments": "hopefully addresses issue\nI don't think \"the same port number as would be used for the same application over TCP\" is necessary general-purpose guidance. I would argue for something more like:\nCopying from URL There's no requirement that servers use a particular UDP port for HTTP/QUIC or any other QUIC traffic. Using Alt-Svc, the server is able to pick and advertise any port number and a compliant client will handle it just fine. That's already the case, and isn't part of this issue. updates the HTTP draft to highlight this, increasing the odds that implementations will test and support that case. This issue is to track that it might actually be desirable from a privacy standpoint for servers to pick arbitrary port numbers and perhaps even rotate them periodically (though that requires coordination with their Alt-Svc advertisements and lifetimes, which could be challenging) in order to make it more difficult for a network observer to classify traffic (and therefore more difficult to ossify). On the other hand, as we're wrestling with in each of these privacy/manageability debates, removing easy network visibility into the likely protocol by using arbitrary port numbers means that middleboxes will probably resort to other means of attempting to identify protocols and potentially doing it badly, which could result in even worse ossification. (E.g. indexing into the TLS ClientHello to find the ALPN list, then panicking on a different handshake protocol.) 
There's further discussion on the original issue, but this belongs in the ops drafts, not the protocol drafts.\nthis kind-of-but-not-really-duplicates , which I'll go ahead and close.\nActually, it's an exact duplicate, given that it points to the same issue. ;-) My bad.\nNAME can you check if PR addresses this issue sufficiently? At least for the applicability doc...\nNow also PR for the manageability statement.\nand are a (ready for -03) treatment of this; however, we should consider if we want to say more, e.g. such as general guidance / recommendations for using ALPN in QUIC mappings, for instance? Keeping this open to track.\ncreated new issue for ALPN", "new_text": "that the load balancer will redirect the next Client Initial packet to a different server in that pool. 8. Versioning in QUIC may change the protocol's behavior completely, except for the meaning of a few header fields that have been declared"}
{"id": "q-en-ops-drafts-e6f634b52e4772c2220f0ed7122529e50bdc5a7d38f7521bfe658e99a4f335b5", "old_text": "specification QUIC-TLS. This split is performed to enable light- weight versioning with different cryptographic handshakes. 8. This document has no actions for IANA. 9. See the security considerations in QUIC and QUIC-TLS; the security considerations for the underlying transport protocol are relevant for", "comments": "hopefully addresses issue\nI don't think \"the same port number as would be used for the same application over TCP\" is necessary general-purpose guidance. I would argue for something more like:\nCopying from URL There's no requirement that servers use a particular UDP port for HTTP/QUIC or any other QUIC traffic. Using Alt-Svc, the server is able to pick and advertise any port number and a compliant client will handle it just fine. That's already the case, and isn't part of this issue. updates the HTTP draft to highlight this, increasing the odds that implementations will test and support that case. This issue is to track that it might actually be desirable from a privacy standpoint for servers to pick arbitrary port numbers and perhaps even rotate them periodically (though that requires coordination with their Alt-Svc advertisements and lifetimes, which could be challenging) in order to make it more difficult for a network observer to classify traffic (and therefore more difficult to ossify). On the other hand, as we're wrestling with in each of these privacy/manageability debates, removing easy network visibility into the likely protocol by using arbitrary port numbers means that middleboxes will probably resort to other means of attempting to identify protocols and potentially doing it badly, which could result in even worse ossification. (E.g. indexing into the TLS ClientHello to find the ALPN list, then panicking on a different handshake protocol.) 
There's further discussion on the original issue, but this belongs in the ops drafts, not the protocol drafts.\nthis kind-of-but-not-really-duplicates , which I'll go ahead and close.\nActually, it's an exact duplicate, given that it points to the same issue. ;-) My bad.\nNAME can you check if PR addresses this issue sufficiently? At least for the applicability doc...\nNow also PR for the manageability statement.\nand are a (ready for -03) treatment of this; however, we should consider if we want to say more, e.g. such as general guidance / recommendations for using ALPN in QUIC mappings, for instance? Keeping this open to track.\ncreated new issue for ALPN", "new_text": "specification QUIC-TLS. This split is performed to enable light- weight versioning with different cryptographic handshakes. 9. This document has no actions for IANA. 10. See the security considerations in QUIC and QUIC-TLS; the security considerations for the underlying transport protocol are relevant for"}
{"id": "q-en-ops-drafts-e6f634b52e4772c2220f0ed7122529e50bdc5a7d38f7521bfe658e99a4f335b5", "old_text": "connection SHOULD fail to allow the application to explicitly handle fallback to a less-secure alternative. See fallback. 10. This work is partially supported by the European Commission under Horizon 2020 grant agreement no. 688421 Measurement and Architecture", "comments": "hopefully addresses issue\nI don't think \"the same port number as would be used for the same application over TCP\" is necessary general-purpose guidance. I would argue for something more like:\nCopying from URL There's no requirement that servers use a particular UDP port for HTTP/QUIC or any other QUIC traffic. Using Alt-Svc, the server is able to pick and advertise any port number and a compliant client will handle it just fine. That's already the case, and isn't part of this issue. updates the HTTP draft to highlight this, increasing the odds that implementations will test and support that case. This issue is to track that it might actually be desirable from a privacy standpoint for servers to pick arbitrary port numbers and perhaps even rotate them periodically (though that requires coordination with their Alt-Svc advertisements and lifetimes, which could be challenging) in order to make it more difficult for a network observer to classify traffic (and therefore more difficult to ossify). On the other hand, as we're wrestling with in each of these privacy/manageability debates, removing easy network visibility into the likely protocol by using arbitrary port numbers means that middleboxes will probably resort to other means of attempting to identify protocols and potentially doing it badly, which could result in even worse ossification. (E.g. indexing into the TLS ClientHello to find the ALPN list, then panicking on a different handshake protocol.) 
There's further discussion on the original issue, but this belongs in the ops drafts, not the protocol drafts.\nthis kind-of-but-not-really-duplicates , which I'll go ahead and close.\nActually, it's an exact duplicate, given that it points to the same issue. ;-) My bad.\nNAME can you check if PR addresses this issue sufficiently? At least for the applicability doc...\nNow also PR for the manageability statement.\nand are a (ready for -03) treatment of this; however, we should consider if we want to say more, e.g. such as general guidance / recommendations for using ALPN in QUIC mappings, for instance? Keeping this open to track.\ncreated new issue for ALPN", "new_text": "connection SHOULD fail to allow the application to explicitly handle fallback to a less-secure alternative. See fallback. 11. This work is partially supported by the European Commission under Horizon 2020 grant agreement no. 688421 Measurement and Architecture"}
{"id": "q-en-ops-drafts-93f72ab78c71d77c35a7574ab92fd94cec2c8bb35f98fdebb0061742adbf0378", "old_text": "QUIC is an end-to-end transport protocol. No information in the protocol header, even that which can be inspected, is mutable by the network. This is achieved through integrity protection of the wire image WIRE-IMAGE. Encryption of most control signaling means that less information is visible to the network than is the case with TCP. Integrity protection can also simplify troubleshooting, because none of the nodes on the network path can modify transport layer", "comments": "URL , can we be more specific here about the control signal we are talking about is QUIC protocol control signal not really other kind of control signal that the network might have? also here it is compared with TCP, should it also mention TCP with TLS?\nI added the term \"transport-layer\" in PR . Does that help. I don't think TLS needs to be mentioned here.", "new_text": "QUIC is an end-to-end transport protocol. No information in the protocol header, even that which can be inspected, is mutable by the network. This is achieved through integrity protection of the wire image WIRE-IMAGE. Encryption of most transport-layer control signaling means that less information is visible to the network than is the case with TCP. Integrity protection can also simplify troubleshooting, because none of the nodes on the network path can modify transport layer"}
{"id": "q-en-ops-drafts-c0c5df788bf6a08db63818d4a842e129836621d2ed065e2fe0c6577f8bc374a4", "old_text": "it describes what is and is not possible with the QUIC transport protocol as defined. QUIC-specific terminology used in this document is defined in QUIC- TRANSPORT.", "comments": "hi NAME NAME NAME PTAnotherL, reviewed the thread and I do think an additional 9065 ref here would be helpful.", "new_text": "it describes what is and is not possible with the QUIC transport protocol as defined. This document focuses solely on network management practices that observe traffic on the wire. Replacement of troubleshooting based on observation with active measurement techniques, for example, is therefore out of scope. A more generalized treatment of network management operations on encrypted transports is given in RFC9065. QUIC-specific terminology used in this document is defined in QUIC- TRANSPORT."}
{"id": "q-en-ops-drafts-69ee0dce4b3856f34254d98c829f756309a6dac682aa4260eba2376416f723c4", "old_text": "2.4. New QUIC connections are established using a handshake, which is distinguishable on the wire and contains some information that can be passively observed. To illustrate the information visible in the QUIC wire image during the handshake, we first show the general communication pattern", "comments": "Could the first 1200-byte QUIC packet be used to suspect a QUIC connection establishment ? [MK] Yes, I think if you look at the first packet as a whole you can be reasonably sure that you look at a quic packet. I guess this is an assumption throughout the whole document, as section 2.4 says: \"New QUIC connections are established using a handshake, which is distinguishable on the wire and contains some information that can be passively observed.\" [MK] Do we need to say more? EV> hummm I guess not even if the text in 2.4 is rather generic while the point in 3.1 is really specific. Perhaps adding which handshake properties are 'distinguishable' already in section 2.4 ?\nproposes to fix this with a forward reference.", "new_text": "2.4. New QUIC connections are established using a handshake, which is distinguishable on the wire (see sec-identifying for details), and contains some information that can be passively observed. To illustrate the information visible in the QUIC wire image during the handshake, we first show the general communication pattern"}
{"id": "q-en-ops-drafts-b9d12fccbf3e277e6fc0b03944a7164422502b5d0c890628ebae0f9b16b99bb9", "old_text": "The client's TLS ClientHello may contain a Server Name Indication (SNI) RFC6066 extension, by which the client reveals the name of the server it intends to connect to, in order to allow the server to present a certificate based on that name. It may also contain an Application-Layer Protocol Negotiation (ALPN) RFC7301 extension, by which the client exposes the names of application-layer protocols it supports; an observer can deduce that one of those protocols will be used if the connection continues. Work is currently underway in the TLS working group to encrypt the contents of the ClientHello in TLS 1.3 TLS-ECH. This would make SNI-", "comments": "\"Here we assume a bidirectional observer\", let's also state that there is no section about unidirectional observer ?\nNot sure if we add something because I think it okay to state assumptions...\n\"unless noted\" states that we'll note when something is unidirectionally observable, I'm reviewing the language now to see if we missed any...", "new_text": "The client's TLS ClientHello may contain a Server Name Indication (SNI) RFC6066 extension, by which the client reveals the name of the server it intends to connect to, in order to allow the server to present a certificate based on that name. SNI information is available to unidirectional observers on the client-to-server path, if present. The TLS ClientHello may also contain an Application-Layer Protocol Negotiation (ALPN) RFC7301 extension, by which the client exposes the names of application-layer protocols it supports; an observer can deduce that one of those protocols will be used if the connection continues. Work is currently underway in the TLS working group to encrypt the contents of the ClientHello in TLS 1.3 TLS-ECH. This would make SNI-"}
{"id": "q-en-ops-drafts-b9d12fccbf3e277e6fc0b03944a7164422502b5d0c890628ebae0f9b16b99bb9", "old_text": "recognition of the handshake, as illustrated in handshake. It can also be inferred during the flow's lifetime, if the endpoints use the spin bit facility described below and in Section 17.3.1 of QUIC- TRANSPORT. 3.8.1.", "comments": "\"Here we assume a bidirectional observer\", let's also state that there is no section about unidirectional observer ?\nNot sure if we add something because I think it okay to state assumptions...\n\"unless noted\" states that we'll note when something is unidirectionally observable, I'm reviewing the language now to see if we missed any...", "new_text": "recognition of the handshake, as illustrated in handshake. It can also be inferred during the flow's lifetime, if the endpoints use the spin bit facility described below and in Section 17.3.1 of QUIC- TRANSPORT. RTT measurement is available to unidirectional observers when the spin bit is enabled. 3.8.1."}
{"id": "q-en-ops-drafts-3794fe95a854c2179a6ea732dd396779ddfc438413854b02ac7ea99bed04f701", "old_text": "for other control processes, and a short header that may be used for data transmission in an established connection. While the long header always exposes some information (such as the version and Connection IDs), the short header only optionally exposes a single Connection ID. Given that exposing this information may make it possible to associate multiple addresses with a single client during rebinding, which has privacy implications, an application may indicate to not support exposure of certain information after the handshake. Specifically, an application that has additional information that the client is not behind a NAT and the server is not behind a load balancer, and therefore it is unlikely that the addresses will be re- bound, may indicate to the transport that is wishes to not expose a Connection ID. 6.1. QUIC supports a server-generated Connection ID, transmitted to the client during connection establishment (see Section 6.1 of QUIC). Servers behind load balancers should propose a Connection ID during the handshake, encoding the identity of the server or information about its load balancing pool, in order to support stateless load balancing. Once the server generates a Connection ID that encodes its identity, every CDN load balancer would be able to forward the packets to that server without needing information about every specific flow it is forwarding. Server-generated Connection IDs must not encode any information other that that needed to route packets to the appropriate backend server(s): typically the identity of the backend server or pool of servers, if the data-center's load balancing system keeps \"local\" state of all flows itself. Care must be exercised to ensure that the information encoded in the Connection ID is not sufficient to identify unique end users. 
Note that by encoding routing information in the Connection ID, load balancers open up a new attack vector that allows bad actors to direct traffic at a specific backend server or pool. It is therefore recommended that Server-Generated Connection ID includes a cryptographic MAC that the load balancer pool server is able to identify and discard packets featuring an invalid MAC. 6.2. QUIC provides a Server Retry packet that can be sent by a server in response to the Client Initial packet. The server may choose a new Connection ID in that packet and the client will retry by sending", "comments": "Tries to I resisted the urge to reference the QUIC-LB draft, which isn't adopted. The major substantive change is that I removed the prohibition on non-routing information in the CID. I'm happy to discuss this if people have object.\nLGTM. will go ahead and land this, then fix up the lint issues.\nNAME missed this during the inconsistency, will fix in working copy\n(this is done)\nBTW, this was meant to address\nMoving URL to the ops repo.\nsee also -- in any case the text that is there now is not great.\nNAME can you please review the existing text (orginially provided by NAME and state what else is needed? Please have also a look at issue\nThis is a bit OBE, but I think there might still be value in discussing how a server implementer should think about connection IDs. Fortunately, NAME has been doing a bunch of work on this in his load balancer draft. I would suggest that we leave this issue open, and either point to the LB draft or use text from it.\nfixed this\nWrite section to provide guidance; maybe in the same section than guidance in port selection in general", "new_text": "for other control processes, and a short header that may be used for data transmission in an established connection. While the long header always exposes some information (such as the version and Connection IDs), the short header exposes at most only a single Connection ID. 6.1. 
QUIC supports a server-generated Connection ID, transmitted to the client during connection establishment (see Section 6.1 of QUIC). Servers behind load balancers may need to propose a Connection ID during the handshake, encoding the identity of the server or information about its load balancing pool, in order to support stateless load balancing. Once the server generates a Connection ID that encodes its identity, every CDN load balancer would be able to forward the packets to that server without retaining connection state. Server-generated connection IDs should seek to obscure any encoding of routing identities or any other information. Exposing the server mapping would allow linkage of multiple IP addresses to the same host if the server also supports migration. Furthermore, this opens an attack vector on specific servers or pools. The best way to obscure an encoding is to appear random to observers, which is most rigorously achieved with encryption. 6.2. While sufficiently robust connection ID generation schemes will mitigate linkability issues, they do not provide full protection. Analysis of the lifetimes of six-tuples (source and destination addresses as well as the migrated CID) may expose these links anyway. In the limit where connection migration in a server pool is rare, it is trivial for an observer to associate two connection IDs. Conversely, in the opposite limit where every server handles multiple simultaneous migrations, even an exposed server mapping may be insufficient information. The most efficient mitigation for these attacks is operational, either by using a load balancing architecture that loads more flows onto a single server-side address, by coordinating the timing of migrations to attempt to increase the number of simultaneous migrations at a given time, or through other means. 6.3. QUIC provides a Server Retry packet that can be sent by a server in response to the Client Initial packet. 
The server may choose a new Connection ID in that packet and the client will retry by sending"}
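The record above describes obscuring any routing identity encoded in a server-generated Connection ID so that the CID appears random to observers while remaining invertible by the load balancer. A minimal sketch of that idea follows; this is a hypothetical illustration (the key, nonce length, and XOR-mask construction are assumptions of this sketch, not a scheme from the draft):

```python
import hashlib
import hmac
import os

# Illustrative only: a key shared between the load balancer and its servers.
LB_KEY = bytes(16)

def make_cid(server_id: bytes, nonce_len: int = 4) -> bytes:
    """Hide a routing identity inside a Connection ID: XOR the server id
    with a keystream derived from a fresh random nonce, so the CID looks
    random to on-path observers but can be inverted by the load balancer."""
    nonce = os.urandom(nonce_len)
    keystream = hmac.new(LB_KEY, nonce, hashlib.sha256).digest()
    masked = bytes(a ^ b for a, b in zip(server_id, keystream))
    return nonce + masked

def decode_cid(cid: bytes, sid_len: int, nonce_len: int = 4) -> bytes:
    """Recover the routing identity at the load balancer."""
    nonce = cid[:nonce_len]
    masked = cid[nonce_len:nonce_len + sid_len]
    keystream = hmac.new(LB_KEY, nonce, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(masked, keystream))
```

Because the nonce is fresh per connection, two CIDs routing to the same server share no visible pattern, which is the property the revised text asks for.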
{"id": "q-en-ops-drafts-3794fe95a854c2179a6ea732dd396779ddfc438413854b02ac7ea99bed04f701", "old_text": "that the load balancer will redirect the next Client Initial packet to a different server in that pool. 6.3. QUIC provides for multiple Server and Client Connection IDs to be used simultaneously by a given connection, which allows seamless connection migration when one of the endpoints changes IP address and/or UDP port. Section 6.11.5 of QUIC describes how to use this facility to reduce the risk of exposing a link among these addresses to observers on the network. However, analysis of the lifetimes of six-tuples (source and destination addresses as well as the migrated CID) may expose these links anyway. In practice, a finite set of flows will be undergoing migration within any one time window as seen from any given observation point in the network, and any migration must keep at least one endpoint address constant during the migration. Because of this, a key insight here is that this finite set of flows represents the anonymity set for any one flow undergoing migration within it. For endpoints with low volume, this anonymity set will be necessarily small, so there remains a significant risk of linkage exposure through timing-based analysis. The most efficient mitigation for these attacks is operational, by increasing the size of the anonymity set as seen from a passive observer in the Internet, either by using a load balancing architecture that loads more flows onto a single server-side address, by coordinating the timing of migrations to attempt to increase the number of simultaneous migrations at a given time, or through other means. 7. Versioning in QUIC may change the protocol's behavior completely,", "comments": "Tries to I resisted the urge to reference the QUIC-LB draft, which isn't adopted. The major substantive change is that I removed the prohibition on non-routing information in the CID. I'm happy to discuss this if people have object.\nLGTM. 
will go ahead and land this, then fix up the lint issues.\nNAME missed this during the inconsistency, will fix in working copy\n(this is done)\nBTW, this was meant to address\nMoving URL to the ops repo.\nsee also -- in any case the text that is there now is not great.\nNAME can you please review the existing text (orginially provided by NAME and state what else is needed? Please have also a look at issue\nThis is a bit OBE, but I think there might still be value in discussing how a server implementer should think about connection IDs. Fortunately, NAME has been doing a bunch of work on this in his load balancer draft. I would suggest that we leave this issue open, and either point to the LB draft or use text from it.\nfixed this\nWrite section to provide guidance; maybe in the same section than guidance in port selection in general", "new_text": "that the load balancer will redirect the next Client Initial packet to a different server in that pool. 7. Versioning in QUIC may change the protocol's behavior completely,"}
{"id": "q-en-ops-drafts-edd515d6b30f8fb507ae16bf79205ba4a77779b0b8194faf6fa59514ef9fecc5", "old_text": "semantics as needed for HTTP/2 QUIC-HTTP. Based on current deployment practices, QUIC is encapsulated in UDP and encrypted by default. The current version of QUIC integrates TLS QUIC-TLS to encrypt all payload data and most control information. Given QUIC is an end-to-end transport protocol, all information in the protocol header, even that which can be inspected, is is not meant to be mutable by the network, and is therefore integrity-protected to the extent possible. This document provides guidance for network operation on the management of QUIC traffic. This includes guidance on how to", "comments": "Address .\nThe section on \"Passive network performance measurement and troubleshooting\" is probably a bit too pessimistic. It's probably worth pointing out that one part of network troubleshooting -- \"find out which box to blame\" -- is substantially easier. Thanks to integrity protection, most middleboxes, can drop, delay/rate limit, and corrupt, and that's about it. These are fairly easy behaviors to diagnose. I've had some conversations with customers afraid of not having their TCP headers anymore, and I believe this fear is largely misguided. If people agree this belongs in the draft, I can file a PR.\nI agree the wording is very pessimistic. However, I don't think your observation belongs in this section because it's about measurement of performance metrics. 
There is a sentence in the introduction: \"Given QUIC is an end-to-end transport protocol, all information in the protocol header, even that which can be inspected, is is not meant to be mutable by the network, and is therefore integrity-protected to the extent possible.\" Maybe it makes sense to add another sentence right there to also emphasis the positive effect of that, like: \"While more limited information compared to TCP is visible to the network, integrity protection can also make troubleshooting easier as none of the nodes on the network path can modify the transport layer information.\"\nLGTM, will fix Jana's suggestion manually.One nit, but LGTM. The nit is to avoid ambiguity that might arise from \"transport\" -- one might read that QUIC integrity-protects UDP headers as well, since those are technically transport.", "new_text": "semantics as needed for HTTP/2 QUIC-HTTP. Based on current deployment practices, QUIC is encapsulated in UDP and encrypted by default. The current version of QUIC integrates TLS QUIC-TLS to encrypt all payload data and most control information. Given that QUIC is an end-to-end transport protocol, all information in the protocol header, even that which can be inspected, is not meant to be mutable by the network, and is therefore integrity-protected. While less information is visible to the network than for TCP, integrity protection can also simplify troubleshooting because none of the nodes on the network path can modify the transport layer information. This document provides guidance for network operation on the management of QUIC traffic. This includes guidance on how to"}
{"id": "q-en-ops-drafts-6c0914863c12431ef23782dc6468ff4fcabe2f4952897e6311902d9561ec87d2", "old_text": "3.1. [EDITOR'S NOTE: Jana noted at the interim in Paris that we should point out that applications need to be re-thought slightly to get the benefits of zero RTT. Add a little text here to discuss this and why it's worth the effort, before we go straight into the dragons.] 3.2. However, data in the frames contained in 0-RTT packets of a such a connection must be treated specially by the application layer. Replay of these packets can cause the data to processed twice. This is further described in HTTP-RETRY. 0-RTT data also does not benefit from perfect forward secrecy (PFS). Applications that cannot treat data that may appear in a 0-RTT connection establishment as idempotent MUST NOT use 0-RTT establishment. For this reason the QUIC transport SHOULD provide an interface for the application to indicate if 0-RTT support is in general desired or a way to indicate whether data is idempotent, and/ or whether PFS is a hard requirement 3.3. [EDITOR'S NOTE: guidance/recommendation to us 0-RTT session resumption rather then sending keep-alives?] 4.", "comments": "NAME -- you mentioned wanting to have a more balanced view of zero RTT in the applicability draft: is this something like what you had in mind?\nI'll go ahead and merge this to have one less editor's note in the ietf-00 submission; file an issue if this doesn't address the \"thinking in 0rtt\" point adequately.\nJana noted at the interim in Paris that we should point out that applications need to be re-thought slightly to get the benefits of zero RTT. Add a little text to discuss this and why it's worth the effort, before we go straight into the dragons.", "new_text": "3.1. A transport protocol that provides 0-RTT connection establishment to recently contacted servers is qualitatively different than one that does not from the point of view of the application using it. 
Relative tradeoffs between the cost of closing and reopening a connection and trying to keep it open are different; see resumption-v-keepalive. Applications must be slightly rethought in order to make best use of 0-RTT resumption. Most importantly, application operations must be divided into idempotent and non-idempotent operations, as only idempotent operations may appear in 0-RTT packets. This implies that the interface between the application and transport layer exposes idempotence either explicitly or implicitly. 3.2. Retransmission or (malicious) replay of data contained in 0-RTT resumption packets could cause the server side to receive two copies of the same data. This is further described in HTTP-RETRY. Data sent during 0-RTT resumption also cannot benefit from perfect forward secrecy (PFS). Data in the first flight sent by the client in a connection established with 0-RTT MUST be idempotent. Applications MUST be designed, and their data MUST be framed, such that multiple reception of idempotent data is recognized as such by the receiver. Applications that cannot treat data that may appear in a 0-RTT connection establishment as idempotent MUST NOT use 0-RTT establishment. For this reason, the QUIC transport SHOULD provide an interface for the application to indicate if 0-RTT support is in general desired or a way to indicate whether data is idempotent, and/or whether PFS is a hard requirement for the application. 3.3. [EDITOR'S NOTE: see https://github.com/quicwg/ops-drafts/issues/6] 4."}
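The record above says application operations must be divided into idempotent and non-idempotent ones, with only idempotent operations eligible for 0-RTT packets, and that the application/transport interface should expose this. A hypothetical sketch of such an interface (all names here are illustrative, not from the draft):

```python
from dataclasses import dataclass, field

@dataclass
class Operation:
    payload: bytes
    idempotent: bool  # declared by the application, as the text requires

@dataclass
class ZeroRttSendQueue:
    """Only idempotent operations may go into the 0-RTT first flight;
    everything else is deferred until the handshake completes."""
    early_data: list = field(default_factory=list)   # eligible for 0-RTT
    deferred: list = field(default_factory=list)     # wait for 1-RTT keys

    def submit(self, op: Operation) -> None:
        if op.idempotent:
            self.early_data.append(op)
        else:
            self.deferred.append(op)
```

The point of the sketch is the explicit `idempotent` flag at the API boundary: the transport never has to guess which data is replay-safe.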
{"id": "q-en-oscore-edhoc-076110c31f02a6a018b952ac2e471e12c8fd82c72218256fe2c957517d3ba447", "old_text": " Combining EDHOC and OSCORE draft-ietf-core-oscore-edhoc-latest Abstract This document defines an optimization approach for combining the lightweight authenticated key exchange protocol EDHOC run over CoAP with the first subsequent OSCORE transaction. This combination reduces the number of round trips required to set up an OSCORE Security Context and to complete an OSCORE transaction using that Security Context. 1. This document defines an optimization approach to combine the lightweight authenticated key exchange protocol EDHOC I-D.ietf-lake- edhoc, when running over CoAP RFC7252, with the first subsequent OSCORE RFC8613 transaction. This allows for a minimum number of round trips necessary to setup the OSCORE Security Context and complete an OSCORE transaction, for example when an IoT device gets configured in a network for the first time. This optimization is desirable, since the number of protocol round trips impacts the minimum number of flights, which in turn can have a", "comments": "Clean-up after URL As the content-from-edhoc branch seems to be the most up-to-date one (in which the parts changed are introduced in the first place), this builds on , and only the latest commit is actually relevant.\nCompared to the original bstridentifier mapping, this is a different int-to-bstr mapping, but one that's most likely efficient to implement. I have not made any statement on the efficiencies, but some guidance we could give is that from a coding density point of view, numeric Cx should only be used with -24..23; for all other values, doing it in a bstr is just as good and saves a byte in OSCORE. 
(In fact we may want to make that normative, so that only the single-byte case needs to be implemented where this needs explicit implementation).", "new_text": " Using EDHOC for OSCORE with CoAP Transport draft-ietf-core-oscore-edhoc-latest Abstract This document defines how the lightweight authenticated key exchange protocol EDHOC is used for establishing an OSCORE Security Context, using CoAP for message transfer. Furthermore, this document defines an optimization approach for combining EDHOC run over CoAP with the first subsequent OSCORE transaction. This reduces the number of round trips required to set up an OSCORE Security Context and to complete an OSCORE transaction using that Security Context. 1. Ephemeral Diffie-Hellman Over COSE (EDHOC) I-D.ietf-lake-edhoc is a lightweight authenticated key exchange protocol, especially intended for use in constrained scenarios. As defined in Section 7.2 of I-D.ietf-lake-edhoc, EDHOC messages can be transported over the Constrained Application Protocol (CoAP) RFC7252. This document builds on the EDHOC specification I-D.ietf-lake-edhoc and defines how EDHOC run over CoAP is used for establishing a Security Context for Object Security for Constrained RESTful Environments (OSCORE) RFC8613. In addition, this document defines an optimization approach that combines EDHOC run over CoAP with the first subsequent OSCORE transaction. This allows for a minimum number of round trips necessary to set up the OSCORE Security Context and complete an OSCORE transaction, for example, when an IoT device gets configured in a network for the first time. This optimization is desirable, since the number of protocol round trips impacts the minimum number of flights, which in turn can have a"}
{"id": "q-en-oscore-edhoc-076110c31f02a6a018b952ac2e471e12c8fd82c72218256fe2c957517d3ba447", "old_text": "Without this optimization, it is not possible, not even in theory, to achieve the minimum number of flights. This optimization makes it possible also in practice, since the last message of the EDHOC protocol can be made relatively small (see Section 1 of I-D.ietf- lake-edhoc), thus allowing additional OSCORE protected CoAP data within target MTU sizes.", "comments": "Clean-up after URL As the content-from-edhoc branch seems to be the most up-to-date one (in which the parts changed are introduced in the first place), this builds on , and only the latest commit is actually relevant.\nCompared to the original bstridentifier mapping, this is a different int-to-bstr mapping, but one that's most likely efficient to implement. I have not made any statement on the efficiencies, but some guidance we could give is that from a coding density point of view, numeric Cx should only be used with -24..23; for all other values, doing it in a bstr is just as good and saves a byte in OSCORE. (In fact we may want to make that normative, so that only the single-byte case needs to be implemented where this needs explicit implementation).", "new_text": "Without this optimization, it is not possible, not even in theory, to achieve the minimum number of flights. This optimization makes it possible also in practice since the last message of the EDHOC protocol can be made relatively small (see Section 1 of I-D.ietf-lake-edhoc), thus allowing additional OSCORE protected CoAP data within target MTU sizes."}
{"id": "q-en-oscore-edhoc-076110c31f02a6a018b952ac2e471e12c8fd82c72218256fe2c957517d3ba447", "old_text": "2. EDHOC is a 3-message key exchange protocol. Section 7.2 of I-D.ietf- lake-edhoc specifies how to transport EDHOC over CoAP: the EDHOC data (referred to as \"EDHOC messages\") are transported in the payload of CoAP requests and responses. This draft deals with the case of the Initiator acting as CoAP Client and the Responder acting as CoAP Server; instead, the case of the Initiator acting as CoAP Server cannot be optimized by using this approach. That is, the CoAP Client sends a POST request containing EDHOC message_1 to a reserved resource at the CoAP Server. This triggers the EDHOC exchange on the CoAP Server, which replies with a 2.04 (Changed) Response containing EDHOC message_2. Finally, the CoAP Client sends EDHOC message_3, as a CoAP POST request to the same resource used for EDHOC message_1. The Content-Format of these CoAP messages may be set to \"application/edhoc\". After this exchange takes place, and after successful verifications specified in the EDHOC protocol, the Client and Server derive the OSCORE Security Context, as specified in Section 7.2.1 of I-D.ietf- lake-edhoc. Then, they are ready to use OSCORE. This sequential way of running EDHOC and then OSCORE is specified in fig-non-combined. As shown in the figure, this mechanism takes 3 round trips to complete. The number of roundtrips can be minimized as follows. Already after receiving EDHOC message_2 and before sending EDHOC message_3, the CoAP Client has all the information needed to derive the OSCORE Security Context. This means that the Client can potentially send at the same time both EDHOC message_3 and the subsequent OSCORE Request. On a semantic level, this approach practically requires to send two separate REST requests at the same time. The high level message flow of running EDHOC and OSCORE combined is shown in fig-combined. 
Defining the specific details of how to transport the data and of their processing order is the goal of this specification, as defined in edhoc-in-oscore. 3. This section defines the EDHOC Option, used in a CoAP request to signal that the request combines EDHOC message_3 and OSCORE protected data. The EDHOC Option has the properties summarized in fig-edhoc-option, which extends Table 4 of RFC7252. The option is Critical, Safe-to-", "comments": "Clean-up after URL As the content-from-edhoc branch seems to be the most up-to-date one (in which the parts changed are introduced in the first place), this builds on , and only the latest commit is actually relevant.\nCompared to the original bstridentifier mapping, this is a different int-to-bstr mapping, but one that's most likely efficient to implement. I have not made any statement on the efficiencies, but some guidance we could give is that from a coding density point of view, numeric Cx should only be used with -24..23; for all other values, doing it in a bstr is just as good and saves a byte in OSCORE. (In fact we may want to make that normative, so that only the single-byte case needs to be implemented where this needs explicit implementation).", "new_text": "2. The EDHOC protocol allows two peers to agree on a cryptographic secret, in a mutually-authenticated way and by using Diffie-Hellman ephemeral keys to achieve perfect forward secrecy. The two peers are denoted as Initiator and Responder, as the one sending or receiving the initial EDHOC message_1, respectively. After successful processing of EDHOC message_3, both peers agree on a cryptographic secret that can be used to derive further security material, and especially to establish an OSCORE Security Context RFC8613. The Responder can also send an optional EDHOC message_4 to achieve key confirmation, e.g. in deployments where no protected application message is sent from the Responder to the Initiator. 
Section 7.2 of I-D.ietf-lake-edhoc specifies how to transport EDHOC over CoAP. That is, the EDHOC data (referred to as \"EDHOC messages\") are transported in the payload of CoAP requests and responses. The default message flow consists in the CoAP Client acting as Initiator and the CoAP Server acting as Responder. Alternatively, the two roles can be reversed. In the rest of this document, EDHOC messages are considered to be transported over CoAP. fig-non-combined shows a Client and Server running EDHOC as Initiator and Responder, respectively. That is, the Client sends a POST request with payload EDHOC message_1 to a reserved resource at the CoAP Server, by default at Uri-Path \"/.well-known/edhoc\". This triggers the EDHOC exchange at the Server, which replies with a 2.04 (Changed) Response with payload EDHOC message_2. Finally, the Client sends a CoAP POST request to the same resource used for EDHOC message_1, with payload EDHOC message_3. The Content-Format of these CoAP messages may be set to \"application/edhoc\". After this exchange takes place, and after successful verifications as specified in the EDHOC protocol, the Client and Server can derive an OSCORE Security Context, as defined in oscore-ctx of this document. After that, they can use OSCORE to protect their communications. As shown in fig-non-combined, this purely-sequential way of first running EDHOC and then using OSCORE takes three round trips to complete. edhoc-in-oscore defines an optimization for combining EDHOC with the first subsequent OSCORE transaction. This reduces the number of round trips required to set up an OSCORE Security Context and to complete an OSCORE transaction using that Security Context. 3. When using EDHOC over CoAP for establishing an OSCORE Security Context, EDHOC messages are exchanged as defined in Section 7.2 of I- D.ietf-lake-edhoc, with the following addition. EDHOC error messages sent as CoAP responses MUST be error responses, i.e. 
they MUST specify a CoAP error response code. In particular, it is RECOMMENDED that such error responses have response code either 4.00 (Bad Request) in case of client error (e.g. due to a malformed EDHOC message), or 5.00 (Internal Server Error) in case of server error (e.g. due to failure in deriving EDHOC key material). 4. After successful processing of EDHOC message_3 (see Section 5.5 of I-D.ietf-lake-edhoc), the Client and Server derive Security Context parameters for OSCORE (see Section 3.2 of RFC8613) as follows. The Master Secret and Master Salt are derived by using the EDHOC-Exporter interface defined in Section 4.1 of I-D.ietf-lake-edhoc. The EDHOC Exporter Labels to use are \"OSCORE Master Secret\" and \"OSCORE Master Salt\", for deriving the OSCORE Master Secret and the OSCORE Master Salt, respectively. By default, key_length is the key length (in bytes) of the application AEAD Algorithm of the selected cipher suite for the EDHOC session. Also by default, salt_length has value 8. The Initiator and Responder MAY agree out-of-band on a longer key_length than the default and on a different salt_length. The AEAD Algorithm is the application AEAD algorithm of the selected cipher suite for the EDHOC session. The HKDF Algorithm is the HKDF algorithm of the selected cipher suite for the EDHOC session. In case the Client is the Initiator and the Server is the Responder, the Client's OSCORE Sender ID and the Server's OSCORE Sender ID are the byte string EDHOC connection identifiers C_R and C_I for the EDHOC session, respectively. The reverse applies in case the Client is the Responder and the Server is the Initiator. Numeric connection identifiers are converted to (naturally byte-string shaped) Sender IDs by using their CBOR encoded form. For example, a C_R of 10 corresponds to a (typically server) Sender ID of hexadecimal '0a', -12 to hexadecimal '2b', whereas a byte-string valued C_R of h'ff' corresponds to a Sender ID of hexadecimal 'ff'. 
Before deriving an OSCORE Security Context, the two peers MUST ensure that the EDHOC connection identifiers are different. That is, the two peers MUST NOT derive an OSCORE Security Context from an EDHOC session, if C_R is equal to C_I for that session. If the Responder runs EDHOC with the intention of deriving an OSCORE Security Context, the Responder MUST NOT include in EDHOC message_2 a connection identifier C_R equivalent (under conversion to a Sender ID) to the connection identifier C_I received in EDHOC message_1. If the Initiator runs EDHOC with the intention of deriving an OSCORE Security Context, the Initiator MUST discontinue the protocol and reply with an EDHOC error message when receiving an EDHOC message_2 that includes a connection identifier C_R equal to C_I. The Client and Server use the parameters above to establish an OSCORE Security Context, as per Section 3.2.1 of RFC8613. From then on, the Client and Server MUST be able to retrieve the OSCORE protocol state using its chosen connection identifier, and optionally other information such as the 5-tuple, i.e., the tuple (source IP address, source port, destination IP address, destination port, transport protocol). 5. This section defines an optimization for combining the EDHOC exchange with the first subsequent OSCORE transaction, thus minimizing the number of round trips between the two peers. This approach can be used only if the default EDHOC message flow is used, i.e., when the Client acts as Initiator and the Server acts as Responder, while it cannot be used in the case with reversed roles. When running the purely-sequential flow of overview, the Client has all the information to derive the OSCORE Security Context already after receiving EDHOC message_2 and before sending EDHOC message_3. Hence, the Client can potentially send both EDHOC message_3 and the subsequent OSCORE Request at the same time. On a semantic level, this requires sending two REST requests at once, as in fig-combined. 
To this end, the specific approach defined in this section consists of sending EDHOC message_3 inside an OSCORE protected CoAP message. The resulting EDHOC + OSCORE request is in practice the OSCORE Request from fig-non-combined, still sent to a protected resource and with the correct CoAP method and options, but with the addition that it also transports EDHOC message_3. As EDHOC message_3 may be too large to be included in a CoAP Option, e.g., if containing a large public key certificate chain, it has to be transported in the CoAP payload of the EDHOC + OSCORE request. The rest of this section specifies how to transport the data in the EDHOC + OSCORE request and their processing order. In particular, the use of this approach is explicitly signalled by including an EDHOC Option (see edhoc-option) in the EDHOC + OSCORE request. The processing of the EDHOC + OSCORE request is specified in client-processing for the Client side and in server-processing for the Server side. 5.1. This section defines the EDHOC Option. The option is used in a CoAP request, to signal that the request payload conveys both an EDHOC message_3 and OSCORE protected data, combined together. The EDHOC Option has the properties summarized in fig-edhoc-option, which extends Table 4 of RFC7252. The option is Critical, Safe-to-"}
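The record above describes converting EDHOC connection identifiers to OSCORE Sender IDs: numeric identifiers use their CBOR encoded form, byte-string identifiers are used as-is, and its worked examples are C_R of 10 mapping to '0a', -12 to '2b', and h'ff' to 'ff'. A sketch covering only the one-byte CBOR integer range -24..23 (longer encodings are omitted, matching the comment in the record that other values are better carried as byte strings):

```python
def edhoc_cid_to_sender_id(c_x) -> bytes:
    """Map an EDHOC connection identifier (int or bytes) to an OSCORE
    Sender ID. Numeric identifiers use their CBOR encoded form; byte
    strings pass through unchanged. Sketch: one-byte CBOR integers only."""
    if isinstance(c_x, bytes):
        return c_x
    if 0 <= c_x <= 23:
        return bytes([c_x])                # CBOR major type 0, immediate value
    if -24 <= c_x <= -1:
        return bytes([0x20 | (-1 - c_x)])  # CBOR major type 1, value -1 - c_x
    raise ValueError("only identifiers in -24..23 are handled in this sketch")
```

This reproduces the document's own examples: 10 encodes as 0x0a and -12 as 0x2b, while the byte string h'ff' stays 0xff.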
{"id": "q-en-oscore-edhoc-076110c31f02a6a018b952ac2e471e12c8fd82c72218256fe2c957517d3ba447", "old_text": "EDHOC data and the OSCORE ciphertext, using the newly defined EDHOC option for signalling. 4. The approach defined in this specification consists of sending EDHOC message_3 inside an OSCORE protected CoAP message. The resulting EDHOC + OSCORE request is in practice the OSCORE Request from fig-non-combined, sent to a protected resource and with the correct CoAP method and options, with the addition that it also transports EDHOC message_3. Since EDHOC message_3 may be too large to be included in a CoAP Option, e.g. if containing a large public key certificate chain, it has to be transported through the CoAP payload. The use of this approach is explicitly signalled by including an EDHOC Option (see signalling) in the EDHOC + OSCORE request. 4.1. The Client prepares an EDHOC + OSCORE request as follows.", "comments": "Clean-up after URL As the content-from-edhoc branch seems to be the most up-to-date one (in which the parts changed are introduced in the first place), this builds on , and only the latest commit is actually relevant.\nCompared to the original bstridentifier mapping, this is a different int-to-bstr mapping, but one that's most likely efficient to implement. I have not made any statement on the efficiencies, but some guidance we could give is that from a coding density point of view, numeric Cx should only be used with -24..23; for all other values, doing it in a bstr is just as good and saves a byte in OSCORE. (In fact we may want to make that normative, so that only the single-byte case needs to be implemented where this needs explicit implementation).", "new_text": "EDHOC data and the OSCORE ciphertext, using the newly defined EDHOC option for signalling. 5.2. The Client prepares an EDHOC + OSCORE request as follows."}
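The client-side steps in the record above build the EDHOC + OSCORE request payload as a CBOR sequence: CIPHERTEXT_3 as a CBOR byte string followed by the OSCORE ciphertext as a second CBOR byte string. A minimal sketch of that assembly, handling only byte strings up to 255 bytes (CBOR additional-information values 0..23 and 24):

```python
def cbor_bstr(b: bytes) -> bytes:
    """Encode a byte string as CBOR (major type 2); short lengths only."""
    if len(b) < 24:
        return bytes([0x40 | len(b)]) + b   # length in the initial byte
    if len(b) < 256:
        return bytes([0x58, len(b)]) + b    # one-byte length argument
    raise ValueError("longer lengths not handled in this sketch")

def build_edhoc_oscore_payload(ciphertext_3: bytes, oscore_ct: bytes) -> bytes:
    """CBOR sequence: CIPHERTEXT_3 first, then the OSCORE ciphertext."""
    return cbor_bstr(ciphertext_3) + cbor_bstr(oscore_ct)
```

A CBOR sequence is plain concatenation of encoded items, so no outer array wrapper is added.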
{"id": "q-en-oscore-edhoc-076110c31f02a6a018b952ac2e471e12c8fd82c72218256fe2c957517d3ba447", "old_text": "the CBOR sequence built at step 3. Signal the usage of this approach within the EDHOC + OSCORE request, by including the new EDHOC Option defined in signalling. 4.2. When receiving an EDHOC + OSCORE request, the Server performs the following steps. Check the presence of the EDHOC option defined in signalling, to determine that the received request is an EDHOC + OSCORE request. If this is the case, the Server continues with the steps defined below. Extract CIPHERTEXT_3 from the payload of the EDHOC + OSCORE request, as the first CBOR byte string in the CBOR sequence.", "comments": "Clean-up after URL As the content-from-edhoc branch seems to be the most up-to-date one (in which the parts changed are introduced in the first place), this builds on , and only the latest commit is actually relevant.\nCompared to the original bstridentifier mapping, this is a different int-to-bstr mapping, but one that's most likely efficient to implement. I have not made any statement on the efficiencies, but some guidance we could give is that from a coding density point of view, numeric Cx should only be used with -24..23; for all other values, doing it in a bstr is just as good and saves a byte in OSCORE. (In fact we may want to make that normative, so that only the single-byte case needs to be implemented where this needs explicit implementation).", "new_text": "the CBOR sequence built at step 3. Signal the usage of this approach within the EDHOC + OSCORE request, by including the new EDHOC Option defined in edhoc-option. 5.3. When receiving a request containing the EDHOC option, i.e. an EDHOC + OSCORE request, the Server MUST perform the following steps. Check that the payload of the EDHOC + OSCORE request is a CBOR sequence composed of two CBOR byte strings. 
If this is not the case, the Server MUST stop processing the request and MUST respond with a 4.00 (Bad Request) error message. Extract CIPHERTEXT_3 from the payload of the EDHOC + OSCORE request, as the first CBOR byte string in the CBOR sequence."}
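The server-side check and extraction in the record above, i.e. verifying that the payload is a CBOR sequence of two byte strings and pulling out CIPHERTEXT_3, can be sketched with a minimal hand-rolled parser (assumption: byte strings no longer than 255 bytes, so only CBOR additional-information values 0..23 and 24 are handled):

```python
def split_edhoc_oscore_payload(payload: bytes):
    """Split the EDHOC + OSCORE request payload, a CBOR sequence of two
    byte strings, into (CIPHERTEXT_3, OSCORE ciphertext). Raises
    ValueError on malformed input, which corresponds to the 4.00
    (Bad Request) error response described in the record above."""
    def read_bstr(off: int):
        if off >= len(payload) or payload[off] >> 5 != 2:
            raise ValueError("expected a CBOR byte string (major type 2)")
        ai = payload[off] & 0x1F
        if ai < 24:
            length, off = ai, off + 1           # length in the initial byte
        elif ai == 24:
            length, off = payload[off + 1], off + 2  # one-byte length argument
        else:
            raise ValueError("length encoding not handled in this sketch")
        if off + length > len(payload):
            raise ValueError("truncated byte string")
        return payload[off:off + length], off + length
    ciphertext_3, off = read_bstr(0)
    oscore_ct, off = read_bstr(off)
    if off != len(payload):
        raise ValueError("trailing bytes after the CBOR sequence")
    return ciphertext_3, oscore_ct
```

This is the inverse of the client-side assembly: the same two byte strings come back out in order.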
{"id": "q-en-oscore-edhoc-076110c31f02a6a018b952ac2e471e12c8fd82c72218256fe2c957517d3ba447", "old_text": "derivation, as per Section 5.4.3 and Section 7.2.1 of I-D.ietf- lake-edhoc, respectively. Extract the OSCORE ciphertext from the payload of the EDHOC + OSCORE request, as the value of the second CBOR byte string in the CBOR sequence.", "comments": "Clean-up after URL As the content-from-edhoc branch seems to be the most up-to-date one (in which the parts changed are introduced in the first place), this builds on , and only the latest commit is actually relevant.\nCompared to the original bstridentifier mapping, this is a different int-to-bstr mapping, but one that's most likely efficient to implement. I have not made any statement on the efficiencies, but some guidance we could give is that from a coding density point of view, numeric Cx should only be used with -24..23; for all other values, doing it in a bstr is just as good and saves a byte in OSCORE. (In fact we may want to make that normative, so that only the single-byte case needs to be implemented where this needs explicit implementation).", "new_text": "derivation, as per Section 5.4.3 and Section 7.2.1 of I-D.ietf-lake-edhoc, respectively. If the applicability statement used in the EDHOC session specifies that EDHOC message_4 shall be sent, the Server MUST stop the EDHOC processing and consider it failed, as due to a client error. Extract the OSCORE ciphertext from the payload of the EDHOC + OSCORE request, as the value of the second CBOR byte string in the CBOR sequence."}
{"id": "q-en-oscore-edhoc-076110c31f02a6a018b952ac2e471e12c8fd82c72218256fe2c957517d3ba447", "old_text": "is not applicable to the approach defined in this specification. If step 4 (EDHOC processing) fails, the server discontinues the protocol as per Section 5.4.3 of I-D.ietf-lake-edhoc and sends an EDHOC error message, formatted as defined in Section 6.1 of I-D.ietf- lake-edhoc. In particular, the CoAP response conveying the EDHOC error message: MUST have Content-Format set to application/edhoc defined in Section 9.5 of I-D.ietf-lake-edhoc. MUST specify a CoAP error response code, i.e. 4.00 (Bad Request) in case of client error (e.g. due to a malformed EDHOC message_3), or 5.00 (Internal Server Error) in case of server error (e.g. due to failure in deriving EDHOC key material). If step 4 (EDHOC processing) is successfully completed but step 7 (OSCORE processing) fails, the same OSCORE error handling applies as defined in Section 8.2 of RFC8613. 5. An example based on the OSCORE test vector from Appendix C.4 of RFC8613 and the EDHOC test vector from Appendix B.2 of I-D.ietf-lake- edhoc is given in fig-edhoc-opt-2. In particular, the example assumes that: The used OSCORE Partial IV is 0, consistently with the first request protected with the new OSCORE Security Context. The OSCORE Sender ID of the Client is 0x20. This corresponds to the EDHOC Connection Identifier C_R, which is encoded as the bstr_identifier 0x08 in EDHOC message_3. The EDHOC option is registered with CoAP option number 13. 6.", "comments": "Clean-up after URL As the content-from-edhoc branch seems to be the most up-to-date one (in which the parts changed are introduced in the first place), this builds on , and only the latest commit is actually relevant.\nCompared to the original bstridentifier mapping, this is a different int-to-bstr mapping, but one that's most likely efficient to implement. 
I have not made any statement on the efficiencies, but some guidance we could give is that from a coding density point of view, numeric Cx should only be used with -24..23; for all other values, doing it in a bstr is just as good and saves a byte in OSCORE. (In fact we may want to make that normative, so that only the single-byte case needs to be implemented where this needs explicit implementation).", "new_text": "is not applicable to the approach defined in this specification. If step 4 (EDHOC processing) fails, the server discontinues the protocol as per Section 5.4.3 of I-D.ietf-lake-edhoc and responds with an EDHOC error message, formatted as defined in Section 6.1 of I-D.ietf-lake-edhoc. In particular, the CoAP response conveying the EDHOC error message MUST have Content-Format set to application/edhoc defined in Section 9.5 of I-D.ietf-lake-edhoc. If step 4 (EDHOC processing) is successfully completed but step 7 (OSCORE processing) fails, the same OSCORE error handling applies as defined in Section 8.2 of RFC8613. 5.4. fig-edhoc-opt-2 shows an example of EDHOC + OSCORE Request, based on the OSCORE test vector from Appendix C.4 of RFC8613 and the EDHOC test vector from Appendix B.2 of I-D.ietf-lake-edhoc. In particular, the example assumes that: The used OSCORE Partial IV is 0, consistently with the first request protected with the new OSCORE Security Context. The OSCORE Sender ID of the Client is 0x00. This corresponds to the EDHOC Connection Identifier C_R, which is encoded as the bstr_identifier 0x37 in EDHOC message_3. The EDHOC option is registered with CoAP option number 21. 6."}
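The comment above discusses the int-to-bstr mapping for connection identifiers, and the record's example has OSCORE Sender ID 0x00 corresponding to the bstr_identifier 0x37 (CBOR for -24) in EDHOC message_3, while the old example mapped 0x20 to 0x08. Both are consistent with a "byte value minus 24" mapping encoded as a single-byte CBOR integer; a sketch of that single-byte case (function name and range handling are ours, inferred from the examples):

```python
def bstr_identifier_encode(ident: bytes) -> bytes:
    """Encode a one-byte connection identifier as a single-byte CBOR
    integer equal to (byte value - 24), the bstr_identifier mapping
    assumed here; only the single-byte case is handled."""
    if len(ident) != 1:
        raise ValueError("only 1-byte identifiers handled in this sketch")
    n = ident[0] - 24
    if 0 <= n <= 23:
        return bytes([n])                  # major type 0, small unsigned int
    if -24 <= n <= -1:
        return bytes([0x20 + (-n - 1)])    # major type 1, small negative int
    raise ValueError("out of single-byte CBOR int range")
```

For example, Sender ID 0x00 gives -24, whose CBOR encoding is the single byte 0x37.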
{"id": "q-en-oscore-edhoc-076110c31f02a6a018b952ac2e471e12c8fd82c72218256fe2c957517d3ba447", "old_text": "7. This document has the following actions for IANA. 7.1.", "comments": "Clean-up after URL As the content-from-edhoc branch seems to be the most up-to-date one (in which the parts changed are introduced in the first place), this builds on , and only the latest commit is actually relevant.\nCompared to the original bstridentifier mapping, this is a different int-to-bstr mapping, but one that's most likely efficient to implement. I have not made any statement on the efficiencies, but some guidance we could give is that from a coding density point of view, numeric Cx should only be used with -24..23; for all other values, doing it in a bstr is just as good and saves a byte in OSCORE. (In fact we may want to make that normative, so that only the single-byte case needs to be implemented where this needs explicit implementation).", "new_text": "7. RFC Editor: Please replace \"[[this Document]]\" with the RFC number of this document and delete this paragraph. This document has the following actions for IANA. 7.1."}
{"id": "q-en-oscore-edhoc-076110c31f02a6a018b952ac2e471e12c8fd82c72218256fe2c957517d3ba447", "old_text": "[ The CoAP option numbers 13 and 21 are both consistent with the properties of the EDHOC Option defined in signalling, and they both allow the EDHOC Option to always result in an overall size of 1 byte. This is because:", "comments": "Clean-up after URL As the content-from-edhoc branch seems to be the most up-to-date one (in which the parts changed are introduced in the first place), this builds on , and only the latest commit is actually relevant.\nCompared to the original bstridentifier mapping, this is a different int-to-bstr mapping, but one that's most likely efficient to implement. I have not made any statement on the efficiencies, but some guidance we could give is that from a coding density point of view, numeric Cx should only be used with -24..23; for all other values, doing it in a bstr is just as good and saves a byte in OSCORE. (In fact we may want to make that normative, so that only the single-byte case needs to be implemented where this needs explicit implementation).", "new_text": "[ The CoAP option numbers 13 and 21 are both consistent with the properties of the EDHOC Option defined in edhoc-option, and they both allow the EDHOC Option to always result in an overall size of 1 byte. This is because:"}
{"id": "q-en-oscore-edhoc-076110c31f02a6a018b952ac2e471e12c8fd82c72218256fe2c957517d3ba447", "old_text": "unassigned in the \"CoAP Option Numbers\" registry, as first available and consistent option numbers for the EDHOC option. ] ", "comments": "Clean-up after URL As the content-from-edhoc branch seems to be the most up-to-date one (in which the parts changed are introduced in the first place), this builds on , and only the latest commit is actually relevant.\nCompared to the original bstridentifier mapping, this is a different int-to-bstr mapping, but one that's most likely efficient to implement. I have not made any statement on the efficiencies, but some guidance we could give is that from a coding density point of view, numeric Cx should only be used with -24..23; for all other values, doing it in a bstr is just as good and saves a byte in OSCORE. (In fact we may want to make that normative, so that only the single-byte case needs to be implemented where this needs explicit implementation).", "new_text": "unassigned in the \"CoAP Option Numbers\" registry, as first available and consistent option numbers for the EDHOC option. This document suggests 21 (TBD21) as option number to be assigned to the new EDHOC option, since both 13 and 21 are consistent for the use case in question, but different use cases or protocols may make better use of the option number 13. ] 7.2. IANA is asked to enter the following entries to the \"EDHOC Exporter Label\" registry defined in I-D.ietf-lake-edhoc. "}
{"id": "q-en-oscore-48348007ea27481366678702a007b77a06370f5199e6248866f2184624d01805", "old_text": "For each stored security context, the first time after boot the server receives an OSCORE request, the server uses the Repeat option I-D.amsuess-core-repeat-request-tag to get a request with verifiable freshness and uses that to synchronize the replay window. If the server can verify the fresh request, the Partial IV in the fresh request is set as the lower limit of the replay window. 6.5.3.", "comments": "This prescribes explicitly one of the possible ways that the response is sent, and states the (hopefully) obvious about accepting new requests. The alternative to \"MUST NOT be encrypted\" would be \"MUST be encrypted and MUST carry an explicit IV\"; even both would work, but clients need to know what they need to be prepared for. I'd be fine with either of the two (or three) versions of the request encryption, but not stating anything might result in incompatible implementations where one only works this and one only works that way. Being explicit about (to us) obvious things like not responding with an implied IV or not accepting Repeat as Class U (or in an unencrypted request that might then clear the way for any replays) might not be necessary from a specification point of view, but could avoid implementation errors I see very vividly before my inner eye.\nCome to think of it (from the considerations in URL), we have to go the other way and say that the response \"MUST be encrypted, and MUST use an explicit partial IV\". 
As we want to use this for HTTP too, this is by far the easier option; otherwise, we'd have to additionally specify how the option is transported in HTTP, so a HC proxy can convert it, and then the HC proxy would need to know whether it's transporting OSCORE (so it'd forward the Repeat option for the HTTP client to include it in a new encrypted request) or whether it's working for an OSCORE-unaware client that would not be equipped to answer the Repeat challenge (so it'd repeat the known fresh (because HTTP) request with a Repeat option by itself without bothering the client).\nDid I understand right, that you take back the proposal in the PR? Partial IV generated by the server is allowed in any response. With the resolution of issue : \"if Partial IV present in the message -> use that Partial IV\" Hence the replay draft can specify that replay syncing MUST use the server generated PIV, in which case it will be encrypted. I probably missed the issue?\nYes; I'll still need to update the PR to say the desired. We don't need to change anything about mechanism, but we should be explicit that the server MUST reply with an encrypted Repeat and a generated partial IV.\nSorry, still not clear: OSCORE now allows the use of server-generated Partial IV. The processing for Repeat is defined in the RRT-draft. Why does anything else need to change in the OSCORE draft?\nWrote the promised changes as it's much easier to talk with concrete text; the PR is now updated as 81d47927. This doesn't change any mechanism (at least not in implementations that are based on the current text and written in full awareness of all the relevant considerations). It is just a text change that prescribes what will be obvious to some implementors but not to others. (Especially since it's not so obvious as that I would have gotten it right in the first attempt.)\n(This issue takes some context from issue ). 
The current draft states that replay protection violation triggers a 5.03 with Max-Age 0 (Section 6.4). While this is a practical solution to issue (which is not mitigated by encrypting the code because FETCH is still safe), it could still be caught by a proxy that'd try to be smart about it (and would, AFAICT, be within its rights). I suggest that the response code be 4.01 Unauthorized with Max-Age=0. The semantics of Unauthorized are \"SHOULD NOT repeat the request without first improving its authentication status to the server\", and while it's a bit of a stretch to say that increasing the sequence number is sufficient to improve the authentication status, the Max-Age gives a good hint (\"Come on, try again right away, don't look for other keys\"), and clients can use the presence of the Max-Age option as a discriminator to tell \"Sorry, no such context\" apart from \"Retry with a new seqno\".\nWe also just identified this response code as not appropriate. The proposal is good, unless it results in overloading the semantics of Max-Age=0. There is already at least one requirement when Max-Age has to be zero: RFC7252, Section 5.6.1: \"If an origin server wishes to prevent caching, it MUST explicitly include a Max-Age Option with a value of zero seconds.\" Is there a setting where the the server would not want the response 4.01 to be cached even though it is not a replay? (Obviously we have to differentiate between Inner and Outer 4.01, but is that enough?)\nI don't know. I briefly thought about using an Echo here, but an HTTP proxy would react to that as a CoAP proxy would to 5.03,Max-Age=0. (See comments 2ff). I did give that some thought, and considering the Echo option thought about whether the server should just reply with an encrypted ,Max-Age=0 (or Echo=short). 
I decided against suggesting that because responding to a possibly fraudulent message with an owned partial IV means that any adversary that has one valid-but-used message can thus drive any server into partial-IV exhaustion.\nBy the way, just FYI, that Max-Age=0 with 5.03 means \"service unavailable, retry in 0 seconds\", which is wrong (and not what we want). (see: URL)\nThe current text where there MAY be a Max-Age of 0 both on \"4.01, Security context not found\" and on \"4.01, Security context not found\" means that a client can not know any more which is the case (and it should not look into the message). I see the reason for the MAY set Max-Age for most error cases in , but a client needs to keep those apart (otherwise it'd need to so something like try 3 times with a new PIV, and if the server still responds the same, assume that it doesn't know the KID). I suggest that for those two responses, the Max-Age values are enforced as \"MUST NOT set a Max-Age option\" and \"MUST set the Max-Age option to 0\", or find any other discriminator, but an implementation should be able to tell them apart.\nGranted, with the current setup of POST and FETCH there are much fewer cases where replay protection errors can happen (they'd only legally happen under particular package loss situations when starting an observation via a proxy that does not support observation but does do optimizations for safe requests), but I still think that there'd be more clarity if there were a discriminant between the two error cases.\nThe document, as of now, specifies that Partial IV SHOULD NOT be present in responses without observe. 
(This refer to: https://core-URL ) The processing for protecting and verifying responses, as of now, specifies that: if Observe -> use the Partial IV in the response if not Observe -> use Partial IV of the request Should this part be changed to: if Partial IV present in the message -> use that Partial IV if not present -> use Partial IV of the request\nNeed to read text, but this reflects what I wanted to see.\nSorry NAME when you say \"this\" do you mean That's not currently included (so no text about it yet). The text right now is:\nI was talking about the should change to\nSection 5 - You have defined a 'sid' and said that it could be placed in the unprotected field, however you have also said that the unprotected field must be empty. These statements are contradictory\nAs per Christian Ams\u00fcss' message from 2017-01-05 there still seems to be confusing text in that section.\nAnswering both to NAME and NAME (URL) The point here is: when defining the 'sid' as a new COSE Header parameter, there is no reason related to security that would make this a \"MUST be placed in the protected bucket\", so instead it is a MAY. But when used in OSCOAP, the 'sid' is used in a way that it MUST be in the protected part, so the unprotected SHALL be empty. The 'sid' is analogous to 'kid' from this point of view, it is defined as a COSE parameter that MAY be unprotected, but is used protected.\nMulticast removed in branch v-02 Hello OSCOAP authors, some more questions came up: 5.2 gives the context for the Encstructure as \"Encrypted\", while cose-msg-24 would set a \"Encrypt0\". Why does that differ here? states that the '\"unprotected\" field [...] SHALL be empty', but above states that sid 'can be placed in the unprotected headers bucket'. How do those statements go together? How are sequence numbers (which are sorted integers) mapped to partial IVs? Which endianness is used? (The padding direction follows from cose-msg-24 3.1). 
I'd assume (and implement) that big-endian is used because it aligns naturally with COSE's left-padding, but as I see it, that does not strictly follow from the draft. I concluded that the tag is always the last bytes of the ciphertext from example A.4, but did not find text in the spec to support that; what made me especially unsure about this is that the COSEEncrypt0 definition of cose-msg-24 5.2 only mentions the ciphertext there, which may be nil. Best regards Christian To use raw power is to make yourself infinitely vulnerable to greater powers. -- Bene Gesserit axiom", "new_text": "For each stored security context, the first time after boot the server receives an OSCORE request, the server uses the Repeat option I-D.amsuess-core-repeat-request-tag to get a request with verifiable freshness. The Repeat option in the server's response MUST be encrypted, and the server MUST NOT use the same nonce as the request in its reply. If the server can verify a second request as fresh, the Partial IV of the second request is set as the lower limit of the replay window. 6.5.3."}
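The replay-window recovery in the new_text above (after a verified-fresh second request, its Partial IV becomes the lower limit of the replay window) can be illustrated with a minimal sliding-window sketch. The class and method names are ours, not from the draft:

```python
class ReplayWindow:
    """Minimal sliding replay window illustrating the recovery step:
    after a Repeat exchange proves freshness, the fresh request's
    Partial IV becomes the lower limit of the window."""
    def __init__(self, size: int = 32):
        self.size = size
        self.lower = 0        # lowest acceptable Partial IV
        self.seen = set()

    def synchronize(self, fresh_piv: int):
        # Called once freshness of a request has been verified.
        self.lower = fresh_piv
        self.seen = {fresh_piv}

    def check_and_update(self, piv: int) -> bool:
        """Return True and record the PIV if it is not a replay."""
        if piv < self.lower or piv in self.seen:
            return False      # replayed or too old
        self.seen.add(piv)
        if piv - self.lower >= self.size:
            # slide the window forward, forgetting PIVs that fell off
            self.lower = piv - self.size + 1
            self.seen = {p for p in self.seen if p >= self.lower}
        return True
```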
{"id": "q-en-oscore-64f9471f93769265d5f7b47cff06b2cccc551d1fae39ec3334902a8d7370c6d5", "old_text": "4.1.3.4. Observe RFC7641 is an optional feature. An implementation MAY support RFC7252 and the OSCORE option without supporting RFC7641. The Observe option as used here targets the requirements on forwarding of I-D.hartke-core-e2e-security-reqs (Section 2.2.1). To support proxy operations, OSCORE MUST set Outer Observe. If Observe was only sent encrypted end-to-end, an OSCORE-unaware proxy would not expect several responses to a request and notifications would not reach the endpoint. Moreover, intermediaries are allowed to cancel observations at any time; forbidding this behavior would also result in notifications being dropped. An intermediary that supports Observe MUST copy the OSCORE option in the next hop request unchanged. It is worth noting that although intermediaries are allowed to re-send notifications to other clients, when using OSCORE this does not happen, since requests from different clients will have different cache keys. In case of registrations or re-registrations, the CoAP client using Observe with OSCORE MUST set both Inner and Outer Observe to the same value (0). This allows the server to verify that the observation was requested by the client, thereby avoiding unnecessary overhead (processing and transmission of notifications) on the server, since such notifications would not benefit the client. Note that, as defined in Section 3.1 of RFC7641, the target resource for Observe registration is identified by all options in the request that are part of the Cache-Key, including OSCORE. This means that several clients registering to the same protected resource via an intermediary, when using OSCORE, will be effectively registering to different target resources. The intermediary may then register to the protected resource (different target resources) once per each client. The processing of the CoAP Code for Observe messages is described in coap-header. 
The Outer Observe option in the CoAP request may be legitimately removed by a proxy or ignored by the server. In these cases, the server processes the request as a non-Observe request and produce a non-Observe response. If the OSCORE client receives a response to an Observe request without an Outer Observe value, then it verifies the", "comments": "This PR should (hopefully) conclude the main rewrite of Observe which was triggered by the security analysis and additional reviews of Observe. It should (hopefully) now be clear what part of the Observe related processing can be omitted in case Observe is not supported or in case intermediaries are not used. A new appendix E is covering the latter case.", "new_text": "4.1.3.4. Observe RFC7641 is an optional feature. An implementation MAY support RFC7252 and the OSCORE option without supporting RFC7641, in which case the Observe related processing specified in this section, sequence-numbers and processing can be omitted. The Observe option as used here targets the requirements on forwarding of I-D.hartke-core-e2e-security-reqs (Section 2.2.1). This section specifies Observe processing associated to the Partial IV (observe-partial-iv) and Observe processing in the presence of RFC7641-compliant intermediaries (observe-option-processing). In contrast to e.g. block-wise, the Inner and Outer Observe option are not processed independently. Outer Observe is required to support Observe operations in intermediaries, but the additional use of Inner Observe is needed to protect Observe registrations end-to- end (see observe-option-processing). observe-without-intermed specifies a simplified Observe processing which is applicable in the absence of intermediaries. 
Note that OSCORE is compliant with the requirement that a client must not register more than once for the same target resource (see Section 3.1 of RFC7641) since the target resource for Observe registration is identified by all options in the request that are part of the Cache-Key, including OSCORE. 4.1.3.4.1. To support proxy operations, the CoAP client using Observe with OSCORE MUST set Outer Observe. If Observe was only sent encrypted end-to-end, an OSCORE-unaware proxy would not expect several responses to a non-Observe request and notifications would not reach the client. Moreover, intermediaries are allowed to cancel observations and inform the server; forbidding this may result in processing and transmission of notifications on the server side which do not reach the client. In case of registrations or re-registrations, the CoAP client using Observe with OSCORE MUST set both Inner and Outer Observe to the same value (0). This allows the server to verify that the observation was requested by the client, thereby avoiding unnecessary processing and transmission of notifications, since such notifications would not benefit the client. An intermediary that supports Observe MUST copy the OSCORE option in the next hop request unchanged. Although intermediaries are allowed to re-send notifications to other clients, when using OSCORE this does not happen, since requests from different clients will have different cache keys. The Outer Observe option in the CoAP request may be legitimately removed by a proxy or ignored by a server. In these cases, the server processes the request as a non-Observe request and produces a non-Observe response. If the OSCORE client receives a response to an Observe request without an Outer Observe value, then it verifies the"}
{"id": "q-en-oscore-64f9471f93769265d5f7b47cff06b2cccc551d1fae39ec3334902a8d7370c6d5", "old_text": "Outer Observe value, it stops processing the message, as specified in ver-res. It the server accepts the Observe registration, a Partial IV must be included in all notifications (both successful and error). To secure the order of notifications, the client SHALL maintain a Notification Number for each Observation it registers. The Notification Number is", "comments": "This PR should (hopefully) conclude the main rewrite of Observe which was triggered by the security analysis and additional reviews of Observe. It should (hopefully) now be clear what part of the Observe related processing can be omitted in case Observe is not supported or in case intermediaries are not used. A new appendix E is covering the latter case.", "new_text": "Outer Observe value, it stops processing the message, as specified in ver-res. In order to support Observe processing in OSCORE-unaware intermediaries, for messages with the Observe option the Outer Code SHALL be set to 0.05 (FETCH) for requests and to 2.05 (Content) for responses. 4.1.3.4.2. If the server accepts an Observe registration, a Partial IV must be included in all notifications (both successful and error). To secure the order of notifications, the client SHALL maintain a Notification Number for each Observation it registers. The Notification Number is"}
{"id": "q-en-oscore-64f9471f93769265d5f7b47cff06b2cccc551d1fae39ec3334902a8d7370c6d5", "old_text": "Partial IV MUST always be compared with the Notification Number, which thus MUST NOT be forgotten after 128 seconds. Further details of replay protection of notifications are specified in replay- protection. The client MAY ignore the Observe option value. Clients can re-register observations to ensure that the observation is still active and establish freshness again (RFC7641 Section 3.3.1). When an OSCORE observation is refreshed, not only the ETags, but also the partial IV (and thus the payload and OSCORE option) change. The server uses the new request's Partial IV as the 'request_piv' of new responses. 4.1.3.5.", "comments": "This PR should (hopefully) conclude the main rewrite of Observe which was triggered by the security analysis and additional reviews of Observe. It should (hopefully) now be clear what part of the Observe related processing can be omitted in case Observe is not supported or in case intermediaries are not used. A new appendix E is covering the latter case.", "new_text": "Partial IV MUST always be compared with the Notification Number, which thus MUST NOT be forgotten after 128 seconds. Further details of replay protection of notifications are specified in replay- protection. Clients can re-register observations to ensure that the observation is still active and establish freshness again (RFC7641 Section 3.3.1). When an OSCORE protected observation is refreshed, not only the ETags, but also the partial IV (and thus the payload and OSCORE option) change. The server uses the new request's Partial IV as the 'request_piv' of new responses. 4.1.3.5."}
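The Notification Number handling in the record above (compare a notification's Partial IV against the stored Notification Number, accept only strictly greater values, then update) reduces to a small comparison. A sketch with an invented state dict; the names are illustrative, not from the draft:

```python
def accept_notification(partial_iv: int, state: dict) -> bool:
    """Sketch of notification ordering/replay protection: accept the
    notification only if its Partial IV exceeds the stored Notification
    Number, then update the stored number."""
    if partial_iv <= state.get("notification_number", -1):
        return False          # replayed or out-of-order notification
    state["notification_number"] = partial_iv
    return True
```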
{"id": "q-en-oscore-64f9471f93769265d5f7b47cff06b2cccc551d1fae39ec3334902a8d7370c6d5", "old_text": "The sending endpoint SHALL write the Code of the original CoAP message into the plaintext of the COSE object (see plaintext). After that, the sending endpoint writes an Outer Code to the OSCORE message. The Outer Code SHALL be set to 0.02 (POST) or 0.05 (FETCH) for requests. For non-Observe requests the client SHALL set the Outer Code to 0.02 (POST). For responses, the sending endpoint SHALL respond with Outer Code 2.04 (Changed) to 0.02 (POST) requests, and with Outer Code 2.05 (Content) to 0.05 (FETCH) requests. Using FETCH with Observe allows OSCORE to be compliant with the Observe processing in OSCORE-unaware intermediaries. The choice of POST and FETCH RFC8132 allows all OSCORE messages to have payload. The receiving endpoint SHALL discard the Outer Code in the OSCORE message and write the Code of the COSE object plaintext (plaintext) into the decrypted CoAP message. The other currently defined CoAP Header fields are Unprotected (Class U). The sending endpoint SHALL write all other header fields of the", "comments": "This PR should (hopefully) conclude the main rewrite of Observe which was triggered by the security analysis and additional reviews of Observe. It should (hopefully) now be clear what part of the Observe related processing can be omitted in case Observe is not supported or in case intermediaries are not used. A new appendix E is covering the latter case.", "new_text": "The sending endpoint SHALL write the Code of the original CoAP message into the plaintext of the COSE object (see plaintext). After that, the sending endpoint writes an Outer Code to the OSCORE message. With one exception (see observe-option-processing) the Outer Code SHALL by default be set to 0.02 (POST) for requests and to 2.04 (Changed) for responses. 
The receiving endpoint SHALL discard the Outer Code in the OSCORE message and write the Code of the COSE object plaintext (plaintext) into the decrypted CoAP message. The other currently defined CoAP Header fields are Unprotected (Class U). The sending endpoint SHALL write all other header fields of the"}
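Combining the default Outer Code rule in this record (0.02 POST / 2.04 Changed) with the Observe exception stated elsewhere in this section (0.05 FETCH for requests, 2.05 Content for responses), the selection can be sketched as follows; the function name and string return values are ours:

```python
def outer_code(is_request: bool, has_observe: bool) -> str:
    """Sketch of Outer Code selection for an OSCORE message:
    POST/Changed by default, FETCH/Content when Observe is present."""
    if is_request:
        return "0.05 FETCH" if has_observe else "0.02 POST"
    return "2.05 Content" if has_observe else "2.04 Changed"
```

The POST/FETCH pair is convenient here because both methods allow a payload, so every OSCORE message can carry the ciphertext in its payload.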
{"id": "q-en-oscore-64f9471f93769265d5f7b47cff06b2cccc551d1fae39ec3334902a8d7370c6d5", "old_text": "Additionally to the previous section, the following applies when Observe is supported. Observe allows re-registration of observations (see 3.3.1 of RFC7641). A server receiving an Observe registration identical to a previously stored one (including Partial IV and Token) SHALL treat it as valid and reply with the last notification sent. A client receiving a notification SHALL compare the Partial IV of a received notification with the Notification Number associated to that Observe registration. Observe reordering MUST be linked to OSCORE's ordering of notifications. The client MAY do so by copying the least significant bytes of the Partial IV into the Observe option, before passing it to CoAP processing. If the verification of the response succeeds, and the received Partial IV was greater than the Notification Number, then the client SHALL update the corresponding Notification Number with the received Partial IV. The client MUST stop processing notifications with a Partial IV which has been", "comments": "This PR should (hopefully) conclude the main rewrite of Observe which was triggered by the security analysis and additional reviews of Observe. It should (hopefully) now be clear what part of the Observe related processing can be omitted in case Observe is not supported or in case intermediaries are not used. A new appendix E is covering the latter case.", "new_text": "Additionally to the previous section, the following applies when Observe is supported. 7.4.1.1. A client receiving a notification SHALL compare the Partial IV of a received notification with the Notification Number associated to that Observe registration. The ordering of notifications after OSCORE processing MUST be aligned with the Partial IV. The client MAY do so by copying the least significant bytes of the Partial IV into the Observe option, before passing it to CoAP processing. 
The client MAY ignore an Outer Observe option value. If the verification of the response succeeds, and the received Partial IV was greater than the Notification Number, then the client SHALL update the corresponding Notification Number with the received Partial IV. The client MUST stop processing notifications with a Partial IV which has been"}
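The record above lets the client copy the least significant bytes of the Partial IV into the Observe option so that CoAP's notification reordering follows OSCORE's ordering. Since the Observe option value is at most 3 bytes, a sketch (function name is ours) could take the low 3 bytes of the Partial IV:

```python
def observe_from_partial_iv(partial_iv: bytes) -> int:
    """Derive an Observe option value from a notification's Partial IV
    by taking its three least significant (big-endian) bytes, so that
    CoAP reordering detection aligns with OSCORE's Partial IV order."""
    tail = partial_iv[-3:] if len(partial_iv) > 3 else partial_iv
    return int.from_bytes(tail, "big")
```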
{"id": "q-en-oscore-64f9471f93769265d5f7b47cff06b2cccc551d1fae39ec3334902a8d7370c6d5", "old_text": "replay protection data. The operation of validating the Partial IV and updating the replay protection data MUST be atomic. 7.5. To prevent reuse of an AEAD nonce with the same key, or from", "comments": "This PR should (hopefully) conclude the main rewrite of Observe which was triggered by the security analysis and additional reviews of Observe. It should (hopefully) now be clear what part of the Observe related processing can be omitted in case Observe is not supported or in case intermediaries are not used. A new appendix E is covering the latter case.", "new_text": "replay protection data. The operation of validating the Partial IV and updating the replay protection data MUST be atomic. 7.4.1.2. In order to allow intermediaries to re-register their interest in a resource (see 3.3.1 of RFC7641), a server receiving an Observe registration with Token, Kid and Partial IV identical to a previously received registration, and which decrypts without error SHALL NOT treat it as a replay and SHALL respond with a notification. The notification may be a cached copy of the latest sent notification (with the same Token, Kid and Partial IV) or it may be a newly generated notification with a fresh Partial IV. 7.5. To prevent reuse of an AEAD nonce with the same key, or from"}
{"id": "q-en-oscore-64f9471f93769265d5f7b47cff06b2cccc551d1fae39ec3334902a8d7370c6d5", "old_text": "For example, an If-Match Outer option is discarded, Uri-Host Outer option is not discarded, Observe is not discarded. Insert the following steps between step 6 and 7 of ver-req: B. If Observe was present in the received message (in step 1): If the value of Observe in the Outer message is 0: If Observe is present and has value 0 in the decrypted options, discard the Outer Observe; Otherwise, discard both the Outer and Inner (if present) Observe options. If the value of Observe in the Outer message is not 0, discard decrypted Observe option if present. Insert the following steps between step 6 and 7 of ver-req: C. If the message is an Observe registration, store it in memory for later comparison with re-registrations (see replay-protection). D. If the message is an Observe cancellation, delete from memory the previously stored registration related to that observation (see replay-protection). 8.3. If a CoAP response is generated in response to an OSCORE request, the", "comments": "This PR should (hopefully) conclude the main rewrite of Observe which was triggered by the security analysis and additional reviews of Observe. It should (hopefully) now be clear what part of the Observe related processing can be omitted in case Observe is not supported or in case intermediaries are not used. A new appendix E is covering the latter case.", "new_text": "For example, an If-Match Outer option is discarded, Uri-Host Outer option is not discarded, Observe is not discarded. Replace step 3 in ver-req with: B. If Observe is present in the received message, and has value 0, check if the Token, Kid and Partial IV are identical to a previously received Observe registration. In this case, replay verification is postponed until step C. Otherwise verify the 'Partial IV' parameter using the Replay Window, as described in replay-protection. 
Insert the following steps between step 6 and 7 of ver-req: C. If Observe was present in the received message (in step 1): If the value of Observe in the Outer message is 0: If Observe is present and has value 0 in the decrypted options, discard the Outer Observe. If the Token, Kid and Partial IV are identical to a previously received Observe registration, respond with a notification as described in observe-replay- processing; Otherwise, discard both the Outer and Inner (if present) Observe options and verify the 'Partial IV' parameter using the Replay Window, as described in replay-protection. If the value of Observe in the Outer message is not 0, discard the decrypted Observe option if present. 8.3. If a CoAP response is generated in response to an OSCORE request, the"}
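The replacement steps B and C above distinguish an identical Observe re-registration (same Token, Kid, and Partial IV as a stored registration: answer with a notification, no replay check) from an ordinary request (run the Replay Window check). A hypothetical server-side dispatch, sketching that decision only; `registrations` and `verify_replay_window` are illustrative stand-ins, not names from the draft:

```python
def classify_observe_request(token, kid, partial_iv, observe,
                             registrations, verify_replay_window):
    """Sketch of steps B/C: decide how to treat a decrypted request.

    registrations: set of (token, kid, partial_iv) tuples from previously
    accepted Observe registrations. verify_replay_window: callable standing
    in for the Replay Window check; returns False for a replay.
    """
    key = (token, kid, partial_iv)
    if observe == 0 and key in registrations:
        # Identical re-registration: not treated as a replay; the server
        # responds with a (possibly cached) notification.
        return "respond-with-notification"
    if not verify_replay_window(partial_iv):
        return "reject-replay"
    if observe == 0:
        registrations.add(key)   # remember for future re-registrations
    return "process-normally"
```

Note the ordering: the registration comparison happens before the Replay Window check, since a verbatim re-registration would otherwise be rejected as a replay.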
{"id": "q-en-oscore-64f9471f93769265d5f7b47cff06b2cccc551d1fae39ec3334902a8d7370c6d5", "old_text": "Number by one. Compute the AEAD nonce from the Sender ID, Common IV, and Partial IV. Add the following step after step 5: B. If the message is an Observe notification, store it in memory for use with re-registrations (see replay-protection), and delete from memory any older one previously stored. 8.4. A client receiving a response containing the OSCORE option SHALL", "comments": "This PR should (hopefully) conclude the main rewrite of Observe which was triggered by the security analysis and additional reviews of Observe. It should (hopefully) now be clear what part of the Observe related processing can be omitted in case Observe is not supported or in case intermediaries are not used. A new appendix E is covering the latter case.", "new_text": "Number by one. Compute the AEAD nonce from the Sender ID, Common IV, and Partial IV. 8.4. A client receiving a response containing the OSCORE option SHALL"}
{"id": "q-en-oscore-29f2afd352ae630214b6c786f7f6f236d511e1eeb16711e593175ba997098801", "old_text": "options. An Outer Max-Age message field is used to avoid unnecessary caching of OSCORE error responses at OSCORE-unaware intermediary nodes. A server MAY set a Class U Max-Age message field with value zero to OSCORE error responses, which are described in Sections replay- protection, ver-req, and ver-res. Such a message field is then processed according to outer-options. Successful OSCORE responses do not need to include an Outer Max-Age option since the responses are non-cacheable by construction (see coap-header). 4.1.3.2.", "comments": "Solves issue (from review comments by NAME : URL)\nSection 4.1.3.1 - I am unclear why an OSCORE error response can be cached by an OSCORE unaware intermediate would be cached, but a success message is never going to be cached. Based on this, I don't plan to set an outer Max-Age option.\nClosed in 84403c4c4 Section 4.1.3.1 - I am unclear why an OSCORE error response can be cached by an OSCORE unaware intermediate would be cached, but a success message is never going to be cached. Based on this, I don't plan to set an outer Max-Age option. In section 5.4 you have the text requestpiv: contains the value of the 'Partial IV' in the COSE object of the request (see Section 5), with one exception: in case of protection or verification of Observe cancellations, the requestpiv contains the value of the 'Partial IV' in the COSE object of the corresponding registration (see Section 4.1.3.5.1). I am unclear how/why this is different for observations. A cancelation is a message, so the IV is the request. A re-registration is a request message, any response would correspond to that request. The only interesting question has to do with updating the MID on a re-registration but not how PIVs work for this field. Section 6.1 - Is a registry needed for the leading byte of compression? Behavior if bits 0, 1, or 2 is set in the flags byte on decode? 
Section C - given the way that my system is implemented, it would be nice if the outputs included the first full IV to be used for both the sender and the recipient. That would allow for a test that the combination of ids and common ivs is done correctly. In my case I do not have the shared IV available for testing as I immediately or in the id.", "new_text": "options. An Outer Max-Age message field is used to avoid unnecessary caching of error responses caused by OSCORE processing at OSCORE-unaware intermediary nodes. A server MAY set a Class U Max-Age message field with value zero to such error responses, described in Sections replay-protection, ver-req, and ver-res, since these error responses are cacheable, but subsequent OSCORE requests would never produce a cache hit at the intermediary. Setting the Outer Max-Age to zero relieves the intermediary from uselessly caching responses. Successful OSCORE responses do not need to include an Outer Max-Age option since the responses appear to the OSCORE-unaware intermediary as 2.04 Changed responses, which are non-cacheable (see coap-header). The Outer Max-Age message field is processed according to outer-options. 4.1.3.2."}
{"id": "q-en-oscore-198219c28f57ff3791b14c32649bee22dc97c63992cbe6e2c77da88ef0c1dd83", "old_text": "Every time a client issues a new Observe request, a new Partial IV MUST be used (see cose-object), and so the payload and OSCORE option are changed. The server uses the Partial IV of the new request as the 'request_piv' of all associated notifications (see AAD). The Partial IV of the registration is also used as 'request_piv' of associated cancellations (see AAD). Intermediaries are not assumed to have access to the OSCORE security context used by the endpoints, and thus cannot make requests or", "comments": "Solves issue (from review comments by NAME : URL)\nIn section 5.4 you have the text requestpiv: contains the value of the 'Partial IV' in the COSE object of the request (see Section 5), with one exception: in case of protection or verification of Observe cancellations, the requestpiv contains the value of the 'Partial IV' in the COSE object of the corresponding registration (see Section 4.1.3.5.1). I am unclear how/why this is different for observations. A cancelation is a message, so the IV is the request. A re-registration is a request message, any response would correspond to that request. The only interesting question has to do with updating the MID on a re-registration but not how PIVs work for this field.\nAnother case to make sure that works Client - please cancel my observation Server - I don't have an observation for you, but sure I am willing to tell you I canceled it. What request PIV would I use in the AAD?\nYes, the verification of cancellation would fail since the server does not have the initial registration's partial iv saved, if it does not have the observation stored. So OSCORE would fail, which is not right. We have reverted to previous version (request_piv is not exception for cancellations) in c94314321 , but we want to add some security considerations about Observe and cancellations before closing this issue. 
Section 4.1.3.1 - I am unclear why an OSCORE error response can be cached by an OSCORE unaware intermediate would be cached, but a success message is never going to be cached. Based on this, I don't plan to set an outer Max-Age option. In section 5.4 you have the text requestpiv: contains the value of the 'Partial IV' in the COSE object of the request (see Section 5), with one exception: in case of protection or verification of Observe cancellations, the requestpiv contains the value of the 'Partial IV' in the COSE object of the corresponding registration (see Section 4.1.3.5.1). I am unclear how/why this is different for observations. A cancelation is a message, so the IV is the request. A re-registration is a request message, any response would correspond to that request. The only interesting question has to do with updating the MID on a re-registration but not how PIVs work for this field. Section 6.1 - Is a registry needed for the leading byte of compression? Behavior if bits 0, 1, or 2 is set in the flags byte on decode? Section C - given the way that my system is implemented, it would be nice if the outputs included the first full IV to be used for both the sender and the recipient. That would allow for a test that the combination of ids and common ivs is done correctly. In my case I do not have the shared IV available for testing as I immediately or in the id.", "new_text": "Every time a client issues a new Observe request, a new Partial IV MUST be used (see cose-object), and so the payload and OSCORE option are changed. The server uses the Partial IV of the new request as the 'request_piv' of all associated notifications (see AAD). Intermediaries are not assumed to have access to the OSCORE security context used by the endpoints, and thus cannot make requests or"}
{"id": "q-en-oscore-198219c28f57ff3791b14c32649bee22dc97c63992cbe6e2c77da88ef0c1dd83", "old_text": "the request (see cose-object). request_piv: contains the value of the 'Partial IV' in the COSE object of the request (see cose-object), with one exception: in case of protection or verification of Observe cancellations, the request_piv contains the value of the 'Partial IV' in the COSE object of the corresponding registration (see observe- registration). options: contains the Class I options (see outer-options) present in the original CoAP message encoded as described in Section 3.1", "comments": "Solves issue (from review comments by NAME : URL)\nIn section 5.4 you have the text requestpiv: contains the value of the 'Partial IV' in the COSE object of the request (see Section 5), with one exception: in case of protection or verification of Observe cancellations, the requestpiv contains the value of the 'Partial IV' in the COSE object of the corresponding registration (see Section 4.1.3.5.1). I am unclear how/why this is different for observations. A cancelation is a message, so the IV is the request. A re-registration is a request message, any response would correspond to that request. The only interesting question has to do with updating the MID on a re-registration but not how PIVs work for this field.\nAnother case to make sure that works Client - please cancel my observation Server - I don't have an observation for you, but sure I am willing to tell you I canceled it. What request PIV would I use in the AAD?\nYes, the verification of cancellation would fail since the server does not have the initial registration's partial iv saved, if it does not have the observation stored. So OSCORE would fail, which is not right. We have reverted to previous version (request_piv is not exception for cancellations) in c94314321 , but we want to add some security considerations about Observe and cancellations before closing this issue. 
Section 4.1.3.1 - I am unclear why an OSCORE error response can be cached by an OSCORE unaware intermediate would be cached, but a success message is never going to be cached. Based on this, I don't plan to set an outer Max-Age option. In section 5.4 you have the text requestpiv: contains the value of the 'Partial IV' in the COSE object of the request (see Section 5), with one exception: in case of protection or verification of Observe cancellations, the requestpiv contains the value of the 'Partial IV' in the COSE object of the corresponding registration (see Section 4.1.3.5.1). I am unclear how/why this is different for observations. A cancelation is a message, so the IV is the request. A re-registration is a request message, any response would correspond to that request. The only interesting question has to do with updating the MID on a re-registration but not how PIVs work for this field. Section 6.1 - Is a registry needed for the leading byte of compression? Behavior if bits 0, 1, or 2 is set in the flags byte on decode? Section C - given the way that my system is implemented, it would be nice if the outputs included the first full IV to be used for both the sender and the recipient. That would allow for a test that the combination of ids and common ivs is done correctly. In my case I do not have the shared IV available for testing as I immediately or in the id.", "new_text": "the request (see cose-object). request_piv: contains the value of the 'Partial IV' in the COSE object of the request (see cose-object). options: contains the Class I options (see outer-options) present in the original CoAP message encoded as described in Section 3.1"}
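The AAD discussed in the records above is a serialized CBOR array containing unsigned integers (ver, code), a possibly negative integer (alg), and byte strings (request kid, request_piv). A minimal CBOR encoder covering just those item types can make the serialization concrete; the field values below are purely illustrative, and the exact array layout is defined by the draft, not by this sketch:

```python
# Minimal CBOR encoding (RFC 7049 major types 0, 1, 2, 4) for the kinds of
# items found in the external_aad array; lengths/values are assumed to fit
# the ranges handled below.
def _head(major, n):
    if n < 24:
        return bytes([(major << 5) | n])
    if n < 256:
        return bytes([(major << 5) | 24, n])
    return bytes([(major << 5) | 25, n >> 8, n & 0xFF])  # up to 65535

def cbor_uint(n):
    return _head(0, n)

def cbor_int(n):
    # Negative integers use major type 1 with value -1 - n.
    return cbor_uint(n) if n >= 0 else _head(1, -1 - n)

def cbor_bstr(b):
    return _head(2, len(b)) + b

def cbor_array(items):
    return _head(4, len(items)) + b"".join(items)

# Illustrative external_aad-style array: [ver, code, alg, kid, request_piv]
aad = cbor_array([
    cbor_uint(1),        # ver: CoAP version 1
    cbor_uint(2),        # code: illustrative value
    cbor_int(-7),        # alg: a negative COSE identifier; illustrative
    cbor_bstr(b"\x42"),  # request kid; illustrative
    cbor_bstr(b"\x05"),  # request_piv; illustrative
])
```

The resulting bytes feed the AEAD as additional authenticated data, so both endpoints must serialize the array identically byte for byte.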
{"id": "q-en-oscore-d22e6b16aecba289d7fca2889b7fbf69850886d308d2aae0382bfa28e441e35d", "old_text": "Abstract This memo defines Object Security of CoAP (OSCOAP), a method for application layer protection of message exchanges with the Constrained Application Protocol (CoAP), using the CBOR Object Signing and Encryption (COSE) format. OSCOAP provides end-to-end encryption, integrity and replay protection to CoAP payload, options, and header fields, as well as a secure binding between CoAP request and response messages. The use of OSCOAP is signaled with the CoAP option Object-Security, also defined in this memo. 1. The Constrained Application Protocol (CoAP) RFC7252 is a web application protocol, designed for constrained nodes and networks RFC7228. CoAP specifies the use of proxies for scalability and efficiency. At the same time CoAP references DTLS RFC6347 for security. Proxy operations on CoAP messages require DTLS to be terminated at the proxy. The proxy therefore not only has access to the data required for performing the intended proxy functionality, but is also able to eavesdrop on, or manipulate any part of the CoAP payload and metadata, in transit between client and server. The proxy can also inject, delete, or reorder packages without being protected or detected by DTLS. This memo defines Object Security of CoAP (OSCOAP), a data object based security protocol, protecting CoAP message exchanges end-to- end, across intermediary nodes. An analysis of end-to-end security for CoAP messages through intermediary nodes is performed in I- D.hartke-core-e2e-security-reqs, this specification addresses the forwarding case. The solution provides an in-layer security protocol for CoAP which does not depend on underlying layers and is therefore favorable for providing security for \"CoAP over foo\", e.g. CoAP messages passing over both unreliable and reliable transport I-D.ietf-core-coap-tcp- tls, CoAP over IEEE 802.15.4 IE I-D.bormann-6lo-coap-802-15-ie. 
OSCOAP builds on CBOR Object Signing and Encryption (COSE) I-D.ietf- cose-msg, providing end-to-end encryption, integrity, and replay protection. The use of OSCOAP is signaled with the CoAP option Object-Security, also defined in this memo. The solution transforms an unprotected CoAP message into a protected CoAP message in the following way: the unprotected CoAP message is protected by including payload (if present), certain options, and header fields in a COSE object. The message fields that have been encrypted are removed from the message whereas the Object-Security option and the COSE object are added. We call the result the \"protected\" CoAP message. Thus OSCOAP is a security protocol based on the exchange of protected CoAP messages (see oscoap-ex). OSCOAP provides protection of CoAP payload, certain options, and header fields, as well as a secure binding between CoAP request and response messages. OSCOAP provides replay protection, but like DTLS, OSCOAP only provides relative freshness in the sense that the sequence numbers allows a recipient to determine the relative order of messages. For applications having stronger demands on freshness (e.g. control of actuators), OSCOAP needs to be augmented with mechanisms providing absolute freshness I-D.mattsson-core-coap- actuators. OSCOAP may be used in extremely constrained settings, where DTLS cannot be supported. Alternatively, OSCOAP can be combined with", "comments": "UTF-8 \u00fcber alles.\nAbstract: Explain that OSCOAP works across intermediaries and over any layer.Intro: Explain that OSCOAP works for Observe and Blockwise.Intro: Move details on replay and freshness to some other section.Terminology: Explain the references and add COSE.", "new_text": "Abstract This memo defines Object Security of CoAP (OSCOAP), a method for application layer protection of the Constrained Application Protocol (CoAP), using the CBOR Object Signing and Encryption (COSE). 
OSCOAP provides end-to-end encryption, integrity and replay protection to CoAP payload, options, and header fields, as well as a secure message binding. OSCOAP is designed for constrained nodes and networks and can be used across intermediaries and over any layer. The use of OSCOAP is signaled with the CoAP option Object-Security, also defined in this memo. 1. The Constrained Application Protocol (CoAP) is a web application protocol, designed for constrained nodes and networks RFC7228. CoAP specifies the use of proxies for scalability and efficiency. At the same time CoAP RFC7252 references DTLS RFC6347 for security. Proxy operations on CoAP messages require DTLS to be terminated at the proxy. The proxy therefore not only has access to the data required for performing the intended proxy functionality, but is also able to eavesdrop on, or manipulate any part of the CoAP payload and metadata, in transit between client and server. The proxy can also inject, delete, or reorder packages without being protected or detected by DTLS. This memo defines Object Security of CoAP (OSCOAP), a data object based security protocol, protecting CoAP message exchanges end-to- end, across intermediary nodes. An analysis of end-to-end security for CoAP messages through intermediary nodes is performed in I- D.hartke-core-e2e-security-reqs, this specification addresses the forwarding case. In addition to the core features defined in RFC7252, OSCOAP supports Observe RFC7641 and Blockwise RFC7959. OSCOAP is designed for constrained nodes and networks and provides an in-layer security protocol for CoAP which does not depend on underlying layers. OSCOAP can be used anywhere that CoAP can be used, including unreliable transport RFC7228, reliable transport I- D.ietf-core-coap-tcp-tls, and non-IP transport I-D.bormann-6lo-coap- 802-15-ie. 
OSCOAP builds on CBOR Object Signing and Encryption (COSE) I-D.ietf- cose-msg, providing end-to-end encryption, integrity, replay protection, and secure message binding. The use of OSCOAP is signaled with the CoAP option Object-Security, defined in obj-sec- option-section. OSCOAP provides protection of CoAP payload, certain options, and header fields. The solution transforms an unprotected CoAP message into a protected CoAP message in the following way: the unprotected CoAP message is protected by including payload (if present), certain options, and header fields in a COSE object. The message fields that have been encrypted are removed from the message whereas the Object-Security option and the COSE object are added, see oscoap-ex. OSCOAP may be used in extremely constrained settings, where DTLS cannot be supported. Alternatively, OSCOAP can be combined with"}
{"id": "q-en-oscore-d22e6b16aecba289d7fca2889b7fbf69850886d308d2aae0382bfa28e441e35d", "old_text": "meanings. Readers are expected to be familiar with the terms and concepts described in RFC7252 and RFC7641. Readers are also expected to be familiar with RFC7049 and understand I-D.greevenbosch-appsawg-cbor- cddl. Terminology for constrained environments, such as \"constrained device\", \"constrained-node network\", is defined in RFC7228. 2.", "comments": "UTF-8 \u00fcber alles.\nAbstract: Explain that OSCOAP works across intermediaries and over any layer.Intro: Explain that OSCOAP works for Observe and Blockwise.Intro: Move details on replay and freshness to some other section.Terminology: Explain the references and add COSE.", "new_text": "meanings. Readers are expected to be familiar with the terms and concepts described in CoAP RFC7252, Observe RFC7641, Blockwise RFC7959, COSE I-D.ietf-cose-msg, CBOR RFC7049, CDDL I-D.greevenbosch-appsawg-cbor- cddl, and constrained environments RFC7228. 2."}
{"id": "q-en-oscore-d9392865f83b146eed23a78309d3b2fac378ce16f97156af77727b9dd1dc16e0", "old_text": "use for encryption. Its value is immutable once the security context is established. Base Key (master_secret). Variable length, uniformly random byte string containing the key used to derive traffic keys and IVs. Its value is immutable once the security context is established. The Sender Context contains the following parameters:", "comments": ", and\nThis goes against Jim's wish to have things in bits rather than bytes... But HKDF uses octets and to combine the two parameters as wanted by Ludwig, they need the same unit.\nFixed in branch v-02\nOSCOAP refers to HKDF [RFC5869], but it not clearly defined how the parameters map to HKDF-Extract and HKDF-Expand. E.g. IKM. Salt should be made optional.\nIn section 3.2.1. outlen is defined as \"the key/IV size of the AEAD algorithm\" and outputlength as \"is the size of the AEAD key/IV in bytes encoded as an 8-bit unsigned integer\" Isn't that essentially the same?\nI just created a v-02 branch for the next update. I'll make an update as soon as possible trying to address this.\nFixed in brach v-02", "new_text": "use for encryption. Its value is immutable once the security context is established. Master Secret. Variable length, uniformly random byte string containing the key used to derive traffic keys and IVs. Its value is immutable once the security context is established. Master Salt (OPTIONAL). Variable length byte string containing the salt used to derive traffic keys and IVs. Its value is immutable once the security context is established. The Sender Context contains the following parameters:"}
{"id": "q-en-oscore-d9392865f83b146eed23a78309d3b2fac378ce16f97156af77727b9dd1dc16e0", "old_text": "messages received. The 3-tuple (Cid, Sender ID, Partial IV) is called Transaction Identifier (Tid), and SHALL be unique for each Base Key. The Tid is used as a unique challenge in the COSE object of the protected CoAP request. The Tid is part of the Additional Authenticated Data (AAD, see sec-obj-cose) of the protected CoAP response message, which is how responses are bound to requests. 3.2.", "comments": ", and\nThis goes against Jim's wish to have things in bits rather than bytes... But HKDF uses octets and to combine the two parameters as wanted by Ludwig, they need the same unit.\nFixed in branch v-02\nOSCOAP refers to HKDF [RFC5869], but it not clearly defined how the parameters map to HKDF-Extract and HKDF-Expand. E.g. IKM. Salt should be made optional.\nIn section 3.2.1. outlen is defined as \"the key/IV size of the AEAD algorithm\" and outputlength as \"is the size of the AEAD key/IV in bytes encoded as an 8-bit unsigned integer\" Isn't that essentially the same?\nI just created a v-02 branch for the next update. I'll make an update as soon as possible trying to address this.\nFixed in brach v-02", "new_text": "messages received. The 3-tuple (Cid, Sender ID, Partial IV) is called Transaction Identifier (Tid), and SHALL be unique for each Master Secret. The Tid is used as a unique challenge in the COSE object of the protected CoAP request. The Tid is part of the Additional Authenticated Data (AAD, see sec-obj-cose) of the protected CoAP response message, which is how responses are bound to requests. 3.2."}
{"id": "q-en-oscore-d9392865f83b146eed23a78309d3b2fac378ce16f97156af77727b9dd1dc16e0", "old_text": "Context Identifier (Cid) Base Key (master_secret) The following input parameters MAY be pre-established. In case any of these parameters is not pre-established, the default value", "comments": ", and\nThis goes against Jim's wish to have things in bits rather than bytes... But HKDF uses octets and to combine the two parameters as wanted by Ludwig, they need the same unit.\nFixed in branch v-02\nOSCOAP refers to HKDF [RFC5869], but it not clearly defined how the parameters map to HKDF-Extract and HKDF-Expand. E.g. IKM. Salt should be made optional.\nIn section 3.2.1. outlen is defined as \"the key/IV size of the AEAD algorithm\" and outputlength as \"is the size of the AEAD key/IV in bytes encoded as an 8-bit unsigned integer\" Isn't that essentially the same?\nI just created a v-02 branch for the next update. I'll make an update as soon as possible trying to address this.\nFixed in brach v-02", "new_text": "Context Identifier (Cid) Master Secret The following input parameters MAY be pre-established. In case any of these parameters is not pre-established, the default value"}
{"id": "q-en-oscore-d9392865f83b146eed23a78309d3b2fac378ce16f97156af77727b9dd1dc16e0", "old_text": "Defaults are 0x01 for the endpoint initially being client, and 0x00 for the endpoint initially being server Key Derivation Function (KDF) Default is HKDF SHA-256", "comments": ", and\nThis goes against Jim's wish to have things in bits rather than bytes... But HKDF uses octets and to combine the two parameters as wanted by Ludwig, they need the same unit.\nFixed in branch v-02\nOSCOAP refers to HKDF [RFC5869], but it not clearly defined how the parameters map to HKDF-Extract and HKDF-Expand. E.g. IKM. Salt should be made optional.\nIn section 3.2.1. outlen is defined as \"the key/IV size of the AEAD algorithm\" and outputlength as \"is the size of the AEAD key/IV in bytes encoded as an 8-bit unsigned integer\" Isn't that essentially the same?\nI just created a v-02 branch for the next update. I'll make an update as soon as possible trying to address this.\nFixed in brach v-02", "new_text": "Defaults are 0x01 for the endpoint initially being client, and 0x00 for the endpoint initially being server Master Salt Default is the empty string Key Derivation Function (KDF) Default is HKDF SHA-256"}
{"id": "q-en-oscore-d9392865f83b146eed23a78309d3b2fac378ce16f97156af77727b9dd1dc16e0", "old_text": "where: master_secret is defined above salt is a string of zeros of the length of the hash function output in octets info is a serialized CBOR array consisting of: output_length is the size of the AEAD key/IV in bytes encoded as an 8-bit unsigned integer For example, if the algorithm AES-CCM-64-64-128 (see Section 10.2 in I-D.ietf-cose-msg) is used, output_length for the keys is 128 bits and output_length for the IVs is 56 bits. 3.2.2.", "comments": ", and\nThis goes against Jim's wish to have things in bits rather than bytes... But HKDF uses octets and to combine the two parameters as wanted by Ludwig, they need the same unit.\nFixed in branch v-02\nOSCOAP refers to HKDF [RFC5869], but it is not clearly defined how the parameters map to HKDF-Extract and HKDF-Expand. E.g. IKM. Salt should be made optional.\nIn section 3.2.1. outlen is defined as \"the key/IV size of the AEAD algorithm\" and outputlength as \"is the size of the AEAD key/IV in bytes encoded as an 8-bit unsigned integer\" Isn't that essentially the same?\nI just created a v-02 branch for the next update. I'll make an update as soon as possible trying to address this.\nFixed in branch v-02", "new_text": "where: salt is the Master Salt as defined above IKM is the Master Secret as defined above info is a serialized CBOR array consisting of: L is the key/IV size of the AEAD algorithm in octets without leading zeroes. For example, if the algorithm AES-CCM-64-64-128 (see Section 10.2 in I-D.ietf-cose-msg) is used, L is 16 bytes for keys and 7 bytes for IVs. 3.2.2."}
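The key derivation in the record above maps directly onto RFC 5869: the Master Salt is the HKDF salt, the Master Secret is the IKM, the serialized CBOR array is the info, and L is the output length. A plain HMAC-SHA-256 sketch of that mapping (the function name is illustrative):

```python
import hashlib
import hmac

def hkdf_sha256(salt, ikm, info, L):
    """HKDF per RFC 5869 with SHA-256: Extract, then Expand to L octets."""
    if not salt:
        # RFC 5869: an absent salt is a HashLen string of zeros; an empty
        # Master Salt behaves identically under HMAC key padding.
        salt = b"\x00" * 32
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()       # HKDF-Extract
    okm, t, counter = b"", b"", 1
    while len(okm) < L:                                      # HKDF-Expand
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:L]
```

For AES-CCM-64-64-128 one would call this with L=16 for keys and L=7 for IVs, passing the serialized CBOR info array described in the text.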
{"id": "q-en-oscore-cacfb7688a8219a014904c4adbcf07b2a2f9565c3d165976d5ca628693268181", "old_text": "5. This section defines how to use the COSE format I-D.ietf-cose-msg to wrap and protect data in the unprotected CoAP message. OSCOAP uses the COSE_Encrypt0 structure with an Authenticated Encryption with Additional Data (AEAD) algorithm. The AEAD algorithm AES-CCM-64-64-128 defined in Section 10.2 of I- D.ietf-cose-msg is mandatory to implement. For AES-CCM-64-64-128 the length of Sender Key and Recipient Key SHALL be 128 bits, the length of nonce, Sender IV, and Recipient IV SHALL be 7 bytes, and the maximum Sequence Number SHALL be 2^56-1. The nonce is constructed as described in Section 3.1 of I-D.ietf-cose-msg, i.e. by padding the Partial IV (Sequence Number in network byte order) with zeroes and XORing it with the context IV (Sender IV or Recipient IV). Since OSCOAP only makes use of a single COSE structure, there is no need to explicitly specify the structure, and OSCOAP uses the untagged version of the COSE_Encrypt0 structure (Section 2. of I- D.ietf-cose-msg). If the COSE object has a different structure, the recipient MUST reject the message, treating it as malformed. We denote by Plaintext the data that is encrypted and integrity protected, and by Additional Authenticated Data (AAD) the data that is integrity protected only, in the COSE object. The fields of COSE_Encrypt0 structure are defined as follows (see example in sem-auth-enc). The \"Headers\" field is formed by: The \"protected\" field, which SHALL include: The \"Partial IV\" parameter. The value is set to the Sender Sequence Number. The Partial IV is a byte string (type: bstr), and SHOULD be of minimum length needed to encode the sequence number. The \"kid\" parameter. The value is set to the Sender ID (see sec-context-section). The \"unprotected\" field, which SHALL be empty. 
The \"ciphertext\" field is computed from the Plaintext (see plaintext) and the Additional Authenticated Data (AAD) (see AAD) and encoded as a byte string (type: bstr), following Section 5.2 of I-D.ietf-cose-msg. 5.1. The Plaintext is formatted as a CoAP message without Header (see plaintext-figure) consisting of: all CoAP Options present in the unprotected message which are encrypted (see coap-headers-and-options), in the order as given by the Option number (each Option with Option Header including delta to previous included encrypted option); and the CoAP Payload, if present, and in that case prefixed by the one-byte Payload Marker (0xFF). 5.2. The Additional Authenticated Data (\"Enc_structure\") as described is Section 5.3 of I-D.ietf-cose-msg includes: the \"context\" parameter, which has value \"Encrypted\" the \"protected\" parameter, which includes the \"protected\" part of the \"Headers\" field; the \"external_aad\" is a serialized CBOR array aad where the exact content is different in requests (external_aad_req) and responses (external_aad_resp). It contains: ver: uint, contains the CoAP version number, as defined in Section 3 of code: uint, contains is the CoAP Code of the unprotected CoAP message, as defined in Section 3 of RFC7252. alg: int, contains the Algorithm from the security context used for the exchange (see sec-context-def-section); id : bstr, is the identifier for the endpoint sending the request and verifying the response; which means that for the endpoint sending the response, the id has value Recipient ID, while for the endpoint receiving the response, id has the value Sender ID. seq : bstr, is the value of the \"Partial IV\" in the COSE object of the request (see Section 5). tag-previous-block: bstr, contains the AEAD Tag of the message containing the previous block in the sequence, as enumerated by Block1 in the case of a request and Block2 in the case of a response, if the message is fragmented using a block option RFC7959. 
The encryption process is described in Section 5.3 of I-D.ietf-cose- msg. 6.", "comments": "This Partly addresses\nClarify that maximum sequence number is algorithm dependant and define a formula for it.\nMay be simpler to just have a single structure for the externalaad. externalaad = [ ver : uint, code : uint, alg : int, id : bstr, seq : bstr, ? tag-previous-block : bstr ]\nthe \"context\" parameter, which has value \"Encrypted\" This should be \"Encrypt0\", but even better OSCOAP should just point to draft-ietf-cose-msg that already specifies this.\n-00 Section 3.2.3 - You are getting ahead of yourself. The only way that the identifiers would not be unique would be if the sender and recipient collided. I do not know that this would be a problem and therefore talking about it makes no sense. I am assuming that this is supposed to have something to do with the multicast work that I have not looked at. If so it does not belong here but there.\nAgree. I'll delete all text on group communication from the document.\nMulticast removed in branch v-02", "new_text": "5. This section defines how to use COSE I-D.ietf-cose-msg to wrap and protect data in the unprotected CoAP message. OSCOAP uses the untagged COSE_Encrypt0 structure with an Authenticated Encryption with Additional Data (AEAD) algorithm. The key lengths, IV lengths, and maximum sequence number are algorithm dependent. The maximum sequence number SHALL be 2^(nonce length in bits - 1) - 1. The AEAD algorithm AES-CCM-64-64-128 defined in Section 10.2 of I- D.ietf-cose-msg is mandatory to implement. For AES-CCM-64-64-128 the length of Sender Key and Recipient Key is 128 bits, the length of nonce, Sender IV, and Recipient IV is 7 bytes, and the maximum Sequence Number is 2^55 - 1. The nonce is constructed as described in Section 3.1 of I-D.ietf- cose-msg, i.e. by padding the partial IV (Sequence Number in network byte order) with zeroes and XORing it with the context IV (Sender IV or Recipient IV). 
The first bit in the Sender IV or Recipient IV SHALL be flipped in responses. We denote by Plaintext the data that is encrypted and integrity protected, and by Additional Authenticated Data (AAD) the data that is integrity protected only. The COSE Object SHALL be a COSE_Encrypt0 object with fields defined as follows The \"protected\" field includes: The \"Partial IV\" parameter. The value is set to the Sender Sequence Number. The Partial IV SHALL be of minimum length needed to encode the sequence number. This parameter SHALL be present in requests, and MAY be present in responses. The \"kid\" parameter. The value is set to the Sender ID (see sec-context-section). This parameter SHALL be present in requests. The \"unprotected\" field is empty. The \"ciphertext\" field is computed from the Plaintext (see plaintext) and the Additional Authenticated Data (AAD) (see AAD) following Section 5.2 of I-D.ietf-cose-msg. The encryption process is described in Section 5.3 of I-D.ietf-cose- msg. 5.1. The Plaintext is formatted as a CoAP message without Header (see plaintext-figure) consisting of: all CoAP Options present in the unprotected message that are encrypted (see coap-headers-and-options). The options are encoded as described in Section 3.1 of RFC7252, where the delta is the difference to the previously included encrypted option; and the Payload of the unprotected CoAP message, if present, and in that case prefixed by the one-byte Payload Marker (0xFF). 5.2. The external_aad SHALL be a CBOR array as defined below: where: ver: uint, contains the CoAP version number, as defined in Section 3 of RFC7252. code: uint, contains the CoAP Code of the unprotected CoAP message, as defined in Section 3 of RFC7252. alg: int, contains the Algorithm from the security context used for the exchange (see sec-context-def-section). 
request_id : bstr, contains the identifier for the endpoint sending the request and verifying the response; which means that for the endpoint sending the response, the id has value Recipient ID, while for the endpoint receiving the response, id has the value Sender ID. request_seq : bstr, contains the value of the \"Partial IV\" in the COSE object of the request (see Section 5). 6."}
{"id": "q-en-perc-wg-9ba2da4f3a25c6c7101effd8b6b56464075081269819b67d4e84e5a9e1205a0b", "old_text": "been changed, then the original value will already be in the OHB, so no update to the OHB is required. 5.3. To decrypt a packet, the endpoint first decrypts and verifies using", "comments": "This is an important security consideration for the re-encryption case, which doesn't appear to be covered by the current text.\nSalt could be the same, but key definitely needs to be different. Requiring both to be different is best, of course.", "new_text": "been changed, then the original value will already be in the OHB, so no update to the OHB is required. A Media Distributor that decrypts, modifies, and re-encrypts packets in this way MUST use an independent key for each recipient, SHOULD use an independent salt for each recipient, and MUST NOT re-encrypt the packet using the sender's keys. If the Media Distributor decrypts and re-encrypts with the same key and salt, it will result in the reuse of a (key, nonce) pair, undermining the security of GCM. 5.3. To decrypt a packet, the endpoint first decrypts and verifies using"}
{"id": "q-en-qlog-b8bf5c0b4135c08724e5fd26e0662042211a45420b79dde67d36566a129d953d", "old_text": "1. This document describes the values of the qlog name (\"category\" + \"event\") and \"data\" fields and their semantics for the HTTP/3 and QPACK protocols. This document is based on draft-34 of the HTTP/3 I-D QUIC-HTTP and draft-21 of the QPACK I-D QUIC-QPACK. QUIC events are defined in a separate document QLOG-QUIC. Feedback and discussion are welcome at https://github.com/quicwg/qlog [1]. Readers are advised to refer to the \"editor's draft\" at that", "comments": "First, these documents can now refer to the published QUIC RFCs. While doing that, let's just replace all [REF] with {{REF}}, which works better for our tooling. Lastly, lets brush up the cross references between documents a bit.\nI checked this locally and everything was good. It's easy enough to fix up later if I did break something.\nqlog alreaady refers to the -34 drafts. Nominally this change should be easy. Although some section references might need double checking.\nlgtm, assuming that the build tool correctly sets the links to the RFCs", "new_text": "1. This document describes the values of the qlog name (\"category\" + \"event\") and \"data\" fields and their semantics for HTTP/3 RFC9114 and QPACK QPACK. Feedback and discussion are welcome at https://github.com/quicwg/qlog [1]. Readers are advised to refer to the \"editor's draft\" at that"}
{"id": "q-en-qlog-b8bf5c0b4135c08724e5fd26e0662042211a45420b79dde67d36566a129d953d", "old_text": "1. This document describes the values of the qlog name (\"category\" + \"event\") and \"data\" fields and their semantics for the QUIC protocol. This document is based on draft-34 of the QUIC I-Ds QUIC-TRANSPORT, QUIC-RECOVERY, and QUIC-TLS. HTTP/3 and QPACK events are defined in a separate document QLOG-H3. Feedback and discussion are welcome at https://github.com/quicwg/qlog [1]. Readers are advised to refer to the \"editor's draft\" at that", "comments": "First, these documents can now refer to the published QUIC RFCs. While doing that, let's just replace all [REF] with {{REF}}, which works better for our tooling. Lastly, lets brush up the cross references between documents a bit.\nI checked this locally and everything was good. It's easy enough to fix up later if I did break something.\nqlog alreaady refers to the -34 drafts. Nominally this change should be easy. Although some section references might need double checking.\nlgtm, assuming that the build tool correctly sets the links to the RFCs", "new_text": "1. This document describes the values of the qlog name (\"category\" + \"event\") and \"data\" fields and their semantics for QUIC; see QUIC- TRANSPORT, QUIC-RECOVERY, and QUIC-TLS. Feedback and discussion are welcome at https://github.com/quicwg/qlog [1]. Readers are advised to refer to the \"editor's draft\" at that"}
{"id": "q-en-quic-bit-grease-babd34006ab1b63b34708bf9cc55aeaff7f97e790310a3e73002a836bd7a9383", "old_text": "treat receipt of a non-empty value as a connection error of type TRANSPORT_PARAMETER_ERROR. Advertising the grease_quic_bit transport parameter indicates that packets sent to this endpoint MAY set a value of 0 for the QUIC Bit. The QUIC Bit is defined as the second-to-most significant bit of the first byte of QUIC packets (that is, the value 0x40). A server MUST respect the value it previously provided for the grease_quic_bit transport parameter if it accepts 0-RTT. A client", "comments": "Inappropriate use of MAY, lack of MUST [NOT]. Thanks to Russ Housley for schooling me on the correct usage.", "new_text": "treat receipt of a non-empty value as a connection error of type TRANSPORT_PARAMETER_ERROR. An endpoint that advertises the grease_quic_bit transport parameter MUST accept packets with the QUIC Bit set to a value of 0. The QUIC Bit is defined as the second-to-most significant bit of the first byte of QUIC packets (that is, the value 0x40). A server MUST respect the value it previously provided for the grease_quic_bit transport parameter if it accepts 0-RTT. A client"}
{"id": "q-en-quic-bit-grease-babd34006ab1b63b34708bf9cc55aeaff7f97e790310a3e73002a836bd7a9383", "old_text": "resulting in connection failures. A server MUST clear the QUIC bit only after processing transport parameters from a client. A server cannot remember that a client negotiated the extension in a previous connection and clear the QUIC Bit based on that information. An endpoint cannot clear the QUIC Bit without knowing whether the peer supports the extension. As Stateless Reset packets (Section 10.3 of QUIC) are only used after a loss of connection state, endpoints are unlikely to be able to clear the QUIC Bit on", "comments": "Inappropriate use of MAY, lack of MUST [NOT]. Thanks to Russ Housley for schooling me on the correct usage.", "new_text": "resulting in connection failures. A server MUST clear the QUIC bit only after processing transport parameters from a client. A server MUST NOT remember that a client negotiated the extension in a previous connection and clear the QUIC Bit based on that information. An endpoint MUST NOT clear the QUIC Bit without knowing whether the peer supports the extension. As Stateless Reset packets (Section 10.3 of QUIC) are only used after a loss of connection state, endpoints are unlikely to be able to clear the QUIC Bit on"}
{"id": "q-en-quic-v2-7ae8850df6d2e0a2974348e958caf4e97c6edb3201b579dcdacb4d4db488fc98", "old_text": "1. QUIC RFC9000 has numerous extension points, including the version number that occupies the second through fifth octets of every long header (see RFC8999). If experimental versions are rare, and QUIC version 1 constitutes the vast majority of QUIC traffic, there is the potential for middleboxes to ossify on the version octets always being 0x00000001. Furthermore, version 1 Initial packets are encrypted with keys derived from a universally known salt, which allow observers to", "comments": "This uses kramdown tricks to make real links for the section references to RFC 9001 and uses a nicer label for those.", "new_text": "1. QUIC QUIC has numerous extension points, including the version number that occupies the second through fifth octets of every long header (see RFC8999). If experimental versions are rare, and QUIC version 1 constitutes the vast majority of QUIC traffic, there is the potential for middleboxes to ossify on the version octets always being 0x00000001. Furthermore, version 1 Initial packets are encrypted with keys derived from a universally known salt, which allow observers to"}
{"id": "q-en-quic-v2-7ae8850df6d2e0a2974348e958caf4e97c6edb3201b579dcdacb4d4db488fc98", "old_text": "3. QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in RFC9000, RFC9001, and RFC9002, with the following changes: The version field of long headers is TBD. Note: Unless this", "comments": "This uses kramdown tricks to make real links for the section references to RFC 9001 and uses a nicer label for those.", "new_text": "3. QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in QUIC, QUIC-TLS, and RFC9002, with the following changes: The version field of long headers is TBD. Note: Unless this"}
{"id": "q-en-quic-v2-7ae8850df6d2e0a2974348e958caf4e97c6edb3201b579dcdacb4d4db488fc98", "old_text": "provisional value 0xff020000, which might change with each edition of this document. The salt used to derive Initial keys in Sec 5.2 of RFC9001 changes to The labels used in RFC9001 to derive packet protection keys (Sec 5.1), header protection keys (Sec 5.4), Retry Integrity Tag keys (Sec 5.8), and key updates (Sec 6.1) change from \"quic key\" to \"quicv2 key\", from \"quic iv\" to \"quicv2 iv\", from \"quic hp\" to \"quicv2 hp\", and from \"quic ku\" to \"quicv2 ku\", to meet the guidance for new versions in Section 9.6 of that document. The key and nonce used for the Retry Integrity Tag (Sec 5.8 of RFC9001) change to: 4.", "comments": "This uses kramdown tricks to make real links for the section references to RFC 9001 and uses a nicer label for those.", "new_text": "provisional value 0xff020000, which might change with each edition of this document. The salt used to derive Initial keys in Section 5.2 of QUIC-TLS changes to: The labels used in QUIC-TLS to derive packet protection keys (Section 5.1 of QUIC-TLS), header protection keys (Section 5.4 of QUIC-TLS), Retry Integrity Tag keys (Section 5.8 of QUIC-TLS), and key updates (Section 6.1 of QUIC-TLS) change from \"quic key\" to \"quicv2 key\", from \"quic iv\" to \"quicv2 iv\", from \"quic hp\" to \"quicv2 hp\", and from \"quic ku\" to \"quicv2 ku\", to meet the guidance for new versions in Section 9.6 of QUIC-TLS. The key and nonce used for the Retry Integrity Tag (Section 5.8 of QUIC-TLS) change to: 4."}
{"id": "q-en-quic-v2-4633fb110ba71d3edccfa267e43f26923c6a44a78ca2e83876d5b154ea7328b2", "old_text": "QUIC QUIC has numerous extension points, including the version number that occupies the second through fifth bytes of every long header (see RFC8999). If experimental versions are rare, and QUIC version 1 constitutes the vast majority of QUIC traffic, there is the potential for middleboxes to ossify on the version bytes always being 0x00000001. Furthermore, version 1 Initial packets are encrypted with keys derived from a universally known salt, which allow observers to inspect the contents of these packets, which include the TLS Client Hello and Server Hello messages. Again, middleboxes may ossify on the version 1 key derivation and packet formats. Finally QUIC-VN provides two mechanisms for endpoints to negotiate", "comments": ", , , , , .\nJust a style thing (and something we should try an capture in a QUIC style guide).\nThere a bit of inconsistency through the document. Personally I prefer the QUIC version N style.\nis there a missing word here?\nNope, extra word, thanks\nI find this awkward to read in isolation, it requires converting the RFC 9000 types definition format into a different encoding to realize there is a change. A clearer alternative could be \"All version 2 Long Packet Types are different: Initial: 0b01 0-RTT: 0b10 Handshake 0b11 Retry: 0b00\nStarting with This might be a matter of style but to me the term \"changes\" imply that you're modifying the original, not creating something new. How about we term these as differences instead? The the opening paragraph of Section 3 could be something like \"QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in ], ], and ]. 
However, the following differences apply in version 2.\"\nThis might have been important earlier in the document lifetime but it seems to stick out now.\nThis could be misread to imply the salt is universally known and universally applicable, which I don't think is your intention. In fact, I find the construction of this entire paragraph a bit awkward. Can we rephrase this for better clarity? Something like \"In QUIC version 1, Initial packets are encrypted with the version-specific salt as described in Section 5.2 of [QUIC-TLS]. Protecting Initial packets in this way allows observers to inspect their contents, which includes the TLS Clent Hello or Server Hello messages. Again, there is the potential for middleboxes to ossify on the 1 key derivation and packet formats.\"\nCouple of nits but otherwise LGTM", "new_text": "QUIC QUIC has numerous extension points, including the version number that occupies the second through fifth bytes of every long header (see QUIC-INVARIANTS). If experimental versions are rare, and QUIC version 1 constitutes the vast majority of QUIC traffic, there is the potential for middleboxes to ossify on the version bytes always being 0x00000001. In QUIC version 1, Initial packets are encrypted with the version- specific salt as described in Section 5.2 of QUIC-TLS. Protecting Initial packets in this way allows observers to inspect their contents, which includes the TLS Client Hello or Server Hello messages. Again, there is the potential for middleboxes to ossify on the version 1 key derivation and packet formats. Finally QUIC-VN provides two mechanisms for endpoints to negotiate"}
{"id": "q-en-quic-v2-4633fb110ba71d3edccfa267e43f26923c6a44a78ca2e83876d5b154ea7328b2", "old_text": "to implement version negotiation to protect against downgrade attacks. I-D.duke-quic-version-aliasing is a more robust, but much more complicated, proposal to address these ossification problems. By design, it requires incompatible version negotiation. QUICv2 enables exercise of compatible version negotiation mechanism. 2. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\",", "comments": ", , , , , .\nJust a style thing (and something we should try an capture in a QUIC style guide).\nThere a bit of inconsistency through the document. Personally I prefer the QUIC version N style.\nis there a missing word here?\nNope, extra word, thanks\nI find this awkward to read in isolation, it requires converting the RFC 9000 types definition format into a different encoding to realize there is a change. A clearer alternative could be \"All version 2 Long Packet Types are different: Initial: 0b01 0-RTT: 0b10 Handshake 0b11 Retry: 0b00\nStarting with This might be a matter of style but to me the term \"changes\" imply that you're modifying the original, not creating something new. How about we term these as differences instead? The the opening paragraph of Section 3 could be something like \"QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in ], ], and ]. However, the following differences apply in version 2.\"\nThis might have been important earlier in the document lifetime but it seems to stick out now.\nThis could be misread to imply the salt is universally known and universally applicable, which I don't think is your intention. In fact, I find the construction of this entire paragraph a bit awkward. Can we rephrase this for better clarity? Something like \"In QUIC version 1, Initial packets are encrypted with the version-specific salt as described in Section 5.2 of [QUIC-TLS]. 
Protecting Initial packets in this way allows observers to inspect their contents, which includes the TLS Clent Hello or Server Hello messages. Again, there is the potential for middleboxes to ossify on the 1 key derivation and packet formats.\"\nCouple of nits but otherwise LGTM", "new_text": "to implement version negotiation to protect against downgrade attacks. 2. The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\","}
{"id": "q-en-quic-v2-4633fb110ba71d3edccfa267e43f26923c6a44a78ca2e83876d5b154ea7328b2", "old_text": "3. QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in QUIC, QUIC-TLS, and RFC9002, with the following changes. 3.1. The version field of long headers is 0x709a50c4. 3.2. Initial packets use a packet type field of 0b01. 0-RTT packets use a packet type field of 0b10. Handshake packets use a packet type field of 0b11. Retry packets use a packet type field of 0b00. 3.3.", "comments": ", , , , , .\nJust a style thing (and something we should try an capture in a QUIC style guide).\nThere a bit of inconsistency through the document. Personally I prefer the QUIC version N style.\nis there a missing word here?\nNope, extra word, thanks\nI find this awkward to read in isolation, it requires converting the RFC 9000 types definition format into a different encoding to realize there is a change. A clearer alternative could be \"All version 2 Long Packet Types are different: Initial: 0b01 0-RTT: 0b10 Handshake 0b11 Retry: 0b00\nStarting with This might be a matter of style but to me the term \"changes\" imply that you're modifying the original, not creating something new. How about we term these as differences instead? The the opening paragraph of Section 3 could be something like \"QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in ], ], and ]. However, the following differences apply in version 2.\"\nThis might have been important earlier in the document lifetime but it seems to stick out now.\nThis could be misread to imply the salt is universally known and universally applicable, which I don't think is your intention. In fact, I find the construction of this entire paragraph a bit awkward. Can we rephrase this for better clarity? Something like \"In QUIC version 1, Initial packets are encrypted with the version-specific salt as described in Section 5.2 of [QUIC-TLS]. 
Protecting Initial packets in this way allows observers to inspect their contents, which includes the TLS Clent Hello or Server Hello messages. Again, there is the potential for middleboxes to ossify on the 1 key derivation and packet formats.\"\nCouple of nits but otherwise LGTM", "new_text": "3. QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in QUIC, QUIC-TLS, and RFC9002. However, the following differences apply in version 2. 3.1. The Version field of long headers is 0x709a50c4. 3.2. All version 2 long header packet types are different. The Type field values are: Initial: 0b01 0-RTT: 0b10 Handshake: 0b11 Retry: 0b00 3.3."}
{"id": "q-en-quic-v2-4633fb110ba71d3edccfa267e43f26923c6a44a78ca2e83876d5b154ea7328b2", "old_text": "TLS session tickets and NEW_TOKEN tokens are specific to the QUIC version of the connection that provided them. Clients SHOULD NOT use a session ticket or token from a QUICv1 connection to initiate a QUICv2 connection, or vice versa. Servers MUST validate the originating version of any session ticket or token and not accept one issued from a different version. A", "comments": ", , , , , .\nJust a style thing (and something we should try an capture in a QUIC style guide).\nThere a bit of inconsistency through the document. Personally I prefer the QUIC version N style.\nis there a missing word here?\nNope, extra word, thanks\nI find this awkward to read in isolation, it requires converting the RFC 9000 types definition format into a different encoding to realize there is a change. A clearer alternative could be \"All version 2 Long Packet Types are different: Initial: 0b01 0-RTT: 0b10 Handshake 0b11 Retry: 0b00\nStarting with This might be a matter of style but to me the term \"changes\" imply that you're modifying the original, not creating something new. How about we term these as differences instead? The the opening paragraph of Section 3 could be something like \"QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in ], ], and ]. However, the following differences apply in version 2.\"\nThis might have been important earlier in the document lifetime but it seems to stick out now.\nThis could be misread to imply the salt is universally known and universally applicable, which I don't think is your intention. In fact, I find the construction of this entire paragraph a bit awkward. Can we rephrase this for better clarity? Something like \"In QUIC version 1, Initial packets are encrypted with the version-specific salt as described in Section 5.2 of [QUIC-TLS]. 
Protecting Initial packets in this way allows observers to inspect their contents, which includes the TLS Clent Hello or Server Hello messages. Again, there is the potential for middleboxes to ossify on the 1 key derivation and packet formats.\"\nCouple of nits but otherwise LGTM", "new_text": "TLS session tickets and NEW_TOKEN tokens are specific to the QUIC version of the connection that provided them. Clients SHOULD NOT use a session ticket or token from a QUIC version 1 connection to initiate a QUIC version 2 connection, or vice versa. Servers MUST validate the originating version of any session ticket or token and not accept one issued from a different version. A"}
{"id": "q-en-quic-v2-4633fb110ba71d3edccfa267e43f26923c6a44a78ca2e83876d5b154ea7328b2", "old_text": "6. QUIC version 2 provides protection against some forms of ossification. Devices that assume that all long headers will contain encode version 1, or that the version 1 Initial key derivation formula will remain version-invariant, will not correctly process version 2 packets. However, many middleboxes such as firewalls focus on the first packet in a connection, which will often remain in the version 1 format due", "comments": ", , , , , .\nJust a style thing (and something we should try an capture in a QUIC style guide).\nThere a bit of inconsistency through the document. Personally I prefer the QUIC version N style.\nis there a missing word here?\nNope, extra word, thanks\nI find this awkward to read in isolation, it requires converting the RFC 9000 types definition format into a different encoding to realize there is a change. A clearer alternative could be \"All version 2 Long Packet Types are different: Initial: 0b01 0-RTT: 0b10 Handshake 0b11 Retry: 0b00\nStarting with This might be a matter of style but to me the term \"changes\" imply that you're modifying the original, not creating something new. How about we term these as differences instead? The the opening paragraph of Section 3 could be something like \"QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in ], ], and ]. However, the following differences apply in version 2.\"\nThis might have been important earlier in the document lifetime but it seems to stick out now.\nThis could be misread to imply the salt is universally known and universally applicable, which I don't think is your intention. In fact, I find the construction of this entire paragraph a bit awkward. Can we rephrase this for better clarity? Something like \"In QUIC version 1, Initial packets are encrypted with the version-specific salt as described in Section 5.2 of [QUIC-TLS]. 
Protecting Initial packets in this way allows observers to inspect their contents, which includes the TLS Clent Hello or Server Hello messages. Again, there is the potential for middleboxes to ossify on the 1 key derivation and packet formats.\"\nCouple of nits but otherwise LGTM", "new_text": "6. QUIC version 2 provides protection against some forms of ossification. Devices that assume that all long headers will encode version 1, or that the version 1 Initial key derivation formula will remain version-invariant, will not correctly process version 2 packets. However, many middleboxes such as firewalls focus on the first packet in a connection, which will often remain in the version 1 format due"}
{"id": "q-en-quic-v2-4633fb110ba71d3edccfa267e43f26923c6a44a78ca2e83876d5b154ea7328b2", "old_text": "This version of QUIC provides no change from QUIC version 1 relating to the capabilities available to applications. Therefore, all Application Layer Protocol Negotiation (ALPN) (RFC7301) codepoints specified to operate over QUICv1 can also operate over this version of QUIC. In particular, both the \"h3\" I-D.ietf-quic-http and \"doq\" I-D.ietf-dprive-dnsoquic ALPNs can operate over QUICv2. All QUIC extensions defined to work with version 1 also work with version 2.", "comments": ", , , , , .\nJust a style thing (and something we should try an capture in a QUIC style guide).\nThere a bit of inconsistency through the document. Personally I prefer the QUIC version N style.\nis there a missing word here?\nNope, extra word, thanks\nI find this awkward to read in isolation, it requires converting the RFC 9000 types definition format into a different encoding to realize there is a change. A clearer alternative could be \"All version 2 Long Packet Types are different: Initial: 0b01 0-RTT: 0b10 Handshake 0b11 Retry: 0b00\nStarting with This might be a matter of style but to me the term \"changes\" imply that you're modifying the original, not creating something new. How about we term these as differences instead? The the opening paragraph of Section 3 could be something like \"QUIC version 2 endpoints MUST implement the QUIC version 1 specification as described in ], ], and ]. However, the following differences apply in version 2.\"\nThis might have been important earlier in the document lifetime but it seems to stick out now.\nThis could be misread to imply the salt is universally known and universally applicable, which I don't think is your intention. In fact, I find the construction of this entire paragraph a bit awkward. Can we rephrase this for better clarity? 
Something like \"In QUIC version 1, Initial packets are encrypted with the version-specific salt as described in Section 5.2 of [QUIC-TLS]. Protecting Initial packets in this way allows observers to inspect their contents, which includes the TLS Clent Hello or Server Hello messages. Again, there is the potential for middleboxes to ossify on the 1 key derivation and packet formats.\"\nCouple of nits but otherwise LGTM", "new_text": "This version of QUIC provides no change from QUIC version 1 relating to the capabilities available to applications. Therefore, all Application Layer Protocol Negotiation (ALPN) (RFC7301) codepoints specified to operate over QUIC version 1 can also operate over this version of QUIC. In particular, both the \"h3\" I-D.ietf-quic-http and \"doq\" I-D.ietf-dprive-dnsoquic ALPNs can operate over QUIC version 2. All QUIC extensions defined to work with version 1 also work with version 2."}
{"id": "q-en-quic-v2-2ad73ac24e3f22a47122f28c461f4ac704bed18c963bde3f4648bc7fdd2bcf0f", "old_text": "version of QUIC. In particular, both the \"h3\" I-D.ietf-quic-http and \"doq\" I-D.ietf-dprive-dnsoquic ALPNs can operate over QUIC version 2. All QUIC extensions defined to work with version 1 also work with version 2. 8.", "comments": "Though this would likely be monstrously inadvisable, I am fairly confident that I could construct an interoperable (or at least deployable) extension to QUIC that works in QUICv1 but not QUICv2 or vice versa. It's OK to establish expectations, but some softening of the statement might help.\nThis seems very accurate", "new_text": "version of QUIC. In particular, both the \"h3\" I-D.ietf-quic-http and \"doq\" I-D.ietf-dprive-dnsoquic ALPNs can operate over QUIC version 2. Unless otherwise stated, all QUIC extensions defined to work with version 1 also work with version 2. 8."}
{"id": "q-en-quic-v2-756efdefd2644a83da1168a6540f4e06ac2e669e7df50ddbdc75f2c826c56f53", "old_text": "1. QUIC QUIC has numerous extension points, including the version number that occupies the second through fifth bytes of every long header (see QUIC-INVARIANTS). If experimental versions are rare, and QUIC version 1 constitutes the vast majority of QUIC traffic, there is the potential for middleboxes to ossify on the version bytes always being 0x00000001. In QUIC version 1, Initial packets are encrypted with the version- specific salt as described in QUIC-TLS. Protecting Initial packets", "comments": "here (URL) it is better that we call it QUIC version 1 not just QUIC. As later it is referred as QUIC version 1.here (URL) I would add reference to VN draft instead of calling it \"that document\"here (URL) i would add \"packet\" after \"Initial\" just to be clear, as we have now a initial version in VN draft.\nThanks, fixed here URL", "new_text": "1. QUIC version 1 QUIC has numerous extension points, including the version number that occupies the second through fifth bytes of every long header (see QUIC-INVARIANTS). If experimental versions are rare, and QUIC version 1 constitutes the vast majority of QUIC traffic, there is the potential for middleboxes to ossify on the version bytes always being 0x00000001. In QUIC version 1, Initial packets are encrypted with the version- specific salt as described in QUIC-TLS. Protecting Initial packets"}
{"id": "q-en-quic-v2-756efdefd2644a83da1168a6540f4e06ac2e669e7df50ddbdc75f2c826c56f53", "old_text": "and validate the version_information transport parameter specified in QUIC-VN to prevent version downgrade attacks. Note that version 2 meets that document's definition of a compatible version with version 1, and version 1 is compatible with version 2. Therefore, servers can use compatible negotiation to switch a connection between the two versions. Endpoints that support both", "comments": "here (URL) it is better that we call it QUIC version 1 not just QUIC. As later it is referred as QUIC version 1.here (URL) I would add reference to VN draft instead of calling it \"that document\"here (URL) i would add \"packet\" after \"Initial\" just to be clear, as we have now a initial version in VN draft.\nThanks, fixed here URL", "new_text": "and validate the version_information transport parameter specified in QUIC-VN to prevent version downgrade attacks. Note that version 2 meets the QUIC-VN definition of a compatible version with version 1, and version 1 is compatible with version 2. Therefore, servers can use compatible negotiation to switch a connection between the two versions. Endpoints that support both"}
{"id": "q-en-quic-v2-756efdefd2644a83da1168a6540f4e06ac2e669e7df50ddbdc75f2c826c56f53", "old_text": "If the server sends a Retry packet, it MUST use the original version. The client ignores Retry packets using other versions. The client MUST NOT use a different version in the subsequent Initial that contains the Retry token. The server MAY encode the QUIC version in its Retry token to validate that the client did not switch versions, and drop the packet if it switched. QUIC version 2 uses the same transport parameters to authenticate the Retry as QUIC version 1. After switching to a negotiated version", "comments": "here (URL) it is better that we call it QUIC version 1 not just QUIC. As later it is referred as QUIC version 1.here (URL) I would add reference to VN draft instead of calling it \"that document\"here (URL) i would add \"packet\" after \"Initial\" just to be clear, as we have now a initial version in VN draft.\nThanks, fixed here URL", "new_text": "If the server sends a Retry packet, it MUST use the original version. The client ignores Retry packets using other versions. The client MUST NOT use a different version in the subsequent Initial packet that contains the Retry token. The server MAY encode the QUIC version in its Retry token to validate that the client did not switch versions, and drop the packet if it switched. QUIC version 2 uses the same transport parameters to authenticate the Retry as QUIC version 1. After switching to a negotiated version"}
{"id": "q-en-quic-v2-d5f271f47502418c07fc9c2221ce35f8d25c78bf563460ecafdcf730e61f3cea", "old_text": "3.1. The Version field of long headers is 0x709a50c4. This randomly chosen value is meant to avoid creating patterns in specified version numbers. This version number will not change in subsequent versions of this draft, unless there is a behavior change that breaks compatibility.", "comments": "As requested, all new version number, salts, etc. for QUICv2. Verification of the test vectors would be especially valuable.\nWoof. Thank you very much for quickly checking, and sorry I botched every part of that. All fixed now, I hope.\nThe new test vectors works well with my QUIC implementation in Haskell.\nThanks Kazu!\nWFM. Not validated with a real build, but a script. See URL for the script that I used.", "new_text": "3.1. The Version field of long headers is 0x6b3343cf. This was generated by taking the first four bytes of the sha256sum of \"QUICv2 version number\". This version number will not change in subsequent versions of this draft, unless there is a behavior change that breaks compatibility."}
{"id": "q-en-quic-v2-d5f271f47502418c07fc9c2221ce35f8d25c78bf563460ecafdcf730e61f3cea", "old_text": "The salt used to derive Initial keys in QUIC-TLS changes to: 3.3.2. The labels used in QUIC-TLS to derive packet protection keys", "comments": "As requested, all new version number, salts, etc. for QUICv2. Verification of the test vectors would be especially valuable.\nWoof. Thank you very much for quickly checking, and sorry I botched every part of that. All fixed now, I hope.\nThe new test vectors works well with my QUIC implementation in Haskell.\nThanks Kazu!\nWFM. Not validated with a real build, but a script. See URL for the script that I used.", "new_text": "The salt used to derive Initial keys in QUIC-TLS changes to: This is the first 20 bytes of the sha256sum of \"QUICv2 salt\". 3.3.2. The labels used in QUIC-TLS to derive packet protection keys"}
{"id": "q-en-quic-v2-d5f271f47502418c07fc9c2221ce35f8d25c78bf563460ecafdcf730e61f3cea", "old_text": "The key and nonce used for the Retry Integrity Tag (QUIC-TLS) change to: 4. QUIC version 2 is not intended to deprecate version 1. Endpoints", "comments": "As requested, all new version number, salts, etc. for QUICv2. Verification of the test vectors would be especially valuable.\nWoof. Thank you very much for quickly checking, and sorry I botched every part of that. All fixed now, I hope.\nThe new test vectors works well with my QUIC implementation in Haskell.\nThanks Kazu!\nWFM. Not validated with a real build, but a script. See URL for the script that I used.", "new_text": "The key and nonce used for the Retry Integrity Tag (QUIC-TLS) change to: The secret is the sha256sum of \"QUICv2 retry secret\". The key and nonce are derived from this secret with the labels \"quicv2 key\" and \"quicv2 iv\", respectively. 4. QUIC version 2 is not intended to deprecate version 1. Endpoints"}
{"id": "q-en-quicwg-base-drafts-7e44aed3927f10f7a5a79e04a1d040efb9e7050903e4ee0590205316aa2fddfe", "old_text": "2.1.2. An encoder MUST NOT insert an entry into the dynamic table (or duplicate an existing entry) if doing so would evict an entry with unacknowledged references. For header blocks that might rely on the newly added entry, the encoder can use a literal representation. To ensure that the encoder is not prevented from adding new entries, the encoder can avoid referencing entries that are close to eviction.", "comments": "A dynamic table entry that has not been acknowledged by the decoder must not be evicted by the encoder, even if there has never been any references to it. This is already alluded to in the current spec at URL by the phrase \"acknowledged by the decoder\", but since \"acknowledged\" is used interchangeably for acknowledging the insertion instruction and acknowledging references, a reader could understand this to mean \"all references acknowledged\". In addition, at three other places eviction of entries with unacknowledged references are banned, with unambiguously no ban on eviction of unacknowledged entries with no references. I argue that more explicit language would be beneficial. This issue has already been discussed at URL and below. Note that avoiding this kind of corruption could also be achieved by banning an encoder from inserting entries that it does not immediately reference, but this is probably way too restrictive. For example, an encoder might wish to achieve best latency and speculatively improve future compression ratio at the expense of potentially wasting current bandwidth by encoding header fields as literals while inserting them at the same time for potential future use. This is already alluded to in the spec at multiple places. Below is a specific scenario when this issue might cause corruption: Suppose MaxEntries is 5. 
Suppose that there is a fresh connection, with an empty dynamic table, and there is a header list to be sent that has two header fields which fit in the dynamic table together. Suppose one encoder implementation inserts these two header fields into the dynamic table, and encodes the header list using two Indexed Header Field instructions, referring to absolute indices 1 and 2. This header block will have an abstract Largest Reference value of 2, which is encoded as 3 on the wire. A spec compliant decoder should have no difficulty correctly decoding this header block. Suppose another encoder implementation starts by adding 10 unique entries, all different from the two header fields in the header list to be sent, to the dynamic table, but does not emit references to any of them. Of course in the end only the last couple will stay in the dynamic table, the rest will be evicted. This can be considered spec compliant depending on how one interprets the spec, see above. Suppose that after this, this encoder implementation encodes our header list by inserting both header fields into the dynamic table, with absolute indices 11 and 12, and using two Indexed Header Field instructions on the request stream. The abstract Largest Reference value is 12, which is encoded as 3 on the wire since 2 * MaxEntries == 10. Now suppose that out of the 12 instructions on the encoder stream, only the first two arrive before all the request stream data are received. The remaining 10 instructions on the encoder stream are delayed on the wire. Now a spec compliant decoder sees the same header block on the request stream, bit by bit, as above, and two insertions on the encoder stream. It will emit dynamic table entries 1 and 2, which are different from the encoded header list.\nYou should really create a new issue for these things. Because this comment relates only to the issue, not the proposed fix. 
I haven't read the proposed changes, though I will if you think that there is a fix in there that is worth keeping after this. Anyhow, the example is invalid here: Because: And you can only ever fit 5 entries in that table.\nThat's exactly the issue: \"it has first been acknowledged by the decoder\" is ambiguous. \"Acknowledged\" is being used throughout the text with two different meanings, most often to refer to references, not insertions. The ban on evicting an entry the insertion of which has not been acknowledged, that you are quoting, only occurs once out of four different locations talking about when an entry can be evicted. The other three places clearly only ban evicting an entry with unacknowledged references (note the use of the same word), giving the impression that acknowledgement of the insertion is not necessary before eviction. Since the design is correct if you read it right, but wording is not as clear as I think it should be, I consider this issue editorial. The change I propose in this PR is meant to be an improvement on clarity.\nOK, having read the changes, I agree with them. They consolidate the fix by creating two conditions that cause an entry to be \"blocking\".\nRebased on . Please let me know if anybody can come up with a term better than \"blocking\" which is confusing given that streams can be \"blocked\".\nNAME Thank you for your feedback.\nThanks for this. I don't have a better name for blocking entries right now, but I will keep thinking on it as I agree it could confuse people even more.\nThis looks reasonable.", "new_text": "2.1.2. A dynamic table entry is considered blocking and cannot be evicted until its insertion has been acknowledged and there are no outstanding unacknowledged references to the entry. In particular, a dynamic table entry that has never been referenced can still be blocking. 
A blocking entry is unrelated to a blocked stream, which is a stream that a decoder cannot decode as a result of references to entries that are not yet available. Any encoder that uses the dynamic table has to keep track of blocked entries, whereas blocked streams are optional. An encoder MUST NOT insert an entry into the dynamic table (or duplicate an existing entry) if doing so would evict a blocking entry. In this case, the encoder can send literal representations of header fields. To ensure that the encoder is not prevented from adding new entries, the encoder can avoid referencing entries that are close to eviction."}
{"id": "q-en-quicwg-base-drafts-7e44aed3927f10f7a5a79e04a1d040efb9e7050903e4ee0590205316aa2fddfe", "old_text": "Determining which entries are too close to eviction to reference is an encoder preference. One heuristic is to target a fixed amount of available space in the dynamic table: either unused space or space that can be reclaimed by evicting unreferenced entries. To achieve this, the encoder can maintain a draining index, which is the smallest absolute index in the dynamic table that it will emit a reference for. As new entries are inserted, the encoder increases", "comments": "A dynamic table entry that has not been acknowledged by the decoder must not be evicted by the encoder, even if there has never been any references to it. This is already alluded to in the current spec at URL by the phrase \"acknowledged by the decoder\", but since \"acknowledged\" is used interchangeably for acknowledging the insertion instruction and acknowledging references, a reader could understand this to mean \"all references acknowledged\". In addition, at three other places eviction of entries with unacknowledged references are banned, with unambiguously no ban on eviction of unacknowledged entries with no references. I argue that more explicit language would be beneficial. This issue has already been discussed at URL and below. Note that avoiding this kind of corruption could also be achieved by banning an encoder from inserting entries that it does not immediately reference, but this is probably way too restrictive. For example, an encoder might wish to achieve best latency and speculatively improve future compression ratio at the expense of potentially wasting current bandwidth by encoding header fields as literals while inserting them at the same time for potential future use. This is already alluded to in the spec at multiple places. Below is a specific scenario when this issue might cause corruption: Suppose MaxEntries is 5. 
Suppose that there is a fresh connection, with an empty dynamic table, and there is a header list to be sent that has two header fields which fit in the dynamic table together. Suppose one encoder implementation inserts these two header fields into the dynamic table, and encodes the header list using two Indexed Header Field instructions, referring to absolute indices 1 and 2. This header block will have an abstract Largest Reference value of 2, which is encoded as 3 on the wire. A spec compliant decoder should have no difficulty correctly decoding this header block. Suppose another encoder implementation starts by adding 10 unique entries, all different from the two header fields in the header list to be sent, to the dynamic table, but does not emit references to any of them. Of course in the end only the last couple will stay in the dynamic table, the rest will be evicted. This can be considered spec compliant depending on how one interprets the spec, see above. Suppose that after this, this encoder implementation encodes our header list by inserting both header fields into the dynamic table, with absolute indices 11 and 12, and using two Indexed Header Field instructions on the request stream. The abstract Largest Reference value is 12, which is encoded as 3 on the wire since 2 * MaxEntries == 10. Now suppose that out of the 12 instructions on the encoder stream, only the first two arrive before all the request stream data are received. The remaining 10 instructions on the encoder stream are delayed on the wire. Now a spec compliant decoder sees the same header block on the request stream, bit by bit, as above, and two insertions on the encoder stream. It will emit dynamic table entries 1 and 2, which are different from the encoded header list.\nYou should really create a new issue for these things. Because this comment relates only to the issue, not the proposed fix. 
I haven't read the proposed changes, though I will if you think that there is a fix in there that is worth keeping after this. Anyhow, the example is invalid here: Because: And you can only ever fit 5 entries in that table.\nThat's exactly the issue: \"it has first been acknowledged by the decoder\" is ambiguous. \"Acknowledged\" is being used throughout the text with two different meanings, most often to refer to references, not insertions. The ban on evicting an entry the insertion of which has not been acknowledged, that you are quoting, only occurs once out of four different locations talking about when an entry can be evicted. The other three places clearly only ban evicting an entry with unacknowledged references (note the use of the same word), giving the impression that acknowledgement of the insertion is not necessary before eviction. Since the design is correct if you read it right, but wording is not as clear as I think it should be, I consider this issue editorial. The change I propose in this PR is meant to be an improvement on clarity.\nOK, having read the changes, I agree with them. They consolidate the fix by creating two conditions that cause an entry to be \"blocking\".\nRebased on . Please let me know if anybody can come up with a term better than \"blocking\" which is confusing given that streams can be \"blocked\".\nNAME Thank you for your feedback.\nThanks for this. I don't have a better name for blocking entries right now, but I will keep thinking on it as I agree it could confuse people even more.\nThis looks reasonable.", "new_text": "Determining which entries are too close to eviction to reference is an encoder preference. One heuristic is to target a fixed amount of available space in the dynamic table: either unused space or space that can be reclaimed by evicting non-blocking entries. To achieve this, the encoder can maintain a draining index, which is the smallest absolute index in the dynamic table that it will emit a reference for. 
As new entries are inserted, the encoder increases"}
{"id": "q-en-quicwg-base-drafts-7e44aed3927f10f7a5a79e04a1d040efb9e7050903e4ee0590205316aa2fddfe", "old_text": "Knowledge that a header block with references to the dynamic table has been processed permits the encoder to evict entries to which no unacknowledged references remain, regardless of whether those references were potentially blocking (see blocked-insertion). When a stream is reset or abandoned, the indication that these header blocks will never be processed serves a similar function; see stream- cancellation. The decoder chooses when to emit Insert Count Increment instructions (see insert-count-increment). Emitting an instruction after adding", "comments": "A dynamic table entry that has not been acknowledged by the decoder must not be evicted by the encoder, even if there has never been any references to it. This is already alluded to in the current spec at URL by the phrase \"acknowledged by the decoder\", but since \"acknowledged\" is used interchangeably for acknowledging the insertion instruction and acknowledging references, a reader could understand this to mean \"all references acknowledged\". In addition, at three other places eviction of entries with unacknowledged references are banned, with unambiguously no ban on eviction of unacknowledged entries with no references. I argue that more explicit language would be beneficial. This issue has already been discussed at URL and below. Note that avoiding this kind of corruption could also be achieved by banning an encoder from inserting entries that it does not immediately reference, but this is probably way too restrictive. For example, an encoder might wish to achieve best latency and speculatively improve future compression ratio at the expense of potentially wasting current bandwidth by encoding header fields as literals while inserting them at the same time for potential future use. This is already alluded to in the spec at multiple places. 
Below is a specific scenario when this issue might cause corruption: Suppose MaxEntries is 5. Suppose that there is a fresh connection, with an empty dynamic table, and there is a header list to be sent that has two header fields which fit in the dynamic table together. Suppose one encoder implementation inserts these two header fields into the dynamic table, and encodes the header list using two Indexed Header Field instructions, referring to absolute indices 1 and 2. This header block will have an abstract Largest Reference value of 2, which is encoded as 3 on the wire. A spec compliant decoder should have no difficulty correctly decoding this header block. Suppose another encoder implementation starts by adding 10 unique entries, all different from the two header fields in the header list to be sent, to the dynamic table, but does not emit references to any of them. Of course in the end only the last couple will stay in the dynamic table, the rest will be evicted. This can be considered spec compliant depending on how one interprets the spec, see above. Suppose that after this, this encoder implementation encodes our header list by inserting both header fields into the dynamic table, with absolute indices 11 and 12, and using two Indexed Header Field instructions on the request stream. The abstract Largest Reference value is 12, which is encoded as 3 on the wire since 2 * MaxEntries == 10. Now suppose that out of the 12 instructions on the encoder stream, only the first two arrive before all the request stream data are received. The remaining 10 instructions on the encoder stream are delayed on the wire. Now a spec compliant decoder sees the same header block on the request stream, bit by bit, as above, and two insertions on the encoder stream. It will emit dynamic table entries 1 and 2, which are different from the encoded header list.\nYou should really create a new issue for these things. Because this comment relates only to the issue, not the proposed fix. 
I haven't read the proposed changes, though I will if you think that there is a fix in there that is worth keeping after this. Anyhow, the example is invalid here: Because: And you can only ever fit 5 entries in that table.\nThat's exactly the issue: \"it has first been acknowledged by the decoder\" is ambiguous. \"Acknowledged\" is being used throughout the text with two different meanings, most often to refer to references, not insertions. The ban on evicting an entry the insertion of which has not been acknowledged, that you are quoting, only occurs once out of four different locations talking about when an entry can be evicted. The other three places clearly only ban evicting an entry with unacknowledged references (note the use of the same word), giving the impression that acknowledgement of the insertion is not necessary before eviction. Since the design is correct if you read it right, but wording is not as clear as I think it should be, I consider this issue editorial. The change I propose in this PR is meant to be an improvement on clarity.\nOK, having read the changes, I agree with them. They consolidate the fix by creating two conditions that cause an entry to be \"blocking\".\nRebased on . Please let me know if anybody can come up with a term better than \"blocking\" which is confusing given that streams can be \"blocked\".\nNAME Thank you for your feedback.\nThanks for this. I don't have a better name for blocking entries right now, but I will keep thinking on it as I agree it could confuse people even more.\nThis looks reasonable.", "new_text": "Knowledge that a header block with references to the dynamic table has been processed permits the encoder to evict entries to which no unacknowledged references remain (see blocked-insertion). When a stream is reset or abandoned, the indication that these header blocks will never be processed serves a similar function (see stream- cancellation). 
The decoder chooses when to emit Insert Count Increment instructions (see insert-count-increment). Emitting an instruction after adding"}
{"id": "q-en-quicwg-base-drafts-7e44aed3927f10f7a5a79e04a1d040efb9e7050903e4ee0590205316aa2fddfe", "old_text": "Before a new entry is added to the dynamic table, entries are evicted from the end of the dynamic table until the size of the dynamic table is less than or equal to (table capacity - size of new entry) or until the table is empty. The encoder MUST NOT evict a dynamic table entry unless it has first been acknowledged by the decoder. If the size of the new entry is less than or equal to the dynamic table capacity, then that entry is added to the table. It is an", "comments": "A dynamic table entry that has not been acknowledged by the decoder must not be evicted by the encoder, even if there has never been any references to it. This is already alluded to in the current spec at URL by the phrase \"acknowledged by the decoder\", but since \"acknowledged\" is used interchangeably for acknowledging the insertion instruction and acknowledging references, a reader could understand this to mean \"all references acknowledged\". In addition, at three other places eviction of entries with unacknowledged references are banned, with unambiguously no ban on eviction of unacknowledged entries with no references. I argue that more explicit language would be beneficial. This issue has already been discussed at URL and below. Note that avoiding this kind of corruption could also be achieved by banning an encoder from inserting entries that it does not immediately reference, but this is probably way too restrictive. For example, an encoder might wish to achieve best latency and speculatively improve future compression ratio at the expense of potentially wasting current bandwidth by encoding header fields as literals while inserting them at the same time for potential future use. This is already alluded to in the spec at multiple places. Below is a specific scenario when this issue might cause corruption: Suppose MaxEntries is 5. 
Suppose that there is a fresh connection, with an empty dynamic table, and there is a header list to be sent that has two header fields which fit in the dynamic table together. Suppose one encoder implementation inserts these two header fields into the dynamic table, and encodes the header list using two Indexed Header Field instructions, referring to absolute indices 1 and 2. This header block will have an abstract Largest Reference value of 2, which is encoded as 3 on the wire. A spec compliant decoder should have no difficulty correctly decoding this header block. Suppose another encoder implementation starts by adding 10 unique entries, all different from the two header fields in the header list to be sent, to the dynamic table, but does not emit references to any of them. Of course in the end only the last couple will stay in the dynamic table, the rest will be evicted. This can be considered spec compliant depending on how one interprets the spec, see above. Suppose that after this, this encoder implementation encodes our header list by inserting both header fields into the dynamic table, with absolute indices 11 and 12, and using two Indexed Header Field instructions on the request stream. The abstract Largest Reference value is 12, which is encoded as 3 on the wire since 2 * MaxEntries == 10. Now suppose that out of the 12 instructions on the encoder stream, only the first two arrive before all the request stream data are received. The remaining 10 instructions on the encoder stream are delayed on the wire. Now a spec compliant decoder sees the same header block on the request stream, bit by bit, as above, and two insertions on the encoder stream. It will emit dynamic table entries 1 and 2, which are different from the encoded header list.\nYou should really create a new issue for these things. Because this comment relates only to the issue, not the proposed fix. 
I haven't read the proposed changes, though I will if you think that there is a fix in there that is worth keeping after this. Anyhow, the example is invalid here: Because: And you can only ever fit 5 entries in that table.\nThat's exactly the issue: \"it has first been acknowledged by the decoder\" is ambiguous. \"Acknowledged\" is being used throughout the text with two different meanings, most often to refer to references, not insertions. The ban on evicting an entry the insertion of which has not been acknowledged, that you are quoting, only occurs once out of four different locations talking about when an entry can be evicted. The other three places clearly only ban evicting an entry with unacknowledged references (note the use of the same word), giving the impression that acknowledgement of the insertion is not necessary before eviction. Since the design is correct if you read it right, but wording is not as clear as I think it should be, I consider this issue editorial. The change I propose in this PR is meant to be an improvement on clarity.\nOK, having read the changes, I agree with them. They consolidate the fix by creating two conditions that cause an entry to be \"blocking\".\nRebased on . Please let me know if anybody can come up with a term better than \"blocking\" which is confusing given that streams can be \"blocked\".\nNAME Thank you for your feedback.\nThanks for this. I don't have a better name for blocking entries right now, but I will keep thinking on it as I agree it could confuse people even more.\nThis looks reasonable.", "new_text": "Before a new entry is added to the dynamic table, entries are evicted from the end of the dynamic table until the size of the dynamic table is less than or equal to (table capacity - size of new entry) or until the table is empty. The encoder MUST NOT evict a blocking dynamic table entry (see blocked-insertion). 
If the size of the new entry is less than or equal to the dynamic table capacity, then that entry is added to the table. It is an"}
{"id": "q-en-quicwg-base-drafts-7e44aed3927f10f7a5a79e04a1d040efb9e7050903e4ee0590205316aa2fddfe", "old_text": "connection error of type \"HTTP_QPACK_ENCODER_STREAM_ERROR\". Reducing the dynamic table capacity can cause entries to be evicted (see eviction). This MUST NOT cause the eviction of entries with outstanding references (see reference-tracking). Changing the capacity of the dynamic table is not acknowledged as this instruction does not insert an entry. 4.4.", "comments": "A dynamic table entry that has not been acknowledged by the decoder must not be evicted by the encoder, even if there has never been any references to it. This is already alluded to in the current spec at URL by the phrase \"acknowledged by the decoder\", but since \"acknowledged\" is used interchangeably for acknowledging the insertion instruction and acknowledging references, a reader could understand this to mean \"all references acknowledged\". In addition, at three other places eviction of entries with unacknowledged references are banned, with unambigously no ban on eviction of unacknowledged entries with no references. I argue that more explicit language would be beneficial. This issue has already been discussed at URL and below. Note that avoiding this kind of corruption could also be achieved by banning an encoder from inserting entries that it does not immediately reference, but this is probably way too restrictive. For example, an encoder might wish to achieve best latency and speculatively improve future compression ratio at the expense of potentially wasting current bandwidth by encoding header fields as literals while inserting them at the same time for potiential future use. This is already alluded to in the spec at multiple places. Below is a specific scenario when this issue might cause corruption: Suppose MaxEntries is 5. 
Suppose that there is a fresh connection, with an empty dynamic table, and there is a header list to be sent that has two header fields which fit in the dynamic table together. Suppose one encoder implementation inserts these two header fields into the dynamic table, and encodes the header list using two Indexed Header Field instructions, referring to absolute indices 1 and 2. This header block will have an abstract Largest Reference value of 2, which is encoded as 3 on the wire. A spec compliant decoder should have no difficultly correctly decoding this header block. Suppose another encoder implementation starts by adding 10 unique entries, all different from the two header fields in the header list to be sent, to the dynamic table, but does not emit references to any of them. Of course in the end only the last couple will stay in the dynamic table, the rest will be evicted. This can be considered spec compliant depending on how one interprets the spec, see above. Suppose that after this, this encoder implementation encodes our header list by inserting both header fields into the dynamic table, with absolute indices 11 and 12, and using two Indexed Header Field instructions on the request stream. The abstract Largest Reference value is 12, which is encoded as 3 on the wire since 2 * MaxEntries == 10. Now suppose that out of the 12 instructions on the encoder stream, only the first two arrive before all the request stream data are received. The remaining 10 instructions on the encoder stream are delayed on the wire. Now a spec compliant decoder sees the same header block on the request stream, bit by bit, as above, and two insertions on the encoder stream. It will emit dynamic table entires 1 and 2, which are different from the encoded header list.\nYou should really create a new issue for these things. Because this comment relates only to the issue, not the proposed fix. 
I haven't read the proposed changes, though I will if you think that there is a fix in there that is worth keeping after this. Anyhow, the example is invalid here: Because: And you can only ever fit 5 entries in that table.\nThat's exactly the issue: \"it has first been acknowledged by the decoder\" is ambiguous. \"Acknowledged\" is being used throughout the text with two different meanings, most often to refer to references, not insertions. The ban on evicting an entry the insertion of which has not been acknowledged, that you are quoting, only occurs once out of four different locations talking about when an entry can be evicted. The other three places clearly only ban evicting an entry with unacknowledged references (note the use of the same word), giving the impression that acknowledgement of the insertion is not necessary before eviction. Since the design is correct if you read it right, but the wording is not as clear as I think it should be, I consider this issue editorial. The change I propose in this PR is meant to be an improvement on clarity.\nOK, having read the changes, I agree with them. They consolidate the fix by creating two conditions that cause an entry to be \"blocking\".\nRebased on . Please let me know if anybody can come up with a term better than \"blocking\" which is confusing given that streams can be \"blocked\".\nNAME Thank you for your feedback.\nThanks for this. I don't have a better name for blocking entries right now, but I will keep thinking on it as I agree it could confuse people even more.\nThis looks reasonable.", "new_text": "connection error of type \"HTTP_QPACK_ENCODER_STREAM_ERROR\". Reducing the dynamic table capacity can cause entries to be evicted (see eviction). This MUST NOT cause the eviction of blocking entries (see blocked-insertion). Changing the capacity of the dynamic table is not acknowledged as this instruction does not insert an entry. 4.4."}
{"id": "q-en-quicwg-base-drafts-ab74dcae1c0cc04e8d1f2625abf403c70d74e01db1c6fadc8aaace9bca92f56c", "old_text": "4.2.4. The CANCEL_PUSH frame (type=0x3) is used to request cancellation of a server push prior to the push stream being created. The CANCEL_PUSH frame identifies a server push by Push ID (see frame-push-promise), encoded as a variable-length integer.", "comments": "Not sure if this is enough, but it enacts Brad's suggested fix.\nYup, that's what I had in mind.\nWhy is this limited to being prior to the push stream being created, given that you specify how the server handles it if it has created the stream.\nI don't read this as saying that you have to avoid it. The point is that you also have to reset/STOPSENDING on the stream if it exists, so there is no real point to using CANCELPUSH in addition.\nSounds like the intent is that CANCELPUSH should be used between the client receiving a PUSHPROMISE and before receiving the stream (i.e. when it is ambiguous as to whether the stream has been created by the server from the clients perspective). I think replacing \"created\" with \"received\" would make this more clear.", "new_text": "4.2.4. The CANCEL_PUSH frame (type=0x3) is used to request cancellation of a server push prior to the push stream being received. The CANCEL_PUSH frame identifies a server push by Push ID (see frame-push-promise), encoded as a variable-length integer."}
{"id": "q-en-quicwg-base-drafts-72fb8f58807d57ff4aa3db7731e78827100656af3e2ad2dd48e5326445ea1236", "old_text": "acknowledgement is received that establishes loss or delivery of packets. When an ACK frame is received that establishes loss after a threshold number of consecutive PTOs (pto_count is more than kPersistentCongestionThreshold, see cc-consts-of-interest), the network is considered to be experiencing persistent congestion, and the sender's congestion window MUST be reduced to the minimum congestion window (kMinimumWindow). This response of collapsing the congestion window on persistent congestion is functionally similar to a sender's response on a Retransmission Timeout (RTO) in TCP RFC5681. 7.6.", "comments": "Follow-up from\nConsider the following sequence of packets being sent In our old loss recovery code (before unifying TLP and RTO, ), we used the following logic for determining if an RTO was not spurious, which would then lead to collapse of the congestion window: In the example above would be packet 14. This means that to count as a verified RTO, an ACK must acknowledge 15 or 16, but not any packets smaller than or equal to 14. With , we now collapse the congestion window when is larger than 2 and we receive an ACK which leads us to declare any packet lost. In the example above this means any ACK received after sending packet 15 which declares a packet lost would suffice. Even worse, this logic already triggers if we receive the ACK immediately after sending 15, i.e. when the peer didn't even have a chance to receive it. This seems a lot more aggressive than what we had before. Is this intentional?\nThis is unintentional. I believe Jana is intending to fix this in PR , so please take a look at that.\nIf I understand correctly, is a fix for and only makes the description of the constant and the pseudo code consistent.\nI believe the text is now correct, but the pseudocode is still incorrect in the way you describe. 
I think we should introduce a largestsentbefore_congestion variable as a replacement, but I'm open to other solutions.\nI'm removing design. I consider this a bug fix, but still very important.\nSee which I believe fixes the text.\nI've been using to mark things that will result in changes that affect implementations. This would seem to fit that description. Whether you consider it worth discussing is different. If it isn't worth discussing, maybe it will just be fixed by and everything will be fine without discussion in Tokyo. WIthout the label, this won't make the change log, and that might be a problem.\nFair point, I think fixes the text, but I'll re-label this as design to ensure it's included in the changelogs.\nThe spurious RTO detection calls an RTO spurious if any unacked packets sent prior to the RTO are acked after the RTO. The implication here is that any packet that was received but not acked can cause the RTO to be labeled spurious, even if there were more packets that were in fact lost. This is not always right. Consider the following example: Packets 5, 6, 7 are sent Packet 5 is received, but 6 and 7 are dropped Ack of 5 is dropped TLPs (packets 8 and 9) are sent and dropped RTO (packet 10) is sent and received ACK acks 10 and 5 Since 5 was sent prior to RTO, RTO is marked spurious. A crisp definition of spurious RTO may be useful to continue. Here's what I think is sensible: A spurious RTO is an unnecessary RTO. If the RTO timer had not gone off, things would have been fine. Based on this definition, clearly the RTO was necessary in the example, and therefore not spurious. We need to change the mechanism for detecting spurious RTOs, and I'll propose the following. An RTO is spurious if an ack is received after the RTO fires that acks only packets sent prior to the RTO. 
In other words, RTO is spurious if ack is received after the RTO, and (instead of , as the draft currently states).\nThis was recently changed because Lars ran into an issue() where the RTO'd packet was acknowledged for the first time in the same ACK as the pre-RTO packets. In that case, I think it was due to an overly aggressive MinRTO and some pretty coarse grained timers, but I think we want to declare that RTO spurious, since everything would have been fine if it hadn't been sent?\nNAME : does not change the semantics substantively of what was in the draft. The intent was always to declare spurious RTO if (packet prior to RTO was newly acked). I'm proposing that we change that to (no packet sent after RTO was newly acked).\nIn light of , I think this is OBE?\nYes, this is OBE. Closing.", "new_text": "acknowledgement is received that establishes loss or delivery of packets. When an ACK frame is received that establishes loss of all in-flight packets sent prior to a threshold number of consecutive PTOs (pto_count is more than kPersistentCongestionThreshold, see cc- consts-of-interest), the network is considered to be experiencing persistent congestion, and the sender's congestion window MUST be reduced to the minimum congestion window (kMinimumWindow). This response of collapsing the congestion window on persistent congestion is functionally similar to a sender's response on a Retransmission Timeout (RTO) in TCP RFC5681. 7.6."}
{"id": "q-en-quicwg-base-drafts-a555c536b8bfcfbcb382ba3791ea40eb1b9b801f60f3fbe662033260a76e24b9", "old_text": "Consequently, a Version Negotiation packet consumes an entire UDP datagram. See version-negotiation for a description of the version negotiation process.", "comments": "This keeps things simple.\nI have no idea how that happened. Forced it.\nAre we ok with a server sending both a VN packet and a Retry in response to a single UDP datagram?\nNAME Does that happen? IIUC VN is sent only when there's a version mismatch. Retry is sent only when there's no version mismatch.\nNAME A client could coalesce two packets of different versions. As I've said on the issue, I still feel uneasy with the fact that a client could coalesce around 70 packets into a single datagram, and I'd prefer more rigid rules to solve this problem.\nNAME I'd argue that a client can't do that, because the draft states \"Senders MUST NOT coalesce QUIC packets for different connections into a single UDP datagram\", OTOH I agree that there's nothing that requires the server to drop following QUIC packets with different versions within a combined datagram. As I stated on URL, I (still) think the approach in this PR would require having minimal state. Would you mind elaborating (possibly on the issue) why you think differently?\nSure, the issue here comes from multi-threaded processing, and reusing packet buffers. I have one thread (or rather, one go-routine) that reads UDP datagrams from the network, splits the coalesced packets and parses the unencrypted part of the header, and then passes the QUIC packets to their respective connections (which are running in a separate go-routine each). In order to reduce allocations, I reuse the buffers I read my packets into. When handling coalesced packets, this means I have to wait until all coalesced parts have been processed before reusing the buffer. 
Therefore, I can't just parse the first coalesced packet, pass it on for handling in another thread, and then continue parsing. This would create a race condition, since the handling of the packet might be executed before the parsing is finished, and the packet buffer would be returned already. So I have to split the coalesced packets first, and then handle them. This wouldn't be a problem at all if a coalesced packet just had a few parts, as it's supposed to be. But since we're maximizing flexibility for no reason, an attacker might send me a datagram consisting of around 70 packets, which I would all have to completely parse.\nNAME Thank you for elaborating. I think that we should be dispatching each UDP datagram (instead of each QUIC packet) to the respective connections. That's how load balancers are expected to work, and what you are implementing is essentially a load balancer. The benefit of dispatching at the datagram level is that you'd have fewer things to do on the receiving thread. I understand your point on flexibility. However, in my view, the flexibility is not only how you coalesce, but the redundancy in all the long header fields. We have deliberately chosen the design because it is clear and easy to analyze (especially in regard to encryption). My preference goes to retaining the clarity by avoiding introducing restrictions.\nI think this PR addresses the amplification problem directly, by tying the number of response datagrams to the number of incoming datagrams, so I am in favor of this PR as a spec solution. NAME : quic-go can still choose to drop datagrams with a large (> 3?) number of coalesced packets, which might help you (slightly). I agree with NAME also that since all coalesced packets are supposed to go to the same connection, the implementation would be better aligned if it were to pass the datagram along to the connection instead of the individual packets... 
but that's an impl choice, and it's certainly not the only way to do this.\nI'm merging this. Please reopen the issue if you think this ought to be discussed further, especially in Tokyo.", "new_text": "Consequently, a Version Negotiation packet consumes an entire UDP datagram. A server MUST NOT send more than one Version Negotiation packet in response to a single UDP datagram. See version-negotiation for a description of the version negotiation process."}
{"id": "q-en-quicwg-base-drafts-a555c536b8bfcfbcb382ba3791ea40eb1b9b801f60f3fbe662033260a76e24b9", "old_text": "A server MAY send Retry packets in response to Initial and 0-RTT packets. A server can either discard or buffer 0-RTT packets that it receives. A server can send multiple Retry packets as it receives Initial or 0-RTT packets. A client MUST accept and process at most one Retry packet for each connection attempt. After the client has received and processed an", "comments": "This keeps things simple.\nI have no idea how that happened. Forced it.\nAre we ok with a server sending both a VN packet and a Retry in response to a single UDP datagram?\nNAME Does that happen? IIUC VN is sent only when there's a version mismatch. Retry is sent only when there's no version mismatch.\nNAME A client could coalesce two packets of different versions. As I've said on the issue, I still feel uneasy with the fact that a client could coalesce around 70 packets into a single datagram, and I'd prefer more rigid rules to solve this problem.\nNAME I'd argue that a client can't do that, because the draft states \"Senders MUST NOT coalesce QUIC packets for different connections into a single UDP datagram\", OTOH I agree that there's nothing that requires the server to drop following QUIC packets with different versions within a combined datagram. As I stated on URL, I (still) think the approach in this PR would require having minimal state. Would you mind elaborating (possibly on the issue) why you think differently?\nSure, the issue here comes from multi-threaded processing, and reusing packet buffers. I have one thread (or rather, one go-routine) that reads UDP datagrams from the network, splits the coalesced packets and parses the unencrypted part of the header, and then passes the QUIC packets to their respective connections (which are running in a separate go-routine each). In order to reduce allocations, I reuse the buffers I read my packets into. 
When handling coalesced packets, this means I have to wait until all coalesced parts have been processed before reusing the buffer. Therefore, I can't just parse the first coalesced packet, pass it on for handling in another thread, and then continue parsing. This would create a race condition, since the handling of the packet might be executed before the parsing is finished, and the packet buffer would be returned already. So I have to split the coalesced packets first, and then handle them. This wouldn't be a problem at all if a coalesced packet just had a few parts, as it's supposed to be. But since we're maximizing flexibility for no reason, an attacker might send me a datagram consisting of around 70 packets, which I would all have to completely parse.\nNAME Thank you for elaborating. I think that we should be dispatching each UDP datagram (instead of each QUIC packet) to the respective connections. That's how load balancers are expected to work, and what you are implementing is essentially a load balancer. The benefit of dispatching at the datagram level is that you'd have fewer things to do on the receiving thread. I understand your point on flexibility. However, in my view, the flexibility is not only how you coalesce, but the redundancy in all the long header fields. We have deliberately chosen the design because it is clear and easy to analyze (especially in regard to encryption). My preference goes to retaining the clarity by avoiding introducing restrictions.\nI think this PR addresses the amplification problem directly, by tying the number of response datagrams to the number of incoming datagrams, so I am in favor of this PR as a spec solution. NAME : quic-go can still choose to drop datagrams with a large (> 3?) number of coalesced packets, which might help you (slightly). 
I agree with NAME also that since all coalesced packets are supposed to go to the same connection, the implementation would be better aligned if it were to pass the datagram along to the connection instead of the individual packets... but that's an impl choice, and it's certainly not the only way to do this.\nI'm merging this. Please reopen the issue if you think this ought to be discussed further, especially in Tokyo.", "new_text": "A server MAY send Retry packets in response to Initial and 0-RTT packets. A server can either discard or buffer 0-RTT packets that it receives. A server can send multiple Retry packets as it receives Initial or 0-RTT packets. A server MUST NOT send more than one Retry packet in response to a single UDP datagram. A client MUST accept and process at most one Retry packet for each connection attempt. After the client has received and processed an"}
{"id": "q-en-quicwg-base-drafts-897877c5f73741229135d8563a16a7a0dc2f53be4c81904d574433adf650a953", "old_text": "4. As discussed above, frames are carried on QUIC streams and used on control streams, request streams, and push streams. This section describes HTTP framing in QUIC. For a comparison with HTTP/2 frames, see h2-frames. 4.1.", "comments": "So a friendly pointer to detail that could be easily misinterpreted or missed? It seems like a table footnote is required but I'm not sure how to do that in Markdown.\nWhile each frame's description indicates the stream type(s) on which it is allowed, there isn't one unified place to indicate which frames are expected on which streams. QPACK subdivides the instruction space per stream, which has the nice property that you can't put something on the wrong stream. While that's overkill for H3 (too dramatic a departure from H2, multiple IANA registries, etc.), having a table to indicate which frames are permitted on which streams would be helpful.\nI like the table idea for both. Want a PR?\nText always welcome.\nI'll submit two PRs in next day or so.\nMinor nit; otherwise good. Given that we have two frames that fall into this camp, might it be worth doing something like \"(*) = only as the first frame\"?", "new_text": "4. HTTP frames are carried on QUIC streams, as described in stream- mapping. HTTP/3 defines three stream types: control stream, request stream, and push stream. This section describes HTTP/3 frame formats and the streams types on which they are permitted; see stream-frame- mapping for an overiew. A comparison between HTTP/2 and HTTP/3 frames is provided in h2-frames. 4.1."}
{"id": "q-en-quicwg-base-drafts-e00da513e2a64d8c768d76d1ae8374eb0046d1423aa163adac53763adb43ab69", "old_text": "mapping for an overiew. A comparison between HTTP/2 and HTTP/3 frames is provided in h2-frames. 4.1. All frames have the following format:", "comments": "This is my stab at adding the indication discussed in .\nLgtm. A minor improvement might be to highlight in the new sentence that \"specific guidance is provided in the relevant section\" but that might be obvious. I'm happy without anything further.", "new_text": "mapping for an overiew. A comparison between HTTP/2 and HTTP/3 frames is provided in h2-frames. Certain frames can only occur as the first frame of a particular stream type; these are indicated in stream-frame-mapping with a (1). Specific guidance is provided in the relevant section. 4.1. All frames have the following format:"}
{"id": "q-en-quicwg-base-drafts-387ec9d11597a0313cdf25c3ab03ac9d5552d1b4ca050b348b94da9ef239f207", "old_text": "packets arrive. If a peer could timeout within an Probe Timeout (PTO, see Section 6.2.2 of QUIC-RECOVERY), it is advisable to test for liveness before sending any data that cannot be retried safely. 10.3.", "comments": "What data can and cannot be retried safely?\nThat's not something that QUIC knows. HTTP does though. As advice, I don't think that this needs much more expansion, but I'd be happy to take a PR.\nI'm not so sure this is a HTTP-only issue. Unlike TLS 1.3, there is QUIC data being sent under encryption, and that might have side effects.\nLGTM, one nit. (\"using applications\" is fine, it's just a bit awkward.)", "new_text": "packets arrive. If a peer could timeout within an Probe Timeout (PTO, see Section 6.2.2 of QUIC-RECOVERY), it is advisable to test for liveness before sending any data that cannot be retried safely. Note that it is likely that only applications or application protocols will know what information can be retried. 10.3."}
{"id": "q-en-quicwg-base-drafts-ff486ab8f9d3c87f4d77eae4fb1a1680690c566984dedd778070be359ac936c0", "old_text": "can be mounted using spoofed source addresses. In determining this limit, servers only count the size of successfully processed packets. Clients MUST pad UDP datagrams that contain only Initial packets to at least 1200 bytes. Once a client has received an acknowledgment for a Handshake packet it MAY send smaller datagrams. Sending padded datagrams ensures that the server is not overly constrained by the amplification restriction.", "comments": "This change does two things: Clarify that if a UDP datagram contains Initial packets, it must be at least 1200 bytes in size; and Rule out padding the UDP datagram with junk (the \"padding packets\" part) as NAME suggested earlier. Issue is only about (1). Not that I am against (2), but this is just so that we are clear.\nI think that the reinterpretation of the text here has introduced the second problem NAME mentions. This also has the unfortunate consequence of disallowing small acknowledgment datagrams (for HelloRetryRequest in particular). This might need to be more carefully phrased.\nAs NAME said, this is an editorial change, so I'm going to merge. The substantive one is in NAME\nNAME -- you're right, but I think this was unintentional earlier. I think if we want to make a case for allowing junk, that should be discussed separately. The proposed text captures what I believe our intent was.\nstates: This seemingly says that the first UDP datagram that has two coalesced packets, Initial and 0-RTT, can be smaller than 1200 bytes. I don't believe that's the intention.\nAgreed, I believe the \"only\" needs to be removed unless I'm missing some other intent.\nI don't think that is sufficient. 
The intent of this was to allow the client to send Initial packets that acknowledge server Initial packets without padding to 1200.\nGiven PADDING is supposed to count towards bytes in flight, but it's not ack-eliciting, a special case for ACK-only packets makes sense. I'd be inclined to specify ACK-only packets as the exception, opposed to basing it on receiving an ACK of Handshake packets or some other mechanism.\nAs stated in URL, I prefer to stop requiring the use of full-sized packets once the client observes a response from server. Reasons: QUIC stacks would have the logic to change rules based on if the peer's address has been validated due to the 3-packet rule on the server side. It's just about using the same logic for padding on the client side. Implementations that prefer to pad all the Initial packets it sends can do so anyways. Regardless of what we decide, the server side logic would be indifferent; i.e. create new connection context only when observing a full-sized Initial packet.\ntl;dr: we should always pad to 1200 for UDP datagrams that contain Initial packets. The reason we pad to 1200 is to provide a server with plenty of inbound bytes so that it can send outbound bytes prior to path validation. Padding to 1200 has sub-optimal performance in certain cases. In particular, it makes datagrams containing a simple ACK enormous. So the original thought was to try to find a case where we could allow for smaller datagrams to be sent. I think that in both alternative designs we have a problem with the interaction with our DoS defense. And while those problems might only exist in the abstract for now, I think that the best answer is to pad to 1200 always. (I will suggest rewording the text separately.) An ACK-only Initial packet is sent at the tail of the Initial phase. That is the obvious candidate for consideration here. But there are other cases where small Initial packets might be sent. 
The looser requirement NAME suggests is less than ideal for HelloRetryRequest. A client can receive a HelloRetryRequest and be forced to send another ClientHello. We can't assume here that the server is using the HelloRetryRequest to validate the client address (though it certainly can). If the ClientHello message is small, the number of bytes that a server can send is still limited. If the client doesn't pad, the server can still count the original ClientHello toward its budget, but it doesn't get significant additional budget from the client. And that budget doesn't increase when packets are lost. So it could be that the server is stuck with 3x whatever it gets in ACKs, which is not a lot. It's worse if you consider the possibility of the ServerHello being enormous (hello PQ crypto). At this point, the server won't have validated the client address and so is still limited in what it can send. Ideally, we want the packets to be big so that the server gets more budget for sending and isn't limited to sending tiny packets. But if all the client sends is small ACK-only packets (under either proposed amendment, either what NAME or NAME describe), the increased allowance is very small. Based on this, I would suggest that we reluctantly say that all Initial-carrying datagrams need to be 1200 bytes or more. For the case of the ACK at the tail, most of the designs proposed for key discard will result in that packet not being sent. If it is sent, there are plenty of cases where it will be a packet with Initial (ACK), Handshake (ACK + ...Finished) and 1-RTT data, so the requirement is met without wasting bytes at all. But for ACK of big ServerHello messages or HelloRetryRequest, the overhead is regrettably necessary.\nSGTM. I agree this greatly incentivizes good coalescing, but otherwise I don't see any downsides. And simple is good. Even when suggesting the special case for ACK-only packets, it felt like a premature optimization.\nWorks for me. It's a corner case anyways. 
Unless HRR is involved, client will not send an Initial in response to the server's Initial, assuming we decide to continue dropping the Initial keys when the client receives a handshake packet.\nA potential friendly amendment, but then we should proceed. I would say after confirming with chairs, because there was a pretty strong design decision involved here, but technically this is only an editorial change.", "new_text": "can be mounted using spoofed source addresses. In determining this limit, servers only count the size of successfully processed packets. Clients MUST ensure that UDP datagrams containing Initial packets are sized to at least 1200 bytes, adding padding to packets in the datagram as necessary. Once a client has received an acknowledgment for a Handshake packet it MAY send smaller datagrams. Sending padded datagrams ensures that the server is not overly constrained by the amplification restriction."}
{"id": "q-en-quicwg-base-drafts-53340fc7b3ab7bc44572765db78a409ef579706f4264edde20a2e1f38ca7cc76", "old_text": "for responses in request-response. Due to reordering, data on a push stream can arrive before the corresponding PUSH_PROMISE, in which case both the associated client request and the pushed request headers are unknown. Clients which receive a new push stream with an as-yet-unknown Push ID can buffer the stream data in expectation of the matching PUSH_PROMISE. A client can use stream flow control (see section 4.1 of QUIC-TRANSPORT) to limit the amount of data a server", "comments": "As discussed on Slack, currently the text makes no mention of the case where the PUSHPROMISE arrives behind (some of) the data on the push stream itself. However, it does mention this situation for the DUPLICATEPUSH frame. This proposed change recommends buffering pushed data for an unknown PUSHID and using flow control to prevent this from being a problem. I am unsure if more text is needed (e.g., what happens if the PUSHID is never resolved (which would mean a misbehaving server)), but I think the proposed change should suffice. This is also related to issue . Thanks to NAME for his help.\nLGTM. Non-normative language seems to be the best form of guidance here\nthe push id that links a push stream to a push promise is now an unframed varint that comes right after the stream preface. this is right now the only unframed data (except for the unidirectional stream prefaces) in the specs and I am proposing to make that a frame as well for consistency and to simplify implementation and allow code reuse. motivation: the specs already define frames that only carry a single varint (mostly push related) max push id duplicate push cancel push and goaway! here instead we decided to leave it unframed just because we can since it is at the beginning of the stream cost: two extra bytes per push stream the text will need to specify that on a push stream this new frame (PUSH_ID ?) 
MUST be the first one, otherwise it is a protocol error. I am happy to put up a PR, in case we reach consensus\nThe varint type and length of a frame are also unframed. So you are asking to trade one unframed varint for two. Is your assertion that this is unique and difficult code to write, or that the unframed content is somehow difficult to manage?\nThis seems to be going back to some of the early discussion we had on a thread I can't find. I'm not against discussing things again because we have more experience. However, I don't see a compelling reason for the change. My implementation needs to parse unframed data because of the stream header and for QPACK streams. I think this change might make my implementation more complex. Reading multiple varints requires more work: more checks to see if the stream has enough data, more checks to ensure only 1 frame sent at a specific time (this is the same as SETTINGS)\nNAME those are in the frame header -- so they are part of the frame. I am not sure what your point is here then. Here I am assuming that an implementation has generic code to parse frames, and that parsing anything that is not part of a frame requires special handling. While this is implementation specific and there can definitely be counter examples, from an abstraction and consistency point of view I see no reason for doing it the way we are doing it now. (If we discount the unidirectional stream preface) I believe there should be two types of streams: framed (control, request, and push streams) and unframed (QPACK, and potentially extension streams). The push stream right now is a hybrid, that only becomes a framed stream after receiving that first PUSH_ID. If I am recalling correctly when we discussed potentially having DATA frames with zero-length to transition to unframed data, it was also considered a bit unusual. Here I am arguing that we are sort of doing the reverse here where we have an unframed stream that then transitions to framed. 
I think this generates complexity. And really the benefit we get is saving two bytes and one frame type. NAME I do vaguely remember that discussion and not 'wasting' frame types may have been one of the arguments when we made that decision, but that is not a problem now that we have a 62bit space for frame types.\nI don't care about the cost, I care about things like having to deal with PUSH_ID (or whatever) frames in places other than the start of a push stream. And I don't believe that it is difficult to take the special-case code for pulling a varint off a partially-available stream and use that for push ID. We've not had too much trouble with that so far in our implementation.\nit is definitely not difficult, but it is a special case and as any special case forces an increased complexity in the possible codepaths, even if not that big in this case. I am making my argument for the sake of consistency mostly, and for the fact that I believe it would also help to better define error handling. With the current text, if I receive something bogus on a push stream (after the preface) I'll have to keep the stream around (because the push stream may come out of order) waiting for a PUSH_PROMISE with that ID that may never come. So I believe that framing could help error handling as well. I do not have super strong feelings about this proposal but at the same time I'd like us to try and recall what the motivations are for the current text and for making this special since it was a while back and personally I can't recall what they were.\nThis sounds like the case NAME is looking at. I'd argue this is a good reason for the current design, frame processing should only take place once the PUSH_PROMISE is received. But I'd like to understand how framing could help avoid errors.\nI was not trying to solve the same problem as here, but I was mentioning the fact that I believe framing would slightly help in a subset of cases. 
Take a server with a buggy push implementation, rather than a malicious one. Something that doesn't send the PUSH_ID at all. On the client you'll start seeing a flow of data and you can't do anything better than parsing the first few bytes of that as a PUSH_ID, and basically see random IDs. With framing, if a server forgets to send the PUSH_ID, it is clear at the receiver that it is a protocol error. That being said there are many other classes of bugs that may trigger the same behavior even with frames, so this is minor. NAME now, that seems like an implementation dependent claim :) as says (and I like it) a client should buffer any data after receiving the PUSH_ID and before it has a PUSH_PROMISE for it. and that is sufficient. no mention of frames there.\nRight now in draft 19 we have: Parse varint - stream type Parse varint - push ID Parse varint - frame type (HEADERS). The assertion then is, if step 2 gets skipped by the sender, the receiver will parse the HEADERS frame type as a varint expecting it to be the push ID (value 0x1, with no restriction on min or max encoding). It will then parse the HEADERS frame length as a varint expecting it to be the HEADERS frame type (length is unlikely to be 1). Doesn't the client just blow up at that point?\nIt was just an example. And as I said I don't feel strongly about the error handling nit. But if you must, yes if you assume headers is the first frame after the Push ID then yes you would read 0x1 which is conveniently a server initiated bidirectional stream which is not allowed anyway, so you could reject it immediately. But I find this just fortuitous. For example, one could make a counter argument: why are we framing SETTINGS at all given that it is the first message on the control stream? We also defined it quite clearly in this table what to do with it [Certain frames can only occur as the first frame of a particular stream type; these are indicated in Table 1 with a (1).] 
URL Do you think there is zero value in doing that for the PUSH_ID as well? I personally think there is some value. Anyway, I think we are diverting this the wrong route. All I am looking for here is a good explanation for why it is defined like it is now, and I am failing to find it. As far as the implementation goes it makes little difference, and I guess it is more of a consistency problem for me. That naked varint on the wire triggers my OCD :) So instead of going back and forth like this, perhaps NAME can shed some light on the good reasons why it is this way.\n+1 to levelling up, I'll acknowledge my guilt here. But before I move on: If you're at the stage of parsing QUIC STREAM frame payload, you should already know exactly what the stream ID is. Some historic points I remember: In Paris MT presented some ideas for unidirectional streams - URL This relates to issue URL and PR URL Push and control streams were identified then. And we were happy with reserving specific stream IDs for certain things. All non-reserved streams were server push streams. Subsequent data on the stream only makes logical sense to handle if you know what push it relates to, so those bytes go first. Roll forward some, QPACK design requires additional streams so those get reserved too. Around Kista, the topic of unidirectional stream types comes up. Without types, we make uncoordinated unidirectional stream extensions quite a difficult thing. So add more bytes at the start of stream and remove reservations. The SETTINGS frame can contain a set of parameters, indeed it SHOULD contain at least one GREASE setting. A single SETTINGS frame that describes the length of all parameters is quite useful, you know when the peer has finished sending all of them. I don't see as much value for PUSH_PROMISE because you're just framing a single integer that describes its own length already.\noh gosh yeah, I confused PUSH_ID with a stream id. 
scratch that completely\nOne can imagine a scenario where we'd want to place something before the SETTINGS frame on the control stream. On the other hand, it's less likely we'd ever want anything other than the Push ID to begin the push stream.\nDiscussed in London; no.\nSeems like a good editorial improvement.", "new_text": "for responses in request-response. Due to reordering, data on a push stream can arrive before the corresponding PUSH_PROMISE, in which case both the associated client request and the pushed request headers are unknown. Clients that receive a new push stream with an as-yet-unknown Push ID can buffer the stream data in expectation of the matching PUSH_PROMISE. A client can use stream flow control (see section 4.1 of QUIC-TRANSPORT) to limit the amount of data a server"}
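The thread above keeps circling around the cost of pulling a naked varint (the push stream's Push ID) off a partially-available stream. As a rough illustration of what that special-case code amounts to, here is a sketch of RFC 9000-style variable-length integer decoding in Python (the function name and the buffer-and-retry convention are ours, not from any draft):

```python
def decode_varint(buf: bytes):
    """Decode a QUIC variable-length integer (RFC 9000, Section 16).

    Returns (value, bytes_consumed), or None if buf does not yet hold
    enough bytes -- the caller buffers and retries, which is exactly the
    "varint off a partially-available stream" code discussed above.
    """
    if not buf:
        return None
    # The top two bits of the first byte give the encoded length: 1, 2, 4, or 8.
    length = 1 << (buf[0] >> 6)
    if len(buf) < length:
        return None  # need more stream data before we can decode
    value = buf[0] & 0x3F
    for b in buf[1:length]:
        value = (value << 8) | b
    return value, length

# A push stream preface is then just two varints back to back:
# the stream type followed by the Push ID.
```

The same helper serves the unidirectional stream type, the Push ID, and frame type/length fields, which is one reason the "naked varint" is not much extra implementation burden in practice.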
{"id": "q-en-quicwg-base-drafts-5954584f9dd32dd8a638944c14d0d5d22913849f7b5f8ced211a92d9b2e125e5", "old_text": "19.4. An endpoint uses a RESET_STREAM frame (type=0x04) to abruptly terminate a stream. After sending a RESET_STREAM, an endpoint ceases transmission and retransmission of STREAM frames on the identified stream. A receiver", "comments": "Saying it \"terminates a stream\" dangerously makes it sound like it has the same meaning as HTTP/2, which is false.\nThanks for all these little patches Martin, they are a real help.", "new_text": "19.4. An endpoint uses a RESET_STREAM frame (type=0x04) to abruptly terminate the sending part of a stream. After sending a RESET_STREAM, an endpoint ceases transmission and retransmission of STREAM frames on the identified stream. A receiver"}
{"id": "q-en-quicwg-base-drafts-9a11f0e7fb48d8e4ccbee887bd8778bf4f81b3e1bfd7ff9951a7b5d3099c9e2d", "old_text": "The final size is the amount of flow control credit that is consumed by a stream. Assuming that every contiguous byte on the stream was sent once, the final size is the number of bytes sent. More generally, this is one higher than the largest byte offset sent on the stream. For a stream that is reset, the final size is carried explicitly in a RESET_STREAM frame. Otherwise, the final size is the offset plus the", "comments": "The transport draft states the following (Section 4.4): This verbiage, introduced in PR , is ambiguous. Byte offset may refer to the Offset field in the STREAM frame; in this case, the definition above is incorrect. What we want to say is that the final size is one higher than the offset of the byte with the largest offset sent on the stream. If no bytes were sent, the final size is zero.", "new_text": "The final size is the amount of flow control credit that is consumed by a stream. Assuming that every contiguous byte on the stream was sent once, the final size is the number of bytes sent. More generally, this is one higher than the offset of the byte with the largest offset sent on the stream, or zero if no bytes were sent. For a stream that is reset, the final size is carried explicitly in a RESET_STREAM frame. Otherwise, the final size is the offset plus the"}
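The corrected definition of final size in the record above reduces to a one-liner; a hypothetical sketch (the `(offset, length)` representation of sent STREAM data is our own):

```python
def final_size(sent_ranges):
    """Final size of a stream per the corrected text: one higher than the
    offset of the byte with the largest offset sent on the stream, or
    zero if no bytes were sent.

    sent_ranges holds (offset, length) pairs describing STREAM frame
    payloads; empty frames contribute no bytes.
    """
    return max((off + ln for off, ln in sent_ranges if ln > 0), default=0)
```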
{"id": "q-en-quicwg-base-drafts-7ee2eccadd9303ef1fb4e13406dd72e00d0f5c775368e9c4e70654e782e75f19", "old_text": "Endpoints MUST NOT send this extension in a TLS connection that does not use QUIC (such as the use of TLS with TCP defined in TLS13). A fatal unsupported_extension alert MUST be sent if this extension is received when the transport is not QUIC. 8.3.", "comments": "No point in levying requirements on them.\nSection 8.2 says: However, this directly contradicts TLS 1.3 URL, which says: In order to say this, this document would need to Update 8446, and even then it doesn't make sense because the presumptive point of this rule is to detect attempts to negotiate QUIC with non-QUIC TLS endpoints, which don't need to conform to this RFC (and may predate it).\nI don't think QUIC can tell anyone what do with regard to any protocol that is not QUIC. And since QUIC v1 carries its own TLS record wrapping it makes even less sense.\nI agree with the problem statement, however I wonder if this opens a session ticket obtained using TLS13-over-TCP to be used in QUIC. Is that what we want?\nNAME why would this text have an impact on that? Couldn't I negotiate TLS 1.3 over TCP w/o extension and then send the resume over QUIC w/ extension?\nThe point here is to insist on the alert if the extension is understood. This wasn't intended to supersede 8446.\nNAME My bad. I was confused with the earlier versions of the QUIC-TLS draft that had the transport_parameters extension in the session tickets.\nNAME I think you could phrase that, but is it useful, given that we can't rely on it?\nProbably not. I'm OK with culling the text.\nThis might not be possible if a cloud load balancer only forwards QUIC with TLS and terminates TLS otherwise.", "new_text": "Endpoints MUST NOT send this extension in a TLS connection that does not use QUIC (such as the use of TLS with TCP defined in TLS13). 
A fatal unsupported_extension alert MUST be sent by an implementation that supports this extension if the extension is received when the transport is not QUIC. 8.3."}
{"id": "q-en-quicwg-base-drafts-a4716c9243462cd490be3c2a65319b387d12be0a5336c776ad7c290822d1acd8", "old_text": "maximum value for a Push ID that the server can use in a PUSH_PROMISE frame. Consequently, this also limits the number of push streams that the server can initiate in addition to the limit set by the QUIC MAX_STREAM_ID frame. The MAX_PUSH_ID frame is always sent on the control stream. Receipt of a MAX_PUSH_ID frame on any other stream MUST be treated as a", "comments": "Makes two minor fixes in the GOAWAY text, both editorial: Correctly references the MAX_STREAMS frame instead of the MAX_STREAM_ID frame Notes that servers need not (and cannot) send a GOAWAY if the client has used up all the possible stream IDs; the client simply won't be making any more requests.\nThis is a whatever...", "new_text": "maximum value for a Push ID that the server can use in a PUSH_PROMISE frame. Consequently, this also limits the number of push streams that the server can initiate in addition to the limit set by the QUIC MAX_STREAMS frame. The MAX_PUSH_ID frame is always sent on the control stream. Receipt of a MAX_PUSH_ID frame on any other stream MUST be treated as a"}
{"id": "q-en-quicwg-base-drafts-a4716c9243462cd490be3c2a65319b387d12be0a5336c776ad7c290822d1acd8", "old_text": "of the connection. An endpoint that completes a graceful shutdown SHOULD use the HTTP_NO_ERROR code when closing the connection. 6.3. An HTTP/3 implementation can immediately close the QUIC connection at", "comments": "Makes two minor fixes in the GOAWAY text, both editorial: Correctly references the MAX_STREAMS frame instead of the MAX_STREAM_ID frame Notes that servers need not (and cannot) send a GOAWAY if the client has used up all the possible stream IDs; the client simply won't be making any more requests.\nThis is a whatever...", "new_text": "of the connection. An endpoint that completes a graceful shutdown SHOULD use the HTTP_NO_ERROR code when closing the connection. If a client has consumed all available bidirectional stream IDs with requests, the server need not send a GOAWAY frame, since the client is unable to make further requests. 6.3. An HTTP/3 implementation can immediately close the QUIC connection at"}
{"id": "q-en-quicwg-base-drafts-de44c104cbe81e135d1554fd67cad510859fff1ae6d424ddd14f0e0c5331696c", "old_text": "permitted; receipt of a second stream which claims to be a control stream MUST be treated as a connection error of type HTTP_WRONG_STREAM_COUNT. The sender MUST NOT close the control stream. If the control stream is closed at any point, this MUST be treated as a connection error of type HTTP_CLOSED_CRITICAL_STREAM. A pair of unidirectional streams is used rather than a single bidirectional stream. This allows either peer to send data as soon", "comments": "Technically design, but barely.\nOriginally posted by NAME in URL There are three reasonable places to blow up: When you decide to stop reading a critical stream (i.e. when STOP_SENDING is sent) When you discover the peer has stopped reading a critical stream (i.e. when STOP_SENDING is received) When you discover the peer has closed a critical stream (when RESET_STREAM or FIN is received) My inclination would be to advise against FIN/SS/R_S on a critical stream and mandate connection termination on receipt of any of them.\nCompletely agree. This seems like the simplest and most obvious solution.\nSGTM mike\nNote that this PR does NOT address , just a comment that came up there.\nRe-reading this, I think the existing text is pretty good; it just needs a little reframing. The existing requirement is to error out \"If the control stream is closed at any point,\" without reference to whether it was closed because the peer sent a FIN/RESET on their control stream or you obeyed a STOP_SENDING on yours. I think I'll add a requirement not to request closure of a control stream; do you think we need more elucidation here?\nnits, but LGTM", "new_text": "permitted; receipt of a second stream which claims to be a control stream MUST be treated as a connection error of type HTTP_WRONG_STREAM_COUNT. The sender MUST NOT close the control stream, and the receiver MUST NOT request that the sender close the control stream. 
If either control stream is closed at any point, this MUST be treated as a connection error of type HTTP_CLOSED_CRITICAL_STREAM. A pair of unidirectional streams is used rather than a single bidirectional stream. This allows either peer to send data as soon"}
{"id": "q-en-quicwg-base-drafts-a20799360eff419bf2c6a086bebfaa9dca814c33e1a80274d4d9a528a7015836", "old_text": "The DUPLICATE_PUSH frame carries a single variable-length integer that identifies the Push ID of a resource that the server has previously promised (see frame-push-promise). This frame allows the server to use the same server push in response to multiple concurrent requests. Referencing the same server push", "comments": "Skirts the editorial/design line. Previously, DUPLICATE_PUSH referred to a Push ID in a previously-sent PUSH_PROMISE frame; that frame would have caused a connection error on arrival. This says that the error should be sent regardless of which frame arrives first. May need reconciling with .\nTwo questions: Should DUPLICATE_PUSH respect the max push ID limit? PUSH_PROMISE says the following: provided in a MAX_PUSH_ID frame (Section 4.2.8) and MUST NOT use the same Push ID in multiple PUSH_PROMISE frames What should a client do if it sees a DUPLICATE_PUSH for an ID of a PUSH_PROMISE that it never received? This could be caused by a race condition, so it seems sensible to allow clients to support this without error. We should probably document that like we did for other racy things. E.g. a Push ID that has not yet been mentioned by a PUSH_PROMISE frame.\n(2) is covered in : For (1), yes it should -- the DUPLICATE might show up first, but the implication is that the ID has already been used in a PUSH_PROMISE frame. If that would be invalid, we should be explicit that the DUPLICATE_PUSH is also invalid.\nSGTM\nI think that we have a frame name problem, otherwise, this is fine. I think some wires got crossed in the suggested changes. Your comment points out that this might need reconciliation with but it seems to borrow the text directly from that PR. The outcome is that this PR is inconsistent with itself because at line the same error with PUSH_PROMISE causes a MALFORMED_FRAME. 
I think the best path is to make this change be a MALFORMED_FRAME error and reconcile on if the group want that change.", "new_text": "The DUPLICATE_PUSH frame carries a single variable-length integer that identifies the Push ID of a resource that the server has previously promised (see frame-push-promise), though that promise might not be received before this frame. A server MUST NOT use a Push ID that is larger than the client has provided in a MAX_PUSH_ID frame (frame-max-push-id). A client MUST treat receipt of a DUPLICATE_PUSH that contains a larger Push ID than the client has advertised as a connection error of type HTTP_MALFORMED_FRAME. This frame allows the server to use the same server push in response to multiple concurrent requests. Referencing the same server push"}
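The rule the new text settles on (a DUPLICATE_PUSH may legitimately race ahead of its PUSH_PROMISE, but a Push ID above the advertised MAX_PUSH_ID is a connection error) can be sketched client-side like this; the class and method names are illustrative only:

```python
class PushTracker:
    """Illustrative client-side state for validating DUPLICATE_PUSH frames."""

    def __init__(self, max_push_id: int):
        self.max_push_id = max_push_id  # value we advertised in MAX_PUSH_ID
        self.promised = set()           # Push IDs seen in PUSH_PROMISE so far

    def on_push_promise(self, push_id: int) -> None:
        self.promised.add(push_id)

    def on_duplicate_push(self, push_id: int) -> bool:
        """Returns True if the matching PUSH_PROMISE was already seen.

        A Push ID beyond what we advertised is an HTTP_MALFORMED_FRAME
        connection error; a promise that simply has not arrived yet is
        fine, because frames on different streams can reorder.
        """
        if push_id > self.max_push_id:
            raise ConnectionError("HTTP_MALFORMED_FRAME")
        return push_id in self.promised
```

A client that gets `False` back would buffer the reference until the PUSH_PROMISE arrives, mirroring the race-condition handling discussed in the comments.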
{"id": "q-en-quicwg-base-drafts-30d097dd5875ca4bacef4fe220125e09d6067910765b2e189967df16db602a85", "old_text": "A TCP connection error is signaled with QUIC RESET_STREAM frame. A proxy treats any error in the TCP connection, which includes receiving a TCP segment with the RST bit set, as a stream error of type HTTP_CONNECT_ERROR (http-error-codes). Correspondingly, a proxy MUST send a TCP segment with the RST bit set if it detects an error with the stream or the QUIC connection. 4.3.", "comments": "LGTM\nS 5.2. I don't believe this is possible on UNIX. What system call do I use to generate RST? Neither close() nor shutdown() does it.\nYou set SO_LINGER to {1, 0}, then close.\nI don't think there's a question here about the -http draft, is there?\nThe question is if it's possible on standard OSes. Sounds like it is on Unix. How about Windows? NAME\nThis is the same requirement as in HTTP/2. A \"MUST (but we know you probably won't)\" ?\nWell, that was a joke in 6919, so probably we shouldn't do that.\nThis is not that. We know that some implementations use TCP half-closed in HTTP. Some even rely on it to the point that things break if you don't do it. Not that things don't break for those people, but they have a narrow set of peers that they interact with. Of course, whether that set of endpoints is ALSO the same set of endpoints using CONNECT is hard to know.\nNote that this particular issue is not about half-closed. In half-closed, you generate a FIN and then keep receiving data. This is about sending an RST.\nPerhaps the text should be \"SHOULD\", and if the platform doesn't allow it, to \"close the connection\".\nThere's risk with that too, though. We encountered issues where the TCP connection was closed cleanly instead of aborted, and the client interpreted a truncated response as the (cacheable) response. It gets ugly. 
If you can't force a RST on the other side, you perhaps shouldn't be implementing CONNECT.\nDiscussed in Tokyo; MUST close the connection, SHOULD close with a RST.", "new_text": "A TCP connection error is signaled with QUIC RESET_STREAM frame. A proxy treats any error in the TCP connection, which includes receiving a TCP segment with the RST bit set, as a stream error of type HTTP_CONNECT_ERROR (http-error-codes). Correspondingly, if a proxy detects an error with the stream or the QUIC connection, it MUST close the TCP connection. If the underlying TCP implementation permits it, the proxy SHOULD send a TCP segment with the RST bit set. 4.3."}
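The UNIX recipe mentioned in the thread (set SO_LINGER to {1, 0}, then close) looks like this in Python; whether the RST actually reaches the peer still depends on the platform's TCP stack, per the Windows question raised above:

```python
import socket
import struct

def close_with_rst(sock: socket.socket) -> None:
    """Abort a TCP connection with an RST rather than a graceful FIN.

    Enabling SO_LINGER with l_onoff=1 and l_linger=0 makes close()
    discard unsent data and send a reset segment, which is the behavior
    the proxy SHOULD use per the text above.
    """
    # struct linger { int l_onoff; int l_linger; } -> onoff=1, linger=0
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    sock.close()
```

This is also why the thread warns against a clean FIN here: the peer could mistake a truncated tunnel payload for a complete (and cacheable) response.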
{"id": "q-en-quicwg-base-drafts-dd3e656a80f9d25dbdfff8c14dd3b6198ceba3ec9b44888c2ed5d6fab494f4e2", "old_text": "bytes associated with an HTTP request or response payload. DATA frames MUST be associated with an HTTP request or response. If a DATA frame is received on either control stream, the recipient MUST respond with a connection error (errors) of type HTTP_WRONG_STREAM. 7.2.2.", "comments": "Use 'on a control stream' instead of 'on either control stream'. This is consistent with the changes made by .", "new_text": "bytes associated with an HTTP request or response payload. DATA frames MUST be associated with an HTTP request or response. If a DATA frame is received on a control stream, the recipient MUST respond with a connection error (errors) of type HTTP_WRONG_STREAM. 7.2.2."}
{"id": "q-en-quicwg-base-drafts-6eae9e6a0a7e19f80d4cf115dc28d125727eb7f19bcb31e34f04aec812cc1bd3", "old_text": "HTTP/3 endpoint is authoritative for other origins without an explicit signal. A server that does not wish clients to reuse connections for a particular origin can indicate that it is not authoritative for a request by sending a 421 (Misdirected Request) status code in", "comments": "in what I believe is the simplest way possible -- point to 8164 and require that be successfully retrieved prior to sending any scheme other than .\nIt would be good if it could be clarified what will happen if someone needs/tries to send a HTTP request with an http:// uri over H3. I primarily see this to occur when you have a case where you have an HTTP proxy that receives an HTTP request and then have an HTTP/3 connection with the authorative server. If the expectation is that all requests are upgraded by the HTTP/3 sending side, then that needs to be made explicit.\nI might propose this comes down to a discovery problem for the proxy. What method(s) do you envisage the proxy used to decide to make an HTTP/3 connection to the authorative server? This seems related to the discussion in HTTP WG - URL If Alt-Svc is being used, the flow might be that the proxy attempts to make a cleartext HTTP/1.1 connection to the authoritative server and does an upgrade, as described in RFC 7838, to HTTP/3. In which case I don't think any action is required. If the proxy wants to opportunistically upgrade via some happy eyeballs approach described on URL, we might want to provide guidance that either 1) it shouldn't be done for client requests with http:// or 2) that it should always be done.\nThis is another variant of -- HTTP/2 and HTTP/3 explicitly include the scheme and the authority with all requests, so the server knows what it's being asked for. The question is how the client decides that a QUIC endpoint is authoritative for this particular URL. 
RFC 7838 describes how an https://-schemed request might come to be sent to an HTTP/3 endpoint and the resolution of the issue was to ask the HTTP WG to define other ways. Adding RFC 8164 as well, an http://-schemed resource might also be requested over HTTP/3; it's up to the HTTPbis group whether they want to enable a direct upgrade there as well. I don't currently see any text required for this, but if you have suggestions, I'm happy to review or expand the discussion to the list.\nI'd rather duplicate this to and rely on for this. We were not very careful in that document to avoid mention of the specific HTTP version, but the technique is perfectly portable between versions. We only really intended to put HTTP/1.x off limits.\nPerhaps \"not very careful\" was careful enough -- it says \"such as HTTP/2,\" and mandates that the protocol needs a way to carry the scheme. HTTP/3 does, so we're covered. Do we think a reference to 8164 is warranted in the section about finding an HTTP/3 endpoint?\nBy \"not very careful\" I was mainly referring to the title :) As for the text, I don't know. It will depend on what else it says.\nDiscussed in London; reference 8164 and require the same check when sending a scheme other than \"https.\"", "new_text": "HTTP/3 endpoint is authoritative for other origins without an explicit signal. Prior to making requests for an origin whose scheme is not \"https,\" the client MUST ensure the server is willing to serve that scheme. If the client intends to make requests for an origin whose scheme is \"http\", this means that it MUST obtain a valid \"http-opportunistic\" response for the origin as described in RFC8164 prior to making any such requests. Other schemes might define other mechanisms. A server that does not wish clients to reuse connections for a particular origin can indicate that it is not authoritative for a request by sending a 421 (Misdirected Request) status code in"}
{"id": "q-en-quicwg-base-drafts-be83751042f5dfe51c9a5285eecc3e19978dad0dd8dae9b4b624621b45a03e64", "old_text": "An optional reference to a specification that describes the use of the setting. The entries in the following table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for integer", "comments": "I went with \"Default\" because that and \"Initial Value\" now have specific, related (but distinct) usages in the context of pre-SETTINGS settings. Swapping them seems like it would create unnecessary confusion.\nYeah that makes sense on a reread of section 7.2.5.2\nNot a new issue but: RFC 7540 has a column for . I think it makes sense that HTTP/3 does similar. Originally posted by NAME in URL", "new_text": "An optional reference to a specification that describes the use of the setting. The value of the setting unless otherwise indicated. SHOULD be the most restrictive possible value. The entries in the following table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for integer"}
{"id": "q-en-quicwg-base-drafts-de312725532fa6cb756edc6d1ae47010d41094cb9e320f977be214ba60b139f5", "old_text": "To entities other than its intended recipient, a stateless reset will appear to be a packet with a short header. For the stateless reset to appear as a valid QUIC packet and be smaller than the received packet, the Unpredictable Bits field needs to include at least 46 bits of data (or 6 bytes, less the two fixed bits). To ensure the stateless reset packet is not smaller than other packets received on the connection, an endpoint SHOULD also ensure the total packet length is at least the minimum chosen CID length plus 22 bytes. 22 bytes allows for 1 type byte, 4 packet number and data bytes, 16 bytes for AEAD expansion, and an extra byte to allow the peer to send a smaller stateless reset than the packet it receives. The Stateless Reset Token corresponds to the minimum expansion of the packet protection AEAD. More unpredictable bytes might be necessary if the endpoint could have negotiated a packet protection scheme with a larger minimum AEAD expansion. An endpoint SHOULD NOT send a stateless reset that is significantly larger than the packet it receives. Endpoints MUST discard packets that are too small to be valid QUIC packets. With the set of AEAD functions defined in QUIC-TLS, packets that are smaller than 21 bytes are never valid. Endpoints MUST send stateless reset packets formatted as a packet with a short header. However, endpoints MUST treat any packet ending", "comments": "PR had a mistake in it, and some very confusing language. The rules for generating stateless resets were all mixed up with some complementary rules about generating packets that might trigger stateless resets. 
This attempts to split out the rules for generating stateless resets (minimum size is 5 + 16 = 21), and for generating packets that might trigger a stateless reset (min_cid + 22).\nDo we need discussion on the fixed QUIC header bit in the triggering packet and outgoing reset packet?\nNAME if you can lay out what you think the security concerns are in another issue, that would be helpful. I realize that there are plenty of compromises in this design, which make this less than ideal. But first I'd like to try to understand where you think this is falling short.\nDo we need discussion on the fixed QUIC header bit in the triggering packet and outgoing reset packet? Originally posted by NAME in URL\nMaybe I'm misunderstanding the question, but isn't the spec completely clear on that? A stateless reset has the first bit unset and the second bit set.\nAgreed, I think this is already clear and I don't see any need to change it.\nThanks both. I just wanted to check and didn't feel like I had the brain cells to answer that. I don't like relying on the image only, but the text says \"two fixed bits\", and stateless reset - unlike Version Negotiation - is version-specific. So I'm closing with no action.\nI believe this PR addresses the stated issue, which is that the text is unclear. However, I am still concerned about the security properties of this design.", "new_text": "To entities other than its intended recipient, a stateless reset will appear to be a packet with a short header. For the stateless reset to appear as a valid QUIC packet, the Unpredictable Bits field needs to include at least 38 bits of data (or 5 bytes, less the two fixed bits). A minimum size of 21 bytes does not guarantee that a stateless reset is difficult to distinguish from other packets if the recipient requires the use of a connection ID. 
To prevent a resulting stateless reset from being trivially distinguishable from a valid packet, all packets sent by an endpoint SHOULD be padded to at least 22 bytes longer than the minimum connection ID that the endpoint might use. An endpoint that sends a stateless reset in response to a packet that is 43 bytes or less in length SHOULD send a stateless reset that is one byte shorter than the packet it responds to. These values assume that the Stateless Reset Token is the same as the minimum expansion of the packet protection AEAD. Additional unpredictable bytes are necessary if the endpoint could have negotiated a packet protection scheme with a larger minimum expansion. An endpoint MUST NOT send a stateless reset that is three times or more larger than the packet it receives to avoid being used for amplification. reset-looping describes additional limits on stateless reset size. Endpoints MUST discard packets that are too small to be valid QUIC packets. With the set of AEAD functions defined in QUIC-TLS, packets that are smaller than 21 bytes are never valid. Endpoints MUST send stateless reset packets formatted as a packet with a short header. However, endpoints MUST treat any packet ending"}
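The size rules in the rewritten text (at least 21 bytes, one byte shorter than a triggering packet of 43 bytes or less, and strictly less than three times the received size) can be collected into a single hypothetical length-picker; the fixed fallback of 43 bytes for larger packets is our arbitrary choice, not from the draft:

```python
MIN_STATELESS_RESET = 21  # 5 header/unpredictable bytes + 16-byte token

def stateless_reset_len(received_len: int):
    """Choose a stateless reset size satisfying the constraints above.

    Returns None when no compliant size exists (the triggering packet is
    too small to answer without producing a distinguishable reset).
    """
    if received_len <= MIN_STATELESS_RESET:
        return None
    if received_len <= 43:
        return received_len - 1  # one byte shorter than the trigger
    return 43                    # arbitrary compliant choice for big packets

def min_padded_len(min_cid_len: int) -> int:
    """Sender-side counterpart: pad outgoing packets to at least 22 bytes
    longer than the minimum connection ID the endpoint might use."""
    return min_cid_len + 22
```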
{"id": "q-en-quicwg-base-drafts-de312725532fa6cb756edc6d1ae47010d41094cb9e320f977be214ba60b139f5", "old_text": "endpoint that occasionally uses different connection IDs might introduce some uncertainty about this. Finally, the last 16 bytes of the packet are set to the value of the Stateless Reset Token. This stateless reset design is specific to QUIC version 1. An endpoint that supports multiple versions of QUIC needs to generate a stateless reset that will be accepted by peers that support any", "comments": "PR had a mistake in it, and some very confusing language. The rules for generating stateless resets were all mixed up with some complementary rules about generating packets that might trigger stateless resets. This attempts to split out the rules for generating stateless resets (minimum size is 5 + 16 = 21), and for generating packets that might trigger a stateless reset (min_cid + 22).\nDo we need discussion on the fixed QUIC header bit in the triggering packet and outgoing reset packet?\nNAME if you can lay out what you think the security concerns are in another issue, that would be helpful. I realize that there are plenty of compromises in this design, which make this less than ideal. But first I'd like to try to understand where you think this is falling short.\nDo we need discussion on the fixed QUIC header bit in the triggering packet and outgoing reset packet? Originally posted by NAME in URL\nMaybe I'm misunderstanding the question, but isn't the spec completely clear on that? A stateless reset has the first bit unset and the second bit set.\nAgreed, I think this is already clear and I don't see any need to change it.\nThanks both. I just wanted to check and didn't feel like I had the brain cells to answer that. I don't like relying on the image only, but the text says \"two fixed bits\", and stateless reset - unlike Version Negotiation - is version-specific. 
So I'm closing with no action.\nI believe this PR addresses the stated issue, which is that the text is unclear. However, I am still concerned about the security properties of this design.", "new_text": "endpoint that occasionally uses different connection IDs might introduce some uncertainty about this. This stateless reset design is specific to QUIC version 1. An endpoint that supports multiple versions of QUIC needs to generate a stateless reset that will be accepted by peers that support any"}
{"id": "q-en-quicwg-base-drafts-7b1a5a17f71f21f4616ced5ffe78b33ff093be40ae0f7f4f4058c945e438ac1e", "old_text": "delay SHOULD NOT be considered an RTT sample. Until the server has validated the client's address on the path, the amount of data it can send is limited, as specified in Section 8.1 of QUIC-TRANSPORT. Data at Initial encryption MUST be retransmitted before Handshake data and data at Handshake encryption MUST be retransmitted before any ApplicationData data. If no data can be sent, then the PTO alarm MUST NOT be armed until data has been received from the client. Since the server could be blocked until more packets are received from the client, it is the client's responsibility to send packets to", "comments": "This is implied in some spots, but never stated with normative language.\nUnder \"Handshakes and New Paths\", it says \"Data at Initial encryption MUST be retransmitted before Handshake data and data at Handshake encryption MUST be retransmitted before any ApplicationData\". I moved that down to this section, since I think it fits slightly better here.\nIn terms of packet size, this is similar to what the old Handshake text recommended and what PTO should have said all along, as it aligns well with TCP's TLP and RTO behavior.\nThe previous text read like it only described retransmission of lost data, and this now reads like it only describes retransmission in PTOs. Are both intended? Is there a restriction anywhere that retransmitted data must not be sent at a higher encryption level than it was originally sent, or similar?\nAll of this text is in the PTO section, so only applies to PTO. I think transport is usually the best place to talk about where data is retransmitted, but it might be worth an editorial note that data is never retransmitted at a different packet number space than the original one.\nThanks for opening other issues, NAME I'm going to merge this, since I think it's a net betterment over what we have. 
We can continue the other discussions on the other issues/PRs.\nAt the New York QUIC interim, we dealt with the problem of the server being blocked by the amplification factor by making the client arm the PTO until it knew the server had validated the path, even if it had nothing to send. From the recovery : When reviewing URL, I realized that if we allowed the server to send a PING-only packet when the PTO fired and the amplification factor had been reached, then that would elicit a padded Initial ACK frame in response, which also fixes the issue. NAME thought that was a simpler solution than the current approach and this is a bit of an edge case, so simple is probably good. The original issue was\nIn a strict sense, this means that a client can use N bytes to trigger the server to send 3N + epsilon bytes, but this fits in nicely with Probes generally not being subject to sending limits. It is somewhat gross but more elegant than a client timer with nothing in flight.\nI think this might be concerning, though I could well be wrong. Consider the case of an attacker acting as a client, using the IP address of a victim as the source address. The attacker can spoof an Initial packet that carries a Client Hello and another Initial packet that carries an ACK for the server's Initial packet that carried the Server Hello. An attacker can build the correct second packet assuming that the server uses the DCID of the client's Initial packet as its CID (or the CID is zero length), and that the server's first packet number can be guessed. If that ACK contains a delay that indicates to the server that the RTT is very small (e.g., 1ms), then the PTO could be as small as that value. Then, if the server's connection timeout is say 1000ms, the server might send 1000 packets to the victim before dropping the connection. Am I missing something?\nPTO always uses exponential backoff, so if the first PTO was 1ms, that'd be ~10 PTOs, not 1000. 
In terms of total bytes sent, 10 packets with a 1 byte payload are still smaller than a single full-sized packet.\nIsn't this concern why we designed the Retry mechanism in the first place? If we are concerned about address spoofing attacks, the simple solution might be to generalize retry. As in, \"if the server can predict that its first flight will contain more than 3 times the amount of data received with the client's Initial packets, then it SHOULD verify the source address using Retry.\"\nMy concern is that the current anti-deadlock mechanism is fairly awkward and rarely needed, which makes me concerned it will be incorrectly implemented. By sending a PING only packet, the server is essentially informing the client it has something to send, but can't send it due to the amplification limit. The deadlock happens when the client's Initial is acknowledged, but the client doesn't receive the server's Initial, so it can't decrypt any handshake packets. In this state, the client has nothing to send. It can happen even if the server's first flight fits into 3 packets. But it's expected that typically servers will bundle their ACK of the client's Initial with the server Initial, and this will be rare in practice.\nNAME Fair. Due to the risk of amplification attacks, the server is probably not willing to retransmit Initial packets on timer. In that case, the current spec is a bit too loose. It should say that \"If the sender is not willing to retransmit Initial packets due to protection against amplification attacks, it MUST NOT acknowledge the client's initial packet.\" So basically, either the server verifies the client's address with the Retry/Token mechanism, or it MUST NOT acknowledge client initial packets before continuity can be verified by obtaining an ACK from the client.\nThat's an interesting suggestion, but I think it requires some non-trivial changes to the server's logic in addition to not sending ACKs? Something similar to what's described in . 
It may be worth opening up an issue for allowing the server to not arm a timer until the path has been validated, because I think that's a wider issue than just this PR?\nI agree that having the server repeat a PING on timer may also work, and that the ACK of the randomly chosen Server's CID would make a reasonable address ownership test. ACK of the PING would naturally unlock any transmission that is pending address ownership. Kazuho objects that this is hackable if the chosen Server's CID is guessable. We don't have a \"non-guessable\" requirement in the spec. In that case, the natural fix is the challenge/response mechanism.\nNAME Ah. Thank you for pointing that out. Though it seems that the amount of data that the server can send could go like 4x the amount of data that the client has sent rather than the current 3x. To paraphrase, I think what we might be trading here is the initial amount of data the server can send vs. the capability of sending PINGs. My view is that the former is a scarce resource, and that I'd prefer having more space in the former even if it requires us to have some complexity.\nNAME An Initial ping is maybe 45 bytes. So 10 pings make the amplification ~3.4. I was under the impression that the 3 multiplier was semi-arbitrary, so I'm unconcerned about a fractional increase.\nI have a couple of thoughts. 
The interesting amplification here is packet amplification, which is a useful attack since UDP and packet processing costs at endpoints / clients are relatively high. And you can get a fair number of packets back from the server for spoofing a couple of packets, in spite of the exponential backoff. I'll grant that the attack is a bit of a stretch, and does require that the attacker guess the server's CID and packet number. Though, as NAME pointed out to me offline, we allow the server to use a zero-length CID. I don't think this change is necessary. And on balance, I don't think this change is a good one.\nI do like the idea of sending a PING frame from the server as a signal to the client that it is blocked. I think we can suggest that as an optimization that a clever server might implement under the 3x limit, or when it reaches the 3x limit.\nNAME While endpoints can do that, I am not sure if it is a good idea to endorse such a behavior in the specification. The reason we have so far been fine with having the 3x limit being applied until the Handshake packets are exchanged is because we assume that the size of ServerHello would not be that large compared to the size of the ClientHello, and therefore that the chance of hitting the 3x limit before path validation is minimal. It's a hack. If we are to address the corner cases that the server hits after sending 3x amount of data, then I think we should fix the hack rather than putting another hack (i.e. the PING that signals the 3x limit) above the existing hack. I mean, if we think that defining a way that allows the server to unblock the 3x limit is important, we should be considering allowing the use of PATHCHALLENGE / PATHRESPONSE frames in Initial packets.\nI believe this is simpler to implement assuming one is implementing both a client and server. I already have special case code to send a PING when there are bytes in flight but nothing to send, so this is very similar. 
I agree it's not necessary, but I find the simplification appealing, and when we first discussed this problem, no one suggested this solution, so I thought it was worth writing up. If we keep the existing text that the client has to keep sending, I'm not sure doing this on the server side is a valuable optimization, so I'd prefer to stick with one mechanism or the other. To me, the main negative is the server sending extra datagrams. And maybe that's enough to stick with what we have.\nWhen a PTO fires, the endpoint is supposed to try and retransmit something if there is data to retransmit. In this case, you'd need to special-case it so that the server doesn't in fact retransmit but sends a PING instead. That's what I meant by adding more code.\nIt also SHOULD send a PING only packet if there are bytes in flight but nothing to send.\nNot new data, any data. This was deliberate. See Section 5.3.1: \"When the PTO timer expires, and there is new or previously sent unacknowledged data, it MUST be sent.\" The point of this text was to send data if available, new or previously sent. In the case that we are discussing the server is supposed to send a PING, but it might have data to send.\nDiscussed in ZRH. NAME to decide if he wants to pursue this, potentially discuss with NAME and/or NAME\nI realized there is one major benefit of keeping the existing design, which is the client is guaranteed to have at least one RTT measurement in this circumstance, whereas the server may not. For this reason, I'm inclined to close this with no action unless others feel strongly otherwise.\nNAME came up with the same point simultaneously, so let's close this with no action.\nThere are multiple mentions of spurious retransmission in the recovery draft, but no text on what detecting this may require from an implementation. Maybe there should be? 
\"Spuriously declaring packets as lost leads to unnecessary retransmissions and may result in degraded performance due to the actions of the congestion controller upon detecting loss. Implementations that detect spurious retransmissions and increase the reordering threshold in packets or time MAY choose to start with smaller initial reordering thresholds to minimize recovery latency.\" From comments on\nSome discussion of implementation strategies here would absolutely be helpful; it doesn't obviously fall out of Quinn's current architecture.\nNAME and NAME indicated my effort at adding more text wasn't that useful, so I'm closing this.\nAs of URL, if the client's Initial is lost, there is no required behavior that leads to it being retransmitted. Because the client has received no ACKs, no is set, and no loss is detected, leading to the client only sending probe packets, which are not required to bear any particular data (\"A probe packet MAY carry retransmitted unacknowledged data\") so long as they are ACK-eliciting. The server, having no connection state, cannot do anything with these packets. Quinn probes are currently a single PING frame (plus padding, when appropriate) for simplicity, so implementing the new logic immediately broke our tests. If recovery logic now requires probe packets to bear specific data to function correctly in certain circumstances, this should be specified explicitly.\nI also hit this issue. After several hours of trial and error, I came to the conclusion that the implementation should mark the in-flight Initial packet lost in this case, and resend it. Draft-22 has an explicit statement that an implementation should not mark in-flight crypto packets lost, but draft-23 does not have it anymore. 
BTW, draft-23 does not allow PING in Initial or Handshake packets.\nWhy not retransmit the data without declaring the in-flight packet lost?\nI think it is rather an implementation detail to internally mark the in-flight packet lost but keep it in order to catch a late ACK.\nThere's a related problem where if the server's first flight goes missing, the only response the new logic has is to send a probe at an unspecified, i.e. the highest known, encryption level. The server will generally have 1-RTT keys at this point, so it sends a 1-RTT probe, which the client, not having received anything prior, cannot decrypt and drops. As above, this was previously handled by explicitly tracking whether any data was in-flight, but that got removed.\nThe packets should not be marked as lost, but data needs to be retransmitted before sending a PING. I sent out a PR , feel free to make further suggestions there.\nI'm marking this as design because it adds some normative language, though I'll note that it doesn't change what I originally intended the text to say.", "new_text": "delay SHOULD NOT be considered an RTT sample. Until the server has validated the client's address on the path, the amount of data it can send is limited to three times the amount of data received, as specified in Section 8.1 of QUIC-TRANSPORT. If no data can be sent, then the PTO alarm MUST NOT be armed. Since the server could be blocked until more packets are received from the client, it is the client's responsibility to send packets to"}
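The record above revolves around two rules: the server may send at most three times the bytes it has received until the client's address is validated (Section 8.1 of QUIC-TRANSPORT), and per the revised recovery text, the PTO alarm is not armed while nothing can be sent under that limit. A minimal sketch of that bookkeeping, assuming hypothetical class and method names (not taken from any real implementation):

```python
class PathState:
    """Tracks the pre-validation anti-amplification budget for one path."""

    AMPLIFICATION_FACTOR = 3  # server may send at most 3x the bytes received

    def __init__(self) -> None:
        self.validated = False
        self.bytes_received = 0
        self.bytes_sent = 0

    def on_datagram_received(self, size: int) -> None:
        self.bytes_received += size

    def send_budget(self) -> int:
        """Bytes still sendable before validation (effectively unlimited after)."""
        if self.validated:
            return 2 ** 62
        return max(0, self.AMPLIFICATION_FACTOR * self.bytes_received - self.bytes_sent)

    def on_datagram_sent(self, size: int) -> None:
        assert size <= self.send_budget()  # never exceed the 3x limit
        self.bytes_sent += size

    def may_arm_pto(self) -> bool:
        # Revised text: if no data can be sent, the PTO alarm MUST NOT be
        # armed until more packets arrive from the client.
        return self.send_budget() > 0
```

A client Initial of 1200 bytes thus buys the server a 3600-byte first flight; once that is exhausted, the server goes quiet rather than arming a PTO, and it is the client's job to keep sending.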
{"id": "q-en-quicwg-base-drafts-7b1a5a17f71f21f4616ced5ffe78b33ff093be40ae0f7f4f4058c945e438ac1e", "old_text": "ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram. It is possible that the sender has no new or previously-sent data to send. As an example, consider the following sequence of events: new application data is sent in a STREAM frame, deemed lost, then retransmitted in a new packet, and then the original transmission is acknowledged. In the absence of any new application data, a PTO timer expiration now would find the sender with no new or previously-sent data to send. When there is no data to send, the sender SHOULD send a PING or other ack-eliciting frame in a single packet, re-arming the PTO timer. Alternatively, instead of sending an ack-eliciting packet, the sender MAY mark any packets still in flight as lost. Doing so avoids", "comments": "This is implied in some spots, but never stated with normative language.\nUnder \"Handshakes and New Paths\", it says \"Data at Initial encryption MUST be retransmitted before Handshake data and data at Handshake encryption MUST be retransmitted before any ApplicationData\". I moved that down to this section, since I think it fits slightly better here.\nIn terms of packet size, this is similar to what the old Handshake text recommended and what PTO should have said all along, as it aligns well with TCP's TLP and RTO behavior.\nThe previous text read like it only described retransmission of lost data, and this now reads like it only describes retransmission in PTOs. Are both intended? Is there a restriction anywhere that retransmitted data must not be sent at a higher encryption level than it was originally sent, or similar?\nAll of this text is in the PTO section, so only applies to PTO. 
I think transport is usually the best place to talk about where data is retransmitted, but it might be worth an editorial note that data is never retransmitted at a different packet number space than the original one.\nThanks for opening other issues, NAME I'm going to merge this, since I think it's a net betterment over what we have. We can continue the other discussions on the other issues/PRs.\nAt the New York QUIC interim, we dealt with the problem of the server being blocked by the amplification factor by making the client arm the PTO until it knew the server had validated the path, even if it had nothing to send. From the recovery : When reviewing URL, I realized that if we allowed the server to send a PING-only packet when the PTO fired and the amplification factor had been reached, then that would elicit a padded Initial ACK frame in response, which also fixes the issue. NAME thought that was a simpler solution than the current approach and this is a bit of an edge case, so simple is probably good. The original issue was\nIn a strict sense, this means that a client can use N bytes to trigger the server to send 3N + epsilon bytes, but this fits in nicely with Probes generally not being subject to sending limits. It is somewhat gross but more elegant than a client timer with nothing in flight.\nI think this might be concerning, though I could well be wrong. Consider the case of an attacker acting as a client, using the IP address of a victim as the source address. The attacker can spoof an Initial packet that carries a Client Hello and another Initial packet that carries an ACK for the server's Initial packet that carried the Server Hello. An attacker can build the correct second packet assuming that the server uses the DCID of the client's Initial packet as its CID (or the CID is zero length), and that the server's first packet number can be guessed. 
If that ACK contains a delay that indicates to the server that the RTT is very small (e.g., 1ms), then the PTO could be as small as that value. Then, if the server's connection timeout is say 1000ms, the server might send 1000 packets to the victim before dropping the connection. Am I missing something?\nPTO always uses exponential backoff, so if the first PTO was 1ms, that'd be ~10 PTOs, not 1000. In terms of total bytes sent, 10 packets with a 1 byte payload are still smaller than a single full-sized packet.\nIsn't this concern why we designed the Retry mechanism in the first place? If we are concerned about address spoofing attacks, the simple solution might be to generalize retry. As in, \"if the server can predict that its first flight will contain more than 3 times the amount of data received with the client's Initial packets, then it SHOULD verify the source address using Retry.\"\nMy concern is that the current anti-deadlock mechanism is fairly awkward and rarely needed, which makes me concerned it will be incorrectly implemented. By sending a PING only packet, the server is essentially informing the client it has something to send, but can't send it due to the amplification limit. The deadlock happens when the client's Initial is acknowledged, but the client doesn't receive the server's Initial, so it can't decrypt any handshake packets. In this state, the client has nothing to send. It can happen even if the server's first flight fits into 3 packets. But it's expected that typically servers will bundle their ACK of the client's Initial with the server Initial, and this will be rare in practice.\nNAME Fair. Due to the risk of amplification attacks, the server is probably not willing to retransmit Initial packets on timer. In that case, the current spec is a bit too loose. 
It should say that \"If the sender is not willing to retransmit Initial packets due to protection against amplification attacks, it MUST NOT acknowledge the client's initial packet.\" So basically, either the server verifies the client's address with the Retry/Token mechanism, or it MUST NOT acknowledge client initial packets before continuity can be verified by obtaining an ACK from the client.\nThat's an interesting suggestion, but I think it requires some non-trivial changes to the server's logic in addition to not sending ACKs? Something similar to what's described in . It may be worth opening up an issue for allowing the server to not arm a timer until the path has been validated, because I think that's a wider issue than just this PR?\nI agree that having the server repeat a PING on timer may also work, and that the ACK of the randomly chosen Server's CID would make a reasonable address ownership test. ACK of the PING would naturally unlock any transmission that is pending address ownership. Kazuho objects that this is hackable if the chosen Server's CID is guessable. We don't have a \"non-guessable\" requirement in the spec. In that case, the natural fix is the challenge/response mechanism.\nNAME Ah. Thank you for pointing that out. Though it seems that the amount of data that the server can send could go like 4x the amount of data that the client has sent rather than the current 3x. To paraphrase, I think what we might be trading here is the initial amount of data the server can send vs. the capability of sending PINGs. My view is that the former is a scarce resource, and that I'd prefer having more space in the former even if it requires us to have some complexity.\nNAME An Initial ping is maybe 45 bytes. So 10 pings make the amplification ~3.4. I was under the impression that the 3 multiplier was semi-arbitrary, so I'm unconcerned about a fractional increase.\nI have a couple of thoughts. 
First, this change is not necessarily an overall simplification. It makes things slightly simpler at the client, but it adds about as much code to the server. Additional code has to be added at the server to the PTO response to only send a PING and not a retransmission when the path is not yet validated. Second, I think it does open up an interesting attack vector that is currently absent. NAME notes that there is amplification, but I don't think byte amplification is the interesting one here, since the total load is going to be < 1 MTU. The interesting amplification here is packet amplification, which is a useful attack since UDP and packet processing costs at endpoints / clients are relatively high. And you can get a fair number of packets back from the server for spoofing a couple of packets, in spite of the exponential backoff. I'll grant that the attack is a bit of a stretch, and does require that the attacker guess the server's CID and packet number. Though, as NAME pointed out to me offline, we allow the server to use a zero-length CID. I don't think this change is necessary. And on balance, I don't think this change is a good one.\nI do like the idea of sending a PING frame from the server as a signal to the client that it is blocked. I think we can suggest that as an optimization that a clever server might implement under the 3x limit, or when it reaches the 3x limit.\nNAME While endpoints can do that, I am not sure if it is a good idea to endorse such a behavior in the specification. The reason we have so far been fine with having the 3x limit being applied until the Handshake packets are exchanged is because we assume that the size of ServerHello would not be that large compared to the size of the ClientHello, and therefore that the chance of hitting the 3x limit before path validation is minimal. It's a hack. 
If we are to address the corner cases that the server hits after sending 3x amount of data, then I think we should fix the hack rather than putting another hack (i.e. the PING that signals the 3x limit) above the existing hack. I mean, if we think that defining a way that allows the server to unblock the 3x limit is important, we should be considering allowing the use of PATHCHALLENGE / PATHRESPONSE frames in Initial packets.\nI believe this is simpler to implement assuming one is implementing both a client and server. I already have special case code to send a PING when there are bytes in flight but nothing to send, so this is very similar. I agree it's not necessary, but I find the simplification appealing, and when we first discussed this problem, no one suggested this solution, so I thought it was worth writing up. If we keep the existing text that the client has to keep sending, I'm not sure doing this on the server side is a valuable optimization, so I'd prefer to stick with one mechanism or the other. To me, the main negative is the server sending extra datagrams. And maybe that's enough to stick with what we have.\nWhen a PTO fires, the endpoint is supposed to try and retransmit something if there is data to retransmit. In this case, you'd need to special-case it so that the server doesn't in fact retransmit but sends a PING instead. That's what I meant by adding more code.\nIt also SHOULD send a PING only packet if there are bytes in flight but nothing to send.\nNot new data, any data. This was deliberate. See Section 5.3.1: \"When the PTO timer expires, and there is new or previously sent unacknowledged data, it MUST be sent.\" The point of this text was to send data if available, new or previously sent. In the case that we are discussing the server is supposed to send a PING, but it might have data to send.\nDiscussed in ZRH. 
NAME to decide if he wants to pursue this, potentially discuss with NAME and/or NAME\nI realized there is one major benefit of keeping the existing design, which is the client is guaranteed to have at least one RTT measurement in this circumstance, whereas the server may not. For this reason, I'm inclined to close this with no action unless others feel strongly otherwise.\nNAME came up with the same point simultaneously, so let's close this with no action.\nThere are multiple mentions of spurious retransmission in the recovery draft, but no text on what detecting this may require from an implementation. Maybe there should be? \"Spuriously declaring packets as lost leads to unnecessary retransmissions and may result in degraded performance due to the actions of the congestion controller upon detecting loss. Implementations that detect spurious retransmissions and increase the reordering threshold in packets or time MAY choose to start with smaller initial reordering thresholds to minimize recovery latency.\" From comments on\nSome discussion of implementation strategies here would absolutely be helpful; it doesn't obviously fall out of Quinn's current architecture.\nNAME and NAME indicated my effort at adding more text wasn't that useful, so I'm closing this.\nAs of URL, if the client's Initial is lost, there is no required behavior that leads to it being retransmitted. Because the client has received no ACKs, no is set, and no loss is detected, leading to the client only sending probe packets, which are not required to bear any particular data (\"A probe packet MAY carry retransmitted unacknowledged data\") so long as they are ACK-eliciting. The server, having no connection state, cannot do anything with these packets. Quinn probes are currently a single PING frame (plus padding, when appropriate) for simplicity, so implementing the new logic immediately broke our tests. 
If recovery logic now requires probe packets to bear specific data to function correctly in certain circumstances, this should be specified explicitly.\nI also hit this issue. After several hours of trial and error, I came to the conclusion that the implementation should mark the in-flight Initial packet lost in this case, and resend it. Draft-22 has an explicit statement that an implementation should not mark in-flight crypto packets lost, but draft-23 does not have it anymore. BTW, draft-23 does not allow PING in Initial or Handshake packets.\nWhy not retransmit the data without declaring the in-flight packet lost?\nI think it is rather an implementation detail to internally mark the in-flight packet lost but keep it in order to catch a late ACK.\nThere's a related problem where if the server's first flight goes missing, the only response the new logic has is to send a probe at an unspecified, i.e. the highest known, encryption level. The server will generally have 1-RTT keys at this point, so it sends a 1-RTT probe, which the client, not having received anything prior, cannot decrypt and drops. As above, this was previously handled by explicitly tracking whether any data was in-flight, but that got removed.\nThe packets should not be marked as lost, but data needs to be retransmitted before sending a PING. I sent out a PR , feel free to make further suggestions there.\nI'm marking this as design because it adds some normative language, though I'll note that it doesn't change what I originally intended the text to say.", "new_text": "ack-eliciting packets, to avoid an expensive consecutive PTO expiration due to a single lost datagram. When the PTO timer expires, and there is new or previously sent unacknowledged data, it MUST be sent. Data that was previously sent with Initial encryption MUST be sent before Handshake data and data previously sent at Handshake encryption MUST be sent before any ApplicationData data. It is possible the sender has no new or previously-sent data to send. 
As an example, consider the following sequence of events: new application data is sent in a STREAM frame, deemed lost, then retransmitted in a new packet, and then the original transmission is acknowledged. When there is no data to send, the sender SHOULD send a PING or other ack-eliciting frame in a single packet, re-arming the PTO timer. Alternatively, instead of sending an ack-eliciting packet, the sender MAY mark any packets still in flight as lost. Doing so avoids"}
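The revised recovery text in this record encodes two behaviors: on PTO expiry, unacknowledged data is resent in encryption-level order (Initial, then Handshake, then ApplicationData), with a PING as the fallback when nothing remains; and the PTO duration itself backs off exponentially across consecutive expirations. A minimal sketch under a simplified data model, with hypothetical function names:

```python
# Order in which unacknowledged data is resent on PTO expiry.
LEVEL_ORDER = ["initial", "handshake", "application"]


def pto_duration(smoothed_rtt: float, rtt_var: float, max_ack_delay: float,
                 pto_count: int) -> float:
    """PTO = smoothed_rtt + max(4*rttvar, granularity) + max_ack_delay,
    doubled on each consecutive expiry (exponential backoff)."""
    granularity = 0.001  # assume 1 ms timer granularity
    base = smoothed_rtt + max(4 * rtt_var, granularity) + max_ack_delay
    return base * (2 ** pto_count)


def data_to_send_on_pto(unacked: dict) -> tuple:
    """Pick the lowest encryption level that still has unacknowledged
    data; if every level is empty, fall back to an ack-eliciting PING."""
    for level in LEVEL_ORDER:
        if unacked.get(level):
            return level, unacked[level][0]
    return "application", "PING"
```

With `unacked = {"handshake": ["CRYPTO"], "application": ["STREAM"]}`, the Handshake data wins over ApplicationData, matching the ordering rule; an empty map yields the PING fallback described in the new text.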
{"id": "q-en-quicwg-base-drafts-6daca996b5a46bf5edf8aaaec24cb7e2eb553bd683c324648ac60e6fe297dd97", "old_text": "versions are four-byte sequences with no additional constraints on format. Leading zeros SHOULD be omitted for brevity. Syntax: Where multiple versions are listed, the order of the values reflects the server's preference (with the first value being the most", "comments": "Use the correct ABNF grammar to express repetition and fix the Reserved Frame Type link.\nThere is some baggage with this parameter - URL People were a bit grumpy with changing their parser 18 months ago, I think they might be even grumpier now that we have even more client implementations in the wild. What is the benefit of this change? I think we need an issue for due process and discussion.\nOh joy. This caused me to go look at our parser in Chrome, and realized we're still doing the old (pre 1097) quic=;quic= format. So Chrome has work to do no matter how we resolve this. That being said, I'd really rather not have two different ways to do this, so +1 to NAME 's comment on the motivation.\nI can remove the additional unquoted token change and keep this PR purely as an editorial fix. Before the PR, the current text does not have correct ABNF grammar. I added the unquoted token variant since it is permitted (but not required) by the grammar at URL Since HTTP/3 over QUIC is not the only user of this Alt-Svc header, I can imagine that generic parsers are written such that they already understand the unquoted token value. Thus, adding support for this should not be an additional burden? I have no strong opinion here. The additional CPU cost of handling both unquoted and quoted forms should be negligible. Disallowing the unquoted form is a restriction of the RFC 7838 grammar. Should I bring this to the list and/or create a new issue?\nLet's create a new issue. 
You're correct that non-HTTP/3 implementations might need to handle this - hence the original issue which related back to structured headers (an attempt at defining some best practice in this regard -URL). Part of the problem is not CPU cost of DQUOTEs but expressing \"a list of versions\" and making sure that implementations know how to do that. Alt-Svc defines the parameter syntax but not how to handle repetition of parameters.\nI removed the single quote change; this should be a purely editorial fix now. I'll create a new issue for the unquoted parameter value. The \"list of versions\" situation does not change: if you only support one version you would write: If you support multiple, you would still use one parameter, but you have to quote it since a comma is not valid in a .\nAccording to URL, the Alt-Svc parameter value can be unquoted: This means that this can be written: instead of only: The current text in URL forbids the unquoted variant which is an additional constraint on the RFC 7838 grammar. Should unquoted parameter values also be permitted? Note that this does not change the situation when multiple versions are supported since an unquoted \"token\" type is not allowed to contain commas. See also the discussion in .\nRFC 7838 allows unquoted strings, but there's nothing preventing new specifications of Alt-Svc parameters from being more restrictive. I'm not sure the added flexibility of allowing unquoted versions buys us much here, and it comes at an implementation cost.\nIf alt-svc can be specified in configuration files, it is likely to result in interop issues if unquoted is not permitted.\nI support allowing an unquoted token for the alt-svc quic parameter. quic=1 and quic=\"1\" are semantically the same, and there is no reason to disallow an unquoted token. Moreover, requiring quoted-string means that we cannot use a generic alt-svc parser because such a parser can parse quic=1 without an error. 
You need to write your own parser to detect that the DQUOTE is really there. If you think that detecting it is not necessary, then requiring quoted-string is not necessary in the first place.\nPeople and implementers will be surprised (and get it wrong) if the rules and constraints of RFC7838 aren't kept.\nTo highlight my earlier point, on slack the following was posted: The issue appears to not be the unquoted ma=3600, but it does suggest we will see unquoted strings in the wild, and URL is said to also use unquoted ma=3600.\nBut that's the ma= field, not the quic= field, right? (This issue is about the quic= field, I think)\nAs I said elsewhere, the current text in draft-23 stems from a Structured Headers issue related to wanting to advertise multiple versions of QUIC. URL Alt-Svc isn't a structured header, but if it were (e.g. if someone wanted to write a generic Structured Header parser) then there was a danger that the rules preventing dictionary key repetition would have prevented an endpoint sending multiple parameters e.g. . To mitigate the risk, the quic parameter was changed to a DQUOTED comma-separated list as part of URL This issue addresses the correctness within the Alt-Svc spec but does it reopen the can of worms related to Structured Headers?
NAME I wouldn't mind making the semantics more restrictive (e.g. restricting the values of the QUIC parameter to hexadecimal only), but restricting the parser grammar would impose extra restrictions on those who have to generate the header. It is conceivable that implementations will continue accepting the RFC 7838 grammar, and end up accepting while others would reject it and accept only. By making sure that the grammars align, I hope that this becomes a non-issue. So far NAME NAME and NAME seem to support this proposal and NAME is not sure. Can this be considered consensus? :smiley:", "new_text": "versions are four-byte sequences with no additional constraints on format. Leading zeros SHOULD be omitted for brevity. Syntax of the \"quic\" parameter value: Where multiple versions are listed, the order of the values reflects the server's preference (with the first value being the most"}
{"id": "q-en-quicwg-base-drafts-c9515b49797866f664fc270fad18092057a0c65e256ae5eaf58ac19c99663818", "old_text": "4.5. In order to be usable for 0-RTT, TLS MUST provide a NewSessionTicket message that contains the \"early_data\" extension with a max_early_data_size of 0xffffffff; the amount of data which the client can send in 0-RTT is controlled by the \"initial_max_data\" transport parameter supplied by the server. A client MUST treat receipt of a NewSessionTicket that contains an \"early_data\" extension with any other value as a connection error of type PROTOCOL_VIOLATION. A client that wishes to send 0-RTT packets uses the \"early_data\" extension in the ClientHello message of a subsequent handshake (see", "comments": "The current text was unclear to me: how does one enforce a MUST on TLS? The updated text hopefully clarifies that it's legal for a server to send a NewSessionTicket without the \"early_data\" extension - that indicates that the server supports resumption but not 0-RTT, as it does in TLS.", "new_text": "4.5. To communicate their willingness to process 0-RTT data, servers send a NewSessionTicket message that contains the \"early_data\" extension with a max_early_data_size of 0xffffffff; the amount of data which the client can send in 0-RTT is controlled by the \"initial_max_data\" transport parameter supplied by the server. Servers MUST NOT send the \"early_data\" extension with a max_early_data_size set to any value other than 0xffffffff. A client MUST treat receipt of a NewSessionTicket that contains an \"early_data\" extension with any other value as a connection error of type PROTOCOL_VIOLATION. A client that wishes to send 0-RTT packets uses the \"early_data\" extension in the ClientHello message of a subsequent handshake (see"}
{"id": "q-en-quicwg-base-drafts-569ecc7f498b326c50be2924c35ef7291f97ef4696a6b0ddb87933a91b002b5f", "old_text": "migrates away from a local address SHOULD retire all connection IDs used on that address once it no longer plans to use that address. An endpoint can request that its peer retire connection IDs by sending a NEW_CONNECTION_ID frame with an increased Retire Prior To field. Upon receipt, the peer SHOULD retire the corresponding connection IDs and send the corresponding RETIRE_CONNECTION_ID frames in a timely manner. Failing to do so can cause packets to be delayed, lost, or cause the original endpoint to send a stateless reset in response to a connection ID it can no longer route correctly. An endpoint MAY discard a connection ID for which retirement has been requested once an interval of no less than 3 PTO has elapsed since an acknowledgement is received for the NEW_CONNECTION_ID frame requesting that retirement. Subsequent incoming packets using that connection ID could elicit a response with the corresponding stateless reset token.", "comments": "NAME The text in the PR adopts 1 PTO as a suggestion, as I am not sure if there is a point in that value being the exact limit. If it needs to be, we can rephrase the text to \"MUST retire the corresponding connection IDs in one PTO and send corresponding RETIRECONNECTIONID frames.\"\nI'm a bit worried that a stateless reset from a retired CID could be used as an attack vector by simply delaying packets en-route. However, if the reset is only effective on CIDs that are no longer recognised that shouldn't be a problem.\nAt the moment, states, quote: Upon receipt, the peer SHOULD retire the corresponding connection IDs and send the corresponding RETIRECONNECTIONID frames in a timely manner. Failing to do so can cause packets to be delayed, lost, or cause the original endpoint to send a stateless reset in response to a connection ID it can no longer route correctly. 
As the text uses \"SHOULD\" rather than a \"MUST\", some view it as an optional requirement to switch to using a new connection ID in such a situation. Others think that it is a mandatory requirement for an endpoint to switch to a new connection ID. If we are to consider the latter behavior to be the case, I think the text should be changed to \"MUST retire the corresponding connection IDs and send the corresponding RETIRECONNECTIONID frames in a timely manner\".\nMy personal take is that it would be nice if the processing of the NEWCONNECTIONID frame and RETIRECONNECTIONID frame could be an optional feature. QUIC is already very complex, and it would be beneficial to have some features be optional, in the sense that endpoints can simply discard certain frames when not implementing certain features. IIUC, at the moment, these two frames are optional for an endpoint that does not support migration. OTOH, from the robustness viewpoint of the protocol, it does make sense to change the text to MUST. We might have already crossed the river by introducing the Retire Prior To field. The third option is to add a TP that indicates if an endpoint supports retirement of connection IDs...\nThere was a lot of discussion when the Retire Prior To feature was added. At the time, I felt it pretty clear that it could not be optional from our server's perspective. I agree the sentence with the SHOULD, if read in isolation, could be interpreted as making the feature optional. But the sentence immediately following it makes it pretty clear that it's not optional. I understand the desire to make things optional for simplicity's sake, but a primary point of this feature, when it was added, was that it was required. Making it optional now would, IMO, defeat the purpose of the feature.\nMy understanding is that this is optional in the sense that GOAWAY is optional. At some point, the connection is likely to terminate abruptly if you don't follow the request.\nThat's one way to look at it. 
The difference here is that we're not trying to kill the connection with Retire Prior To.\nThere was definitely lots of discussion around the requirement level here when this was added (see ), NAME asked: There was good discussion following that ( and scroll down). My understanding of was that the endpoint being asked to retire a CID really SHOULD retire it or else the peer may stop responding to that CID. After the endpoint requesting retirement gives up on a CID and forgets about it, packets will either arrive with that CID or they will not. It has two choices: it can choose to process them if they arrive with the wrong CID, or it can drop them. The enforcement mechanism here is to stop processing the \"bad\" CIDs, not close the connection, especially as you take into account potentially delayed-but-valid packets. I think the current stance of \"really SHOULD stop using such a CID, but the other endpoint needs to be careful about enforcing that\" is the right one; if there's a wording update that would help with understanding there, then it would be great to update the wording as an editorial issue.\nWhatever the answer here, the SHOULD was chosen because \"timely manner\" isn't defined in a testable fashion. If someone wants to argue for MUST, then we need a firmer definition of the reaction.\nAnd to agree with NAME the squishiness here only exists up until the point that this is \"do this or else\". We should make it clearer that the cost of non-compliance is losing the connection. Not always, but an endpoint is entitled to cut and run if their peer doesn't comply.\nNAME Thank you for the explanation and the links. They help a lot in understanding how we ended up here. NAME Yeah we already state that \"an endpoint MAY discard a connection ID for which retirement has been requested once an interval of no less than 3 PTO has elapsed since an acknowledgement is received for the NEWCONNECTIONID frame requesting that retirement.\" So I think it's testable. 
Considering that we have a MAY on the side that initiates the retirement, I think we can have a MUST on the receiver side (assuming that we want to).\nThat almost works, except that PTO isn't a shared variable. It's subjective and so endpoints might disagree on where this line is. At 3 PTO, that's probably not a disagreement worth having, but you are effectively recommending instant compliance if you set the bar that low, as packet loss on any packet sent that attempts to comply might cause it to take up to 3 PTO to get through.\nIIRC, the base of the SHOULD was that we didn't have a specific timeout anyone felt comfortable enforcing in the absence of shared timers between peers. We've had a general principle that requirements where you can't tell whether the peer violated them are probably SHOULDs instead of MUSTs. As Nick says, the SHOULD is intended to refer to doing so promptly, not to doing so at all.\nWhile I agree that we have tried to make MUST requirements testable, I am not certain if it is a good idea to avoid use of MUST when a requirement cannot be tested (or if doing so is a good idea, per the definition of the keywords in RFC 2119). At least for this particular case, isn't it a requirement for an endpoint to drop the old connection IDs? Though I could well be wrong in how the keywords are used in practice.\nI'm fine saying MUST drop in a timely manner, and being explicit about what we mean by timely manner (3 PTO from the peer's perspective). Obviously, from the peer's perspective, that is enforceable. Locally though, it's obviously not as clear cut.\nWe need a PR. NAME do you think that you can do this? As discussed, it should be that the receiving side will retire within one PTO. The sending side can enforce at 3 PTO, by dropping the connection.\nDiscussed in Cupertino. 
NAME to quickly rev the PR to incorporate NAME suggestion.\nNAME the delayed packets are handled by this text: If you've followed the requirement to retire as indicated, then a delayed packet triggering stateless reset can't hurt you.Looks fine to me. I'm also fine with Martin's suggestions as well.The endpoint here has always been going to do whatever it does, regardless of whether we said SHOULD or MUST, so this seems fine. I do like the direction for the other endpoint to wait 3PTO, that would be nice to include.", "new_text": "migrates away from a local address SHOULD retire all connection IDs used on that address once it no longer plans to use that address. An endpoint can cause its peer to retire connection IDs by sending a NEW_CONNECTION_ID frame with an increased Retire Prior To field. Upon receipt, the peer MUST retire the corresponding connection IDs and send corresponding RETIRE_CONNECTION_ID frames. Failing to retire the connection IDs within approximately one PTO can cause packets to be delayed, lost, or cause the original endpoint to send a stateless reset in response to a connection ID it can no longer route correctly. An endpoint MAY discard a connection ID for which retirement has been requested once an interval of no less than 3 PTO has elapsed since an acknowledgement is received for the NEW_CONNECTION_ID frame requesting that retirement. Until then, the endpoint SHOULD be prepared to receive packets that contain the connection ID that it has requested be retired. Subsequent incoming packets using that connection ID could elicit a response with the corresponding stateless reset token."}
{"id": "q-en-quicwg-base-drafts-705c4a2e4834545d72a4e5df364655b843e4206a39b489814650c71dfa7cc41a", "old_text": "not ACK-only, and they are not acknowledged, declared lost, or abandoned along with old keys. All frames besides ACK or PADDING are considered ack-eliciting. Packets that contain ack-eliciting frames elicit an ACK from the receiver within the maximum ack delay and are called ack-eliciting", "comments": "At the moment, we state that a QUIC packet that contains frames other than ACK and PADDING (), but I wonder if that is true for CONNECTIONCLOSE frame. A CONNECTIONCLOSE frame never elicits an acknowledgment. IIUC, it is not meant to be governed by the congestion controller. Considering the aspects, I think we should add CONNECTION_CLOSE to the list of non-ACK-eliciting frames.\nThat also ensures the congestion controller never inadvertently blocks sending a CONNECTIONCLOSE, which is consistent with the intent. If we make this change, I believe APPLICATIONCLOSE needs to be on the list as well. Now that we discuss ack-eliciting in transport, I think we'd need to change both transport and recovery and mark this design. So I think this would be a slight improvement, but I an also imagine people claiming that CONNECITON_CLOSE is already special, so there's no need for a change.\nAPPLICATION_CLOSE is no more. This seems like a fine change.\nThis seems sensible.\nDiscussed in Cupertino. Obvious correction.\nSeems right to me. Editorial tweaks suggested.", "new_text": "not ACK-only, and they are not acknowledged, declared lost, or abandoned along with old keys. All frames other than ACK, PADDING, and CONNECTION_CLOSE are considered ack-eliciting. Packets that contain ack-eliciting frames elicit an ACK from the receiver within the maximum ack delay and are called ack-eliciting"}
{"id": "q-en-quicwg-base-drafts-705c4a2e4834545d72a4e5df364655b843e4206a39b489814650c71dfa7cc41a", "old_text": "performance of the QUIC handshake and use shorter timers for acknowledgement. Packets that contain only ACK frames do not count toward congestion control limits and are not considered in-flight. PADDING frames cause packets to contribute toward bytes in flight without directly causing an acknowledgment to be sent.", "comments": "At the moment, we state that a QUIC packet that contains frames other than ACK and PADDING (), but I wonder if that is true for CONNECTIONCLOSE frame. A CONNECTIONCLOSE frame never elicits an acknowledgment. IIUC, it is not meant to be governed by the congestion controller. Considering the aspects, I think we should add CONNECTION_CLOSE to the list of non-ACK-eliciting frames.\nThat also ensures the congestion controller never inadvertently blocks sending a CONNECTIONCLOSE, which is consistent with the intent. If we make this change, I believe APPLICATIONCLOSE needs to be on the list as well. Now that we discuss ack-eliciting in transport, I think we'd need to change both transport and recovery and mark this design. So I think this would be a slight improvement, but I an also imagine people claiming that CONNECITON_CLOSE is already special, so there's no need for a change.\nAPPLICATION_CLOSE is no more. This seems like a fine change.\nThis seems sensible.\nDiscussed in Cupertino. Obvious correction.\nSeems right to me. Editorial tweaks suggested.", "new_text": "performance of the QUIC handshake and use shorter timers for acknowledgement. Packets containing frames besides ACK or CONNECTION_CLOSE frames count toward congestion control limits and are considered in- flight. PADDING frames cause packets to contribute toward bytes in flight without directly causing an acknowledgment to be sent."}
{"id": "q-en-quicwg-base-drafts-705c4a2e4834545d72a4e5df364655b843e4206a39b489814650c71dfa7cc41a", "old_text": "UDP datagram. Multiple QUIC packets can be encapsulated in a single UDP datagram. A QUIC packet that contains frames other than ACK and PADDING. These cause a recipient to send an acknowledgment (see sending- acknowledgements). An entity that can participate in a QUIC connection by generating, receiving, and processing QUIC packets. There are only two types", "comments": "At the moment, we state that a QUIC packet that contains frames other than ACK and PADDING (), but I wonder if that is true for CONNECTIONCLOSE frame. A CONNECTIONCLOSE frame never elicits an acknowledgment. IIUC, it is not meant to be governed by the congestion controller. Considering the aspects, I think we should add CONNECTION_CLOSE to the list of non-ACK-eliciting frames.\nThat also ensures the congestion controller never inadvertently blocks sending a CONNECTIONCLOSE, which is consistent with the intent. If we make this change, I believe APPLICATIONCLOSE needs to be on the list as well. Now that we discuss ack-eliciting in transport, I think we'd need to change both transport and recovery and mark this design. So I think this would be a slight improvement, but I an also imagine people claiming that CONNECITON_CLOSE is already special, so there's no need for a change.\nAPPLICATION_CLOSE is no more. This seems like a fine change.\nThis seems sensible.\nDiscussed in Cupertino. Obvious correction.\nSeems right to me. Editorial tweaks suggested.", "new_text": "UDP datagram. Multiple QUIC packets can be encapsulated in a single UDP datagram. A QUIC packet that contains frames other than ACK, PADDING, and CONNECTION_CLOSE. These cause a recipient to send an acknowledgment (see sending-acknowledgements). An entity that can participate in a QUIC connection by generating, receiving, and processing QUIC packets. There are only two types"}
{"id": "q-en-quicwg-base-drafts-705c4a2e4834545d72a4e5df364655b843e4206a39b489814650c71dfa7cc41a", "old_text": "Each endpoint advertises its own idle timeout to its peer. An endpoint restarts any timer it maintains when a packet from its peer is received and processed successfully. The timer is also restarted when sending a packet containing frames other than ACK or PADDING (an ack-eliciting packet; see QUIC-RECOVERY), but only if no other ack- eliciting packets have been sent since last receiving a packet. Restarting when sending packets ensures that connections do not prematurely time out when initiating new activity. The value for an idle timeout can be asymmetric. The value advertised by an endpoint is only used to determine whether the", "comments": "At the moment, we state that a QUIC packet that contains frames other than ACK and PADDING (), but I wonder if that is true for CONNECTIONCLOSE frame. A CONNECTIONCLOSE frame never elicits an acknowledgment. IIUC, it is not meant to be governed by the congestion controller. Considering the aspects, I think we should add CONNECTION_CLOSE to the list of non-ACK-eliciting frames.\nThat also ensures the congestion controller never inadvertently blocks sending a CONNECTIONCLOSE, which is consistent with the intent. If we make this change, I believe APPLICATIONCLOSE needs to be on the list as well. Now that we discuss ack-eliciting in transport, I think we'd need to change both transport and recovery and mark this design. So I think this would be a slight improvement, but I an also imagine people claiming that CONNECITON_CLOSE is already special, so there's no need for a change.\nAPPLICATION_CLOSE is no more. This seems like a fine change.\nThis seems sensible.\nDiscussed in Cupertino. Obvious correction.\nSeems right to me. Editorial tweaks suggested.", "new_text": "Each endpoint advertises its own idle timeout to its peer. 
An endpoint restarts any timer it maintains when a packet from its peer is received and processed successfully. The timer is also restarted when sending an ack-eliciting packet (see QUIC-RECOVERY), but only if no other ack-eliciting packets have been sent since last receiving a packet. Restarting when sending packets ensures that connections do not prematurely time out when initiating new activity. The value for an idle timeout can be asymmetric. The value advertised by an endpoint is only used to determine whether the"}
{"id": "q-en-quicwg-base-drafts-705c4a2e4834545d72a4e5df364655b843e4206a39b489814650c71dfa7cc41a", "old_text": "13.2. Endpoints acknowledge all packets they receive and process. However, only ack-eliciting packets (see QUIC-RECOVERY) trigger the sending of an ACK frame. Packets that are not ack-eliciting are only acknowledged when an ACK frame is sent for other reasons. When sending a packet for any reason, an endpoint should attempt to", "comments": "At the moment, we state that a QUIC packet that contains frames other than ACK and PADDING (), but I wonder if that is true for CONNECTIONCLOSE frame. A CONNECTIONCLOSE frame never elicits an acknowledgment. IIUC, it is not meant to be governed by the congestion controller. Considering the aspects, I think we should add CONNECTION_CLOSE to the list of non-ACK-eliciting frames.\nThat also ensures the congestion controller never inadvertently blocks sending a CONNECTIONCLOSE, which is consistent with the intent. If we make this change, I believe APPLICATIONCLOSE needs to be on the list as well. Now that we discuss ack-eliciting in transport, I think we'd need to change both transport and recovery and mark this design. So I think this would be a slight improvement, but I an also imagine people claiming that CONNECITON_CLOSE is already special, so there's no need for a change.\nAPPLICATION_CLOSE is no more. This seems like a fine change.\nThis seems sensible.\nDiscussed in Cupertino. Obvious correction.\nSeems right to me. Editorial tweaks suggested.", "new_text": "13.2. Endpoints acknowledge all packets they receive and process. However, only ack-eliciting packets cause an ACK frame to be sent within the maximum ack delay. Packets that are not ack-eliciting are only acknowledged when an ACK frame is sent for other reasons. When sending a packet for any reason, an endpoint should attempt to"}
{"id": "q-en-quicwg-base-drafts-c6f64b7c635501c9bf59804800baf9e650908e76980eb2f9bfed2ebb71b71b70", "old_text": "The endpoint accepting incoming QUIC connections. An opaque identifier that is used to identify a QUIC connection at an endpoint. Each endpoint sets a value for its peer to include in packets sent towards the endpoint.", "comments": "Some definitional work for terms that were poorly used. Note that \"host\" in the remaining uses is strictly interpreted in the RFC 1122 sense, so I left those alone. I considered sorting the terms section differently, and might still alphabetize that, but this isn't problematic just yet.\nThe word \"host\" is undefined. If it is equal to \"endpoint\", it should be replaced. Otherwise, \"host\" should\" be defined in Sec 1.2. The word \"address\" sometime means \"IP address\" and \"a pair of IP address and port\" in other cases. Sec 1.2 should define \"address\" as \"a pair of IP address and port\". And \"IP\" should be added in front of \"address\" which means \"IP address\".\nfixed this.\nThank you!", "new_text": "The endpoint accepting incoming QUIC connections. When used without qualification, the tuple of IP version, IP address, UDP protocol, and UDP port number that represents one end of a network path. An opaque identifier that is used to identify a QUIC connection at an endpoint. Each endpoint sets a value for its peer to include in packets sent towards the endpoint."}
{"id": "q-en-quicwg-base-drafts-c6f64b7c635501c9bf59804800baf9e650908e76980eb2f9bfed2ebb71b71b70", "old_text": "associated with an existing connection, or - for servers - potentially create a new connection. Hosts try to associate a packet with an existing connection. If the packet has a non-zero-length Destination Connection ID corresponding to an existing connection, QUIC processes that packet accordingly. Note that more than one connection ID can be associated with a connection; see connection-id. If the Destination Connection ID is zero length and the packet matches the local address and port of a connection where the host used zero-length connection IDs, QUIC processes the packet as part of that connection.", "comments": "Some definitional work for terms that were poorly used. Note that \"host\" in the remaining uses is strictly interpreted in the RFC 1122 sense, so I left those alone. I considered sorting the terms section differently, and might still alphabetize that, but this isn't problematic just yet.\nThe word \"host\" is undefined. If it is equal to \"endpoint\", it should be replaced. Otherwise, \"host\" should\" be defined in Sec 1.2. The word \"address\" sometime means \"IP address\" and \"a pair of IP address and port\" in other cases. Sec 1.2 should define \"address\" as \"a pair of IP address and port\". And \"IP\" should be added in front of \"address\" which means \"IP address\".\nfixed this.\nThank you!", "new_text": "associated with an existing connection, or - for servers - potentially create a new connection. Endpoints try to associate a packet with an existing connection. If the packet has a non-zero-length Destination Connection ID corresponding to an existing connection, QUIC processes that packet accordingly. Note that more than one connection ID can be associated with a connection; see connection-id. 
If the Destination Connection ID is zero length and the packet matches the local address and port of a connection where the endpoint used zero-length connection IDs, QUIC processes the packet as part of that connection."}
{"id": "q-en-quicwg-base-drafts-85eba4bf402148f43b0e26aebd0ce29110529e8f27faecf7f3e1c772dde33a87", "old_text": "perform this process as part of packet processing, but this creates a timing signal that can be used by an attacker to learn when key updates happen and thus the value of the Key Phase bit in certain packets. Endpoints SHOULD instead defer the creation of the next set of receive packet protection keys until some time after a key update completes, up to three times the PTO; see old-keys-recv. Once generated, the next set of packet protection keys SHOULD be", "comments": "As concluded in Singapore.\nWhen receiving a packet with an unexpected key phase, an implementation should compute the new keys: This creates a timing side-channel, since an attacker can modify the KEYPHASE bit, thereby making a peer compute the next key generation. The packet will be discarded, since the header is protected by the AEAD, but the fact that the key is computed on receipt of that packet is a side-channel that might be used to recover the header protection key. The way to avoid this side-channel is to pre-compute the N+1 key when rolling over keys from key phase N-1 to N. An implementation would then use the N+1 keys when receiving a packet with an unexpected KEYPHASE, in exactly the same way it would do decryption on a packet sent at key phase N. This however contradicts the text that claims that an implementation may choose to only keep two read keys at any given moment: Directly after a key update, an implementation would have to store (at least) 3 key generations: N-1 for a duration of 3 PTO, in order to be able to decrypt delayed / reordered packets, N, as well as the precomputed N+1 to avoid the timing side-channel described above. 
I don't think that storing 3 keys is a problem (I think NAME disagrees, I'll let him explain his reasoning himself), but the spec shouldn't pretend that it's possible to create an implementation that both isn't vulnerable to the timing side-channel and at the same time gracefully handles moderate amounts of reordering around a key update.\nI think you are correct in pointing out that there is a timing side-channel, and that we need to fix the issue. OTOH, I think that requiring all endpoints to be capable of retaining 3 keys is not a good way of fixing the issue, for the following reasons. Firstly, I do not think we can justify why an endpoint needs to be capable of retaining that many keys, given that Key Update is expected to happen very rarely. The purpose of Key Update is to replace a key after extensive use; we would never be using a key that becomes weak just after a couple of PTOs. Considering that, using 2 keys should be sufficient; 3 keys is likely a sign of a needlessly complex design. Let's compare the proposal of the 3-key design with a 2-key design (that allows the receiver to delay the installation of the next key for 3 PTO after a Key Update, which is ). In the 3-key design, the next key is installed when a Key Update happens. The old key is disposed of 3 PTO after a Key Update, and the slot becomes NULL. Note also that an endpoint needs to retain a mapping between packet number and key phase to determine whether an incoming packet is to be decrypted using the oldest key or the newest key (those two keys share the same KEY_PHASE bit on the wire). In the 2-key design, installation of the next key and the disposal of the old key happen at the same moment (i.e. 3 PTO after a Key Update). An endpoint would always retain 2 keys once the handshake is confirmed. There is no need for retaining a mapping between packet numbers and key phases. The value of the key phase bit directly maps to the decryption key. To me it seems that the latter is simpler. 
As stated above, there is no need for doing Key Update every 3 PTO, and therefore, assuming that we need to make a change due to the attack vector, I think we should simply allow endpoints to delay the installation of the next key rather than increasing the minimum state that an endpoint needs to be capable of maintaining.\nI agree, there's not really a need to do key updates that frequently. However, I like the fact that we're currently defining a clear rule: An endpoint is allowed to update to N+1 as soon as it has received an acknowledgement for a packet sent at key phase N. You should still keep track of the packet number at which the key update happened, in order to check that the peer is not reusing the old key to encrypt packets after it already switched to new keys. Strictly speaking, this is not required for a working implementation, but it's a really easy check to catch a misbehaving peer (or detect an active attack). As I've mentioned in the discussion in , the reason I'm skeptical about delaying key updates by 3 PTO (and effectively dropping all packets with a higher key phase until this period is over) is that we're replacing a very clear definition with an approximate rule: the PTO is calculated locally, and depending on the network conditions, the two endpoints might have different opinions about the exact value of the PTO. An endpoint that wants to ensure that no packets are dropped due to the unavailability of keys would have no choice but to wait >> 3 PTO. While this should be well within the safety limits of the cipher suites we're using, not having a clear criterion seems to be a sign of suboptimal protocol design.\nPerhaps we should state that an endpoint should precompute the key early but not update keys too frequently to prevent timing attacks and overhead. That is not necessarily a bad thing, but it is unfortunate that there is no clear definition. We could also say the idle timeout period. 
As long as we compute keys using SHA256 it might not be a concern, but computing keys too frequently could exhaust safety parameters on other potentially faster algorithms.\nTo clarify, the key phase bit is inside header protection, so the attacker in question is the peer?\nNAME No, this is an on-path attack. Header protection is not authenticated, so an attacker can just flip bits in the header. Decrypting the packet will fail then, since the header is part of the authenticated data used in the AEAD. We have a timing side-channel as soon as we treat packets differently based on values in the unprotected header, before authenticating the packet. This is analogous to the .\nThanks for the extra explanation, I'm not sure that's a very good side-channel, but I agree it is one.\nDoes anyone have any ideas what kind of actual attack could take advantage of this? The attacker can only cause you to install your keys early once per key phase change. Assuming you hardly ever change key phase, what then? I agree it's a way the unauthenticated attacker can change your state, but I don't think it's a big enough problem we need to worry about fixing.\nThat's a good question. I agree that this attack vector doesn't look very promising at first sight, but then again I'm always surprised what cryptanalysts can extract from a single bit. That depends. If you create the new keys, try to decrypt the packet, and then discard the keys (if decryption fails), the attacker has an unbounded number of tries. Obviously, this would also be a DoS vector, since computing new keys is more than an order of magnitude more expensive than an AEAD operation (at least in my implementation). If you cache the new keys once they are computed, then the attacker indeed only has a single shot per key phase. However, then you might as well precompute the new keys when installing the current key phase. 
The problem with that is that the spec currently suggests that you only ever need to keep state for 2 key generations at any given moment. This is incompatible with caching the keys for the next key phase, as I described above.\nNAME But the current spec also recommends waiting for 3 PTO before initiating the next Key Update. That would not change, and as we agree, nothing in practice requires us to do Key Updates every 3 PTO. I agree that an endpoint might still want to track the packet number at which the key update happened. But even in such a case, the simplicity of the 2-key design still exists, in the sense that you'd only use it for detecting a suspicious peer. It's much more straightforward and easier to test, because the rare case (of actually using 3 keys at the same time) never happens. To clarify, I am not saying that the 3-key design is incorrect. My argument is that the lack of a clear signal is a non-issue (because there is no need to do a Key Update every 3 PTO or more frequently), and that therefore that argument cannot be used for prohibiting people from choosing the 2-key design (which is simpler).\nIsn't the middle ground here to pre-compute the N+1 key when the N-1 key is dropped, with the drop happening at the first of: 3xPTO after the N key was first successfully used, or the first arrival of a packet which purports to be N+1. The attacker's ability at that point would be to cause any delayed N-1 packets to be dropped, but if we're presuming the ability to bit-flip, they can cause those packets to be invalid and dropped anyway.\nNAME That certainly reduces the information being potentially exposed to the attacker to one bit per every key update. But no less than that; for example, an attacker can inject two packets slightly more frequently than every 3 PTO, with the only difference between the two packets being the value of the KEY_PHASE bit. In such an attack scenario, the attacker can tell which of the two packets caused the calculation of the updated key for every key update. 
Admittedly, the leak is tiny; but I am not sure if it is tiny enough that we can ignore.\nTo recap my comments at the mic: it's not clear to me what the attacker is learning here. Can someone expand on this?\nDiscussed in Montreal: NAME to work with NAME and NAME to work out the details\nI'd like to second NAME comment at the mic. I don\u2019t think this is an issue for which we need text. Without loss of generality, let\u2019s assume that in the worst case an attacker learns the entire unprotected header bytes for every packet. (That is, that header protection does nothing overwhelmingly helpful.) Revisiting the header protection algorithm, it basically works as follows: where H\u2019 and H are the protected and unprotected header bytes, respectively, and sample is known to the adversary. If we assume mask is the output of a PRP, and therefore both IND-CPA secure and KPA secure, an adversary learns nothing from known plaintext and ciphertext pairs. (The typical game-based IND-CPA security definition says that an adversary with access to an oracle for encrypting any message Mi of its choice has negligible probability in selecting b, upon giving the challenger M0 and M1 and receiving an encryption of Mb. Recovering the header protection key would break this assumption, as it would let the adversary encrypt M0 and M1 -- without the oracle -- and trivially match against M_b.) All that said, I think adding text describing that endpoints should precompute keys is fine. After all, it'd be nice if header protection worked as intended.\nNAME and NAME to resolve their issue before we'll take it forward\nDiscussed in Singapore; proposal is to change SHOULD to MAY regarding pregeneration of next phase of keys.", "new_text": "perform this process as part of packet processing, but this creates a timing signal that can be used by an attacker to learn when key updates happen and thus the value of the Key Phase bit in certain packets. 
Endpoints MAY instead defer the creation of the next set of receive packet protection keys until some time after a key update completes, up to three times the PTO; see old-keys-recv. Once generated, the next set of packet protection keys SHOULD be"}
{"id": "q-en-quicwg-base-drafts-1251ac9ee63f8a50f01b3ed969c0b31412ac2fb736eef89a74d7d6dea7f6d44a", "old_text": "Clients MUST discard Retry packets that contain an Original Destination Connection ID field that does not match the Destination Connection ID from its Initial packet. This prevents an off-path attacker from injecting a Retry packet. The client responds to a Retry packet with an Initial packet that includes the provided Retry Token to continue connection", "comments": "The Retry packet can be discarded.\nThe Retry packet currently allows sending an empty token. The Retry token is defined as an opaque blob where \"indicates that x is variable-length\", which also includes zero-length values. On the wire, an Initial packet with a zero-length token is identical to an Initial without a token. However, there's a difference on the client side, since a client will perform at most a single Retry.\nTagging this editorial. This was clearly intended to be considered non-zero length.", "new_text": "Clients MUST discard Retry packets that contain an Original Destination Connection ID field that does not match the Destination Connection ID from its Initial packet. This prevents an off-path attacker from injecting a Retry packet. A client MUST discard a Retry packet with a zero-length Retry Token field. The client responds to a Retry packet with an Initial packet that includes the provided Retry Token to continue connection"}
{"id": "q-en-quicwg-base-drafts-fe01872b9688dbcbd5182856d820f8bc5229e68ef041b3db14b9d5c13342da50", "old_text": "4.2. This document leaves when and how many bytes to advertise in a MAX_STREAM_DATA or MAX_DATA frame to implementations, but offers a few considerations. These frames contribute to connection overhead. Therefore frequently sending frames with small changes is undesirable. At the same time, larger increments to limits are necessary to avoid blocking if updates are less frequent, requiring larger resource commitments at the receiver. Thus there is a trade- off between resource commitment and overhead when determining how large a limit is advertised. A receiver can use an autotuning mechanism to tune the frequency and amount of advertised additional credit based on a round-trip time estimate and the rate at which the receiving application consumes data, similar to common TCP implementations. As an optimization, sending frames related to flow control only when there are other frames to send or when a peer is blocked ensures that flow control doesn't cause extra packets to be sent. If a sender runs out of flow control credit, it will be unable to send new data and is considered blocked. It is generally considered best to not let the sender become blocked. To avoid blocking a sender, and to reasonably account for the possibility of loss, a receiver should send a MAX_DATA or MAX_STREAM_DATA frame at least two round trips before it expects the sender to get blocked. A receiver MUST NOT wait for a STREAM_DATA_BLOCKED or DATA_BLOCKED frame before sending MAX_STREAM_DATA or MAX_DATA, since doing so will mean that a sender will be blocked for at least an entire round trip, and potentially for longer if the peer chooses to not send STREAM_DATA_BLOCKED or DATA_BLOCKED frames. 
4.3.", "comments": "Addresses feedback from NAME and NAME to add a reference to the recovery doc on handling bursts resulting from flow control credit; that advice on when to send *_DATA frames (at least 2 RTTs before sender is blocked) is too specific. I've also done some editorializing to fix flow.\nIs this the right place to discuss pacing? In my mind, we\u2019re talking about different layers here. Bursty sending could happen due to many reasons, only one of them is due to increased flow control credit.\nYeah I was wondering that too. We do talk about application limited in the recovery doc, so maybe that's the place to clarify this. I'll see (after the weekend) if it makes sense to add anything there.\nI think we should discuss what to do in one place, but given the sort of traffic bursts that can arise if you happen to get this wrong, I am still keen to see a couple of sentences here that explain the pathology when a sender has cwnd and no credit, and then suddenly get lots of credit. We see the enormous burst, and it is REALLY not nice if you don't have a pacer to fix it (or figure out the receiver should have sent the credit earlier).\nNAME in the recovery draft covers the general case of an under-utilized cwnd, which applies here. The same bursting can happen with a bursty app as well, but we've said it all in one place -- in that section of the recovery draft. I think that section could use some editing work (which I'll take on separately), but we've already said there that it applies to flow control. I think that ought to cover the pacing concern.", "new_text": "4.2. Implementations decide when and how much credit to advertise in MAX_STREAM_DATA and MAX_DATA frames, but this section offers a few considerations. To avoid blocking a sender, a receiver can send a MAX_STREAM_DATA or MAX_DATA frame multiple times within a round trip or send it early enough to allow for recovery from loss of the frame. Control frames contribute to connection overhead. 
Therefore, frequently sending MAX_STREAM_DATA and MAX_DATA frames with small changes is undesirable. On the other hand, if updates are less frequent, larger increments to limits are necessary to avoid blocking a sender, requiring larger resource commitments at the receiver. There is a trade-off between resource commitment and overhead when determining how large a limit is advertised. A receiver can use an autotuning mechanism to tune the frequency and amount of advertised additional credit based on a round-trip time estimate and the rate at which the receiving application consumes data, similar to common TCP implementations. As an optimization, an endpoint could send frames related to flow control only when there are other frames to send or when a peer is blocked, ensuring that flow control does not cause extra packets to be sent. A blocked sender is not required to send STREAM_DATA_BLOCKED or DATA_BLOCKED frames. Therefore, a receiver MUST NOT wait for a STREAM_DATA_BLOCKED or DATA_BLOCKED frame before sending a MAX_STREAM_DATA or MAX_DATA frame; doing so could result in the sender being blocked for the rest of the connection. Even if the sender sends these frames, waiting for them will result in the sender being blocked for at least an entire round trip. When a sender receives credit after being blocked, it might be able to send a large amount of data in response, resulting in short-term congestion; see Section 6.9 in QUIC-RECOVERY for a discussion of how a sender can avoid this congestion. 4.3."}
{"id": "q-en-quicwg-base-drafts-1d6476327ccca9b30d0e2da7ea868c569d10bbece10f89cbda4d48c9db59b3fe", "old_text": "acknowledgment (see sending-acknowledgements). A packet that does not increase the largest received packet number for its packet number space by exactly one. A packet can arrive out of order if it is delayed or if earlier packets are lost or delayed. An entity that can participate in a QUIC connection by generating, receiving, and processing QUIC packets. There are only two types", "comments": "Fixes issue\nSect. 7 in transport refers to packet number spaces without any reference to what that might mean. These are only introduced later in sect. 12.3. Also, the text talks about crypto sequence numbers and is not very clear that these are different from packet numbers if you haven't been through the TLS text.\nThis is still an issue. I think adding references to where packet number spaces are mentioned (List of Common Terms and Section 7) would be sufficient.\nSounds great, would you be willing to write a PR NAME\nFixed in .", "new_text": "acknowledgment (see sending-acknowledgements). A packet that does not increase the largest received packet number for its packet number space (packet-numbers) by exactly one. A packet can arrive out of order if it is delayed or if earlier packets are lost or delayed. An entity that can participate in a QUIC connection by generating, receiving, and processing QUIC packets. There are only two types"}
{"id": "q-en-quicwg-base-drafts-1d6476327ccca9b30d0e2da7ea868c569d10bbece10f89cbda4d48c9db59b3fe", "old_text": "An endpoint can verify support for Explicit Congestion Notification (ECN) in the first packets it sends, as described in ecn-validation. The CRYPTO frame can be sent in different packet number spaces. The sequence numbers used by CRYPTO frames to ensure ordered delivery of cryptographic handshake data start from zero in each packet number space. Endpoints MUST explicitly negotiate an application protocol. This avoids situations where there is a disagreement about the protocol", "comments": "Fixes issue\nSect. 7 in transport refers to packet number spaces without any reference to what that might mean. These are only introduced later in sect. 12.3. Also, the text talks about crypto sequence numbers and is not very clear that these are different from packet numbers if you haven't been through the TLS text.\nThis is still an issue. I think adding references to where packet number spaces are mentioned (List of Common Terms and Section 7) would be sufficient.\nSounds great, would you be willing to write a PR NAME\nFixed in .", "new_text": "An endpoint can verify support for Explicit Congestion Notification (ECN) in the first packets it sends, as described in ecn-validation. The CRYPTO frame can be sent in different packet number spaces (packet-numbers). The sequence numbers used by CRYPTO frames to ensure ordered delivery of cryptographic handshake data start from zero in each packet number space. Endpoints MUST explicitly negotiate an application protocol. This avoids situations where there is a disagreement about the protocol"}
{"id": "q-en-quicwg-base-drafts-55f9e61f1b658627a09f916d188ba3108139c612189eb0e554aaf390b0ebd2cc", "old_text": "The Frame Type field uses a variable length integer encoding (see integer-encoding) with one exception. To ensure simple and efficient implementations of frame parsing, a frame type MUST use the shortest possible encoding. Though a two-, four- or eight-byte encoding of the frame types defined in this document is possible, the Frame Type field for these frames is encoded on a single byte. For instance, though 0x4001 is a legitimate two-byte encoding for a variable-length integer with a value of 1, PING frames are always encoded as a single byte with the value 0x01. An endpoint MAY treat the receipt of a frame type that uses a longer encoding than necessary as a connection error of type PROTOCOL_VIOLATION. 13.", "comments": "Fixes issue\nThanks for this. It's a lot clearer this way.\nrestricts frame type varint encoding: The relevant sentence is: This means that the restriction may not apply to frames defined by QUIC extensions (such as the frame in the Delayed Acks extension). Must each extension spell out its own frame type encoding rules? It would be easier to change the transport draft to extend the minimal encoding restriction to not-as-yet-specified frames as well.\nI read the first sentence as applying to all QUIC frame types, core or extensions. The examples confuse the matter by explicitly mentioning the core frames. Section 19.21 contains some guidance for extension frames, adding a pointer back to the encoding rules from would be a good editorial change.\nThe sentence I highlighted talks about frames in this document and then these frames, effectively overriding what the first sentence may imply.\nyes I think we agree :)\nI think we all agree that it can be made more explicit. The first sentence says that the rule for all frame types is that they be minimally encoded.
Applying that principle to all frame types defined in this document means that they get one-byte encodings. We should clarify that it's an application of the rule, not a limitation of it.\nWhy did we have this rule in the first place? It just seems to complicate things in unexpected ways.\nNAME See . IIRC the rationale for using minimal varint encoding is that stacks that support only the standardized frames can live with a simpler dispatch scheme that takes one byte as an input (regardless of the dispatch method being a table or a switch clause or whatever). While I do not have a strong opinion regarding that requirement, I'd prefer not revisiting the discussion.\nPR has been merged -- closing bug.", "new_text": "The Frame Type field uses a variable length integer encoding (see integer-encoding) with one exception. To ensure simple and efficient implementations of frame parsing, a frame type MUST use the shortest possible encoding. For frame types defined in this document, this means a single-byte encoding, even though it is possible to encode these values as a two-, four- or eight-byte variable length integer. For instance, though 0x4001 is a legitimate two-byte encoding for a variable-length integer with a value of 1, PING frames are always encoded as a single byte with the value 0x01. This rule applies to all current and future QUIC frame types. An endpoint MAY treat the receipt of a frame type that uses a longer encoding than necessary as a connection error of type PROTOCOL_VIOLATION. 13."}
{"id": "q-en-quicwg-base-drafts-5aaf8ef0e3c9650ebb5288b824c80fc676b2a66b50e1d703e0f8d8bff0151f54", "old_text": "server, can cause a situation in which the server cannot send when the client has no data to send and the anti-amplification limit is reached. In order to avoid this causing a handshake deadlock, clients SHOULD send a packet upon a probe timeout, as described in QUIC-RECOVERY. If the client has no data to retransmit and does not have Handshake keys, it SHOULD send an Initial packet in a UDP datagram of at least 1200 bytes. If the client has Handshake keys, it SHOULD send a Handshake packet. A server might wish to validate the client address before starting the cryptographic handshake. QUIC uses a token in the Initial packet", "comments": "These are all MUST in recovery and the editors discussed this and agreed that these should be MUST too.", "new_text": "server, can cause a situation in which the server cannot send when the client has no data to send and the anti-amplification limit is reached. In order to avoid this causing a handshake deadlock, clients MUST send a packet upon a probe timeout, as described in QUIC-RECOVERY. If the client has no data to retransmit and does not have Handshake keys, it MUST send an Initial packet in a UDP datagram of at least 1200 bytes. If the client has Handshake keys, it SHOULD send a Handshake packet. A server might wish to validate the client address before starting the cryptographic handshake. QUIC uses a token in the Initial packet"}
{"id": "q-en-quicwg-base-drafts-132fd33bc0b50d111bbf738417a488c7ad4d0ed260620820b69d62752137d010", "old_text": "the server is not overly constrained by the amplification restriction. Packet loss, in particular loss of a Handshake packet from the server, can cause a situation in which the server cannot send when the client has no data to send and the anti-amplification limit is reached. In order to avoid this causing a handshake deadlock, clients MUST send a packet upon a probe timeout, as described in QUIC-RECOVERY. If the client has no data to retransmit and does not have Handshake keys, it MUST send an Initial packet in a UDP datagram of at least 1200 bytes. If the client has Handshake keys, it SHOULD send a Handshake packet. A server might wish to validate the client address before starting the cryptographic handshake. QUIC uses a token in the Initial packet", "comments": "Clarifies why a client might send a Handshake packet, as well as improving the surrounding anti-deadlock text. Changes 2 SHOULDs to MUSTs to line up with recovery.\nWe don't provide examples of the weird cases where you need to send Handshake packets to prevent amplification deadlock. The strangest one is where the client has all Initial data (and therefore has Handshake keys), but never receives a Handshake packet from the server. In this case, the client needs to send a Handshake packet that won't contain ACK or CRYPTO frames. We should explain this in a note. See .\nI think the logic in the recovery draft misses some of these cases entirely. The rule in draft 23 is: but in your example case there is not necessarily any unacknowledged data to transmit in a probe, and a is forbidden at Handshake level, leaving no reasonable options to compose an ACK-eliciting packet.\na few suggestions", "new_text": "the server is not overly constrained by the amplification restriction. 
Loss of an Initial or Handshake packet from the server can cause a deadlock if the client does not send additional Initial or Handshake packets. A deadlock could occur when the server reaches its anti- amplification limit and the client has received acknowledgements for all the data it has sent. In this case, when the client has no reason to send additional packets, the server will be unable to send more data because it has not validated the client's address. To prevent this deadlock, clients MUST send a packet on a probe timeout (PTO, see Section 5.3 of QUIC-RECOVERY). Specifically, the client MUST send an Initial packet in a UDP datagram of at least 1200 bytes if it does not have Handshake keys, and otherwise send a Handshake packet. A server might wish to validate the client address before starting the cryptographic handshake. QUIC uses a token in the Initial packet"}
{"id": "q-en-quicwg-base-drafts-3c0c266b6fc6c0226cbe82a04f321499f865b00d68ed0e939c16b5bfca5d59b8", "old_text": "6.6. During the handshake, some packet protection keys might not be available when a packet arrives. In particular, Handshake and 0-RTT packets cannot be processed until the Initial packets arrive, and 1-RTT packets cannot be processed until the handshake completes. Endpoints MAY ignore the loss of Handshake, 0-RTT, and 1-RTT packets that might arrive before the peer has packet protection keys to process those packets. 6.7.", "comments": "Currently, the \"Ignoring Loss of Undecryptable Packets\" section indicates that some loss can be ignored, but now that we have multiple packet number spaces, I think we can constrain the set of ignorable losses to those below the smallest acknowledged packet in a packet number space. Ignoring the loss of a subsequently sent packet was not the intent.\nI might have to put in a lint for HTAB...", "new_text": "6.6. During the handshake, some packet protection keys might not be available when a packet arrives and the receiver can choose to drop the packet. In particular, Handshake and 0-RTT packets cannot be processed until the Initial packets arrive and 1-RTT packets cannot be processed until the handshake completes. Endpoints MAY ignore the loss of Handshake, 0-RTT, and 1-RTT packets that might have arrived before the peer had packet protection keys to process those packets. Endpoints MUST NOT ignore the loss of packets that were sent after the earliest acknowledged packet in a given packet number space. 6.7."}
{"id": "q-en-quicwg-base-drafts-5de3e347aa6edd3fd881772788efcd678919dbd918c711ff0fd84694c05e3d6d", "old_text": "layer. By providing reliability at the stream level and congestion control across the entire connection, it has the capability to improve the performance of HTTP compared to a TCP mapping. QUIC also incorporates TLS 1.3 at the transport layer, offering comparable security to running TLS over TCP, with the improved connection setup latency of TCP Fast Open RFC7413. This document defines a mapping of HTTP semantics over the QUIC transport protocol, drawing heavily on the design of HTTP/2. While", "comments": "It might pay to reconcile: {{vars-of-interest}}) to be larger than the congestion window, unless the packet is sent on a PTO timer expiration; see {{pto}}. And now exceeds the recently reduced congestion window. Now, you might implement the latter using PTO machinery, but that text doesn't really suggest that in any way.\nWhere does this second sentence come from? I'm pretty sure I haven't implemented anything like this in quic-go. Should I?\nThat sentence is the definition of the . We haven't implemented that either, and I was wondering if it truly helps.\nFor TCP this second case is important because as long as you don't retransmit the lost packet you will not get any ACK that acknowledges new data. I guess we don't need that for QUIC.\nThe second sentence was added later in , and the MUST wasn't updated, so I'll fix that. Chromium currently implements PRR (RFC6937), which is a more complex variant of this which is designed to keep the ACK clock going. I thought we discussed adding an informative reference to PRR when we added this text, and I'm not sure why we didn't.\nseems to be the wrong issue you are pointed to...\nGood catch, it's\nI've suggested some symbolic references.", "new_text": "layer. 
By providing reliability at the stream level and congestion control across the entire connection, it has the capability to improve the performance of HTTP compared to a TCP mapping. QUIC also incorporates TLS 1.3 TLS13 at the transport layer, offering comparable security to running TLS over TCP, with the improved connection setup latency of TCP Fast Open TFO. This document defines a mapping of HTTP semantics over the QUIC transport protocol, drawing heavily on the design of HTTP/2. While"}
{"id": "q-en-quicwg-base-drafts-cc3a1d4dc0306eddfb0e87da067fb7bec869d01bd3fce90dd5df92fe402b7413", "old_text": "unblock the server until it is certain that the server has finished its address validation (see Section 8 of QUIC-TRANSPORT). That is, the client MUST set the probe timer if the client has not received an acknowledgement for one of its Handshake or 1-RTT packets. Prior to handshake completion, when few to none RTT samples have been generated, it is possible that the probe timer expiration is due to", "comments": "Having non-overlapping conditions makes sense; adjusted wording.\nThe recovery draft : This is consistent with the pseudocode, in particular . When the server receives the client's final handshake flight, it discards handshake keys and hence cannot acknowledge it. When the client receives the resulting , the handshake is confirmed, but no Handshake or 1-RTT acknowledgements have necessarily been received. This results in a PTO being set even though the client's address has certainly been verified and there are no ack-eliciting packets in flight. As written, the pseudocode would in fact set a PTO for the Initial space at a garbage time. I think this could be corrected by appending \"and has not discarded handshake keys\" or \"and has not confirmed the handshake\" to the language, and adding the analogous clause to the conditions checked in the final statement of .\nI thought that was implied, but implications are bad, so I agree it's best to explicitly state it. I'm happy to review a PR if you have time to write one, or I'll fix this fairly soon ideally.\nWhen a reverse proxy forwards an HTTP response to the client, sometimes it sees the upstream server terminate the response early. For example, an HTTP/1.1 connection being closed prior to transmitting the number of bytes specified by the field. Or a chunked response being closed abruptly. The only way of delegating such a condition to the client in HTTP/2 was to send an RST_STREAM frame with . 
However, use of the error code is awkward in this case, considering the fact that the error did not happen within the communication between the endpoints using the particular HTTP/2 connection. I think it makes sense to have an error code that better designates the case. One way to address the issue in H3 would be to change to , and let the server use the error code (as an argument of the RESET_STREAM frame) to indicate that the response has been \"cancelled\". Originally proposed in URL\nWhat if you do not want to admit to being a proxy? (who does?)\nHTTP_MESSAGE_CANCELLED would simply mean that the transfer of the response is cancelled, regardless of the server being an origin or a reverse proxy.\nI can get behind this. As long as we're not looking for a byte-by-byte reproduction of the bytes the intermediary received (as was once requested and rejected, I think at the Seattle interim), then it makes sense to signal that the message was killed.\nThis is basically an expansion of the CANCELLED/REJECTED codes to allow a server to do either. WFM.\nNAME Yes. And thank you for the PR.", "new_text": "unblock the server until it is certain that the server has finished its address validation (see Section 8 of QUIC-TRANSPORT). That is, the client MUST set the probe timer if the client has not received an acknowledgement for one of its Handshake or 1-RTT packets, and has not received a HANDSHAKE_DONE frame. Prior to handshake completion, when few to none RTT samples have been generated, it is possible that the probe timer expiration is due to
{"id": "q-en-quicwg-base-drafts-4d9d2b9ee9f73e0596362f1b17266acedfb651d6d082e94275ff2757cdf71832", "old_text": "An ACK frame SHOULD be generated for at least every second ack- eliciting packet. This recommendation is in keeping with standard practice for TCP RFC5681. In order to assist loss detection at the sender, an endpoint SHOULD send an ACK frame immediately on receiving an ack-eliciting packet", "comments": "Adds editorial text about why a receiver could decide to send ACKs less frequently than every two packets. This keeps coming up (ie: ), so I thought text was necessary.\nAs currently specified, the default QUIC ACK policy models TCP\u2019s ACK policy, but consumes significantly more return path capacity. This can impact resource usage of the return channel, can constrain forward throughput, and can impact the performance of other sessions sharing an uplink (draft-fairhurst-quic-ack-scaling). This was not discussed in Zurich - where discussion moved instead to the separate topic of draft-iyengar-quic-delayed-ack. In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymmetry are presently mitigated by network devices RFC3449 that rely upon observing TCP headers. This results in ossification problems, but does practically have benefit. It reduces the TCP ACK rate over the constrained link, it reduces consumption of \"radio\" transmission opportunities, and reduces traffic volume from ACKs. These mitigations are not possible, nor desirable for QUIC. However, the default performance of QUIC is therefore much worse than for TCP. In QUIC, the default ACK Ratio has to be set by the endpoints implementing the protocol because the sender and receiver have no information about the \"cost\" of sending an ACK using a link along an asymmetric path. These endpoints can also choose a non-default value to enable a new CC, scale to higher rates, etc (e.g. draft-iyengar-quic-delayed-ack). 
We have shown there can be significant benefit from an ACK Ratio of 1:10 and this is expected to also have benefit for higher levels of asymmetry. The proposed change from 1:2 to 1:10 is predicated on QUIC's design decisions: starting a connection with an Initial Window of 10 is acceptable, QUIC uses PTO as the loss detection method (a predicate to issue ), and QUIC has some form of pacing at the sender, at least to mitigate IW bursts.\nEven though we've avoided straying this far from the TCP spec, I think allowing this would be good for the Internet. It should probably be documented for TCP, too. How does ack stretching of this nature work for BBR bandwidth estimation? It's harder to use the ack pacing, no?\nI don't believe QUIC ACKs are significantly larger. In fact, I believe the ACK frame itself is typically smaller than TCP ACKs, but the savings there are used on QUIC's 16-byte auth hash and likely a non-zero-length CID. Do you have a size analysis written up somewhere? I think this is the core issue. It's not that QUIC ACKs are really consuming significantly more capacity in terms of bytes on the wire, it's that middleboxes can't decide to thin the ACKs, because they don't know which packets are ACK-only packets. Did you implement the optimization already described in the transport draft?: \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" NAME did and I believe found it hugely reduced the number of ACKs, sometimes too much. As I've said before, I would really prefer not to specify an approach in QUIC transport that is a heuristic like does. In particular, on lower bandwidth networks, 10 packets may be the entire congestion window and is too large a value. On higher bandwidth networks, every 20 packets has been shown to be slightly better than 10, at least when using BBR. 
I wrote to add some editorial text about why every 2 packets is a SHOULD, not a MUST. Chrome did not default enable the ACK strategy recommends until BBR was launched, because there was a performance regression with Cubic. I'd rather not assume everyone is using BBR or a BBR-like congestion controller. NAME BBR does fine with stretch ACKs. Cubic and Reno suffer in some cases, however.\nQUIC ACKs are larger! The smallest IPv4 QUIC ACK packet is 51 bytes. The ACK frame is 4 bytes, but is sent using a QUIC packet. For IPv4 this is: 20 (IP) + 8 (UDP) + 3 (1+1+1 QUIC header ... assuming a 1 B CID) + 4 (minimum ACK frame size) + 16 (crypto). Other types of frames may also be sent in the same packet, increasing this size. That size difference matters. The point is that QUIC ACKs are consuming more capacity in terms of bytes on the wire, and often also important: in more transmission opportunities at the physical layer. That makes QUIC perform more poorly than it needs to across some paths. The text \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" is permissive to do something different - alas the effect of ordering of the processing depends on rate, timing, etc - The present text is not a recommendation to do something appropriate, which I think it needs to be. If this were to describe that a receiver SHOULD reduce the ACK to data packet ratio to 1:10, then that would help. On low bandwidth links, then I agree that waiting for 10 packets would not be suitable, but then wouldn't the rule of 4 ACKs/RTT still cover this case? (I have no problems believing an ACK every 20 packets can be slightly better than 10 in some cases, but then we have chosen IW at 10 and that relates to burstiness and responsiveness of the basic spec, so I think 1:10 makes more sense as the recommendation. 
If the CC and path information suggest 1:20 - perhaps as the rate increases, then a sender can \"tune\" as per the transport parameter proposal. But I STILL assert that this is a different issue from the expected default.)\nWould it help to give a recommendation like, if the congestion window is large, the ACK ratio should be reduced? Not sure if we can easily give a threshold for the window but to align with the 4 ACKs per RTT rule we could say 40 or to be more conservative maybe 100?\nIf the receiver knew the cwnd... we could add that. The current proposal we made is here (along with some of the arguments about why QUIC and TCP are slightly different): draft-fairhurst-quic-ack-scaling\nSo the min TCP ack is 40 bytes vs QUIC at 50 (0-byte CIDs are common, so I'll assume that), 25% larger in the minimum case. However, with a few SACK blocks, TCP ACKs are likely to be larger than QUIC ACKs, since SACK consumes 10 bytes for the first block and 8 for each additional block. By the time the second block is added, there's a good chance QUIC ACKs are smaller and it's almost certainly true with 3 SACK blocks (which I think is a typical number). So I'll go back to my previous statement that the real issue is that the network can't thin QUIC ACKs effectively.\nMay be \"yes\", may be \"no\" on a larger size after loss - since QUIC ACK ranges are often larger - but that depends on loss pattern, etc. In our case, we saw more bytes. But \"yes\" on me saying there is a need for QUIC to make good default decisions about the ACK rate, because QUIC becomes responsible for transport working across heterogeneous paths and we need to get good defaults rather than rely on \"ACK-Thinning\" and other network modification.\nDoes anybody have a gut feeling for how very low latency congestion control fares with sparse ACKs? I have experienced that DCTCP is more picky with ACK aggregation than e.g. Reno. I suspect that ACK-thinning can give the same issue as well. 
A too sparse ACK spacing can also give problems. Another aspect is how e.g. TCP Prague with paced chirps works with sparse ACKs? To me the RTT/4 rule makes sense but I feel that the best alternative depends a lot on both the congestion control and the source material. Large file transfers are more trivial than sources that are occasionally idle.\nI think this isn't the correct question. This is about the default policy. The default policy is about what a receiver does by \"default\" - the current text uses a value of ACK every other packet, 1:2. TCP specified this when transmission rates were low, and TCP depended on individual ACKs to clock data (no pacing, IW=2), detect loss (no PTO), increase cwnd (per ACK packet), etc. As a result, TCP causes ACK congestion and strange interactions with radio access. Performance over WiFi, DOCSIS etc suffers unless the network tries to address this. I don't see why QUIC, which doesn't rely on any of these, needs to set a DEFAULT of 1:2. This is literally a few lines of code difference, and we could use 1:10. These CCs you mention may have different needs and that needs to be handled by telling the remote that your CC has these specific needs - which I think is what Jana's extension is about.\nI am kind of the donkey in the middle... ACK thinning in networks should be discouraged. Standardization of ACK thinning mechanisms was proposed in 3GPP RAN2; this request was turned down mainly with reference to the fact that it can harm transport protocol performance and does not work safely with QUIC. Nothing stops UE (terminal and modem) vendors from implementing proprietary ACK-thinning but I certainly hope that 5G UE vendors don't attempt to thin QUIC-ACKs as this is a slippery slope. High ACK rates are a 5G problem mainly for high band NR (=5G at e.g. 28GHz). It is mainly a coverage issue as UEs have a limited output power due to practical and regulatory requirements. 
At the same time one needs to recognize that end users don't only download movies today and even VoD streaming implies uplink traffic in the form of chunk requests. And then we have an increasing number of people doing uplink streaming or uploading an infinite amount of pictures of food plates and cats :-) Operators and 5G vendors have to deal with this and solve the problem with deployment of 5G access in various frequency bands to get the best downlink and uplink performance. Reduced ACK rates are of course welcome but I don't currently see a problem that the initial ACK ratio is 1:2, to increase to 1:? as the congestion window grows, bounded by e.g. an RTT/4 rule. But here I need to admit that at least I have not studied this in detail. Ergo, I am not convinced that it is necessary to hardwire the QUIC spec to 1:10, but I also cannot say that it should not be done.\nThis is a small enough change. However, there is still a bar for those, and the current level of discussion makes me believe we should close this with no action?\nJust so that it's easy to find, there is data and a continued discussion at URL\nFolks who care about a change here need to prioritize progressing this issue. We're close to being able to last-call the specs, and would like this resolved in time.\nWe've completed a round of updated analysis of the proposed change to the QUIC spec, using quicly (changing to a 1:10 policy), and chromium (which currently does 1:10 by default). Our hope is that this change to the spec will actually make standard end-to-end QUIC work better than TCP over asymmetric paths! This is a short slide deck about the performance of the proposed change: URL There have been a number of other changes made by the QUIC ID editors relating to ACK processing, so we wanted to be sure these results were up to date. (These results do not consider additional ACKs after re-ordering - reflecting the latest text removing that.) 
We also really hope this will motivate a change to the practice of forcing QUIC to fall back to TCP (with PEP/Thinning) over highly asymmetric paths. People may also be interested in work for L4S to evaluate the impact of ACK Thinning for TCP-Prague. Please do provide feedback, or any questions on this - as Lars notes, we think the time is running out for considering new changes, even relatively simple ones like this. Gorry P.S. First email: URL\nI believe that it is beneficial to use a 1:10 policy. I understand that this means that a 1:2 policy will still be used in the initial phase (flow start), or do the timer mechanisms kick in to avoid that ACKs are transmitted too sparsely in this case?\nYes, our proposal counted ACK-eliciting packets at the receiver (because the receiver does not in general know the congestion state, so you need a heuristic), and we finally chose the same method in current QUIC Chromium - i.e. for the first 100 packets we do not use ACK 1:10, changing only after 100 packets.\nOK, it sounds like a good idea to use 1:10 after 100 packets or so. Just one potentially stupid question: is there any special treatment necessary after an RTO?\n[This seems to be a forwarded comment from NAME Hi, Gorry Thank you for sharing your experiments. I'm delighted to see results showing that use of ack10 gives us comparable throughput to ack2. Recently, I have been looking into how ack-frequency affects the computational efficiency of the server [1]. Using the (almost) same setup, I have today run a benchmark comparing three ACK strategies: ack2, ack10, ack 1/8cwnd. ack2: 334Mbps ack10: 414Mbps ack 1/8cwnd: 487Mbps As can be seen, ack10 is better than ack2, but lags behind ack1/8cwnd. That's because there are still frequent ACKs. They take a considerable amount of CPU cycles. If we want to see QUIC being as lightweight as TLS over TCP, it is critically important to reduce ACK frequency as much as possible. 
This is because the cost of processing ACKs in QUIC is much more expensive than in TCP, due to QUIC being a userspace transport. Based on this, if we are to change what we have in the base drafts, I'd much prefer adopting what we have in the proposed delayed ack extension [2], rather than just changing the default ack policy from ack2 to ack10. By adopting what we have in the delayed ack extension, we can expect better performance, and the endpoints would have the freedom to change the behavior when the default ack policy is deemed insufficient (regardless of the default being ack2 or ack10). [1] URL [2] URL\nThese results/article on computational efficiency are really interesting. Please see below. This is useful; we see a clear benefit here in using a larger ACK Ratio also in terms of server load. We can already see the benefit at 1:10, and GSO helps as expected. We think that this is also really true. QUIC ACKs are also larger than TCP ACKs (see URL). I don't think the two approaches conflict. 
The difference between the two approaches is that the small change to the base specification to use an ACK Ratio 1:10 ensures the default return path is at least as good as TCP for common traffic (even with common TCP ACK Thinning). Without this change, the end user could experience significant differences in performance between TCP and between different QUIC versions over asymmetric paths. Whereas to me, the transport parameter extension looks like an exciting way to enable many more things, but there are also many ways this could usefully evolve based on CC, offload, pacing etc., and I think this will take much more to work out what is needed, safe and best ... than to reach consensus. We still also need to specify a default ACK Ratio in this case as well. Best wishes, Gorry\nI'm greatly in favour of draft-iyengar-quic-delayed-ack-00 (BTW, where would you prefer me to give review comments about that?) I am much less keen on any default delayed ACK policy, whether 1:2 as now in quic-transport, or the 2-10/100 numbers proposed in draft-fairhurst-quic-ack-scaling-02. My concerns would melt away if the default policy was explicitly /implementation-dependent/ and the draft instead gave /guidance/ to implementers. The draft could still give the same 2-10/100 policy, but as /guidance/ for the current public Internet. The draft should also state the conditions under which this guidance was determined (e.g. Internet paths with BDP between x and y, or whatever). What's the difference? Stating a default policy implies this is a matter of interoperability. It's not. If we change to giving guidance, I'm sure most implementers will just use the guidance. However, implementers will be free to evolve their default as they gain experience. This reduces the possibility that other behaviours will evolve around any default like 2-10/100. For instance, network monitoring using it as a signature. Or congestion controls optimizing around it. 
In turn, this will reduce the chance of ossification around the first numbers we thought of. Senders can always use the ACK frequency frame to ask a receiver to change from its default. Then receivers will gradually learn what senders prefer them to do, and change their default accordingly. IMO, a continually improving default is important for short flows, so senders can avoid requesting a different ACK frequency right from the start.\nNeed to admit that I have not had too much time to follow the QUIC work lately. I too believe that draft-iyengar-quic-delayed-ack-00 is a good long term solution that allows endpoint to set a desired ACK frequency that is not more than necessary. As experiments show there are good incentives to keep the ACK frequency as low as possible in terms of client/server load and asymmetric links limitations. An ACK-FREQUENCY frame will allow to find the proper balance between the requirements given by the congestion control, the traffic type and the above given limitations. I am not too concerned if it takes an extra development/standards cycle to have that in place in the standard and in running code as long as the QUIC community shows other SDOs that the challenge with QUIC ACKs is being addressed.\nNAME NAME could you please trim the quoted text when you reply by email?\n(I trimmed the comments, but forwarding inline replies to email messages doesn't work well.)\nGiven that this is an area where there is still obviously experimentation and refinement, NAME comment seems on point; baking a specific algorithm as requirements into the specifications doesn't seem like a good idea (although putting constraints on endpoints might make sense). Can we focus on general guidance rather than documenting a specific algorithm? What would that PR look like?\n+1 to NAME suggestion. 
If we want to make changes for v1, they need to happen now, and they must be in a form that can let us get consensus.\nTo be clear, when I said that having a default policy is not a matter of interoperability, that's only strictly true if QUIC supports something like draft-iyengar-quic-delayed-ack for the sender to ask the receiver to use a certain policy. Bob\nI've proposed more context and text around our current guidance in the draft, see .\nI believe that the text around asymmetric links looks fine; it is not really necessary to write up much more. What can be good to know is that for the 5G case with limited UL (uplink = modem->base station transmission), the pain that a flow with an unnecessarily high ACK rate causes is on the flow itself or other flows sharing the same access.\nThere are some different points of pain here, the default and the need to adapt to the endpoint: An ACK Ratio (AR 1:1) is the finest granularity of feedback. A QUIC AR of 1:1 would consume return path capacity proportional to about 6% of the forward traffic (assuming no loss, etc). This is more than TCP (about 3%). An ACK Ratio (AR 1:2): For a return path bottleneck, QUIC with an AR of 1:2 will still have more impact on sharing flows than TCP would with the same AR 1:2 (because QUIC ACKs are larger, and because networks do not currently thin QUIC ACKs). So QUIC needs to choose the default - is it 1:2, 1:4, 1:8, 1:10? A QUIC AR 1:10 results in an ACK traffic volume of about 1% of the forward traffic. To me, that's a small proportion, which is why I argued that 1:10 is a good recommendation. The default is important when: the endpoints have no knowledge of the asymmetry or the impact they have on a link's layer 2 loading, or the endpoints need to reduce the collateral damage to other IP flows sharing the return path capacity. 
Negotiation: The AR can be driven by the application or endpoint - e.g., a higher rate connection may well have a benefit from an AR of 1:10, 1:20, 1:100, etc - but such ARs are seldom chosen to reduce the impact on sharing a return bottleneck; more likely they are chosen to benefit the performance of an application or endpoint. That consideration is different, and likely depends on the application or endpoint - which to me is where the flexibility offered by a transport parameter is so important. draft-iyengar-quic-delayed-ack will still need guidance on the \"default\". I argued that the default policy impacts the congestion behaviour when there is appreciable asymmetry, because of what this causes at layer 2 or the collateral damage (congestion impact on sharing flows). Sure, the choice of 1% is just a number I said was reasonable, but I've seldom encountered L2 technologies where less than 1% of forward traffic would be significantly better.\nNAME : I understand your position. While I agree that ack thinning happens with TCP, it is not what I expect to find on a common network path. And as far as I know, the endpoints still implement the RFC 5681 recommendation of acking every other packet. What you are arguing is that with ack thinners in the network, TCP's behavior on those network paths is different. However, I do not believe we understand the performance effects of these middleboxes, that is, how they might reduce connection throughput when the asymmetry is not a problem. Importantly, I don't think we should be specifying as default what some middleboxes might be doing without fully understanding the consequences of doing this. Please bear in mind that Chrome's ack policy of 1:10 works with BBR, not with Cubic or Reno. I do not believe there is a magic number at the moment, and as NAME noted, even 1:10 is not a small enough ratio under some conditions. 
Given this, and given that we know 1:2 is used by TCP endpoints, it makes the most sense to specify that as the suggested guidance. The new text proposed in describes the tradeoff involved, and the rationale for using 1:2. I am strongly against doing anything substantially different than what we have in the draft now without overwhelming information supporting a different position.\nAgree with NAME 1:2 seems a reasonable default, and there seems to be considerable interest in draft-iyengar-quic-delayed-ack, which has a better proposal for stacks that care.\nI think the new ACK text is improving the specification, but I'd like to be sure we have thought about this decision. Let me try to carefully respond on-list here, to see where we agree/disagree: I disagree; I think we shouldn't be perpetuating a myth that a sender receives a TCP ACK for every other packet. Even in the 90's, TCP implementations often generated stretch-ACKs for 3 segments. Since then, limited byte counting was introduced and significantly improved the sender's response to stretch ACKs, and that has been good for everyone. Senders still rely on byte-counting in TCP to handle Stretch-ACKs, and this need is not decreasing: new network cards reduce per-packet receive processing using Large Receive Offload (LRO) or Generic Receive Offload (GRO). I am saying many TCP senders actually do today see stretch-ACKs. There are papers that have shown significant presence of stretch ACKs, e.g., (H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 45(3):20–27, 2015.). A TCP Stretch-ACK of 3 or 4 was common in their datasets (about 10% of cases). In some networks, the proportion of Stretch-ACKs will be much much higher, since ACK Thinning is now widely deployed in WiFi drivers as well as cable, satellite and other access technologies - often reducing ACK size/rate by a factor of two. 
I am intrigued and would like to know more, in case I missed something? Typical algorithms I am aware of track a queue at the \"bottleneck\" and then make a decision based upon the contents of the queue. If there is no asymmetry, there isn't normally a queue and the algorithm won't hunt to discard an \"ACK\". But, maybe, there is some truth in being wary: Encrypted flows can also build return path queues (as does QUIC traffic). Without an ability to interpret the transport data (which nobody, including me, wants for QUIC), a bottleneck router is forced to either use another rule to drop packets from the queue (such as the smallest packets) or to allow a queue to grow, limiting forward direction throughput. I would say neither of these outcomes is great because the router is trying to fix the transport's asymmetry. However, if QUIC causes this problem, I suggest some method will proliferate as QUIC traffic increases. I didn't find anything in the QUIC spec that was a major concern. It looked to me like QUIC had learned from TCP how to handle stretch-ACKs. After that, we changed quicly to use 1:10 with Reno, and things worked well. Didn't NAME also use Reno? Sure, the design of TCP recognised that AR 1:1 generated too much traffic; AR 1:2 for TCP resulted in a Forward:Return traffic ratio of ~1.5%, and QUIC increases this to ~3%. However, as network link speeds increased, this proved too low for many network links and so, TCP ACK thinning came into being. This often reduces the TCP AR to around 1:4 ( and will do this without bias, whatever the outcome is.\nNAME Just to clarify, in our experiment, we reduced the ACK frequency from the default (2) to after leaving slow start, then updated it every 4 PTO. I think that would work fine on a high-bandwidth, stable network. But I am not sure if it would work as well as ack2 on all network conditions. Anyways, I'd explain our experiment as an optimization of ack 2, as it was a reduction of ack rate after leaving slow start. 
I think that was a sensible choice, because IIUC having frequent acks during slow start is essential. Assuming that we'd prefer using ack 2 during slow start, the question is if we need to hard-code in the base draft how to optimize after leaving slow start. While I agree that continuing to use ack 2 after slow start is concerning for certain networks, I am reluctant to spend time on looking for such a fixed set of values across diverse networks (including much slower links or lossy ones). Rather, my preference goes to shipping the base-draft as-is, then moving on to finalize the ack-frequency frame. I think we can assume most if not all of the popular clients will support the ack-frequency frame. Then, it becomes less of a question what the default in the base-draft is.\nThanks kazuho, 1/8 or 1/4 cwnd seems reasonable, as does using Ack 2 for the first part of slow start ... that's close to what we proposed. And yes there will anyway be cases where any default does not work. If QUIC decides to retain Ack 2, can we at least capture that Ack 2 with QUIC doesn't have the same return path performance as Ack 2 with TCP - so that people know that less frequent ACKs are something to think about from the start? I'm doing a talk in PANRG this week, and have decided to talk through this topic (I was already going to talk about things related to return paths). That'll give people a view of some data - and a little thought about the implications this has for QUIC, and whether there is an incentive for network devices to thin QUIC to restore performance.\nNAME : I'd be in support of saying that ack thinning happens for TCP and cite those studies, and that in the absence of that, there might be throughput consequences for QUIC. I think we've said exactly that in other words in , but if we want to say more there, that's fine with me. That would make this an editorial issue, but I'm entirely happy to do more words there. 
I am against changing the recommendation here for another arbitrary recommendation. I think the right approach is to allow endpoints to specify this during a connection, which is where the extension is helpful. As NAME notes, our hope is that the extension gets widely deployed. If you want to make the argument for the extension to become part of the base draft, that's a different conversation, but that's not the one we're having here.\nAttempting to summarise the current status of this issue as a chair: there appears to have been a fair amount of discussion about making a design change, with representatives both in favour and against. It appears there is no universally optimal default policy; choosing an optimal one depends on many factors. The editors have landed an editorial change that attempts to better describe the rationale behind QUIC's current design and highlights some of the factors. There may be further editorial improvements but they do not need to block the design issue. The proposal therefore is to close this issue with no design change. We had prepared some slides for the Vancouver IETF to discuss QUIC's ACKs, and how to make QUIC work better than TCP in common asymmetric paths. This is a short slide deck that summarises what we have learned about ACKs in QUIC: URL It compares the ACK traffic of TCP with that of QUIC from the point of view of a path where QUIC's return traffic is somehow constrained or could impact other user traffic. We think that ACKs are a concern for various shared capacity return links (e.g., radio-based methods in WiFi or mobile cellular where ACKs consume spectrum/radio resource or battery; shared capacity satellite broadband return links or the uplink of a cable system). For TCP, such systems commonly involve a PEP or an ACK Filtering mechanism (which clearly isn't wanted or even possible for QUIC). The baseline QUIC performance (as per current spec) suffers over these paths. 
So our first question is: Do other people think it is important that the default specified for QUIC works well with asymmetric paths ? Gorry", "new_text": "An ACK frame SHOULD be generated for at least every second ack- eliciting packet. This recommendation is in keeping with standard practice for TCP RFC5681. A receiver could decide to send an ACK frame less frequently if it has information about how frequently the sender's congestion controller needs feedback, or if the receiver is CPU or bandwidth constrained. In order to assist loss detection at the sender, an endpoint SHOULD send an ACK frame immediately on receiving an ack-eliciting packet"}
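The record above turns on how much return-path capacity ACKs consume at a given ACK Ratio. The thread's minimum IPv4 QUIC ACK packet size (20 IP + 8 UDP + 3 short header + 4 ACK frame + 16 crypto = 51 bytes) makes the arithmetic easy to sketch. The ~6%/~3%/~1% figures quoted in the thread evidently include further per-packet overheads, so this hypothetical helper only illustrates how the overhead scales with the ratio:

```python
def ack_overhead(ack_packet_bytes: int, data_packet_bytes: int,
                 ack_ratio: int) -> float:
    """Reverse-path ACK traffic as a fraction of forward traffic, assuming
    one ACK packet per `ack_ratio` full-size forward data packets."""
    return ack_packet_bytes / (ack_ratio * data_packet_bytes)

# Minimum IPv4 QUIC ACK packet, using the byte counts from the thread:
# 20 (IP) + 8 (UDP) + 3 (short header, 1-byte CID) + 4 (ACK frame) + 16 (AEAD tag)
QUIC_ACK_BYTES = 20 + 8 + 3 + 4 + 16   # = 51
DATA_BYTES = 1500                      # full-size forward data packet

for ratio in (1, 2, 10):
    pct = 100 * ack_overhead(QUIC_ACK_BYTES, DATA_BYTES, ratio)
    print(f"AR 1:{ratio:<2} -> reverse traffic = {pct:.2f}% of forward")
```

Moving from 1:2 to 1:10 cuts the reverse-path byte share by a factor of five under these assumptions, which is the shape of the trade-off the thread debates.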
{"id": "q-en-quicwg-base-drafts-31ebc39f33f5c18c951aca6bcab23bb8428e7bde1a68c57c9915f9f7db90e1a9", "old_text": "An endpoint that receives a NEW_CONNECTION_ID frame with a sequence number smaller than the Retire Prior To field of a previously received NEW_CONNECTION_ID frame MUST immediately send a corresponding RETIRE_CONNECTION_ID frame that retires the newly received connection ID. 19.16.", "comments": "If you drop the first paragraph, this is just an editorial fix as far as I'm concerned. That is, not sending a frame that was previously sent is not a change. And - assuming that you dump it - \"immediately\" has no defined meaning anyway.\nNAME I'm happy to drop the first paragraph, and I'm glad to hear that it's editorial. Two people on our team took the existing text to mean that we had to send RCID after receiving NCID even though we'd previously retired that sequence number. Update coming soon.\nNAME Feel free to change to \"editorial\"\nThis is a small point, but it's pointlessly creating pain in our implementation and possibly some others. In Sec 19.15 (NEWCONNECTIONID frame): IMO this is poorly formulated. If the receiver has already sent an RCID frame for the sequence number and that RCID has been acked, it may have thrown away state for that sequence number and shouldn't have to resend a frame the peer has received. In particular, a buggy or malicious peer might send a very old sequence number, or the NCID receiver might have preemptively retired a sequence number it hadn't seen when it got the retire-prior-to field. Simply appending 'if has not already done so' to the text above would do the trick I think. I'm happy to file a PR unless most people think this is a terrible idea.\nAs a related point, it would be useful to say that an NCID receiver SHOULD NOT immediately retire old CIDs solely as the result of the NCID frame, except when instructed by Retire-Prior-to. 
This can lead to infinite loops if the peer tries to keep a specific number of CIDs available.\nI think this would be a nice clarification. I think we already prohibit such behavior through the use of the active_connection_id_limit TP. Section 5.1.1 states: endpoints store received connection IDs for future use and advertise the number of connection IDs they are willing to store with the active_connection_id_limit transport parameter. To paraphrase, by using the TP, an endpoint willing to retain N active connection IDs prohibits the peer from sending more than N.\nI was probably unclear. For a while, our implementation was doing something dumb: each time it got a new CID, it started using it and retired all previous ones regardless of Retire Prior To. This created infinite loops where the client was trying to make sure that we always had two CIDs. We should probably warn against that. Please see if the text in the PR is clear enough.\nTaking a look at the PR, I'm finding the proposed text to be a bit unclear, too. Based on the discussion on the list, this seems to be a misreading of the text itself. From the list: I don't believe that this is actually what the text says. The text says: This to me reads as \"If you said everything below CID 10 is done, and then you send me CID 8, I must immediately send a RCID frame for 8 since I can't use it\". I don't believe that there is any CID-specific state required at all for this. You do need to store the current highest value that you've ever seen for RPT, which is a single integer, I'll call it . Beyond that, when you get a NCID frame, you check the sequence number. If it's less than , you should immediately enqueue (which I think might be a reasonable wording change to make, since immediately seems to have caused concern when the intent was that you put it on the tracks leaving the station, not that you force the train out the door before it's allowed) a RCID frame for that sequence number. 
Note that a RCID frame contains a single value, the sequence number of the connection ID being retired, which incidentally is contained by the NCID frame you're responding to. We can argue as to whether or not you should bother to send RCID again when the same frame has been acked, but (a) storing that is actually more state and (b) we're working pretty hard to avoid tracking ACKs of packets. I might be missing something here -- can you help me understand the concern about storing state in order to meet the requirements of this text?\nYes, I concede that the original email was garbled. I read the original text the same way you do. The issue is that if I decide to retire immediately without actually having the sequence number, that should be sufficient. Making me send it again after the NCID is gratuitous, and in at least one implementation creates state solely to track this corner case.\nAh cool, that's excellent then! :) What's the state that gets created to track that case? If you retired them all upon getting RPT without checking to see if you had them yet, don't you still have to keep track of the RPT value or which ones you've already retired? I'm probably missing some other piece of state in this case...\nI think what an endpoint is required to track are: active connection IDs sequence number of connection IDs that are being retired (but are yet to be acknowledged) When receiving a NCID frame with RPT, an endpoint checks if there are any active CIDs below the RPT, and retires them. I do not think an endpoint has to track RPT, though admittedly, I haven't implemented them (yet).\nNAME what you suggest would certainly work. However, a malicious peer could send an infinite number of NCIDs as long as it kept advancing rpt and didn't ack RCIDs, spiraling the state upwards. To avoid this, we keep a window of unacked but retired sequence numbers and construct RCID from that. 
That's why having an NCID come in well below the window results in additional state for us.\nNAME Exactly. We have an issue for that problem: . Agreeing on what to do to resolve the unbounded state problem might help resolving this issue too.\nTaking this as editorial based on the proposal (and promised tweaks).\nLooks fine to me, though I do think it's better with some kind of language to send it sooner rather than later.", "new_text": "An endpoint that receives a NEW_CONNECTION_ID frame with a sequence number smaller than the Retire Prior To field of a previously received NEW_CONNECTION_ID frame MUST send a corresponding RETIRE_CONNECTION_ID frame that retires the newly received connection ID, unless it has already done so for that sequence number. 19.16."}
{"id": "q-en-quicwg-base-drafts-974e92e605999deb6789e2d0423d7047e86d198ec92ae9af92c3109c8b7d40ec", "old_text": "connection ID when the client initiates migration to the preferred address. The active connection ID limit is an integer value specifying the maximum number of connection IDs from the peer that an endpoint is willing to store. This value includes the connection ID received", "comments": "This is option 4 from the discussion in . That is, making preferred_address equivalent to NEW_CONNECTION_ID with a sequence number of 1 in every way that matters.\nOffshoot from review. Servers behind a load balancer, and the load balancer isn't CID-aware. Each server gives out an IP:port per client in the parameter. If the client moves to the SPA, then future migration is fine -- it will continue to talk to the same server using the preferred address. If the client doesn't move, those future migrations will break. There's currently no way to set to \"only if you don't use my preferred address\". Similarly, if the primary connection is to an anycast address, if the client fails to move to the preferred address, subsequent migration will likely break. Should we blanket recommend that clients not actively migrate if they ignore / are unsuccessful in using the SPA? Or should there be a way for the server to indicate that?\nGood question. This brings up a related question for me about disable_active_migration. Given a server can decide not to give out any additional connection IDs, doesn't that give it a mechanism for disabling active migration? Ironically, given recent decisions on connection CIDs and SPA, if an SPA is supplied, then the client always has a second connection ID it can use on any path, not just the SPA. 
So my suggestion of not giving out extra CIDs fails for the original path, but not the SPA.\nWhat we\u2019ve said for this so far is that SPA is limited to offering a minimum of a single additional CID that needs to work in both places, the original server and the preferred server. As Ian says, the original server should not issue the client any more CIDs (if you have some that are deployment specific) until after they\u2019ve successfully migrated to the new endpoint, at which point the preferred server can handle that issuance. And as he notes, you still have to deal with the fact that there are two CIDs that the client could use, from any incoming address. It seems as though the full way to solve this is to define potentially disjoint sets of TPs that different servers could set to match their particular requirements (around more than just migration). In that world, if you had an initial server that couldn\u2019t handle migration and a preferred server that did handle migration, you could set different TPs to indicate that. It goes well beyond that though, any other TP might have a different values for the original server and then have those be different with the preferred server. But that does feel an awful lot like something that goes with full server migration. In the limited form that we have now, if you want to be able to push someone off to a different server, there are some limitations that you need to deal with, like being able to set up enough of a shared configuration for the two that the same TPs work with both. In practice, I think this means we end up with the following situation: The client either moves to the requested address or it doesn\u2019t. If it does, then you\u2019re in good shape, in theory you gave it TPs that matched what the server wanted and everybody is happy. If it doesn\u2019t, then you\u2019re unhappy, since in theory you had a reason you asked it to move in the first place. 
You might choose to close it down more aggressively than you would on the preferred server to save resources, or give it smaller limits for flow control, etc. When you\u2019re in the \u201cclient didn\u2019t move\u201d state: If the client then tries to use the original 2nd CID that you gave, from the same IP address/port, then a non-CID aware load balancer should be fine. If the client then tries to use the original 2nd CID that you gave, from a different IP address/port, then a non-CID aware load balancer will not be happy. Of course, the problematic case is even narrower, since if the client is probing the new path they\u2019ll see it as though there isn\u2019t any connectivity, just like they will if there\u2019s anything else on the new path that prevents their connection from working \u2014 this is always a risk when migrating, so they should be unsurprised and stick with their current path. If they\u2019re forced to move for whatever reason, then it will be just as though they didn\u2019t have connectivity over the new path (which they don\u2019t) and whether or not you disable migration you\u2019ll see the same outcome \u2014 the connection is closed. Interestingly, I\u2019m not actually seeing a difference from the client\u2019s perspective in terms of outcome whether you let them try with that CID and fail or if you told them to not try.\nIn very much shorter terms: isn\u2019t this a situation that all clients already need to be able to handle for other reasons?\nNAME I do not like the first option, because IMO banning migration on the original address is going to be a premature optimization based on what we have now, has negative effect on what we would have in the future. My assumption is that the majority of the server deployments would be CID-aware, especially in the long term, especially in terms of traffic volume. 
Note that the volume of traffic is more important than the number of server deployments, as migration is about (improving) end-user experience. IMO there are two ways forward: (a) State that disableactivemigration TP only applies to the original address. Per-definition, server-preferred address is an address that is capable of receiving packets from any address. There is no need to prohibit clients doing migration when connected to SPA. (b) Do nothing. Clients are RECOMMENDED to implement SPA, and this issue is about a small fraction of deployments that rely on non-CID-aware load balancers. We can ignore the problem.\nIIRC, we had that conversation when deciding whether to add in the first place. The short version is that taking this approach prevents the client from rotating CIDs on the primary path; if we want the client to be able to rotate CIDs but not migrate, they need to be distinct signals. NAME in a way you're correct that clients will have to deal with the fallout regardless of what the server says. However, the actual use of is likely to trigger a little more than just \"don't do that\" -- a client might PING more actively to prevent NAT rebindings, for example. I think it's this hinting aspect that's actually more valuable, since as you note, clients need to be prepared for attempted migrations to fail even with servers that tacitly support them. I don't think this is necessarily true, though I agree that we're talking about smaller and smaller fractions of server space.\nNAME While I agree that it might not necessarily be true at the moment, I might argue that it is going to be true. The basis of the resolution for and (both on consensus call) is that servers using preferred address would not be using the server's address for distinguishing between connections. 
We also know that a server cannot use the client's address (unless there is out-of-band knowledge), because clients are not required to use the same 2-tuple when migrating to a different address (and also because there are NATs). The outcome is that the server has to rely on CID when using SPA. And therefore I might insist that servers using SPA would always be capable of handling further migration.\nNAME So your suggestion is that disable_active_migration applies only to the original address and the SPA must always support active migration, because it has to use CID routing? I don't think that'd be a problem for our use cases, but I'm not sure about others.\nNAME Exactly. I think that that would be the correct thing to do, if we are to make changes. I agree that existing clients might be applying after migrating to SPA, and that they do not want change. But they can remain as they are, because the change in spec does not lead to interoperability issues. To paraphrase, I think making the aforementioned change is strictly better than doing nothing, because it would be an improvement to some, while not hurting anybody.\nA truth table might help here. ... --- no preferred address preferred address Right now, our definition of disable_active_migration is a simple \"don't migrate\". But the argument here says that the server that provides a preferred address is probably capable of handling migrations on the basis that it has to support one. We might insist on the source address not changing, but the fact is that some NATs (those with address-dependent mappings) will change the source address. Changing to a definition where disable_active_migration is instead \"don't move to a new address unless you use a preferred address\" is a little more complicated, but doable. Is it worth the extra complexity this late in the process? 
Is that complexity justified when it would be trivially possible to define another extension that said \"I support migration on my preferred address\"?\nI'd like to minimize changes, but I believe the status quo is that disableactivemigration applies to both the original address and SPA. As such, all suggestions above seem like a change. There is a mechanism to disable migration on the SPA, which is don't give out any more CIDs once they migrate. I don't expect any hosts to support migration on the original path and not the SPA, so I don't think that's a use case worth supporting(or encouraging).\nMy proposal is that we defer the ability to signal \"I support migration, but only on my preferred address\" to an extension.\nNAME And leave the current behavior of disableactivemigration applies to both the original address and SPA? I'm a bit concerned that makes SPA somewhat useless. Is this issue a candidate for parking and we can come back to and see if we can live with the status quo?\nWhich, again, not only disables migration, it disables the ability to rotate CIDs. That's not something we want servers to be doing. NAME argument is convincing that it's improbable for something which supports SPA to not be able to uniquely identify connections (whether by CID or because the SPA endpoint is unique). Therefore, I'd support scoping disableactivemigration to the handshake address, and defer the ability to disable it on the SPA to an extension. Loosening this definition doesn't make any current clients break, so I suspect it's bearable despite the schedule.\nIf we're going to make a change, I support NAME and NAME direction of scoping disableactivemigration to the handshake address. And I agree that I don't expect this breaks many existing clients.\nI personally find the proposed change confusing. I'm not sure I find the use-case of \"my original address doesn't support migration but my preferred address does\" compelling. 
Since this can be easily solved by an extension with no downside over it being in the core docs, I don't see a reason to change the spec at this point in the process.\nAs this has sat here a while, I'll reiterate my proposal again: we defer the ability to signal \"I support migration, but only on my preferred address\" to an extension. Which means that disableactivemigration applies to both the original address and SPA, so answer the question (sorry Ian, I missed that). I realize that you could change the meaning of disableactivemigration to apply to the address the server uses in the handshake, but there are interesting corner cases and I don't think we need to concern ourselves with those. For instance, if you connect over V4 and get disableactivemigration and serverpreferredaddress. If that SPA includes a V6 address you can't use, along with the same V4 address, what then? Does the server really support migration? I know that the current situation is unappealing, and I think that disableactivemigration is not good for the protocol. But it exists for very good reasons. Trying to limit the scope is admirable, but as we haven't seen a concrete proposal, I think we need to move on.\nNAME If this example is concerning, I'd argue that is an issue that already exists, orthogonal to what we are discussing here. I am still not convinced that limiting the scope of disableactivemigration to just the original address is going to cause any new issue. That said, if nobody needs this change, I'm more than happy to move on with status-quo.\nI've thought about this, and I don't believe we need this change, but I also think the change NAME suggested makes the current behavior better overall. A server that supports migration on the SPA(ie: unicast) and not on the handshake address(ie: anycast) will end up not disabling migration and hoping that most clients migrate to the SPA or don't support migration of any sort. 
That seems like a plausible hope, though I'm a bit leery of hope-based protocol design.\nThinking about this more, a statement like: \"Migration is never guaranteed to succeed. For servers that advertise a preferred address, it's expected migration is more likely to be successful when using the preferred address than the handshake address.\"\ntl;dr: I'm arguing for \"do nothing\". This conversation has made it clear to me that an address, and not a server, is the thing that supports client migration. Which makes sense -- migration support at the server is entirely a question of routability, and it makes sense that it would be tied to the address in question and not the endpoint. So if we were to do anything, I think I would propose: scoping disable_active_migration to the handshake address, and adding two disable_active_migration flags to the preferred_address TP, for the two addresses in the TP. That said, this is a clean enough extension. A new extension can define a new TP that provides this granular interpretation of the three server addresses. A server that wishes to support this can send both the SPA and this new SPA TP, with the SPA TP (with the current scoping of disable_active_migration applying to all server addresses) providing cover for legacy clients. Also I expect that clients are likely to switch to using the SPA if provided. A server that has different routability properties for the two addresses can wait for some deployment and make decisions on whether to risk allowing migration based on data of what it sees as client behavior with SPA. I would argue for doing nothing now. Experience will teach us the right way forward. If necessary, this extension will be easy enough to do, and importantly, the current interpretation of disable_active_migration gives us the right kind of cover in the meantime.\nI would rather not make the change in . 
It assumes that migration is more likely to succeed when attempted to the preferred address as opposed to the handshake address. While this assumption might hold in some server deployments, it is not a fact. I'd rather we didn't bake this assumption into the core spec. I'd prefer to see this feature as an extension.\nNAME Would you mind elaborating why a server, accepting connections on a preferred address, might not be able to disambiguate connections using CIDs? As stated previously, when a client migrates to the preferred address, the client's address port tuple can change due to the existence of NATs. A server using preferred address need to take that into consideration. That means that a server has to disambiguate connections based on the CID, which in turn means that the server would be possible to support migration.\nHmmm thanks for that note NAME Looking at things under that light, I agree that the assumption I described above is already part of the protocol. Please disregard my comment above URL .\nWe don't prohibit the use of a zero-length connection ID in the preferredaddress transport parameter. So a server can use a connection ID during connection establishment, then remove privacy protections by moving to a unique address (see ). Of course, this is largely moot, as the privacy protection offered through the use of connection IDs toward a unique address is nil. In either case, we should clarify whether preferredaddress is exactly like NEWCONNECTIONID, or whether it has its own rules. Right now, it looks like it has its own rules and that is likely bad.\nZero-length connection IDs don't have a stateless reset token associated with them, but the preferred_address TP requires you to set a token. That's an awkward corner case.\nThat is just one of the strange things, yeah. We should clarify that.\nSo I see a number of ways forward here: Do nothing. Leave this ambiguous and potentially special. 
Make the fields in preferred_address exactly equivalent to a NEW_CONNECTION_ID with a sequence number of 1. But allow the connection ID to be zero-length. A zero-length connection ID provides a new stateless reset token that is bound to the new address. (This is a weird option that would go against previous decisions to eliminate special handling for this connection ID, but I include it for the sake of completeness.) As 2, except that the new stateless reset token has connection-wide scope such that either of the provided tokens can be used. This is more consistent with established principles. Prohibit the use of zero-length connection IDs in preferred_address. That is, you can't use preferred_address without a connection ID. That is, preferred_address effectively is a new server address plus a NEW_CONNECTION_ID frame. Something else. From my perspective, I rank these as 4 > 3 >> 1 > 2. But that is largely based on my bias against zero-length connection IDs (as much as it seems like that might be good for privacy, I think that privacy is better served by the use of connection IDs). If people intend to use preferred_address to migrate to a connection-specific address, then I'm OK with 3; we already have to cover the privacy hole that arises from that.\nMy preference goes to option 4 too. I do not think servers would use both 0-byte CID and preferred_address TP at the same time. The only case a server using 0-byte CID can also use preferred_address is: when the server is never accepting more than one connection at a time, or when the server knows that the client's address tuple does not change when the client migrates to a new address. IMO that's a niche. And when these cases can be met, I do not see why the client cannot know the preferred address of the server beforehand.
That isn't entirely crazy, but I personally would be comfortable losing that option. But I don't operate a server, so I want to give those who do a chance to let me know that this is preemptive.\nNAME My personal take would be that people would not do that. When a client migrates to the preferred address, there's fair chance that the client's address tuple will change. That means that the server should be capable of receiving short header packets with 0-byte CID from any address, and determine that the packet belongs to the connection that is migrating to the preferred address. That's possible theoretically, but in practice ... I agree that a server might want to switch to zero-byte CID after migration to SPA completes. But to do that, we need a NEWCONNECTIONID that can carry a zero-byte CID, which is forbidden at the moment. I do not think we'd want to change the spec now to allow that.\nThat's an excellent point, and a good reason not to allow changing to zero-length connection IDs in combination with preferred_address. The nasty part of that is that in most cases that design would work because clients won't use the \"new\" connection ID until after the handshake is complete. Of course, it would fail spectacularly if a client did decide to switch earlier - and the client might attract the blame, where it is clearly the fault of the server.\nI'll note that prohibiting a 0 length CID doesn't actually ensure the server's IP isn't unique, though it does remove an incentive to make it unique. I'd like to leave the option to switch to a zero-length CID later in the connection, and I think I'd be fine with 4 if we allowed that. An easy example is a large upload. I mostly don't care about the CID, but if I'm uploading a 10GB file, maybe saving some bytes matters. Also, if you're constantly filling the pipe, NAT rebinds are extremely unlikely. 
Another example is an environment like a datacenter where there are no NATs and routes are very stable, unless a peer intentionally changes it, so the 5-tuple is sufficient. In that case, the ideal operating condition is to typically use a 0 byte CID and switch to a non-0 length anytime a peer changes IP or port. Changing port has some clear benefits in some cases, so it's in no way a theoretical use case. NAME may have thoughts on this range of use case as well.\nIn our implementation, we plan to do all routing based purely on the CID. IMO, the couple of byte overhead in a 1500 MTU (at least; in datacenter) packet isn't going to make much difference, so it's just simpler to have a common CID routing scheme. That being said, I personally am not against allowing for others to do something different. I've never understood why we make any restrictions on changing between 0-length and non-0-length CIDs at all. IMO, the complexity is that it can change length at all, not that it's zero or not.\nNAME we already prohibit changing from non-zero-length to zero-length using NEW_CONNECTION_ID. I don't want to relitigate that. I want to concentrate on this specific issue.\nIf we're not going to re-litigate that, then I'd suggest we continue allowing a 0-length CID for the preferred address case.\nNAME While I would not argue strongly against continuing to allow that, I would appreciate it if you could clarify why allowing a zero-byte CID would be useful. As pointed out in URL, when you provide a zero-byte CID in SPA, you need to be prepared for receiving a short header packet using zero-length CID from any IP address, and be capable of associating that to an existing connection. This is because the client (or the network) might change the client's address tuple when it sends packets to the preferred address. 
I would anticipate that you wouldn't want to do that in your file upload case (or in any deployment that might accept multiple connections concurrently).\nNAME If I know 5-tuples are stable, then using a zero-length CID is always fine, just like it's typically fine in the server to client direction. But in that case, hopefully I can avoid needing to use SPA, making the decision on this issue moot. After thinking about this more, we're probably better off with option 4 for the sake of consistency and if we want to allow 0-length CIDs in more cases, I can write an extension for how that would work, since I don't think it's only a matter of relaxing some constraints.\nThanks all. I think that we reached the same conclusion in the end, via different paths. The proposed fix here is firmly design chairs. A bugfix only, but still worth careful review.\nApologies for getting to this late, but I'm not sure I agree that we should disallow this. Here's a use-case that I think makes sense: the client wants to upload a very large file to the server over HTTP/3 and it has a small MTU, making it particularly sensitive to per-packet overheads. Let's also assume that we are in the future and the server supports IPv6 and has its own /64 (shocking!). In this scenario, a good way to reduce per-packet overhead is to operate the server on a given address, and then switch each client to their own unique server IP using preferred address, and then use zero-length CIDs. I've always found this use-case a good use of preferred address, and I would rather we didn't ban it without a good reason (and making the design cleaner doesn't necessarily count as a good reason at this stage in the process).\nThat is an example we discussed. As Kazuho noted, that doesn't work as we have already agreed that any connection ID (in this case, a zero-length one) can be used prior to handshake completion. That's why we ended up on 4. 
As Ian noted, this isn't ideal, but this is an area that is ripe for extensions that might allow this sort of thing.\nI don't understand what you meant by the following: Can you clarify? I was under the impression that the client switches to the preferred address after the handshake is done.\nYes, you can't migrate (act on the address information) before the handshake is confirmed. However, the connection ID is not special; it can be treated just like any other connection ID you learn, so you can act on the connection ID information before you confirm the handshake. So that means that a server that opts to include a zero-length connection ID in the preferred_address might receive a packet with the zero-length connection ID at the old address. That might not work out.\nWhat's the point of including a connection ID (and stateless reset token) in the transport parameter if that CID isn't tied to the preferred address in any way? At that point might as well remove that CID entirely and rely on NEWCONNECTIONID frames. My understanding was that the reason for there being a CID in the TP was specifically to make that CID special. Would it make sense to just say that this CID MUST NOT be used when sending to the handshake server IP?\nthe reason for having a connection ID in the transport parameter is only so that the client has a connection ID to use for the new path\nNAME I agree with your intuition, but I think we discussed this in Zurich and ended up concluding the preferredaddress CIDs weren't special. This was in issue and also came up in (which I realize hasn't gone out for a consensus call yet, that PR seems stalled?). In the upload example, the server needs to know that the client is going to send a lot of data, which it might know via SNI, or it might not. I tend to think the more annoying restriction is that you can't change from a non-0 length CID to a 0-length one and vice versa. 
By supplying a 0-length CID later in the connection along with other CIDs, the client could choose whether it wanted to use a 0-length CID at any time. Thinking out loud, an easier solution may be a TP like 'acceptemptyconnectionid' that indicates at anytime, the peer can switch to a 0-length CID if it wants. It could either think the 5-tuple is stable or its happy re-establishing the path with a non 0-length CID after a NAT rebind. In this world, the Connection ID is more of a path re-establishment token.\nThanks for the details NAME and NAME I get a sense I might be trying to re-litigate past consensus without providing new information, so I'll back down. I do think that the ghost of hypothetical linkability has caused us to make some very odd design choices, but this isn't new to this issue/PR. In the end, this is the kind of feature that can be enabled by extensions so I can live with us further restricting the base spec for consistency.", "new_text": "connection ID when the client initiates migration to the preferred address. The Connection ID and Stateless Reset Token fields of a preferred address are identical in syntax and semantics to the corresponding fields of a NEW_CONNECTION_ID frame (frame-new-connection-id). A server that chooses a zero-length connection ID MUST NOT provide a preferred address. Similarly, a server MUST NOT include a zero- length connection ID in this transport parameter. A client MUST treat violation of these requirements as a connection error of type TRANSPORT_PARAMETER_ERROR. The active connection ID limit is an integer value specifying the maximum number of connection IDs from the peer that an endpoint is willing to store. This value includes the connection ID received"}
{"id": "q-en-quicwg-base-drafts-e3aa91bedf32a0efbf53907701cf3e2879f61c214823cfc6677f196d0bffbf55", "old_text": "approach 2^(-L/2), that is, the birthday bound. For the algorithms described in this document, that probability is one in 2^64. In some cases, inputs shorter than the full size required by the packet protection algorithm might be used. To prevent an attacker from modifying packet headers, the header is transitively authenticated using packet protection; the entire packet header is part of the authenticated additional data. Protected", "comments": "Now that we require a minimum packet size, this is no longer true.\nIs this no longer true for the algorithms defined in the spec, or no longer true for any packet protection algorithms for any AEAD? It might be worth moving the note to the end of 5.4.1 instead.", "new_text": "approach 2^(-L/2), that is, the birthday bound. For the algorithms described in this document, that probability is one in 2^64. To prevent an attacker from modifying packet headers, the header is transitively authenticated using packet protection; the entire packet header is part of the authenticated additional data. Protected"}
{"id": "q-en-quicwg-base-drafts-b5950ce9feda7601e5a5323e7baccebcb1afc052a22394341cce11f1e0f694fd", "old_text": "A client that is unable to retry requests loses all requests that are in flight when the server closes the connection. An endpoint MAY send multiple GOAWAY frames indicating different identifiers, but MUST NOT increase the identifier value they send, since clients might already have retried unprocessed requests on another connection. Receiving a GOAWAY containing a larger identifier than previously received MUST be treated as a connection error of type H3_ID_ERROR. An endpoint that is attempting to gracefully shut down a connection can send a GOAWAY frame with a value set to the maximum possible", "comments": "The endpoint sends frames, which carry -- not send -- identifier values.\nThe way it's intended to parse, \"An endpoint ... MUST NOT increase the value they send,\" is probably incorrect in number or gender. But yes, changing the verb to reflect that the subject is the frames instead of the endpoint is also an option.\nOK. I have no preference how this sentence is fixed -- as long as it's fixed.\nThis looks good. While we are at this paragraph, why say \"client?\" GOAWAY can be sent by either side now.\nIt can, but a server isn't going to retry push streams on a new connection; it's going to decide de novo whether to push there. Arguably, nothing breaks if the client increases the Push ID on the server after the fact.\nBTW, thanks for catching this. I appreciate how many of these ambiguities you're winkling out.\nNAME NAME OK, lgtm, then.\nMaybe, given the ambiguity, the right answer is to get rid of \"they\" in the first place?", "new_text": "A client that is unable to retry requests loses all requests that are in flight when the server closes the connection. 
An endpoint MAY send multiple GOAWAY frames indicating different identifiers, but the identifier in each frame MUST NOT be greater than the identifier in any previous frame, since clients might already have retried unprocessed requests on another connection. Receiving a GOAWAY containing a larger identifier than previously received MUST be treated as a connection error of type H3_ID_ERROR. An endpoint that is attempting to gracefully shut down a connection can send a GOAWAY frame with a value set to the maximum possible"}
{"id": "q-en-quicwg-base-drafts-e4904e5447e6f4061d24283fef3e8c2f396a361f0023a218b48fa2c6c1ff7a06", "old_text": "receipt by sending one or more ACK frames containing the packet number of the received packet. 13.2. Endpoints acknowledge all packets they receive and process. However,", "comments": "This PR adds explicit context around the choice of ACK frequency in the draft. It also takes in Bob Briscoe's suggestion in , of calling it a guidance in light of the best information we have. Also does some editorial cleanup of text blocks that were not in the right sections.\nThanks NAME NAME -- comments incorporated.\nAs currently specified, the default QUIC ACK policy models TCP\u2019s ACK policy, but consumes significantly more return path capacity. This can impact resource usage of the return channel, can constrain forward throughput, and can impact the performance of other sessions sharing an uplink (draft-fairhurst-quic-ack-scaling). This was not discussed in Zurich - where discussion moved instead to the separate topic of draft-iyengar-quic-delayed-ack. In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymetry are presently mitigated by network devices RFC3449 that rely upon observing TCP headers. This results in ossification problems, but does practically have beneit. It reduces the TCP ACK rate over the constrained link, it reduces consumption of \"radio\" transmission opportunities, and reduces traffic volume from ACKs. These mitigations are not possible, nor desirable for QUIC. However, the default performance of QUIC is therefore much worse than for TCP. In QUIC, the default ACK Ratio has to set by the endpoints implementing the protocol because the sender and receiver have no information about the \"cost\" of sending an ACK using a link along an asymmetric path. These endpoints can also choose a non-default value to enable a new CC, scale to higher rates, etc (e.g. 
draft-iyengar-quic-delayed-ack). We have shown there can be significant benefit from an ACK Ratio of 1:10 and this is expected to also have benefit for higher levels of asymmetry. The proposed change from 1:2 to 1:10 is predicated on QUIC's design decisions: starting a connection with an Initial Window of 10 is acceptable, QUIC uses PTO as the loss detection method (a predicate to issue ), and QUIC has some form of pacing at the sender, at least to mitigate IW bursts.\nEven though we've avoided straying this far from the TCP spec, I think allowing this would be good for the internet. It should probably be documented for TCP, too. How does ack stretching of this nature work for BBR bandwidth estimation? It's harder to use the ack pacing, no?\nI don't believe QUIC ACKs are significantly larger. In fact, I believe the ACK frame itself is typically smaller than TCP ACKs, but the savings there are used on QUIC's 16-byte auth hash and likely a non-zero-length CID. Do you have a size analysis written up somewhere? I think this is the core issue. It's not that QUIC ACKs are really consuming significantly more capacity in terms of bytes on the wire, it's that middleboxes can't decide to thin the ACKs, because they don't know which packets are ACK-only packets. Did you implement the optimization already described in the transport draft?: \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" NAME did and I believe found it hugely reduced the number of ACKs, sometimes too much. As I've said before, I would really prefer not to specify an approach in QUIC transport that is a heuristic like does. In particular, on lower bandwidth networks, 10 packets may be the entire congestion window and is too large a value. 
On higher bandwidth networks, every 20 packets has been shown to be slightly better than 10, at least when using BBR. I wrote to add some editorial text about why every 2 packets is a SHOULD, not a MUST. Chrome did not default enable the ACK strategy recommends until BBR was launched, because there was a performance regression with Cubic. I'd rather not assume everyone is using BBR or a BBR-like congestion controller. NAME BBR does fine with stretch ACKs. Cubic and Reno suffer in some cases, however.\nQUIC ACKs are larger! The smallest IPv4 QUIC ACK packet is 51 bytes. The ACK frame is 4 bytes, but is sent using a QUIC packet. For IPv4 this is: 20 (IP) + 8 (UDP) + 3 (1+1+1 QUIC header ... assuming a 1 B CID) + 4 (minimum ACK frame size) + 16 (crypto). Other types of frames may also be sent in the same packet, increasing this size. That size difference matters. The point is that QUIC ACKs are consuming more capacity in terms of bytes on the wire, and often also important: in more transmission opportunities at the physical layer. That makes QUIC perform more poorly than it needs to across some paths. The text \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" is permissive to do something different - alas the effect of ordering of the processing depends on rate, timing, etc - The present text is not a recommendation to do something appropriate, which I think it needs to be. If this were to describe that a receiver SHOULD reduce the ACK to data packet ratio to 1:10, then that would help. On low bandwidth links, then I agree that waiting for 10 packets would not be suitable, but then wouldn't the rule 4 ACKs/RTT still cover this case? 
(I have no problems believing an ACK every 20 packets can be slightly better than 10 in some cases, but then we have chosen IW at 10 and that relates to burstiness and responsiveness of the basic spec, so I think 1:10 makes more sense as the recommendation. If the CC and path information suggest 1:20 - perhaps as the rate increases, then a sender can \"tune\" as per the transport parameter proposal. I STILL assert that this is a different issue from the expected default.)\nWould it help to give a recommendation like, if the congestion window is large, the ACK ratio should be reduced? Not sure if we can easily give a threshold for the window but to align with the 4 ACKs per RTT rule we could say 40 or to be more conservative maybe 100?\nIf the receiver knew the cwnd... we could add that. The current proposal we made is here (along with some of the arguments about why QUIC and TCP are slightly different): draft-fairhurst-quic-ack-scaling\nSo the min TCP ack is 40 bytes vs QUIC at 50 (0 byte CIDs are common, so I'll assume that), 25% larger in the minimum case. However, with a few SACK blocks, TCP ACKs are likely to be larger than QUIC ACKs, since SACK consumes 10 bytes for the first block and 8 for each additional block. By the time the second block is added, there's a good chance QUIC ACKs are smaller and it's almost certainly true with 3 SACK blocks (which I think is a typical number). So I'll go back to my previous statement that the real issue is that the network can't thin QUIC ACKs effectively.\nMay be \"yes\", may be \"no\" on a larger size after loss - since QUIC ACK ranges are often larger - but that depends on loss pattern, etc. In our case, we saw more bytes. 
But \"yes\" on me saying there is a need for QUIC to make good default decisions about the ACK rate, because QUIC becomes responsible for transport working across heterogeneous paths and we need to get good defaults rather than rely on \"ACK-Thining\" and other network modification.\nAnybody has a gut feeling how very low latency congestion control fares with sparse ACKs?. I have experienced that DCTCP is more picky with ACK aggregation than e.g Reno. I suspect that ACK-thinning can have the give the issue as well. A too sparse ACK spacing can also give problems. Another aspect is how e.g. TCP Prague with paced chirps work with sparse ACKs ?. To me the RTT/4 rule makes sense but I feel that the best alternative depends a lot on both the congestion control and the source material. Large file transfers are more trivial than sources that are occasionally idle.\nI think this isn't the correct question. This is about the default policy. The default policy is about what a receiver does by \"default\" - the current text uses a value of ACK every other packet, 1:2. TCP specified this when transmission rates were low, and TCP depended on individual ACKs to clock data (no pacing, IW=2), detect loss (no PTO), increase cwnd (per ACK packet), etc. In response, TCP causes ACK congestion and strange interactions with radio access. Performance over WiFi, DOCSIS etc suffers unless the network tries to address this. I don't see why QUIC, which doesn't rely on any of these, needs to set a DEFAULT of 1:2. This is literally a few lines of code difference, and we could use 1:10. These CC's you mention may have different needs and that needs to be handled by telling the remote that your CC has these specific needs - which I think is what Jana's extension is about.\nI am kind of the donkey in the middle... ACK thinning in networks should be discouraged. 
Standardization of ACK thinning mechanisms was proposed in 3GPP RAN2, this request was turned down mainly with reference to that it can harm transport protocol performance and does not work safely with QUIC. Nothing stops UE (terminal and modem) vendors from implementing proprietary ACK-thinning but I certainly hope that 5G UE vendors don't attempt to thin QUIC-ACKs as this is a slippery slope. High ACK rates are a 5G problem mainly for high band NR (=5G at e.g. 28GHz). It is mainly a coverage issue as UEs have a limited output power due to practical and regulatory requirements. At the same time one needs to address that end users don't only download movies today and even VoD streaming implies uplink traffic in the form of chunk requests. And then we have an increasing amount of people doing uplink streaming or uploading an infinite amount of pictures of food plates and cats :-) Operators and 5G vendors have to deal with this and solve the problem with deployment of 5G access in various frequency bands to get the best downlink and uplink performance. Reduced ACK rates are of course welcome but I don't currently see a problem that the initial ACK ratio is 1:2, to increase to 1:? as the congestion window grows, bounded by e.g. an RTT/4 rule. But here I need to admit that I at least have not studied this in detail. Ergo.. I am not convinced that it is necessary to hardwire the QUIC spec to 1:10, but I also cannot say that it should not be done.\nThis is a small enough change. However, there is still a bar for those, and the current level of discussion makes me believe we should close this with no action?\nJust so that it's easy to find, there is data and a continued discussion at URL\nFolks who care about a change here need to prioritize progressing this issue. 
We're close to being able to last-call the specs, and would like this resolved in time.\nWe\u2019ve completed a round of updated analysis of the proposed change to the QUIC spec, using quicly (changing to a 1:10 policy), and chromium (which currently does 1:10 by default). Our hope is that this change to the spec will actually make standard end-to-end QUIC work better than TCP over asymmetric paths! This is a short slide deck about the performance of the proposed change: URL There have been a number of other changes made by the QUIC ID editors relating to ACK processing, so we wanted to be sure these results were up to date. (These results do not consider additional ACKs after re-ordering - reflecting the latest text removing that.) We also really hope this will motivate a change in the practice of forcing QUIC fallback to TCP (with PEP/Thinning) over highly asymmetric paths. People may also be interested in work for L4S to evaluate the impact of ACK Thinning for TCP-Prague. Please do provide feedback, or any questions on this - as Lars notes, we think the time is running out for considering new changes, even relatively simple ones like this. Gorry P.S. First email: URL\nI believe that it is beneficial to use a 1:10 policy. I understand that this means that a 1:2 policy would still be used in the initial phase (flow start), or do the timer mechanisms kick in to avoid that ACKs are transmitted too sparsely in this case?\nYes, our proposal counted ACK-eliciting packets at the receiver (because the receiver does not in general know the congestion state, so you need a heuristic), and we finally chose the same method in current QUIC Chromium - i.e. for the first 100 packets we do not use ACK 1:10, changing only after 100 packets.\nOK, it sounds like a good idea to use 1:10 after 100 packets or so. 
Just one potentially stupid question: Is there any special treatment necessary after an RTO?\n[This seems to be a forwarded comment from NAME Hi, Gorry Thank you for sharing your experiments. I'm delighted to see results showing that use of ack10 gives us comparable throughput to ack2. Recently, I have been looking into how ack-frequency affects the computational efficiency of the server [1]. Using the (almost) same setup, I have today run a benchmark comparing three ACK strategies: ack2, ack10, ack 1/8cwnd. ack2: 334Mbps ack10: 414Mbps ack 1/8cwnd: 487Mbps As can be seen, ack10 is better than ack2, but lags behind ack1/8cwnd. That's because there are still frequent ACKs. They take a considerable amount of CPU cycles. If we want to see QUIC being as lightweight as TLS over TCP, it is critically important to reduce ACK frequency as much as possible. This is because the cost of processing ACK in QUIC is much more expensive than TCP, due to QUIC being a userspace transport. Based on this, if we are to change what we have in the base drafts, I'd much prefer adopting what we have in the proposed delayed ack extension [2], rather than just changing the default ack policy from ack2 to ack10. By adopting what we have in the delayed ack extension, we can expect better performance, and the endpoints would have the freedom to change the behavior when the default ack policy is deemed insufficient (regardless of the default being ack2 or ack10). [1] URL [2] URL 
\nThese results/article on computational efficiency are really interesting. Please see below. This is useful, we see here that there's a clear benefit in using a larger ACK Ratio also in terms of server load. We can see already the benefit at 1:10 and GSO helps as expected. We think that this is also really true. QUIC ACKs are also larger than TCP ACKs (see URL). I don't think the two approaches conflict. The difference between the two approaches is that the small change to the base specification to use an ACK Ratio 1:10 ensures the default return path is at least as good as TCP for common traffic (even with common TCP ACK Thinning). Without this change, the end user could experience significant differences in performance between TCP and between different QUIC versions over asymmetric paths. Whereas to me, the transport parameter extension looks like an exciting way to enable many more things, but there are also many ways this could usefully evolve based on CC, offload, pacing etc. and I think this will take much more to work out what is needed, safe and best ... than to reach consensus. We still also need to specify a default ACK Ratio in this case as well. Best wishes, Gorry\nI'm greatly in favour of draft-iyengar-quic-delayed-ack-00 (BTW, where would you prefer me to give review comments about that?) 
I am much less keen on any default delayed ACK policy, whether 1:2 as now in quic-transport, or the 2-10/100 numbers proposed in draft-fairhurst-quic-ack-scaling-02 My concerns would melt away if the default policy was explicitly /implementation-dependent/ and the draft instead gave /guidance/ to implementers. The draft could still give the same 2-10/100 policy, but as /guidance/ for the current public Internet. The draft should also state the conditions under which this guidance was determined (e.g. Internet paths with BDP between x and y, or whatever). What's the difference? Stating a default policy implies this is a matter of interoperability. It's not. If we change to giving guidance, I'm sure most implementers will just use the guidance. However, implementers will be free to evolve their default as they gain experience. This reduces the possibility that other behaviours will evolve around any default like 2-10/100. For instance, network monitoring using it as a signature. Or congestion controls optimizing around it. In turn, this will reduce the chance of ossification around the first numbers we thought of. Senders can always use the ACK frequency frame to ask a receiver to change from its default. Then receivers will gradually learn what senders prefer them to do, and change their default accordingly. IMO, a continually improving default is important for short flows, so senders can avoid requesting a different ACK frequency right from the start.\nNeed to admit that I have not had too much time to follow the QUIC work lately. I too believe that draft-iyengar-quic-delayed-ack-00 is a good long term solution that allows endpoint to set a desired ACK frequency that is not more than necessary. As experiments show there are good incentives to keep the ACK frequency as low as possible in terms of client/server load and asymmetric links limitations. 
An ACK-FREQUENCY frame will allow finding the proper balance between the requirements given by the congestion control, the traffic type and the above given limitations. I am not too concerned if it takes an extra development/standards cycle to have that in place in the standard and in running code as long as the QUIC community shows other SDOs that the challenge with QUIC ACKs is being addressed.\nNAME NAME could you please trim the quoted text when you reply by email?\n(I trimmed the comments, but forwarding inline replies to email messages doesn't work well.)\nGiven that this is an area where there is still obviously experimentation and refinement, NAME comment seems on point; baking a specific algorithm as requirements into the specifications doesn't seem like a good idea (although putting constraints on endpoints might make sense). Can we focus on general guidance rather than documenting a specific algorithm? What would that PR look like?\n+1 to NAME suggestion. If we want to make changes for v1, they need to happen now, and they must be in a form that can let us get consensus.\nTo be clear, when I said that having a default policy is not a matter of interoperability, that's only strictly true if QUIC supports something like draft-iyengar-quic-delayed-ack for the sender to ask the receiver to use a certain policy. Bob\nI've proposed more context and text around our current guidance in the draft, see .\nI believe that the text around asymmetric links looks fine, not really necessary to write up much more. What can be good to know is that for the 5G case with limited UL (uplink = modem->base station transmission), the pain that a flow with an unnecessarily high ACK rate causes is on the flow itself or other flows sharing the same access.\nThere are some different points of pain here, the default and the need to adapt to the endpoint: An ACK Ratio (AR 1:1) is the finest granularity of feedback. 
A QUIC AR of 1:1 would consume return path capacity proportional to about 6% of the forward traffic (assuming no loss, etc). This is more than TCP (about 3%). An ACK Ratio (AR 1:2): For a return path bottleneck, QUIC with an AR of 1:2 will still have more impact on sharing flows than TCP would with the same AR 1:2 (because QUIC ACKs are larger, and because networks do not currently thin QUIC ACKs). So QUIC needs to choose the default - is it 1:2, 1:4, 1:8, 1:10? A QUIC AR 1:10 results in an ACK traffic volume of about 1% of the forward traffic. To me, that's a small proportion, which is why I argued that 1:10 is a good recommendation. The default is important when: the endpoints have no knowledge of the asymmetry or the impact they have on a link's layer 2 loading, or the endpoints need to reduce the collateral damage to other IP flows sharing the return path capacity. Negotiation: The AR can be driven by the application or endpoint - e.g., a higher rate connection may well have a benefit from an AR of 1:10, 1:20, 1:100, etc - but such ARs are seldom chosen to reduce the impact on sharing a return bottleneck, more likely they are chosen to benefit the performance of an application or endpoint. That consideration is different, and likely depends on the application or endpoint - which to me is where the flexibility offered by a transport parameter is so important. draft-iyengar-quic-delayed-ack will still need guidance on the \"default\". I argued that the default policy impacts the congestion behaviour when there is appreciable asymmetry this causes at layer 2 or the collateral damage (congestion impact on sharing flows). Sure, the choice of 1% is just a number I said was reasonable, but I've seldom encountered L2 technologies where less than 1% of forward traffic would be significantly better.\nNAME : I understand your position. While I agree that ack thinning happens with TCP, it is not what I expect to find on a common network path. 
And as far as I know, the endpoints still implement the RFC 5681 recommendation of acking every other packet. What you are arguing is that with ack thinners in the network, TCP's behavior on those network paths is different. However, I do not believe we understand the performance effects of these middleboxes, that is, how they might reduce connection throughput when the asymmetry is not a problem. Importantly, I don't think we should be specifying as default what some middleboxes might be doing without fully understanding the consequences of doing this. Please bear in mind that Chrome's ack policy of 1:10 works with BBR, not with Cubic or Reno. I do not believe there is a magic number at the moment, and as NAME noted, even 1:10 is not a small enough ratio under some conditions. Given this, and given that we know 1:2 is used by TCP endpoints, it makes the most sense to specify that as the suggested guidance. The new text proposed in describes the tradeoff involved, and the rationale for using 1:2. I am strongly against doing anything substantially different than what we have in the draft now without overwhelming information supporting a different position.\nAgree with NAME 1:2 seems a reasonable default, and there seems to be considerable interest in draft-iyengar-quic-delayed-ack, which has a better proposal for stacks that care.\nI think the new ACK text is improving the specification, but I'd like to be sure we have thought about this decision. Let me try to carefully respond on-list here, to see where we agree/disagree: I disagree, I think we shouldn't be perpetuating a myth that a sender receives a TCP ACK for every other packet. Even in the 90's implementations TCP stacks often generated stretch-ACKs for 3 segments. Since then, limited byte counting was introduced and significantly improved the sender's response to stretch ACKs, and that has been good for everyone. 
Senders still rely on byte-counting in TCP to handle Stretch-ACKs, and this need is not decreasing: new network cards reduce per-packet receive processing using Large Receive Offload (LRO) or Generic Receive Offload (GRO). I am saying many TCP senders actually do today see stretch-ACKs. There are papers that have shown significant presence of stretch ACKs, e.g., (H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 45(3):20\u201327, 2015.). A TCP Stretch-ACK of 3 or 4 was common in their datasets (about 10% of cases). In some networks, the proportion of Stretch-ACKs will be much much higher, since ACK Thinning is now widely deployed in WiFi drivers as well as cable, satellite and other access technologies - often reducing ACK size/rate by a factor of two. I am intrigued and would like to know more, in case I missed something? Typical algorithms I am aware of track a queue at the \"bottleneck\" and then make a decision based upon the contents of the queue. If there is no asymmetry, there isn't normally a queue and the algorithm won't hunt to discard an \"ACK\". But, maybe, there is some truth in being wary: Encrypted flows can also build return path queues (as does QUIC traffic). Without an ability to interpret the transport data (which nobody, including me, wants for QUIC), a bottleneck router is forced to either use another rule to drop packets from the queue (such as the smallest packets) or to allow a queue to grow, limiting forward direction throughput. I would say neither of these outcomes is great because the router is trying to fix the transport's asymmetry. However, if QUIC causes this problem, I suggest some method will proliferate as QUIC traffic increases. I didn't find something in the QUIC spec that was a major concern. It looked to me like QUIC had learned from TCP how to handle stretch-ACKs. 
After that, we changed quicly to use 1:10 with Reno, and things worked well. Didn't NAME also use Reno? Sure, the design of TCP recognised that AR 1:1 generated too much traffic, AR 1:2 for TCP resulted in a Forward:Return traffic ratio of ~1.5%, and QUIC increases this to ~3%. However, as network link speeds increased, this proved too low for many network links and so TCP ACK thinning came into being. This often reduces the TCP AR to around 1:4 and will do this without bias, whatever the outcome is.\nNAME Just to clarify, in our experiment, we reduced the ACK frequency from the default (2) to after leaving slow start, then updated it every 4 PTO. I think that would work fine on a high-bandwidth, stable network. But I am not sure if it would work as well as ack2 on all network conditions. Anyways, I'd explain our experiment as an optimization of ack 2, as it was a reduction of ack rate after leaving slow start. I think that that was a sensible choice, because IIUC having frequent acks during slow start is essential. Assuming that we'd prefer using ack 2 during slow start, the question is if we need to hard-code in the base draft how to optimize after leaving slow start. While I agree that continuing to use ack 2 after slow start is concerning for certain networks, I am reluctant to spend time on looking for such a fixed set of values across diverse networks (including much slower links or lossy ones). Rather, my preference goes to shipping the base-draft as-is, then moving on to finalize the ack-frequency frame. I think we can assume most if not all of the popular clients to support the ack-frequency frame. Then, it becomes less of a question what the default in the base-draft is.\nThanks kazuho, 1/8 or 1/4 cwnd seems reasonable, as does using Ack 2 for the first part of slow start ... that's close to what we proposed. And yes there will anyway be cases where any default does not work. 
If QUIC decides to retain Ack 2, can we at least capture that Ack 2 with QUIC doesn't have the same return path performance as Ack 2 with TCP - so that people know that less frequent ACKs are something to think about from the start? I'm doing a talk in PANRG this week, and have decided to talk-through this topic (I was already going to talk about things related to return paths). That'll give people a view of some data - and a little thought about the implications this has for QUIC, and whether there is an incentive for network devices to thin QUIC to restore performance.\nNAME : I'd be in support of saying that ack thinning happens for TCP and cite those studies, and that in the absence of that, there might be throughput consequences for QUIC. I think we've said exactly that in other words in , but if we want to say more there, that's fine with me. That would make this an editorial issue, but I'm entirely happy to do more words there. I am against changing the recommendation here for another arbitrary recommendation. I think the right approach is to allow endpoints to specify this during a connection, which is where the extension is helpful. As NAME notes, our hope is that the extension gets widely deployed. If you want to make the argument for the extension to become part of the base draft, that's a different conversation, but that's not the one we're having here.\nAttempting to summarise the current status of this issue as a chair: there appears to have been a fair amount of discussion about making a design change, with representatives both in favour and against. It appears there is no omnipotent default policy, choosing an optimal one depends on many factors. The editors have landed an editorial change that attempts to better describe the rationale behind QUIC's current design and highlights some of the factors. There may be further editorial improvements but they do not need to block the design issue. 
The proposal therefore is to close this issue with no design change. We had prepared some slides for the Vancouver IETF to discuss QUIC's ACKs, and how to make QUIC work better than TCP in common asymmetric paths. This is a short slide deck that summarises what we have learned about ACKs in QUIC: URL It compares the ACK traffic of TCP with that of QUIC from the point of view of a path where QUIC's return traffic is somehow constrained or could impact other user traffic. We think that ACKs are a concern for various shared capacity return links (e.g., radio-based methods in WiFi or mobile cellular where ACKs consume spectrum/radio resource or battery; shared capacity satellite broadband return link or the uplink of a cable system). For TCP, such systems commonly involve a PEP or an ACK Filtering mechanism (which clearly isn't wanted or even possible for QUIC). The baseline QUIC performance (as per current spec) suffers over these paths. So our first question is: Do other people think it is important that the default specified for QUIC works well with asymmetric paths? Gorry", "new_text": "receipt by sending one or more ACK frames containing the packet number of the received packet. An endpoint SHOULD treat receipt of an acknowledgment for a packet it did not send as a connection error of type PROTOCOL_VIOLATION, if it is able to detect the condition. 13.2. Endpoints acknowledge all packets they receive and process. However,"}
{"id": "q-en-quicwg-base-drafts-e4904e5447e6f4061d24283fef3e8c2f396a361f0023a218b48fa2c6c1ff7a06", "old_text": "receiver's max_ack_delay value in determining timeouts for timer- based retransmission, as detailed in Section 5.2.1 of QUIC-RECOVERY. An ACK frame SHOULD be generated for at least every second ack- eliciting packet. This recommendation is in keeping with standard practice for TCP RFC5681. A receiver could decide to send an ACK frame less frequently if it has information about how frequently the sender's congestion controller needs feedback, or if the receiver is CPU or bandwidth constrained. In order to assist loss detection at the sender, an endpoint SHOULD send an ACK frame immediately on receiving an ack-eliciting packet", "comments": "This PR adds explicit context around the choice of ACK frequency in the draft. It also takes in Bob Briscoe's suggestion in , of calling it a guidance in light of the best information we have. Also does some editorial cleanup of text blocks that were not in the right sections.\nThanks NAME NAME -- comments incorporated.\nAs currently specified, the default QUIC ACK policy models TCP\u2019s ACK policy, but consumes significantly more return path capacity. This can impact resource usage of the return channel, can constrain forward throughput, and can impact the performance of other sessions sharing an uplink (draft-fairhurst-quic-ack-scaling). This was not discussed in Zurich - where discussion moved instead to the separate topic of draft-iyengar-quic-delayed-ack. In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymetry are presently mitigated by network devices RFC3449 that rely upon observing TCP headers. This results in ossification problems, but does practically have beneit. It reduces the TCP ACK rate over the constrained link, it reduces consumption of \"radio\" transmission opportunities, and reduces traffic volume from ACKs. 
In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymmetry are presently mitigated by network devices RFC3449 that rely upon observing TCP headers. This results in ossification problems, but does practically have benefit. It reduces the TCP ACK rate over the constrained link, it reduces consumption of \"radio\" transmission opportunities, and reduces traffic volume from ACKs. These mitigations are not possible, nor desirable, for QUIC. However, the default performance of QUIC is therefore much worse than for TCP. In QUIC, the default ACK Ratio has to be set by the endpoints implementing the protocol because the sender and receiver have no information about the \"cost\" of sending an ACK using a link along an asymmetric path. These endpoints can also choose a non-default value to enable a new CC, scale to higher rates, etc (e.g. draft-iyengar-quic-delayed-ack). We have shown there can be significant benefit from an ACK Ratio of 1:10 and this is expected to also have benefit for higher levels of asymmetry. The proposed change from 1:2 to 1:10 is predicated on QUIC's design decisions: starting a connection with an Initial Window of 10 is acceptable, QUIC uses PTO as the loss detection method (a predicate to issue ), and QUIC has some form of pacing at the sender, at least to mitigate IW bursts.\nEven though we've avoided straying this far from the TCP spec, I think allowing this would be good for the internet. It should probably be documented for TCP, too. How does ack stretching of this nature work for BBR bandwidth estimation? It's harder to use the ack pacing, no?\nI don't believe QUIC ACKs are significantly larger. In fact, I believe the ACK frame itself is typically smaller than TCP ACKs, but the savings there are used on QUIC's 16-byte auth hash and likely a non-zero-length CID. Do you have a size analysis written up somewhere? I think this is the core issue. It's not that QUIC ACKs are really consuming significantly more capacity in terms of bytes on the wire, it's that middleboxes can't decide to thin the ACKs, because they don't know which packets are ACK-only packets. Did you implement the optimization already described in the transport draft?: \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. 
In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" NAME did and I believe found it hugely reduced the number of ACKs, sometimes too much. As I've said before, I would really prefer not to specify an approach in QUIC transport that is a heuristic like does. In particular, on lower bandwidth networks, 10 packets may be the entire congestion window and is too large a value. On higher bandwidth networks, every 20 packets has been shown to be slightly better than 10, at least when using BBR. I wrote to add some editorial text about why every 2 packets is a SHOULD, not a MUST. Chrome did not default enable the ACK strategy recommends until BBR was launched, because there was a performance regression with Cubic. I'd rather not assume everyone is using BBR or a BBR-like congestion controller. NAME BBR does fine with stretch ACKs. Cubic and Reno suffer in some cases, however.\nQUIC ACKs are larger! The smallest IPv4 QUIC ACK packet is 51 bytes. The ACK frame is 4 bytes, but is sent using a QUIC packet. For IPv4 this is: 20 (IP) + 8 (UDP) + 3 (1+1+1 QUIC header ... assuming a 1 B CID) + 4 (minimum ACK frame size) + 16 (crypto). Other types of frames may also be sent in the same packet, increasing this size. That size difference matters. The point is that QUIC ACKs are consuming more capacity in terms of bytes on the wire, and often also important: in more transmission opportunities at the physical layer. That makes QUIC perform more poorly than it needs to across some paths. The text \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. 
In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" is permissive to do something different - alas the effect of ordering of the processing depends on rate, timing, etc - The present text is not a recommendation to do something appropriate, which I think it needs to be. If this were to describe that a receiver SHOULD reduce the ACK to data packet ratio to 1:10, then that would help. On low bandwidth links, I agree that waiting for 10 packets would not be suitable, but then wouldn't the 4 ACKs/RTT rule still cover this case? (I have no problem believing an ACK every 20 packets can be slightly better than 10 in some cases, but then we have chosen IW at 10 and that relates to burstiness and responsiveness of the basic spec, so I think 1:10 makes more sense as the recommendation. If the CC and path information suggest 1:20 - perhaps as the rate increases, then a sender can \"tune\" as per the transport parameter proposal. I STILL assert that this is a different issue from the expected default.)\nWould it help to give a recommendation like, if the congestion window is large, the ACK ratio should be reduced? Not sure if we can easily give a threshold for the window but to align with the 4 ACKs per RTT rule we could say 40 or to be more conservative maybe 100?\nIf the receiver knew the cwnd... we could add that. The current proposal we made is here (along with some of the arguments about why QUIC and TCP are slightly different): draft-fairhurst-quic-ack-scaling\nSo the min TCP ack is 40 bytes vs QUIC at 50 (0 byte CIDs are common, so I'll assume that), 25% larger in the minimum case. However, with a few SACK blocks, TCP ACKs are likely to be larger than QUIC ACKs, since SACK consumes 10 bytes for the first block and 8 for each additional block. 
By the time the second block is added, there's a good chance QUIC ACKs are smaller and it's almost certainly true with 3 SACK blocks (which I think is a typical number). So I'll go back to my previous statement that the real issue is that the network can't thin QUIC ACKs effectively.\nMaybe \"yes\", maybe \"no\" on a larger size after loss - since QUIC ACK ranges are often larger - but that depends on loss pattern, etc. In our case, we saw more bytes. But \"yes\" on me saying there is a need for QUIC to make good default decisions about the ACK rate, because QUIC becomes responsible for transport working across heterogeneous paths and we need to get good defaults rather than rely on \"ACK-Thinning\" and other network modification.\nDoes anybody have a gut feeling for how very low latency congestion control fares with sparse ACKs? I have experienced that DCTCP is more picky with ACK aggregation than e.g. Reno. I suspect that ACK-thinning can give the same issue as well. A too sparse ACK spacing can also give problems. Another aspect is how e.g. TCP Prague with paced chirps works with sparse ACKs? To me the RTT/4 rule makes sense but I feel that the best alternative depends a lot on both the congestion control and the source material. Large file transfers are more trivial than sources that are occasionally idle.\nI think this isn't the correct question. This is about the default policy. The default policy is about what a receiver does by \"default\" - the current text uses a value of ACK every other packet, 1:2. TCP specified this when transmission rates were low, and TCP depended on individual ACKs to clock data (no pacing, IW=2), detect loss (no PTO), increase cwnd (per ACK packet), etc. As a result, TCP causes ACK congestion and strange interactions with radio access. Performance over WiFi, DOCSIS, etc. suffers unless the network tries to address this. I don't see why QUIC, which doesn't rely on any of these, needs to set a DEFAULT of 1:2. 
This is literally a few lines of code difference, and we could use 1:10. These CC's you mention may have different needs and that needs to be handled by telling the remote that your CC has these specific needs - which I think is what Jana's extension is about.\nI am kind of the donkey in the middle... ACK thinning in networks should be discouraged. Standardization of ACK thinning mechanisms was proposed in 3GPP RAN2, but this request was turned down mainly with reference to the fact that it can harm transport protocol performance and does not work safely with QUIC. Nothing stops UE (terminal and modem) vendors from implementing proprietary ACK-thinning but I certainly hope that 5G UE vendors don't attempt to thin QUIC-ACKs as this is a slippery slope. High ACK rates are a 5G problem mainly for high band NR (=5G at e.g. 28GHz). It is mainly a coverage issue as UEs have a limited output power due to practical and regulatory requirements. At the same time one needs to address the fact that end users don't only download movies today and even VoD streaming implies uplink traffic in the form of chunk requests. And then we have an increasing number of people doing uplink streaming or uploading an infinite amount of pictures of food plates and cats :-) Operators and 5G vendors have to deal with this and solve the problem with deployment of 5G access in various frequency bands to get the best downlink and uplink performance. Reduced ACK rates are of course welcome but I don't currently see a problem that the initial ACK ratio is 1:2, to increase to 1:? as the congestion window grows, bounded by e.g. an RTT/4 rule. But here I need to admit that I at least have not studied this in detail. Ergo... I am not convinced that it is necessary to hardwire the QUIC spec to 1:10, but I also cannot say that it should not be done.\nThis is a small enough change. 
However, there is still a bar for those, and the current level of discussion makes me believe we should close this with no action?\nJust so that it's easy to find, there is data and a continued discussion at URL\nFolks who care about a change here need to prioritize progressing this issue. We're close to being able to last-call the specs, and would like this resolved in time.\nWe\u2019ve completed a round of updated analysis of the proposed change to the QUIC spec, using quicly (changing to a 1:10 policy), and chromium (which currently does 1:10 by default). Our hope is that this change to the spec will actually make standard end-to-end QUIC work better than TCP over asymmetric paths! This is a short slide deck about the performance of the proposed change: URL There have been a number of other changes made by the QUIC ID editors relating to ACK processing, so we wanted to be sure these results were up to date. (These results do not consider additional ACKs after re-ordering - reflecting the latest text removing that.) We also really hope this will motivate a change to the practice of forcing QUIC fallback to TCP (with PEP/Thinning) over highly asymmetric paths. People may also be interested in work for L4S to evaluate the impact of ACK Thinning for TCP-Prague. Please do provide feedback, or any questions on this - as Lars notes, we think the time is running out for considering new changes, even relatively simple ones like this. Gorry P.S. First email: URL\nI believe that it is beneficial to use a 1:10 policy. I understand that this means that a 1:2 policy will still be used in the initial phase (flow start), or do the timer mechanisms kick in to avoid ACKs being transmitted too sparsely in this case?\nYes, our proposal counted ACK-eliciting packets at the receiver (because the receiver does not in general know the congestion state, so you need a heuristic), and we finally chose the same method in current QUIC Chromium - i.e. 
for the first 100 packets we do not use ACK 1:10, changing only after 100 packets.\nOK, it sounds like a good idea to use 1:10 after 100 packets or so. Just one potentially stupid question: is there any special treatment necessary after an RTO?\n[This seems to be a forwarded comment from NAME Hi, Gorry Thank you for sharing your experiments. I'm delighted to see results showing that use of ack10 gives us comparable throughput to ack2. Recently, I have been looking into how ack-frequency affects the computational efficiency of the server [1]. Using the (almost) same setup, I have today run a benchmark comparing three ACK strategies: ack2, ack10, ack 1/8cwnd. ack2: 334Mbps ack10: 414Mbps ack 1/8cwnd: 487Mbps As can be seen, ack10 is better than ack2, but lags behind ack1/8cwnd. That's because there are still frequent ACKs. They take a considerable amount of CPU cycles. If we want to see QUIC being as lightweight as TLS over TCP, it is critically important to reduce ACK frequency as much as possible. This is because the cost of processing ACKs in QUIC is much more expensive than in TCP, due to QUIC being a userspace transport. Based on this, if we are to change what we have in the base drafts, I'd much prefer adopting what we have in the proposed delayed ack extension [2], rather than just changing the default ack policy from ack2 to ack10. By adopting what we have in the delayed ack extension, we can expect better performance, and the endpoints would have the freedom to change the behavior when the default ack policy is deemed insufficient (regardless of the default being ack2 or ack10). [1] URL [2] URL
\nThese results/article on computational efficiency are really interesting. Please see below. This is useful; we see a clear benefit here in using a larger ACK Ratio, also in terms of server load. We can already see the benefit at 1:10 and GSO helps as expected. We think that this is also really true. QUIC ACKs are also larger than TCP ACKs (see URL). I don't think the two approaches conflict. The difference between the two approaches is that the small change to the base specification to use an ACK Ratio of 1:10 ensures the default return path is at least as good as TCP for common traffic (even with common TCP ACK Thinning). Without this change, the end user could experience significant differences in performance between TCP and between different QUIC versions over asymmetric paths. Whereas to me, the transport parameter extension looks like an exciting way to enable many more things, but there are also many ways this could usefully evolve based on CC, offload, pacing etc. and I think this will take much more to work out what is needed, safe and best ... and then to reach consensus. We still also need to specify a default ACK Ratio in this case as well. Best wishes, Gorry\nI'm greatly in favour of draft-iyengar-quic-delayed-ack-00 (BTW, where would you prefer me to give review comments about that?) 
I am much less keen on any default delayed ACK policy, whether 1:2 as now in quic-transport, or the 2-10/100 numbers proposed in draft-fairhurst-quic-ack-scaling-02. My concerns would melt away if the default policy was explicitly /implementation-dependent/ and the draft instead gave /guidance/ to implementers. The draft could still give the same 2-10/100 policy, but as /guidance/ for the current public Internet. The draft should also state the conditions under which this guidance was determined (e.g. Internet paths with BDP between x and y, or whatever). What's the difference? Stating a default policy implies this is a matter of interoperability. It's not. If we change to giving guidance, I'm sure most implementers will just use the guidance. However, implementers will be free to evolve their default as they gain experience. This reduces the possibility that other behaviours will evolve around any default like 2-10/100. For instance, network monitoring using it as a signature. Or congestion controls optimizing around it. In turn, this will reduce the chance of ossification around the first numbers we thought of. Senders can always use the ACK frequency frame to ask a receiver to change from its default. Then receivers will gradually learn what senders prefer them to do, and change their default accordingly. IMO, a continually improving default is important for short flows, so senders can avoid requesting a different ACK frequency right from the start.\nNeed to admit that I have not had too much time to follow the QUIC work lately. I too believe that draft-iyengar-quic-delayed-ack-00 is a good long term solution that allows endpoints to set a desired ACK frequency that is not more than necessary. As experiments show there are good incentives to keep the ACK frequency as low as possible in terms of client/server load and asymmetric link limitations. 
An ACK-FREQUENCY frame will allow endpoints to find the proper balance between the requirements given by the congestion control, the traffic type and the above given limitations. I am not too concerned if it takes an extra development/standards cycle to have that in place in the standard and in running code as long as the QUIC community shows other SDOs that the challenge with QUIC ACKs is being addressed.\nNAME NAME could you please trim the quoted text when you reply by email?\n(I trimmed the comments, but forwarding inline replies to email messages doesn't work well.)\nGiven that this is an area where there is still obviously experimentation and refinement, NAME comment seems on point; baking a specific algorithm as requirements into the specifications doesn't seem like a good idea (although putting constraints on endpoints might make sense). Can we focus on general guidance rather than documenting a specific algorithm? What would that PR look like?\n+1 to NAME suggestion. If we want to make changes for v1, they need to happen now, and they must be in a form that can let us get consensus.\nTo be clear, when I said that having a default policy is not a matter of interoperability, that's only strictly true if QUIC supports something like draft-iyengar-quic-delayed-ack for the sender to ask the receiver to use a certain policy. Bob\nI've proposed more context and text around our current guidance in the draft, see .\nI believe that the text around asymmetric links looks fine, not really necessary to write up much more. What can be good to know is that for the 5G case with limited UL (uplink = modem->base station transmission), the pain that a flow with an unnecessarily high ACK rate causes is on the flow itself or other flows sharing the same access.\nThere are some different points of pain here, the default and the need to adapt to the endpoint: An ACK Ratio (AR 1:1) is the finest granularity of feedback. 
A QUIC AR of 1:1 would consume return path capacity proportional to about 6% of the forward traffic (assuming no loss, etc). This is more than TCP (about 3%). An ACK Ratio (AR 1:2): For a return path bottleneck, QUIC with an AR of 1:2 will still have more impact on sharing flows than TCP would with the same AR 1:2 (because QUIC ACKs are larger, and because networks do not currently thin QUIC ACKs). So QUIC needs to choose the default - is it 1:2, 1:4, 1:8, 1:10? A QUIC AR 1:10 results in an ACK traffic volume of about 1% of the forward traffic. To me, that's a small proportion, which is why I argued that 1:10 is a good recommendation. The default is important when: the endpoints have no knowledge of the asymmetry or the impact they have on a link's layer 2 loading, or the endpoints need to reduce the collateral damage to other IP flows sharing the return path capacity. Negotiation: The AR can be driven by the application or endpoint - e.g., a higher rate connection may well have a benefit from an AR of 1:10, 1:20, 1:100, etc - but such ARs are seldom chosen to reduce the impact on sharing a return bottleneck, more likely they are chosen to benefit the performance of an application or endpoint. That consideration is different, and likely depends on the application or endpoint - which to me is where the flexibility offered by a transport parameter is so important. draft-iyengar-quic-delayed-ack will still need guidance on the \"default\". I argued that the default policy impacts the congestion behaviour when there is appreciable asymmetry, through the load this causes at layer 2 or the collateral damage (congestion impact on sharing flows). Sure, the choice of 1% is just a number I said was reasonable, but I've seldom encountered L2 technologies where less than 1% of forward traffic would be significantly better.\nNAME : I understand your position. While I agree that ack thinning happens with TCP, it is not what I expect to find on a common network path. 
And as far as I know, the endpoints still implement the RFC 5681 recommendation of acking every other packet. What you are arguing is that with ack thinners in the network, TCP's behavior on those network paths is different. However, I do not believe we understand the performance effects of these middleboxes, that is, how they might reduce connection throughput when the asymmetry is not a problem. Importantly, I don't think we should be specifying as default what some middleboxes might be doing without fully understanding the consequences of doing this. Please bear in mind that Chrome's ack policy of 1:10 works with BBR, not with Cubic or Reno. I do not believe there is a magic number at the moment, and as NAME noted, even 1:10 is not a small enough ratio under some conditions. Given this, and given that we know 1:2 is used by TCP endpoints, it makes the most sense to specify that as the suggested guidance. The new text proposed in describes the tradeoff involved, and the rationale for using 1:2. I am strongly against doing anything substantially different than what we have in the draft now without overwhelming information supporting a different position.\nAgree with NAME 1:2 seems a reasonable default, and there seems to be considerable interest in draft-iyengar-quic-delayed-ack, which has a better proposal for stacks that care.\nI think the new ACK text is improving the specification, but I'd like to be sure we have thought about this decision. Let me try to carefully respond on-list here, to see where we agree/disagree: I disagree, I think we shouldn't be perpetuating a myth that a sender receives a TCP ACK for every other packet. Even in the 90's, TCP stack implementations often generated stretch-ACKs for 3 segments. Since then, limited byte counting was introduced and significantly improved the sender's response to stretch ACKs, and that has been good for everyone. 
Senders still rely on byte-counting in TCP to handle Stretch-ACKs, and this need is not decreasing: new network cards reduce per-packet receive processing using Large Receive Offload (LRO) or Generic Receive Offload (GRO). I am saying many TCP senders actually do today see stretch-ACKs. There are papers that have shown significant presence of stretch ACKs, e.g., (H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 45(3):20\u201327, 2015.). A TCP Stretch-ACK of 3 or 4 was common in their datasets (about 10% of cases). In some networks, the proportion of Stretch-ACKs will be much much higher, since ACK Thinning is now widely deployed in WiFi drivers as well as cable, satellite and other access technologies - often reducing ACK size/rate by a factor of two. I am intrigued and would like to know more, in case I missed something? Typical algorithms I am aware of track a queue at the \"bottleneck\" and then make a decision based upon the contents of the queue. If there is no asymmetry, there isn't normally a queue and the algorithm won't hunt to discard an \"ACK\". But, maybe, there is some truth in being wary: Encrypted flows can also build return path queues (as does QUIC traffic). Without an ability to interpret the transport data (which nobody, including me, wants for QUIC), a bottleneck router is forced to either use another rule to drop packets from the queue (such as the smallest packets) or to allow a queue to grow, limiting forward direction throughput. I would say neither of these outcomes is great because the router is trying to fix the transport's asymmetry. However, if QUIC causes this problem, I suggest some method will proliferate as QUIC traffic increases. I didn't find anything in the QUIC spec that was a major concern. It looked to me like QUIC had learned from TCP how to handle stretch-ACKs. 
After that, we changed quicly to use 1:10 with Reno, and things worked well. Didn't NAME also use Reno? Sure, the design of TCP recognised that AR 1:1 generated too much traffic, AR 1:2 for TCP resulted in a Forward:Return traffic ratio of ~1.5%, and QUIC increases this to ~3%. However, as network link speeds increased, this proved too low for many network links and so TCP ACK thinning came into being. This often reduces the TCP AR to around 1:4, and will do this without bias, whatever the outcome is.\nNAME Just to clarify, in our experiment, we reduced the ACK frequency from the default (2) to after leaving slow start, then updated it every 4 PTO. I think that would work fine on a high-bandwidth, stable network. But I am not sure if it would work as well as ack2 on all network conditions. Anyways, I'd explain our experiment as an optimization of ack 2, as it was a reduction of the ack rate after leaving slow start. I think that that was a sensible choice, because IIUC having frequent acks during slow start is essential. Assuming that we'd prefer using ack 2 during slow start, the question is if we need to hard-code in the base draft how to optimize after leaving slow start. While I agree that continuing to use ack 2 after slow start is concerning for certain networks, I am reluctant to spend time on looking for such a fixed set of values across diverse networks (including much slower links or lossy ones). Rather, my preference goes to shipping the base-draft as-is, then moving on to finalize the ack-frequency frame. I think we can assume most if not all of the popular clients will support the ack-frequency frame. Then, it becomes less of a question what the default in the base-draft is.\nThanks kazuho, 1/8 or 1/4 cwnd seems reasonable, as does using Ack 2 for the first part of slow start ... that's close to what we proposed. And yes, there will anyway be cases where any default does not work. 
If QUIC decides to retain Ack 2, can we at least capture that Ack 2 with QUIC doesn't have the same return path performance as Ack 2 with TCP - so that people know that less frequent ACKs are something to think about from the start? I'm doing a talk in PANRG this week, and have decided to talk through this topic (I was already going to talk about things related to return paths). That'll give people a view of some data - and a little thought about the implications this has for QUIC, and whether there is an incentive for network devices to thin QUIC to restore performance.\nNAME : I'd be in support of saying that ack thinning happens for TCP and citing those studies, and that in the absence of that, there might be throughput consequences for QUIC. I think we've said exactly that in other words in , but if we want to say more there, that's fine with me. That would make this an editorial issue, but I'm entirely happy to do more words there. I am against changing the recommendation here for another arbitrary recommendation. I think the right approach is to allow endpoints to specify this during a connection, which is where the extension is helpful. As NAME notes, our hope is that the extension gets widely deployed. If you want to make the argument for the extension to become part of the base draft, that's a different conversation, but that's not the one we're having here.\nAttempting to summarise the current status of this issue as a chair: there appears to have been a fair amount of discussion about making a design change, with representatives both in favour and against. It appears there is no omnipotent default policy; choosing an optimal one depends on many factors. The editors have landed an editorial change that attempts to better describe the rationale behind QUIC's current design and highlights some of the factors. There may be further editorial improvements but they do not need to block the design issue. 
The proposal therefore is to close this issue with no design change. We had prepared some slides for the Vancouver IETF to discuss QUIC's ACKs, and how to make QUIC work better than TCP in common asymmetric paths. This is a short slide deck that summarises what we have learned about ACKs in QUIC: URL It compares the ACK traffic of TCP with that of QUIC from the point of view of a path where QUIC's return traffic is somehow constrained or could impact other user traffic. We think that ACKs are a concern for various shared capacity return links (e.g., radio-based methods in WiFi or mobile cellular where ACKs consume spectrum/radio resource or battery; shared capacity satellite broadband return link or the uplink of a cable system). For TCP, such systems commonly involve a PEP or an ACK Filtering mechanism (which clearly isn't wanted or even possible for QUIC). The baseline QUIC performance (as per current spec) suffers over these paths. So our first question is: Do other people think it is important that the default specified for QUIC works well with asymmetric paths? Gorry", "new_text": "receiver's max_ack_delay value in determining timeouts for timer- based retransmission, as detailed in Section 5.2.1 of QUIC-RECOVERY. Since packets containing only ACK frames are not congestion controlled, an endpoint MUST NOT send more than one such packet in response to receiving an ack-eliciting packet. An endpoint MUST NOT send a non-ack-eliciting packet in response to a non-ack-eliciting packet, even if there are packet gaps which precede the received packet. This avoids an infinite feedback loop of acknowledgements, which could prevent the connection from ever becoming idle. Non-ack-eliciting packets are eventually acknowledged when the endpoint sends an ACK frame in response to other events. In order to assist loss detection at the sender, an endpoint SHOULD send an ACK frame immediately on receiving an ack-eliciting packet"}
{"id": "q-en-quicwg-base-drafts-e4904e5447e6f4061d24283fef3e8c2f396a361f0023a218b48fa2c6c1ff7a06", "old_text": "codepoint in the IP header SHOULD be acknowledged immediately, to reduce the peer's response time to congestion events. As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets. Packets containing PADDING frames are considered to be in flight for congestion control purposes QUIC-RECOVERY. Sending only PADDING frames might cause the sender to become limited by the congestion controller with no acknowledgments forthcoming from the receiver. Therefore, a sender SHOULD ensure that other frames are sent in addition to PADDING frames to elicit acknowledgments from the receiver. An endpoint that is only sending ACK frames will not receive acknowledgments from its peer unless those acknowledgements are", "comments": "This PR adds explicit context around the choice of ACK frequency in the draft. It also takes in Bob Briscoe's suggestion in , of calling it a guidance in light of the best information we have. Also does some editorial cleanup of text blocks that were not in the right sections.\nThanks NAME NAME -- comments incorporated.\nAs currently specified, the default QUIC ACK policy models TCP\u2019s ACK policy, but consumes significantly more return path capacity. This can impact resource usage of the return channel, can constrain forward throughput, and can impact the performance of other sessions sharing an uplink (draft-fairhurst-quic-ack-scaling). This was not discussed in Zurich - where discussion moved instead to the separate topic of draft-iyengar-quic-delayed-ack. 
In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymmetry are presently mitigated by network devices RFC3449 that rely upon observing TCP headers. This results in ossification problems, but does have practical benefit. It reduces the TCP ACK rate over the constrained link, it reduces consumption of \"radio\" transmission opportunities, and reduces traffic volume from ACKs. These mitigations are not possible, nor desirable for QUIC. However, the default performance of QUIC is therefore much worse than for TCP. In QUIC, the default ACK Ratio has to be set by the endpoints implementing the protocol because the sender and receiver have no information about the \"cost\" of sending an ACK using a link along an asymmetric path. These endpoints can also choose a non-default value to enable a new CC, scale to higher rates, etc (e.g. draft-iyengar-quic-delayed-ack). We have shown there can be significant benefit from an ACK Ratio of 1:10 and this is expected to also have benefit for higher levels of asymmetry. The proposed change from 1:2 to 1:10 is predicated on QUIC's design decisions: starting a connection with an Initial Window of 10 is acceptable, QUIC uses PTO as the loss detection method (a predicate to issue ), and QUIC has some form of pacing at the sender, at least to mitigate IW bursts.\nEven though we've avoided straying this far from the TCP spec, I think allowing this would be good for the internet. It should probably be documented for TCP, too. How does ack stretching of this nature work for BBR bandwidth estimation? It's harder to use the ack pacing, no?\nI don't believe QUIC ACKs are significantly larger. In fact, I believe the ACK frame itself is typically smaller than TCP ACKs, but the savings there are used on QUIC's 16-byte auth hash and likely a non-zero-length CID. Do you have a size analysis written up somewhere? I think this is the core issue. 
It's not that QUIC ACKs are really consuming significantly more capacity in terms of bytes on the wire, it's that middleboxes can't decide to thin the ACKs, because they don't know which packets are ACK-only packets. Did you implement the optimization already described in the transport draft?: \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" NAME did and I believe found it hugely reduced the number of ACKs, sometimes too much. As I've said before, I would really prefer not to specify an approach in QUIC transport that is a heuristic like does. In particular, on lower bandwidth networks, 10 packets may be the entire congestion window and is too large a value. On higher bandwidth networks, every 20 packets has been shown to be slightly better than 10, at least when using BBR. I wrote to add some editorial text about why every 2 packets is a SHOULD, not a MUST. Chrome did not default enable the ACK strategy recommends until BBR was launched, because there was a performance regression with Cubic. I'd rather not assume everyone is using BBR or a BBR-like congestion controller. NAME BBR does fine with stretch ACKs. Cubic and Reno suffer in some cases, however.\nQUIC ACKs are larger! The smallest IPv4 QUIC ACK packet is 51 bytes. The ACK frame is 4 bytes, but is sent using a QUIC packet. For IPv4 this is: 20 (IP) + 8 (UDP) + 3 (1+1+1 QUIC header ... assuming a 1 B CID) + 4 (minimum ACK frame size) + 16 (crypto). Other types of frames may also be sent in the same packet, increasing this size. That size difference matters. The point is that QUIC ACKs are consuming more capacity in terms of bytes on the wire, and often also important: in more transmission opportunities at the physical layer. That makes QUIC perform more poorly than it needs to across some paths. 
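The size arithmetic quoted above is easy to sanity-check. Here is a minimal sketch (mine, not from the thread) that reproduces the 51-byte minimum, assuming the short-header layout and 1-byte CID stated in the comment:

```python
# Reproduce the minimal ACK-packet arithmetic from the discussion.
# All sizes are in bytes; the 1-byte CID and short-header layout are
# the assumptions stated in the comment above.
IPV4_HEADER = 20
UDP_HEADER = 8
QUIC_SHORT_HEADER = 1 + 1 + 1   # flags byte + 1-byte CID + 1-byte packet number
MIN_ACK_FRAME = 4               # type, largest acked, delay, range count/length
AEAD_TAG = 16                   # the "crypto" term: AEAD authentication tag

def min_quic_ack_ipv4() -> int:
    return IPV4_HEADER + UDP_HEADER + QUIC_SHORT_HEADER + MIN_ACK_FRAME + AEAD_TAG

MIN_TCP_ACK_IPV4 = 20 + 20      # IPv4 header + option-less TCP header

print(min_quic_ack_ipv4())      # 51
print(MIN_TCP_ACK_IPV4)         # 40
```

So in the no-options, no-loss case the minimal QUIC ACK packet carries 11 bytes more than a bare TCP ACK, before any extra frames are bundled in.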
The text \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" is permissive to do something different - alas the effect of ordering of the processing depends on rate, timing, etc - The present text is not a recommendation to do something appropriate, which I think it needs to be. If this were to describe that a receiver SHOULD reduce the ACK to data packet ratio to 1:10, then that would help. On low bandwidth links, then I agree that waiting for 10 packets would not be suitable, but then wouldn't the 4 ACKs/RTT rule still cover this case? (I have no problems believing an ACK every 20 packets can be slightly better than 10 in some cases, but then we have chosen IW at 10 and that relates to burstiness and responsiveness of the basic spec, so I think 1:10 makes more sense as the recommendation. If the CC and path information suggest 1:20 - perhaps as the rate increases, then a sender can \"tune\" as per the transport parameter proposal. Note that I STILL assert that this is a different issue from the expected default.)\nWould it help to give a recommendation like, if the congestion window is large, the ACK ratio should be reduced? Not sure if we can easily give a threshold for the window but to align with the 4 ACKs per RTT rule we could say 40 or to be more conservative maybe 100?\nIf the receiver knew the cwnd... we could add that. The current proposal we made is here (along with some of the arguments about why QUIC and TCP are slightly different): draft-fairhurst-quic-ack-scaling\nSo the min TCP ack is 40 bytes vs QUIC at 50 (0 byte CIDs are common, so I'll assume that), 25% larger in the minimum case. However, with a few SACK blocks, TCP ACKs are likely to be larger than QUIC ACKs, since SACK consumes 10 bytes for the first block and 8 for each additional block.
By the time the second block is added, there's a good chance QUIC ACKs are smaller and it's almost certainly true with 3 SACK blocks (which I think is a typical number). So I'll go back to my previous statement that the real issue is that the network can't thin QUIC ACKs effectively.\nMay be \"yes\", may be \"no\" on a larger size after loss - since QUIC ACK ranges are often larger - but that depends on loss pattern, etc. In our case, we saw more bytes. But \"yes\" on me saying there is a need for QUIC to make good default decisions about the ACK rate, because QUIC becomes responsible for transport working across heterogeneous paths and we need to get good defaults rather than rely on \"ACK-Thinning\" and other network modification.\nDoes anybody have a gut feeling for how very low latency congestion control fares with sparse ACKs? I have experienced that DCTCP is more picky with ACK aggregation than e.g. Reno. I suspect that ACK-thinning can give the same issue as well. A too sparse ACK spacing can also give problems. Another aspect is how e.g. TCP Prague with paced chirps works with sparse ACKs? To me the RTT/4 rule makes sense but I feel that the best alternative depends a lot on both the congestion control and the source material. Large file transfers are more trivial than sources that are occasionally idle.\nI think this isn't the correct question. This is about the default policy. The default policy is about what a receiver does by \"default\" - the current text uses a value of ACK every other packet, 1:2. TCP specified this when transmission rates were low, and TCP depended on individual ACKs to clock data (no pacing, IW=2), detect loss (no PTO), increase cwnd (per ACK packet), etc. In response, TCP causes ACK congestion and strange interactions with radio access. Performance over WiFi, DOCSIS etc suffers unless the network tries to address this. I don't see why QUIC, which doesn't rely on any of these, needs to set a DEFAULT of 1:2. 
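The SACK-size crossover claimed above can be sketched with the per-block costs quoted in the thread (10 bytes for the first TCP SACK block, 8 for each additional one), plus my own assumption that each extra QUIC ACK range costs about 2 varint bytes (one gap, one range length):

```python
def tcp_ack_size(sack_blocks: int) -> int:
    # 40-byte IPv4 + option-less TCP base; the SACK option costs 10 bytes
    # for the first block and 8 for each additional block (thread figures).
    base = 20 + 20
    return base if sack_blocks == 0 else base + 10 + 8 * (sack_blocks - 1)

def quic_ack_size(extra_ranges: int) -> int:
    # 51-byte minimal QUIC ACK packet; assume ~2 bytes per additional ACK
    # range (a varint gap plus a varint range length, for small values).
    return 51 + 2 * extra_ranges

for n in range(4):
    print(n, tcp_ack_size(n), quic_ack_size(n))
```

Under these assumptions QUIC's ACK packet becomes the smaller of the two at the second block (55 vs 58 bytes) and clearly so at three (57 vs 66), matching the claim above, while with no loss TCP's 40-byte ACK stays well below QUIC's 51.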
This is literally a few lines of code difference, and we could use 1:10. These CCs you mention may have different needs and that needs to be handled by telling the remote that your CC has these specific needs - which I think is what Jana's extension is about.\nI am kind of the donkey in the middle... ACK thinning in networks should be discouraged. Standardization of ACK thinning mechanisms was proposed in 3GPP RAN2; this request was turned down mainly on the grounds that it can harm transport protocol performance and does not work safely with QUIC. Nothing stops UE (terminal and modem) vendors from implementing proprietary ACK-thinning but I certainly hope that 5G UE vendors don't attempt to thin QUIC-ACKs as this is a slippery slope. High ACK rates are a 5G problem mainly for high band NR (=5G at e.g. 28 GHz). It is mainly a coverage issue as UEs have a limited output power due to practical and regulatory requirements. At the same time one needs to recognize that end users don't only download movies today and even VoD streaming implies uplink traffic in the form of chunk requests. And then we have an increasing amount of people doing uplink streaming or uploading an infinite amount of pictures of food plates and cats :-) Operators and 5G vendors have to deal with this and solve the problem with deployment of 5G access in various frequency bands to get the best downlink and uplink performance. Reduced ACK rates are of course welcome but I don't currently see a problem that the initial ACK ratio is 1:2, to increase to 1:? as the congestion window grows, bounded by e.g. an RTT/4 rule. But here I need to admit that at least I have not studied this in detail. Ergo... I am not convinced that it is necessary to hardwire the QUIC spec to 1:10, but I also cannot say that it should not be done.\nThis is a small enough change. 
However, there is still a bar for those, and the current level of discussion makes me believe we should close this with no action?\nJust so that it's easy to find, there is data and a continued discussion at URL\nFolks who care about a change here need to prioritize progressing this issue. We're close to being able to last-call the specs, and would like this resolved in time.\nWe've completed a round of updated analysis of the proposed change to the QUIC spec, using quicly (changing to a 1:10 policy), and chromium (which currently does 1:10 by default). Our hope is that this change to the spec will actually make standard end-to-end QUIC work better than TCP over asymmetric paths! This is a short slide deck about the performance of the proposed change: URL There have been a number of other changes made by the QUIC ID editors relating to ACK processing, so we wanted to be sure these results were up to date. (These results do not consider additional ACKs after re-ordering - reflecting the latest text removing that.) We also really hope this will motivate a change in the practice of forcing QUIC fallback to TCP (with PEP/Thinning) over highly asymmetric paths. People may also be interested in work for L4S to evaluate the impact of ACK Thinning for TCP-Prague. Please do provide feedback, or any questions on this - as Lars notes, we think the time is running out for considering new changes, even relatively simple ones like this. Gorry P.S. First email: URL\nI believe that it is beneficial to use a 1:10 policy. Does this mean that a 1:2 policy will still be used in the initial phase (flow start), or do the timer mechanisms kick in to avoid ACKs being transmitted too sparsely in this case?\nYes, our proposal counted ACK-eliciting packets at the receiver (because the receiver does not in general know the congestion state, so you need a heuristic), and we finally chose the same method in current QUIC Chromium - i.e. 
for the first 100 packets we do not use ACK 1:10, changing only after 100 packets.\nOK, it sounds like a good idea to use 1:10 after 100 packets or so. Just one potentially stupid question: is there any special treatment necessary after an RTO?\n[This seems to be a forwarded comment from NAME Hi, Gorry Thank you for sharing your experiments. I'm delighted to see results showing that use of ack10 gives us comparable throughput to ack2. Recently, I have been looking into how ack-frequency affects the computational efficiency of the server [1]. Using the (almost) same setup, I have today run a benchmark comparing three ACK strategies: ack2, ack10, ack 1/8cwnd. ack2: 334Mbps ack10: 414Mbps ack 1/8cwnd: 487Mbps As can be seen, ack10 is better than ack2, but lags behind ack1/8cwnd. That's because there are still frequent ACKs. They take a considerable amount of CPU cycles. If we want to see QUIC being as lightweight as TLS over TCP, it is critically important to reduce ACK frequency as much as possible. This is because the cost of processing an ACK in QUIC is much more expensive than in TCP, due to QUIC being a userspace transport. Based on this, if we are to change what we have in the base drafts, I'd much prefer adopting what we have in the proposed delayed ack extension [2], rather than just changing the default ack policy from ack2 to ack10. By adopting what we have in the delayed ack extension, we can expect better performance, and the endpoints would have the freedom to change the behavior when the default ack policy is deemed insufficient (regardless of the default being ack2 or ack10). [1] URL [2] URL 
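The receiver heuristic described above (ACK 1:2 for the first 100 ack-eliciting packets, 1:10 afterwards) can be sketched as follows. The class and names are hypothetical, and a real receiver would also arm a max_ack_delay timer so a partial batch is still acknowledged promptly:

```python
class AckPolicy:
    """Count-based ACK decision: 1:2 during a warm-up, then 1:10."""
    def __init__(self, warmup_packets: int = 100,
                 initial_threshold: int = 2, steady_threshold: int = 10):
        self.warmup = warmup_packets
        self.initial = initial_threshold
        self.steady = steady_threshold
        self.received = 0   # ack-eliciting packets seen on the connection
        self.unacked = 0    # ack-eliciting packets since the last ACK sent

    def on_ack_eliciting_packet(self) -> bool:
        """Return True if an ACK should be sent immediately."""
        self.received += 1
        self.unacked += 1
        threshold = self.initial if self.received <= self.warmup else self.steady
        if self.unacked >= threshold:
            self.unacked = 0
            return True
        return False        # otherwise (re)arm the max_ack_delay timer

policy = AckPolicy()
acks = sum(policy.on_ack_eliciting_packet() for _ in range(1000))
print(acks)  # 50 ACKs for the first 100 packets + 90 for the next 900 = 140
```

Keeping 1:2 during the warm-up preserves dense feedback while the sender's window and RTT estimate are still forming, which is the rationale given in the thread for not thinning from the very first packet.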
These results/article on computational efficiency are really interesting. Please see below. This is useful, we see there's a clear benefit here in using a larger ACK Ratio also in terms of server load. We can see already the benefit at 1:10 and GSO helps as expected. We think that this is also really true. QUIC ACKs are also larger than TCP ACKs (see URL). I don't think the two approaches conflict. The difference between the two approaches is that the small change to the base specification to use an ACK Ratio 1:10 ensures the default return path is at least as good as TCP for common traffic (even with common TCP ACK Thinning). Without this change, the end user could experience significant differences in performance between TCP and between different QUIC versions over asymmetric paths. Whereas to me, the transport parameter extension looks like an exciting way to enable many more things, but there are also many ways this could usefully evolve based on CC, offload, pacing etc. and I think this will take much more to work out what is needed, safe and best ... than to reach consensus. We still also need to specify a default ACK Ratio in this case as well. Best wishes, Gorry\nI'm greatly in favour of draft-iyengar-quic-delayed-ack-00 (BTW, where would you prefer me to give review comments about that?) 
I am much less keen on any default delayed ACK policy, whether 1:2 as now in quic-transport, or the 2-10/100 numbers proposed in draft-fairhurst-quic-ack-scaling-02. My concerns would melt away if the default policy was explicitly /implementation-dependent/ and the draft instead gave /guidance/ to implementers. The draft could still give the same 2-10/100 policy, but as /guidance/ for the current public Internet. The draft should also state the conditions under which this guidance was determined (e.g. Internet paths with BDP between x and y, or whatever). What's the difference? Stating a default policy implies this is a matter of interoperability. It's not. If we change to giving guidance, I'm sure most implementers will just use the guidance. However, implementers will be free to evolve their default as they gain experience. This reduces the possibility that other behaviours will evolve around any default like 2-10/100. For instance, network monitoring using it as a signature. Or congestion controls optimizing around it. In turn, this will reduce the chance of ossification around the first numbers we thought of. Senders can always use the ACK frequency frame to ask a receiver to change from its default. Then receivers will gradually learn what senders prefer them to do, and change their default accordingly. IMO, a continually improving default is important for short flows, so senders can avoid requesting a different ACK frequency right from the start.\nNeed to admit that I have not had too much time to follow the QUIC work lately. I too believe that draft-iyengar-quic-delayed-ack-00 is a good long term solution that allows endpoints to set a desired ACK frequency that is not more than necessary. As experiments show there are good incentives to keep the ACK frequency as low as possible in terms of client/server load and asymmetric link limitations. 
An ACK-FREQUENCY frame will allow endpoints to find the proper balance between the requirements given by the congestion control, the traffic type and the above given limitations. I am not too concerned if it takes an extra development/standards cycle to have that in place in the standard and in running code as long as the QUIC community shows other SDOs that the challenge with QUIC ACKs is being addressed.\nNAME NAME could you please trim the quoted text when you reply by email?\n(I trimmed the comments, but forwarding inline replies to email messages doesn't work well.)\nGiven that this is an area where there is still obviously experimentation and refinement, NAME comment seems on point; baking a specific algorithm as requirements into the specifications doesn't seem like a good idea (although putting constraints on endpoints might make sense). Can we focus on general guidance rather than documenting a specific algorithm? What would that PR look like?\n+1 to NAME suggestion. If we want to make changes for v1, they need to happen now, and they must be in a form that can let us get consensus.\nTo be clear, when I said that having a default policy is not a matter of interoperability, that's only strictly true if QUIC supports something like draft-iyengar-quic-delayed-ack for the sender to ask the receiver to use a certain policy. Bob\nI've proposed more context and text around our current guidance in the draft, see .\nI believe that the text around asymmetric links looks fine, not really necessary to write up much more. What can be good to know is that for the 5G case with limited UL (uplink = modem->base station transmission), the pain that a flow with an unnecessarily high ACK rate causes is on the flow itself or other flows sharing the same access.\nThere are some different points of pain here, the default and the need to adapt to the endpoint: An ACK Ratio (AR 1:1): is the finest granularity of feedback. 
A QUIC AR of 1:1 would consume return path capacity proportional to about 6% of the forward traffic (assuming no loss, etc). This is more than TCP (about 3%). An ACK Ratio (AR 1:2): For a return path bottleneck, QUIC with an AR of 1:2 will still have more impact on sharing flows than TCP would with the same AR 1:2 (because QUIC ACKs are larger, and because networks do not currently thin QUIC ACKs). So QUIC needs to choose the default - is it 1:2, 1:4, 1:8, 1:10? A QUIC AR 1:10 results in an ACK traffic volume of about 1% of the forward traffic. To me, that's a small proportion, which is why I argued that 1:10 is a good recommendation. The default is important when: the endpoints have no knowledge of the asymmetry or the impact they have on a link's layer 2 loading, or the endpoints need to reduce the collateral damage to other IP flows sharing the return path capacity. Negotiation: The AR can be driven by the application or endpoint - e.g., a higher rate connection may well have a benefit from an AR of 1:10, 1:20, 1:100, etc - but such ARs are seldom chosen to reduce the impact on sharing a return bottleneck, more likely they are chosen to benefit the performance of an application or endpoint. That consideration is different, and likely depends on the application or endpoint - which to me is where the flexibility offered by a transport parameter is so important. draft-iyengar-quic-delayed-ack will still need guidance on the \"default\". I argued that the default policy impacts the congestion behaviour when there is appreciable asymmetry, via the load this causes at layer 2 or the collateral damage (congestion impact on sharing flows). Sure, the choice of 1% is just a number I said was reasonable, but I've seldom encountered L2 technologies where less than 1% of forward traffic would be significantly better.\nNAME : I understand your position. While I agree that ack thinning happens with TCP, it is not what I expect to find on a common network path. 
And as far as I know, the endpoints still implement the RFC 5681 recommendation of acking every other packet. What you are arguing is that with ack thinners in the network, TCP's behavior on those network paths is different. However, I do not believe we understand the performance effects of these middleboxes, that is, how they might reduce connection throughput when the asymmetry is not a problem. Importantly, I don't think we should be specifying as default what some middleboxes might be doing without fully understanding the consequences of doing this. Please bear in mind that Chrome's ack policy of 1:10 works with BBR, not with Cubic or Reno. I do not believe there is a magic number at the moment, and as NAME noted, even 1:10 is not a small enough ratio under some conditions. Given this, and given that we know 1:2 is used by TCP endpoints, it makes the most sense to specify that as the suggested guidance. The new text proposed in describes the tradeoff involved, and the rationale for using 1:2. I am strongly against doing anything substantially different than what we have in the draft now without overwhelming information supporting a different position.\nAgree with NAME 1:2 seems a reasonable default, and there seems to be considerable interest in draft-iyengar-quic-delayed-ack, which has a better proposal for stacks that care.\nI think the new ACK text is improving the specification, but I'd like to be sure we have thought about this decision. Let me try to carefully respond on-list here, to see where we agree/disagree: I disagree, I think we shouldn't be perpetuating a myth that a sender receives a TCP ACK for every other packet. Even in 1990s implementations, TCP stacks often generated stretch-ACKs for 3 segments. Since then, limited byte counting was introduced and significantly improved the sender's response to stretch ACKs, and that has been good for everyone. 
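Going back to the AR percentages given earlier (about 6% of forward traffic at 1:1, about 1% at 1:10), the back-of-envelope form of that calculation is simple. A sketch using the minimal 51-byte QUIC ACK and full-size 1500-byte data packets (my assumptions; the thread's higher figures presumably reflect larger real ACKs and link-layer overheads):

```python
def ack_fraction(ack_size: int, data_size: int, ack_ratio: int) -> float:
    """Bytes of ACK traffic on the return path per byte of forward traffic."""
    return ack_size / (ack_ratio * data_size)

QUIC_ACK, TCP_ACK, DATA = 51, 40, 1500
for ratio in (1, 2, 10):
    print(f"AR 1:{ratio}  QUIC {ack_fraction(QUIC_ACK, DATA, ratio):.2%}  "
          f"TCP {ack_fraction(TCP_ACK, DATA, ratio):.2%}")
```

With these minimal packet sizes QUIC comes out around 3.4% / 1.7% / 0.34% at 1:1 / 1:2 / 1:10, so whatever absolute sizes are assumed, moving from 1:2 to 1:10 cuts return-path ACK volume by a factor of five.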
Senders still rely on byte-counting in TCP to handle Stretch-ACKs, and this need is not decreasing: new network cards reduce per-packet receive processing using Large Receive Offload (LRO) or Generic Receive Offload (GRO). I am saying many TCP senders actually do today see stretch-ACKs. There are papers that have shown significant presence of stretch ACKs, e.g., (H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 45(3):20–27, 2015.). A TCP Stretch-ACK of 3 or 4 was common in their datasets (about 10% of cases). In some networks, the proportion of Stretch-ACKs will be much much higher, since ACK Thinning is now widely deployed in WiFi drivers as well as cable, satellite and other access technologies - often reducing ACK size/rate by a factor of two. I am intrigued and would like to know more, in case I missed something? Typical algorithms I am aware of track a queue at the \"bottleneck\" and then make a decision based upon the contents of the queue. If there is no asymmetry, there isn't normally a queue and the algorithm won't hunt to discard an \"ACK\". But, maybe, there is some truth in being wary: Encrypted flows can also build return path queues (as does QUIC traffic). Without an ability to interpret the transport data (which nobody, including me, wants for QUIC), a bottleneck router is forced to either use another rule to drop packets from the queue (such as the smallest packets) or to allow a queue to grow, limiting forward direction throughput. I would say neither of these outcomes is great because the router is trying to fix the transport's asymmetry. However, if QUIC causes this problem, I suggest some method will proliferate as QUIC traffic increases. I didn't find something in the QUIC spec that was a major concern. It looked to me like QUIC had learned from TCP how to handle stretch-ACKs. 
After that, we changed quicly to use 1:10 with Reno, and things worked well. Didn't NAME also use Reno? Sure, the design of TCP recognised that AR 1:1 generated too much traffic; AR 1:2 for TCP resulted in a Forward:Return traffic ratio of ~1.5%, and QUIC increases this to ~3%. However, as network link speeds increased, this proved too low for many network links and so TCP ACK thinning came into being. This often reduces the TCP AR to around 1:4 and will do this without bias, whatever the outcome is.\nNAME Just to clarify, in our experiment, we reduced the ACK frequency from the default (2) to after leaving slow start, then updated it every 4 PTO. I think that would work fine on a high-bandwidth, stable network. But I am not sure if it would work as well as ack2 on all network conditions. Anyways, I'd explain our experiment as an optimization of ack 2, as it was a reduction of the ack rate after leaving slow start. I think that that was a sensible choice, because IIUC having frequent acks during slow start is essential. Assuming that we'd prefer using ack 2 during slow start, the question is if we need to hard-code in the base draft how to optimize after leaving slow start. While I agree that continuing to use ack 2 after slow start is concerning for certain networks, I am reluctant to spend time on looking for such a fixed set of values across diverse networks (including much slower links or lossy ones). Rather, my preference goes to shipping the base-draft as-is, then moving on to finalize the ack-frequency frame. I think we can assume most if not all of the popular clients will support the ack-frequency frame. Then, it becomes less of a question what the default in the base-draft is.\nThanks kazuho, 1/8 or 1/4 cwnd seems reasonable, as does using Ack 2 for the first part of slow start ... that's close to what we proposed. And yes there will anyway be cases where any default does not work. 
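The "ack 1/8 cwnd" idea from the benchmark above can be sketched as a cwnd-scaled threshold. This is a hypothetical function, not the benchmarked implementation; the experiment described here also re-evaluated the value every 4 PTOs, which this sketch omits:

```python
def ack_threshold(cwnd_packets: int, in_slow_start: bool) -> int:
    """Number of ack-eliciting packets to batch before sending an ACK."""
    if in_slow_start:
        return 2                      # keep frequent feedback while probing
    return max(2, cwnd_packets // 8)  # ~8 ACKs per round trip once cwnd grows

print(ack_threshold(10, False))   # 2  -> small windows keep ACK 1:2
print(ack_threshold(80, False))   # 10 -> comparable to the 1:10 proposal
print(ack_threshold(400, False))  # 50 -> far sparser on fast paths
```

Scaling with cwnd rather than a fixed ratio addresses the earlier objection that on low-bandwidth links 10 packets may be the entire congestion window, while still thinning aggressively on fast paths.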
If QUIC decides to retain Ack 2, can we at least capture that Ack 2 with QUIC doesn't have the same return path performance as Ack 2 with TCP - so that people know that less frequent ACKs are something to think about from the start? I'm doing a talk in PANRG this week, and have decided to talk through this topic (I was already going to talk about things related to return paths). That'll give people a view of some data - and a little thought about the implications this has for QUIC, and whether there is an incentive for network devices to thin QUIC to restore performance.\nNAME : I'd be in support of saying that ack thinning happens for TCP and cite those studies, and that in the absence of that, there might be throughput consequences for QUIC. I think we've said exactly that in other words in , but if we want to say more there, that's fine with me. That would make this an editorial issue, but I'm entirely happy to do more words there. I am against changing the recommendation here for another arbitrary recommendation. I think the right approach is to allow endpoints to specify this during a connection, which is where the extension is helpful. As NAME notes, our hope is that the extension gets widely deployed. If you want to make the argument for the extension to become part of the base draft, that's a different conversation, but that's not the one we're having here.\nAttempting to summarise the current status of this issue as a chair: there appears to have been a fair amount of discussion about making a design change, with representatives both in favour and against. It appears there is no omnipotent default policy; choosing an optimal one depends on many factors. The editors have landed an editorial change that attempts to better describe the rationale behind QUIC's current design and highlights some of the factors. There may be further editorial improvements but they do not need to block the design issue. 
The proposal therefore is to close this issue with no design change. We had prepared some slides for the Vancouver IETF to discuss QUIC's ACKs, and how to make QUIC work better than TCP in common asymmetric paths. This is a short slide deck that summarises what we have learned about ACKs in QUIC: URL It compares the ACK traffic of TCP with that of QUIC from the point of view of a path where QUIC's return traffic is somehow constrained or could impact other user traffic. We think that ACKs are a concern for various shared capacity return links (e.g., radio-based methods in WiFi or mobile cellular where ACKs consume spectrum/radio resource or battery; shared capacity satellite broadband return link or the uplink of a cable system). For TCP, such systems commonly involve a PEP or an ACK Filtering mechanism (which clearly isn't wanted or even possible for QUIC). The baseline QUIC performance (as per current spec) suffers over these paths. So our first question is: Do other people think it is important that the default specified for QUIC works well with asymmetric paths? Gorry", "new_text": "codepoint in the IP header SHOULD be acknowledged immediately, to reduce the peer's response time to congestion events. The algorithms in QUIC-RECOVERY are expected to be resilient to receivers that do not follow guidance offered above. However, an implementer should only deviate from these requirements after careful consideration of the performance implications of doing so. An endpoint that is only sending ACK frames will not receive acknowledgments from its peer unless those acknowledgements are"}
{"id": "q-en-quicwg-base-drafts-e4904e5447e6f4061d24283fef3e8c2f396a361f0023a218b48fa2c6c1ff7a06", "old_text": "be acknowledged, an endpoint MAY wait until an ack-eliciting packet has been received to bundle an ACK frame with outgoing frames. The algorithms in QUIC-RECOVERY are resilient to receivers that do not follow guidance offered above. However, an implementor should only deviate from these requirements after careful consideration of the performance implications of doing so. Packets containing only ACK frames are not congestion controlled, so there are limits on how frequently they can be sent. An endpoint MUST NOT send more than one ACK-frame-only packet in response to receiving an ack-eliciting packet. An endpoint MUST NOT send a non- ack-eliciting packet in response to a non-ack-eliciting packet, even if there are packet gaps which precede the received packet. Limiting ACK frames avoids an infinite feedback loop of acknowledgements, which could prevent the connection from ever becoming idle. However, the endpoint acknowledges non-ACK-eliciting packets when it sends an ACK frame. An endpoint SHOULD treat receipt of an acknowledgment for a packet it did not send as a connection error of type PROTOCOL_VIOLATION, if it is able to detect the condition. 13.2.2. When an ACK frame is sent, one or more ranges of acknowledged packets are included. Including older packets reduces the chance of spurious", "comments": "This PR adds explicit context around the choice of ACK frequency in the draft. It also takes in Bob Briscoe's suggestion in , of calling it a guidance in light of the best information we have. Also does some editorial cleanup of text blocks that were not in the right sections.\nThanks NAME NAME -- comments incorporated.\nAs currently specified, the default QUIC ACK policy models TCP\u2019s ACK policy, but consumes significantly more return path capacity. 
This can impact resource usage of the return channel, can constrain forward throughput, and can impact the performance of other sessions sharing an uplink (draft-fairhurst-quic-ack-scaling). This was not discussed in Zurich - where discussion moved instead to the separate topic of draft-iyengar-quic-delayed-ack. In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymmetry are presently mitigated by network devices RFC3449 that rely upon observing TCP headers. This results in ossification problems, but does practically have benefit. It reduces the TCP ACK rate over the constrained link, it reduces consumption of \"radio\" transmission opportunities, and reduces traffic volume from ACKs. These mitigations are not possible, nor desirable for QUIC. However, the default performance of QUIC is therefore much worse than for TCP. In QUIC, the default ACK Ratio has to be set by the endpoints implementing the protocol because the sender and receiver have no information about the \"cost\" of sending an ACK using a link along an asymmetric path. These endpoints can also choose a non-default value to enable a new CC, scale to higher rates, etc (e.g. draft-iyengar-quic-delayed-ack). We have shown there can be significant benefit from an ACK Ratio of 1:10 and this is expected to also have benefit for higher levels of asymmetry. The proposed change from 1:2 to 1:10 is predicated on QUIC's design decisions: starting a connection with an Initial Window of 10 is acceptable, QUIC uses PTO as the loss detection method (a predicate to issue ), and QUIC has some form of pacing at the sender, at least to mitigate IW bursts.\nEven though we've avoided straying this far from the TCP spec, I think allowing this would be good for the internet. It should probably be documented for TCP, too. How does ack stretching of this nature work for BBR bandwidth estimation?
It's harder to use the ack pacing, no?\nI don't believe QUIC ACKs are significantly larger. In fact, I believe the ACK frame itself is typically smaller than TCP ACKs, but the savings there are used on QUIC's 16-byte auth hash and likely a non-zero-length CID. Do you have a size analysis written up somewhere? I think this is the core issue. It's not that QUIC ACKs are really consuming significantly more capacity in terms of bytes on the wire, it's that middleboxes can't decide to thin the ACKs, because they don't know which packets are ACK-only packets. Did you implement the optimization already described in the transport draft?: \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" NAME did and I believe found it hugely reduced the number of ACKs, sometimes too much. As I've said before, I would really prefer not to specify an approach in QUIC transport that is a heuristic like does. In particular, on lower bandwidth networks, 10 packets may be the entire congestion window and is too large a value. On higher bandwidth networks, every 20 packets has been shown to be slightly better than 10, at least when using BBR. I wrote to add some editorial text about why every 2 packets is a SHOULD, not a MUST. Chrome did not default enable the ACK strategy recommends until BBR was launched, because there was a performance regression with Cubic. I'd rather not assume everyone is using BBR or a BBR-like congestion controller. NAME BBR does fine with stretch ACKs. Cubic and Reno suffer in some cases, however.\nQUIC ACKs are larger! The smallest IPv4 QUIC ACK packet is 51 bytes. The ACK frame is 4 bytes, but is sent using a QUIC packet. For IPv4 this is: 20 (IP) + 8 (UDP) + 3 (1+1+1 QUIC header ... assuming a 1 B CID) + 4 (minimum ACK frame size) + 16 (crypto).
Other types of frames may also be sent in the same packet, increasing this size. That size difference matters. The point is that QUIC ACKs are consuming more capacity in terms of bytes on the wire, and often also important: in more transmission opportunities at the physical layer. That makes QUIC perform more poorly than it needs to across some paths. The text \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" Is permissive to do something different - alas the effect of ordering of the processing depends on rate, timing, etc - The present text is not a recommendation to do something appropriate, which I think it needs to be. If this were to describe that a receiver SHOULD reduce the ACK to data packet ratio to 1:10, then that would help. On low bandwidth links, then I agree that waiting for 10 packets would not be suitable, but then wouldn't the rule 4 ACKs/RTT still cover this case? (I have no problems believing an ACK every 20 packets can be slightly better than 10 in some cases, but then we have chosen IW at 10 and that relates to burstiness and responsiveness of the basic spec, so I think 1:10 makes more sense as the recommendation. If the CC and path information suggest 1:20 - perhaps as the rate increases, then a sender can \"tune\" as per the transport parameter proposal. But I STILL assert that this is a different issue from the expected default.)\nWould it help to give a recommendation like, if the congestion window is large, the ACK ratio should be reduced? Not sure if we can easily give a threshold for the window but to align with the 4 ACKs per RTT rule we could say 40 or to be more conservative maybe 100?\nIf the receiver knew the cwnd... we could add that.
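To make the size arithmetic in this thread concrete, here is a small sketch (my own illustration, not code from the discussion) of the minimum ACK-packet sizes quoted above and the resulting return-path overhead for a given ACK ratio. The 1-byte CID follows the breakdown quoted above; the 1200-byte forward data packet size is an illustrative assumption.

```python
# Sketch of the minimum ACK-packet arithmetic discussed in this thread.
# Header sizes follow the 20+8+3+4+16 breakdown quoted above; the
# 1200-byte data packet size is an assumption for illustration.

def quic_min_ack_ipv4(cid_len=1):
    """Minimum IPv4 QUIC ACK-only packet: IP + UDP + short header + ACK frame + AEAD tag."""
    return 20 + 8 + (1 + cid_len + 1) + 4 + 16

def tcp_min_ack_ipv4():
    """Minimum IPv4 TCP pure ACK: IP header + TCP header, no options or SACK blocks."""
    return 20 + 20

def ack_overhead(ack_bytes, data_bytes, ack_ratio):
    """Return-path ACK bytes as a fraction of forward data bytes,
    for one ACK every `ack_ratio` data packets."""
    return ack_bytes / (ack_ratio * data_bytes)

if __name__ == "__main__":
    q = quic_min_ack_ipv4()   # 51 bytes, matching the breakdown quoted above
    t = tcp_min_ack_ipv4()    # 40 bytes
    print(f"QUIC min ACK: {q} B, TCP min ACK: {t} B")
    for ratio in (1, 2, 10):
        print(f"AR 1:{ratio}: {ack_overhead(q, 1200, ratio):.2%} of forward traffic")
```

This only captures byte overhead; as the thread notes, per-packet transmission opportunities on the radio layer can matter more than the byte count itself.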
The current proposal we made is here (along with some of the arguments about why QUIC and TCP are slightly different): draft-fairhurst-quic-ack-scaling\nSo the min TCP ack is 40 bytes vs QUIC at 50 (0-byte CIDs are common, so I'll assume that), 25% larger in the minimum case. However, with a few SACK blocks, TCP ACKs are likely to be larger than QUIC ACKs, since SACK consumes 10 bytes for the first block and 8 for each additional block. By the time the second block is added, there's a good chance QUIC ACKs are smaller and it's almost certainly true with 3 SACK blocks (which I think is a typical number). So I'll go back to my previous statement that the real issue is that the network can't thin QUIC ACKs effectively.\nMay be \"yes\", may be \"no\" on a larger size after loss - since QUIC ACK ranges are often larger - but that depends on loss pattern, etc. In our case, we saw more bytes. But \"yes\" on me saying there is a need for QUIC to make good default decisions about the ACK rate, because QUIC becomes responsible for transport working across heterogeneous paths and we need to get good defaults rather than rely on \"ACK-Thinning\" and other network modification.\nDoes anybody have a gut feeling how very low latency congestion control fares with sparse ACKs? I have experienced that DCTCP is more picky with ACK aggregation than e.g. Reno. I suspect that ACK-thinning can cause the same issue as well. A too sparse ACK spacing can also give problems. Another aspect is how e.g. TCP Prague with paced chirps works with sparse ACKs? To me the RTT/4 rule makes sense but I feel that the best alternative depends a lot on both the congestion control and the source material. Large file transfers are more trivial than sources that are occasionally idle.\nI think this isn't the correct question. This is about the default policy. The default policy is about what a receiver does by \"default\" - the current text uses a value of ACK every other packet, 1:2.
TCP specified this when transmission rates were low, and TCP depended on individual ACKs to clock data (no pacing, IW=2), detect loss (no PTO), increase cwnd (per ACK packet), etc. In response, TCP causes ACK congestion and strange interactions with radio access. Performance over WiFi, DOCSIS etc suffers unless the network tries to address this. I don't see why QUIC, which doesn't rely on any of these, needs to set a DEFAULT of 1:2. This is literally a few lines of code difference, and we could use 1:10. These CC's you mention may have different needs and that needs to be handled by telling the remote that your CC has these specific needs - which I think is what Jana's extension is about.\nI am kind of the donkey in the middle... ACK thinning in networks should be discouraged. Standardization of ACK thinning mechanisms was proposed in 3GPP RAN2; this request was turned down mainly with reference to the fact that it can harm transport protocol performance and does not work safely with QUIC. Nothing stops UE (terminal and modem) vendors from implementing proprietary ACK-thinning but I certainly hope that 5G UE vendors don't attempt to thin QUIC-ACKs as this is a slippery slope. High ACK rates are a 5G problem mainly for high band NR (=5G at e.g. 28GHz). It is mainly a coverage issue as UEs have a limited output power due to practical and regulatory requirements. At the same time one needs to address that end users don't only download movies today and even VoD streaming implies uplink traffic in the form of chunk requests. And then we have an increasing amount of people doing uplink streaming or upload an infinite amount of pictures of food plates and cats :-) Operators and 5G vendors have to deal with this and solve the problem with deployment of 5G access in various frequency bands to get the best downlink and uplink performance. Reduced ACK rates are of course welcome but I don't currently see a problem that the initial ACK ratio is 1:2, to increase to 1:?
as the congestion window grows, bounded by e.g. an RTT/4 rule. But here I need to admit that at least I have not studied this in detail. Ergo, I am not convinced that it is necessary to hardwire the QUIC spec to 1:10, but I also cannot say that it should not be done.\nThis is a small enough change. However, there is still a bar for those, and the current level of discussion makes me believe we should close this with no action?\nJust so that it's easy to find, there is data and a continued discussion at URL\nFolks who care about a change here need to prioritize progressing this issue. We're close to being able to last-call the specs, and would like this resolved in time.\nWe've completed a round of updated analysis of the proposed change to the QUIC spec, using quicly (changing to a 1:10 policy), and chromium (which currently does 1:10 by default). Our hope is that this change to the spec will actually make standard end-to-end QUIC work better than TCP over asymmetric paths! This is a short slide deck about the performance of the proposed change: URL There have been a number of other changes made by the QUIC ID editors relating to ACK processing, so we wanted to be sure these results were up to date. (These results do not consider additional ACKs after re-ordering - reflecting the latest text removing that.) We also really hope this will motivate a change to the practice of forcing QUIC fallback to TCP (with PEP/Thinning) over highly asymmetric paths. People may also be interested in work for L4S to evaluate the impact of ACK Thinning for TCP-Prague. Please do provide feedback, or any questions on this - as Lars notes, we think the time is running out for considering new changes, even relatively simple ones like this. Gorry P.S.
First email: URL\nI believe that it is beneficial to use a 1:10 policy. I understand that this means that a 1:2 policy will still be used in the initial phase (flow start) - or do the timer mechanisms kick in to avoid that ACKs are transmitted too sparsely in this case?\nYes, our proposal counted ACK-eliciting packets at the receiver (because the receiver does not in general know the congestion state, so you need a heuristic), and we finally chose the same method in current QUIC Chromium - i.e. for the first 100 packets we do not use ACK 1:10, changing only after 100 packets.\nOK, it sounds like a good idea to use 1:10 after 100 packets or so. Just one potentially stupid question: is there any special treatment necessary after an RTO?\n[This seems to be a forwarded comment from NAME Hi, Gorry Thank you for sharing your experiments. I'm delighted to see results showing that use of ack10 gives us comparable throughput to ack2. Recently, I have been looking into how ack-frequency affects the computational efficiency of the server [1]. Using the (almost) same setup, I have today run a benchmark comparing three ACK strategies: ack2, ack10, ack 1/8cwnd. ack2: 334Mbps ack10: 414Mbps ack 1/8cwnd: 487Mbps As can be seen, ack10 is better than ack2, but lags behind ack1/8cwnd. That's because there are still frequent ACKs. They take a considerable amount of CPU cycles. If we want to see QUIC being as lightweight as TLS over TCP, it is critically important to reduce ACK frequency as much as possible. This is because the cost of processing ACKs in QUIC is much more expensive than in TCP, due to QUIC being a userspace transport. Based on this, if we are to change what we have in the base drafts, I'd much prefer adopting what we have in the proposed delayed ack extension [2], rather than just changing the default ack policy from ack2 to ack10.
By adopting what we have in the delayed ack extension, we can expect better performance, and the endpoints would have the freedom to change the behavior when the default ack policy is deemed insufficient (regardless of the default being ack2 or ack10). [1] URL [2] URL\nThese results/article on computational efficiency are really interesting. Please see below. This is useful, we see there's a clear benefit here in using a larger ACK Ratio also in terms of server load. We can already see the benefit at 1:10 and GSO helps as expected. We think that this is also really true. QUIC ACKs are also larger than TCP ACKs (see URL). I don't think the two approaches conflict. The difference between the two approaches is that the small change to the base specification to use an ACK Ratio 1:10 ensures the default return path is at least as good as TCP for common traffic (even with common TCP ACK Thinning). Without this change, the end user could experience significant differences in performance between TCP and different QUIC versions over asymmetric paths.
Whereas to me, the transport parameter extension looks like an exciting way to enable many more things, but there are also many ways this could usefully evolve based on CC, offload, pacing etc., and I think this will take much more to work out what is needed, safe and best ... than to reach consensus. We still also need to specify a default ACK Ratio in this case as well. Best wishes, Gorry\nI'm greatly in favour of draft-iyengar-quic-delayed-ack-00 (BTW, where would you prefer me to give review comments about that?) I am much less keen on any default delayed ACK policy, whether 1:2 as now in quic-transport, or the 2-10/100 numbers proposed in draft-fairhurst-quic-ack-scaling-02. My concerns would melt away if the default policy was explicitly /implementation-dependent/ and the draft instead gave /guidance/ to implementers. The draft could still give the same 2-10/100 policy, but as /guidance/ for the current public Internet. The draft should also state the conditions under which this guidance was determined (e.g. Internet paths with BDP between x and y, or whatever). What's the difference? Stating a default policy implies this is a matter of interoperability. It's not. If we change to giving guidance, I'm sure most implementers will just use the guidance. However, implementers will be free to evolve their default as they gain experience. This reduces the possibility that other behaviours will evolve around any default like 2-10/100. For instance, network monitoring using it as a signature. Or congestion controls optimizing around it. In turn, this will reduce the chance of ossification around the first numbers we thought of. Senders can always use the ACK frequency frame to ask a receiver to change from its default. Then receivers will gradually learn what senders prefer them to do, and change their default accordingly.
IMO, a continually improving default is important for short flows, so senders can avoid requesting a different ACK frequency right from the start.\nNeed to admit that I have not had too much time to follow the QUIC work lately. I too believe that draft-iyengar-quic-delayed-ack-00 is a good long term solution that allows endpoints to set a desired ACK frequency that is no higher than necessary. As experiments show there are good incentives to keep the ACK frequency as low as possible in terms of client/server load and asymmetric link limitations. An ACK-FREQUENCY frame will allow finding the proper balance between the requirements given by the congestion control, the traffic type and the above given limitations. I am not too concerned if it takes an extra development/standards cycle to have that in place in the standard and in running code as long as the QUIC community shows other SDOs that the challenge with QUIC ACKs is being addressed.\nNAME NAME could you please trim the quoted text when you reply by email?\n(I trimmed the comments, but forwarding inline replies to email messages doesn't work well.)\nGiven that this is an area where there is still obviously experimentation and refinement, NAME comment seems on point; baking a specific algorithm as requirements into the specifications doesn't seem like a good idea (although putting constraints on endpoints might make sense). Can we focus on general guidance rather than documenting a specific algorithm? What would that PR look like?\n+1 to NAME suggestion. If we want to make changes for v1, they need to happen now, and they must be in a form that can let us get consensus.\nTo be clear, when I said that having a default policy is not a matter of interoperability, that's only strictly true if QUIC supports something like draft-iyengar-quic-delayed-ack for the sender to ask the receiver to use a certain policy.
Bob\nI've proposed more context and text around our current guidance in the draft, see .\nI believe that the text around asymmetric links looks fine, not really necessary to write up much more. What can be good to know is that for the 5G case with limited UL (uplink = modem->base station transmission), the pain that a flow with an unnecessarily high ACK rate causes is on the flow itself or other flows sharing the same access.\nThere are some different points of pain here, the default and the need to adapt to the endpoint: An ACK Ratio (AR 1:1): is the finest granularity of feedback. A QUIC AR of 1:1 would consume return path capacity proportional to about 6% of the forward traffic (assuming no loss, etc). This is more than TCP (about 3%). An ACK Ratio (AR 1:2): For a return path bottleneck, QUIC with an AR of 1:2 will still have more impact on sharing flows than TCP would with the same AR 1:2 (because QUIC ACKs are larger, and because networks do not currently thin QUIC ACKs). So QUIC needs to choose the default - is it 1:2, 1:4, 1:8, 1:10? A QUIC AR 1:10 results in an ACK traffic volume of about 1% of the forward traffic. To me, that's a small proportion, which is why I argued that 1:10 is a good recommendation. The default is important when: the endpoints have no knowledge of the asymmetry or the impact they have on a link's layer 2 loading, or the endpoints need to reduce the collateral damage to other IP flows sharing the return path capacity. Negotiation: The AR can be driven by the application or endpoint - e.g., a higher rate connection may well have a benefit from an AR of 1:10, 1:20, 1:100, etc - but such ARs are seldom chosen to reduce the impact on sharing a return bottleneck, more likely they are chosen to benefit the performance of an application or endpoint. That consideration is different, and likely depends on the application or endpoint - which to me is where the flexibility offered by a transport parameter is so important.
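The receiver-side policy debated in this thread (ack every second packet early in a connection, a larger ratio such as 1:10 afterwards, with a delayed-ack timer as a backstop) could be sketched roughly as below. This is my own illustration, not code from any implementation; the 100-packet warmup threshold and the 1:2/1:10 ratios follow the values discussed above, but the class and its names are hypothetical.

```python
# Illustrative sketch of a receiver's ACK-frequency decision, assuming
# ack 1:2 for the first WARMUP_PACKETS ack-eliciting packets, then ack
# 1:LATE_RATIO, with a max_ack_delay timer as a backstop. Values are
# taken from the discussion above, not from any normative spec.

class AckPolicy:
    WARMUP_PACKETS = 100   # assumed flow-start threshold (as discussed above)
    EARLY_RATIO = 2        # ack every 2nd packet while warming up
    LATE_RATIO = 10        # ack every 10th packet afterwards

    def __init__(self):
        self.received = 0   # ack-eliciting packets seen on this connection
        self.unacked = 0    # ack-eliciting packets not yet acknowledged

    def on_ack_eliciting_packet(self) -> bool:
        """Return True if an ACK frame should be sent immediately;
        otherwise the caller arms a max_ack_delay timer instead."""
        self.received += 1
        self.unacked += 1
        ratio = (self.EARLY_RATIO if self.received <= self.WARMUP_PACKETS
                 else self.LATE_RATIO)
        if self.unacked >= ratio:
            self.unacked = 0
            return True
        return False
```

A real receiver would also ack immediately on out-of-order or ECN-CE-marked packets; this sketch only shows the ratio switch.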
draft-iyengar-quic-delayed-ack will still need guidance on the \"default\". I argued that the default policy impacts the congestion behaviour when there is appreciable asymmetry, via the load this causes at layer 2 or the collateral damage (congestion impact on sharing flows). Sure, the choice of 1% is just a number I said was reasonable, but I've seldom encountered L2 technologies where less than 1% of forward traffic would be significantly better.\nNAME : I understand your position. While I agree that ack thinning happens with TCP, it is not what I expect to find on a common network path. And as far as I know, the endpoints still implement the RFC 5681 recommendation of acking every other packet. What you are arguing is that with ack thinners in the network, TCP's behavior on those network paths is different. However, I do not believe we understand the performance effects of these middleboxes, that is, how they might reduce connection throughput when the asymmetry is not a problem. Importantly, I don't think we should be specifying as default what some middleboxes might be doing without fully understanding the consequences of doing this. Please bear in mind that Chrome's ack policy of 1:10 works with BBR, not with Cubic or Reno. I do not believe there is a magic number at the moment, and as NAME noted, even 1:10 is not a small enough ratio under some conditions. Given this, and given that we know 1:2 is used by TCP endpoints, it makes the most sense to specify that as the suggested guidance. The new text proposed in describes the tradeoff involved, and the rationale for using 1:2.
I am strongly against doing anything substantially different than what we have in the draft now without overwhelming information supporting a different position.\nAgree with NAME 1:2 seems a reasonable default, and there seems to be considerable interest in draft-iyengar-quic-delayed-ack, which has a better proposal for stacks that care.\nI think the new ACK text is improving the specification, but I'd like to be sure we have thought about this decision. Let me try to carefully respond on-list here, to see where we agree/disagree: I disagree, I think we shouldn't be perpetuating a myth that a sender receives a TCP ACK for every other packet. Even in 1990s implementations, TCP stacks often generated stretch-ACKs for 3 segments. Since then, limited byte counting was introduced and significantly improved the sender's response to stretch ACKs, and that has been good for everyone. Senders still rely on byte-counting in TCP to handle Stretch-ACKs, and this need is not decreasing: new network cards reduce per-packet receive processing using Large Receive Offload (LRO) or Generic Receive Offload (GRO). I am saying many TCP senders actually do today see stretch-ACKs. There are papers that have shown significant presence of stretch ACKs, e.g., (H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 45(3):20\u201327, 2015.). A TCP Stretch-ACK of 3 or 4 was common in their datasets (about 10% of cases). In some networks, the proportion of Stretch-ACKs will be much much higher, since ACK Thinning is now widely deployed in WiFi drivers as well as cable, satellite and other access technologies - often reducing ACK size/rate by a factor of two. I am intrigued and would like to know more, in case I missed something? Typical algorithms I am aware of track a queue at the \"bottleneck\" and then make a decision based upon the contents of the queue.
If there is no asymmetry, there isn't normally a queue and the algorithm won't hunt to discard an \"ACK\". But, maybe, there is some truth in being wary: Encrypted flows can also build return path queues (as does QUIC traffic). Without an ability to interpret the transport data (which nobody, including me, wants for QUIC), a bottleneck router is forced to either use another rule to drop packets from the queue (such as the smallest packets) or to allow a queue to grow, limiting forward direction throughput. I would say neither of these outcomes is great because the router is trying to fix the transport's asymmetry. However, if QUIC causes this problem, I suggest some method will proliferate as QUIC traffic increases. I didn't find anything in the QUIC spec that was a major concern. It looked to me like QUIC had learned from TCP how to handle stretch-ACKs. After that, we changed quicly to use 1:10 with Reno, and things worked well. Didn't NAME also use Reno? Sure, the design of TCP recognised that AR 1:1 generated too much traffic, AR 1:2 for TCP resulted in a Forward:Return traffic ratio of ~1.5%, and QUIC increases this to ~3%. However, as network link speeds increased, this proved too low for many network links and so, TCP ACK thinning came into being. This often reduces the TCP AR to around 1:4 and will do this without bias, whatever the outcome is.\nNAME Just to clarify, in our experiment, we reduced the ACK frequency from the default (2) to after leaving slow start, then updated it every 4 PTO. I think that would work fine on a high-bandwidth, stable network. But I am not sure if it would work as well as ack2 on all network conditions. Anyways, I'd explain our experiment as an optimization of ack 2, as it was a reduction of the ack rate after leaving slow start. I think that was a sensible choice, because IIUC having frequent acks during slow start is essential.
Assuming that we'd prefer using ack 2 during slow start, the question is if we need to hard-code in the base draft how to optimize after leaving slow start. While I agree that continuing to use ack 2 after slow start is concerning for certain networks, I am reluctant to spend time on looking for such a fixed set of values across diverse networks (including much slower links or lossy ones). Rather, my preference goes to shipping the base-draft as-is, then moving on to finalize the ack-frequency frame. I think we can assume most if not all of the popular clients will support the ack-frequency frame. Then, it becomes less of a question what the default in the base-draft is.\nThanks kazuho, 1/8 or 1/4 cwnd seems reasonable, as does using Ack 2 for the first part of slow start ... that's close to what we proposed. And yes there will anyway be cases where any default does not work. If QUIC decides to retain Ack 2, can we at least capture that Ack 2 with QUIC doesn't have the same return path performance as Ack 2 with TCP - so that people know that less frequent ACKs are something to think about from the start? I'm doing a talk in PANRG this week, and have decided to talk through this topic (I was already going to talk about things related to return paths). That'll give people a view of some data - and a little thought about the implications this has for QUIC, and whether there is an incentive for network devices to thin QUIC to restore performance.
I think the right approach is to allow endpoints to specify this during a connection, which is where the extension is helpful. As NAME notes, our hope is that the extension gets widely deployed. If you want to make the argument for the extension to become part of the base draft, that's a different conversation, but that's not the one we're having here.\nAttempting to summarise the current status of this issue as a chair: there appears to have been a fair amount of discussion about making a design change, with representatives both in favour and against. It appears there is no omnipotent default policy, choosing an optimal one depends on many factors. The editors have landed an editorial change that attempts to better describe the rationale behind the QUICS's current design and highlights some of the factors. There may be further editorial improvements but they no not need to block the design issue. The proposal therefore is to close this issue with no design change. We had prepared some slides for the Vancouver IETD to discuss QUIC's ACKs, and how to make QUIC work better than TCP in common asymmetric paths. This is a short slide deck that summarises what we have learned about ACKs in QUIC: URL It compares the ACK traffic of TCP with that of QUIC from the point of view of a path where QUIC's return traffic is somehow constrained or could impact other user traffic. We think that ACKs are a concern for various shared capacity return link (e.g., radio-based methods in WiFi or mobile cellular were ACKs consume spectrum/radio resource or battery; shared capacity satellite broadband return link or the uplink of a cable system). For TCP, such systems commonly involve a PEP or an ACK Filtering mechanism (which clearly isn't wanted or even possible for QUIC). The baseline QUIC performance (as per current spec) suffers over these paths. So our first question is: Do other people think it is important that the default specified for QUIC works well with asymmetric paths ? 
Gorry", "new_text": "be acknowledged, an endpoint MAY wait until an ack-eliciting packet has been received to bundle an ACK frame with outgoing frames. 13.2.2. A receiver determines how frequently to send acknowledgements in response to ack-eliciting packets. This determination involves a tradeoff. Endpoints rely on timely acknowledgment to detect loss; see Section 5 of QUIC-RECOVERY. Window-based congestion controllers, such as the one in Section 6 of QUIC-RECOVERY, rely on acknowledgments to manage their congestion window. In both cases, delaying acknowledgments can adversely affect performance. On the other hand, reducing the frequency of packets that carry only acknowledgements reduces packet transmission and processing cost at both endpoints. It can also improve connection throughput on severely asymmetric links; see Section 3 of RFC3449. A receiver SHOULD send an ACK frame after receiving at least two ack- eliciting packets. This recommendation is general in nature and consistent with recommendations for TCP endpoint behavior RFC5681. Knowledge of network conditions, knowledge of the peer's congestion controller, or further research and experimentation might suggest alternative acknowledgment strategies with better performance characteristics. A receiver MAY process multiple available packets before determining whether to send an ACK frame in response. 13.2.3. When an ACK frame is sent, one or more ranges of acknowledged packets are included. Including older packets reduces the chance of spurious"}
{"id": "q-en-quicwg-base-drafts-e4904e5447e6f4061d24283fef3e8c2f396a361f0023a218b48fa2c6c1ff7a06", "old_text": "acknowledgments to be lost. A sender cannot expect to receive an acknowledgment for every packet that the receiver processes. 13.2.3. When a packet containing an ACK frame is sent, the largest acknowledged in that frame may be saved. When a packet containing an", "comments": "This PR adds explicit context around the choice of ACK frequency in the draft. It also takes in Bob Briscoe's suggestion in , of calling it a guidance in light of the best information we have. Also does some editorial cleanup of text blocks that were not in the right sections.\nThanks NAME NAME -- comments incorporated.\nAs currently specified, the default QUIC ACK policy models TCP\u2019s ACK policy, but consumes significantly more return path capacity. This can impact resource usage of the return channel, can constrain forward throughput, and can impact the performance of other sessions sharing an uplink (draft-fairhurst-quic-ack-scaling). This was not discussed in Zurich - where discussion moved instead to the separate topic of draft-iyengar-quic-delayed-ack. In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymmetry are presently mitigated by network devices RFC3449 that rely upon observing TCP headers. This results in ossification problems, but does practically have benefit. It reduces the TCP ACK rate over the constrained link, it reduces consumption of \"radio\" transmission opportunities, and reduces traffic volume from ACKs. These mitigations are not possible, nor desirable for QUIC. However, the default performance of QUIC is therefore much worse than for TCP. In QUIC, the default ACK Ratio has to be set by the endpoints implementing the protocol because the sender and receiver have no information about the \"cost\" of sending an ACK using a link along an asymmetric path.
These endpoints can also choose a non-default value to enable a new CC, scale to higher rates, etc (e.g. draft-iyengar-quic-delayed-ack). We have shown there can be significant benefit from an ACK Ratio of 1:10 and this is expected to also have benefit for higher levels of asymmetry. The proposed change from 1:2 to 1:10 is predicated on QUIC's design decisions: starting a connection with an Initial Window of 10 is acceptable, QUIC uses PTO as the loss detection method (a predicate to issue ), and QUIC has some form of pacing at the sender, at least to mitigate IW bursts.\nEven though we've avoided straying this far from the TCP spec, I think allowing this would be good for the internet. It should probably be documented for TCP, too. How does ack stretching of this nature work for BBR bandwidth estimation? It's harder to use the ack pacing, no?\nI don't believe QUIC ACKs are significantly larger. In fact, I believe the ACK frame itself is typically smaller than TCP ACKs, but the savings there are used on QUIC's 16-byte auth hash and likely a non-zero-length CID. Do you have a size analysis written up somewhere? I think this is the core issue. It's not that QUIC ACKs are really consuming significantly more capacity in terms of bytes on the wire, it's that middleboxes can't decide to thin the ACKs, because they don't know which packets are ACK-only packets. Did you implement the optimization already described in the transport draft?: \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" NAME did and I believe found it hugely reduced the number of ACKs, sometimes too much. As I've said before, I would really prefer not to specify an approach in QUIC transport that is a heuristic like does. 
In particular, on lower bandwidth networks, 10 packets may be the entire congestion window and is too large a value. On higher bandwidth networks, every 20 packets has been shown to be slightly better than 10, at least when using BBR. I wrote to add some editorial text about why every 2 packets is a SHOULD, not a MUST. Chrome did not default enable the ACK strategy recommends until BBR was launched, because there was a performance regression with Cubic. I'd rather not assume everyone is using BBR or a BBR-like congestion controller. NAME BBR does fine with stretch ACKs. Cubic and Reno suffer in some cases, however.\nQUIC ACKs are larger! The smallest IPv4 QUIC ACK packet is 51 bytes. The ACK frame is 4 bytes, but is sent using a QUIC packet. For IPv4 this is: 20 (IP) + 8 (UDP) + 3 (1+1+1 QUIC header ... assuming a 1 B CID) + 4 (minimum ACK frame size) + 16 (crypto). Other types of frames may also be sent in the same packet, increasing this size. That size difference matters. The point is that QUIC ACKs are consuming more capacity in terms of bytes on the wire, and often also important: in more transmission opportunities at the physical layer. That makes QUIC perform more poorly than it needs to across some paths. The text \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" is permissive to do something different - alas the effect of ordering of the processing depends on rate, timing, etc - The present text is not a recommendation to do something appropriate, which I think it needs to be. If this were to describe that a receiver SHOULD reduce the ACK to data packet ratio to 1:10, then that would help. On low bandwidth links, I agree that waiting for 10 packets would not be suitable, but then wouldn't the 4 ACKs/RTT rule still cover this case? 
(I have no problems believing an ACK every 20 packets can be slightly better than 10 in some cases, but then we have chosen IW at 10 and that relates to burstiness and responsiveness of the basic spec, so I think 1:10 makes more sense as the recommendation. If the CC and path information suggest 1:20 - perhaps as the rate increases - then a sender can \"tune\" as per the transport parameter proposal. But I STILL assert that this is a different issue from the expected default.)\nWould it help to give a recommendation like, if the congestion window is large, the ACK ratio should be reduced? Not sure if we can easily give a threshold for the window but to align with the 4 ACKs per RTT rule we could say 40 or to be more conservative maybe 100?\nIf the receiver knew the cwnd... we could add that. The current proposal we made is here (along with some of the arguments about why QUIC and TCP are slightly different): draft-fairhurst-quic-ack-scaling\nSo the min TCP ack is 40 bytes vs QUIC at 50 (0-byte CIDs are common, so I'll assume that), 25% larger in the minimum case. However, with a few SACK blocks, TCP ACKs are likely to be larger than QUIC ACKs, since SACK consumes 10 bytes for the first block and 8 for each additional block. By the time the second block is added, there's a good chance QUIC ACKs are smaller and it's almost certainly true with 3 SACK blocks (which I think is a typical number). So I'll go back to my previous statement that the real issue is that the network can't thin QUIC ACKs effectively.\nMay be \"yes\", may be \"no\" on a larger size after loss - since QUIC ACK ranges are often larger - but that depends on loss pattern, etc. In our case, we saw more bytes. 
But \"yes\" on me saying there is a need for QUIC to make good default decisions about the ACK rate, because QUIC becomes responsible for transport working across heterogeneous paths and we need to get good defaults rather than rely on \"ACK-Thinning\" and other network modification.\nDoes anybody have a gut feeling for how very low latency congestion control fares with sparse ACKs? I have experienced that DCTCP is more picky with ACK aggregation than e.g. Reno. I suspect that ACK-thinning can give the same issue as well. A too sparse ACK spacing can also give problems. Another aspect is how e.g. TCP Prague with paced chirps works with sparse ACKs? To me the RTT/4 rule makes sense but I feel that the best alternative depends a lot on both the congestion control and the source material. Large file transfers are more trivial than sources that are occasionally idle.\nI think this isn't the correct question. This is about the default policy. The default policy is about what a receiver does by \"default\" - the current text uses a value of ACK every other packet, 1:2. TCP specified this when transmission rates were low, and TCP depended on individual ACKs to clock data (no pacing, IW=2), detect loss (no PTO), increase cwnd (per ACK packet), etc. As a result, TCP causes ACK congestion and strange interactions with radio access. Performance over WiFi, DOCSIS etc suffers unless the network tries to address this. I don't see why QUIC, which doesn't rely on any of these, needs to set a DEFAULT of 1:2. This is literally a few lines of code difference, and we could use 1:10. These CCs you mention may have different needs and that needs to be handled by telling the remote that your CC has these specific needs - which I think is what Jana's extension is about.\nI am kind of the donkey in the middle... ACK thinning in networks should be discouraged. 
Standardization of ACK thinning mechanisms was proposed in 3GPP RAN2; this request was turned down mainly on the grounds that it can harm transport protocol performance and does not work safely with QUIC. Nothing stops UE (terminal and modem) vendors from implementing proprietary ACK-thinning but I certainly hope that 5G UE vendors don't attempt to thin QUIC-ACKs as this is a slippery slope. High ACK rates are a 5G problem mainly for high band NR (=5G at e.g. 28 GHz). It is mainly a coverage issue as UEs have a limited output power due to practical and regulatory requirements. At the same time one needs to recognize that end users don't only download movies today and even VoD streaming implies uplink traffic in the form of chunk requests. And then we have an increasing number of people doing uplink streaming or uploading an infinite amount of pictures of food plates and cats :-) Operators and 5G vendors have to deal with this and solve the problem with deployment of 5G access in various frequency bands to get the best downlink and uplink performance. Reduced ACK rates are of course welcome but I don't currently see a problem with the initial ACK ratio being 1:2, increasing to 1:? as the congestion window grows, bounded by e.g. an RTT/4 rule. But here I need to admit that at least I have not studied this in detail. Ergo, I am not convinced that it is necessary to hardwire the QUIC spec to 1:10, but I also cannot say that it should not be done.\nThis is a small enough change. However, there is still a bar for those, and the current level of discussion makes me believe we should close this with no action?\nJust so that it's easy to find, there is data and a continued discussion at URL\nFolks who care about a change here need to prioritize progressing this issue. 
We're close to being able to last-call the specs, and would like this resolved in time.\nWe\u2019ve completed a round of updated analysis of the proposed change to the QUIC spec, using quicly (changing to a 1:10 policy), and chromium (which currently does 1:10 by default). Our hope is that this change to the spec will actually make standard end-to-end QUIC work better than TCP over asymmetric paths! This is a short slide deck about the performance of the proposed change: URL There have been a number of other changes made by the QUIC ID editors relating to ACK processing, so we wanted to be sure these results were up to date. (These results do not consider additional ACKs after re-ordering - reflecting the latest text removing that.) We also really hope this will motivate a change in the practice of forcing QUIC fallback to TCP (with PEP/Thinning) over highly asymmetric paths. People may also be interested in work for L4S to evaluate the impact of ACK Thinning for TCP-Prague. Please do provide feedback, or any questions on this - as Lars notes, we think the time is running out for considering new changes, even relatively simple ones like this. Gorry P.S. First email: URL\nI believe that it is beneficial to use a 1:10 policy. I understand that this means that a 1:2 policy would still be used in the initial phase (flow start) - or do the timer mechanisms kick in to avoid ACKs being transmitted too sparsely in this case?\nYes, our proposal counted ACK-eliciting packets at the receiver (because the receiver does not in general know the congestion state, so you need a heuristic), and we finally chose the same method in current QUIC Chromium - i.e. for the first 100 packets we do not use ACK 1:10, changing only after 100 packets.\nOK, it sounds like a good idea to use 1:10 after 100 packets or so. 
Just one potentially stupid question: Is there any special treatment necessary after an RTO?\n[This seems to be a forwarded comment from NAME Hi, Gorry Thank you for sharing your experiments. I'm delighted to see results showing that use of ack10 gives us comparable throughput to ack2. Recently, I have been looking into how ack-frequency affects the computational efficiency of the server [1]. Using the (almost) same setup, I have today run a benchmark comparing three ACK strategies: ack2, ack10, ack 1/8cwnd. ack2: 334Mbps ack10: 414Mbps ack 1/8cwnd: 487Mbps As can be seen, ack10 is better than ack2, but lags behind ack1/8cwnd. That's because there are still frequent ACKs. They take a considerable amount of CPU cycles. If we want to see QUIC being as lightweight as TLS over TCP, it is critically important to reduce ACK frequency as much as possible. This is because the cost of processing ACKs in QUIC is much higher than in TCP, due to QUIC being a userspace transport. Based on this, if we are to change what we have in the base drafts, I'd much prefer adopting what we have in the proposed delayed ack extension [2], rather than just changing the default ack policy from ack2 to ack10. By adopting what we have in the delayed ack extension, we can expect better performance, and the endpoints would have the freedom to change the behavior when the default ack policy is deemed insufficient (regardless of the default being ack2 or ack10). [1] URL [2] URL 
\nThese results/article on computational efficiency are really interesting. Please see below. This is useful: we see there's a clear benefit here in using a larger ACK Ratio also in terms of server load. We can see already the benefit at 1:10 and GSO helps as expected. We think that this is also really true. QUIC ACKs are also larger than TCP ACKs (see URL). I don't think the two approaches conflict. The difference between the two approaches is that the small change to the base specification to use an ACK Ratio 1:10 ensures the default return path is at least as good as TCP for common traffic (even with common TCP ACK Thinning). Without this change, the end user could experience significant differences in performance between TCP and different QUIC versions over asymmetric paths. Whereas to me, the transport parameter extension looks like an exciting way to enable many more things, but there are also many ways this could usefully evolve based on CC, offload, pacing etc., and I think this will take much more to work out what is needed, safe and best ... and then to reach consensus. We still also need to specify a default ACK Ratio in this case as well. Best wishes, Gorry\nI'm greatly in favour of draft-iyengar-quic-delayed-ack-00 (BTW, where would you prefer me to give review comments about that?) 
I am much less keen on any default delayed ACK policy, whether 1:2 as now in quic-transport, or the 2-10/100 numbers proposed in draft-fairhurst-quic-ack-scaling-02. My concerns would melt away if the default policy was explicitly /implementation-dependent/ and the draft instead gave /guidance/ to implementers. The draft could still give the same 2-10/100 policy, but as /guidance/ for the current public Internet. The draft should also state the conditions under which this guidance was determined (e.g. Internet paths with BDP between x and y, or whatever). What's the difference? Stating a default policy implies this is a matter of interoperability. It's not. If we change to giving guidance, I'm sure most implementers will just use the guidance. However, implementers will be free to evolve their default as they gain experience. This reduces the possibility that other behaviours will evolve around any default like 2-10/100. For instance, network monitoring using it as a signature. Or congestion controls optimizing around it. In turn, this will reduce the chance of ossification around the first numbers we thought of. Senders can always use the ACK frequency frame to ask a receiver to change from its default. Then receivers will gradually learn what senders prefer them to do, and change their default accordingly. IMO, a continually improving default is important for short flows, so senders can avoid requesting a different ACK frequency right from the start.
An ACK-FREQUENCY frame will allow us to find the proper balance between the requirements given by the congestion control, the traffic type and the above-given limitations. I am not too concerned if it takes an extra development/standards cycle to have that in place in the standard and in running code as long as the QUIC community shows other SDOs that the challenge with QUIC ACKs is being addressed.\nNAME NAME could you please trim the quoted text when you reply by email?\n(I trimmed the comments, but forwarding inline replies to email messages doesn't work well.)\nGiven that this is an area where there is still obviously experimentation and refinement, NAME comment seems on point; baking a specific algorithm as requirements into the specifications doesn't seem like a good idea (although putting constraints on endpoints might make sense). Can we focus on general guidance rather than documenting a specific algorithm? What would that PR look like?\n+1 to NAME suggestion. If we want to make changes for v1, they need to happen now, and they must be in a form that can let us get consensus.\nTo be clear, when I said that having a default policy is not a matter of interoperability, that's only strictly true if QUIC supports something like draft-iyengar-quic-delayed-ack for the sender to ask the receiver to use a certain policy. Bob\nI've proposed more context and text around our current guidance in the draft, see .\nI believe that the text around asymmetric links looks fine; it is not really necessary to write up much more. What can be good to know is that for the 5G case with limited UL (uplink = modem->base station transmission), the pain that a flow with an unnecessarily high ACK rate causes is on the flow itself or other flows sharing the same access.\nThere are some different points of pain here, the default and the need to adapt to the endpoint: An ACK Ratio (AR 1:1): is the finest granularity of feedback. 
A QUIC AR of 1:1 would consume return path capacity proportional to about 6% of the forward traffic (assuming no loss, etc). This is more than TCP (about 3%). An ACK Ratio (AR 1:2): For a return path bottleneck, QUIC with an AR of 1:2 will still have more impact on sharing flows than TCP would with the same AR 1:2 (because QUIC ACKs are larger, and because networks do not currently thin QUIC ACKs). So QUIC needs to choose the default - is it 1:2, 1:4, 1:8, 1:10? A QUIC AR 1:10 results in an ACK traffic volume of about 1% of the forward traffic. To me, that's a small proportion, which is why I argued that 1:10 is a good recommendation. The default is important when: the endpoints have no knowledge of the asymmetry or the impact they have on a link's layer 2 loading, or the endpoints need to reduce the collateral damage to other IP flows sharing the return path capacity. Negotiation: The AR can be driven by the application or endpoint - e.g., a higher rate connection may well have a benefit from an AR of 1:10, 1:20, 1:100, etc - but such ARs are seldom chosen to reduce the impact on sharing a return bottleneck; more likely they are chosen to benefit the performance of an application or endpoint. That consideration is different, and likely depends on the application or endpoint - which to me is where the flexibility offered by a transport parameter is so important. draft-iyengar-quic-delayed-ack will still need guidance on the \"default\". I argued that the default policy impacts the congestion behaviour when there is appreciable asymmetry, due to what this causes at layer 2 or the collateral damage (congestion impact on sharing flows). Sure, the choice of 1% is just a number I said was reasonable, but I've seldom encountered L2 technologies where less than 1% of forward traffic would be significantly better.\nNAME : I understand your position. While I agree that ack thinning happens with TCP, it is not what I expect to find on a common network path. 
And as far as I know, the endpoints still implement the RFC 5681 recommendation of acking every other packet. What you are arguing is that with ack thinners in the network, TCP's behavior on those network paths is different. However, I do not believe we understand the performance effects of these middleboxes, that is, how they might reduce connection throughput when the asymmetry is not a problem. Importantly, I don't think we should be specifying as default what some middleboxes might be doing without fully understanding the consequences of doing this. Please bear in mind that Chrome's ack policy of 1:10 works with BBR, not with Cubic or Reno. I do not believe there is a magic number at the moment, and as NAME noted, even 1:10 is not a small enough ratio under some conditions. Given this, and given that we know 1:2 is used by TCP endpoints, it makes the most sense to specify that as the suggested guidance. The new text proposed in describes the tradeoff involved, and the rationale for using 1:2. I am strongly against doing anything substantially different than what we have in the draft now without overwhelming information supporting a different position.\nAgree with NAME 1:2 seems a reasonable default, and there seems to be considerable interest in draft-iyengar-quic-delayed-ack, which has a better proposal for stacks that care.\nI think the new ACK text is improving the specification, but I'd like to be sure we have thought about this decision. Let me try to carefully respond on-list here, to see where we agree/disagree: I disagree; I think we shouldn't be perpetuating a myth that a sender receives a TCP ACK for every other packet. Even in 90's implementations, TCP stacks often generated stretch-ACKs for 3 segments. Since then, limited byte counting was introduced and significantly improved the sender's response to stretch ACKs, and that has been good for everyone. 
Senders still rely on byte-counting in TCP to handle Stretch-ACKs, and this need is not decreasing: new network cards reduce per-packet receive processing using Large Receive Offload (LRO) or Generic Receive Offload (GRO). I am saying many TCP senders actually do see stretch-ACKs today. There are papers that have shown significant presence of stretch ACKs, e.g., (H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 45(3):20\u201327, 2015.). A TCP Stretch-ACK of 3 or 4 was common in their datasets (about 10% of cases). In some networks, the proportion of Stretch-ACKs will be much much higher, since ACK Thinning is now widely deployed in WiFi drivers as well as cable, satellite and other access technologies - often reducing ACK size/rate by a factor of two. I am intrigued and would like to know more, in case I missed something? Typical algorithms I am aware of track a queue at the \"bottleneck\" and then make a decision based upon the contents of the queue. If there is no asymmetry, there isn't normally a queue and the algorithm won't hunt to discard an \"ACK\". But, maybe, there is some truth in being wary: Encrypted flows can also build return path queues (as does QUIC traffic). Without an ability to interpret the transport data (which nobody, including me, wants for QUIC), a bottleneck router is forced to either use another rule to drop packets from the queue (such as the smallest packets) or to allow a queue to grow, limiting forward direction throughput. I would say neither of these outcomes is great because the router is trying to fix the transport's asymmetry. However, if QUIC causes this problem, I suggest some method will proliferate as QUIC traffic increases. I didn't find anything in the QUIC spec that was a major concern. It looked to me like QUIC had learned from TCP how to handle stretch-ACKs. 
After that, we changed quicly to use 1:10 with Reno, and things worked well. Didn't NAME also use Reno? Sure, the design of TCP recognised that AR 1:1 generated too much traffic; AR 1:2 for TCP resulted in a Forward:Return traffic ratio of ~1.5%, and QUIC increases this to ~3%. However, as network link speeds increased, this proved too low for many network links and so TCP ACK thinning came into being. This often reduces the TCP AR to around 1:4 (and will do this without bias, whatever the outcome is).\nNAME Just to clarify, in our experiment, we reduced the ACK frequency from the default (2) to after leaving slow start, then updated it every 4 PTO. I think that would work fine on a high-bandwidth, stable network. But I am not sure if it would work as well as ack2 under all network conditions. Anyway, I'd explain our experiment as an optimization of ack 2, as it was a reduction of the ack rate after leaving slow start. I think that that was a sensible choice, because IIUC having frequent acks during slow start is essential. Assuming that we'd prefer using ack 2 during slow start, the question is if we need to hard-code in the base draft how to optimize after leaving slow start. While I agree that continuing to use ack 2 after slow start is concerning for certain networks, I am reluctant to spend time looking for such a fixed set of values across diverse networks (including much slower links or lossy ones). Rather, my preference goes to shipping the base-draft as-is, then moving on to finalize the ack-frequency frame. I think we can assume most if not all of the popular clients will support the ack-frequency frame. Then, it becomes less of a question what the default in the base-draft is.\nThanks kazuho, 1/8 or 1/4 cwnd seems reasonable, as does using Ack 2 for the first part of slow start ... that's close to what we proposed. And yes, there will anyway be cases where any default does not work. 
If QUIC decides to retain Ack 2, can we at least capture that Ack 2 with QUIC doesn't have the same return path performance as Ack 2 with TCP - so that people know that less frequent ACKs are something to think about from the start? I'm doing a talk in PANRG this week, and have decided to talk through this topic (I was already going to talk about things related to return paths). That'll give people a view of some data - and a little thought about the implications this has for QUIC, and whether there is an incentive for network devices to thin QUIC to restore performance.\nNAME : I'd be in support of saying that ack thinning happens for TCP and cite those studies, and that in the absence of that, there might be throughput consequences for QUIC. I think we've said exactly that in other words in , but if we want to say more there, that's fine with me. That would make this an editorial issue, but I'm entirely happy to do more words there. I am against changing the recommendation here for another arbitrary recommendation. I think the right approach is to allow endpoints to specify this during a connection, which is where the extension is helpful. As NAME notes, our hope is that the extension gets widely deployed. If you want to make the argument for the extension to become part of the base draft, that's a different conversation, but that's not the one we're having here.\nAttempting to summarise the current status of this issue as a chair: there appears to have been a fair amount of discussion about making a design change, with representatives both in favour and against. It appears there is no universally optimal default policy; choosing an optimal one depends on many factors. The editors have landed an editorial change that attempts to better describe the rationale behind QUIC's current design and highlights some of the factors. There may be further editorial improvements but they do not need to block the design issue. 
The proposal therefore is to close this issue with no design change. We had prepared some slides for the Vancouver IETF to discuss QUIC's ACKs, and how to make QUIC work better than TCP in common asymmetric paths. This is a short slide deck that summarises what we have learned about ACKs in QUIC: URL It compares the ACK traffic of TCP with that of QUIC from the point of view of a path where QUIC's return traffic is somehow constrained or could impact other user traffic. We think that ACKs are a concern for various shared capacity return links (e.g., radio-based methods in WiFi or mobile cellular where ACKs consume spectrum/radio resource or battery; shared capacity satellite broadband return links or the uplink of a cable system). For TCP, such systems commonly involve a PEP or an ACK Filtering mechanism (which clearly isn't wanted or even possible for QUIC). The baseline QUIC performance (as per current spec) suffers over these paths. So our first question is: Do other people think it is important that the default specified for QUIC works well with asymmetric paths? Gorry", "new_text": "acknowledgments to be lost. A sender cannot expect to receive an acknowledgment for every packet that the receiver processes. 13.2.4. When a packet containing an ACK frame is sent, the largest acknowledged in that frame may be saved. When a packet containing an"}
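The discussion in the record above repeatedly does back-of-envelope arithmetic on ACK overhead (a minimal IPv4 QUIC ACK of 20 IP + 8 UDP + 3 short-header + 4 ACK-frame + 16 AEAD-tag = 51 bytes, and per-ratio shares of forward traffic). A rough sketch of that arithmetic follows; the 1500-byte forward packet size is an assumption of this example (the ~6%/~3% figures quoted in the thread evidently assume different packet sizes), and the function name `ack_overhead` is illustrative.

```python
# Back-of-envelope sketch of the ACK-ratio overhead arithmetic from the
# discussion. Assumptions: full-size 1500-byte forward packets, the minimal
# 51-byte IPv4 QUIC ACK packet described in the thread, no loss, no
# coalescing, single-range ACK frames. Absolute percentages shift with the
# assumed packet sizes; only the scaling with the ratio is the point here.

MIN_QUIC_ACK = 20 + 8 + 3 + 4 + 16   # IP + UDP + short header (1-byte CID) + ACK frame + AEAD tag = 51
FORWARD_MTU = 1500                   # assumed size of forward data packets

def ack_overhead(ack_ratio: int) -> float:
    """Return ACK bytes as a fraction of forward bytes when sending one
    minimal ACK packet per `ack_ratio` forward data packets."""
    return MIN_QUIC_ACK / (ack_ratio * FORWARD_MTU)

for ratio in (1, 2, 10):
    print(f"AR 1:{ratio} -> {ack_overhead(ratio):.2%} of forward traffic")
```

Under these assumptions, moving from 1:2 to 1:10 cuts ACK byte overhead fivefold, which is the core of the return-path argument; the counterargument in the thread is that the right ratio depends on cwnd and the congestion controller, not on byte overhead alone.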
{"id": "q-en-quicwg-base-drafts-e4904e5447e6f4061d24283fef3e8c2f396a361f0023a218b48fa2c6c1ff7a06", "old_text": "algorithm could cause spurious retransmits, but the sender will continue making forward progress. 13.2.4. A receiver limits the number of ACK Ranges (ack-ranges) it remembers and sends in ACK frames, both to limit the size of ACK frames and to", "comments": "This PR adds explicit context around the choice of ACK frequency in the draft. It also takes in Bob Briscoe's suggestion in , of calling it guidance in light of the best information we have. Also does some editorial cleanup of text blocks that were not in the right sections.\nThanks NAME NAME -- comments incorporated.\nAs currently specified, the default QUIC ACK policy models TCP\u2019s ACK policy, but consumes significantly more return path capacity. This can impact resource usage of the return channel, can constrain forward throughput, and can impact the performance of other sessions sharing an uplink (draft-fairhurst-quic-ack-scaling). This was not discussed in Zurich - where discussion moved instead to the separate topic of draft-iyengar-quic-delayed-ack. In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymmetry are presently mitigated by network devices RFC3449 that rely upon observing TCP headers. This results in ossification problems, but does practically have benefit. It reduces the TCP ACK rate over the constrained link, it reduces consumption of \"radio\" transmission opportunities, and reduces traffic volume from ACKs. These mitigations are not possible, nor desirable for QUIC. However, the default performance of QUIC is therefore much worse than for TCP. In QUIC, the default ACK Ratio has to be set by the endpoints implementing the protocol because the sender and receiver have no information about the \"cost\" of sending an ACK using a link along an asymmetric path. 
These endpoints can also choose a non-default value to enable a new CC, scale to higher rates, etc (e.g. draft-iyengar-quic-delayed-ack). We have shown there can be significant benefit from an ACK Ratio of 1:10 and this is expected to also have benefit for higher levels of asymmetry. The proposed change from 1:2 to 1:10 is predicated on QUIC's design decisions: starting a connection with an Initial Window of 10 is acceptable, QUIC uses PTO as the loss detection method (a predicate to issue ), and QUIC has some form of pacing at the sender, at least to mitigate IW bursts.\nEven though we've avoided straying this far from the TCP spec, I think allowing this would be good for the internet. It should probably be documented for TCP, too. How does ack stretching of this nature work for BBR bandwidth estimation? It's harder to use the ack pacing, no?\nI don't believe QUIC ACKs are significantly larger. In fact, I believe the ACK frame itself is typically smaller than TCP ACKs, but the savings there are used on QUIC's 16-byte auth hash and likely a non-zero-length CID. Do you have a size analysis written up somewhere? I think this is the core issue. It's not that QUIC ACKs are really consuming significantly more capacity in terms of bytes on the wire, it's that middleboxes can't decide to thin the ACKs, because they don't know which packets are ACK-only packets. Did you implement the optimization already described in the transport draft?: \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" NAME did and I believe found it hugely reduced the number of ACKs, sometimes too much. As I've said before, I would really prefer not to specify an approach in QUIC transport that is a heuristic like does. 
In particular, on lower bandwidth networks, 10 packets may be the entire congestion window and is too large a value. On higher bandwidth networks, every 20 packets has been shown to be slightly better than 10, at least when using BBR. I wrote to add some editorial text about why every 2 packets is a SHOULD, not a MUST. Chrome did not default enable the ACK strategy recommends until BBR was launched, because there was a performance regression with Cubic. I'd rather not assume everyone is using BBR or a BBR-like congestion controller. NAME BBR does fine with stretch ACKs. Cubic and Reno suffer in some cases, however.\nQUIC ACKs are larger! The smallest IPv4 QUIC ACK packet is 51 bytes. The ACK frame is 4 bytes, but is sent using a QUIC packet. For IPv4 this is: 20 (IP) + 8 (UDP) + 3 (1+1+1 QUIC header ... assuming a 1 B CID) + 4 (minimum ACK frame size) + 16 (crypto). Other types of frames may also be sent in the same packet, increasing this size. That size difference matters. The point is that QUIC ACKs are consuming more capacity in terms of bytes on the wire, and often also important: in more transmission opportunities at the physical layer. That makes QUIC perform more poorly than it needs to across some paths. The text \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" is permissive to do something different - alas the effect of ordering of the processing depends on rate, timing, etc - The present text is not a recommendation to do something appropriate, which I think it needs to be. If this were to describe that a receiver SHOULD reduce the ACK to data packet ratio to 1:10, then that would help. On low bandwidth links, then I agree that waiting for 10 packets would not be suitable, but then wouldn't the rule 4 ACKs/RTT still cover this case? 
(I have no problems believing an ACK every 20 packets can be slightly better than 10 in some cases, but then we have chosen IW at 10 and that relates to burstiness and responsiveness of the basic spec, so I think 1:10 makes more sense as the recommendation. If the CC and path information suggest 1:20 - perhaps as the rate increases, then a sender can \"tune\" as per the transport parameter proposal. But I STILL assert that this is a different issue from the expected default.)\nWould it help to give a recommendation like, if the congestion window is large, the ACK ratio should be reduced? Not sure if we can easily give a threshold for the window but to align with the 4 ACKs per RTT rule we could say 40 or to be more conservative maybe 100?\nIf the receiver knew the cwnd... we could add that. The current proposal we made is here (along with some of the arguments about why QUIC and TCP are slightly different): draft-fairhurst-quic-ack-scaling\nSo the min TCP ack is 40 bytes vs QUIC at 50 (0-byte CIDs are common, so I'll assume that), 25% larger in the minimum case. However, with a few SACK blocks, TCP ACKs are likely to be larger than QUIC ACKs, since SACK consumes 10 bytes for the first block and 8 for each additional block. By the time the second block is added, there's a good chance QUIC ACKs are smaller and it's almost certainly true with 3 SACK blocks (which I think is a typical number). So I'll go back to my previous statement that the real issue is that the network can't thin QUIC ACKs effectively.\nMaybe \"yes\", maybe \"no\" on a larger size after loss - since QUIC ACK ranges are often larger - but that depends on loss pattern, etc. In our case, we saw more bytes. 
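To make the size arithmetic in this thread easy to check, here is a back-of-the-envelope sketch. The header sizes are the assumptions stated above (IPv4, a 1-byte CID and 1-byte packet number for QUIC, a 16-byte AEAD tag, a bare 20-byte TCP header plus a SACK option), not measurements:

```python
# Back-of-the-envelope check of the minimum ACK packet sizes quoted above.
# Assumed (from the thread, not measured): IPv4, 1-byte CID, 1-byte packet
# number and 16-byte AEAD tag for QUIC; bare 20-byte TCP header plus SACK.

def quic_min_ack_ipv4(cid_len=1):
    ipv4, udp = 20, 8
    short_header = 1 + cid_len + 1        # flags + CID + packet number
    ack_frame, aead_tag = 4, 16           # minimal ACK frame + crypto tag
    return ipv4 + udp + short_header + ack_frame + aead_tag

def tcp_min_ack_ipv4(sack_blocks=0):
    ipv4, tcp = 20, 20
    sack = 0 if sack_blocks == 0 else 2 + 8 * sack_blocks  # kind+len, 8 B/block
    return ipv4 + tcp + ((sack + 3) // 4) * 4              # options pad to 4 B

print(quic_min_ack_ipv4())    # 51, as computed in the thread
print(tcp_min_ack_ipv4())     # 40
print(tcp_min_ack_ipv4(3))    # 68 -- larger than the QUIC ACK once SACK appears
```

This reproduces the figures quoted above: the minimal QUIC ACK is about 25% larger than the minimal TCP ACK, but once two or three SACK blocks are present the TCP ACK is the larger of the two.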
But \"yes\" on me saying there is a need for QUIC to make good default decisions about the ACK rate, because QUIC becomes responsible for transport working across heterogeneous paths and we need to get good defaults rather than rely on \"ACK-Thinning\" and other network modification.\nDoes anybody have a gut feeling how very low latency congestion control fares with sparse ACKs? I have experienced that DCTCP is more picky with ACK aggregation than e.g. Reno. I suspect that ACK-thinning can give the same issue as well. A too sparse ACK spacing can also give problems. Another aspect is how e.g. TCP Prague with paced chirps works with sparse ACKs? To me the RTT/4 rule makes sense but I feel that the best alternative depends a lot on both the congestion control and the source material. Large file transfers are more trivial than sources that are occasionally idle.\nI think this isn't the correct question. This is about the default policy. The default policy is about what a receiver does by \"default\" - the current text uses a value of ACK every other packet, 1:2. TCP specified this when transmission rates were low, and TCP depended on individual ACKs to clock data (no pacing, IW=2), detect loss (no PTO), increase cwnd (per ACK packet), etc. In response, TCP causes ACK congestion and strange interactions with radio access. Performance over WiFi, DOCSIS etc suffers unless the network tries to address this. I don't see why QUIC, which doesn't rely on any of these, needs to set a DEFAULT of 1:2. This is literally a few lines of code difference, and we could use 1:10. These CC's you mention may have different needs and that needs to be handled by telling the remote that your CC has these specific needs - which I think is what Jana's extension is about.\nI am kind of the donkey in the middle... ACK thinning in networks should be discouraged. 
Standardization of ACK thinning mechanisms was proposed in 3GPP RAN2, this request was turned down mainly with reference to that it can harm transport protocol performance and does not work safely with QUIC. Nothing stops UE (terminal and modem) vendors from implementing proprietary ACK-thinning but I certainly hope that 5G UE vendors don't attempt to thin QUIC-ACKs as this is a slippery slope. High ACK rates are a 5G problem mainly for high band NR (=5G at e.g. 28GHz). It is mainly a coverage issue as UEs have a limited output power due to practical and regulatory requirements. At the same time one needs to address that end users don't only download movies today and even VoD streaming implies uplink traffic in the form of chunk requests. And then we have an increasing amount of people doing uplink streaming or uploading an infinite amount of pictures of food plates and cats :-) Operators and 5G vendors have to deal with this and solve the problem with deployment of 5G access in various frequency bands to get the best downlink and uplink performance. Reduced ACK rates are of course welcome but I don't currently see a problem that the initial ACK ratio is 1:2, to increase to 1:? as the congestion window grows, bounded by e.g. an RTT/4 rule. But here I need to admit that I at least have not studied this in detail. Ergo.. I am not convinced that it is necessary to hardwire the QUIC spec to 1:10, but I also cannot say that it should not be done.\nThis is a small enough change. However, there is still a bar for those, and the current level of discussion makes me believe we should close this with no action?\nJust so that it's easy to find, there is data and a continued discussion at URL\nFolks who care about a change here need to prioritize progressing this issue. 
We're close to being able to last-call the specs, and would like this resolved in time.\nWe\u2019ve completed a round of updated analysis of the proposed change to the QUIC spec, using quicly (changing to a 1:10 policy), and chromium (which currently does 1:10 by default). Our hope is that this change to the spec will actually make standard end-to-end QUIC work better than TCP over asymmetric paths! This is a short slide deck about the performance of the proposed change: URL There have been a number of other changes made by the QUIC ID editors relating to ACK processing, so we wanted to be sure these results were up to date. (These results do not consider additional ACKs after re-ordering - reflecting the latest text removing that.) We also really hope this will motivate a change to the practice of forcing QUIC fallback to TCP (with PEP/Thinning) over highly asymmetric paths. People may also be interested in work for L4S to evaluate the impact of ACK Thinning for TCP-Prague. Please do provide feedback, or any questions on this - as Lars notes, we think the time is running out for considering new changes, even relatively simple ones like this. Gorry P.S. First email: URL\nI believe that it is beneficial to use a 1:10 policy, I understand that this means that a 1:2 policy will still be used in the initial phase (flow start), or do the timer mechanisms kick in to avoid ACKs being transmitted too sparsely in this case?\nYes, our proposal counted ACK-eliciting packets at the receiver (because the receiver does not in general know the congestion state, so you need a heuristic), and we finally chose the same method in current QUIC Chromium - i.e. for the first 100 packets we do not use ACK 1:10, changing only after 100 packets.\nOK, it sounds like a good idea to use 1:10 after 100 packets or so. 
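As a sketch, the receiver-side heuristic described here (ACK every 2nd ack-eliciting packet for roughly the first 100 packets of a connection, then every 10th) could look like the following; the max-ack-delay timer that backstops this in a real stack, and the 100-packet threshold itself, are illustrative:

```python
# Sketch of the receiver heuristic described above: keep the 1:2 ACK Ratio for
# the first 100 ack-eliciting packets of a connection, then switch to 1:10.
# The delayed-ACK timer that backstops this in a real stack is not modelled.

PACKETS_BEFORE_1_IN_10 = 100  # illustrative threshold from the discussion

def should_ack(packets_received: int, unacked_since_last_ack: int) -> bool:
    threshold = 2 if packets_received <= PACKETS_BEFORE_1_IN_10 else 10
    return unacked_since_last_ack >= threshold

# Count ACKs generated over 1000 in-order ack-eliciting packets.
acks, pending = 0, 0
for n in range(1, 1001):
    pending += 1
    if should_ack(n, pending):
        acks += 1
        pending = 0
print(acks)  # 140: 50 ACKs for the first 100 packets, then 90 more at 1:10
```

Compared with a uniform 1:2 policy (500 ACKs for the same trace), this keeps frequent feedback during the start-up phase while cutting steady-state ACK traffic by a factor of five.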
Just one potentially stupid question: is there any special treatment necessary after an RTO?\n[This seems to be a forwarded comment from NAME] Hi, Gorry Thank you for sharing your experiments. I'm delighted to see results showing that use of ack10 gives us comparable throughput to ack2. Recently, I have been looking into how ack-frequency affects the computational efficiency of the server [1]. Using the (almost) same setup, I have today run a benchmark comparing three ACK strategies: ack2, ack10, ack 1/8cwnd. ack2: 334Mbps ack10: 414Mbps ack 1/8cwnd: 487Mbps As can be seen, ack10 is better than ack2, but lags behind ack1/8cwnd. That's because there are still frequent ACKs. They take a considerable amount of CPU cycles. If we want to see QUIC being as lightweight as TLS over TCP, it is critically important to reduce ACK frequency as much as possible. This is because the cost of processing ACK in QUIC is much more expensive than TCP, due to QUIC being a userspace transport. Based on this, if we are to change what we have in the base drafts, I'd much prefer adopting what we have in the proposed delayed ack extension [2], rather than just changing the default ack policy from ack2 to ack10. By adopting what we have in the delayed ack extension, we can expect better performance, and the endpoints would have the freedom to change the behavior when the default ack policy is deemed insufficient (regardless of the default being ack2 or ack10). [1] URL [2] URL 
\nThese results/article on computational efficiency are really interesting. Please see below. This is useful, we see here there's a clear benefit in using a larger ACK Ratio also in terms of server load. We can see already the benefit at 1:10 and GSO helps as expected. We think that this is also really true. QUIC ACKs are also larger than TCP ACKs (see URL). I don't think the two approaches conflict. The difference between the two approaches is that the small change to the base specification to use an ACK Ratio 1:10 ensures the default return path is at least as good as TCP for common traffic (even with common TCP ACK Thinning). Without this change, the end user could experience significant differences in performance between TCP and different QUIC versions over asymmetric paths. Whereas to me, the transport parameter extension looks like an exciting way to enable many more things, but there are also many ways this could usefully evolve based on CC, offload, pacing etc. and I think this will take much more to work out what is needed, safe and best ... then to reach consensus. We still also need to specify a default ACK Ratio in this case as well. Best wishes, Gorry\nI'm greatly in favour of draft-iyengar-quic-delayed-ack-00 (BTW, where would you prefer me to give review comments about that?) 
I am much less keen on any default delayed ACK policy, whether 1:2 as now in quic-transport, or the 2-10/100 numbers proposed in draft-fairhurst-quic-ack-scaling-02 My concerns would melt away if the default policy was explicitly /implementation-dependent/ and the draft instead gave /guidance/ to implementers. The draft could still give the same 2-10/100 policy, but as /guidance/ for the current public Internet. The draft should also state the conditions under which this guidance was determined (e.g. Internet paths with BDP between x and y, or whatever). What's the difference? Stating a default policy implies this is a matter of interoperability. It's not. If we change to giving guidance, I'm sure most implementers will just use the guidance. However, implementers will be free to evolve their default as they gain experience. This reduces the possibility that other behaviours will evolve around any default like 2-10/100. For instance, network monitoring using it as a signature. Or congestion controls optimizing around it. In turn, this will reduce the chance of ossification around the first numbers we thought of. Senders can always use the ACK frequency frame to ask a receiver to change from its default. Then receivers will gradually learn what senders prefer them to do, and change their default accordingly. IMO, a continually improving default is important for short flows, so senders can avoid requesting a different ACK frequency right from the start.\nNeed to admit that I have not had too much time to follow the QUIC work lately. I too believe that draft-iyengar-quic-delayed-ack-00 is a good long term solution that allows endpoint to set a desired ACK frequency that is not more than necessary. As experiments show there are good incentives to keep the ACK frequency as low as possible in terms of client/server load and asymmetric links limitations. 
An ACK-FREQUENCY frame will allow finding the proper balance between the requirements given by the congestion control, the traffic type and the above given limitations. I am not too concerned if it takes an extra development/standards cycle to have that in place in the standard and in running code as long as the QUIC community shows other SDOs that the challenge with QUIC ACKs is being addressed.\nNAME NAME could you please trim the quoted text when you reply by email?\n(I trimmed the comments, but forwarding inline replies to email messages doesn't work well.)\nGiven that this is an area where there is still obviously experimentation and refinement, NAME comment seems on point; baking a specific algorithm as requirements into the specifications doesn't seem like a good idea (although putting constraints on endpoints might make sense). Can we focus on general guidance rather than documenting a specific algorithm? What would that PR look like?\n+1 to NAME suggestion. If we want to make changes for v1, they need to happen now, and they must be in a form that can let us get consensus.\nTo be clear, when I said that having a default policy is not a matter of interoperability, that's only strictly true if QUIC supports something like draft-iyengar-quic-delayed-ack for the sender to ask the receiver to use a certain policy. Bob\nI've proposed more context and text around our current guidance in the draft, see .\nI believe that the text around asymmetric links looks fine, not really necessary to write up much more. What can be good to know is that for the 5G case with limited UL (uplink = modem->base station transmission), the pain that a flow with an unnecessarily high ACK rate causes is on the flow itself or other flows sharing the same access.\nThere are some different points of pain here, the default and the need to adapt to the endpoint: An ACK Ratio (AR 1:1): is the finest granularity of feedback. 
A QUIC AR of 1:1 would consume return path capacity proportional to about 6% of the forward traffic (assuming no loss, etc). This is more than TCP (about 3%). An ACK Ratio (AR 1:2): For a return path bottleneck, QUIC with an AR of 1:2 will still have more impact on sharing flows than TCP would with the same AR 1:2 (because QUIC ACKs are larger, and because networks do not currently thin QUIC ACKs). So QUIC needs to choose the default - is it 1:2, 1:4, 1:8, 1:10? A QUIC AR 1:10 results in an ACK traffic volume of about 1% of the forward traffic. To me, that's a small proportion, which is why I argued that 1:10 is a good recommendation. The default is important when: the endpoints have no knowledge of the asymmetry or the impact they have on a link's layer 2 loading, or the endpoints need to reduce the collateral damage to other IP flows sharing the return path capacity. Negotiation: The AR can be driven by the application or endpoint - e.g., a higher rate connection may well have a benefit from an AR of 1:10, 1:20, 1:100, etc - but such ARs are seldom chosen to reduce the impact on sharing a return bottleneck, more likely they are chosen to benefit the performance of an application or endpoint. That consideration is different, and likely depends on the application or endpoint - which to me is where the flexibility offered by a transport parameter is so important. draft-iyengar-quic-delayed-ack will still need guidance on the \"default\". I argued that the default policy impacts the congestion behaviour when there is appreciable asymmetry, the load this causes at layer 2, or the collateral damage (congestion impact on sharing flows). Sure, the choice of 1% is just a number I said was reasonable, but I've seldom encountered L2 technologies where less than 1% of forward traffic would be significantly better.\nNAME : I understand your position. While I agree that ack thinning happens with TCP, it is not what I expect to find on a common network path. 
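The AR overhead percentages discussed above can be sanity-checked with a couple of lines; the ~90-byte QUIC ACK packet and 1500-byte forward packet used here are illustrative assumptions chosen to roughly reproduce the quoted figures, not measurements from this thread:

```python
# Rough ACK traffic volume as a fraction of forward traffic for a given
# ACK Ratio (one ACK per `ratio` forward packets). Packet sizes are
# illustrative assumptions, not measurements from this thread.
ACK_PACKET = 90      # assumed QUIC ACK packet size incl. IP/UDP/crypto overhead
DATA_PACKET = 1500   # assumed full-size forward data packet

def ack_overhead(ratio: int) -> float:
    """ACK bytes sent per forward byte received."""
    return ACK_PACKET / (DATA_PACKET * ratio)

for ratio in (1, 2, 10):
    print(f"AR 1:{ratio} -> {100 * ack_overhead(ratio):.1f}% of forward traffic")
# AR 1:1 -> 6.0%, AR 1:2 -> 3.0%, AR 1:10 -> 0.6%
```

Under these assumptions, moving from 1:2 to 1:10 cuts return-path ACK volume from roughly 3% to well under 1% of the forward traffic, which is the order of magnitude being argued about above.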
And as far as I know, the endpoints still implement the RFC 5681 recommendation of acking every other packet. What you are arguing is that with ack thinners in the network, TCP's behavior on those network paths is different. However, I do not believe we understand the performance effects of these middleboxes, that is, how they might reduce connection throughput when the asymmetry is not a problem. Importantly, I don't think we should be specifying as default what some middleboxes might be doing without fully understanding the consequences of doing this. Please bear in mind that Chrome's ack policy of 1:10 works with BBR, not with Cubic or Reno. I do not believe there is a magic number at the moment, and as NAME noted, even 1:10 is not a small enough ratio under some conditions. Given this, and given that we know 1:2 is used by TCP endpoints, it makes the most sense to specify that as the suggested guidance. The new text proposed in describes the tradeoff involved, and the rationale for using 1:2. I am strongly against doing anything substantially different than what we have in the draft now without overwhelming information supporting a different position.\nAgree with NAME 1:2 seems a reasonable default, and there seems to be considerable interest in draft-iyengar-quic-delayed-ack, which has a better proposal for stacks that care.\nI think the new ACK text is improving the specification, but I'd like to be sure we have thought about this decision. Let me try to carefully respond on-list here, to see where we agree/disagree: I disagree, I think we shouldn't be perpetuating a myth that a sender receives a TCP ACK for every other packet. Even in the 90's implementations TCP stacks often generated stretch-ACKs for 3 segments. Since then, limited byte counting was introduced and significantly improved the sender's response to stretch ACKs, and that has been good for everyone. 
Senders still rely on byte-counting in TCP to handle Stretch-ACKs, and this need is not decreasing: new network cards reduce per-packet receive processing using Large Receive Offload (LRO) or Generic Receiver Offload (GRO). I am saying many TCP senders actually do today see stretch-ACKs. There are papers that have shown significant presence of stretch ACKs, e.g., (H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 45(3):20\u201327, 2015.). A TCP Stretch-ACK of 3 or 4 was common in their datasets (about 10% of cases). In some networks, the proportion of Stretch-ACKs will be much much higher, since ACK Thinning is now widely deployed in WiFi drivers as well as cable, satellite and other access technologies - often reducing ACK size/rate by a factor of two. I am intrigued and would like to know more, in case I missed something? Typical algorithms I am aware of track a queue at the \"bottleneck\" and then make a decision based upon the contents of the queue. If there is no asymmetry, there isn't normally a queue and the algorithm won't hunt to discard an \"ACK\". But, maybe, there is some truth in being wary: Encrypted flows can also build return path queues (as does QUIC traffic). Without an ability to interpret the transport data (which nobody, including me, wants for QUIC), a bottleneck router is forced to either use another rule to drop packets from the queue (such as the smallest packets) or to allow a queue to grow, limiting forward direction throughput. I would say neither of these outcomes are great because the router is trying to fix the transport's asymmetry. However, if QUIC causes this problem, I suggest some method will proliferate as QUIC traffic increases. I didn't find something in QUIC spec. that was a major concern. It looked to me like QUIC had learned from TCP how to handle stretch-ACKs. 
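The byte-counting point is why a stretch ACK need not slow a sender down: window growth driven by bytes acknowledged is insensitive to how many ACK packets carry that acknowledgement. A minimal RFC 3465-style (Appropriate Byte Counting) sketch, with an illustrative MSS and the L=2*SMSS slow-start cap, is:

```python
# Appropriate Byte Counting (RFC 3465)-style window growth: cwnd grows with the
# number of bytes newly acknowledged, not the number of ACK packets, so one
# stretch ACK covering several segments still grows cwnd appropriately.
MSS = 1460           # illustrative segment size
L = 2 * MSS          # per-ACK slow-start increase cap (RFC 3465's L = 2*SMSS)

def on_ack(cwnd: int, bytes_acked: int, ssthresh: int) -> int:
    if cwnd < ssthresh:                       # slow start
        return cwnd + min(bytes_acked, L)
    return cwnd + MSS * bytes_acked // cwnd   # congestion avoidance (approx.)

# One stretch ACK covering 10 segments in slow start still adds 2*MSS:
print(on_ack(cwnd=10 * MSS, bytes_acked=10 * MSS, ssthresh=10**6))  # 17520
```

This is only a sketch of the mechanism being referred to, not QUIC's or any stack's actual code; the point is that with byte counting the cwnd after ten packets acknowledged in one ACK is close to the cwnd after the same bytes acknowledged across five ACKs.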
After that, we changed quicly to use 1:10 with Reno, and things worked well. Didn't NAME also use Reno? Sure, the deisgn of TCP recognised that AR 1:1 generated too much traffic, AR 1:2 for TCP resulted in an Forward:Return traffic ratio of ~1.5%, and QUIC increases this ~3%. However, as network link speeds increased, this proved too low for many network links and so, TCP ACK thinning came into being. This often reduces the TCP AR to around 1:4 ( and will do this without bias, whatever the outcome is.\nNAME Just to clarify, in our experiment, we reduced the ACK frequency from the default (2) to after leaving slow start, then updated it every 4 PTO. I think that would work fine on a high-bandwidth, stable network. But I am not sure if it would work as well as ack2 on all network conditions. Anyways, I'd explain our experiment as an optimization of ack 2, as it was reduction of ack rate after leaving slow start. I think that that was a sensible choice, because IIUC having frequent acks during slow start is essential. Assuming that we'd prefer using ack 2 during slow start, the question is if we need to hard-code in the base draft how to optimize after leaving slow start. While I agree that continuing to use ack 2 after slow start is concerning for certain networks, I am reluctant to spending time on looking for such a fixed set of values across diverse networks (including much slower links or lossy ones). Rather, my preference goes to shipping the base-draft as-is, then moving on to finalize the ack-frequency frame. I think we can assume most if not all of the popular clients to support the ack-frequency frame. Then, it becomes less of a question what the default in the base-draft is.\nThanks kazuho, 1/8 or 1/4 cwnd seems reasonable, as does using Ack 2 for the first part of slow start ... that closed to what we proposed. And yes there will anyway be cases where any default does not work. 
If QUIC decides to retain Ack 2, can we at least capture that Ack 2 with QUIC doesn't have the same return path performance as Ack 2 with TCP - so that people know that less frequent ACKs are something to think about from the start? I'm doing a talk in PANRG this week, and have decided to talk-through this topic (I was already going to talk about things related to return paths). That'll give people a view of some data - and a little thought about the implications this has for QUIC, and whether there is an incentive for network devices to thin QUIC to restore performance.\nNAME : I'd be in support of saying that ack thinning happens for TCP and cite those studies, and that in the absence of that, there might be throughput consequences for QUIC. I think we've said exactly that in other words in , but if we want to say more there, that's fine with me. That would make this an editorial issue, but I'm entirely happy to do more words there. I am against changing the recommendation here for another arbitrary recommendation. I think the right approach is to allow endpoints to specify this during a connection, which is where the extension is helpful. As NAME notes, our hope is that the extension gets widely deployed. If you want to make the argument for the extension to become part of the base draft, that's a different conversation, but that's not the one we're having here.\nAttempting to summarise the current status of this issue as a chair: there appears to have been a fair amount of discussion about making a design change, with representatives both in favour and against. It appears there is no omnipotent default policy, choosing an optimal one depends on many factors. The editors have landed an editorial change that attempts to better describe the rationale behind QUIC's current design and highlights some of the factors. There may be further editorial improvements but they do not need to block the design issue. 
The proposal therefore is to close this issue with no design change. We had prepared some slides for the Vancouver IETF to discuss QUIC's ACKs, and how to make QUIC work better than TCP in common asymmetric paths. This is a short slide deck that summarises what we have learned about ACKs in QUIC: URL It compares the ACK traffic of TCP with that of QUIC from the point of view of a path where QUIC's return traffic is somehow constrained or could impact other user traffic. We think that ACKs are a concern for various shared capacity return links (e.g., radio-based methods in WiFi or mobile cellular where ACKs consume spectrum/radio resource or battery; shared capacity satellite broadband return link or the uplink of a cable system). For TCP, such systems commonly involve a PEP or an ACK Filtering mechanism (which clearly isn't wanted or even possible for QUIC). The baseline QUIC performance (as per current spec) suffers over these paths. So our first question is: Do other people think it is important that the default specified for QUIC works well with asymmetric paths? Gorry", "new_text": "algorithm could cause spurious retransmits, but the sender will continue making forward progress. 13.2.5. A receiver limits the number of ACK Ranges (ack-ranges) it remembers and sends in ACK frames, both to limit the size of ACK frames and to"}
{"id": "q-en-quicwg-base-drafts-e4904e5447e6f4061d24283fef3e8c2f396a361f0023a218b48fa2c6c1ff7a06", "old_text": "that would otherwise be non-ack-eliciting, to avoid an infinite feedback loop of acknowledgements. 13.2.5. An endpoint measures the delays intentionally introduced between the time the packet with the largest packet number is received and the", "comments": "This PR adds explicit context around the choice of ACK frequency in the draft. It also takes in Bob Briscoe's suggestion in , of calling it guidance in light of the best information we have. Also does some editorial cleanup of text blocks that were not in the right sections.\nThanks NAME NAME -- comments incorporated.\nAs currently specified, the default QUIC ACK policy models TCP\u2019s ACK policy, but consumes significantly more return path capacity. This can impact resource usage of the return channel, can constrain forward throughput, and can impact the performance of other sessions sharing an uplink (draft-fairhurst-quic-ack-scaling). This was not discussed in Zurich - where discussion moved instead to the separate topic of draft-iyengar-quic-delayed-ack. In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymmetry are presently mitigated by network devices RFC3449 that rely upon observing TCP headers. This results in ossification problems, but does practically have benefit. It reduces the TCP ACK rate over the constrained link, it reduces consumption of \"radio\" transmission opportunities, and reduces traffic volume from ACKs. These mitigations are not possible, nor desirable for QUIC. However, the default performance of QUIC is therefore much worse than for TCP. In QUIC, the default ACK Ratio has to be set by the endpoints implementing the protocol because the sender and receiver have no information about the \"cost\" of sending an ACK using a link along an asymmetric path. 
These endpoints can also choose a non-default value to enable a new CC, scale to higher rates, etc (e.g. draft-iyengar-quic-delayed-ack). We have shown there can be significant benefit from an ACK Ratio of 1:10 and this is expected to also have benefit for higher levels of asymmetry. The proposed change from 1:2 to 1:10 is predicated on QUIC's design decisions: starting a connection with an Initial Window of 10 is acceptable, QUIC uses PTO as the loss detection method (a predicate to issue ), and QUIC has some form of pacing at the sender, at least to mitigate IW bursts.\nEven though we've avoided straying this far from the TCP spec, I think allowing this would be good for the internet. It should probably be documented for TCP, too. How does ack stretching of this nature work for BBR bandwidth estimation? It's harder to use the ack pacing, no?\nI don't believe QUIC ACKs are significantly larger. In fact, I believe the ACK frame itself is typically smaller than TCP ACKs, but the savings there are used on QUIC's 16-byte auth hash and likely a non-zero-length CID. Do you have a size analysis written up somewhere? I think this is the core issue. It's not that QUIC ACKs are really consuming significantly more capacity in terms of bytes on the wire, it's that middleboxes can't decide to thin the ACKs, because they don't know which packets are ACK-only packets. Did you implement the optimization already described in the transport draft?: \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" NAME did and I believe found it hugely reduced the number of ACKs, sometimes too much. As I've said before, I would really prefer not to specify an approach in QUIC transport that is a heuristic like does. 
In particular, on lower bandwidth networks, 10 packets may be the entire congestion window and is too large a value. On higher bandwidth networks, every 20 packets has been shown to be slightly better than 10, at least when using BBR. I wrote to add some editorial text about why every 2 packets is a SHOULD, not a MUST. Chrome did not default enable the ACK strategy recommends until BBR was launched, because there was a performance regression with Cubic. I'd rather not assume everyone is using BBR or a BBR-like congestion controller. NAME BBR does fine with stretch ACKs. Cubic and Reno suffer in some cases, however.\nQUIC ACKs are larger! The smallest IPv4 QUIC ACK packet is 51 bytes. The ACK frame is 4 bytes, but is sent using a QUIC packet. For IPv4 this is: 20 (IP) + 8 (UDP) + 3 (1+1+1 QUIC header ... assuming a 1 B CID) + 4 (minimum ACK frame size) + 16 (crypto). Other types of frames may also be sent in the same packet, increasing this size. That size difference matters. The point is that QUIC ACKs are consuming more capacity in terms of bytes on the wire, and often also important: in more transmission opportunities at the physical layer. That makes QUIC perform more poorly than it needs to across some paths. The text \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" is permissive to do something different - alas the effect of ordering of the processing depends on rate, timing, etc - The present text is not a recommendation to do something appropriate, which I think it needs to be. If this were to describe that a receiver SHOULD reduce the ACK to data packet ratio to 1:10, then that would help. On low bandwidth links, then I agree that waiting for 10 packets would not be suitable, but then wouldn't the rule 4 ACKs/RTT still cover this case? 
(I have no problem believing an ACK every 20 packets can be slightly better than 10 in some cases, but then we have chosen IW at 10 and that relates to burstiness and responsiveness of the basic spec, so I think 1:10 makes more sense as the recommendation. If the CC and path information suggest 1:20 - perhaps as the rate increases - then a sender can \"tune\" as per the transport parameter proposal. That said, I STILL assert that this is a different issue from the expected default.)\nWould it help to give a recommendation like, if the congestion window is large, the ACK ratio should be reduced? Not sure if we can easily give a threshold for the window but to align with the 4 ACKs per RTT rule we could say 40 or to be more conservative maybe 100?\nIf the receiver knew the cwnd... we could add that. The current proposal we made is here (along with some of the arguments about why QUIC and TCP are slightly different): draft-fairhurst-quic-ack-scaling\nSo the min TCP ack is 40 bytes vs QUIC at 50 (0-byte CIDs are common, so I'll assume that), 25% larger in the minimum case. However, with a few SACK blocks, TCP ACKs are likely to be larger than QUIC ACKs, since SACK consumes 10 bytes for the first block and 8 for each additional block. By the time the second block is added, there's a good chance QUIC ACKs are smaller and it's almost certainly true with 3 SACK blocks (which I think is a typical number). So I'll go back to my previous statement that the real issue is that the network can't thin QUIC ACKs effectively.\nMay be \"yes\", may be \"no\" on a larger size after loss - since QUIC ACK ranges are often larger - but that depends on loss pattern, etc. In our case, we saw more bytes.
But \"yes\" on me saying there is a need for QUIC to make good default decisions about the ACK rate, because QUIC becomes responsible for transport working across heterogeneous paths and we need to get good defaults rather than rely on \"ACK-Thining\" and other network modification.\nAnybody has a gut feeling how very low latency congestion control fares with sparse ACKs?. I have experienced that DCTCP is more picky with ACK aggregation than e.g Reno. I suspect that ACK-thinning can have the give the issue as well. A too sparse ACK spacing can also give problems. Another aspect is how e.g. TCP Prague with paced chirps work with sparse ACKs ?. To me the RTT/4 rule makes sense but I feel that the best alternative depends a lot on both the congestion control and the source material. Large file transfers are more trivial than sources that are occasionally idle.\nI think this isn't the correct question. This is about the default policy. The default policy is about what a receiver does by \"default\" - the current text uses a value of ACK every other packet, 1:2. TCP specified this when transmission rates were low, and TCP depended on individual ACKs to clock data (no pacing, IW=2), detect loss (no PTO), increase cwnd (per ACK packet), etc. In response, TCP causes ACK congestion and strange interactions with radio access. Performance over WiFi, DOCSIS etc suffers unless the network tries to address this. I don't see why QUIC, which doesn't rely on any of these, needs to set a DEFAULT of 1:2. This is literally a few lines of code difference, and we could use 1:10. These CC's you mention may have different needs and that needs to be handled by telling the remote that your CC has these specific needs - which I think is what Jana's extension is about.\nI am kind of the donkey in the middle... ACK thinning in networks should be discouraged. 
Standardization of ACK thinning mechanisms was proposed in 3GPP RAN2; this request was turned down mainly with reference to the fact that it can harm transport protocol performance and does not work safely with QUIC. Nothing stops UE (terminal and modem) vendors from implementing proprietary ACK-thinning but I certainly hope that 5G UE vendors don't attempt to thin QUIC-ACKs as this is a slippery slope. High ACK rates are a 5G problem mainly for high band NR (=5G at e.g 28GHz). It is mainly a coverage issue as UEs have a limited output power due to practical and regulatory requirements. At the same time one needs to recognize that end users don't only download movies today and even VoD streaming implies uplink traffic in the form of chunk requests. And then we have an increasing amount of people doing uplink streaming or uploading an infinite amount of pictures of food plates and cats :-) Operators and 5G vendors have to deal with this and solve the problem with deployment of 5G access in various frequency bands to get the best downlink and uplink performance. Reduced ACK rates are of course welcome but I don't currently see a problem that the initial ACK ratio is 1:2, to increase to 1:? as the congestion window grows, bounded by e.g an RTT/4 rule. But here I need to admit that at least I have not studied this in detail. Ergo, I am not convinced that it is necessary to hardwire the QUIC spec to 1:10, but I also cannot say that it should not be done.\nThis is a small enough change. However, there is still a bar for those, and the current level of discussion makes me believe we should close this with no action?\nJust so that it's easy to find, there is data and a continued discussion at URL\nFolks who care about a change here need to prioritize progressing this issue.
We're close to being able to last-call the specs, and would like this resolved in time.\nWe\u2019ve completed a round of updated analysis of the proposed change to the QUIC spec, using quicly (changing to a 1:10 policy), and chromium (which currently does 1:10 by default). Our hope is that this change to the spec will actually make standard end-to-end QUIC work better than TCP over asymmetric paths! This is a short slide deck about the performance of the proposed change: URL There have been a number of other changes made by the QUIC ID editors relating to ACK processing, so we wanted to be sure these results were up to date. (These results do not consider additional ACKs after re-ordering - reflecting the latest text removing that.) We also really hope this will motivate a change to the practice of forcing QUIC fallback to TCP (with PEP/Thinning) over highly asymmetric paths. People may also be interested in work for L4S to evaluate the impact of ACK Thinning for TCP-Prague. Please do provide feedback, or any questions on this - as Lars notes, we think the time is running out for considering new changes, even relatively simple ones like this. Gorry P.S. First email: URL\nI believe that it is beneficial to use a 1:10 policy; I understand this to mean that a 1:2 policy will still be used in the initial phase (flow start) - or do the timer mechanisms kick in to avoid ACKs being transmitted too sparsely in this case?\nYes, our proposal counted ACK-eliciting packets at the receiver (because the receiver does not in general know the congestion state, so you need a heuristic), and we finally chose the same method as in current QUIC Chromium - i.e. for the first 100 packets we do not use ACK 1:10, changing only after 100 packets.\nOK, it sounds like a good idea to use 1:10 after 100 packets or so.
Just one potentially stupid question: Is there any special treatment necessary after an RTO?\n[This seems to be a forwarded comment from NAME] Hi, Gorry Thank you for sharing your experiments. I'm delighted to see results showing that use of ack10 gives us comparable throughput to ack2. Recently, I have been looking into how ack-frequency affects the computational efficiency of the server [1]. Using the (almost) same setup, I have today run a benchmark comparing three ACK strategies: ack2, ack10, ack 1/8cwnd. ack2: 334Mbps ack10: 414Mbps ack 1/8cwnd: 487Mbps As can be seen, ack10 is better than ack2, but lags behind ack1/8cwnd. That's because there are still frequent ACKs. They take a considerable amount of CPU cycles. If we want to see QUIC being as lightweight as TLS over TCP, it is critically important to reduce ACK frequency as much as possible. This is because the cost of processing ACKs in QUIC is much more expensive than TCP, due to QUIC being a userspace transport. Based on this, if we are to change what we have in the base drafts, I'd much prefer adopting what we have in the proposed delayed ack extension [2], rather than just changing the default ack policy from ack2 to ack10. By adopting what we have in the delayed ack extension, we can expect better performance, and the endpoints would have the freedom to change the behavior when the default ack policy is deemed insufficient (regardless of the default being ack2 or ack10). [1] URL [2] URL
These results/article on computational efficiency are really interesting. Please see below. This is useful, we see here there's a clear benefit in using a larger ACK Ratio also in terms of server load. We can see already the benefit at 1:10 and GSO helps as expected. We think that this is also really true. QUIC ACKs are also larger than TCP ACKs (see URL). I don't think the two approaches conflict. The difference between the two approaches is that the small change to the base specification to use an ACK Ratio 1:10 ensures the default return path is at least as good as TCP for common traffic (even with common TCP ACK Thinning). Without this change, the end user could experience significant differences in performance between TCP and different QUIC versions over asymmetric paths. Whereas to me, the transport parameter extension looks like an exciting way to enable many more things, but there are also many ways this could usefully evolve based on CC, offload, pacing etc, and I think this will take much more to work out what is needed, safe and best ... and then to reach consensus. We still also need to specify a default ACK Ratio in this case as well. Best wishes, Gorry\nI'm greatly in favour of draft-iyengar-quic-delayed-ack-00 (BTW, where would you prefer me to give review comments about that?)
I am much less keen on any default delayed ACK policy, whether 1:2 as now in quic-transport, or the 2-10/100 numbers proposed in draft-fairhurst-quic-ack-scaling-02. My concerns would melt away if the default policy was explicitly /implementation-dependent/ and the draft instead gave /guidance/ to implementers. The draft could still give the same 2-10/100 policy, but as /guidance/ for the current public Internet. The draft should also state the conditions under which this guidance was determined (e.g. Internet paths with BDP between x and y, or whatever). What's the difference? Stating a default policy implies this is a matter of interoperability. It's not. If we change to giving guidance, I'm sure most implementers will just use the guidance. However, implementers will be free to evolve their default as they gain experience. This reduces the possibility that other behaviours will evolve around any default like 2-10/100. For instance, network monitoring using it as a signature. Or congestion controls optimizing around it. In turn, this will reduce the chance of ossification around the first numbers we thought of. Senders can always use the ACK frequency frame to ask a receiver to change from its default. Then receivers will gradually learn what senders prefer them to do, and change their default accordingly. IMO, a continually improving default is important for short flows, so senders can avoid requesting a different ACK frequency right from the start.\nNeed to admit that I have not had too much time to follow the QUIC work lately. I too believe that draft-iyengar-quic-delayed-ack-00 is a good long term solution that allows endpoints to set a desired ACK frequency that is not more than necessary. As experiments show there are good incentives to keep the ACK frequency as low as possible in terms of client/server load and asymmetric links limitations.
An ACK-FREQUENCY frame will allow endpoints to find the proper balance between the requirements given by the congestion control, the traffic type and the above given limitations. I am not too concerned if it takes an extra development/standards cycle to have that in place in the standard and in running code as long as the QUIC community shows other SDOs that the challenge with QUIC ACKs is being addressed.\nNAME NAME could you please trim the quoted text when you reply by email?\n(I trimmed the comments, but forwarding inline replies to email messages doesn't work well.)\nGiven that this is an area where there is still obviously experimentation and refinement, NAME comment seems on point; baking a specific algorithm as requirements into the specifications doesn't seem like a good idea (although putting constraints on endpoints might make sense). Can we focus on general guidance rather than documenting a specific algorithm? What would that PR look like?\n+1 to NAME suggestion. If we want to make changes for v1, they need to happen now, and they must be in a form that can let us get consensus.\nTo be clear, when I said that having a default policy is not a matter of interoperability, that's only strictly true if QUIC supports something like draft-iyengar-quic-delayed-ack for the sender to ask the receiver to use a certain policy. Bob\nI've proposed more context and text around our current guidance in the draft, see .\nI believe that the text around asymmetric links looks fine, not really necessary to write up much more. What can be good to know is that for the 5G case with limited UL (uplink = modem->base station transmission), the pain that a flow with an unnecessarily high ACK rate causes falls on the flow itself or other flows sharing the same access.\nThere are some different points of pain here, the default and the need to adapt to the endpoint: An ACK Ratio (AR 1:1) is the finest granularity of feedback.
A QUIC AR of 1:1 would consume return path capacity proportional to about 6% of the forward traffic (assuming no loss, etc). This is more than TCP (about 3%). An ACK Ratio (AR 1:2): For a return path bottleneck, QUIC with an AR of 1:2 will still have more impact on sharing flows than TCP would with the same AR 1:2 (because QUIC ACKs are larger, and because networks do not currently thin QUIC ACKs). So QUIC needs to choose the default - is it 1:2, 1:4, 1:8, 1:10? A QUIC AR 1:10 results in an ACK traffic volume of about 1% of the forward traffic. To me, that's a small proportion, which is why I argued that 1:10 is a good recommendation. The default is important when: the endpoints have no knowledge of the asymmetry or the impact they have on a link's layer 2 loading, or the endpoints need to reduce the collateral damage to other IP flows sharing the return path capacity. Negotiation: The AR can be driven by the application or endpoint - e.g., a higher rate connection may well have a benefit from an AR of 1:10, 1:20, 1:100, etc - but such ARs are seldom chosen to reduce the impact on sharing a return bottleneck, more likely they are chosen to benefit the performance of an application or endpoint. That consideration is different, and likely depends on the application or endpoint - which to me is where the flexibility offered by a transport parameter is so important. draft-iyengar-quic-delayed-ack will still need guidance on the \"default\". I argued that the default policy impacts the congestion behaviour when there is appreciable asymmetry, via the load this causes at layer 2 or the collateral damage (congestion impact on sharing flows). Sure, the choice of 1% is just a number I said was reasonable, but I've seldom encountered L2 technologies where less than 1% of forward traffic would be significantly better.\nNAME : I understand your position. While I agree that ack thinning happens with TCP, it is not what I expect to find on a common network path.
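As a rough illustration of these proportions, return-path ACK overhead can be estimated as ACK bytes divided by the forward bytes they cover. The 51-byte ACK and 1500-byte data packet here are assumptions for a bulk transfer; the thread's ~6%/~3%/~1% figures presumably also fold in per-packet link-layer costs, so only the relative scaling should be read off this sketch:

```python
# Back-of-envelope return-path overhead for a given ACK Ratio (AR 1:n).
# ack_bytes and data_bytes are illustrative assumptions, not spec values.

def ack_overhead(ack_ratio: int, ack_bytes: int = 51,
                 data_bytes: int = 1500) -> float:
    """Fraction of forward-traffic volume consumed by ACKs on the return path."""
    return ack_bytes / (ack_ratio * data_bytes)

for ratio in (1, 2, 10):
    print(f"AR 1:{ratio}: {ack_overhead(ratio):.2%} of forward volume")
```

Whatever the absolute constants, moving from AR 1:2 to AR 1:10 cuts the ACK volume by a factor of five, which is the proportionality the thread's 3%-to-1% comparison rests on.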
And as far as I know, the endpoints still implement the RFC 5681 recommendation of acking every other packet. What you are arguing is that with ack thinners in the network, TCP's behavior on those network paths is different. However, I do not believe we understand the performance effects of these middleboxes, that is, how they might reduce connection throughput when the asymmetry is not a problem. Importantly, I don't think we should be specifying as default what some middleboxes might be doing without fully understanding the consequences of doing this. Please bear in mind that Chrome's ack policy of 1:10 works with BBR, not with Cubic or Reno. I do not believe there is a magic number at the moment, and as NAME noted, even 1:10 is not a small enough ratio under some conditions. Given this, and given that we know 1:2 is used by TCP endpoints, it makes the most sense to specify that as the suggested guidance. The new text proposed in describes the tradeoff involved, and the rationale for using 1:2. I am strongly against doing anything substantially different from what we have in the draft now without overwhelming information supporting a different position.\nAgree with NAME 1:2 seems a reasonable default, and there seems to be considerable interest in draft-iyengar-quic-delayed-ack, which has a better proposal for stacks that care.\nI think the new ACK text is improving the specification, but I'd like to be sure we have thought about this decision. Let me try to carefully respond on-list here, to see where we agree/disagree: I disagree, I think we shouldn't be perpetuating a myth that a sender receives a TCP ACK for every other packet. Even in 1990s implementations, TCP stacks often generated stretch ACKs for 3 segments. Since then, limited byte counting was introduced and significantly improved the sender's response to stretch ACKs, and that has been good for everyone.
Senders still rely on byte-counting in TCP to handle Stretch-ACKs, and this need is not decreasing: new network cards reduce per-packet receive processing using Large Receive Offload (LRO) or Generic Receive Offload (GRO). I am saying many TCP senders actually do today see stretch-ACKs. There are papers that have shown significant presence of stretch ACKs, e.g., (H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 45(3):20\u201327, 2015.). A TCP Stretch-ACK of 3 or 4 was common in their datasets (about 10% of cases). In some networks, the proportion of Stretch-ACKs will be much much higher, since ACK Thinning is now widely deployed in WiFi drivers as well as cable, satellite and other access technologies - often reducing ACK size/rate by a factor of two. I am intrigued and would like to know more, in case I missed something? Typical algorithms I am aware of track a queue at the \"bottleneck\" and then make a decision based upon the contents of the queue. If there is no asymmetry, there isn't normally a queue and the algorithm won't hunt to discard an \"ACK\". But, maybe, there is some truth in being wary: Encrypted flows can also build return path queues (as does QUIC traffic). Without an ability to interpret the transport data (which nobody, including me, wants for QUIC), a bottleneck router is forced to either use another rule to drop packets from the queue (such as the smallest packets) or to allow a queue to grow, limiting forward direction throughput. I would say neither of these outcomes is great because the router is trying to fix the transport's asymmetry. However, if QUIC causes this problem, I suggest some method will proliferate as QUIC traffic increases. I didn't find something in the QUIC spec. that was a major concern. It looked to me like QUIC had learned from TCP how to handle stretch-ACKs.
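A toy slow-start sketch (not any stack's actual code) shows why the byte counting mentioned above matters under stretch ACKs: with per-ACK packet counting, window growth slows as the ACK ratio rises, while crediting all newly acked bytes keeps growth independent of ACK frequency. The MSS and window values are illustrative assumptions:

```python
# Illustration of byte counting (as in RFC 3465 ABC) vs classic
# per-ACK counting when the receiver sends stretch ACKs.

MSS = 1500

def slow_start_growth(cwnd_bytes: int, acked_packets: int,
                      ack_ratio: int, byte_counting: bool) -> int:
    """cwnd after one RTT's worth of ACKs in slow start."""
    n_acks = acked_packets // ack_ratio    # stretch ACKs: one per ack_ratio packets
    for _ in range(n_acks):
        if byte_counting:
            cwnd_bytes += ack_ratio * MSS  # credit all newly acked bytes
        else:
            cwnd_bytes += MSS              # classic per-ACK increase
    return cwnd_bytes

start = 10 * MSS
print(slow_start_growth(start, 10, 2, byte_counting=False))  # 22500: growth halved
print(slow_start_growth(start, 10, 2, byte_counting=True))   # 30000: full doubling
```

With packet counting, an AR of 1:2 already halves the slow-start growth rate, which is why senders that handle stretch ACKs by byte counting are largely indifferent to the receiver's ACK ratio.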
After that, we changed quicly to use 1:10 with Reno, and things worked well. Didn't NAME also use Reno?\nSure, the design of TCP recognised that AR 1:1 generated too much traffic; AR 1:2 for TCP resulted in a Forward:Return traffic ratio of ~1.5%, and QUIC increases this to ~3%. However, as network link speeds increased, this proved too low for many network links and so, TCP ACK thinning came into being. This often reduces the TCP AR to around 1:4, and will do this without bias, whatever the outcome is.\nNAME Just to clarify, in our experiment, we reduced the ACK frequency from the default (2) to after leaving slow start, then updated it every 4 PTO. I think that would work fine on a high-bandwidth, stable network. But I am not sure if it would work as well as ack2 under all network conditions. Anyways, I'd explain our experiment as an optimization of ack 2, as it was a reduction of the ack rate after leaving slow start. I think that was a sensible choice, because IIUC having frequent acks during slow start is essential. Assuming that we'd prefer using ack 2 during slow start, the question is if we need to hard-code in the base draft how to optimize after leaving slow start. While I agree that continuing to use ack 2 after slow start is concerning for certain networks, I am reluctant to spend time on looking for such a fixed set of values across diverse networks (including much slower links or lossy ones). Rather, my preference goes to shipping the base-draft as-is, then moving on to finalize the ack-frequency frame. I think we can assume most if not all of the popular clients to support the ack-frequency frame. Then, it becomes less of a question what the default in the base-draft is.\nThanks kazuho, 1/8 or 1/4 cwnd seems reasonable, as does using Ack 2 for the first part of slow start ... that is close to what we proposed. And yes there will anyway be cases where any default does not work.
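The Chromium-style receiver policy described in this exchange (ACK every 2nd ack-eliciting packet for roughly the first 100 packets, then every 10th) can be sketched as a simple counter. The 100/2/10 values are the figures quoted in the thread, not normative spec values, and a max_ack_delay timer (omitted here) would still bound worst-case feedback delay:

```python
# Sketch of the count-based receiver heuristic discussed in the thread.
# The receiver does not know the sender's congestion state, so it
# switches ratio after a fixed number of ack-eliciting packets.

class AckPolicy:
    def __init__(self, warmup_packets: int = 100,
                 early_ratio: int = 2, late_ratio: int = 10):
        self.warmup_packets = warmup_packets
        self.early_ratio = early_ratio
        self.late_ratio = late_ratio
        self.received = 0   # ack-eliciting packets seen so far
        self.unacked = 0    # packets received since the last ACK sent

    def on_ack_eliciting(self) -> bool:
        """Return True if an ACK should be sent immediately."""
        self.received += 1
        self.unacked += 1
        ratio = (self.early_ratio if self.received <= self.warmup_packets
                 else self.late_ratio)
        if self.unacked >= ratio:
            self.unacked = 0
            return True
        return False        # otherwise wait for the delayed-ACK timer

policy = AckPolicy()
acks = sum(policy.on_ack_eliciting() for _ in range(200))
print(acks)   # 60: 50 ACKs in the first 100 packets, 10 in the next 100
```

Counting ack-eliciting packets rather than tracking slow-start exit directly sidesteps the problem noted above, that the receiver in general cannot observe the sender's congestion state.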
If QUIC decides to retain Ack 2, can we at least capture that Ack 2 with QUIC doesn't have the same return path performance as Ack 2 with TCP - so that people know that less frequent ACKs are something to think about from the start? I'm doing a talk in PANRG this week, and have decided to talk through this topic (I was already going to talk about things related to return paths). That'll give people a view of some data - and a little thought about the implications this has for QUIC, and whether there is an incentive for network devices to thin QUIC to restore performance.\nNAME : I'd be in support of saying that ack thinning happens for TCP and citing those studies, and that in the absence of that, there might be throughput consequences for QUIC. I think we've said exactly that in other words in , but if we want to say more there, that's fine with me. That would make this an editorial issue, but I'm entirely happy to do more words there. I am against changing the recommendation here for another arbitrary recommendation. I think the right approach is to allow endpoints to specify this during a connection, which is where the extension is helpful. As NAME notes, our hope is that the extension gets widely deployed. If you want to make the argument for the extension to become part of the base draft, that's a different conversation, but that's not the one we're having here.\nAttempting to summarise the current status of this issue as a chair: there appears to have been a fair amount of discussion about making a design change, with representatives both in favour and against. It appears there is no omnipotent default policy; choosing an optimal one depends on many factors. The editors have landed an editorial change that attempts to better describe the rationale behind QUIC's current design and highlights some of the factors. There may be further editorial improvements but they do not need to block the design issue.
The proposal therefore is to close this issue with no design change. We had prepared some slides for the Vancouver IETF to discuss QUIC's ACKs, and how to make QUIC work better than TCP in common asymmetric paths. This is a short slide deck that summarises what we have learned about ACKs in QUIC: URL It compares the ACK traffic of TCP with that of QUIC from the point of view of a path where QUIC's return traffic is somehow constrained or could impact other user traffic. We think that ACKs are a concern for various shared capacity return links (e.g., radio-based methods in WiFi or mobile cellular where ACKs consume spectrum/radio resource or battery; a shared capacity satellite broadband return link or the uplink of a cable system). For TCP, such systems commonly involve a PEP or an ACK Filtering mechanism (which clearly isn't wanted or even possible for QUIC). The baseline QUIC performance (as per current spec) suffers over these paths. So our first question is: Do other people think it is important that the default specified for QUIC works well with asymmetric paths? Gorry", "new_text": "that would otherwise be non-ack-eliciting, to avoid an infinite feedback loop of acknowledgements. 13.2.6. An endpoint measures the delays intentionally introduced between the time the packet with the largest packet number is received and the
{"id": "q-en-quicwg-base-drafts-e4904e5447e6f4061d24283fef3e8c2f396a361f0023a218b48fa2c6c1ff7a06", "old_text": "NOT include delays that it does not control when populating the Ack Delay field in an ACK frame. 13.2.6. ACK frames MUST only be carried in a packet that has the same packet number space as the packet being ACKed; see packet-protected. For", "comments": "This PR adds explicit context around the choice of ACK frequency in the draft. It also takes in Bob Briscoe's suggestion in , of calling it a guidance in light of the best information we have. Also does some editorial cleanup of text blocks that were not in the right sections.\nThanks NAME NAME -- comments incorporated.\nAs currently specified, the default QUIC ACK policy models TCP\u2019s ACK policy, but consumes significantly more return path capacity. This can impact resource usage of the return channel, can constrain forward throughput, and can impact the performance of other sessions sharing an uplink (draft-fairhurst-quic-ack-scaling). This was not discussed in Zurich - where discussion moved instead to the separate topic of draft-iyengar-quic-delayed-ack. In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymmetry are presently mitigated by network devices RFC3449 that rely upon observing TCP headers. This results in ossification problems, but does have practical benefit. It reduces the TCP ACK rate over the constrained link, it reduces consumption of "radio" transmission opportunities, and reduces traffic volume from ACKs. These mitigations are not possible, nor desirable for QUIC. However, the default performance of QUIC is therefore much worse than for TCP. In QUIC, the default ACK Ratio has to be set by the endpoints implementing the protocol because the sender and receiver have no information about the "cost" of sending an ACK using a link along an asymmetric path.
These endpoints can also choose a non-default value to enable a new CC, scale to higher rates, etc (e.g. draft-iyengar-quic-delayed-ack). We have shown there can be significant benefit from an ACK Ratio of 1:10 and this is expected to alos have benefit for higher levels of asymmetry. The proposed change from 1:2 to 1:10 is predicated on QUIC's design decisions: starting a connection with an Initial Window of 10 is acceptable, QUIC uses PTO as the loss detection method (a predicate to issue ), and QUIC has some form of pacing at the sender, at least to mitigate IW bursts.\nEven though we've avoided straying this far from the TCP spec, I think allowing this would be good for the internet. It should probably be documented for TCP, too. How does ack stretching of this nature work for BBR bandwidth estimation? It's harder to use the ack pacing, no?\nI don't believe QUIC ACKs are significantly larger. In fact, I believe the ACK frame itself is typically smaller than TCP ACKs, but the savings there are used on QUIC's 16-byte auth hash and likely a non-zero-length CID. Do you have a size analysis written up somewhere? I think this is the core issue. It's not that QUIC ACKs are really consuming significantly more capacity in terms of bytes on the wire, it's that middleboxes can't decide to thin the ACKs, because they don't know which packets are ACK-only packets. Did you implement the optimization already described in the transport draft?: \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" NAME did and I believe found it hugely reduced the number of ACKs, sometimes too much. As I've said before, I would really prefer not specify an approach in QUIC transport that is a heuristic like does. 
In particular, on lower bandwidth networks, 10 packets may be the entire congestion window and is too large a value. On higher bandwidth networks, every 20 packets has shown to be slightly better than 10, at least when using BBR. I wrote to add some editorial text about why every 2 packets is a SHOULD, not a MUST. Chrome did not default enable the ACK strategy recommends until BBR was launched, because there was a performance regression with Cubic. I'd rather not assume everyone is using BBR or a BBR-like congestion controller. NAME BBR does fine with stretch ACKs. Cubic and Reno suffer in some cases, however.\nQUIC ACKs are larger! The smallest IPv4 QUIC ACK packet is 51 bytes. The ACK frame is 4 bytes, but is sent using a QUIC packet. For IPv4 this is: 20 (IP) + 8 (UDP) +3 (1+1+1 QUIC header ... assuming a 1 B CID) + 4 (minimum ACK frame size)+ 16 (crypto). Other types of frames may also be sent in the same packet, increasing this size. That size difference matters. The point is that QUIC ACKs are consuming more capacity in terms of bytes on the wire, and often also important: in more transmission opportunities at the physical layer. That makes QUIC perform more poorly than it needs to across some paths. The text \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" Is permissive to do something different - alas the effect of ordering of the processing depends on rate, timing, etc - The present text is not a recommendation to do something appropriate, which I think it needs to. If this were to describe that a receiver SHOULD reduce the ACK to data packet ratio to 1:10, then that would help. On low bandwidth links, then I agree that waiting for 10 packets would not be suitable, but then wouldn't the rule 4 ACKs/RTT still cover this case? 
(I have no problems believing every an ACK 20 packets can be slightly better than 10 in some cases, but then we have chosen IW at 10 and that relates to bustiness and responsiveness of the basic spec, so I think 1:10 makes more sense as the recommendation. If the CC and path information suggest 1:20 - perhaps as the rate increases, then a sender can \"tune\" as per the transport parameter proposal. That I STILL assert that this is a different issue from the expected default.)\nWould it help to give a recommendation like, if the congestion window is large, the ACK radio should be reduced? Not sure if we can easily give a threshold for the window but to align with the 4 ACKs per RTT rule we could say 40 or to be more conservation maybe 100?\nIf the receiver knew the cwnd... we could add that. The current proposal we made is here (along with some of the arguments about why QUIC and TCP are slightly different): draft-fairhurst-quic-ack-scaling\nSo the min TCP ack is 40 bytes vs QUIC at 50(0 byte CIDs are common, so I'll assume that), 25% larger in the minimum case. However, with a few SACK blocks, TCP ACKs are likely to be larger than QUIC ACKs, since SACK consumes 10 bytes for the first block and 8 for each additional block. By the time the second block is added, there's a good chance QUIC ACKs are smaller and it's almost certainly true with 3 SACK blocks(which I think is a typical number). So I'll go back to my previous statement that the real issue is that the network can't thin QUIC ACKs effectively.\nMay be \"yes\", may be \"no\" on a larger size after loss - since QUIC ACK ranges are often larger - but that depends on loss pattern, etc. In our case, we saw more bytes. 
But \"yes\" on me saying there is a need for QUIC to make good default decisions about the ACK rate, because QUIC becomes responsible for transport working across heterogeneous paths and we need to get good defaults rather than rely on \"ACK-Thinning\" and other network modification.\nDoes anybody have a gut feeling for how very low latency congestion control fares with sparse ACKs? I have experienced that DCTCP is more picky with ACK aggregation than e.g. Reno. I suspect that ACK-thinning can give the same issue as well. A too sparse ACK spacing can also give problems. Another aspect is how e.g. TCP Prague with paced chirps works with sparse ACKs? To me the RTT/4 rule makes sense, but I feel that the best alternative depends a lot on both the congestion control and the source material. Large file transfers are more trivial than sources that are occasionally idle.\nI think this isn't the correct question. This is about the default policy. The default policy is about what a receiver does by \"default\" - the current text uses a value of ACK every other packet, 1:2. TCP specified this when transmission rates were low, and TCP depended on individual ACKs to clock data (no pacing, IW=2), detect loss (no PTO), increase cwnd (per ACK packet), etc. In response, TCP causes ACK congestion and strange interactions with radio access. Performance over WiFi, DOCSIS etc. suffers unless the network tries to address this. I don't see why QUIC, which doesn't rely on any of these, needs to set a DEFAULT of 1:2. This is literally a few lines of code difference, and we could use 1:10. These CCs you mention may have different needs, and that needs to be handled by telling the remote that your CC has these specific needs - which I think is what Jana's extension is about.\nI am kind of the donkey in the middle... ACK thinning in networks should be discouraged. 
Standardization of ACK thinning mechanisms was proposed in 3GPP RAN2; this request was turned down mainly with reference to the fact that it can harm transport protocol performance and does not work safely with QUIC. Nothing stops UE (terminal and modem) vendors from implementing proprietary ACK-thinning, but I certainly hope that 5G UE vendors don't attempt to thin QUIC ACKs, as this is a slippery slope. High ACK rates are a 5G problem mainly for high band NR (= 5G at e.g. 28GHz). It is mainly a coverage issue, as UEs have a limited output power due to practical and regulatory requirements. At the same time one needs to address that end users don't only download movies today; even VoD streaming implies uplink traffic in the form of chunk requests. And then we have an increasing amount of people doing uplink streaming or uploading an infinite amount of pictures of food plates and cats :-) Operators and 5G vendors have to deal with this and solve the problem with deployment of 5G access in various frequency bands to get the best downlink and uplink performance. Reduced ACK rates are of course welcome, but I don't currently see a problem that the initial ACK ratio is 1:2, to increase to 1:? as the congestion window grows, bounded by e.g. an RTT/4 rule. But here I need to admit that I at least have not studied this in detail. Ergo... I am not convinced that it is necessary to hardwire the QUIC spec to 1:10, but I also cannot say that it should not be done.\nThis is a small enough change. However, there is still a bar for those, and the current level of discussion makes me believe we should close this with no action?\nJust so that it's easy to find, there is data and a continued discussion at URL\nFolks who care about a change here need to prioritize progressing this issue. 
We're close to being able to last-call the specs, and would like this resolved in time.\nWe\u2019ve completed a round of updated analysis of the proposed change to the QUIC spec, using quicly (changing to a 1:10 policy), and chromium (which currently does 1:10 by default). Our hope is for this change to the spec to actually make standard end-to-end QUIC work better than TCP over asymmetric paths! This is a short slide deck about the performance of the proposed change: URL There have been a number of other changes made by the QUIC ID editors relating to ACK processing, so we wanted to be sure these results were up to date. (These results do not consider additional ACKs after re-ordering - reflecting the latest text removing that.) We also really hope this will motivate a change in the practice of forcing QUIC fallback to TCP (with PEP/Thinning) over highly asymmetric paths. People may also be interested in work for L4S to evaluate the impact of ACK Thinning for TCP-Prague. Please do provide feedback, or any questions on this - as Lars notes, we think the time is running out for considering new changes, even relatively simple ones like this. Gorry P.S. First email: URL\nI believe that it is beneficial to use a 1:10 policy. Does this mean that a 1:2 policy will still be used in the initial phase (flow start), or do the timer mechanisms kick in to avoid ACKs being transmitted too sparsely in this case?\nYes, our proposal counted ACK-eliciting packets at the receiver (because the receiver does not in general know the congestion state, so you need a heuristic), and we finally chose the same method as current QUIC Chromium - i.e. for the first 100 packets we do not use ACK 1:10, changing only after 100 packets.\nOK, it sounds like a good idea to use 1:10 after 100 packets or so. 
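A minimal sketch of the receiver heuristic described here: ACK every 2nd ack-eliciting packet for roughly the first 100 packets of a connection, then every 10th. The class and field names are made up for illustration, and a real receiver would also arm a max_ack_delay timer as a backstop:

```python
class AckPolicy:
    """Illustrative 1:2-then-1:10 ACK decision, counting ack-eliciting packets."""

    def __init__(self, initial_ratio=2, steady_ratio=10, warmup_packets=100):
        self.initial_ratio = initial_ratio
        self.steady_ratio = steady_ratio
        self.warmup_packets = warmup_packets
        self.packets_seen = 0      # ack-eliciting packets received so far
        self.unacked = 0           # received since the last ACK we sent

    def on_ack_eliciting_packet(self):
        """Return True if an ACK should be sent now, False to keep waiting."""
        self.packets_seen += 1
        self.unacked += 1
        ratio = (self.initial_ratio if self.packets_seen <= self.warmup_packets
                 else self.steady_ratio)
        if self.unacked >= ratio:
            self.unacked = 0
            return True
        return False               # a max_ack_delay timer would fire eventually
```

Over the first 100 ack-eliciting packets this sends 50 ACKs; over the next 100 it sends 10, cutting steady-state ACK traffic by a factor of five.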
Just one potentially stupid question: Is there any special treatment necessary after an RTO?\n[This seems to be a forwarded comment from NAME Hi, Gorry Thank you for sharing your experiments. I'm delighted to see results showing that use of ack10 gives us comparable throughput to ack2. Recently, I have been looking into how ack-frequency affects the computational efficiency of the server [1]. Using the (almost) same setup, I have today run a benchmark comparing three ACK strategies: ack2, ack10, ack 1/8cwnd. ack2: 334Mbps ack10: 414Mbps ack 1/8cwnd: 487Mbps As can be seen, ack10 is better than ack2, but lags behind ack 1/8cwnd. That's because there are still frequent ACKs. They take a considerable amount of CPU cycles. If we want to see QUIC being as lightweight as TLS over TCP, it is critically important to reduce ACK frequency as much as possible. This is because the cost of processing ACKs in QUIC is much more expensive than in TCP, due to QUIC being a userspace transport. Based on this, if we are to change what we have in the base drafts, I'd much prefer adopting what we have in the proposed delayed ack extension [2], rather than just changing the default ack policy from ack2 to ack10. By adopting what we have in the delayed ack extension, we can expect better performance, and the endpoints would have the freedom to change the behavior when the default ack policy is deemed insufficient (regardless of the default being ack2 or ack10). [1] URL [2] URL\nThese results/article on computational efficiency are really interesting. Please see below. This is useful; we see here there's a clear benefit in using a larger ACK Ratio, also in terms of server load. We can see already the benefit at 1:10, and GSO helps as expected. We think that this is also really true. QUIC ACKs are also larger than TCP ACKs (see URL). I don't think the two approaches conflict. The difference between the two approaches is that the small change to the base specification to use an ACK Ratio 1:10 ensures the default return path is at least as good as TCP for common traffic (even with common TCP ACK Thinning). Without this change, the end user could experience significant differences in performance between TCP and between different QUIC versions over asymmetric paths. Whereas to me, the transport parameter extension looks like an exciting way to enable many more things, but there are also many ways this could usefully evolve based on CC, offload, pacing etc., and I think this will take much more to work out what is needed, safe and best ... and then to reach consensus. We still also need to specify a default ACK Ratio in this case as well. Best wishes, Gorry\nI'm greatly in favour of draft-iyengar-quic-delayed-ack-00 (BTW, where would you prefer me to give review comments about that?) 
I am much less keen on any default delayed ACK policy, whether 1:2 as now in quic-transport, or the 2-10/100 numbers proposed in draft-fairhurst-quic-ack-scaling-02. My concerns would melt away if the default policy was explicitly /implementation-dependent/ and the draft instead gave /guidance/ to implementers. The draft could still give the same 2-10/100 policy, but as /guidance/ for the current public Internet. The draft should also state the conditions under which this guidance was determined (e.g. Internet paths with BDP between x and y, or whatever). What's the difference? Stating a default policy implies this is a matter of interoperability. It's not. If we change to giving guidance, I'm sure most implementers will just use the guidance. However, implementers will be free to evolve their default as they gain experience. This reduces the possibility that other behaviours will evolve around any default like 2-10/100. For instance, network monitoring using it as a signature. Or congestion controls optimizing around it. In turn, this will reduce the chance of ossification around the first numbers we thought of. Senders can always use the ACK frequency frame to ask a receiver to change from its default. Then receivers will gradually learn what senders prefer them to do, and change their default accordingly. IMO, a continually improving default is important for short flows, so senders can avoid requesting a different ACK frequency right from the start.\nNeed to admit that I have not had too much time to follow the QUIC work lately. I too believe that draft-iyengar-quic-delayed-ack-00 is a good long term solution that allows an endpoint to set a desired ACK frequency that is not more than necessary. As experiments show, there are good incentives to keep the ACK frequency as low as possible in terms of client/server load and asymmetric link limitations. 
An ACK-FREQUENCY frame will make it possible to find the proper balance between the requirements given by the congestion control, the traffic type and the above given limitations. I am not too concerned if it takes an extra development/standards cycle to have that in place in the standard and in running code, as long as the QUIC community shows other SDOs that the challenge with QUIC ACKs is being addressed.\nNAME NAME could you please trim the quoted text when you reply by email?\n(I trimmed the comments, but forwarding inline replies to email messages doesn't work well.)\nGiven that this is an area where there is still obviously experimentation and refinement, NAME comment seems on point; baking a specific algorithm as requirements into the specifications doesn't seem like a good idea (although putting constraints on endpoints might make sense). Can we focus on general guidance rather than documenting a specific algorithm? What would that PR look like?\n+1 to NAME suggestion. If we want to make changes for v1, they need to happen now, and they must be in a form that can let us get consensus.\nTo be clear, when I said that having a default policy is not a matter of interoperability, that's only strictly true if QUIC supports something like draft-iyengar-quic-delayed-ack for the sender to ask the receiver to use a certain policy. Bob\nI've proposed more context and text around our current guidance in the draft, see .\nI believe that the text around asymmetric links looks fine; not really necessary to write up much more. What can be good to know is that for the 5G case with limited UL (uplink = modem->base station transmission), the pain that a flow with an unnecessarily high ACK rate causes is on the flow itself or other flows sharing the same access.\nThere are some different points of pain here, the default and the need to adapt to the endpoint: An ACK Ratio (AR 1:1) is the finest granularity of feedback. 
A QUIC AR of 1:1 would consume return path capacity proportional to about 6% of the forward traffic (assuming no loss, etc). This is more than TCP (about 3%). An ACK Ratio (AR 1:2): For a return path bottleneck, QUIC with an AR of 1:2 will still have more impact on sharing flows than TCP would with the same AR 1:2 (because QUIC ACKs are larger, and because networks do not currently thin QUIC ACKs). So QUIC needs to choose the default - is it 1:2, 1:4, 1:8, 1:10? A QUIC AR 1:10 results in an ACK traffic volume of about 1% of the forward traffic. To me, that's a small proportion, which is why I argued that 1:10 is a good recommendation. The default is important when: the endpoints have no knowledge of the asymmetry or the impact they have on a link's layer 2 loading, or the endpoints need to reduce the collateral damage to other IP flows sharing the return path capacity. Negotiation: The AR can be driven by the application or endpoint - e.g., a higher rate connection may well have a benefit from an AR of 1:10, 1:20, 1:100, etc - but such ARs are seldom chosen to reduce the impact on sharing a return bottleneck; more likely they are chosen to benefit the performance of an application or endpoint. That consideration is different, and likely depends on the application or endpoint - which to me is where the flexibility offered by a transport parameter is so important. draft-iyengar-quic-delayed-ack will still need guidance on the \"default\". I argued that the default policy impacts the congestion behaviour when there is appreciable asymmetry, because of the load this causes at layer 2 or the collateral damage (congestion impact on sharing flows). Sure, the choice of 1% is just a number I said was reasonable, but I've seldom encountered L2 technologies where less than 1% of forward traffic would be significantly better.\nNAME : I understand your position. While I agree that ack thinning happens with TCP, it is not what I expect to find on a common network path. 
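The percentages traded here can be sanity-checked with a simple ratio of ACK bytes to forward data bytes. The 51-byte ACK and 1500-byte data packet are assumptions for illustration; the figures in the message above also fold in per-packet transmission opportunities at layer 2, so this pure byte-count formula is only a lower bound:

```python
def ack_overhead(ack_ratio, ack_bytes=51, data_bytes=1500):
    """Fraction of forward traffic consumed by return-path ACKs when one
    ACK of ack_bytes is sent per ack_ratio full-sized data packets."""
    return ack_bytes / (ack_ratio * data_bytes)

# In pure bytes: ack_overhead(2) = 0.017 (1.7%), ack_overhead(10) = 0.0034;
# per-packet costs at layer 2 push the effective fractions higher.
```

Whatever the absolute figures, the trend matches the argument: moving from AR 1:2 to AR 1:10 cuts the return-path ACK volume by a factor of five.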
And as far as I know, the endpoints still implement the RFC 5681 recommendation of acking every other packet. What you are arguing is that with ack thinners in the network, TCP's behavior on those network paths is different. However, I do not believe we understand the performance effects of these middleboxes, that is, how they might reduce connection throughput when the asymmetry is not a problem. Importantly, I don't think we should be specifying as default what some middleboxes might be doing without fully understanding the consequences of doing this. Please bear in mind that Chrome's ack policy of 1:10 works with BBR, not with Cubic or Reno. I do not believe there is a magic number at the moment, and as NAME noted, even 1:10 is not a small enough ratio under some conditions. Given this, and given that we know 1:2 is used by TCP endpoints, it makes the most sense to specify that as the suggested guidance. The new text proposed in describes the tradeoff involved, and the rationale for using 1:2. I am strongly against doing anything substantially different than what we have in the draft now without overwhelming information supporting a different position.\nAgree with NAME: 1:2 seems a reasonable default, and there seems to be considerable interest in draft-iyengar-quic-delayed-ack, which has a better proposal for stacks that care.\nI think the new ACK text is improving the specification, but I'd like to be sure we have thought about this decision. Let me try to carefully respond on-list here, to see where we agree/disagree: I disagree; I think we shouldn't be perpetuating a myth that a sender receives a TCP ACK for every other packet. Even in 90's implementations, TCP stacks often generated stretch-ACKs for 3 segments. Since then, limited byte counting was introduced and significantly improved the sender's response to stretch ACKs, and that has been good for everyone. 
Senders still rely on byte-counting in TCP to handle Stretch-ACKs, and this need is not decreasing: new network cards reduce per-packet receive processing using Large Receive Offload (LRO) or Generic Receive Offload (GRO). I am saying many TCP senders actually do today see stretch-ACKs. There are papers that have shown significant presence of stretch ACKs, e.g., (H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 45(3):20\u201327, 2015.). A TCP Stretch-ACK of 3 or 4 was common in their datasets (about 10% of cases). In some networks, the proportion of Stretch-ACKs will be much much higher, since ACK Thinning is now widely deployed in WiFi drivers as well as cable, satellite and other access technologies - often reducing ACK size/rate by a factor of two. I am intrigued and would like to know more, in case I missed something? Typical algorithms I am aware of track a queue at the \"bottleneck\" and then make a decision based upon the contents of the queue. If there is no asymmetry, there isn't normally a queue and the algorithm won't hunt to discard an \"ACK\". But, maybe, there is some truth in being wary: Encrypted flows can also build return path queues (as does QUIC traffic). Without an ability to interpret the transport data (which nobody, including me, wants for QUIC), a bottleneck router is forced to either use another rule to drop packets from the queue (such as the smallest packets) or to allow a queue to grow, limiting forward direction throughput. I would say neither of these outcomes are great, because the router is trying to fix the transport's asymmetry. However, if QUIC causes this problem, I suggest some method will proliferate as QUIC traffic increases. I didn't find anything in the QUIC spec that was a major concern. It looked to me like QUIC had learned from TCP how to handle stretch-ACKs. 
After that, we changed quicly to use 1:10 with Reno, and things worked well. Didn't NAME also use Reno? Sure, the design of TCP recognised that AR 1:1 generated too much traffic; AR 1:2 for TCP resulted in a Forward:Return traffic ratio of ~1.5%, and QUIC increases this to ~3%. However, as network link speeds increased, this proved too low for many network links and so TCP ACK thinning came into being. This often reduces the TCP AR to around 1:4, and will do this without bias, whatever the outcome is.\nNAME Just to clarify, in our experiment, we reduced the ACK frequency from the default (2) to after leaving slow start, then updated it every 4 PTO. I think that would work fine on a high-bandwidth, stable network. But I am not sure if it would work as well as ack2 on all network conditions. Anyways, I'd explain our experiment as an optimization of ack 2, as it was a reduction of the ack rate after leaving slow start. I think that was a sensible choice, because IIUC having frequent acks during slow start is essential. Assuming that we'd prefer using ack 2 during slow start, the question is if we need to hard-code in the base draft how to optimize after leaving slow start. While I agree that continuing to use ack 2 after slow start is concerning for certain networks, I am reluctant to spend time looking for such a fixed set of values across diverse networks (including much slower links or lossy ones). Rather, my preference goes to shipping the base-draft as-is, then moving on to finalize the ack-frequency frame. I think we can assume most if not all of the popular clients will support the ack-frequency frame. Then, it becomes less of a question what the default in the base-draft is.\nThanks kazuho, 1/8 or 1/4 cwnd seems reasonable, as does using Ack 2 for the first part of slow start ... that is close to what we proposed. And yes there will anyway be cases where any default does not work. 
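A hedged sketch of the "ack per fraction of cwnd" policy benchmarked above: after slow start the sender periodically (every 4 PTOs in the experiment) recomputes the ratio it would request from the peer via an ack-frequency-style frame. The function and parameter names here are invented for illustration, not taken from any spec:

```python
def desired_ack_ratio(cwnd_bytes, mss, in_slow_start, fraction=8, floor=2):
    """Packets-per-ACK a sender might request from its peer: frequent ACKs
    while in slow start, one ACK per 1/fraction of the congestion window
    afterwards, never sparser than an ACK every `floor` packets... er,
    never more frequent than every `floor` packets."""
    if in_slow_start:
        return floor                        # keep ACKs frequent while growing cwnd
    cwnd_packets = max(1, cwnd_bytes // mss)
    return max(floor, cwnd_packets // fraction)
```

For example, with a 100-packet congestion window this asks for one ACK per 12 packets, while a tiny post-slow-start window falls back to the 1:2 floor.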
If QUIC decides to retain Ack 2, can we at least capture that Ack 2 with QUIC doesn't have the same return path performance as Ack 2 with TCP - so that people know that less frequent ACKs are something to think about from the start? I'm doing a talk in PANRG this week, and have decided to talk through this topic (I was already going to talk about things related to return paths). That'll give people a view of some data - and a little thought about the implications this has for QUIC, and whether there is an incentive for network devices to thin QUIC to restore performance.\nNAME : I'd be in support of saying that ack thinning happens for TCP and citing those studies, and that in the absence of that, there might be throughput consequences for QUIC. I think we've said exactly that in other words in , but if we want to say more there, that's fine with me. That would make this an editorial issue, but I'm entirely happy to do more words there. I am against changing the recommendation here for another arbitrary recommendation. I think the right approach is to allow endpoints to specify this during a connection, which is where the extension is helpful. As NAME notes, our hope is that the extension gets widely deployed. If you want to make the argument for the extension to become part of the base draft, that's a different conversation, but that's not the one we're having here.\nAttempting to summarise the current status of this issue as a chair: there appears to have been a fair amount of discussion about making a design change, with representatives both in favour and against. It appears there is no omnipotent default policy; choosing an optimal one depends on many factors. The editors have landed an editorial change that attempts to better describe the rationale behind QUIC's current design and highlights some of the factors. There may be further editorial improvements, but they do not need to block the design issue. 
The proposal therefore is to close this issue with no design change. We had prepared some slides for the Vancouver IETF to discuss QUIC's ACKs, and how to make QUIC work better than TCP in common asymmetric paths. This is a short slide deck that summarises what we have learned about ACKs in QUIC: URL It compares the ACK traffic of TCP with that of QUIC from the point of view of a path where QUIC's return traffic is somehow constrained or could impact other user traffic. We think that ACKs are a concern for various shared capacity return links (e.g., radio-based methods in WiFi or mobile cellular, where ACKs consume spectrum/radio resource or battery; a shared capacity satellite broadband return link; or the uplink of a cable system). For TCP, such systems commonly involve a PEP or an ACK Filtering mechanism (which clearly isn't wanted or even possible for QUIC). The baseline QUIC performance (as per the current spec) suffers over these paths. So our first question is: Do other people think it is important that the default specified for QUIC works well with asymmetric paths? Gorry", "new_text": "NOT include delays that it does not control when populating the Ack Delay field in an ACK frame. 13.2.7. ACK frames MUST only be carried in a packet that has the same packet number space as the packet being ACKed; see packet-protected. For"}
{"id": "q-en-quicwg-base-drafts-e4904e5447e6f4061d24283fef3e8c2f396a361f0023a218b48fa2c6c1ff7a06", "old_text": "Note that the same limitation applies to other data sent by the server protected by the 1-RTT keys. 13.3. QUIC packets that are determined to be lost are not retransmitted", "comments": "This PR adds explicit context around the choice of ACK frequency in the draft. It also takes in Bob Briscoe's suggestion in , of calling it guidance in light of the best information we have. Also does some editorial cleanup of text blocks that were not in the right sections.\nThanks NAME NAME -- comments incorporated.\nAs currently specified, the default QUIC ACK policy models TCP\u2019s ACK policy, but consumes significantly more return path capacity. This can impact resource usage of the return channel, can constrain forward throughput, and can impact the performance of other sessions sharing an uplink (draft-fairhurst-quic-ack-scaling). This was not discussed in Zurich - where discussion moved instead to the separate topic of draft-iyengar-quic-delayed-ack. In many current network segments (e.g., cable networks, mobile cellular, WiFi and other radio tech), the effects of asymmetry are presently mitigated by network devices RFC3449 that rely upon observing TCP headers. This results in ossification problems, but does practically have benefit. It reduces the TCP ACK rate over the constrained link, it reduces consumption of \"radio\" transmission opportunities, and reduces traffic volume from ACKs. These mitigations are neither possible nor desirable for QUIC. However, the default performance of QUIC is therefore much worse than for TCP. In QUIC, the default ACK Ratio has to be set by the endpoints implementing the protocol because the sender and receiver have no information about the \"cost\" of sending an ACK using a link along an asymmetric path. These endpoints can also choose a non-default value to enable a new CC, scale to higher rates, etc (e.g. 
draft-iyengar-quic-delayed-ack). We have shown there can be significant benefit from an ACK Ratio of 1:10 and this is expected to also have benefit for higher levels of asymmetry. The proposed change from 1:2 to 1:10 is predicated on QUIC's design decisions: starting a connection with an Initial Window of 10 is acceptable, QUIC uses PTO as the loss detection method (a predicate to issue ), and QUIC has some form of pacing at the sender, at least to mitigate IW bursts.\nEven though we've avoided straying this far from the TCP spec, I think allowing this would be good for the internet. It should probably be documented for TCP, too. How does ack stretching of this nature work for BBR bandwidth estimation? It's harder to use the ack pacing, no?\nI don't believe QUIC ACKs are significantly larger. In fact, I believe the ACK frame itself is typically smaller than TCP ACKs, but the savings there are used on QUIC's 16-byte auth hash and likely a non-zero-length CID. Do you have a size analysis written up somewhere? I think this is the core issue. It's not that QUIC ACKs are really consuming significantly more capacity in terms of bytes on the wire, it's that middleboxes can't decide to thin the ACKs, because they don't know which packets are ACK-only packets. Did you implement the optimization already described in the transport draft?: \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" NAME did and I believe found it hugely reduced the number of ACKs, sometimes too much. As I've said before, I would really prefer not to specify an approach in QUIC transport that is a heuristic like does. In particular, on lower bandwidth networks, 10 packets may be the entire congestion window and is too large a value. 
On higher bandwidth networks, every 20 packets has shown to be slightly better than 10, at least when using BBR. I wrote to add some editorial text about why every 2 packets is a SHOULD, not a MUST. Chrome did not default enable the ACK strategy recommends until BBR was launched, because there was a performance regression with Cubic. I'd rather not assume everyone is using BBR or a BBR-like congestion controller. NAME BBR does fine with stretch ACKs. Cubic and Reno suffer in some cases, however.\nQUIC ACKs are larger! The smallest IPv4 QUIC ACK packet is 51 bytes. The ACK frame is 4 bytes, but is sent using a QUIC packet. For IPv4 this is: 20 (IP) + 8 (UDP) +3 (1+1+1 QUIC header ... assuming a 1 B CID) + 4 (minimum ACK frame size)+ 16 (crypto). Other types of frames may also be sent in the same packet, increasing this size. That size difference matters. The point is that QUIC ACKs are consuming more capacity in terms of bytes on the wire, and often also important: in more transmission opportunities at the physical layer. That makes QUIC perform more poorly than it needs to across some paths. The text \"As an optimization, a receiver MAY process multiple packets before sending any ACK frames in response. In this case the receiver can determine whether an immediate or delayed acknowledgement should be generated after processing incoming packets.\" Is permissive to do something different - alas the effect of ordering of the processing depends on rate, timing, etc - The present text is not a recommendation to do something appropriate, which I think it needs to. If this were to describe that a receiver SHOULD reduce the ACK to data packet ratio to 1:10, then that would help. On low bandwidth links, then I agree that waiting for 10 packets would not be suitable, but then wouldn't the rule 4 ACKs/RTT still cover this case? 
(I have no problems believing every an ACK 20 packets can be slightly better than 10 in some cases, but then we have chosen IW at 10 and that relates to bustiness and responsiveness of the basic spec, so I think 1:10 makes more sense as the recommendation. If the CC and path information suggest 1:20 - perhaps as the rate increases, then a sender can \"tune\" as per the transport parameter proposal. That I STILL assert that this is a different issue from the expected default.)\nWould it help to give a recommendation like, if the congestion window is large, the ACK radio should be reduced? Not sure if we can easily give a threshold for the window but to align with the 4 ACKs per RTT rule we could say 40 or to be more conservation maybe 100?\nIf the receiver knew the cwnd... we could add that. The current proposal we made is here (along with some of the arguments about why QUIC and TCP are slightly different): draft-fairhurst-quic-ack-scaling\nSo the min TCP ack is 40 bytes vs QUIC at 50(0 byte CIDs are common, so I'll assume that), 25% larger in the minimum case. However, with a few SACK blocks, TCP ACKs are likely to be larger than QUIC ACKs, since SACK consumes 10 bytes for the first block and 8 for each additional block. By the time the second block is added, there's a good chance QUIC ACKs are smaller and it's almost certainly true with 3 SACK blocks(which I think is a typical number). So I'll go back to my previous statement that the real issue is that the network can't thin QUIC ACKs effectively.\nMay be \"yes\", may be \"no\" on a larger size after loss - since QUIC ACK ranges are often larger - but that depends on loss pattern, etc. In our case, we saw more bytes. 
But \"yes\" on me saying there is a need for QUIC to make good default decisions about the ACK rate, because QUIC becomes responsible for transport working across heterogeneous paths and we need to get good defaults rather than rely on \"ACK-Thining\" and other network modification.\nAnybody has a gut feeling how very low latency congestion control fares with sparse ACKs?. I have experienced that DCTCP is more picky with ACK aggregation than e.g Reno. I suspect that ACK-thinning can have the give the issue as well. A too sparse ACK spacing can also give problems. Another aspect is how e.g. TCP Prague with paced chirps work with sparse ACKs ?. To me the RTT/4 rule makes sense but I feel that the best alternative depends a lot on both the congestion control and the source material. Large file transfers are more trivial than sources that are occasionally idle.\nI think this isn't the correct question. This is about the default policy. The default policy is about what a receiver does by \"default\" - the current text uses a value of ACK every other packet, 1:2. TCP specified this when transmission rates were low, and TCP depended on individual ACKs to clock data (no pacing, IW=2), detect loss (no PTO), increase cwnd (per ACK packet), etc. In response, TCP causes ACK congestion and strange interactions with radio access. Performance over WiFi, DOCSIS etc suffers unless the network tries to address this. I don't see why QUIC, which doesn't rely on any of these, needs to set a DEFAULT of 1:2. This is literally a few lines of code difference, and we could use 1:10. These CC's you mention may have different needs and that needs to be handled by telling the remote that your CC has these specific needs - which I think is what Jana's extension is about.\nI am kind of the donkey in the middle... ACK thinning in networks should be discouraged. 
Standardization of ACK thinning mechanisms was proposed in 3GPP RAN2, but this request was turned down mainly with reference to the fact that it can harm transport protocol performance and does not work safely with QUIC. Nothing stops UE (terminal and modem) vendors from implementing proprietary ACK-thinning, but I certainly hope that 5G UE vendors don't attempt to thin QUIC-ACKs as this is a slippery slope. High ACK rates are a 5G problem mainly for high band NR (=5G at e.g. 28GHz). It is mainly a coverage issue as UEs have a limited output power due to practical and regulatory requirements. At the same time one needs to recognize that end users don't only download movies today, and even VoD streaming implies uplink traffic in the form of chunk requests. And then we have an increasing number of people doing uplink streaming or uploading an infinite number of pictures of food plates and cats :-) Operators and 5G vendors have to deal with this and solve the problem with deployment of 5G access in various frequency bands to get the best downlink and uplink performance. Reduced ACK rates are of course welcome, but I don't currently see a problem with the initial ACK ratio being 1:2, to increase to 1:? as the congestion window grows, bounded by e.g. an RTT/4 rule. But here I need to admit that I at least have not studied this in detail. Ergo... I am not convinced that it is necessary to hardwire the QUIC spec to 1:10, but I also cannot say that it should not be done.\nThis is a small enough change. However, there is still a bar for those, and the current level of discussion makes me believe we should close this with no action?\nJust so that it's easy to find, there is data and a continued discussion at URL\nFolks who care about a change here need to prioritize progressing this issue. 
We're close to being able to last-call the specs, and would like this resolved in time.\nWe\u2019ve completed a round of updated analysis of the proposed change to the QUIC spec, using quicly (changing to a 1:10 policy), and chromium (which currently does 1:10 by default). Our hope is that this change to the spec will actually make standard end-to-end QUIC work better than TCP over asymmetric paths! This is a short slide deck about the performance of the proposed change: URL There have been a number of other changes made by the QUIC ID editors relating to ACK processing, so we wanted to be sure these results were up to date. (These results do not consider additional ACKs after re-ordering - reflecting the latest text removing that.) We also really hope this will motivate a change to the practice of forcing QUIC fallback to TCP (with PEP/Thinning) over highly asymmetric paths. People may also be interested in work for L4S to evaluate the impact of ACK Thinning for TCP-Prague. Please do provide feedback, or any questions on this - as Lars notes, we think the time is running out for considering new changes, even relatively simple ones like this. Gorry P.S. First email: URL\nI believe that it is beneficial to use a 1:10 policy. I understand that this means that a 1:2 policy would still be used in the initial phase (flow start) - or do the timer mechanisms kick in to avoid ACKs being transmitted too sparsely in this case?\nYes, our proposal counted ACK-eliciting packets at the receiver (because the receiver does not in general know the congestion state, so you need a heuristic), and we finally chose the same method as current QUIC Chromium - i.e. for the first 100 packets we do not use ACK 1:10, changing only after 100 packets.\nOK, it sounds like a good idea to use 1:10 after 100 packets or so. 
Just one potentially stupid question: Is there any special treatment necessary after an RTO?\n[This seems to be a forwarded comment from NAME Hi, Gorry Thank you for sharing your experiments. I'm delighted to see results showing that use of ack10 gives us comparable throughput to ack2. Recently, I have been looking into how ack-frequency affects the computational efficiency of the server [1]. Using the (almost) same setup, I have today run a benchmark comparing three ACK strategies: ack2, ack10, ack 1/8cwnd. ack2: 334Mbps ack10: 414Mbps ack 1/8cwnd: 487Mbps As can be seen, ack10 is better than ack2, but lags behind ack1/8cwnd. That's because there are still frequent ACKs. They take a considerable amount of CPU cycles. If we want to see QUIC being as lightweight as TLS over TCP, it is critically important to reduce ACK frequency as much as possible. This is because the cost of processing ACKs in QUIC is much more expensive than in TCP, due to QUIC being a userspace transport. Based on this, if we are to change what we have in the base drafts, I'd much prefer adopting what we have in the proposed delayed ack extension [2], rather than just changing the default ack policy from ack2 to ack10. By adopting what we have in the delayed ack extension, we can expect better performance, and the endpoints would have the freedom to change the behavior when the default ack policy is deemed insufficient (regardless of the default being ack2 or ack10). [1] URL [2] URL 
\nThese results/article on computational efficiency are really interesting. Please see below. This is useful; we see here there's a clear benefit in using a larger ACK Ratio, also in terms of server load. We can already see the benefit at 1:10, and GSO helps as expected. We think that this is also really true. QUIC ACKs are also larger than TCP ACKs (see URL). I don't think the two approaches conflict. The difference between the two approaches is that the small change to the base specification to use an ACK Ratio 1:10 ensures the default return path is at least as good as TCP for common traffic (even with common TCP ACK Thinning). Without this change, the end user could experience significant differences in performance between TCP and different QUIC versions over asymmetric paths. Whereas to me, the transport parameter extension looks like an exciting way to enable many more things, but there are also many ways this could usefully evolve based on CC, offload, pacing etc., and I think this will take much more to work out what is needed, safe and best ... than to reach consensus. We still also need to specify a default ACK Ratio in this case as well. Best wishes, Gorry\nI'm greatly in favour of draft-iyengar-quic-delayed-ack-00 (BTW, where would you prefer me to give review comments about that?) 
I am much less keen on any default delayed ACK policy, whether 1:2 as now in quic-transport, or the 2-10/100 numbers proposed in draft-fairhurst-quic-ack-scaling-02. My concerns would melt away if the default policy was explicitly /implementation-dependent/ and the draft instead gave /guidance/ to implementers. The draft could still give the same 2-10/100 policy, but as /guidance/ for the current public Internet. The draft should also state the conditions under which this guidance was determined (e.g. Internet paths with BDP between x and y, or whatever). What's the difference? Stating a default policy implies this is a matter of interoperability. It's not. If we change to giving guidance, I'm sure most implementers will just use the guidance. However, implementers will be free to evolve their default as they gain experience. This reduces the possibility that other behaviours will evolve around any default like 2-10/100. For instance, network monitoring using it as a signature. Or congestion controls optimizing around it. In turn, this will reduce the chance of ossification around the first numbers we thought of. Senders can always use the ACK frequency frame to ask a receiver to change from its default. Then receivers will gradually learn what senders prefer them to do, and change their default accordingly. IMO, a continually improving default is important for short flows, so senders can avoid requesting a different ACK frequency right from the start.\nNeed to admit that I have not had too much time to follow the QUIC work lately. I too believe that draft-iyengar-quic-delayed-ack-00 is a good long term solution that allows endpoints to set a desired ACK frequency that is not more than necessary. As experiments show, there are good incentives to keep the ACK frequency as low as possible in terms of client/server load and asymmetric link limitations. 
An ACK-FREQUENCY frame will allow endpoints to find the proper balance between the requirements given by the congestion control, the traffic type and the above given limitations. I am not too concerned if it takes an extra development/standards cycle to have that in place in the standard and in running code, as long as the QUIC community shows other SDOs that the challenge with QUIC ACKs is being addressed.\nNAME NAME could you please trim the quoted text when you reply by email?\n(I trimmed the comments, but forwarding inline replies to email messages doesn't work well.)\nGiven that this is an area where there is still obviously experimentation and refinement, NAME comment seems on point; baking a specific algorithm as requirements into the specifications doesn't seem like a good idea (although putting constraints on endpoints might make sense). Can we focus on general guidance rather than documenting a specific algorithm? What would that PR look like?\n+1 to NAME suggestion. If we want to make changes for v1, they need to happen now, and they must be in a form that can let us get consensus.\nTo be clear, when I said that having a default policy is not a matter of interoperability, that's only strictly true if QUIC supports something like draft-iyengar-quic-delayed-ack for the sender to ask the receiver to use a certain policy. Bob\nI've proposed more context and text around our current guidance in the draft, see .\nI believe that the text around asymmetric links looks fine; it is not really necessary to write up much more. What can be good to know is that for the 5G case with limited UL (uplink = modem->base station transmission), the pain that a flow with an unnecessarily high ACK rate causes is on the flow itself or other flows sharing the same access.\nThere are some different points of pain here, the default and the need to adapt to the endpoint: An ACK Ratio (AR 1:1): This is the finest granularity of feedback. 
A QUIC AR of 1:1 would consume return path capacity proportional to about 6% of the forward traffic (assuming no loss, etc). This is more than TCP (about 3%). An ACK Ratio (AR 1:2): For a return path bottleneck, QUIC with an AR of 1:2 will still have more impact on sharing flows than TCP would with the same AR 1:2 (because QUIC ACKs are larger, and because networks do not currently thin QUIC ACKs). So QUIC needs to choose the default - is it 1:2, 1:4, 1:8, 1:10? A QUIC AR 1:10 results in an ACK traffic volume of about 1% of the forward traffic. To me, that's a small proportion, which is why I argued that 1:10 is a good recommendation. The default is important when: the endpoints have no knowledge of the asymmetry or the impact they have on a link's layer 2 loading, or the endpoints need to reduce the collateral damage to other IP flows sharing the return path capacity. Negotiation: The AR can be driven by the application or endpoint - e.g., a higher rate connection may well have a benefit from an AR of 1:10, 1:20, 1:100, etc - but such ARs are seldom chosen to reduce the impact on sharing a return bottleneck; more likely they are chosen to benefit the performance of an application or endpoint. That consideration is different, and likely depends on the application or endpoint - which to me is where the flexibility offered by a transport parameter is so important. draft-iyengar-quic-delayed-ack will still need guidance on the \"default\". I argued that the default policy impacts the congestion behaviour when there is appreciable asymmetry, through the load this causes at layer 2 or the collateral damage (congestion impact on sharing flows). Sure, the choice of 1% is just a number I said was reasonable, but I've seldom encountered L2 technologies where less than 1% of forward traffic would be significantly better.\nNAME : I understand your position. While I agree that ack thinning happens with TCP, it is not what I expect to find on a common network path. 
And as far as I know, the endpoints still implement the RFC 5681 recommendation of acking every other packet. What you are arguing is that with ack thinners in the network, TCP's behavior on those network paths is different. However, I do not believe we understand the performance effects of these middleboxes, that is, how they might reduce connection throughput when the asymmetry is not a problem. Importantly, I don't think we should be specifying as default what some middleboxes might be doing without fully understanding the consequences of doing this. Please bear in mind that Chrome's ack policy of 1:10 works with BBR, not with Cubic or Reno. I do not believe there is a magic number at the moment, and as NAME noted, even 1:10 is not a small enough ratio under some conditions. Given this, and given that we know 1:2 is used by TCP endpoints, it makes the most sense to specify that as the suggested guidance. The new text proposed in describes the tradeoff involved, and the rationale for using 1:2. I am strongly against doing anything substantially different than what we have in the draft now without overwhelming information supporting a different position.\nAgree with NAME 1:2 seems a reasonable default, and there seems to be considerable interest in draft-iyengar-quic-delayed-ack, which has a better proposal for stacks that care.\nI think the new ACK text is improving the specification, but I'd like to be sure we have thought about this decision. Let me try to carefully respond on-list here, to see where we agree/disagree: I disagree; I think we shouldn't be perpetuating a myth that a sender receives a TCP ACK for every other packet. Even in the 90's, TCP stack implementations often generated stretch-ACKs for 3 segments. Since then, limited byte counting was introduced and significantly improved the sender's response to stretch ACKs, and that has been good for everyone. 
Senders still rely on byte-counting in TCP to handle Stretch-ACKs, and this need is not decreasing: new network cards reduce per-packet receive processing using Large Receive Offload (LRO) or Generic Receive Offload (GRO). I am saying many TCP senders actually do today see stretch-ACKs. There are papers that have shown significant presence of stretch ACKs, e.g., (H. Ding and M. Rabinovich. TCP stretch acknowledgements and timestamps: findings and implications for passive RTT measurement. ACM SIGCOMM CCR, 45(3):20\u201327, 2015.). A TCP Stretch-ACK of 3 or 4 was common in their datasets (about 10% of cases). In some networks, the proportion of Stretch-ACKs will be much much higher, since ACK Thinning is now widely deployed in WiFi drivers as well as cable, satellite and other access technologies - often reducing ACK size/rate by a factor of two. I am intrigued and would like to know more, in case I missed something? Typical algorithms I am aware of track a queue at the \"bottleneck\" and then make a decision based upon the contents of the queue. If there is no asymmetry, there isn't normally a queue and the algorithm won't hunt to discard an \"ACK\". But, maybe, there is some truth in being wary: Encrypted flows can also build return path queues (as does QUIC traffic). Without an ability to interpret the transport data (which nobody, including me, wants for QUIC), a bottleneck router is forced to either use another rule to drop packets from the queue (such as the smallest packets) or to allow a queue to grow, limiting forward direction throughput. I would say neither of these outcomes is great, because the router is trying to fix the transport's asymmetry. However, if QUIC causes this problem, I suggest some method will proliferate as QUIC traffic increases. I didn't find anything in the QUIC spec that was a major concern. It looked to me like QUIC had learned from TCP how to handle stretch-ACKs. 
After that, we changed quicly to use 1:10 with Reno, and things worked well. Didn't NAME also use Reno? Sure, the design of TCP recognised that AR 1:1 generated too much traffic; AR 1:2 for TCP resulted in a Forward:Return traffic ratio of ~1.5%, and QUIC increases this to ~3%. However, as network link speeds increased, this proved too low for many network links and so TCP ACK thinning came into being. This often reduces the TCP AR to around 1:4 (and will do this without bias, whatever the outcome is).\nNAME Just to clarify, in our experiment, we reduced the ACK frequency from the default (2) after leaving slow start, then updated it every 4 PTO. I think that would work fine on a high-bandwidth, stable network. But I am not sure if it would work as well as ack2 on all network conditions. Anyways, I'd explain our experiment as an optimization of ack 2, as it was a reduction of the ack rate after leaving slow start. I think that was a sensible choice, because IIUC having frequent acks during slow start is essential. Assuming that we'd prefer using ack 2 during slow start, the question is if we need to hard-code in the base draft how to optimize after leaving slow start. While I agree that continuing to use ack 2 after slow start is concerning for certain networks, I am reluctant to spend time on looking for such a fixed set of values across diverse networks (including much slower links or lossy ones). Rather, my preference goes to shipping the base-draft as-is, then moving on to finalize the ack-frequency frame. I think we can assume most if not all of the popular clients will support the ack-frequency frame. Then, it becomes less of a question what the default in the base-draft is.\nThanks kazuho, 1/8 or 1/4 cwnd seems reasonable, as does using Ack 2 for the first part of slow start ... that's close to what we proposed. And yes, there will anyway be cases where any default does not work. 
If QUIC decides to retain Ack 2, can we at least capture that Ack 2 with QUIC doesn't have the same return path performance as Ack 2 with TCP - so that people know that less frequent ACKs are something to think about from the start? I'm doing a talk in PANRG this week, and have decided to talk through this topic (I was already going to talk about things related to return paths). That'll give people a view of some data - and a little thought about the implications this has for QUIC, and whether there is an incentive for network devices to thin QUIC to restore performance.\nNAME : I'd be in support of saying that ack thinning happens for TCP and citing those studies, and that in the absence of that, there might be throughput consequences for QUIC. I think we've said exactly that in other words in , but if we want to say more there, that's fine with me. That would make this an editorial issue, but I'm entirely happy to do more words there. I am against changing the recommendation here for another arbitrary recommendation. I think the right approach is to allow endpoints to specify this during a connection, which is where the extension is helpful. As NAME notes, our hope is that the extension gets widely deployed. If you want to make the argument for the extension to become part of the base draft, that's a different conversation, but that's not the one we're having here.\nAttempting to summarise the current status of this issue as a chair: there appears to have been a fair amount of discussion about making a design change, with representatives both in favour and against. It appears there is no omnipotent default policy; choosing an optimal one depends on many factors. The editors have landed an editorial change that attempts to better describe the rationale behind QUIC's current design and highlights some of the factors. There may be further editorial improvements, but they do not need to block the design issue. 
The proposal therefore is to close this issue with no design change. We had prepared some slides for the Vancouver IETF to discuss QUIC's ACKs, and how to make QUIC work better than TCP in common asymmetric paths. This is a short slide deck that summarises what we have learned about ACKs in QUIC: URL It compares the ACK traffic of TCP with that of QUIC from the point of view of a path where QUIC's return traffic is somehow constrained or could impact other user traffic. We think that ACKs are a concern for various shared capacity return links (e.g., radio-based methods in WiFi or mobile cellular, where ACKs consume spectrum/radio resource or battery; a shared capacity satellite broadband return link or the uplink of a cable system). For TCP, such systems commonly involve a PEP or an ACK Filtering mechanism (which clearly isn't wanted or even possible for QUIC). The baseline QUIC performance (as per current spec) suffers over these paths. So our first question is: Do other people think it is important that the default specified for QUIC works well with asymmetric paths? Gorry", "new_text": "Note that the same limitation applies to other data sent by the server protected by the 1-RTT keys. 13.2.8. Packets containing PADDING frames are considered to be in flight for congestion control purposes QUIC-RECOVERY. Packets containing only PADDING frames therefore consume congestion window but do not generate acknowledgments that will open the congestion window. To avoid a deadlock, a sender SHOULD ensure that other frames are sent periodically in addition to PADDING frames to elicit acknowledgments from the receiver. 13.3. QUIC packets that are determined to be lost are not retransmitted"}
{"id": "q-en-quicwg-base-drafts-c67897974a9ff5b36b3c476fa35d2614a4008dca706fcfaa08445fe829ca6bb2", "old_text": "opening of either the send or receive side causes the stream to open in both directions. An endpoint MUST open streams of the same type in increasing order of stream ID. These states are largely informative. This document uses stream states to describe rules for when and how different types of frames can be sent and the reactions that are expected when", "comments": "The text in 2.1 is more complete, so we don't need this.\nWhen I read 3.0 I was a bit confused, because when the discussion of streams takes place in section 2.1, the requirement on stream ID ordering is already mentioned, but the reminder of what the types are is fresh in the mind. Perhaps this should be revised.", "new_text": "opening of either the send or receive side causes the stream to open in both directions. These states are largely informative. This document uses stream states to describe rules for when and how different types of frames can be sent and the reactions that are expected when"}
{"id": "q-en-quicwg-base-drafts-187639b153ea0de606ec710b58e172435b859ce1c94354bae259d4c56ae66928", "old_text": "a PATH_CHALLENGE frame is not adequate validation, since the acknowledgment can be spoofed by a malicious peer. Note that receipt on a different local address does not result in path validation failure, as it might be a result of a forwarded packet (see off-path-forward) or misrouting. It is possible that a valid PATH_RESPONSE might be received in the future. 8.6. Path validation only fails when the endpoint attempting to validate", "comments": "In early versions of path validation, there was a requirement that PATH_CHALLENGE be sent on the same path. This is, for various reasons, either unnecessary or harmful, so we no longer have that requirement. But there was some residual text that assumed that property. This removes a paragraph citing symmetry directly, and rewords text that assumes return reachability - a property that QUIC no longer provides mechanisms for.\nIn , we removed the requirement that PATHRESPONSE be sent/received on the same path that PATHCHALLENGE was sent. However, there is still text in the document that relies on this assumption. In 8.5, it explains that receipt of the PATHRESPONSE on another interface doesn't cause validation to fail, because \"It is possible that a valid PATHRESPONSE might be received in the future.\" But receipt of the PATHRESPONSE, over any interface is valid, and causes the validation to succeed. In 9.2, it suggests using path validation to demonstrate return reachability, but since PATHRESPONSE need not be sent on the same path, path validation doesn't demonstrate that.\nNote that sending the response on the old path can lead to failures in \"break before make\" scenarios. Shall we mention that?", "new_text": "a PATH_CHALLENGE frame is not adequate validation, since the acknowledgment can be spoofed by a malicious peer. 8.6. Path validation only fails when the endpoint attempting to validate"}
{"id": "q-en-quicwg-base-drafts-187639b153ea0de606ec710b58e172435b859ce1c94354bae259d4c56ae66928", "old_text": "Receiving acknowledgments for data sent on the new path serves as proof of the peer's reachability from the new address. Note that since acknowledgments may be received on any path, return reachability on the new path is not established. To establish return reachability on the new path, an endpoint MAY concurrently initiate path validation migrate-validate on the new path or it MAY choose to wait for the peer to send the next non-probing frame to its new address. 9.3.", "comments": "In early versions of path validation, there was a requirement that PATH_CHALLENGE be sent on the same path. This is, for various reasons, either unnecessary or harmful, so we no longer have that requirement. But there was some residual text that assumed that property. This removes a paragraph citing symmetry directly, and rewords text that assumes return reachability - a property that QUIC no longer provides mechanisms for.\nIn , we removed the requirement that PATHRESPONSE be sent/received on the same path that PATHCHALLENGE was sent. However, there is still text in the document that relies on this assumption. In 8.5, it explains that receipt of the PATHRESPONSE on another interface doesn't cause validation to fail, because \"It is possible that a valid PATHRESPONSE might be received in the future.\" But receipt of the PATHRESPONSE, over any interface is valid, and causes the validation to succeed. In 9.2, it suggests using path validation to demonstrate return reachability, but since PATHRESPONSE need not be sent on the same path, path validation doesn't demonstrate that.\nNote that sending the response on the old path can lead to failures in \"break before make\" scenarios. Shall we mention that?", "new_text": "Receiving acknowledgments for data sent on the new path serves as proof of the peer's reachability from the new address. 
Note that since acknowledgments may be received on any path, return reachability on the new path is not established. No method is provided to establish return reachability, as endpoints independently determine reachability on each direction of a path. To establish reachability on the new path, an endpoint MAY concurrently initiate path validation migrate-validate on the new path. An endpoint MAY defer path validation until after a peer sends the next non-probing frame to its new address. 9.3."}
{"id": "q-en-quicwg-base-drafts-01935d39a9f33491832e8e566bba4565c3f4dcda2adb89705ce0006da5fb67a3", "old_text": "4.1.4. As keys for new encryption levels become available, TLS provides QUIC with those keys. Separately, as keys at a given encryption level become available to TLS, TLS indicates to QUIC that reading or writing keys at that encryption level are available. These events are not asynchronous; they always occur immediately after TLS is provided with new handshake bytes, or after TLS produces handshake bytes. TLS provides QUIC with three items as a new encryption level becomes available:", "comments": "These two statements are - to the extent that I can best determine - redundant. One is better.\nThe first two sentences of QUIC-TLS, Section 4.1.4 clearly intend to say different things, but I can't figure out the distinction. Is there a way we could reword them to make them clearer?\nI think that this is not separately at all. Let's reduce that to a single statement.", "new_text": "4.1.4. As keys at a given encryption level become available to TLS, TLS indicates to QUIC that reading or writing keys at that encryption level are available. These events are not asynchronous; they always occur immediately after TLS is provided with new handshake bytes, or after TLS produces handshake bytes. TLS provides QUIC with three items as a new encryption level becomes available:"}
{"id": "q-en-quicwg-base-drafts-ad607b3db43cf70374f379ebfd498f560b1ec0408fcd925c82183b5bea5eb943", "old_text": "CONNECTION_CLOSE in response to any UDP datagram that is received. However, an endpoint without the packet protection keys cannot identify and discard invalid packets. To avoid creating an unwitting amplification attack, such endpoints MUST reduce the frequency with which it sends packets containing a CONNECTION_CLOSE frame. To minimize the state that an endpoint maintains for a closing connection, endpoints MAY send the exact same packet. Allowing retransmission of a closing packet is an exception to the", "comments": "We weren't very concrete in saying how endpoints generate packets with CONNECTIONCLOSE, particularly those that throw away keys. We have a rather vague requirement. In short, follow the same rules we established for the handshake. Wording this as an aggregate number allows for stochastic reactions and larger CONNECTIONCLOSE frames. This way, if you get a 25-byte packet and respond with a 200-byte packet, you can do that, but you have to respond to 3 in 8 or fewer in that way. Note that this limit only applies if the endpoint throws away decryption keys. Endpoints with keys aren't blind amplifiers.\nclosing period ({{draining}}) and send a packet containing a CONNECTIONCLOSE in response to any UDP datagram that is received. However, an endpoint without the packet protection keys cannot identify and discard invalid packets. To avoid creating an unwitting amplification attack, such endpoints MUST reduce the frequency with which it sends packets containing a CONNECTIONCLOSE frame. To This seems like a pretty vague requirement for a 2119 MUST. 
Like, I could only send it 9999/10000 packets and that would be compliant.\nI think that it would be good to follow the 3x rule here also, but that's a non-editorial change.\nDo folks agree the PR resolves this?\nI think the PR resolves this and is a sensible choice.\nYup, the PR resolves this.\nOK, labelling as such.", "new_text": "CONNECTION_CLOSE in response to any UDP datagram that is received. However, an endpoint without the packet protection keys cannot identify and discard invalid packets. To avoid creating an unwitting amplification attack, such endpoints MUST limit the cumulative size of packets containing a CONNECTION_CLOSE frame to 3 times the cumulative size of the packets that cause those packets to be sent. To minimize the state that an endpoint maintains for a closing connection, endpoints MAY send the exact same packet. Allowing retransmission of a closing packet is an exception to the"}
{"id": "q-en-quicwg-base-drafts-d4ac070ca418107f72a3d5a74096e23b8bf7ed967369f9e95d49e79f60541086", "old_text": "likely the receiver will be able to process all the packets in a single pass. A packet with a short header does not include a length, so it can only be the last packet included in a UDP datagram. An endpoint SHOULD NOT coalesce multiple packets at the same encryption level. Receivers MAY route based on the information in the first packet contained in a UDP datagram. Senders MUST NOT coalesce QUIC packets", "comments": "This uses positive framing for the recommendation to use frames rather than packets. And it makes that recommendation directly, rather than implying it.\nHere are three quasi-nits about S 12.2. Handshake, 1-RTT; see Section 4.1.4 of {{QUIC-TLS}}) makes it more likely the receiver will be able to process all the packets in a single pass. A packet with a short header does not include a length, so it can only be the last packet included in a UDP datagram. An endpoint SHOULD NOT coalesce multiple packets at the same encryption level. Is the implication here that you should instead pack all the frames into one packet? UDP datagram. Senders MUST NOT coalesce QUIC packets for different connections into a single UDP datagram. Receivers SHOULD ignore any subsequent packets with a different Destination Connection ID than the first packet in the datagram. Is there a valid reason to have different SCIDs? ({{packet-version}}), and packets with a short header ({{short-header}}) do not contain a Length field and so cannot be followed by other packets in the same UDP datagram. Note also that there is no situation where a Retry or Version Negotiation packet is coalesced with another packet. Should this be a MUST NOT?\nYes. See . I don't see why. It's just a fact, not a conformance requirement.\nFor (1) I think it would be helpful to say that.\nLGTM, one suggestion.", "new_text": "likely the receiver will be able to process all the packets in a single pass. 
A packet with a short header does not include a length, so it can only be the last packet included in a UDP datagram. An endpoint SHOULD include multiple frames in a single packet if they are to be sent at the same encryption level, instead of coalescing multiple packets at the same encryption level. Receivers MAY route based on the information in the first packet contained in a UDP datagram. Senders MUST NOT coalesce QUIC packets"}
{"id": "q-en-quicwg-base-drafts-daf433835af87430c1717cc5383a0c548458b58cacf51f22373e144054c59cea", "old_text": "the PTO expiry, subject to address validation limits; see Section 8.1 of QUIC-TRANSPORT. Peers can also use coalesced packets to ensure that each datagram elicits at least one acknowledgement. For example, clients can coalesce an Initial packet containing PING and PADDING frames with a 0-RTT data packet and a server can coalesce an Initial packet containing a PING frame with one or more packets in its first flight.", "comments": "This should resolve some parts of .\nNAME If you wish, I will split this PR into small ones.\nI have dropped the commit of \"in flight\".\nI also changed into .\nNAME The commit for \"towards\" has been dropped.\nThanks. BTW, that thing is another odd English quirk.\nUnfortunately, this PR bundles a bunch of unrelated changes. Some of them I like, some of them I don't.\nThanks for the PR, NAME !", "new_text": "the PTO expiry, subject to address validation limits; see Section 8.1 of QUIC-TRANSPORT. Endpoints can also use coalesced packets to ensure that each datagram elicits at least one acknowledgement. For example, a client can coalesce an Initial packet containing PING and PADDING frames with a 0-RTT data packet and a server can coalesce an Initial packet containing a PING frame with one or more packets in its first flight."}
{"id": "q-en-quicwg-base-drafts-622e9efe21197ee469308b1e830278fdc5abe1bf8c4696f71bdb9983ab3197b9", "old_text": "outdated frame, such as a MAX_DATA frame carrying a smaller maximum data than one found in an older packet. Upon detecting losses, a sender MUST take appropriate congestion control action. The details of loss detection and congestion control are described in QUIC-RECOVERY.", "comments": "Added changes to recovery as well.\nBy design, TCP is capable of handling late-arriving acks. If a TCP endpoint receives an ack for a byte range after declaring loss of that byte range, retransmission of that byte range is cancelled. In contrast, QUIC uses monotonically increasing packet numbers. That is definitely an improvement, but does have a downside. If loss detection is too sensitive, and if endpoints drop information regarding what had been sent in a packet at the moment that packet is declared lost, then it would not recognize the late acks. I think that we should suggest (even if not recommend) QUIC endpoints to recognize late-arriving acks, so that our loss detection can be as aggressive and as effective as TCP, if not better. (Please let me know if such text already exists. I am opening the issue because I could not find one)\nI have to agree. We had to implement late acknowledgment by marking \"lost\" packets and retaining them. I believe that this was very helpful in reducing the amount of redundant information we send. Failing to retain packets that were marked as lost was also the cause of a bug. As the lost packet was removed from our outstanding calculation (a different value to the \"in-flight\" counter we use for congestion control), we believed that there were no packets outstanding and we didn't try to send anything when the next PTO fired. 
As the loss detection timer is shorter than the PTO timer, we believed that there was nothing to send often (and we don't check all sources of frames, which would have told us otherwise).\nAgreed.\nYeah, we should have text on this if we don't already.\nThe resolution in introduces a new recommendation for senders, so this ought to be considered a design issue if we're happy with that resolution.\nURL we have this text:\nNAME Thank you for pointing that out. I missed that. I tend to think that we should adopt . As stated in my original comment, handling of late acks is the prerequisite of many of the normative requirements that exist in the recovery draft. Therefore, it is better to have a normative requirement for handling of late acks.\nI'm not opposing adding new normative text. If new text is added, I think that the existing text should be aligned to it. In the PR, it recommends 3PTO, but the recovery draft A.1 says 1RTT.\nFWIW, our implementation, as of a little while ago, tracks for the current PTO. If everything is working nicely, that limits our commitment. As we are sending probes, the time lengthens so that we send a probe before discarding state. If it takes two probes to get an ACK, we won't recognize that, but we will always catch an ACK for the last set of probes. And the state commitment is minimal unless we're forced to probe.\nI'll note that the PR is against transport, but this issue is tagged as recovery.\nThanks for pointing out this inconsistency, NAME I would argue that this should be a PTO, since that allows for including the RTT variance in the waiting period. Using a PTO includes the maximum ack delay as well -- which may not be necessary -- but it's convenient to use the PTO period.\nI wanted to clarify my comment on the PR about believing that 1 RTT is a preferable recommendation to 3*PTO. We definitely don't want to discard this information BEFORE the packet is lost or acknowledged. 
But after it's been declared lost, there's no need to wait longer than the reordering window to receive an acknowledgement, because an immediate ACK will be sent when the out of order packet arrives. I tend to think an RTT is a plenty long reordering window in practice. If nothing is inflight before and nothing sent after the packet is lost AND the late immediate ACK for the reordered packet is lost, then it could take longer than the reordering window to receive an ACK, but that seems like an edge case that's not very interesting.\nNAME : I've changed it to 1 PTO. We've commonly used PTO as a time period for everything else in the transport, and so it's convenient to use PTO here too. Ultimately, this is the amount of time you're holding on to packets marked as lost, so the extra time waiting for a PTO (over an RTT) really shouldn't make any measurable difference. If you still think we should say RTT, I'm fine with that.\nI'm fine with PTO, and I prefer PTO above RTT. There could be reasons where there are some packet reordering due to multiple paths being used (e.g., ECMP, clustered endpoint). In such case, the variance of latency becomes more like a constant independent of RTT. I think it is a good idea to recommend a value (i.e. PTO) that can cover such constant even if RTT is very small. PTO has that would cover that constant.\nI'm not convinced I understand the rationale completely, but I'm ok with 1 PTO. I believe we should update the recovery draft in the same PR.\nSo, design?\nWe have text about this in recovery, but it's there was no normative text and it's in the appendix, so I think this is design.\nLet's not thrash too much on this one.", "new_text": "outdated frame, such as a MAX_DATA frame carrying a smaller maximum data than one found in an older packet. A sender SHOULD avoid retransmitting information from packets once they are acknowledged. 
This includes packets that are acknowledged after being declared lost, which can happen in the presence of network reordering. Doing so requires senders to retain information about packets after they are declared lost. A sender can discard this information after a period of time elapses that adequately allows for reordering, such as a PTO (Section 6.2 of QUIC-RECOVERY), or on other events, such as reaching a memory limit. Upon detecting losses, a sender MUST take appropriate congestion control action. The details of loss detection and congestion control are described in QUIC-RECOVERY."}
{"id": "q-en-quicwg-base-drafts-38063a9d307243d18b926239ca8118ea9931d127c458634cb58cd2efeebf7c94", "old_text": "be acknowledged, an endpoint MAY wait until an ack-eliciting packet has been received to include an ACK frame with outgoing frames. 13.2.2. A receiver determines how frequently to send acknowledgements in", "comments": "It's no longer normative and it's in a single section. I only moved text so far, and not as far as NAME suggested. I can go farther here or in a follow-up PR.\nNAME I would like to merge this. WDYT?\nCan we merge this soon. I'm worried about it developing conflicts.\nYep, NAME lost the chance to fix the PR. We are - of course - still open for editorial improvements in the form of new pull requests though.\ncalls them \"exemplary\" but 13.2.4 has MUST in it.\nFor reference, this refers to and in the editor's copy. I'm in favour of striking the exemplary note and saying that while the recovery draft is advisory, this document depends on implementations following something like what is recommended.\nThe specific MUSTs aren't really about the ACK algorithm per se, but about state endpoints will have to maintain for other reasons that are typically folded into your loss recovery code. If an implementation is unable to do duplicate detection for a packet number range, it MUST fail toward rejecting all packets, not toward accepting potential duplicates. An implementation must track the greatest packet number ever received, for use in recovering future packet numbers. In fact, the first requirement is discussed in 12.3 and could probably be dropped here. 
The second isn't actually called out (MUST) in 17.1, but could easily be moved there.\nI think I'm inclined towards reorganizing the text and putting all the 'exemplary' parts into one section(ie: \"Limiting ACK Ranges\") and moving the normative text either into \"Managing ACK Ranges\" or elsewhere in the draft.\nI thought I could clean this up fairly quickly, but I think a fair number of paragraphs need to move, including some of NAME suggestions. I'm happy to try to improve this in a few days if no one else has beaten me to it.\nAnyone want to help NAME out and take this on?\nSorry about dropping this one. I'm fine with it, and happy to make editorials one later.Thanks Ian. When you lay it out like this, it seems so obvious. I checked that we don't have citations for {{ack-tracking}}, so this works nicely. If you care about the \"MAY\", then we can fix that in another PR.", "new_text": "be acknowledged, an endpoint MAY wait until an ack-eliciting packet has been received to include an ACK frame with outgoing frames. A receiver MUST NOT send an ack-eliciting frame in all packets that would otherwise be non-ack-eliciting, to avoid an infinite feedback loop of acknowledgements. 13.2.2. A receiver determines how frequently to send acknowledgements in"}
{"id": "q-en-quicwg-base-drafts-38063a9d307243d18b926239ca8118ea9931d127c458634cb58cd2efeebf7c94", "old_text": "single QUIC packet. If it does not, then older ranges (those with the smallest packet numbers) are omitted. ack-tracking and ack-limiting describe an exemplary approach for determining what packets to acknowledge in each ACK frame. Though the goal of these algorithms is to generate an acknowledgment for every packet that is processed, it is still possible for acknowledgments to be lost. A sender cannot expect to receive an acknowledgment for every packet that the receiver processes. 13.2.4. When a packet containing an ACK frame is sent, the largest acknowledged in that frame may be saved. When a packet containing an ACK frame is acknowledged, the receiver can stop acknowledging packets less than or equal to the largest acknowledged in the sent ACK frame. In cases without ACK frame loss, this algorithm allows for a minimum of 1 RTT of reordering. In cases with ACK frame loss and reordering, this approach does not guarantee that every acknowledgement is seen by the sender before it is no longer included in the ACK frame. Packets could be received out of order and all subsequent ACK frames containing them could be lost. In this case, the loss recovery algorithm could cause spurious retransmissions, but the sender will continue making forward progress. 13.2.5. A receiver limits the number of ACK Ranges (ack-ranges) it remembers and sends in ACK frames, both to limit the size of ACK frames and to avoid resource exhaustion. After receiving acknowledgments for an ACK frame, the receiver SHOULD stop tracking those acknowledged ACK Ranges. It is possible that retaining many ACK Ranges could cause an ACK frame to become too large. A receiver can discard unacknowledged ACK", "comments": "It's no longer normative and it's in a single section. I only moved text so far, and not as far as NAME suggested. 
I can go farther here or in a follow-up PR.\nNAME I would like to merge this. WDYT?\nCan we merge this soon. I'm worried about it developing conflicts.\nYep, NAME lost the chance to fix the PR. We are - of course - still open for editorial improvements in the form of new pull requests though.\ncalls them \"exemplary\" but 13.2.4 has MUST in it.\nFor reference, this refers to and in the editor's copy. I'm in favour of striking the exemplary note and saying that while the recovery draft is advisory, this document depends on implementations following something like what is recommended.\nThe specific MUSTs aren't really about the ACK algorithm per se, but about state endpoints will have to maintain for other reasons that are typically folded into your loss recovery code. If an implementation is unable to do duplicate detection for a packet number range, it MUST fail toward rejecting all packets, not toward accepting potential duplicates. An implementation must track the greatest packet number ever received, for use in recovering future packet numbers. In fact, the first requirement is discussed in 12.3 and could probably be dropped here. The second isn't actually called out (MUST) in 17.1, but could easily be moved there.\nI think I'm inclined towards reorganizing the text and putting all the 'exemplary' parts into one section(ie: \"Limiting ACK Ranges\") and moving the normative text either into \"Managing ACK Ranges\" or elsewhere in the draft.\nI thought I could clean this up fairly quickly, but I think a fair number of paragraphs need to move, including some of NAME suggestions. I'm happy to try to improve this in a few days if no one else has beaten me to it.\nAnyone want to help NAME out and take this on?\nSorry about dropping this one. I'm fine with it, and happy to make editorials one later.Thanks Ian. When you lay it out like this, it seems so obvious. I checked that we don't have citations for {{ack-tracking}}, so this works nicely. 
If you care about the \"MAY\", then we can fix that in another PR.", "new_text": "single QUIC packet. If it does not, then older ranges (those with the smallest packet numbers) are omitted. A receiver limits the number of ACK Ranges (ack-ranges) it remembers and sends in ACK frames, both to limit the size of ACK frames and to avoid resource exhaustion. After receiving acknowledgments for an ACK frame, the receiver SHOULD stop tracking those acknowledged ACK Ranges. Senders can expect acknowledgements for most packets, but QUIC does not guarantee receipt of an acknowledgment for every packet that the receiver processes. It is possible that retaining many ACK Ranges could cause an ACK frame to become too large. A receiver can discard unacknowledged ACK"}
{"id": "q-en-quicwg-base-drafts-38063a9d307243d18b926239ca8118ea9931d127c458634cb58cd2efeebf7c94", "old_text": "value than what was included in a previous ACK frame could cause ECN to be unnecessarily disabled; see ecn-validation. A receiver that sends only non-ack-eliciting packets, such as ACK frames, might not receive an acknowledgement for a long period of time. This could cause the receiver to maintain state for a large", "comments": "It's no longer normative and it's in a single section. I only moved text so far, and not as far as NAME suggested. I can go farther here or in a follow-up PR.\nNAME I would like to merge this. WDYT?\nCan we merge this soon. I'm worried about it developing conflicts.\nYep, NAME lost the chance to fix the PR. We are - of course - still open for editorial improvements in the form of new pull requests though.\ncalls them \"exemplary\" but 13.2.4 has MUST in it.\nFor reference, this refers to and in the editor's copy. I'm in favour of striking the exemplary note and saying that while the recovery draft is advisory, this document depends on implementations following something like what is recommended.\nThe specific MUSTs aren't really about the ACK algorithm per se, but about state endpoints will have to maintain for other reasons that are typically folded into your loss recovery code. If an implementation is unable to do duplicate detection for a packet number range, it MUST fail toward rejecting all packets, not toward accepting potential duplicates. An implementation must track the greatest packet number ever received, for use in recovering future packet numbers. In fact, the first requirement is discussed in 12.3 and could probably be dropped here. 
The second isn't actually called out (MUST) in 17.1, but could easily be moved there.\nI think I'm inclined towards reorganizing the text and putting all the 'exemplary' parts into one section(ie: \"Limiting ACK Ranges\") and moving the normative text either into \"Managing ACK Ranges\" or elsewhere in the draft.\nI thought I could clean this up fairly quickly, but I think a fair number of paragraphs need to move, including some of NAME suggestions. I'm happy to try to improve this in a few days if no one else has beaten me to it.\nAnyone want to help NAME out and take this on?\nSorry about dropping this one. I'm fine with it, and happy to make editorials one later.Thanks Ian. When you lay it out like this, it seems so obvious. I checked that we don't have citations for {{ack-tracking}}, so this works nicely. If you care about the \"MAY\", then we can fix that in another PR.", "new_text": "value than what was included in a previous ACK frame could cause ECN to be unnecessarily disabled; see ecn-validation. ack-tracking describes an exemplary approach for determining what packets to acknowledge in each ACK frame. Though the goal of this algorithm is to generate an acknowledgment for every packet that is processed, it is still possible for acknowledgments to be lost. 13.2.4. When a packet containing an ACK frame is sent, the largest acknowledged in that frame may be saved. When a packet containing an ACK frame is acknowledged, the receiver can stop acknowledging packets less than or equal to the largest acknowledged in the sent ACK frame. A receiver that sends only non-ack-eliciting packets, such as ACK frames, might not receive an acknowledgement for a long period of time. This could cause the receiver to maintain state for a large"}
{"id": "q-en-quicwg-base-drafts-38063a9d307243d18b926239ca8118ea9931d127c458634cb58cd2efeebf7c94", "old_text": "send a PING or other small ack-eliciting frame occasionally, such as once per round trip, to elicit an ACK from the peer. A receiver MUST NOT send an ack-eliciting frame in all packets that would otherwise be non-ack-eliciting, to avoid an infinite feedback loop of acknowledgements. 13.2.6. An endpoint measures the delays intentionally introduced between the time the packet with the largest packet number is received and the", "comments": "It's no longer normative and it's in a single section. I only moved text so far, and not as far as NAME suggested. I can go farther here or in a follow-up PR.\nNAME I would like to merge this. WDYT?\nCan we merge this soon. I'm worried about it developing conflicts.\nYep, NAME lost the chance to fix the PR. We are - of course - still open for editorial improvements in the form of new pull requests though.\ncalls them \"exemplary\" but 13.2.4 has MUST in it.\nFor reference, this refers to and in the editor's copy. I'm in favour of striking the exemplary note and saying that while the recovery draft is advisory, this document depends on implementations following something like what is recommended.\nThe specific MUSTs aren't really about the ACK algorithm per se, but about state endpoints will have to maintain for other reasons that are typically folded into your loss recovery code. If an implementation is unable to do duplicate detection for a packet number range, it MUST fail toward rejecting all packets, not toward accepting potential duplicates. An implementation must track the greatest packet number ever received, for use in recovering future packet numbers. In fact, the first requirement is discussed in 12.3 and could probably be dropped here. 
The second isn't actually called out (MUST) in 17.1, but could easily be moved there.\nI think I'm inclined towards reorganizing the text and putting all the 'exemplary' parts into one section(ie: \"Limiting ACK Ranges\") and moving the normative text either into \"Managing ACK Ranges\" or elsewhere in the draft.\nI thought I could clean this up fairly quickly, but I think a fair number of paragraphs need to move, including some of NAME suggestions. I'm happy to try to improve this in a few days if no one else has beaten me to it.\nAnyone want to help NAME out and take this on?\nSorry about dropping this one. I'm fine with it, and happy to make editorials one later.Thanks Ian. When you lay it out like this, it seems so obvious. I checked that we don't have citations for {{ack-tracking}}, so this works nicely. If you care about the \"MAY\", then we can fix that in another PR.", "new_text": "send a PING or other small ack-eliciting frame occasionally, such as once per round trip, to elicit an ACK from the peer. In cases without ACK frame loss, this algorithm allows for a minimum of 1 RTT of reordering. In cases with ACK frame loss and reordering, this approach does not guarantee that every acknowledgement is seen by the sender before it is no longer included in the ACK frame. Packets could be received out of order and all subsequent ACK frames containing them could be lost. In this case, the loss recovery algorithm could cause spurious retransmissions, but the sender will continue making forward progress. 13.2.5. An endpoint measures the delays intentionally introduced between the time the packet with the largest packet number is received and the"}
{"id": "q-en-quicwg-base-drafts-38063a9d307243d18b926239ca8118ea9931d127c458634cb58cd2efeebf7c94", "old_text": "packets, the ACK Delay field in acknowledgements for those packet types SHOULD be set to 0. 13.2.7. ACK frames MUST only be carried in a packet that has the same packet number space as the packet being acknowledged; see packet-protected.", "comments": "It's no longer normative and it's in a single section. I only moved text so far, and not as far as NAME suggested. I can go farther here or in a follow-up PR.\nNAME I would like to merge this. WDYT?\nCan we merge this soon. I'm worried about it developing conflicts.\nYep, NAME lost the chance to fix the PR. We are - of course - still open for editorial improvements in the form of new pull requests though.\ncalls them \"exemplary\" but 13.2.4 has MUST in it.\nFor reference, this refers to and in the editor's copy. I'm in favour of striking the exemplary note and saying that while the recovery draft is advisory, this document depends on implementations following something like what is recommended.\nThe specific MUSTs aren't really about the ACK algorithm per se, but about state endpoints will have to maintain for other reasons that are typically folded into your loss recovery code. If an implementation is unable to do duplicate detection for a packet number range, it MUST fail toward rejecting all packets, not toward accepting potential duplicates. An implementation must track the greatest packet number ever received, for use in recovering future packet numbers. In fact, the first requirement is discussed in 12.3 and could probably be dropped here. 
The second isn't actually called out (MUST) in 17.1, but could easily be moved there.\nI think I'm inclined towards reorganizing the text and putting all the 'exemplary' parts into one section(ie: \"Limiting ACK Ranges\") and moving the normative text either into \"Managing ACK Ranges\" or elsewhere in the draft.\nI thought I could clean this up fairly quickly, but I think a fair number of paragraphs need to move, including some of NAME suggestions. I'm happy to try to improve this in a few days if no one else has beaten me to it.\nAnyone want to help NAME out and take this on?\nSorry about dropping this one. I'm fine with it, and happy to make editorials one later.Thanks Ian. When you lay it out like this, it seems so obvious. I checked that we don't have citations for {{ack-tracking}}, so this works nicely. If you care about the \"MAY\", then we can fix that in another PR.", "new_text": "packets, the ACK Delay field in acknowledgements for those packet types SHOULD be set to 0. 13.2.6. ACK frames MUST only be carried in a packet that has the same packet number space as the packet being acknowledged; see packet-protected."}
{"id": "q-en-quicwg-base-drafts-38063a9d307243d18b926239ca8118ea9931d127c458634cb58cd2efeebf7c94", "old_text": "Note that the same limitation applies to other data sent by the server protected by the 1-RTT keys. 13.2.8. Packets containing PADDING frames are considered to be in flight for congestion control purposes QUIC-RECOVERY. Packets containing only", "comments": "It's no longer normative and it's in a single section. I only moved text so far, and not as far as NAME suggested. I can go farther here or in a follow-up PR.\nNAME I would like to merge this. WDYT?\nCan we merge this soon. I'm worried about it developing conflicts.\nYep, NAME lost the chance to fix the PR. We are - of course - still open for editorial improvements in the form of new pull requests though.\ncalls them \"exemplary\" but 13.2.4 has MUST in it.\nFor reference, this refers to and in the editor's copy. I'm in favour of striking the exemplary note and saying that while the recovery draft is advisory, this document depends on implementations following something like what is recommended.\nThe specific MUSTs aren't really about the ACK algorithm per se, but about state endpoints will have to maintain for other reasons that are typically folded into your loss recovery code. If an implementation is unable to do duplicate detection for a packet number range, it MUST fail toward rejecting all packets, not toward accepting potential duplicates. An implementation must track the greatest packet number ever received, for use in recovering future packet numbers. In fact, the first requirement is discussed in 12.3 and could probably be dropped here. 
The second isn't actually called out (MUST) in 17.1, but could easily be moved there.\nI think I'm inclined towards reorganizing the text and putting all the 'exemplary' parts into one section(ie: \"Limiting ACK Ranges\") and moving the normative text either into \"Managing ACK Ranges\" or elsewhere in the draft.\nI thought I could clean this up fairly quickly, but I think a fair number of paragraphs need to move, including some of NAME suggestions. I'm happy to try to improve this in a few days if no one else has beaten me to it.\nAnyone want to help NAME out and take this on?\nSorry about dropping this one. I'm fine with it, and happy to make editorials one later.Thanks Ian. When you lay it out like this, it seems so obvious. I checked that we don't have citations for {{ack-tracking}}, so this works nicely. If you care about the \"MAY\", then we can fix that in another PR.", "new_text": "Note that the same limitation applies to other data sent by the server protected by the 1-RTT keys. 13.2.7. Packets containing PADDING frames are considered to be in flight for congestion control purposes QUIC-RECOVERY. Packets containing only"}
{"id": "q-en-quicwg-base-drafts-ff2903eb351f796c9b5ff07ce288479a95509744f20afe8bfb46bd00bea263f8", "old_text": "connection ID. If the spin bit is enabled for the connection, the endpoint maintains a spin value and sets the spin bit in the short header to the currently stored value when a packet with a short header is sent out. The spin value is initialized to 0 in the endpoint at connection start. Each endpoint also remembers the highest packet number seen from its peer on the connection. When a server receives a short header packet that increments the highest packet number seen by the server from the client, it sets the spin value to be equal to the spin bit in the received packet. When a client receives a short header packet that increments the highest packet number seen by the client from the server, it sets the spin value to the inverse of the spin bit in the received packet. An endpoint resets its spin value to zero when sending the first packet of a given connection with a new connection ID. This reduces the risk that transient spin bit state can be used to link flows across connection migration or ID change. With this mechanism, the server reflects the spin value received, while the client 'spins' it after one RTT. On-path observers can", "comments": "This binds the spin-bit state to a network path, which aligns with intent better than the existing language. It's more words, but it is far less ambiguous.\nNAME sanity check please? I'll merge this soon either way.\nThis is likely a case of \"we know what we meant,\" but.... Section 17.3.1 says that the spin bit can be enabled/disabled on a per-connection or per-CID basis, and that: However, a literal reading of this suggests that a client which sets the spin bit on a per-connection basis will revert the spin bit on the main path to zero whenever it uses a new CID to probe on an alternate path. It also has implications for an implementation that uses many active CIDs at once. 
One resolution is that the current state of the spin bit is maintained on a per-CID basis, and that the spin bit begins at zero for each CID. However, for the client using many CIDs at once, that then begs the question of which of the peer's CIDs are being reflected back to it. I think that what we actually mean here is two-fold: The current state of the spin bit is maintained on a per-path basis, and is initialized to zero for each path. The current state of the spin bit is reset to zero whenever the CID being used on a path changes, no matter how frequently that is.\nI think what you summarize in the bullets is indeed what was intended. NAME", "new_text": "connection ID. If the spin bit is enabled for the connection, the endpoint maintains a spin value for each network path and sets the spin bit in the short header to the currently stored value when a packet with a short header is sent on that path. The spin value is initialized to 0 in the endpoint for each network path. Each endpoint also remembers the highest packet number seen from its peer on each path. When a server receives a short header packet that increases the highest packet number seen by the server from the client on a given network path, it sets the spin value for that path to be equal to the spin bit in the received packet. When a client receives a short header packet that increases the highest packet number seen by the client from the server on a given network path, it sets the spin value for that path to the inverse of the spin bit in the received packet. An endpoint resets the spin value for a network path to zero when changing the connection ID being used on that network path. With this mechanism, the server reflects the spin value received, while the client 'spins' it after one RTT. On-path observers can"}
{"id": "q-en-quicwg-base-drafts-aab428b55bd04e2307f1baf9693247f36cde6bb58b1b09dc20dff74ab3c3994e", "old_text": "path via the attacker. This places the attacker on path, giving it the ability to observe or drop all subsequent packets. Unlike the attack described in on-path-spoofing, the attacker can ensure that the new path is successfully validated. This style of attack relies on the attacker using a path that is approximately as fast as the direct path between endpoints. The attack is more reliable if relatively few packets are sent or if", "comments": "This text could be read to imply that an off-path attacker is more capable than an on-path attacker, which is rarely true. What it was meant to point out was that it is easier to move traffic onto a path that you are on. What it fails to acknowledge is that it is also easier to move traffic off a path that you are on. In other words, the treatment of this in 21.12 is more thorough and we don't need to talk about limitations. Mike suggested that there is some duplication between this attack and the more comprehensive analysis in 21.12. That is true, but these serve different purposes. This is to describe attacks and the normative requirements on endpoints necessary to avoid them. The other section is a thorough and holistic analysis. I couldn't see any truly straightforward changes. That doesn't mean that we won't find a way to clean this up, or that it would be undesirable to have fewer words, but I've not the time for that right now.\nI find the text in 9.3.3 confusing, because it says: Unlike the attack described in {{on-path-spoofing}}, the attacker can ensure that the new path is successfully validated. This is odd because on-path attackers are generally regarded as stronger than off-path attackers. Why can't an on-path attacker complete this attack?\nI think you're right that the text is confusing, if not quite incorrect. 
An on-path attacker can rewrite a source address, causing an apparent migration, but can't cause path validation to succeed with that source address unless it is actually on-path from both endpoints to the spoofed addresses as well. An off-path attacker which can race packets successfully can cause migration with successful path validation to a path which includes it, becoming a limited on-path attacker. This also feels slightly duplicative of 21.12.3; it might be worth condensing some text between these sections.", "new_text": "path via the attacker. This places the attacker on path, giving it the ability to observe or drop all subsequent packets. This style of attack relies on the attacker using a path that is approximately as fast as the direct path between endpoints. The attack is more reliable if relatively few packets are sent or if"}
{"id": "q-en-quicwg-base-drafts-79beb17080c83004bd44a36b09671be1d94161885eb4916a50187d82bfc0c93b", "old_text": "assumed to be implicitly reset. After sending a CONNECTION_CLOSE frame, an endpoint immediately enters the closing state. During the closing period, an endpoint that sends a CONNECTION_CLOSE frame SHOULD respond to any incoming packet that can be decrypted with another packet containing a CONNECTION_CLOSE frame. Such an endpoint SHOULD limit the number of packets it generates containing a CONNECTION_CLOSE frame. For instance, an endpoint could wait for a progressively increasing number of received packets or amount of time before responding to a received packet. An endpoint is allowed to drop the packet protection keys when entering the closing period (draining) and send a packet containing a CONNECTION_CLOSE in response to any UDP datagram that is received. However, an endpoint without the packet protection keys cannot identify and discard invalid packets. To avoid creating an unwitting amplification attack, such endpoints MUST limit the cumulative size of packets containing a CONNECTION_CLOSE frame to 3 times the cumulative size of the packets that cause those packets to be sent. To minimize the state that an endpoint maintains for a closing connection, endpoints MAY send the exact same packet. Allowing retransmission of a closing packet is an exception to the requirement that a new packet number be used for each packet in", "comments": "I took a careful look at the text here and found a few things: There wasn't much redundancy, but there was a little. Mostly the problem was scattering of content between two locations in the document. The closing and draining states only apply to immediate close. So I moved things up. I trimmed the top-level immediate close text, and moved text specific to closing or draining to sections corresponding to each. 
There were a few bits that could be cut, and a few that needed to be better aligned with existing text, but nothing major. Most of this is unmodified text. There are a few bits that I had to tweak. I will add comments for those.\nNAME you can add this to your review load.\nThe section on Immediate Close contains some text that is redundant with the section on the Closing and Draining periods. Factor some of this out.\nI think this flows a lot better. Thanks for the time you put into this!Thanks, Martin!", "new_text": "assumed to be implicitly reset. After sending a CONNECTION_CLOSE frame, an endpoint immediately enters the closing state; see closing. After receiving a CONNECTION_CLOSE frame, endpoints enter the draining state; see draining. An immediate close can be used after an application protocol has arranged to close a connection. This might be after the application protocol negotiates a graceful shutdown. The application protocol can exchange messages that are needed for both application endpoints to agree that the connection can be closed, after which the application requests that QUIC close the connection. When QUIC consequently closes the connection, a CONNECTION_CLOSE frame with an application-supplied error code will be used to signal closure to the peer. The closing and draining connection states exist to ensure that connections close cleanly and that delayed or reordered packets are properly discarded. These states SHOULD persist for at least three times the current Probe Timeout (PTO) interval as defined in QUIC- RECOVERY. Disposing of connection state prior to exiting the closing or draining state could result in an endpoint generating a stateless reset unnecessarily when it receives a late-arriving packet. Endpoints that have some alternative means to ensure that late-arriving packets do not induce a response, such as those that are able to close the UDP socket, MAY end these states earlier to allow for faster resource recovery. 
Servers that retain an open socket for accepting new connections SHOULD NOT end the closing or draining states early. Once its closing or draining state ends, an endpoint SHOULD discard all connection state. The endpoint MAY send a stateless reset in response to any further incoming packets belonging to this connection. 10.2.1. An endpoint enters the closing state after initiating an immediate close. In the closing state, an endpoint retains only enough information to generate a packet containing a CONNECTION_CLOSE frame and to identify packets as belonging to the connection. An endpoint in the closing state sends a packet containing a CONNECTION_CLOSE frame in response to any incoming packet that it attributes to the connection. An endpoint SHOULD limit the rate at which it generates packets in the closing state. For instance, an endpoint could wait for a progressively increasing number of received packets or amount of time before responding to received packets. An endpoint's selected connection ID and the QUIC version are sufficient information to identify packets for a closing connection; the endpoint MAY discard all other connection state. An endpoint that is closing is not required to process any received frame. An endpoint MAY retain packet protection keys for incoming packets to allow it to read and process a CONNECTION_CLOSE frame. An endpoint MAY drop packet protection keys when entering the closing state and send a packet containing a CONNECTION_CLOSE frame in response to any UDP datagram that is received. However, an endpoint that discards packet protection keys cannot identify and discard invalid packets. To avoid being used for an amplification attack, such endpoints MUST limit the cumulative size of packets it sends to three times the cumulative size of the packets that are received and attributed to the connection. 
To minimize the state that an endpoint maintains for a closing connection, endpoints MAY send the exact same packet in response to any received packet. Allowing retransmission of a closing packet is an exception to the requirement that a new packet number be used for each packet in"}
{"id": "q-en-quicwg-base-drafts-79beb17080c83004bd44a36b09671be1d94161885eb4916a50187d82bfc0c93b", "old_text": "expected to be relevant for a closed connection. Retransmitting the final packet requires less state. New packets from unverified addresses could be used to create an amplification attack; see address-validation. To avoid this, endpoints MUST either limit transmission of CONNECTION_CLOSE frames to validated addresses or drop packets without response if the response would be more than three times larger than the received packet. After receiving a CONNECTION_CLOSE frame, endpoints enter the draining state. An endpoint that receives a CONNECTION_CLOSE frame MAY send a single packet containing a CONNECTION_CLOSE frame before entering the draining state, using a CONNECTION_CLOSE frame and a NO_ERROR code if appropriate. An endpoint MUST NOT send further packets, which could result in a constant exchange of CONNECTION_CLOSE frames until the closing period on either peer ended. An immediate close can be used after an application protocol has arranged to close a connection. This might be after the application protocols negotiates a graceful shutdown. The application protocol exchanges whatever messages that are needed to cause both endpoints to agree to close the connection, after which the application requests that the connection be closed. When the application closes the connection, a CONNECTION_CLOSE frame with an appropriate error code will be used to signal closure. 10.2.1. When sending CONNECTION_CLOSE, the goal is to ensure that the peer will process the frame. Generally, this means sending the frame in a", "comments": "I took a careful look at the text here and found a few things: There wasn't much redundancy, but there was a little. Mostly the problem was scattering of content between two locations in the document. The closing and draining states only apply to immediate close. So I moved things up. 
I trimmed the top-level immediate close text, and moved text specific to closing or draining to sections corresponding to each. There were a few bits that could be cut, and a few that needed to be better aligned with existing text, but nothing major. Most of this is unmodified text. There are a few bits that I had to tweak. I will add comments for those.\nNAME you can add this to your review load.\nThe section on Immediate Close contains some text that is redundant with the section on the Closing and Draining periods. Factor some of this out.\nI think this flows a lot better. Thanks for the time you put into this!Thanks, Martin!", "new_text": "expected to be relevant for a closed connection. Retransmitting the final packet requires less state. While in the closing state, an endpoint could receive packets from a new source address, possibly indicating a connection migration; see migration. An endpoint in the closing state MUST either discard packets received from an unvalidated address or limit the cumulative size of packets it sends to an unvalidated address to three times the size of packets it receives from that address. An endpoint is not expected to handle key updates when it is closing (Section 6 of QUIC-TLS). A key update might prevent the endpoint from moving from the closing state to the draining state, as the endpoint will not be able to process subsequently received packets, but it otherwise has no impact. 10.2.2. The draining state is entered once an endpoint receives a CONNECTION_CLOSE frame, which indicates that its peer is closing or draining. While otherwise identical to the closing state, an endpoint in the draining state MUST NOT send any packets. Retaining packet protection keys is unnecessary once a connection is in the draining state. An endpoint that receives a CONNECTION_CLOSE frame MAY send a single packet containing a CONNECTION_CLOSE frame before entering the draining state, using a NO_ERROR code if appropriate. 
An endpoint MUST NOT send further packets. Doing so could result in a constant exchange of CONNECTION_CLOSE frames until one of the endpoints exits the closing state. An endpoint MAY enter the draining state from the closing state if it receives a CONNECTION_CLOSE frame, which indicates that the peer is also closing or draining. In this case, the draining state SHOULD end when the closing state would have ended. In other words, the endpoint uses the same end time, but ceases transmission of any packets on this connection. 10.2.3. When sending CONNECTION_CLOSE, the goal is to ensure that the peer will process the frame. Generally, this means sending the frame in a"}
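The closing-state rules collected in this record's new_text — respond to attributed packets, back off progressively, and (when packet protection keys are dropped) keep sent bytes within three times the bytes received — can be sketched as a toy state holder. The names and the doubling backoff policy below are illustrative assumptions, not requirements of the draft:

```python
class ClosingState:
    """Illustrative model of closing-state response limits."""

    def __init__(self, keys_dropped: bool):
        self.keys_dropped = keys_dropped
        self.bytes_received = 0
        self.bytes_sent = 0
        self.packets_seen = 0
        self.respond_after = 1   # progressively increasing backoff (assumed policy)

    def on_datagram(self, size: int, close_packet_size: int) -> bool:
        """Return True if a packet with a CONNECTION_CLOSE frame should be sent."""
        self.bytes_received += size
        self.packets_seen += 1
        # Rate-limit responses: wait for progressively more received packets.
        if self.packets_seen < self.respond_after:
            return False
        # Without packet protection keys, cumulative sent bytes MUST stay
        # within three times the cumulative bytes received.
        if self.keys_dropped and self.bytes_sent + close_packet_size > 3 * self.bytes_received:
            return False
        self.packets_seen = 0
        self.respond_after *= 2
        self.bytes_sent += close_packet_size
        return True
```

A real endpoint could equally back off by elapsed time rather than packet count; the draft text permits either.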
{"id": "q-en-quicwg-base-drafts-79beb17080c83004bd44a36b09671be1d94161885eb4916a50187d82bfc0c93b", "old_text": "of broken connections where only very small packets are sent; such failures might only be detected by other means, such as timers. 10.4. The closing and draining connection states exist to ensure that connections close cleanly and that delayed or reordered packets are properly discarded. These states SHOULD persist for at least three times the current Probe Timeout (PTO) interval as defined in QUIC- RECOVERY. An endpoint enters a closing period after initiating an immediate close; immediate-close. While closing, an endpoint MUST NOT send packets unless they contain a CONNECTION_CLOSE frame; see immediate- close for details. An endpoint retains only enough information to generate a packet containing a CONNECTION_CLOSE frame and to identify packets as belonging to the connection. The endpoint's selected connection ID and the QUIC version are sufficient information to identify packets for a closing connection; an endpoint can discard all other connection state. An endpoint MAY retain packet protection keys for incoming packets to allow it to read and process a CONNECTION_CLOSE frame. The draining state is entered once an endpoint receives a signal that its peer is closing or draining. While otherwise identical to the closing state, an endpoint in the draining state MUST NOT send any packets. Retaining packet protection keys is unnecessary once a connection is in the draining state. An endpoint MAY transition from the closing period to the draining period if it receives a CONNECTION_CLOSE frame or stateless reset, both of which indicate that the peer is also closing or draining. The draining period SHOULD end when the closing period would have ended. In other words, the endpoint can use the same end time, but cease retransmission of the closing packet. 
Disposing of connection state prior to the end of the closing or draining period could cause delayed or reordered packets to generate an unnecessary stateless reset. Endpoints that have some alternative means to ensure that late-arriving packets on the connection do not induce a response, such as those that are able to close the UDP socket, MAY use an abbreviated draining period to allow for faster resource recovery. Servers that retain an open socket for accepting new connections SHOULD NOT exit the closing or draining period early. Once the closing or draining period has ended, an endpoint SHOULD discard all connection state. This results in new packets on the connection being handled generically. For instance, an endpoint MAY send a stateless reset in response to any further incoming packets. The draining and closing periods do not apply when a stateless reset (stateless-reset) is sent. An endpoint is not expected to handle key updates when it is closing or draining. A key update might prevent the endpoint from moving from the closing state to draining, but it otherwise has no impact. While in the closing period, an endpoint could receive packets from a new source address, indicating a connection migration; migration. An endpoint in the closing state MUST strictly limit the number of packets it sends to this new address until the address is validated; see migrate-validate. A server in the closing state MAY instead choose to discard packets received from a new source address. 11. An endpoint that detects an error SHOULD signal the existence of that", "comments": "I took a careful look at the text here and found a few things: There wasn't much redundancy, but there was a little. Mostly the problem was scattering of content between two locations in the document. The closing and draining states only apply to immediate close. So I moved things up. 
I trimmed the top-level immediate close text, and moved text specific to closing or draining to sections corresponding to each. There were a few bits that could be cut, and a few that needed to be better aligned with existing text, but nothing major. Most of this is unmodified text. There are a few bits that I had to tweak. I will add comments for those.\nNAME you can add this to your review load.\nThe section on Immediate Close contains some text that is redundant with the section on the Closing and Draining periods. Factor some of this out.\nI think this flows a lot better. Thanks for the time you put into this!Thanks, Martin!", "new_text": "of broken connections where only very small packets are sent; such failures might only be detected by other means, such as timers. 11. An endpoint that detects an error SHOULD signal the existence of that"}
{"id": "q-en-quicwg-base-drafts-74a8f3e075a3187727014c376ef36281c24df3dda51a321161c5426980f2e8b9", "old_text": "semantics and encodings for any version-specific field. In particular, different packet protection keys might be used for different versions. Servers that do not support a particular version are unlikely to be able to decrypt the payload of the packet. Servers SHOULD NOT attempt to decode or decrypt a packet from an unknown version, but instead send a Version Negotiation packet, provided that the packet is sufficiently long. Packets with a supported version, or no version field, are matched to a connection using the connection ID or - for packets with zero-", "comments": "instead of saying servers SHOULD NOT attempt to parse unknown versions, stick with the factual statement that they will likely be unable to do so. Flips the SHOULD to the second half of the sentence, the expected response. I believe this is editorial, since we're not modifying expected behavior, but it's touching normative language, so please speak up if you're not convinced.\nActually, it seems like there should be a normative MUST NOT attempt.\nI would have said \"cannot\". The contents are unknown, after all. Attempting to prohibit someone from doing something impossible seems like it might be unnecessary (or at least futile). Edit: of course, this change avoids having to deal with that question, which is better.\nWhy do you say it's impossible? You could simply hope that version n+1 was the same as version n in terms of structure and try to parse it as that, ignoring the version number.\nWhich is why the PR says \"unlikely,\" not \"impossible.\" But the impossible thing is for the server to know whether it's doing things correctly.\nRight, but my point is that we should prohibit trying, not just give people advice that it's dumb.\n(Broken out of .) This seems like tuning? 
Or is the intention to protect the endpoint from an attack, please explain the SHOULD.\nencodings for any version-specific field. In particular, different packet protection keys might be used for different versions. Servers that do not support a particular version are unlikely to be able to decrypt the payload of the packet. Servers SHOULD NOT attempt to decode or decrypt a packet from an unknown version, but instead send a Version Negotiation packet, provided that the packet is sufficiently long. I'm having trouble seeing what's needed here. The endpoint has received a packet professing to be in a version it does not understand. It can't know the semantics or encoding of any field there -- and the paragraph already explains that. Yes, I'm sure there are potential attacks in interpreting a packet of unknown provenance and unknown semantics against a different set of semantics and expectations, but I'm not convinced we need to explain them. The text recommends sending a Version Negotiation packet, which seems like the only sensible response.\nThis refers to .\nNAME\nI'm looking for two things: (i) I think this ought to clarify the intention for the SHOULD, I suspect it is clear to those working on this part, but not clear to me why this is a recommendation - i.e., specifically can you give me a use-case of why the Server would otherwise USE information from a packet from an unknown version, unless the intention is to say the server shouldn't waste resources to do try to do this? (ii) I think this implies understanding of Section 14.1 later in the spec, and a ref would help. So... Is it equally true that.. \"Servers MUST NOT use the contents of a packet with an unknown version, and SHOULD NOT decrypt this packet, but instead send a Version Negotiation packet, provided that the packet is sufficiently long (see Section 14.1).\"? 
Or perhaps simply explain something like: \"Servers SHOULD NOT commit resources to attempt to decode or decrypt a packet from an unknown version, but instead send a Version Negotiation packet, provided that the packet is sufficiently long (see Section 14.1).\"?\nFundamentally, it's a caution against assumptions which could prove incorrect, such as \"this looks like an unencrypted TLS ClientHello, so I can go find the field that contains....\" The server will either process information in a way that could be used maliciously, or the server will generate a response that won't be understood by the client. Ideally, this would be a MUST, except that it's entirely unenforceable by the client what the server does internally. Perhaps the right solution here is to remove the normative language with regard to that piece and stick to the statement of fact we already have:\nThat approach seems fine.", "new_text": "semantics and encodings for any version-specific field. In particular, different packet protection keys might be used for different versions. Servers that do not support a particular version are unlikely to be able to decrypt the payload of the packet or properly interpret the result. Servers SHOULD respond with a Version Negotiation packet, provided that the datagram is sufficiently long. Packets with a supported version, or no version field, are matched to a connection using the connection ID or - for packets with zero-"}
{"id": "q-en-quicwg-base-drafts-459caf625c1a79a8023a4dcb0a300f5869298013aa5d19314217e5af06afa58e", "old_text": "between a specific local address and a specific peer address, where an address is the two-tuple of IP address and port. Path validation tests that packets can be both sent to (PATH_CHALLENGE) and received from (PATH_RESPONSE) a peer on the path. Importantly, it validates that the packets received from the migrating endpoint do not carry a spoofed source address. Path validation can be used at any time by either endpoint. For instance, an endpoint might check that a peer is still in possession", "comments": "The original design for path validation attempted to validate return reachability by requiring that PATHRESPONSE followed the same path as PATHCHALLENGE. We removed that long ago, but the text retained echoes of that choice.\nCurrently, does not specify which path a PATHRESPONSE needs to be sent over. That was somewhat surprising to me, as I had expected it to be scoped to the path that PATHCHALLENGE was received on. If it isn't, could we add a sentence to the Path Validation Responses subsection to explicitly state that a PATH_RESPONSE is not scoped to a given path (and ideally explaining why it isn't)?\nSo I thought we were really crisp about this, but we were not. PATHRESPONSE does not need to follow the path on which PATHCHALLENGE was received. We're clear about this when it comes to preferred address handling, but there is some vestigial text that contradicts this. That should be fixed.\nFor the record, I personally believe that adequately resolves this issue while remaining purely editorial.\nThanks for being so prompt David. I will leave this for a bit to allow others a chance to spot errors, but I plan to merge this before we publish -30 (real soon now, I hope).\nYes, the requirement was removed but the inverse statement was never explicitly added. corrects that.\nThanks for writing this up, tiny wording suggestions, but looking great! 
Thanks for writing this up, I think this adequately resolves my concerns from .", "new_text": "between a specific local address and a specific peer address, where an address is the two-tuple of IP address and port. Path validation tests that packets sent on a path to a peer are received by that peer. Path validation is used to ensure that packets received from a migrating peer do not carry a spoofed source address. Path validation does not validate that a peer can send in the return direction. The peer performs independent validation of the return path. Path validation can be used at any time by either endpoint. For instance, an endpoint might check that a peer is still in possession"}
{"id": "q-en-quicwg-base-drafts-459caf625c1a79a8023a4dcb0a300f5869298013aa5d19314217e5af06afa58e", "old_text": "8.3. To initiate path validation, an endpoint sends a PATH_CHALLENGE frame containing a random payload on the path to be validated. An endpoint MAY send multiple PATH_CHALLENGE frames to guard against packet loss. However, an endpoint SHOULD NOT send multiple", "comments": "The original design for path validation attempted to validate return reachability by requiring that PATHRESPONSE followed the same path as PATHCHALLENGE. We removed that long ago, but the text retained echoes of that choice.\nCurrently, does not specify which path a PATHRESPONSE needs to be sent over. That was somewhat surprising to me, as I had expected it to be scoped to the path that PATHCHALLENGE was received on. If it isn't, could we add a sentence to the Path Validation Responses subsection to explicitly state that a PATH_RESPONSE is not scoped to a given path (and ideally explaining why it isn't)?\nSo I thought we were really crisp about this, but we were not. PATHRESPONSE does not need to follow the path on which PATHCHALLENGE was received. We're clear about this when it comes to preferred address handling, but there is some vestigial text that contradicts this. That should be fixed.\nFor the record, I personally believe that adequately resolves this issue while remaining purely editorial.\nThanks for being so prompt David. I will leave this for a bit to allow others a chance to spot errors, but I plan to merge this before we publish -30 (real soon now, I hope).\nYes, the requirement was removed but the inverse statement was never explicitly added. corrects that.\nThanks for writing this up, tiny wording suggestions, but looking great! Thanks for writing this up, I think this adequately resolves my concerns from .", "new_text": "8.3. To initiate path validation, an endpoint sends a PATH_CHALLENGE frame containing an unpredictable payload on the path to be validated. 
An endpoint MAY send multiple PATH_CHALLENGE frames to guard against packet loss. However, an endpoint SHOULD NOT send multiple"}
{"id": "q-en-quicwg-base-drafts-459caf625c1a79a8023a4dcb0a300f5869298013aa5d19314217e5af06afa58e", "old_text": "On receiving a PATH_CHALLENGE frame, an endpoint MUST respond by echoing the data contained in the PATH_CHALLENGE frame in a PATH_RESPONSE frame. An endpoint MUST NOT delay transmission of a packet containing a PATH_RESPONSE frame unless constrained by congestion control. An endpoint MUST NOT send more than one PATH_RESPONSE frame in response to one PATH_CHALLENGE frame; see retransmission-of-", "comments": "The original design for path validation attempted to validate return reachability by requiring that PATHRESPONSE followed the same path as PATHCHALLENGE. We removed that long ago, but the text retained echoes of that choice.\nCurrently, does not specify which path a PATHRESPONSE needs to be sent over. That was somewhat surprising to me, as I had expected it to be scoped to the path that PATHCHALLENGE was received on. If it isn't, could we add a sentence to the Path Validation Responses subsection to explicitly state that a PATH_RESPONSE is not scoped to a given path (and ideally explaining why it isn't)?\nSo I thought we were really crisp about this, but we were not. PATHRESPONSE does not need to follow the path on which PATHCHALLENGE was received. We're clear about this when it comes to preferred address handling, but there is some vestigial text that contradicts this. That should be fixed.\nFor the record, I personally believe that adequately resolves this issue while remaining purely editorial.\nThanks for being so prompt David. I will leave this for a bit to allow others a chance to spot errors, but I plan to merge this before we publish -30 (real soon now, I hope).\nYes, the requirement was removed but the inverse statement was never explicitly added. corrects that.\nThanks for writing this up, tiny wording suggestions, but looking great! 
Thanks for writing this up, I think this adequately resolves my concerns from .", "new_text": "On receiving a PATH_CHALLENGE frame, an endpoint MUST respond by echoing the data contained in the PATH_CHALLENGE frame in a PATH_RESPONSE frame. A PATH_RESPONSE frame does not need to be sent on the network path where the PATH_CHALLENGE was received; a PATH_RESPONSE can be sent on any network path. An endpoint MUST NOT delay transmission of a packet containing a PATH_RESPONSE frame unless constrained by congestion control. An endpoint MUST NOT send more than one PATH_RESPONSE frame in response to one PATH_CHALLENGE frame; see retransmission-of-"}
{"id": "q-en-quicwg-base-drafts-459caf625c1a79a8023a4dcb0a300f5869298013aa5d19314217e5af06afa58e", "old_text": "8.5. A new address is considered valid when a PATH_RESPONSE frame is received that contains the data that was sent in a previous PATH_CHALLENGE frame. Receipt of an acknowledgment for a packet containing a PATH_CHALLENGE frame is not adequate validation, since the acknowledgment can be spoofed by a malicious peer. 8.6.", "comments": "The original design for path validation attempted to validate return reachability by requiring that PATHRESPONSE followed the same path as PATHCHALLENGE. We removed that long ago, but the text retained echoes of that choice.\nCurrently, does not specify which path a PATHRESPONSE needs to be sent over. That was somewhat surprising to me, as I had expected it to be scoped to the path that PATHCHALLENGE was received on. If it isn't, could we add a sentence to the Path Validation Responses subsection to explicitly state that a PATH_RESPONSE is not scoped to a given path (and ideally explaining why it isn't)?\nSo I thought we were really crisp about this, but we were not. PATHRESPONSE does not need to follow the path on which PATHCHALLENGE was received. We're clear about this when it comes to preferred address handling, but there is some vestigial text that contradicts this. That should be fixed.\nFor the record, I personally believe that adequately resolves this issue while remaining purely editorial.\nThanks for being so prompt David. I will leave this for a bit to allow others a chance to spot errors, but I plan to merge this before we publish -30 (real soon now, I hope).\nYes, the requirement was removed but the inverse statement was never explicitly added. corrects that.\nThanks for writing this up, tiny wording suggestions, but looking great! Thanks for writing this up, I think this adequately resolves my concerns from .", "new_text": "8.5. 
Path validation succeeds when a PATH_RESPONSE frame is received that contains the data that was sent in a previous PATH_CHALLENGE frame. This validates the path on which the PATH_CHALLENGE was sent. Receipt of an acknowledgment for a packet containing a PATH_CHALLENGE frame is not adequate validation, since the acknowledgment can be spoofed by a malicious peer. 8.6."}
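The challenge/response exchange described across this record reduces to a small sketch: send unpredictable data, require it to be echoed, and accept nothing else as proof. The helper names below are invented for illustration; the 8-byte payload size matches the PATH_CHALLENGE frame definition:

```python
import os

def make_path_challenge() -> bytes:
    """A PATH_CHALLENGE carries an unpredictable 8-byte payload."""
    return os.urandom(8)

def on_path_challenge(data: bytes) -> bytes:
    # The receiver MUST echo the data in a PATH_RESPONSE; the response
    # may be sent on any network path, not only the one it arrived on.
    return data

def path_validated(outstanding: set, response: bytes) -> bool:
    # Validation succeeds only when the echoed data matches a previously
    # sent PATH_CHALLENGE; an ACK of the challenge packet is not enough,
    # since acknowledgments can be spoofed.
    return response in outstanding
```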
{"id": "q-en-quicwg-base-drafts-2807c5d074fad8a5b6581eef7a8a5e324e836fcd562f106278e711d909cb8579", "old_text": "initial timeout (that is, 2*kInitialRtt) as defined in QUIC-RECOVERY is RECOMMENDED. That is: Note that the endpoint might receive packets containing other frames on the new path, but a PATH_RESPONSE frame with appropriate data is required for path validation to succeed.", "comments": "I believe these two cases are both 3*PTO to allow the PTO to expire at least once before declaring path validation failure and idle timeout.\nThanks for your suggestions, I'm quite happy with this now.\nNAME says in : This is reasonable. I don't know which document needs the text though.\nThis looks like it should go into -recovery, since -transport mostly only refers to as a TP, except in Section 13.2.1 (\"sending-acknowledgements\"), but that text is very general.\nI would suggest that this should go where a value of X PTOs is suggested for each specific timer.\nRelabeling, since ended up being against -transport and not -recovery.", "new_text": "initial timeout (that is, 2*kInitialRtt) as defined in QUIC-RECOVERY is RECOMMENDED. That is: This timeout allows for multiple PTOs to expire prior to failing path validation, so that loss of a single PATH_CHALLENGE or PATH_RESPONSE frame does not cause path validation failure. Note that the endpoint might receive packets containing other frames on the new path, but a PATH_RESPONSE frame with appropriate data is required for path validation to succeed."}
{"id": "q-en-quicwg-base-drafts-2807c5d074fad8a5b6581eef7a8a5e324e836fcd562f106278e711d909cb8579", "old_text": "To avoid excessively small idle timeout periods, endpoints MUST increase the idle timeout period to be at least three times the current Probe Timeout (PTO). 10.1.1.", "comments": "I believe these two cases are both 3*PTO to allow the PTO to expire at least once before declaring path validation failure and idle timeout.\nThanks for your suggestions, I'm quite happy with this now.\nNAME says in : This is reasonable. I don't know which document needs the text though.\nThis looks like it should go into -recovery, since -transport mostly only refers to as a TP, except in Section 13.2.1 (\"sending-acknowledgements\"), but that text is very general.\nI would suggest that this should go where a value of X PTOs is suggested for each specific timer.\nRelabeling, since ended up being against -transport and not -recovery.", "new_text": "To avoid excessively small idle timeout periods, endpoints MUST increase the idle timeout period to be at least three times the current Probe Timeout (PTO). This allows for multiple PTOs to expire prior to idle timeout, ensuring the idle timeout does not expire as a result of a single packet loss. 10.1.1."}
{"id": "q-en-quicwg-base-drafts-c67c1a1d030594eee8c642251d1a3163f46abcf8a2acdcc6d8f78c134d024496", "old_text": "sequence 7b bd decodes to 15293; and the single byte 25 decodes to 37 (as does the two byte sequence 40 25). Versions (versions) and error codes (error-codes) are described using integers, but do not use this encoding. 17.", "comments": "Error codes do use varints, but packet numbers (in the header) don't.", "new_text": "sequence 7b bd decodes to 15293; and the single byte 25 decodes to 37 (as does the two byte sequence 40 25). Versions (versions) and packet numbers sent in the header (packet- encoding) are described using integers, but do not use this encoding. 17."}
{"id": "q-en-quicwg-base-drafts-1e35efb7c5395db3bcd3aceb58445341e493e6968ea9e3226410bb90329b2a63", "old_text": "If the packet is from a new encryption level, it is saved for later processing by TLS. Once TLS moves to receiving from this encryption level, saved data can be provided to TLS. When providing data from any new encryption level to TLS, if there is data from a previous encryption level that TLS has not consumed, this MUST be treated as a connection error of type PROTOCOL_VIOLATION. Each time that TLS is provided with new data, new handshake bytes are requested from TLS. TLS might not provide any bytes if the handshake", "comments": "This check really is triggered by TLS providing new keys, not QUIC delivering data on a higher encryption level. Otherwise, in the case where no session ticket is sent, this condition would go undetected for unconsumed data at the Handshake encryption level.", "new_text": "If the packet is from a new encryption level, it is saved for later processing by TLS. Once TLS moves to receiving from this encryption level, saved data can be provided to TLS. When TLS provides keys for a higher encryption level, if there is data from a previous encryption level that TLS has not consumed, this MUST be treated as a connection error of type PROTOCOL_VIOLATION. Each time that TLS is provided with new data, new handshake bytes are requested from TLS. TLS might not provide any bytes if the handshake"}
{"id": "q-en-quicwg-base-drafts-85983188714c6d428dde0666897c6c1623d8c21ec917fa6bdaf2404222bbd10c", "old_text": "expiration, as TCP does with Tail Loss Probes (RACK) and a Retransmission Timeout (RFC5681). The RECOMMENDED value for kPersistentCongestionThreshold is 3, which is approximately equivalent to two TLPs before an RTO in TCP. This design does not use consecutive PTO events to establish persistent congestion, since application patterns impact PTO", "comments": "Section 7.6.1: The RECOMMENDED value for kPersistentCongestionThreshold is 3, which is approximately equivalent to two TLPs before an RTO in TCP. Any values that needs to be explicitly forbidden? For example does it need a sentence saying that it MUST be larger than X? I would think that 1 would be to low and allow slow start after just one RTT+variations? I understand that likely would give the application bad behavior but likely also have negative impact on other applications if the QUIC stack ends up doing slow start crashing into the wall and then slow starting again?\nI lean towards not being overly proscriptive here. We could elaborate on the pros and cons of smaller and larger values if you think that is necessary? Exponential backoff is clearly critical, but it's less clear to me that the CWND really needs to be collapsed in these situations.\nI am really just trying to asses if there are significant risks with any specific ranges of values. For example causing very bursty behavior that could be considered harmful or becoming extremely aggressive after a persistent congestion event, in those cases some text may be necessary.\nBased on , labeling this as \"editorial\"\nThinks this resolve my issue, but would prefer Ian's proposal.", "new_text": "expiration, as TCP does with Tail Loss Probes (RACK) and a Retransmission Timeout (RFC5681). 
Larger values of kPersistentCongestionThreshold cause the sender to become less responsive to persistent congestion in the network, which can result in aggressive sending into a congested network. Too small a value can result in a sender declaring persistent congestion unnecessarily, resulting in reduced throughput for the sender. The RECOMMENDED value for kPersistentCongestionThreshold is 3, which results in behavior that is approximately equivalent to a TCP sender declaring an RTO after two TLPs. This design does not use consecutive PTO events to establish persistent congestion, since application patterns impact PTO"}
{"id": "q-en-quicwg-base-drafts-9eb4af6ef44407f4a60a131277549f3642e22415128dd5216cfadd56a5a933bd", "old_text": "The total length of time over which consecutive PTOs expire is limited by the idle timeout. The probe timer MUST NOT be set if the time threshold (time- threshold) loss detection timer is set. The time threshold loss detection timer is expected to both expire earlier than the PTO and be less likely to spuriously retransmit data. 6.2.2.", "comments": "That term doesn't appear above, so don't use it in the PTO section.\nThanks NAME I took your suggestions with one tweak.\nSection 6.2.1: \"The probe timer MUST NOT be set if the time threshold (Section 6.1.2) loss detection timer is set. The time threshold loss detection timer is expected to both expire earlier than the PTO and be less likely to spuriously retransmit data.\" What does \"MUST NOT be set\" means. The time threshold loss detection 6.1.2 only occurs if there is acknowledgement received by the sender sufficient late after the transmission. Thus, PTO timer needs to always be set. because even if Time Threshold timer fires it might not declare a loss.\nIt means if the time threshold timer is armed, you are not supposed to arm the PTO timer. If the Time Threshold timer fires and doesn't declare a loss, then I would consider that a bug.\nSo the above text about loss timer is only when one are under the condition created by this Section 6.1.2 sentence? \"If packets sent prior to the largest acknowledged packet cannot yet be declared lost, then a timer SHOULD be set for the remaining time.\" Does the formulation in 6.1.2 need to be improved, possibly naming the timer?\nI can see the confusion. This can either be a single timer or multiple, and sometimes we refer to it one way and sometimes as separately named timers, ie \"The time threshold loss detection timer\" We can either be more consistent about naming the timers or stop naming the timers entirely. 
Would you prefer the former?\nFrankly I don't know which direction that is best. Could I ask you to take an editorial stab on your most preferred. I would guess when you sit down to do either you realize if it will work or not when you harmonize it.\nNAME Does PR resolve this for you?\nNAME we'd need a response, please\nYes, that simple tweak do make it clearer. So resolved.\nThe second suggestion is a little risky, but avoids a slightly awkward construction.Editorial suggestion. I'm fine with merging this, but NAME should probably see if it addresses his issue.", "new_text": "The total length of time over which consecutive PTOs expire is limited by the idle timeout. The PTO timer MUST NOT be set if a timer is set for time threshold loss detection; see time-threshold. A timer that is set for time threshold loss detection will expire earlier than the PTO timer in most cases and is less likely to spuriously retransmit data. 6.2.2."}
{"id": "q-en-quicwg-base-drafts-c39799dff19a80c06e7138b69f88fd6eea278639e13a9b271a1591c7ba6a02cc", "old_text": "certain that it has not processed another packet with the same packet number from the same packet number space. Duplicate suppression MUST happen after removing packet protection for the reasons described in Section 9.3 of QUIC-TLS. Endpoints that track all individual packets for the purposes of detecting duplicates are at risk of accumulating excessive state.", "comments": "There are three places which refer to \"Section 9.3 of QUIC-TLS\". I think they should refer to \"Section 9.5 of QUIC-TLS\".\nThanks!", "new_text": "certain that it has not processed another packet with the same packet number from the same packet number space. Duplicate suppression MUST happen after removing packet protection for the reasons described in Section 9.5 of QUIC-TLS. Endpoints that track all individual packets for the purposes of detecting duplicates are at risk of accumulating excessive state."}
{"id": "q-en-quicwg-base-drafts-c39799dff19a80c06e7138b69f88fd6eea278639e13a9b271a1591c7ba6a02cc", "old_text": "both packet and header protection as a connection error of type PROTOCOL_VIOLATION. Discarding such a packet after only removing header protection can expose the endpoint to attacks; see Section 9.3 of QUIC-TLS. In packet types that contain a Packet Number field, the least significant two bits (those with a mask of 0x03) of byte 0 contain", "comments": "There are three places which refer to \"Section 9.3 of QUIC-TLS\". I think they should refer to \"Section 9.5 of QUIC-TLS\".\nThanks!", "new_text": "both packet and header protection as a connection error of type PROTOCOL_VIOLATION. Discarding such a packet after only removing header protection can expose the endpoint to attacks; see Section 9.5 of QUIC-TLS. In packet types that contain a Packet Number field, the least significant two bits (those with a mask of 0x03) of byte 0 contain"}
{"id": "q-en-quicwg-base-drafts-c39799dff19a80c06e7138b69f88fd6eea278639e13a9b271a1591c7ba6a02cc", "old_text": "and header protection, as a connection error of type PROTOCOL_VIOLATION. Discarding such a packet after only removing header protection can expose the endpoint to attacks; see Section 9.3 of QUIC-TLS. The next bit (0x04) of byte 0 indicates the key phase, which allows a recipient of a packet to identify the packet protection", "comments": "There are three places which refer to \"Section 9.3 of QUIC-TLS\". I think they should refer to \"Section 9.5 of QUIC-TLS\".\nThanks!", "new_text": "and header protection, as a connection error of type PROTOCOL_VIOLATION. Discarding such a packet after only removing header protection can expose the endpoint to attacks; see Section 9.5 of QUIC-TLS. The next bit (0x04) of byte 0 indicates the key phase, which allows a recipient of a packet to identify the packet protection"}
{"id": "q-en-quicwg-base-drafts-64c75411746ef94ecf99a39eeb8ac2c31248065d87565ef5889fb7cc0e564c88", "old_text": "has received this parameter SHOULD NOT send an HTTP message header that exceeds the indicated size, as the peer will likely refuse to process it. However, an HTTP message can traverse one or more intermediaries (see Section 3.7 of SEMANTICS) before reaching the origin server. Because this limit is applied separately by each implementation which processes the message, messages below this limit are not guaranteed to be accepted.", "comments": "Fixes several nits from the gen-art review, which basically all come down to defining terms appropriately and/or referencing their definitions. modifies definition of \"stream\" and \"stream error\"; pulls in more terms from SEMANTICS references SEMANTICS for definition of fields rewords to avoid the term \"hop\" and describes intermediation more explicitly rewords to avoid saying \"not possible\" and copies language from SEMANTICS advising against retries updates reference to definition of CONNECT in SEMANTICS. NAME your review would be appreciated.\nIn the HTTP genart review, NAME writes:\nYes, CONNECT only supports TCP, and there's no short-term plan to change that. There is separate relevant work such as but I don't think we need to change anything in the draft here - it pretty clearly mentions TCP so I don't see any ambiguity.\nThis question should be addressed by the definition of CONNECT the spec is referring to. It's worth double-checking the references in that section, though; it points to HTTP11, when I think there should be a definition in SEMANTICS.\nIn the HTTP genart review, NAME writes:\nPresumably a reference into SEMANTICS; I'll look for the right section to point to.\nIn the HTTP genart review, NAME writes:\nIn the HTTP genart review, NAME writes:\nIn the HTTP genart review, NAME writes: \"HTTP/3 stream\". Perhaps it's worth defining \"HTTP/3 stream\" explicitly, in analogy to \"HTTP/3 connection\". 
Perhaps it's worth defining \"message\", too (this is only an HTTP concept, correct?). The document should make sure it clearly distinguishes terms and concept of QUIC vs. of HTTP/3 in all cases. In particular, what is the relationship between HTTP/3 frames and QUIC frames? It's important to make their difference and relationship explicit, as the document uses several other terms interchangeably between HTTP/3 and QUIC, like \"connection\" and perhaps \"stream\".\nA few minor suggestions, but this is a good change.This looks good to me, thanks!", "new_text": "has received this parameter SHOULD NOT send an HTTP message header that exceeds the indicated size, as the peer will likely refuse to process it. However, an HTTP message can traverse one or more intermediaries before reaching the origin server; see Section 3.7 of SEMANTICS. Because this limit is applied separately by each implementation which processes the message, messages below this limit are not guaranteed to be accepted."}
{"id": "q-en-quicwg-base-drafts-64c75411746ef94ecf99a39eeb8ac2c31248065d87565ef5889fb7cc0e564c88", "old_text": "If a stream is canceled after receiving a complete response, the client MAY ignore the cancellation and use the response. However, if a stream is cancelled after receiving a partial response, the response SHOULD NOT be used. Only idempotent actions (such as GET, PUT, or DELETE) can be safely retried; a client SHOULD NOT automatically retry a request with a non-idempotent method unless it has some means to know that the request semantics are actually idempotent, regardless of the method, or some means to detect that the original request was never applied. See Section 8.2.2 of SEMANTICS for more details. 4.1.3.", "comments": "Fixes several nits from the gen-art review, which basically all come down to defining terms appropriately and/or referencing their definitions. modifies definition of \"stream\" and \"stream error\"; pulls in more terms from SEMANTICS references SEMANTICS for definition of fields rewords to avoid the term \"hop\" and describes intermediation more explicitly rewords to avoid saying \"not possible\" and copies language from SEMANTICS advising against retries updates reference to definition of CONNECT in SEMANTICS. NAME your review would be appreciated.\nIn the HTTP genart review, NAME writes:\nYes, CONNECT only supports TCP, and there's no short-term plan to change that. There is separate relevant work such as but I don't think we need to change anything in the draft here - it pretty clearly mentions TCP so I don't see any ambiguity.\nThis question should be addressed by the definition of CONNECT the spec is referring to. 
It's worth double-checking the references in that section, though; it points to HTTP11, when I think there should be a definition in SEMANTICS.\nIn the HTTP genart review, NAME writes:\nPresumably a reference into SEMANTICS; I'll look for the right section to point to.\nIn the HTTP genart review, NAME writes:\nIn the HTTP genart review, NAME writes:\nIn the HTTP genart review, NAME writes: \"HTTP/3 stream\". Perhaps it's worth defining \"HTTP/3 stream\" explicitly, in analogy to \"HTTP/3 connection\". Perhaps it's worth defining \"message\", too (this is only an HTTP concept, correct?). The document should make sure it clearly distinguishes terms and concept of QUIC vs. of HTTP/3 in all cases. In particular, what is the relationship between HTTP/3 frames and QUIC frames? It's important to make their difference and relationship explicit, as the document uses several other terms interchangeably between HTTP/3 and QUIC, like \"connection\" and perhaps \"stream\".\nA few minor suggestions, but this is a good change.This looks good to me, thanks!", "new_text": "If a stream is canceled after receiving a complete response, the client MAY ignore the cancellation and use the response. However, if a stream is cancelled after receiving a partial response, the response SHOULD NOT be used. Only idempotent actions such as GET, PUT, or DELETE can be safely retried; a client SHOULD NOT automatically retry a request with a non-idempotent method unless it has some means to know that the request semantics are idempotent independent of the method or some means to detect that the original request was never applied. See Section 8.2.2 of SEMANTICS for more details. 4.1.3."}
{"id": "q-en-quicwg-base-drafts-f40396f7f94f784df94465c1b59425ab80c16f02b963df82cf7d1c38cfc6846d", "old_text": "negotiating-connection-ids describes the use of this field in more detail. In this version of QUIC, the following packet types with the long header are defined:", "comments": "Minor nit while looking at ; the definition of generic \"Long-Header Packets\" won't actually match any real packet types, because it ends after the CIDs. This modifies the definition to add a final \"Type-Specific Payload\" section to the figure.\nWhile the transport draft talks a lot about \"1-RTT packets\", the term is not defined. We also have a sentence that seem to imply that a long-header variant of 1-RTT packets might exist (URL). We refer to the packet less frequently as \"short header packets\" or \"packet with a short header.\" As a resolution, I think we might want to change the title of section 17.3 from \"Short Header Packets\" to \"1-RTT Packets\". Rest of the text can remain the same, as the first sentence of that section says that it is the only type of short header packet that we define.", "new_text": "negotiating-connection-ids describes the use of this field in more detail. The remainder of the packet, if any, is type-specific. In this version of QUIC, the following packet types with the long header are defined:"}
{"id": "q-en-quicwg-base-drafts-d003cb3a76d40d0589874ffeaac5f316118c169f7d156ee7b8641567341d0618", "old_text": "distrustful entities control requests or responses that are placed onto a single HTTP/3 connection. If the shared QPACK compressor permits one entity to add entries to the dynamic table, and the other to access those entries, then the state of the table can be learned. Having requests or responses from mutually distrustful entities occurs when an intermediary either:", "comments": "I think this helps?\nMagnus Nystrom : scenario of \"mutual distrustful parties\" is unclear to me. One stated scenario is where multiple client connections aggregate at an intermediary that then maintains a single connection to an origin server. Another stated scenario is where multiple origin servers connect to an intermediary which then serves a client. In these scenarios, is the concern that the intermediary may control either a client or an origin server and thus would be able to leverage the compression context at, say, a second client and then observe (as the intermediary) the result of a guess for a field tuple? It may be helpful to explain this in a little more detail. as to what strategy (option) implementations should choose to protect against this attack. It seems like the Never-Indexed-Literal is a good candidate which should be easy to implement.\nThis text was mostly lifted from RFC 7541. intermediary may control either a client or an origin server and thus would be able to leverage the compression context at, say, a second client and then observe (as the intermediary) the result of a guess for a field tuple? Yes, this is the concern. It's interesting that 'intermediary' here also refers to a browser that coalesces requests from different tabs onto the same connection. I agree. as to what strategy (option) implementations should choose to protect against this attack. 
I think the document recommends the tagging option unambiguously: \"An ideal solution segregates access to the dynamic table based on the entity that is constructing the message\" While never-index literals is the less complicated fallback option.\nNAME so are you proposing any changes to the text?", "new_text": "distrustful entities control requests or responses that are placed onto a single HTTP/3 connection. If the shared QPACK compressor permits one entity to add entries to the dynamic table, and the other to access those entries to encode chosen field lines, then the attacker can learn the state of the table by observing the length of the encoded output. Having requests or responses from mutually distrustful entities occurs when an intermediary either:"}
{"id": "q-en-quicwg-base-drafts-52cb57ba3d1394e138b77f1dc2bdaa26661ec85340d2d181e298b7744bc96232", "old_text": "1. QUIC is a new multiplexed and secure transport protocol atop UDP, specified in QUIC-TRANSPORT. This document describes congestion control and loss recovery for QUIC. Mechanisms described in this document follow the spirit of existing TCP congestion control and loss recovery mechanisms, described in RFCs, various Internet-drafts, or academic papers, and also those prevalent in TCP implementations. 2.", "comments": "In the Recovery genart review, Vijay Gurbani wrote:\nBased on the proposed PR, I'm marking this as editorial.\nIn the Recovery genart review, Vijay Gurbani wrote:\nBased on the proposed PR, I'm marking this as editorial.", "new_text": "1. QUIC is a secure general-purpose transport protocol, described in QUIC-TRANSPORT). This document describes loss detection and congestion control mechanisms for QUIC. 2."}
{"id": "q-en-quicwg-base-drafts-52cb57ba3d1394e138b77f1dc2bdaa26661ec85340d2d181e298b7744bc96232", "old_text": "4.2. TCP conflates transmission order at the sender with delivery order at the receiver, which results in retransmissions of the same data carrying the same sequence number, and consequently leads to \"retransmission ambiguity\". QUIC separates the two. QUIC uses a packet number to indicate transmission order. Application data is sent in one or more streams and delivery order is determined by stream offsets encoded within STREAM frames. QUIC's packet number is strictly increasing within a packet number space, and directly encodes transmission order. A higher packet", "comments": "In the Recovery genart review, Vijay Gurbani wrote:\nBased on the proposed PR, I'm marking this as editorial.\nIn the Recovery genart review, Vijay Gurbani wrote:\nBased on the proposed PR, I'm marking this as editorial.", "new_text": "4.2. TCP conflates transmission order at the sender with delivery order at the receiver, resulting in the retransmission ambiguity problem (RETRANSMISSION). QUIC separates transmission order from delivery order: packet numbers indicate transmission order, and delivery order is determined by the stream offsets in STREAM frames. QUIC's packet number is strictly increasing within a packet number space, and directly encodes transmission order. A higher packet"}
{"id": "q-en-quicwg-base-drafts-915dee9b837ebdffba501a439c6046901eadca77d30ef6bf86ac2a0dce895b14", "old_text": "The initial contents of this registry are shown in iana-tp-table. Additionally, each value of the format \"31 * N + 27\" for integer values of N (that is, 27, 58, 89, ...) are reserved and MUST NOT be assigned by IANA. 22.3.", "comments": "IANA's review wanted to know whether the 148 quadrillion GREASE values should be listed on the registry page as Reserved, and what note should be present if not. They should not, lest it be challenging to actually download the registry page within the lifetime of the human race. This rewords the restriction and clarifies that the restriction should be the note.\nNAME Can you add the same text to 22.2 (Transport parameter registry) in the transport doc as well?\nNAME said\nWe believe the note, updated based on IANA feedback during Last Call, is sufficient. Reserving 148 quadrillion values in the registry would cause the registry to be impossible to download in the lifetime of humanity; see . The purpose is described in , to exercise the requirement to ignore unknown frame types (a.k.a. GREASE). Proposal is to make no change; this was already covered during IANA review.\nIf IANA has agreed, then I am fine with it. BTW, I was not asking to make those zillions of reservations but leaving a note in the registry ;-)\nIt's a fair question but the WG and the responsible AD are satisfied with our IANA considerations at this stage, closing with no action.\nPlease see the for full context. This issue is tracking the following IANA question:\nCopying my e-mail reply here: unknown values; having an explicit list encourages an explicitly defined reaction. This also applies to and , where the latter has better wording.\nNAME replied to IANA . Closing this issue.\nActually, NAME said in response to the e-mail that this guidance should be in the document: Let's re-open to track. 
I don't think all instances of this question need separate issues, though.\nSo the related questions were: Should the GREASE values be listed in the registry as Reserved? No, because some rough estimation suggests it would take a few millennia to download the registry page if this were the case. Should there be a note indicating which values cannot be assigned? I'm of two minds, because while I really don't want people embedding the formula into their parsers, I do want people embedding the formula into their generators. says no Reserved listing, yes note.\nI think this is proposal ready. Forwarding Mike's response to the list, to there is an archive to link to. Lars >\n", "new_text": "The initial contents of this registry are shown in iana-tp-table. Each value of the format \"31 * N + 27\" for integer values of N (that is, 27, 58, 89, ...) are reserved; these values MUST NOT be assigned by IANA and MUST NOT appear in the listing of assigned values. 22.3."}
{"id": "q-en-quicwg-base-drafts-915dee9b837ebdffba501a439c6046901eadca77d30ef6bf86ac2a0dce895b14", "old_text": "The entries in iana-frame-table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for non- negative integer values of N (that is, 0x21, 0x40, ..., through 0x3FFFFFFFFFFFFFFE) MUST NOT be assigned by IANA. 11.2.2.", "comments": "IANA's review wanted to know whether the 148 quadrillion GREASE values should be listed on the registry page as Reserved, and what note should be present if not. They should not, lest it be challenging to actually download the registry page within the lifetime of the human race. This rewords the restriction and clarifies that the restriction should be the note.\nNAME Can you add the same text to 22.2 (Transport parameter registry) in the transport doc as well?\nNAME said\nWe believe the note, updated based on IANA feedback during Last Call, is sufficient. Reserving 148 quadrillion values in the registry would cause the registry to be impossible to download in the lifetime of humanity; see . The purpose is described in , to exercise the requirement to ignore unknown frame types (a.k.a. GREASE). Proposal is to make no change; this was already covered during IANA review.\nIf IANA has agreed, then I am fine with it. BTW, I was not asking to make those zillions of reservations but leaving a note in the registry ;-)\nIt's a fair question but the WG and the responsible AD are satisfied with our IANA considerations at this stage, closing with no action.\nPlease see the for full context. This issue is tracking the following IANA question:\nCopying my e-mail reply here: unknown values; having an explicit list encourages an explicitly defined reaction. This also applies to and , where the latter has better wording.\nNAME replied to IANA . Closing this issue.\nActually, NAME said in response to the e-mail that this guidance should be in the document: Let's re-open to track. 
I don't think all instances of this question need separate issues, though.\nSo the related questions were: Should the GREASE values be listed in the registry as Reserved? No, because some rough estimation suggests it would take a few millennia to download the registry page if this were the case. Should there be a note indicating which values cannot be assigned? I'm of two minds, because while I really don't want people embedding the formula into their parsers, I do want people embedding the formula into their generators. says no Reserved listing, yes note.\nI think this is proposal ready. Forwarding Mike's response to the list, to there is an archive to link to. Lars >\n", "new_text": "The entries in iana-frame-table are registered by this document. Each code of the format \"0x1f * N + 0x21\" for non-negative integer values of N (that is, 0x21, 0x40, ..., through 0x3FFFFFFFFFFFFFFE) MUST NOT be assigned by IANA and MUST NOT appear in the listing of assigned values. 11.2.2."}
{"id": "q-en-quicwg-base-drafts-915dee9b837ebdffba501a439c6046901eadca77d30ef6bf86ac2a0dce895b14", "old_text": "The entries in iana-setting-table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for non- negative integer values of N (that is, 0x21, 0x40, ..., through 0x3ffffffffffffffe) MUST NOT be assigned by IANA. 11.2.3.", "comments": "IANA's review wanted to know whether the 148 quadrillion GREASE values should be listed on the registry page as Reserved, and what note should be present if not. They should not, lest it be challenging to actually download the registry page within the lifetime of the human race. This rewords the restriction and clarifies that the restriction should be the note.\nNAME Can you add the same text to 22.2 (Transport parameter registry) in the transport doc as well?\nNAME said\nWe believe the note, updated based on IANA feedback during Last Call, is sufficient. Reserving 148 quadrillion values in the registry would cause the registry to be impossible to download in the lifetime of humanity; see . The purpose is described in , to exercise the requirement to ignore unknown frame types (a.k.a. GREASE). Proposal is to make no change; this was already covered during IANA review.\nIf IANA has agreed, then I am fine with it. BTW, I was not asking to make those zillions of reservations but leaving a note in the registry ;-)\nIt's a fair question but the WG and the responsible AD are satisfied with our IANA considerations at this stage, closing with no action.\nPlease see the for full context. This issue is tracking the following IANA question:\nCopying my e-mail reply here: unknown values; having an explicit list encourages an explicitly defined reaction. This also applies to and , where the latter has better wording.\nNAME replied to IANA . Closing this issue.\nActually, NAME said in response to the e-mail that this guidance should be in the document: Let's re-open to track. 
I don't think all instances of this question need separate issues, though.\nSo the related questions were: Should the GREASE values be listed in the registry as Reserved? No, because some rough estimation suggests it would take a few millennia to download the registry page if this were the case. Should there be a note indicating which values cannot be assigned? I'm of two minds, because while I really don't want people embedding the formula into their parsers, I do want people embedding the formula into their generators. says no Reserved listing, yes note.\nI think this is proposal ready. Forwarding Mike's response to the list, so there is an archive to link to. Lars >\n", "new_text": "The entries in iana-setting-table are registered by this document. Each code of the format \"0x1f * N + 0x21\" for non-negative integer values of N (that is, 0x21, 0x40, ..., through 0x3ffffffffffffffe) MUST NOT be assigned by IANA and MUST NOT appear in the listing of assigned values. 11.2.3."}
{"id": "q-en-quicwg-base-drafts-915dee9b837ebdffba501a439c6046901eadca77d30ef6bf86ac2a0dce895b14", "old_text": "Specification Required policy to avoid collisions with HTTP/2 error codes. Additionally, each code of the format \"0x1f * N + 0x21\" for non- negative integer values of N (that is, 0x21, 0x40, ..., through 0x3ffffffffffffffe) MUST NOT be assigned by IANA. 11.2.4.", "comments": "IANA's review wanted to know whether the 148 quadrillion GREASE values should be listed on the registry page as Reserved, and what note should be present if not. They should not, lest it be challenging to actually download the registry page within the lifetime of the human race. This rewords the restriction and clarifies that the restriction should be the note.\nNAME Can you add the same text to 22.2 (Transport parameter registry) in the transport doc as well?\nNAME said\nWe believe the note, updated based on IANA feedback during Last Call, is sufficient. Reserving 148 quadrillion values in the registry would cause the registry to be impossible to download in the lifetime of humanity; see . The purpose is described in , to exercise the requirement to ignore unknown frame types (a.k.a. GREASE). Proposal is to make no change; this was already covered during IANA review.\nIf IANA has agreed, then I am fine with it. BTW, I was not asking to make those zillions of reservations but leaving a note in the registry ;-)\nIt's a fair question but the WG and the responsible AD are satisfied with our IANA considerations at this stage, closing with no action.\nPlease see the for full context. This issue is tracking the following IANA question:\nCopying my e-mail reply here: unknown values; having an explicit list encourages an explicitly defined reaction. This also applies to and , where the latter has better wording.\nNAME replied to IANA . Closing this issue.\nActually, NAME said in response to the e-mail that this guidance should be in the document: Let's re-open to track. 
I don't think all instances of this question need separate issues, though.\nSo the related questions were: Should the GREASE values be listed in the registry as Reserved? No, because some rough estimation suggests it would take a few millennia to download the registry page if this were the case. Should there be a note indicating which values cannot be assigned? I'm of two minds, because while I really don't want people embedding the formula into their parsers, I do want people embedding the formula into their generators. says no Reserved listing, yes note.\nI think this is proposal ready. Forwarding Mike's response to the list, so there is an archive to link to. Lars >\n", "new_text": "Specification Required policy to avoid collisions with HTTP/2 error codes. Each code of the format \"0x1f * N + 0x21\" for non-negative integer values of N (that is, 0x21, 0x40, ..., through 0x3ffffffffffffffe) MUST NOT be assigned by IANA and MUST NOT appear in the listing of assigned values. 11.2.4."}
{"id": "q-en-quicwg-base-drafts-915dee9b837ebdffba501a439c6046901eadca77d30ef6bf86ac2a0dce895b14", "old_text": "The entries in the following table are registered by this document. Additionally, each code of the format \"0x1f * N + 0x21\" for non- negative integer values of N (that is, 0x21, 0x40, ..., through 0x3ffffffffffffffe) MUST NOT be assigned by IANA. ", "comments": "IANA's review wanted to know whether the 148 quadrillion GREASE values should be listed on the registry page as Reserved, and what note should be present if not. They should not, lest it be challenging to actually download the registry page within the lifetime of the human race. This rewords the restriction and clarifies that the restriction should be the note.\nNAME Can you add the same text to 22.2 (Transport parameter registry) in the transport doc as well?\nNAME said\nWe believe the note, updated based on IANA feedback during Last Call, is sufficient. Reserving 148 quadrillion values in the registry would cause the registry to be impossible to download in the lifetime of humanity; see . The purpose is described in , to exercise the requirement to ignore unknown frame types (a.k.a. GREASE). Proposal is to make no change; this was already covered during IANA review.\nIf IANA has agreed, then I am fine with it. BTW, I was not asking to make those zillions of reservations but leaving a note in the registry ;-)\nIt's a fair question but the WG and the responsible AD are satisfied with our IANA considerations at this stage, closing with no action.\nPlease see the for full context. This issue is tracking the following IANA question:\nCopying my e-mail reply here: unknown values; having an explicit list encourages an explicitly defined reaction. This also applies to and , where the latter has better wording.\nNAME replied to IANA . Closing this issue.\nActually, NAME said in response to the e-mail that this guidance should be in the document: Let's re-open to track. 
I don't think all instances of this question need separate issues, though.\nSo the related questions were: Should the GREASE values be listed in the registry as Reserved? No, because some rough estimation suggests it would take a few millennia to download the registry page if this were the case. Should there be a note indicating which values cannot be assigned? I'm of two minds, because while I really don't want people embedding the formula into their parsers, I do want people embedding the formula into their generators. says no Reserved listing, yes note.\nI think this is proposal ready. Forwarding Mike's response to the list, so there is an archive to link to. Lars >\n", "new_text": "The entries in the following table are registered by this document. Each code of the format \"0x1f * N + 0x21\" for non-negative integer values of N (that is, 0x21, 0x40, ..., through 0x3ffffffffffffffe) MUST NOT be assigned by IANA and MUST NOT appear in the listing of assigned values. "}
{"id": "q-en-quicwg-base-drafts-327e893912365b7552643d49950c2cc239fabc173259e0a5f0257815720184ab", "old_text": "QUIC carries TLS handshake data in CRYPTO frames, each of which consists of a contiguous block of handshake data identified by an offset and length. Those frames are packaged into QUIC packets and encrypted under the current TLS encryption level. As with TLS over TCP, once TLS handshake data has been delivered to QUIC, it is QUIC's responsibility to deliver it reliably. Each chunk of data that is produced by TLS is associated with the set of keys that TLS is currently using. If QUIC needs to retransmit that data, it MUST use the same keys even if TLS has already updated to newer keys. One important difference between TLS records (used with TCP) and QUIC CRYPTO frames is that in QUIC multiple frames may appear in the same QUIC packet as long as they are associated with the same packet number space. For instance, an endpoint can bundle a Handshake message and an ACK for some Handshake data into the same packet. Some frames are prohibited in different packet number spaces; see Section 12.5 of QUIC-TRANSPORT.", "comments": "The first strikes \"TLS\" from encryption level The second removes misleading text about coalescing I'm suggesting that we replace it with explanatory language about encryption levels and packet number spaces. This context is now missing as a result of moving the referenced text to -transport.\nRoman Danyliw said:\nI'm going to suggest that the text I propose in addresses this first part of this. The higher/lower stuff seems like it could be better, but it's a lot of work to do carefully. 
I'm going to suggest that we could live with this, but I might leave this issue open and try to find a fix once other more pressing issues are dealt with.\nNAME said:\nThis is really two comments, but I think we can dispense with both at the same time as the first is really trivial: The use of \"TLS encryption level\" is an unnecessary redundancy and inconsistent with other usage, so we can easily remove \"TLS\" there. The start of that paragraph is cruft. I'm going to suggest we only keep the last sentence, which was unquoted in this issue. But then I see that we are missing important context, so I'm going to suggest a little bit of text to replace it.\none suggestion, but lgtm", "new_text": "QUIC carries TLS handshake data in CRYPTO frames, each of which consists of a contiguous block of handshake data identified by an offset and length. Those frames are packaged into QUIC packets and encrypted under the current encryption level. As with TLS over TCP, once TLS handshake data has been delivered to QUIC, it is QUIC's responsibility to deliver it reliably. Each chunk of data that is produced by TLS is associated with the set of keys that TLS is currently using. If QUIC needs to retransmit that data, it MUST use the same keys even if TLS has already updated to newer keys. Each encryption level corresponds to a packet number space. The packet number space that is used determines the semantics of frames. Some frames are prohibited in different packet number spaces; see Section 12.5 of QUIC-TRANSPORT."}
{"id": "q-en-quicwg-base-drafts-23b9327700c0cf4a1f99b846ab536e7b174e0140357aac8da4e5fb2b70ef3212", "old_text": "exchange-summary shows the multiple packets that form a single \"flight\" of messages being processed individually, to show what incoming messages trigger different actions. New handshake messages are requested after incoming packets have been processed. This process varies based on the structure of endpoint implementations and the order in which packets arrive; this is intended to illustrate the steps involved in a single handshake exchange. 4.2.", "comments": "Ben observed that this might be at odds with the definition of the interface. This is true, but the illustrative value of the current structure is valuable. More text on that might help.\nNAME said:\nI think that the multiple invocations is enough to call out. However, the retrieval of bytes to send after processing some data is not necessary. For reference, the is: This does not necessarily imply that you pull after every byte provided. As the bytes here are provided in a coalesced packet, it is reasonable to defer the \"Get Handshake\" call until all of the bytes arrive. I know that our stack does, though this is not uniformly true. to Handshake Complete? YES! Thanks. That's going to be awkward, as it involves QUIC-layer messages, but I think we can do that. (In a separate PR though.)", "new_text": "exchange-summary shows the multiple packets that form a single \"flight\" of messages being processed individually, to show what incoming messages trigger different actions. This shows multiple \"Get Handshake\" invocations to retrieve handshake messages at different encryption levels. New handshake messages are requested after incoming packets have been processed. exchange-summary shows one possible structure for a simple handshake exchange. The exact process varies based on the structure of endpoint implementations and the order in which packets arrive. 
Implementations could use a different number of operations or execute them in other orders. 4.2."}
{"id": "q-en-quicwg-base-drafts-ac84611302b620a4ad3f8e03b8ac06323d6630efb77e86a34e72ee6f86218452", "old_text": "ClientHello message in their first Initial packet. The TLS implementation does not need to ensure that the ClientHello is sufficiently large. QUIC PADDING frames are added to increase the size of the packet as necessary. 4.4.", "comments": "NAME I'll leave that in your hands.\nNAME said:\nSeems reasonable. I took the opportunity to reword a little.\nPerhaps this PR should also fix the text in 8.1 to simply state and point to 14.1 in -transport. That text is redundant.", "new_text": "ClientHello message in their first Initial packet. The TLS implementation does not need to ensure that the ClientHello is large enough to meet the requirements for QUIC packets. QUIC PADDING frames are added to increase the size of the packet as necessary; see Section 14.1 of QUIC-TRANSPORT. 4.4."}
{"id": "q-en-quicwg-base-drafts-9538c938070fd8e80c46f3898e52450ab5474c4129b5b5550899a8cec7c3f691", "old_text": "that supports this extension if the extension is received when the transport is not QUIC. 8.3. The TLS EndOfEarlyData message is not used with QUIC. QUIC does not", "comments": "As agreed via mail.\nNAME said:\nResponse provided via email.\nThis is probably something we need a consensus call on, even though it's a change only in theory.\nReturning to an open state as the chairs directed in their plan.\nClosing this now that the IESG have approved the document(s).\nThis gets the job done, with or without the suggested rewording.", "new_text": "that supports this extension if the extension is received when the transport is not QUIC. Negotiating the quic_transport_parameters extension causes the EndOfEarlyData to be removed; see remove-eoed. 8.3. The TLS EndOfEarlyData message is not used with QUIC. QUIC does not"}
{"id": "q-en-quicwg-base-drafts-cef7d79a8ac8606d5a3f8290687cbfe3fcb76708081313402bf3d56b26573eac", "old_text": "4. Packet diagrams in this document use a format defined in QUIC- TRANSPORT to illustrate the order and size of fields. Complex fields are named and then followed by a list of fields surrounded by a pair of matching braces. Each field in this list is", "comments": "This makes it clearer that the notation is copied from v1, not defined there and normatively referenced. (Note, figures aren't normative anyway.)\n\u00c9ric Vyncke said:\nIn my response to , I explain this. This ensures that the transport reference can remain informative.\nPerhaps the intro needs to be reworded to indicate that it's using some of the same conventions as the transport document, which are reproduced below?\n\u00c9ric Vyncke said:\nBarry's point (captured in ) was that one or other might need to be normative. This is a different point, suggesting that this document concretely depends on the transport draft for terminology that has normative effect. Haven't checked, I don't think that is the case and the reference can remain informative. The reason for that being that all of the terminology used in this draft is also defined in this draft.\nAny other bids?", "new_text": "4. The format of packets is described using the notation defined in this section. This notation is the same as that used in QUIC-TRANSPORT. Complex fields are named and then followed by a list of fields surrounded by a pair of matching braces. Each field in this list is"}
{"id": "q-en-quicwg-base-drafts-e2e96f824d329c4be243d563ac5c682d4f8c536b5822fc185e614b0299fa27ff", "old_text": "defined in Section 6 of TLS13. A TLS alert is converted into a QUIC connection error. The AlertDescription value is added to 0x100 to produce a QUIC error code from the range reserved for CRYPTO_ERROR. The resulting value is sent in a QUIC CONNECTION_CLOSE frame of type 0x1c. QUIC is only able to convey an alert level of \"fatal\". In TLS 1.3, the only existing uses for the \"warning\" level are to signal", "comments": "One possible resolution to ; all literals are zero-padded to full bytes.\nIANA made the following observation while creating the H3 registries: While the IANA copy of the registry uses two digits throughout, the QUIC draft is similarly inconsistent about whether literal hex values have any leading zeros. Is it worth updating our hex literals to use full bytes (, , etc.) or synchonizing on some other format? We probably shouldn't synchronize on trimming, since IANA has already skewed the other way.\nNAME did fix this or do we need to inform IANA also?\nWe should probably shoot IANA an e-mail to update the registries with leading zeroes where they hadn't already assumed them.\nI'll let you do that. The only registries that are affected are in HTTP/3 and QPACK.\nFixed both in the registry and the documents.\nYeah, this is easy to understand. The tables must have been annoying.", "new_text": "defined in Section 6 of TLS13. A TLS alert is converted into a QUIC connection error. The AlertDescription value is added to 0x0100 to produce a QUIC error code from the range reserved for CRYPTO_ERROR. The resulting value is sent in a QUIC CONNECTION_CLOSE frame of type 0x1c. QUIC is only able to convey an alert level of \"fatal\". In TLS 1.3, the only existing uses for the \"warning\" level are to signal"}
{"id": "q-en-quicwg-base-drafts-e2e96f824d329c4be243d563ac5c682d4f8c536b5822fc185e614b0299fa27ff", "old_text": "QUIC permits the use of a generic code in place of a specific error code; see Section 11 of QUIC-TRANSPORT. For TLS alerts, this includes replacing any alert with a generic alert, such as handshake_failure (0x128 in QUIC). Endpoints MAY use a generic error code to avoid possibly exposing confidential information. 4.9.", "comments": "One possible resolution to ; all literals are zero-padded to full bytes.\nIANA made the following observation while creating the H3 registries: While the IANA copy of the registry uses two digits throughout, the QUIC draft is similarly inconsistent about whether literal hex values have any leading zeros. Is it worth updating our hex literals to use full bytes (, , etc.) or synchonizing on some other format? We probably shouldn't synchronize on trimming, since IANA has already skewed the other way.\nNAME did fix this or do we need to inform IANA also?\nWe should probably shoot IANA an e-mail to update the registries with leading zeroes where they hadn't already assumed them.\nI'll let you do that. The only registries that are affected are in HTTP/3 and QPACK.\nFixed both in the registry and the documents.\nYeah, this is easy to understand. The tables must have been annoying.", "new_text": "QUIC permits the use of a generic code in place of a specific error code; see Section 11 of QUIC-TRANSPORT. For TLS alerts, this includes replacing any alert with a generic alert, such as handshake_failure (0x0128 in QUIC). Endpoints MAY use a generic error code to avoid possibly exposing confidential information. 4.9."}
{"id": "q-en-quicwg-base-drafts-e2e96f824d329c4be243d563ac5c682d4f8c536b5822fc185e614b0299fa27ff", "old_text": "on KeyUpdate messages sent using 1-RTT encryption keys. Endpoints MUST NOT send a TLS KeyUpdate message. Endpoints MUST treat the receipt of a TLS KeyUpdate message as a connection error of type 0x10a, equivalent to a fatal TLS alert of unexpected_message; see tls-errors. ex-key-update shows a key update process, where the initial set of", "comments": "One possible resolution to ; all literals are zero-padded to full bytes.\nIANA made the following observation while creating the H3 registries: While the IANA copy of the registry uses two digits throughout, the QUIC draft is similarly inconsistent about whether literal hex values have any leading zeros. Is it worth updating our hex literals to use full bytes (, , etc.) or synchonizing on some other format? We probably shouldn't synchronize on trimming, since IANA has already skewed the other way.\nNAME did fix this or do we need to inform IANA also?\nWe should probably shoot IANA an e-mail to update the registries with leading zeroes where they hadn't already assumed them.\nI'll let you do that. The only registries that are affected are in HTTP/3 and QPACK.\nFixed both in the registry and the documents.\nYeah, this is easy to understand. The tables must have been annoying.", "new_text": "on KeyUpdate messages sent using 1-RTT encryption keys. Endpoints MUST NOT send a TLS KeyUpdate message. Endpoints MUST treat the receipt of a TLS KeyUpdate message as a connection error of type 0x010a, equivalent to a fatal TLS alert of unexpected_message; see tls-errors. ex-key-update shows a key update process, where the initial set of"}
{"id": "q-en-quicwg-base-drafts-e2e96f824d329c4be243d563ac5c682d4f8c536b5822fc185e614b0299fa27ff", "old_text": "When using ALPN, endpoints MUST immediately close a connection (see Section 10.2 of QUIC-TRANSPORT) with a no_application_protocol TLS alert (QUIC error code 0x178; see tls-errors) if an application protocol is not negotiated. While ALPN only specifies that servers use this alert, QUIC clients MUST use error 0x178 to terminate a connection when ALPN negotiation fails. An application protocol MAY restrict the QUIC versions that it can operate over. Servers MUST select an application protocol compatible with the QUIC version that the client has selected. The server MUST treat the inability to select a compatible application protocol as a connection error of type 0x178 (no_application_protocol). Similarly, a client MUST treat the selection of an incompatible application protocol by a server as a connection error of type 0x178. 8.2.", "comments": "One possible resolution to ; all literals are zero-padded to full bytes.\nIANA made the following observation while creating the H3 registries: While the IANA copy of the registry uses two digits throughout, the QUIC draft is similarly inconsistent about whether literal hex values have any leading zeros. Is it worth updating our hex literals to use full bytes (, , etc.) or synchonizing on some other format? We probably shouldn't synchronize on trimming, since IANA has already skewed the other way.\nNAME did fix this or do we need to inform IANA also?\nWe should probably shoot IANA an e-mail to update the registries with leading zeroes where they hadn't already assumed them.\nI'll let you do that. The only registries that are affected are in HTTP/3 and QPACK.\nFixed both in the registry and the documents.\nYeah, this is easy to understand. 
The tables must have been annoying.", "new_text": "When using ALPN, endpoints MUST immediately close a connection (see Section 10.2 of QUIC-TRANSPORT) with a no_application_protocol TLS alert (QUIC error code 0x0178; see tls-errors) if an application protocol is not negotiated. While ALPN only specifies that servers use this alert, QUIC clients MUST use error 0x0178 to terminate a connection when ALPN negotiation fails. An application protocol MAY restrict the QUIC versions that it can operate over. Servers MUST select an application protocol compatible with the QUIC version that the client has selected. The server MUST treat the inability to select a compatible application protocol as a connection error of type 0x0178 (no_application_protocol). Similarly, a client MUST treat the selection of an incompatible application protocol by a server as a connection error of type 0x0178. 8.2."}
{"id": "q-en-quicwg-base-drafts-e2e96f824d329c4be243d563ac5c682d4f8c536b5822fc185e614b0299fa27ff", "old_text": "MUST send the quic_transport_parameters extension; endpoints that receive ClientHello or EncryptedExtensions messages without the quic_transport_parameters extension MUST close the connection with an error of type 0x16d (equivalent to a fatal TLS missing_extension alert, see tls-errors). Transport parameters become available prior to the completion of the", "comments": "One possible resolution to ; all literals are zero-padded to full bytes.\nIANA made the following observation while creating the H3 registries: While the IANA copy of the registry uses two digits throughout, the QUIC draft is similarly inconsistent about whether literal hex values have any leading zeros. Is it worth updating our hex literals to use full bytes (, , etc.) or synchonizing on some other format? We probably shouldn't synchronize on trimming, since IANA has already skewed the other way.\nNAME did fix this or do we need to inform IANA also?\nWe should probably shoot IANA an e-mail to update the registries with leading zeroes where they hadn't already assumed them.\nI'll let you do that. The only registries that are affected are in HTTP/3 and QPACK.\nFixed both in the registry and the documents.\nYeah, this is easy to understand. The tables must have been annoying.", "new_text": "MUST send the quic_transport_parameters extension; endpoints that receive ClientHello or EncryptedExtensions messages without the quic_transport_parameters extension MUST close the connection with an error of type 0x016d (equivalent to a fatal TLS missing_extension alert, see tls-errors). Transport parameters become available prior to the completion of the"}
{"id": "q-en-quicwg-base-drafts-72b771bdb760344b3ffef76cc70915b04ffc21529ec5d9c201ce68023952af84", "old_text": "Contains additional ranges of packets that are alternately not acknowledged (Gap) and acknowledged (ACK Range); see ack-ranges. The three ECN Counts; see ack-ecn-counts. 19.3.1.", "comments": "This one slipped the net.\nEither way is fine by me. Just ship it already :P\nThanks, Martin!", "new_text": "Contains additional ranges of packets that are alternately not acknowledged (Gap) and acknowledged (ACK Range); see ack-ranges. The three ECN counts; see ack-ecn-counts. 19.3.1."}
{"id": "q-en-quicwg-base-drafts-72b771bdb760344b3ffef76cc70915b04ffc21529ec5d9c201ce68023952af84", "old_text": "The ACK frame uses the least significant bit of the type value (that is, type 0x03) to indicate ECN feedback and report receipt of QUIC packets with associated ECN codepoints of ECT(0), ECT(1), or ECN-CE in the packet's IP header. ECN Counts are only present when the ACK frame type is 0x03. When present, there are three ECN counts, as shown in ecn-count- format. The three ECN Counts are: A variable-length integer representing the total number of packets received with the ECT(0) codepoint in the packet number space of", "comments": "This one slipped the net.\nEither way is fine by me. Just ship it already :P\nThanks, Martin!", "new_text": "The ACK frame uses the least significant bit of the type value (that is, type 0x03) to indicate ECN feedback and report receipt of QUIC packets with associated ECN codepoints of ECT(0), ECT(1), or ECN-CE in the packet's IP header. ECN counts are only present when the ACK frame type is 0x03. When present, there are three ECN counts, as shown in ecn-count- format. The ECN count fields are: A variable-length integer representing the total number of packets received with the ECT(0) codepoint in the packet number space of"}
{"id": "q-en-quicwg-base-drafts-493291ddc3a6e6d13762636797fed8dc8f50f3f0fbd0dc4e9a994d115c14bddd", "old_text": "\"HTTP fields\"; see Sections RFC9110 and RFC9110 of RFC9110. For a listing of registered HTTP fields, see the \"Hypertext Transfer Protocol (HTTP) Field Name Registry\" maintained at . Field names are strings containing a subset of ASCII characters. Properties of HTTP field names and values are discussed in more detail in RFC9110. As in HTTP/2, characters in field names MUST be converted to lowercase prior to their encoding. A request or response containing uppercase characters in field names MUST be treated as . Like HTTP/2, HTTP/3 does not use the Connection header field to indicate connection-specific fields; in this protocol, connection- specific metadata is conveyed by other means. An endpoint MUST NOT generate an HTTP/3 field section containing connection-specific fields; any message containing connection-specific fields MUST be treated as . The only exception to this is the TE header field, which MAY be present in an HTTP/3 request header; when it is, it MUST NOT contain", "comments": "Removed one HTTP/2 reference which seems extraneous, and corrected the polarity of another. This by doing almost nothing; speak now if you think more is warranted.\nRFC Editor says: But fair enough -- are we overdoing it?\nHaving looked at the references, they all seem reasonable with the possible exception of PUSH_PROMISE noting it behaves \"as in HTTP/2.\" Most references to HTTP/2 occur either in the first two sections of the document or in the Appendix which specifically discusses the relationship between HTTP/2 and HTTP/3. removes that one extraneous reference. 
Other than that, I plan to make no further changes to address this feedback unless someone wants to provide a more concrete list of places where the comparison is unnecessary.\nMarginal: Section 4.1.1 and 4.1.1.1 seems to have some egregious use of phrasing similar to \"Like HTTP/2, ...\" or \"as in HTTP/2\". You could possibly condense these to a sentence such as \"Like HTTP/2, HTTP/3 has additional considerations related to use of characters in field names, the Connect header field, and pseudo-header fields\". That could be inserted into the first para of 4.1.1, or the second para (thus breaking the para so the next one starts with \"Characters in field names\").\nFixed in .", "new_text": "\"HTTP fields\"; see Sections RFC9110 and RFC9110 of RFC9110. For a listing of registered HTTP fields, see the \"Hypertext Transfer Protocol (HTTP) Field Name Registry\" maintained at . Like HTTP/2, HTTP/3 has additional considerations related to the use of characters in field names, the Connection header field, and pseudo-header fields. Field names are strings containing a subset of ASCII characters. Properties of HTTP field names and values are discussed in more detail in RFC9110. Characters in field names MUST be converted to lowercase prior to their encoding. A request or response containing uppercase characters in field names MUST be treated as . HTTP/3 does not use the Connection header field to indicate connection-specific fields; in this protocol, connection-specific metadata is conveyed by other means. An endpoint MUST NOT generate an HTTP/3 field section containing connection-specific fields; any message containing connection-specific fields MUST be treated as . The only exception to this is the TE header field, which MAY be present in an HTTP/3 request header; when it is, it MUST NOT contain"}
{"id": "q-en-quicwg-base-drafts-493291ddc3a6e6d13762636797fed8dc8f50f3f0fbd0dc4e9a994d115c14bddd", "old_text": "7.2.5. The frame (type=0x05) is used to carry a promised request header section from server to client on a , as in HTTP/2. The payload consists of:", "comments": "Removed one HTTP/2 reference which seems extraneous, and corrected the polarity of another. This by doing almost nothing; speak now if you think more is warranted.\nRFC Editor says: But fair enough -- are we overdoing it?\nHaving looked at the references, they all seem reasonable with the possible exception of PUSH_PROMISE noting it behaves \"as in HTTP/2.\" Most references to HTTP/2 occur either in the first two sections of the document or in the Appendix which specifically discusses the relationship between HTTP/2 and HTTP/3. removes that one extraneous reference. Other than that, I plan to make no further changes to address this feedback unless someone wants to provide a more concrete list of places where the comparison is unnecessary.\nMarginal: Section 4.1.1 and 4.1.1.1 seems to have some egregious use of phrasing similar to \"Like HTTP/2, ...\" or \"as in HTTP/2\". You could possibly condense these to a sentence such as \"Like HTTP/2, HTTP/3 has additional considerations related to use of characters in field names, the Connect header field, and pseudo-header fields\". That could be inserted into the first para of 4.1.1, or the second para (thus breaking the para so the next one starts with \"Characters in field names\").\nFixed in .", "new_text": "7.2.5. The frame (type=0x05) is used to carry a promised request header section from server to client on a . The payload consists of:"}
{"id": "q-en-quicwg-base-drafts-a0105fa4bbf2cd55de58bbb174a47804fb1ce22b49e23c2f343a0299e9460aec", "old_text": "The QUIC transport protocol (QUIC-TRANSPORT) is designed to support HTTP semantics, and its design subsumes many of the features of HTTP/2 (RFC7540). HTTP/2 uses HPACK (RFC7541) for compression of the header and trailer sections. If HPACK were used for HTTP/3 (HTTP3), it would induce head-of-line blocking for field sections due to built-in assumptions of a total ordering across frames on all streams. QPACK reuses core concepts from HPACK, but is redesigned to allow", "comments": "These track corrections I've asked the RFC Editor to make directly in their copy prior to publication. It appears that some http-core sections moved since the last time we did a full sweep of our references.", "new_text": "The QUIC transport protocol (QUIC-TRANSPORT) is designed to support HTTP semantics, and its design subsumes many of the features of HTTP/2 (RFC9113). HTTP/2 uses HPACK (RFC7541) for compression of the header and trailer sections. If HPACK were used for HTTP/3 (RFC9114), it would induce head-of-line blocking for field sections due to built-in assumptions of a total ordering across frames on all streams. QPACK reuses core concepts from HPACK, but is redesigned to allow"}
{"id": "q-en-quicwg-base-drafts-a0105fa4bbf2cd55de58bbb174a47804fb1ce22b49e23c2f343a0299e9460aec", "old_text": "fields; this document uses \"fields\" for generality. A name-value pair sent as part of an HTTP field section. See Sections 6.3 and 6.5 of SEMANTICS. Data associated with a field name, composed from all field line values with that field name in that section, concatenated together", "comments": "These track corrections I've asked the RFC Editor to make directly in their copy prior to publication. It appears that some http-core sections moved since the last time we did a full sweep of our references.", "new_text": "fields; this document uses \"fields\" for generality. A name-value pair sent as part of an HTTP field section. See Sections 6.3 and 6.5 of RFC9110. Data associated with a field name, composed from all field line values with that field name in that section, concatenated together"}
{"id": "q-en-quicwg-base-drafts-a0105fa4bbf2cd55de58bbb174a47804fb1ce22b49e23c2f343a0299e9460aec", "old_text": "8. This document makes multiple registrations in the registries defined by HTTP3. The allocations created by this document are all assigned permanent status and list a change controller of the IETF and a contact of the HTTP working group (ietf-http-wg@w3.org). 8.1. This document specifies two settings. The entries in the following table are registered in the \"HTTP/3 Settings\" registry established in HTTP3. For formatting reasons, the setting names here are abbreviated by removing the 'SETTINGS_' prefix.", "comments": "These track corrections I've asked the RFC Editor to make directly in their copy prior to publication. It appears that some http-core sections moved since the last time we did a full sweep of our references.", "new_text": "8. This document makes multiple registrations in the registries defined by RFC9114. The allocations created by this document are all assigned permanent status and list a change controller of the IETF and a contact of the HTTP working group (ietf-http-wg@w3.org). 8.1. This document specifies two settings. The entries in the following table are registered in the \"HTTP/3 Settings\" registry established in RFC9114. For formatting reasons, the setting names here are abbreviated by removing the 'SETTINGS_' prefix."}
{"id": "q-en-quicwg-base-drafts-a0105fa4bbf2cd55de58bbb174a47804fb1ce22b49e23c2f343a0299e9460aec", "old_text": "This document specifies two stream types. The entries in the following table are registered in the \"HTTP/3 Stream Types\" registry established in HTTP3. 8.3. This document specifies three error codes. The entries in the following table are registered in the \"HTTP/3 Error Codes\" registry established in HTTP3. ", "comments": "These track corrections I've asked the RFC Editor to make directly in their copy prior to publication. It appears that some http-core sections moved since the last time we did a full sweep of our references.", "new_text": "This document specifies two stream types. The entries in the following table are registered in the \"HTTP/3 Stream Types\" registry established in RFC9114. 8.3. This document specifies three error codes. The entries in the following table are registered in the \"HTTP/3 Error Codes\" registry established in RFC9114. "}
{"id": "q-en-quicwg-datagram-78f2a183353e421c73bb7270efcf9bd2322d4fba1852c417e098e54687845de2", "old_text": "integer value (represented as a variable-length integer) that represents the maximum size of a DATAGRAM frame (including the frame type, length, and payload) the endpoint is willing to receive, in bytes. An endpoint that includes this parameter supports the DATAGRAM frame types and is willing to receive such frames on this connection. Endpoints MUST NOT send DATAGRAM frames until they have received the max_datagram_frame_size transport parameter. Endpoints MUST NOT send DATAGRAM frames of size strictly larger than the value of max_datagram_frame_size the endpoint has received from its peer. An endpoint that receives a DATAGRAM frame when it has not sent the max_datagram_frame_size transport parameter MUST terminate the connection with error PROTOCOL_VIOLATION. An endpoint that receives a DATAGRAM frame that is strictly larger than the value it sent in its max_datagram_frame_size transport parameter MUST terminate the connection with error PROTOCOL_VIOLATION. Endpoints that wish to use DATAGRAM frames need to ensure they send a max_datagram_frame_size value sufficient to allow their peer to use them. It is RECOMMENDED to send the value 65535 in the max_datagram_frame_size transport parameter as that indicates to the peer that this endpoint will accept any DATAGRAM frame that fits inside a QUIC packet. The max_datagram_frame_size transport parameter is a unidirectional limit and indication of support of DATAGRAM frames. Application", "comments": "Clarify meaning of 0 in the datagram transport parameter, based on the discussion at IETF 110. This follows one of the suggestions to have 0 mean no datagrams, and > 0 mean support for datagrams. This should be a non-breaking change with existing implementations.\nThis change is very surprising. 
Why not take the approach that was discussed in the issue, wherein the limit describes the payload size, and presence of the transport parameter indicates extension support? It's not useful to have a bunch of ill-formed possibilities where the limit describes the frame size but is smaller than needed to encode the frame. See also URL\nNAME this is based on the WG discussion, minuted here: URL\nSo it comes down to preserving the weird semantics nobody wants because renumbering the TP is aesthetically unappealing? That seems like a shame. Unless saying \"I support it, but you can't send it\" is actually useful somehow?\nRight now, the TP specifies a maximum frame size, including frame type, length and payload. This makes certain values invalid (0, 1?). Also, since this value practically is a kind of flow control, indicating how much data I'm willing to receive at a time, it's the payload length that's important here, not the framing. For these reasons, I'm arguing to change this to specifying a maximum payload length. Then, the question of what a value of zero means. Should a value of 0 be the same thing as not present or should it mean that only 0 length datagrams are allowed? I think it is simpler to say that a value of zero is the same as not present (i.e. disabled). (Issue copied from , by NAME on 2019-11-18)\nfrom NAME on 2019-11-18: Wasn't the legality of zero-length frames agreed upon in URL I didn't even realize that the parameter wasn't payload size. Strongly agree that it should be.\nfrom NAME on 2019-11-19: I agree that it is simpler to define zero to mean not allowed, but it is highly confusing to be forced to require a length of one if you specifically only use 0 length datagrams for heartbeats or similar.
It would be better to have a separate indicator to reject datagrams altogether.\nIsn't this indicated by omitting the transport parameter entirely?\nNAME If omitting the parameter indicates that data frames are not permitted, then this achieves the purpose, yes. So there is no need to have 0 mean disabling. Hence it is better to have 0 mean 0 length, but still valid. This is the most natural representation, and there is an actual meaningful use of these, namely heartbeats. Further, by making it explicit that length 0 is the only valid, you can have a fast or lightweight implementation.\nI wonder if we need this limit at all. It introduces implementation complexity and it's unclear why any implementation would like to limit the size of DATAGRAM frames (or payloads) they can receive. We already have for implementations that have a limit on their stack size. Does anyone have a use-case where they'd like to limit the size of DATAGRAM frames (or payloads) specifically?\nI'm all for removing the limit. As I have it coded up in MsQuic, I always send a max value. It's not even configurable by the app. I don't think \"I have a buffering limit\" is a good reason to have a limit. It's limited by a single packet already. That should be enough IMO.\nI don't think it's particularly useful to set this smaller than a packet, given that a reasonable buffer is bigger than that, and I'm not sure what a peer could usefully do with the information that the limit is larger than a packet. :+1:\nThe more I think about it, limiting the size of the QUIC DATAGRAMS (frames or payloads) is weird. In part because there is no wire signal to describe a max number of DATAGRAMS per packet, and we seem to have discounted the need for an internal API to configure how frames are composed in a packet. I don't hear anyone with use cases; removing the limit removes a whole load of edge cases and/or implementation defined behaviour.
I think the simplification would be a win.\nNAME One reason you might want to limit Datagrams is if you have a slotted buffering system for datagrams, for example 1K, or 256 bytes per datagram, or a paged system of multiple datagrams. In this case you cannot always fit a whole packet into the buffering. For streams you have an entirely different mechanism, and other frames are generally not large.\nNAME that's an interesting thought - is this something you're planning on implementing and have a use for, or is it more of a thought experiment?\nNAME Unfortunately I've had to push back on my QUIC dev, but I did work with an emulation of netmap, on top of select / poll, that uses slots to receive packets from the OS without context switching. Sometimes a packet needs to span multiple slots, but often it just drops into a 2K slot or so. I could imagine many use cases where datagrams are redistributed to other threads or processes in a similar manner without having an explicit use case. I believe there are now several kernel interfaces that use a similar approach.\nAs far as I know, those kernel interfaces all use slots that are larger than the interface MTU, so I don't think they'd require reducing the payload size. Am I missing something?\nWell, I can only speak for netmap, but it works by having the user specify one large memory block divided into a number of equal sized slots of a user specified size. Each slot has a control record with some flags. A record can indicate that the slot is partial and reference the next slot in the packet. So you can choose a slot of 64K and waste a lot of memory, or choose a slot of 512 bytes and frequently have to deal with fragmenting. Or choose 2K for a use case where you know that you will not have larger payload and can afford the space overhead. EDIT: to clarify, it is the network driver that chooses to link data and update the control record when receiving, and the user process when writing.
I'm not sure exactly how the other kernel interfaces behave but from memory it is rather much the same. For QUIC datagrams it is a bit different in that you likely want to avoid fragmentation at all cost due to the added complexity and the non-stream nature of the data. If you could afford fragmentation it would make less sense to have a limit on the size.\nI support switching to TP specifying max payload size.\nIIRC, the original reason this limit was introduced is to allow QUIC proxies to communicate the max datagram size it gets from the MTU between itself and the backend.\nIn my experience, a MASQUE proxy doesn't know that information at the time it opens a listening QUIC connection, and there is nothing to say that all backends would share the same value.\nThis is not for MASQUE, but for a direct QUIC-to-QUIC proxy.\nOK, do you expect the max DATAGRAM frame size a backend expects to receive is smaller than the QUIC packet size it is willing to receive?\nIf we do have a max frame/perhaps size still defined, perhaps it makes sense to let this be updated by a new frame (MAXDATAGRAMSIZE).\nA MAXDATAGRAMSIZE frame would be tricky because of reordering. In QUIC, all the MAX_FOOBAR frames can only increase limits, but here I suspect for this to be useful we'd want to be able to lower the limit which greatly increases complexity.\nI expect this to be fairly uncommon, but whenever this does happen, it would make the entire connection unreliable for datagrams, meaning the applications would have to do their own MTU discovery on top of the one QUIC already does.\nThe default ought to be \"whatever fits in the PMTU\".\nYes, you have to do PMTUD. (What is pernicious here is that if you get too good at that, the value that you resolve to might not be good forever because your varints will use more bytes over time and so reduce available space.)\nNAME which varints will use more bytes here?\nI was thinking about flow IDs.
If you are not reusing flow IDs, there is a good chance that those will get bigger over time. But I guess that assumes use of flow IDs. Mostly, I guess that DATAGRAM can be each sent in their own packet/datagram, so the size of other frames won't affect the space that is available.\nYeah the flow ID is part of the payload as far as this document is concerned. Our implementation will try to coalesce DATAGRAM frames with other frames, but that doesn't impact the max possible payload size because if the DATAGRAM frame doesn't fit with other frames we just send it in its own packet.", "new_text": "integer value (represented as a variable-length integer) that represents the maximum size of a DATAGRAM frame (including the frame type, length, and payload) the endpoint is willing to receive, in bytes. The default for this parameter is 0, which indicates that the endpoint does not support DATAGRAM frames. A value greater than 0 indicates that the endpoint supports the DATAGRAM frame types and is willing to receive such frames on this connection. An endpoint MUST NOT send DATAGRAM frames until it has received the max_datagram_frame_size transport parameter with a non-zero value. An endpoint MUST NOT send DATAGRAM frames that are larger than the max_datagram_frame_size value it has received from its peer. An endpoint that receives a DATAGRAM frame when it has not indicated support via the transport parameter MUST terminate the connection with an error of type PROTOCOL_VIOLATION. Similarly, an endpoint that receives a DATAGRAM frame that is larger than the value it sent in its max_datagram_frame_size transport parameter MUST terminate the connection with an error of type PROTOCOL_VIOLATION. For most uses of DATAGRAM frames, it is RECOMMENDED to send a value of 65535 in the max_datagram_frame_size transport parameter to indicate that this endpoint will accept any DATAGRAM frame that fits inside a QUIC packet. 
The max_datagram_frame_size transport parameter is a unidirectional limit and indication of support of DATAGRAM frames. Application"}
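The record above notes that the max_datagram_frame_size transport parameter is "represented as a variable-length integer". As an illustrative aside written for this note (not code from any QUIC stack quoted in these records), a minimal encoder for QUIC's variable-length integers per RFC 9000, Section 16 could look like:

```python
def encode_varint(value: int) -> bytes:
    # QUIC varints put the total length in the two high bits of the
    # first byte: 00/01/10/11 select a 1-, 2-, 4-, or 8-byte encoding
    # (RFC 9000, Section 16).
    if value < 2**6:
        return value.to_bytes(1, "big")
    if value < 2**14:
        return (value | (0b01 << 14)).to_bytes(2, "big")
    if value < 2**30:
        return (value | (0b10 << 30)).to_bytes(4, "big")
    if value < 2**62:
        return (value | (0b11 << 62)).to_bytes(8, "big")
    raise ValueError("value too large for a QUIC varint")
```

Note that the recommended parameter value 65535 exceeds the two-byte maximum of 16383, so on the wire it travels as the four-byte encoding 0x8000ffff.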
{"id": "q-en-quicwg-datagram-78f2a183353e421c73bb7270efcf9bd2322d4fba1852c417e098e54687845de2", "old_text": "max_datagram_frame_size Indicates that the connection should enable support for unreliable DATAGRAM frames. An endpoint that advertises this transport parameter can receive datagrams frames from the other endpoint, up to and including the length in bytes provided in the transport parameter. This document also registers a new value in the QUIC Frame Type registry:", "comments": "Clarify meaning of 0 in the datagram transport parameter, based on the discussion at IETF 110. This follows one of the suggestions to have 0 mean no datagrams, and > 0 mean support for datagrams. This should be a non-breaking change with existing implementations.\nThis change is very surprising. Why not take the approach that was discussed in the issue, wherein the limit describes the payload size, and presence of the transport parameter indicates extension support? It's not useful to have a bunch of ill-formed possibilities where the limit describes the frame size but is smaller than needed to encode the frame. See also URL\nNAME this is based on the WG discussion, minuted here: URL\nSo it comes down to preserving the weird semantics nobody wants because renumbering the TP is aesthetically unappealing? That seems like a shame. Unless saying \"I support it, but you can't send it\" is actually useful somehow?\nRight now, the TP specifies a maximum frame size, including frame type, length and payload. This makes certain values invalid (0, 1?). Also, since this values practically is a kind of flow control, indicating how much data I'm willing to receive at a time, it's the payload length that's important here, not the framing. For these reasons, I'm arguing to change this to specifying a maximum payload length. Then, the question of what a value of zero means. Should a value of 0 be the same thing as not present or should it mean that only 0 length datagrams are allowed? 
I think it is simpler to say that a value of zero is the same as not present (i.e. disabled). (Issue copied from , by NAME on 2019-11-18)\nfrom NAME on 2019-11-18: Wasn't the legality of zero-length frames agreed upon in URL I didn't even realize that the parameter wasn't payload size. Strongly agree that it should be.\nfrom NAME on 2019-11-19: I agree that it is simpler to define zero to mean not allowed, but it is highly confusing to be forced to require a length of one if you specifically only use 0 length datagrams for heartbeats or similar. It would be better to have a separate indicator to reject datagrams altogether.\nIsn't this indicated by omitting the transport parameter entirely?\nNAME If omitting the parameter indicates that data frames are not permitted, then this achieves the purpose, yes. So there is no need to have 0 mean disabling. Hence it is better to have 0 mean 0 length, but still valid. This is the most natural representation, and there is an actual meaningful use of these, namely heartbeats. Further, by making it explicit that length 0 is the only valid, you can have a fast or lightweight implementation.\nI wonder if we need this limit at all. It introduces implementation complexity and it's unclear why any implementation would like to limit the size of DATAGRAM frames (or payloads) they can receive. We already have for implementations that have a limit on their stack size. Does anyone have a use-case where they'd like to limit the size of DATAGRAM frames (or payloads) specifically?\nI'm all for removing the limit. As I have it coded up in MsQuic, I always send a max value. It's not even configurable by the app. I don't think \"I have a buffering limit\" is a good reason to have a limit. It's limited by a single packet already.
That should be enough IMO.\nI don't think it's particularly useful to set this smaller than a packet, given that a reasonable buffer is bigger than that, and I'm not sure what a peer could usefully do with the information that the limit is larger than a packet. :+1:\nThe more I think about it, limiting the size of the QUIC DATAGRAMS (frames or payloads) is weird. In part because there is no wire signal to describe a max number of DATAGRAMS per packet, and we seem to have discounted the need for an internal API to configure how frames are composed in a packet. I don't hear anyone with use cases; removing the limit removes a whole load of edge cases and/or implementation defined behaviour. I think the simplification would be a win.\nNAME One reason you might want to limit Datagrams is if you have a slotted buffering system for datagrams, for example 1K, or 256 bytes per datagram, or a paged system of multiple datagrams. In this case you cannot always fit a whole packet into the buffering. For streams you have an entirely different mechanism, and other frames are generally not large.\nNAME that's an interesting thought - is this something you're planning on implementing and have a use for, or is it more of a thought experiment?\nNAME Unfortunately I've had to push back on my QUIC dev, but I did work with an emulation of netmap, on top of select / poll, that uses slots to receive packets from the OS without context switching. Sometimes a packet needs to span multiple slots, but often it just drops into a 2K slot or so. I could imagine many use cases where datagrams are redistributed to other threads or processes in a similar manner without having an explicit use case. I believe there are now several kernel interfaces that use a similar approach.\nAs far as I know, those kernel interfaces all use slots that are larger than the interface MTU, so I don't think they'd require reducing the payload size.
Am I missing something?\nWell, I can only speak for netmap, but it works by having the user specify one large memory block divided into a number of equal sized slots of a user specified size. Each slot has a control record with some flags. A record can indicate that the slot is partial and reference the next slot in the packet. So you can choose a slot of 64K and waste a lot of memory, or choose a slot of 512 bytes and frequently have to deal with fragmenting. Or choose 2K for a use case where you know that you will not have larger payload and can afford the space overhead. EDIT: to clarify, it is the network driver that chooses to link data and update the control record when receiving, and the user process when writing. I'm not sure exactly how the other kernel interfaces behave but from memory it is rather much the same. For QUIC datagrams it is a bit different in that you likely want to avoid fragmentation at all cost due to the added complexity and the non-stream nature of the data. If you could afford fragmentation it would make less sense to have a limit on the size.\nI support switching to TP specifying max payload size.\nIIRC, the original reason this limit was introduced is to allow QUIC proxies to communicate the max datagram size it gets from the MTU between itself and the backend.\nIn my experience, a MASQUE proxy doesn't know that information at the time it opens a listening QUIC connection, and there is nothing to say that all backends would share the same value.\nThis is not for MASQUE, but for a direct QUIC-to-QUIC proxy.\nOK, do you expect the max DATAGRAM frame size a backend expects to receive is smaller than the QUIC packet size it is willing to receive?\nIf we do have a max frame/perhaps size still defined, perhaps it makes sense to let this be updated by a new frame (MAXDATAGRAMSIZE).\nA MAXDATAGRAMSIZE frame would be tricky because of reordering.
In QUIC, all the MAX_FOOBAR frames can only increase limits, but here I suspect for this to be useful we'd want to be able to lower the limit which greatly increases complexity.\nI expect this to be fairly uncommon, but whenever this does happen, it would make the entire connection unreliable for datagrams, meaning the applications would have to do their own MTU discovery on top of the one QUIC already does.\nThe default ought to be \"whatever fits in the PMTU\".\nYes, you have to do PMTUD. (What is pernicious here is that if you get too good at that, the value that you resolve to might not be good forever because your varints will use more bytes over time and so reduce available space.)\nNAME which varints will use more bytes here?\nI was thinking about flow IDs. If you are not reusing flow IDs, there is a good chance that those will get bigger over time. But I guess that assumes use of flow IDs. Mostly, I guess that DATAGRAM can be each sent in their own packet/datagram, so the size of other frames won't affect the space that is available.\nYeah the flow ID is part of the payload as far as this document is concerned. Our implementation will try to coalesce DATAGRAM frames with other frames, but that doesn't impact the max possible payload size because if the DATAGRAM frame doesn't fit with other frames we just send it in its own packet.", "new_text": "max_datagram_frame_size A non-zero value indicates that the endpoint supports receiving unreliable DATAGRAM frames. An endpoint that advertises this transport parameter can receive DATAGRAM frames from the other endpoint, up to and including the length in bytes provided in the transport parameter. The default value is 0. This document also registers a new value in the QUIC Frame Type registry:"}
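The receive- and send-side rules in the new_text above (the parameter defaults to 0 meaning no support, and an unexpected or oversized DATAGRAM frame is a PROTOCOL_VIOLATION) can be sketched as follows. This is a hypothetical illustration, not code from MsQuic or any other stack discussed in the comments; the function and class names are invented here:

```python
PROTOCOL_VIOLATION = 0x0A  # QUIC transport error code from RFC 9000

class QuicError(Exception):
    def __init__(self, code, reason):
        super().__init__(reason)
        self.code = code

def check_received_datagram(frame_size, local_max_datagram_frame_size=0):
    # We advertised local_max_datagram_frame_size; the default 0 means
    # we did not advertise DATAGRAM support at all.
    if local_max_datagram_frame_size == 0:
        raise QuicError(PROTOCOL_VIOLATION,
                        "DATAGRAM frame received but support was not advertised")
    if frame_size > local_max_datagram_frame_size:
        raise QuicError(PROTOCOL_VIOLATION,
                        "DATAGRAM frame larger than advertised max_datagram_frame_size")

def may_send_datagram(frame_size, peer_max_datagram_frame_size=0):
    # A sender MUST NOT send DATAGRAM frames unless the peer advertised
    # a non-zero limit, and MUST NOT exceed that limit.
    return peer_max_datagram_frame_size > 0 and frame_size <= peer_max_datagram_frame_size
```

Sending the recommended value 65535 then simply means any frame that fits in a QUIC packet passes both checks.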
{"id": "q-en-quicwg-datagram-580ede1a7e5ce6a3793823a4ce3c3235ea1dc3009e74b991cd4b1285bc9ac405", "old_text": "This document defines two new DATAGRAM QUIC frame types, which carry application data without requiring retransmissions. Discussion of this work is encouraged to happen on the QUIC IETF mailing list quic@ietf.org [3] or on the GitHub repository which contains the draft: https://github.com/quicwg/datagram [4].", "comments": "Some applications might want to send messages, and not care about the order of delivery, but still get reliability. QUIC streams give them reliability, but within a stream it enforces order of delivery. This datagram proposal gives unordered delivery of messages, but without reliability. Why not provide datagrams with optional (opt-in) reliability? It shouldn't be a lot of extra work for QUIC implementors to provide reliable datagrams as well as unreliable, given that retransmission is already there for QUIC streams. (I guess an application could create a separate QUIC stream for each unordered reliable message. That might work, but seems ugly and may have some overhead.)\nWhat you describe can already be accomplished using streams: each \u201creliable datagram\u201d is its own stream. That provides reliable delivery without ordering between independent datagrams. I don\u2019t think we need to add optional reliability to the datagram spec.\nStreams are very lightweight, and well suited to this task. Beware applying intuition from TCP too heavily.\nSo, here's a suggestion then: since I'm sure I'm not the only person who is going to think of this, maybe add some verbiage to the spec explaining why supporting reliable datagrams is unnecessary?\nSounds good. We can keep this issue open as requesting an editorial change.", "new_text": "This document defines two new DATAGRAM QUIC frame types, which carry application data without requiring retransmissions. Note that DATAGRAM frames are only meant for unreliable transmissions. 
Reliable transmission is already supported by QUIC via STREAM frames. Discussion of this work is encouraged to happen on the QUIC IETF mailing list quic@ietf.org [3] or on the GitHub repository which contains the draft: https://github.com/quicwg/datagram [4]."}
{"id": "q-en-quicwg-datagram-43ab4f9cbafef0943723a8127b3ec8cd50cdb3e9744f400a55b175105aff4367", "old_text": "7. This document registers a new value in the QUIC Transport Parameter Registry: 0x0020 (if this document is approved) max_datagram_frame_size A non-zero value indicates that the endpoint supports receiving unreliable DATAGRAM frames. An endpoint that advertises this transport parameter can receive DATAGRAM frames from the other endpoint, up to and including the length in bytes provided in the transport parameter. The default value is 0. This document also registers a new value in the QUIC Frame Type registry: 0x30 and 0x31 (if this document is approved) DATAGRAM Unreliable application data 8.", "comments": "The IANA section was added before the QUIC registries had a notion of permanent vs provisional, and we then forgot to update it. This PR fixes that oversight.\nLatest change looks good to me\nNoticed during Shepherd review. All of the core types registered in RFC 9000 simply link to the relevant document (RFC 9000) and section in the document where the protocol element is defined. In contrast, in datagram's IANA considerations section, the transport parameter and frame type registrations include the required field with some text that is already included in the document. This will manifest as a weird juxtaposition in the IANA table. I suggest you simply reference the section 3 and section 4 respectively. If you really want some prose (I don't recommend that) then it can be added as a note to the table.\nI noticed that when writing and I agree, we should just list \"This specification\" in there as most RFCs do.\nNoticed during the shepherd writeup of IANA considerations: The document is attempting to register: maxdatagramframe_size Transport Parameter with value 0x0020 DATAGRAM frame type with values 0x30 and 0x31 According to IANA (URL) registrations can be of different types. Here the requested values are in the bracket. 
So I suggest that the document state clearly that this is a request for permanent registration of these types.\nAgreed. Wrote up to fix this.\nLGTM, thanks!", "new_text": "7. 7.1. This document registers a new value in the QUIC Transport Parameter Registry maintained at . 0x20 (if this document is approved) max_datagram_frame_size permanent This document 7.2. This document registers two new values in the QUIC Frame Type registry maintained at . 0x30 and 0x31 (if this document is approved) DATAGRAM permanent This document 8."}
{"id": "q-en-ratelimit-headers-f2bbcc85de84c270a3a3eba117e5e0d42f1f8f23fe1e21e2292bbf7a2f3edead", "old_text": "extension parameters: To avoid clashes, implementers SHOULD prefix unregistered parameters with an \"x-\" identifier, e.g. \"x-acme-policy\", \"x-acme- burst\". While it is useful to define a clear syntax and semantics even for custom parameters, it is important to note that user agents are not required to process quota policy information. 2.2.", "comments": "See .\niirc there was a discussion on stuff. In general I'm ok with just NAME do you see any issue with that?\nNo issue, I was aware of the BCP and I think this case fits the first provision for segregating the name spaces in the Appendix B, so any one of the proposed solutions there, including yours, is fine with me.", "new_text": "extension parameters: To avoid clashes, implementers SHOULD prefix unregistered parameters with a vendor identifier, e.g. \"acme-policy\", \"acme-burst\". While it is useful to define a clear syntax and semantics even for custom parameters, it is important to note that user agents are not required to process quota policy information. 2.2."}
{"id": "q-en-resource-directory-2e77b9c04ee89eadc0d7b03c6c34158ecb0333c6c383fddd6ef7c2ca9fd92c4f", "old_text": "references. Like all search criteria, on a resource lookup it can match the target reference of the resource link itself, but also the registration resource of the endpoint that registered it, or any group resource that endpoint is contained in. Clients that are interested in a lookup result repeatedly or continuously can use mechanisms like ETag caching, resource", "comments": "This decides the open question of how to match the in a query in favor of client simplicity. Closes: URL NAME NAME would that change be OK with you and solve the issue?\nI know what an absolute form is, where do I find out what a path absolute form is\npath-absolute is an ABNF in RFC3986, described as \"begins with a '/' but not '//'\".\nI agree with the addition. This means that one can match against an absolute registration target as well.\nJim in :\nIt's a trade-off between not being a surprise to the RD implementation (with b, it needs to compare values that it has not seen in that form) versus not being a surprise to the lookup client (with a, it queries for something it does not see in that form in the query result). I think that the latter is preferable. Here are some more questions on the document. The registration text in section 5.3 says that the interface is idempotent, however it is not clear to me just how idempotent this is supposed to be. It is obvious that the goal is to make sure that a different resource is not created, however I am not clear what is supposed to happen with some of the parameters such as the 'lt' parameter. One way to implement this is to just make it a POST to the previously existing resource which would inherit old attributes on the endpoint. The other way is to say we are going to create a new resource, it just has the same address. It is not clear which, if any, of these two methods is to be used. Never mind I found the answer to this one. 
Need to think if I want clearer language. When doing an endpoint lookup, I assume that the 'lt' parameter would be the same as the one registered at creation. Is there/should there be a parameter which is a TTL value? How are href comparisons done? a) compare against what was registered b) Resolve what was registered and the href in the current context and compare c) something I did not think of compare on group for remote endpoint. Should this do the remote lookup or can it just be ignored for attribute comparison Value of lt = should it be the set parameter or the time left - how would time left be represented otherwise - should it be represented. Inferred context based on source IP when an update is done. If no con is provided, is that supposed to be checked again to see if it is \"right\"? Only if we did the infer to begin with? I don't understand the presence of the content-format for the empty message posted for simple registration. This seems to be odd if the content is empty. 8 Is there supposed to be a way for simple registration to return an error message? As written this is not necessarily done. Specifically, the response to the post can be executed before the query to /.well-known/core is done and the registration is attempted.", "new_text": "references. Like all search criteria, on a resource lookup it can match the target reference of the resource link itself, but also the registration resource of the endpoint that registered it, or any group resource that endpoint is contained in. Queries for resource link targets MUST be in absolute form and are matched against a resolved link target. Queries for groups and endpoints SHOULD be expressed in path-absolute form if possible and MUST be expressed in absolute form otherwise; the RD SHOULD recognize either. Clients that are interested in a lookup result repeatedly or continuously can use mechanisms like ETag caching, resource"}
{"id": "q-en-resource-directory-5170a10cc5e69740df297d7b7f53d31e6eaa2b9def5a8bff05398b2eee82a74a", "old_text": "This document registers one new ND option type under the sub-registry \"IPv6 Neighbor Discovery Option Formats\": Resource Directory address Option (38) 9.3.", "comments": "These should with exception as described there.\nMerging for lack of objections and as it makes further development easier.\nThe editorial points of : (plus from \"mostly mandatory\" line: find better term for \"optional\" and \"mostly mandatory\") I'm in the process of working them in, they'll come in a single PR.\nThe \"For several operations\" part I kept back pending and , which will affect that as well; the rest is pending review in , which I'll merge soon unless objections arrive.\nNote to self: The \"web interfaces\" got lost somewhere: Dear authors, below are some of the comments to the draft, they are mostly editorial, with a few that may warrant further discussion. p1 - \"Web interfaces\" is often used interchangeably with \"REST interfaces\" it'd be better to use one only, maybe the latter. p3 - \"with limited RAM and ROM\" Adding a reference to the expected RAM and ROM would be good. URL provides an estimation of : -> : : Which in markdown would look like: ~ p22 - Endpoint Name \"mostly mandatory\" is ambiguous. If the ep name is not needed to register to an RD then it is optional. If, on the other hand the RD needs an ep name and assigns one then it is mandatory ( btw if the RD assigns the ep name, shouldn't it return that information to the ep, together with the location within the rd, once the registration is successful? ) Github Issue 190 (below) raises the need to define a bit better what the ep name is supposed to contain. To me, we could keep it mandatory and add some guidelines for its creation, in OMA for example they are defined as an URN URL p22 - URL \"All we know from the draft is that it is \u2264 63 bytes. But we need to say that it is a UTF-8 string. 
Not entirely sure we want as an endpoint name either. Same for sector.\" p23 - Sorry for asking, but why is base URI only mandatory when filled by the Commissioning Tool? Wouldn't it be better to make it always mandatory to avoid the differentiation? p26 - Section 5.3.1 seems like it could use a bit of reordering, I can explain this in another email or call with the authors. p30 - lt, base and extra-attrs were already explained in 5.3 you might want to keep it as it is or change it by adding a reference to 5.3 and comment on the changes that the Registration Update has over the Registration. p33 - As per the document structure, there are two interfaces: registration and lookup. Shouldn't they be at the same level in the document? Registration is 5.3, shouldn't lookup be 5.5 instead of 6? p41 p42 - Just to clarify, is then the tuple [endpoint,sector] the way RD identifies endpoints? Or is it derived from the device's credentials? \"A given endpoint that registers itself, needs to prove its possession of its unique (endpoint name, sector) value pair.\" - \"An Endpoint (name, sector) pair is unique within the set of endpoints registered by the RD. An Endpoint MUST NOT be identified by its protocol, port or IP address as these may change over the lifetime of an Endpoint.\" p44 - typo address/Address", "new_text": "This document registers one new ND option type under the sub-registry \"IPv6 Neighbor Discovery Option Formats\": Resource Directory Address Option (38) 9.3."}
{"id": "q-en-resource-directory-4d10181c7312464a76da39ce42fd5a4a54b614373e00667da86cf2c2f831ee1c", "old_text": "references) and are matched against a resolved link target. Queries for endpoints SHOULD be expressed in path-absolute form if possible and MUST be expressed in URI form otherwise; the RD SHOULD recognize either. Endpoints that are interested in a lookup result repeatedly or continuously can use mechanisms like ETag caching, resource", "comments": "Closes: URL\nIn the lookup filtering description, we talk about how href can be queried for (that it only applies in the URI form), but say nothing about anchor (to which the same applies in principle, with slightly different statements about application to the endpoints). Thanks Christer for pointing that out. As RFC6690 contains a lookup example for ?anchor=, we may want to add one here as well.", "new_text": "references) and are matched against a resolved link target. Queries for endpoints SHOULD be expressed in path-absolute form if possible and MUST be expressed in URI form otherwise; the RD SHOULD recognize either. The \"anchor\" attribute is usable for resource lookups, and, if queried, MUST be in URI form as well. Endpoints that are interested in a lookup result repeatedly or continuously can use mechanisms like ETag caching, resource"}
{"id": "q-en-resource-directory-cc20d9211c4d5bc442bc75087d662fe98f1b352b428bf343d70cf1fe28cf5a0c", "old_text": "registration if it were not to be used in lookup or attributes, but would make a bad parameter for lookup, because a resource lookup with an \"if\" query parameter could ambiguously filter by the registered endpoint property or the RFC6690 target attribute). It is expected that the registry will receive between 5 and 50 registrations in total over the next years. 9.3.1.", "comments": "Something around \"whether many registrations are expected over time\" is supposed to be said according to BCP26, but it seems this belongs in the shepherd write-up, where it is now being added. To be merged right away.", "new_text": "registration if it were not to be used in lookup or attributes, but would make a bad parameter for lookup, because a resource lookup with an \"if\" query parameter could ambiguously filter by the registered endpoint property or the RFC6690 target attribute). 9.3.1."}
{"id": "q-en-resource-directory-d903532232fda5d3faef0e4c689340c93bf59f4a2157b722d9df8772c79ae4e7", "old_text": "the same properties for operations on the registration resource. Registrants that are prepared to pick a different identifier when their initial attempt at registration is unauthorized should pick an identifier at least twice as long as the expected number of registrants; registrants without such a recovery options should pick significantly longer endpoint names (e.g. using UUID URNs RFC4122).", "comments": "Part of the responses. Will merge (and upload as part of a -26) if there is an ACK from authors or chairs.\nBefore the 2020-08-13 telechat, some reviews have come in: : Not Ready Minor: Clarification on \"subject\" -- my intention was to include anything \"subject\", but I don't really know what else to put in there. Minor: Why is the first attempt special? -- Can clarify, PR to come. [edit: ] Nits addressed in URL More nits (IDnits) do not require editing, but some explanation -- where would that go? Shepherd writeup? (that would be \u2192 NAME : Ready Notes that the DDOS mitigation section is a bit history heavy. I had similar comment in URL; should we use that comment to shorten here? [edit: If so, would want an ACK.] \"It is my impression, that Security Considerations were mostly written having in mind that (D)TLS is always used, however it is only \"SHOULD\" in this draft (or even \"MAY\" if we look at RFC6690 which Security Considerations this draft refers to). I think that adding a few words describing which consequences for security not using (D)TLS would have and in which cases it is allowed will make the Security Considerations more consistent. 
\" -- [edit: covered in ] I'll issue PRs for what I think we can do something about quickly, but won't merge them or create a -26 without an ACK on them (given we're late in the whole publication stage and my impression is that ill-coordinated uploads do more harm than good) and will follow up on this comment when they're ready for review.\nWith the additional reviews that have come in today, I see no point in addressing some of what has come in in a pre-telechat update (as we can't resolve all the discuss points) -- taking the urgency off this. I'll still file PRs for changes resulting from the reviewer comments. Procedurally (so this goes to NAME or NAME Is the general expectation with IESG comments that the text be further improved to answer all the points (especially when they are comments), or can a plain \"we did it that way because\" suffice? In the latter case, would it make sense to collect point-to-point responses here so they get author or WG review before \"we\" answer them?\nWith URL and its follow-ups sent out, this is closed; any further comments will be opened as new issues. Hello Reviewers, CoRE and IESG members, thanks for your input to the Resource Directory document, and your patience while it was being processed. Before the individual point to point responses can go out, this mail is to address the general status, and answer a few points that were brought up by different reviewers (it will be referred back to in the point to point responses). In summary, the points raised in the reviews led to the changes of the recently uploaded [-26], and where no changes were made, the reasoning behind the choices in the document is outlined below or in the following mails. (Either way, short of aggregated nits paragraphs, all points are responded to in the following mails). A few larger issues evolved from the review comments being discussed in the interim meetings, so please expect a -27 at a later point (after IETF109) with the changes mentioned below. 
Kind regards Christian [-26]: URL Preface References into the issue tracker are primarily intended for bookkeeping and to give you access to the altered text if you're curious -- but for general review purposes, the responses are self-contained. Anticipated changes We have opted to submit a new version that should clear out all the other comments, so that the remaining work can be focused and swift: Replay and Freshness (OPEN-REPLAY-FRESHNESS) The comment on DTLS replay made us reevaluate the security properties of operations on the registration resource. While these are successively harder going from DTLS without replay protection to DTLS with replay protection and hardest with OSCORE, the text as it is is still vulnerable to an attacker stopping a reoccurring registration, or undoing alterations in registration attributes. Mitigation from a concurrent document (the Echo mechanism of I-D.ietf-core-echo-request-tag in event freshness mode) is planned, but the text is not complete yet. Necessity of the value in all responses The requirements around Limited Link Format and the necessity to express all looked up links in fully resolved form have stemmed from different ways RFC6690 has been read, both in the IETF community and by implementers. A comment from Esko Dijk has shown a clear error in one of the readings. That reading is single-handedly responsible for most of the overhead in the wire format (savings can go up to 50%), which is of high importance to the CoRE field. Due to that, it is the WG's and authors' intention to remove the reading from the set of interpretations that led to the rules set forth. The change will not render any existing RD server implementation incompatible (older RDs would merely be needlessly verbose), and the known implementations of Link-Format parsers can easily be updated. 
The updates to the document could not be completed in time with the rest of the changes, because a) it requires revisiting the known implementations for confirmation and b) will affect almost every example that contains lookup results. Server authorization (OPEN-SERVER) The two areas of \"how is the RD discovery secured\" and \"what do applications based on the RD need to consider\" jointly opened up new questions about server authorization and the linkage between a client's intention and the server's promises that may easily not conclude in a timeframe reasonable for RD. While steps have been taken to address the issue both on the RD discovery and the applications side (URL), the ongoing discussions in the working group may turn up with a less cumbersome phrasing for these parts. There are a few small open issues Esko brought up on the issue tracker that could not be addressed in time; most of them are about enhancing the examples, which best happens after the anchor cleanup anyway. Repeated topics A few common topics came up in multiple reviews, and are addressed in the following section; the individual responses can be found in follow-up mails, and refer to the common topics as necessary. GENERIC-FFxxDB \"Where do the ff35:30:2001:... addresses come from?\" response: There's a coverage gap between RFC3849 (defining example unicast addresses) and RFC3306 (defining unicast-prefix-based multicast addresses). For lack of an agreed-on example multicast address, the used ones were created by applying the unicast-prefix derivation process to the example address. The results (ff35:30:2001:db8::1) should be intuitively recognizable both as a multicast address and as an example address. The generated address did, however, fail to follow the guidance of RFC 3307 and set the first bit of the Group ID. 
Consequently, the addresses were changed from ff35:30:2001:db8::x to ff35:30:2001:db8::8000:x, which should finally be a legal choice of a group address for anyone operating the 2001:db8:: site. (The changes can be viewed at URL). GENERIC-6MAN \"Was this discussed with 6MAN?\" response: Not yet; a request for feedback was sent out recently. (Mail at URL). GENERIC-WHOPICKSMODEL \"How does the client know which security policies the RD supports?\" response: The client can only expect any level of trustworthiness if there is a claim to that in the RD's credentials. Typically, though, that claim will not be encoded there, but implied. For example, when configuring an application that relies on authenticated endpoint names, then telling the application to use the RD authenticated via PKI as URL should only be done if the configurator is sure that URL will do the required checks on endpoint names. Some clarification has been added in URL GENERIC-SUBJECT \"Section 7.1 says: '... can be transported in the subject.' -- where precisely?\" response: That text was only meant to illustrate to the reader what such a policy could contain, not to set one (which would require more precision). It has been made more explicit that this is merely setting the task for the policies. As an example, some more precision is given by referring to SubjectAltName dNSName entries. Changes in URL (If you have read up to this point, please continue reading the follow-up mails addressing the individual reviewer's points.)", "new_text": "the same properties for operations on the registration resource. Registrants that are prepared to pick a different identifier when their initial attempt (or attempts, in the unlikely case of two subsequent collisions) at registration is unauthorized should pick an identifier at least twice as long as the expected number of registrants; registrants without such a recovery options should pick significantly longer endpoint names (e.g. using UUID URNs RFC4122)."}
{"id": "q-en-resource-directory-d903532232fda5d3faef0e4c689340c93bf59f4a2157b722d9df8772c79ae4e7", "old_text": "Minor editorial fixes in response to Gen-ART review. changes from -24 to -25 Large rework of section 7 (Security policies)", "comments": "Part of the responses. Will merge (and upload as part of a -26) if there is an ACK from authors or chairs.\nBefore the 2020-08-13 telechat, some reviews have come in: : Not Ready Minor: Clarification on \"subject\" -- my intention was to include anything \"subject\", but I don't really know what else to put in there. Minor: Why is the first attempt special? -- Can clarify, PR to come. [edit: ] Nits addressed in URL More nits (IDnits) do not require editing, but some explanation -- where would that go? Shepherd writeup? (that would be \u2192 NAME : Ready Notes that the DDOS mitigation section is a bit history heavy. I had similar comment in URL; should we use that comment to shorten here? [edit: If so, would want an ACK.] \"It is my impression, that Security Considerations were mostly written having in mind that (D)TLS is always used, however it is only \"SHOULD\" in this draft (or even \"MAY\" if we look at RFC6690 which Security Considerations this draft refers to). I think that adding a few words describing which consequences for security not using (D)TLS would have and in which cases it is allowed will make the Security Considerations more consistent. \" -- [edit: covered in ] I'll issue PRs for what I think we can do something about quickly, but won't merge them or create a -26 without an ACK on them (given we're late in the whole publication stage and my impression is that ill-coordinated uploads do more harm than good) and will follow up on this comment when they're ready for review.\nWith the additional reviews that have come in today, I see no point in addressing some of what has come in in a pre-telechat update (as we can't resolve all the discuss points) -- taking the urgency off this. 
I'll still file PRs for changes resulting from the reviewer comments. Procedurally (so this goes to NAME or NAME Is the general expectation with IESG comments that the text be further improved to answer all the points (especially when they are comments), or can a plain \"we did it that way because\" suffice? In the latter case, would it make sense to collect point-to-point responses here so they get author or WG review before \"we\" answer them?\nWith URL and its follow-ups sent out, this is closed; any further comments will be opened as new issues. Hello Reviewers, CoRE and IESG members, thanks for your input to the Resource Directory document, and your patience while it was being processed. Before the individual point to point responses can go out, this mail is to address the general status, and answer a few points that were brought up by different reviewers (it will be referred back to in the point to point responses). In summary, the points raised in the reviews led to the changes of the recently uploaded [-26], and where no changes were made, the reasoning behind the choices in the document is outlined below or in the following mails. (Either way, short of aggregated nits paragraphs, all points are responded to in the following mails). A few larger issues evolved from the review comments being discussed in the interim meetings, so please expect a -27 at a later point (after IETF109) with the changes mentioned below. Kind regards Christian [-26]: URL Preface References into the issue tracker are primarily intended for bookkeeping and to give you access to the altered text if you're curious -- but for general review purposes, the responses are self-contained. 
Anticipated changes We have opted to submit a new version that should clear out all the other comments, so that the remaining work can be focused and swift: Replay and Freshness (OPEN-REPLAY-FRESHNESS) The comment on DTLS replay made us reevaluate the security properties of operations on the registration resource. While these are successively harder going from DTLS without replay protection to DTLS with replay protection and hardest with OSCORE, the text as it is is still vulnerable to an attacker stopping a reoccurring registration, or undoing alterations in registration attributes. Mitigation from a concurrent document (the Echo mechanism of I-D.ietf-core-echo-request-tag in event freshness mode) is planned, but the text is not complete yet. Necessity of the value in all responses The requirements around Limited Link Format and the necessity to express all looked up links in fully resolved form have stemmed from different ways RFC6690 has been read, both in the IETF community and by implementers. A comment from Esko Dijk has shown a clear error in one of the readings. That reading is single-handedly responsible for most of the overhead in the wire format (savings can go up to 50%), which is of high importance to the CoRE field. Due to that, it is the WG's and authors' intention to remove the reading from the set of interpretations that led to the rules set forth. The change will not render any existing RD server implementation incompatible (older RDs would merely be needlessly verbose), and the known implementations of Link-Format parsers can easily be updated. The updates to the document could not be completed in time with the rest of the changes, because a) it requires revisiting the known implementations for confirmation and b) will affect almost every example that contains lookup results. 
Server authorization (OPEN-SERVER) The two areas of \"how is the RD discovery secured\" and \"what do applications based on the RD need to consider\" jointly opened up new questions about server authorization and the linkage between a client's intention and the server's promises that may easily not conclude in a timeframe reasonable for RD. While steps have been taken to address the issue both on the RD discovery and the applications side (URL), the ongoing discussions in the working group may turn up with a less cumbersome phrasing for these parts. There are a few small open issues Esko brought up on the issue tracker that could not be addressed in time; most of them are about enhancing the examples, which best happens after the anchor cleanup anyway. Repeated topics A few common topics came up in multiple reviews, and are addressed in the following section; the individual responses can be found in follow-up mails, and refer to the common topics as necessary. GENERIC-FFxxDB \"Where do the ff35:30:2001:... addresses come from?\" response: There's a coverage gap between RFC3849 (defining example unicast addresses) and RFC3306 (defining unicast-prefix-based multicast addresses). For lack of an agreed-on example multicast address, the used ones were created by applying the unicast-prefix derivation process to the example address. The results (ff35:30:2001:db8::1) should be intuitively recognizable both as a multicast address and as an example address. The generated address did, however, fail to follow the guidance of RFC 3307 and set the first bit of the Group ID. Consequently, the addresses were changed from ff35:30:2001:db8::x to ff35:30:2001:db8::8000:x, which should finally be a legal choice of a group address for anyone operating the 2001:db8:: site. (The changes can be viewed at URL). GENERIC-6MAN \"Was this discussed with 6MAN?\" response: Not yet; a request for feedback was sent out recently. (Mail at URL). 
GENERIC-WHOPICKSMODEL \"How does the client know which security policies the RD supports?\" response: The client can only expect any level of trustworthiness if there is a claim to that in the RD's credentials. Typically, though, that claim will not be encoded there, but implied. For example, when configuring an application that relies on authenticated endpoint names, then telling the application to use the RD authenticated via PKI as URL should only be done if the configurator is sure that URL will do the required checks on endpoint names. Some clarification has been added in URL GENERIC-SUBJECT \"Section 7.1 says: '... can be transported in the subject.' -- where precisely?\" response: That text was only meant to illustrate to the reader what such a policy could contain, not to set one (which would require more precision). It has been made more explicit that this is merely setting the task for the policies. As an example, some more precision is given by referring to SubjectAltName dNSName entries. Changes in URL (If you have read up to this point, please continue reading the follow-up mails addressing the individual reviewer's points.)", "new_text": "Minor editorial fixes in response to Gen-ART review. Random EP names: Point that multiple collisions are possible but unlikely. (For background, in the first round of attempts there's a good chance that one of the random IDs will collide due to the birthday paradox; the likelihood for a second (or even third) collision in a row is way below that because only the colliding node(s) will retry). changes from -24 to -25 Large rework of section 7 (Security policies)"}
{"id": "q-en-resource-directory-31b84afdb47d64119814b953a9274a5cfe902971027e73611389a2bff7c48a8e", "old_text": "the complete LwM2M model to a summary of how RD is used in there, with a reference to the current specification. changes from -24 to -25 Large rework of section 7 (Security policies)", "comments": "... as otherwise, code squatting again. See-Also: URL Thanks for the updates in the -26; they help a lot. I'm particularly happy to see the (default/example) \"first-come-first-remembered\" policy in \u00a7 7.5, which does a good job of documenting its properties (including potential deficiencies) and can be usable for many environments. (I do have some suggestions for improvements to the specific wording in the section-by-section comments, but on the whole I like it.) It does lead in to a more general comment, though (which is not exactly new to the changes in the -26 though it is reflected in several of them): some aspects of the concept of \"authorization\" to appear in a directory, and the verification of authorization for the entire flow from client locating the RD server to final request at the discovered resource, are somewhat subtle. (The new text discusses, e.g., \"repeat the URI discovery step at the /.well-known/core resource of the indicated host to obtain the resource type information from an authorized source\" and \"a straightforward way to verify [discovered information] is to request it again from an authorized server, typically the one that hosts the target resource\".) The aspects I'm concerned about are not matters of a resource server being authorized to publish to a directory or a client deeming that a server is authorized to act as an RD (though both of those are also important), but seems to rather just be relating to the RD faithfully disseminating the information the RD retrieved from /.well-known/core discovery on the given server(s) (or otherwise learned via some other mechanism). 
In this sense the RD is just acting as a cache or efficiency aid, but the authoritative source of the information remains (in general) the server hosting the resource. To be clear, I think the procedures that this document specifies are good and correct procedures for a client to perform, I'm just not sure whether \"authorized\" is the right term in these cases; \"authoritative\" might be more appropriate, in that the operation in question is to verify the information retrieved from the RD (which acts in some sense as an intermediary) at the more authoritative source. (On a tangent from that note, it seems that there is not necessarily any indication to a client that a server claiming to provide a given resource (whether discovered via RD or /.well-known/core) is actually authorized to provide such a resource and be trusted by the client. But that's not really a property of RD, and rather the underlying CoRE mechanism and any provisioned trust database, so my thoughts about it seem out of scope for this document.) (It's not entirely clear to me what the reader is expected to do with the information about the previous formulation.) (It seems that something also needs to cause the RD to check the authorization of lookup clients to receive the information in question, which might be worth reiterating in this section.) I'm pretty sure the intent here is that only the subject names that are certified by the recognized authority(ies) are stored, in which case I'd suggest to s/all those/all such/. (I can understand why the \"store the whole list\" logic is used, though there may be some subtleties about name constraints on the (X.509) CAs used, which is why we typically don't recommend treating the entire list of names in the certificate as validated from a single operation, instead preferring \"is this certificate authenticated for this specific name?\" when the specific target name is available.) 
Should we talk about whether the lifetime of registration entries is related to the various lifetimes of these credentials/identifiers? DTLS without resumption is perhaps an interesting case, in that keeping the DTLS association \"live\" just to be able to update the registration information seems impractical, so perhaps some fixed cap would be applied on the lifetime in that case. If resumption is used, the resumption information (ticket or session state) has some finite lifetime that could be used to bound the lifetime of the registration. IIRC an OSCORE security context can also have an expiration time; certificates and JWT/CWT have expiration time as well (though the procedures in the first bullet point would effectively allow a \"renewal\" operation), but the validity period (if any) of a raw public key would have to be provisioned along with that key (or just have RPK be handled akin to DTLS without resumption, I guess). (Similar to my earlier parenthetical comment, the requirement for exact match of all subject name information is unusual. In effect this is a way of pinning the set of names that the CA has chosen to include in a single certificate, which is arguably a stronger requirement than needed but should not lead to any unauthorized access.)", "new_text": "the complete LwM2M model to a summary of how RD is used in there, with a reference to the current specification. impl-info: Add note about the type being WIP changes from -24 to -25 Large rework of section 7 (Security policies)"}
{"id": "q-en-resource-directory-f1695f2b396e6b11bcc14aaaacd12e0e521e816f58598ad0c1abd2ee0afe2b7d", "old_text": "4. Several mechanisms can be employed for discovering the RD, including assuming a default location (e.g. on an Edge Router in a LoWPAN), assigning an anycast address to the RD, using DHCP, or discovering the RD using .well-known/core and hyperlinks as specified in CoRE Link Format RFC6690. Endpoints that want to contact a Resource Directory (RD) can obtain candidate IP addresses of RD servers dependent on the operational conditions. It is assumed that the nodes looking for a Resource Directory are connected to a Border Router (6LBR). Given the various existing network installation procedures, the 6LBR is either connected or disconnected from the Internet and its services like DHCP and DNS. Two conditions are separately considered: (1) Internet services present, and (2) no Internet services present. When no Internet services are present, the following techniques are used (in random order): Sending a Multicast to [ff02::1]/.well-known/core with ?rt=core.rd*. The receiving RDs will answer the query. The Authoritative Border Router Option (ABRO) present in the Router Advertisements (RA), contains the address of the border router. Assuming that an RD is located at the 6LBR. Confirmation can be obtained by sending a Unicast to [6LBR]/.well-known/core with ?rt=core.rd*. Address of 6LBR is obtained from ABRO. The 6LBR can send the Resource Directory Address Option (RDAO) during neighbour Discovery containing one or more RD addresses, as explained in The installation manager can decide to preconfigure an ANYCAST address with as destination the interface of the RDs. Clients send a discovery message to the predefined RD anycast address. Each target network environment in which some of these preconfigured nodes are to be brought up is then configured with a route for this anycast address that leads to an appropriate RD. 
Instead of using an ANYCAST address, a multicast address can be preconfigured. All RD directory servers need to configure one of their interfaces with this multicast address. After installation of mDNS on all nodes, mDNS can be queried for the IP-address of RD. When Internet services are present, the techniques cited above are still valid. Using standard techniques to obtain the addresses of DNS or DHCP servers, the RD can be discovered in the following way: Using a DNSSD query to return the Resource Records (RR) describing the service with name rd._sub._coap._udp, preferably within the domain of the querying nodes. Using DHCPv6 with options to be defined. Assisting the 9 techniques above, requires manual intervention or may be done automatically. Manual management: IP address of RD is present in each node. ANYCAST address or Multicast address of RD is present in each node [see 5,6]. 6LBR receives IP address of RD for distribution via RDAO [see 4]. RD address is configured in DHCP [see 9]. DNS is configured with Resource Record specifying address of RD service [see 8]. Automatic management Multicasting to all nodes to return IP address [see 1]. Querying mDNS to receive IP address [see 7]. 6LBR contains RD service [see 2, 3]. As some of these RD addresses are just (more or less educated) guesses, endpoints MUST make use of any error messages to very strictly rate-limit requests to candidate IP addresses that don't work out. For example, an ICMP Destination Unreachable message (and, in particular, the port unreachable code for this message) may indicate the lack of a CoAP server on the candidate host, or a CoAP error response code such as 4.05 \"Method Not Allowed\" may indicate unwillingness of a CoAP server to act as a directory server. 4.1.", "comments": "... as promised for yesterday (oops)\nad \"what is sub\": it's DNS-SD's syntax for subtypes (RFC6763 Section 7.1). 
Might also be URL (\"Subtype strings are not required to begin with an underscore, though they often do\").\nLooks good to me.\nThere are several ways specified for finding a host serving an RD: static (unicast or anycast, addresses or DNS? could contain full RD path (as in LWM2M)?) various neighbor discovery derived RDAO the border router (ABRO) \"other ND options that happen to point to servers (such as RDNSS)\" might-be-defined DHCPv6 options multicast (all-coap-nodes + .well-known/core; serves the path by design) There is an open issue to describe how this can be used with DNS-SD (), and it came up 2017-04-07 that discovery via mDNS by those mechanisms might be possible but is not advisable. Can we give a set of discovery options that should be supported by all general-purpose clients and servers to avoid fragmentation where one vendor's clients expect an RDAO while the other vendor's server relies on clients doing a multicast? Can we say anything about which options are recommended to use, and which are rather to be viewed as shortcuts in extremely constrained cases? Using RDNSS would be a candidate for such a statement. Maybe those cases are better described as \"explicitly configured to use whichever RDNSS is announced\", so this is something that's better than static configuration but still not to be expected from a generic client.\nThe new section 4 introduced in URL takes care of this.", "new_text": "4. A device coming up may want to find one or more resource directories to make itself known with. The device may be pre-configured to exercise specific mechanisms for finding the resource directory: It may be configured with a specific IP address for the RD. 
That IP address may also be an anycast address, allowing the network to forward RD requests to an RD that is topologically close; each target network environment in which some of these preconfigured nodes are to be brought up is then configured with a route for this anycast address that leads to an appropriate RD. (Instead of using an anycast address, a multicast address can also be preconfigured. The RD directory servers then need to configure one of their interfaces with this multicast address.) It may be configured with a DNS name for the RD and a resource-record type to look up under this name; it can find a DNS server to perform the lookup using the usual mechanisms for finding DNS servers. It may be configured to use a service discovery mechanism such as DNS-SD RFC6763. The present specification suggests configuring the service with name rd._sub._coap._udp, preferably within the domain of the querying nodes. [_1] For cases where the device is not specifically configured with a way to find a resource directory, the network may want to provide a suitable default. If the address configuration of the network is performed via SLAAC, this is provided by the RDAO option rdao. If the address configuration of the network is performed via DHCP, this could be provided via a DHCP option (no such option is defined at the time of writing). Finally, if neither the device nor the network offer any specific configuration, the device may want to employ heuristics to find a suitable resource directory. The present specification does not fully define these heuristics, but suggests a number of candidates: In a 6LoWPAN, just assume the Edge Router (6LBR) can act as a resource directory (using the ABRO option to find that RFC6775). Confirmation can be obtained by sending a Unicast to "coap://[6LBR]/.well-known/core?rt=core.rd*". 
In a network that supports multicast well, discovering the RD using a multicast query for /.well-known/core as specified in CoRE Link Format RFC6690: Sending a Multicast GET to \"coap://[ff02::1]/.well-known/core?rt=core.rd*\". RDs within the multicast scope will answer the query. As some of the RD addresses obtained by the methods listed here are just (more or less educated) guesses, endpoints MUST make use of any error messages to very strictly rate-limit requests to candidate IP addresses that don't work out. For example, an ICMP Destination Unreachable message (and, in particular, the port unreachable code for this message) may indicate the lack of a CoAP server on the candidate host, or a CoAP error response code such as 4.05 \"Method Not Allowed\" may indicate unwillingness of a CoAP server to act as a directory server. 4.1."}
{"id": "q-en-resource-directory-f1695f2b396e6b11bcc14aaaacd12e0e521e816f58598ad0c1abd2ee0afe2b7d", "old_text": "Changed the lookup interface to accept endpoint and Domain as query string parameters to control the scope of a lookup. ", "comments": "... as promised for yesterday (oops)\nad \"what is sub\": it's DNS-SD's syntax for subtypes (RFC6763 Section 7.1). Might also be URL (\"Subtype strings are not required to begin with an underscore, though they often do\").\nLooks good to me.\nThere are several ways specified for finding a host serving an RD: static (unicast or anycast, addresses or DNS? could contain full RD path (as in LWM2M)?) various neighbor discovery derived RDAO the border router (ABRO) \"other ND options that happen to point to servers (such as RDNSS)\" might-be-defined DHCPv6 options multicast (all-coap-nodes + .well-known/core; serves the path by design) There is an open issue to describe how this can be used with DNS-SD (), and it came up 2017-04-07 that discovery via mDNS by those mechanisms might be possible but is not advisable. Can we give a set of discovery options that should be supported by all general-purpose clients and servers to avoid fragmentation where one vendor's clients expect an RDAO while the other vendor's server relies on clients doing a multicast? Can we say anything about which options are recommended to use, and which are rather to be viewed as shortcuts in extremely constrained cases? Using RDNSS would be a candidate for such a statement. Maybe those cases are better described as \"explicitly configured to use whichever RDNSS is announced\", so this is something that's better than static configuration but still not to be expected from a generic client.\nThe new section 4 introduced in URL takes care of this.", "new_text": "Changed the lookup interface to accept endpoint and Domain as query string parameters to control the scope of a lookup. Editorial Comments [_1] cabo: What is _sub here? "}
{"id": "q-en-resource-directory-7a1b30a439791aea01f74f10e06ba3cf94bc22d6cf656fb19f25f59519496094", "old_text": "POST {+rd}{?ep,d,et,lt,con,extra-attrs*} ", "comments": "As planned in the last authors' meeting: et should become just one of the registrable endpoint attributes (registry is already being prepared as RD parameter registry, but needs some work) lookup should allow using any of those registered attributes lookup should define how endpoint attributes are used in resource lookups (after all, should be possible even though none of the resource links actually carry that attribute) This PR is not final as it is, but will track my branch as I'm assembling (and possibly rewriting/rebasing) commits for this change.\net is not used by Fairhair, but that should not influence the text, I think\nThe bulk of the announced changes is now in the pull request; I'll go through it one more time myself before I merge it into master. (I might still add a big reshuffling change to the lookup interface, because the text has grown in the past by always inserting paragraphs, and they're not necessarily in a logical overall sequence.)\nNAME I only just now merged the current state of the branch into master, the rebuild just completed.\nSee-Also: Some of what could be the review of this PR has come up in URL\nI don't see any reason to keep this section as pointed out during the last telco. When it is removed some examples in Registration update must be changed. 
The request GET /rd/4521 should change to GET /rd-lookup/ep?href=rd/4521\nApologies, that will not return the links of the endpoint. The request GET /rd/4521 should change to GET /rd-lookup/res?ep=/rd/4521 or GET /rd-lookup/res?ep=node1 I am afraid we need to stay with ep name, difficult to use /rd/4521 as ep identifier.\nThe equivalent requests would be GET /rd-lookup/res?ep=node1 and GET /rd-lookup/res?href=/rd/4521 (where the link has neither the ep=node1 nor the href=/rd/4521 link parameter, but its registration does).\nI think that we don't need the GET as long as there are no PATCHes (and I recently saw we don't even define PUT for the registration resource). The discussion has shown that this is one of the points where our understanding of resolved and unresolved addresses is tested: Can we have a resource whose content is \"unresolved link-format\" to operate on? Or do we need to resolve the links even in there, and any PATCH format would need to work semantically from a shared understanding of both the RD and the client? NAME you wanted to meditate on that; have you found deeper truth, or a path there?\nThis is weird indeed: In my understanding this means return all link resource paths with ep name node1 and GET /rd-lookup/res?href=skins/larry/rd/4521 means return all link resource paths with href is skins/larry/rd/4521 and will be empty chrysn wrote on 2017-10-30 13:03:\nreturns resources that either have as their own link href, or are contained in an endpoint or group with that href. This was introduced in because we do not have (due to the extensibility of the link attributes and the lack of a definitive registry) a partition on attribute names that clearly states that one attribute can only be on a group, another only on an endpoint and a third only on a resource. Thus, we need to expect matches from either level -- and once we already have to do this, we can just as well do it with href because it's treated like any other attribute. 
IMO the alternatives to this would be either having registries set up to partition the names so every query parameter is clearly an endpoint, group or resource attribute (probably not going to happen with resource attributes, as requested and denied in RFC5988bis), or we'd need a more complex lookup language than RFC6690-style search -- something like so we could be explicit there. (See also URL )\nOk, you go faster than my shadow. As long as the text covers the example, and the intention is clear. I will need to read the finished text again. chrysn wrote on 2017-10-30 16:29:\n(Moving some comments here from EMail to keep track:) [Peter] I thought we agreed to remove section 5.4.3, [Christian] I had that taken down as \"we should think about it some more\"; I do expect that interface to only become useful when we have a way to patch it.\nGiven the current direction of lookup with construction of absolute URI, we can leave section 5.4.3 in\nThe \"RD Parameters\" subregistry is set up in the IANA considerations, but only referenced at the registration interface. The lookup interface allows lookup by parameters of the same names (and page and count are exclusively used for lookup), but doesn't make a reference to the iana-registry. Next steps (\u2192CA): Add a reference to lookup interface description (and reviewing that text, decide on whether that does create linkage to any other existing registry).", "new_text": "POST {+rd}{?ep,d,lt,con,extra-attrs*} "
{"id": "q-en-resource-directory-7a1b30a439791aea01f74f10e06ba3cf94bc22d6cf656fb19f25f59519496094", "old_text": "parameter is elided, the RD MAY associate the endpoint with a configured default domain. Endpoint Type (optional). The semantic type of the endpoint. This parameter SHOULD be less than 63 bytes. Lifetime (optional). Lifetime of the registration in seconds. Range of 60-4294967295. If no lifetime is included in the initial registration, a default value of 86400 (24 hours)", "comments": "As planned in the last authors' meeting: et should become just one of the registrable endpoint attributes (registry is already being prepared as RD parameter registry, but needs some work) lookup should allow using any of those registered attributes lookup should define how endpoint attributes are used in resource lookups (after all, should be possible even though none of the resource links actually carry that attribute) This PR is not final as it is, but will track my branch as I'm assembling (and possibly rewriting/rebasing) commits for this change.\net is not used by Fairhair, but that should not influence the text, I think\nThe bulk of the announced changes is now in the pull request; I'll go through it one more time myself before I merge it into master. (I might still add a big reshuffling change to the lookup interface, because the text has grown in the past by always inserting paragraphs, and they're not necessarily in a logical overall sequence.)\nNAME I only just now merged the current state of the branch into master, the rebuild just completed.\nSee-Also: Some of what could be the review of this PR has come up in URL\nI don't see any reason to keep this section as pointed out during the last telco. When it is removed some examples i Registration update must be changed. 
The request GET /rd/4521 should change to GET /rd-lookup/ep?href=rd/4521\nApologies, that will not return the links of the endpoint The request GET /rd/4521 should change to GET /rd-lookup/res?ep=/rd/4521 or GET /rd-lookup/res?ep=node1 I am afraid we need to stay with ep name, difficult to use /rd/4521 as ep identifier.\nThe equivalent requests would be GET /rd-lookup/res?ep=node1 and GET /rd-lookup/res?href=/rd/4521 (where the link has neither the ep=node1 nor the href=/rd/4521 link parameter, but its registration does).\nI think that we don't need the GET as long as there are no PATCHes (and I recently saw we don't even define PUT for the registration resource). The discussion has shown that this is one of the points where our understanding of resolved and unresolved addresses is tested: Can we have a resource whose content is \"unresolved link-format\" to operate on? Or do we need to resolve the links even in there, and any PATCH format would need to work semantically from a shared understanding of both the RD and the client? NAME you wanted to meditate on that; have you found deeper truth, or a path there?\nThis weird indeed: In my understanding this means return all link resource paths with ep name node1 and GET /rd-lookup/res?href=skins/larry/rd/4521 means return all link resource paths with href is skins/larry/rd/4521 and will be empty chrysn schreef op 2017-10-30 13:03:\nreturns resources that either have as their own link href, or are contained in an endpoint or group with that href. This was introduced in because we do not have (due to the extensibility of the link attributes and the lack of a definitive registry) a partition on attribute names that clearly states that one attribute can only be on a group, another only on an endpoint and a third only on a resource. Thus, we need to expect matches from either level -- and once we already have to do this, we can just as well do it with href because it's treated like any other attribute. 
IMO the alternatives to this would be either having registries set up to partition the names so every query parameter is clearly an endpoint, group or resource attribute (probably not going to happen with resource attributes, as requested and denied in RFC5988bis), or we'd need a more complex lookup language than RFC6690-style search -- something like so we could be explicit there. (See also URL )\nOk, you go faster than my shadow. As long as the text covers the example, and the intention is clear. I will need to read the finished text again. chrysn schreef op 2017-10-30 16:29:\n(Moving some comments here from EMail to keep track:) [Peter] I thought we agreed to remove section 5.4.3, [Christian] I had that taken down as \"we should think about it some more\"; I do expect that interfact to only become useful when we have a way to patch it.\nGiven the current direction of lookup with construction of absolute URI, we can leave section 5.4.3 in\nThe \"RD Parameters\" subregistry is set up in the IANA considerations, but only referenced at the registration interface. The lookup interface allows lookup by parameters of the same names (and page and count are exclusively used for lookup), but doesn't make a reference to the iana-registry. Next steps (\u2192CA): Add a reference to lookup interface description (and reviewing that text, decide on whether that does create linkage to any other existing registry).", "new_text": "parameter is elided, the RD MAY associate the endpoint with a configured default domain. Lifetime (optional). Lifetime of the registration in seconds. Range of 60-4294967295. If no lifetime is included in the initial registration, a default value of 86400 (24 hours)"}
{"id": "q-en-resource-directory-7a1b30a439791aea01f74f10e06ba3cf94bc22d6cf656fb19f25f59519496094", "old_text": "the link-format payload to register. The endpoint MUST include the endpoint name and MAY include the registration parameters d, lt, et and extra-attrs, in the POST request as per registration. The context of the registration is taken from the requesting server's URI. The endpoints MUST be deleted after the expiration of their lifetime. Additional operations cannot be executed because no registration", "comments": "As planned in the last authors' meeting: et should become just one of the registrable endpoint attributes (registry is already being prepared as RD parameter registry, but needs some work) lookup should allow using any of those registered attributes lookup should define how endpoint attributes are used in resource lookups (after all, should be possible even though none of the resource links actually carry that attribute) This PR is not final as it is, but will track my branch as I'm assembling (and possibly rewriting/rebasing) commits for this change.\net is not used by Fairhair, but that should not influence the text, I think\nThe bulk of the announced changes is now in the pull request; I'll go through it one more time myself before I merge it into master. (I might still add a big reshuffling change to the lookup interface, because the text has grown in the past by always inserting paragraphs, and they're not necessarily in a logical overall sequence.)\nNAME I only just now merged the current state of the branch into master, the rebuild just completed.\nSee-Also: Some of what could be the review of this PR has come up in URL\nI don't see any reason to keep this section as pointed out during the last telco. When it is removed some examples i Registration update must be changed. 
The request GET /rd/4521 should change to GET /rd-lookup/ep?href=rd/4521\nApologies, that will not return the links of the endpoint The request GET /rd/4521 should change to GET /rd-lookup/res?ep=/rd/4521 or GET /rd-lookup/res?ep=node1 I am afraid we need to stay with ep name, difficult to use /rd/4521 as ep identifier.\nThe equivalent requests would be GET /rd-lookup/res?ep=node1 and GET /rd-lookup/res?href=/rd/4521 (where the link has neither the ep=node1 nor the href=/rd/4521 link parameter, but its registration does).\nI think that we don't need the GET as long as there are no PATCHes (and I recently saw we don't even define PUT for the registration resource). The discussion has shown that this is one of the points where our understanding of resolved and unresolved addresses is tested: Can we have a resource whose content is \"unresolved link-format\" to operate on? Or do we need to resolve the links even in there, and any PATCH format would need to work semantically from a shared understanding of both the RD and the client? NAME you wanted to meditate on that; have you found deeper truth, or a path there?\nThis weird indeed: In my understanding this means return all link resource paths with ep name node1 and GET /rd-lookup/res?href=skins/larry/rd/4521 means return all link resource paths with href is skins/larry/rd/4521 and will be empty chrysn schreef op 2017-10-30 13:03:\nreturns resources that either have as their own link href, or are contained in an endpoint or group with that href. This was introduced in because we do not have (due to the extensibility of the link attributes and the lack of a definitive registry) a partition on attribute names that clearly states that one attribute can only be on a group, another only on an endpoint and a third only on a resource. Thus, we need to expect matches from either level -- and once we already have to do this, we can just as well do it with href because it's treated like any other attribute. 
IMO the alternatives to this would be either having registries set up to partition the names so every query parameter is clearly an endpoint, group or resource attribute (probably not going to happen with resource attributes, as requested and denied in RFC5988bis), or we'd need a more complex lookup language than RFC6690-style search -- something like so we could be explicit there. (See also URL )\nOk, you go faster than my shadow. As long as the text covers the example, and the intention is clear. I will need to read the finished text again. chrysn schreef op 2017-10-30 16:29:\n(Moving some comments here from EMail to keep track:) [Peter] I thought we agreed to remove section 5.4.3, [Christian] I had that taken down as \"we should think about it some more\"; I do expect that interfact to only become useful when we have a way to patch it.\nGiven the current direction of lookup with construction of absolute URI, we can leave section 5.4.3 in\nThe \"RD Parameters\" subregistry is set up in the IANA considerations, but only referenced at the registration interface. The lookup interface allows lookup by parameters of the same names (and page and count are exclusively used for lookup), but doesn't make a reference to the iana-registry. Next steps (\u2192CA): Add a reference to lookup interface description (and reviewing that text, decide on whether that does create linkage to any other existing registry).", "new_text": "the link-format payload to register. The endpoint MUST include the endpoint name and MAY include the registration parameters d, lt and extra-attrs, in the POST request as per registration. The context of the registration is taken from the requesting server's URI. The endpoints MUST be deleted after the expiration of their lifetime. Additional operations cannot be executed because no registration"}
{"id": "q-en-resource-directory-7a1b30a439791aea01f74f10e06ba3cf94bc22d6cf656fb19f25f59519496094", "old_text": "Endpoint and group lookups result in links to the selected registration resource and group resources. Endpoint registration resources are annotated with their endpoint names (ep), domains (d, if present), context (con), endpoint type (et, if present) and lifetime (lt, if present). Additional endpoint attributes are added as link attributes to their endpoint link unless their specification says otherwise. Group resources are annotated with their group names (gp), domain (d, if present) and multicast address (con, if present). While Endpoint Lookup does expose the registration resources, the RD does not need to make them accessible to clients. Clients SHOULD NOT", "comments": "As planned in the last authors' meeting: et should become just one of the registrable endpoint attributes (registry is already being prepared as RD parameter registry, but needs some work) lookup should allow using any of those registered attributes lookup should define how endpoint attributes are used in resource lookups (after all, should be possible even though none of the resource links actually carry that attribute) This PR is not final as it is, but will track my branch as I'm assembling (and possibly rewriting/rebasing) commits for this change.\net is not used by Fairhair, but that should not influence the text, I think\nThe bulk of the announced changes is now in the pull request; I'll go through it one more time myself before I merge it into master. 
(I might still add a big reshuffling change to the lookup interface, because the text has grown in the past by always inserting paragraphs, and they're not necessarily in a logical overall sequence.)\nNAME I only just now merged the current state of the branch into master, the rebuild just completed.\nSee-Also: Some of what could be the review of this PR has come up in URL\nI don't see any reason to keep this section as pointed out during the last telco. When it is removed some examples i Registration update must be changed. The request GET /rd/4521 should change to GET /rd-lookup/ep?href=rd/4521\nApologies, that will not return the links of the endpoint The request GET /rd/4521 should change to GET /rd-lookup/res?ep=/rd/4521 or GET /rd-lookup/res?ep=node1 I am afraid we need to stay with ep name, difficult to use /rd/4521 as ep identifier.\nThe equivalent requests would be GET /rd-lookup/res?ep=node1 and GET /rd-lookup/res?href=/rd/4521 (where the link has neither the ep=node1 nor the href=/rd/4521 link parameter, but its registration does).\nI think that we don't need the GET as long as there are no PATCHes (and I recently saw we don't even define PUT for the registration resource). The discussion has shown that this is one of the points where our understanding of resolved and unresolved addresses is tested: Can we have a resource whose content is \"unresolved link-format\" to operate on? Or do we need to resolve the links even in there, and any PATCH format would need to work semantically from a shared understanding of both the RD and the client? 
NAME you wanted to meditate on that; have you found deeper truth, or a path there?\nThis weird indeed: In my understanding this means return all link resource paths with ep name node1 and GET /rd-lookup/res?href=skins/larry/rd/4521 means return all link resource paths with href is skins/larry/rd/4521 and will be empty chrysn schreef op 2017-10-30 13:03:\nreturns resources that either have as their own link href, or are contained in an endpoint or group with that href. This was introduced in because we do not have (due to the extensibility of the link attributes and the lack of a definitive registry) a partition on attribute names that clearly states that one attribute can only be on a group, another only on an endpoint and a third only on a resource. Thus, we need to expect matches from either level -- and once we already have to do this, we can just as well do it with href because it's treated like any other attribute. IMO the alternatives to this would be either having registries set up to partition the names so every query parameter is clearly an endpoint, group or resource attribute (probably not going to happen with resource attributes, as requested and denied in RFC5988bis), or we'd need a more complex lookup language than RFC6690-style search -- something like so we could be explicit there. (See also URL )\nOk, you go faster than my shadow. As long as the text covers the example, and the intention is clear. I will need to read the finished text again. 
chrysn schreef op 2017-10-30 16:29:\n(Moving some comments here from EMail to keep track:) [Peter] I thought we agreed to remove section 5.4.3, [Christian] I had that taken down as \"we should think about it some more\"; I do expect that interfact to only become useful when we have a way to patch it.\nGiven the current direction of lookup with construction of absolute URI, we can leave section 5.4.3 in\nThe \"RD Parameters\" subregistry is set up in the IANA considerations, but only referenced at the registration interface. The lookup interface allows lookup by parameters of the same names (and page and count are exclusively used for lookup), but doesn't make a reference to the iana-registry. Next steps (\u2192CA): Add a reference to lookup interface description (and reviewing that text, decide on whether that does create linkage to any other existing registry).", "new_text": "Endpoint and group lookups result in links to the selected registration resource and group resources. Endpoint registration resources are annotated with their endpoint names (ep), domains (d, if present), context (con) and lifetime (lt, if present). Additional endpoint attributes are added as link attributes to their endpoint link unless their specification says otherwise. Group resources are annotated with their group names (gp), domain (d, if present) and multicast address (con, if present). While Endpoint Lookup does expose the registration resources, the RD does not need to make them accessible to clients. Clients SHOULD NOT"}
{"id": "q-en-resource-directory-7a1b30a439791aea01f74f10e06ba3cf94bc22d6cf656fb19f25f59519496094", "old_text": "that the registry will receive between 5 and 50 registrations in total over the next years. 10. Two examples are presented: a Lighting Installation example in lt-ex", "comments": "As planned in the last authors' meeting: et should become just one of the registrable endpoint attributes (registry is already being prepared as RD parameter registry, but needs some work) lookup should allow using any of those registered attributes lookup should define how endpoint attributes are used in resource lookups (after all, should be possible even though none of the resource links actually carry that attribute) This PR is not final as it is, but will track my branch as I'm assembling (and possibly rewriting/rebasing) commits for this change.\net is not used by Fairhair, but that should not influence the text, I think\nThe bulk of the announced changes is now in the pull request; I'll go through it one more time myself before I merge it into master. (I might still add a big reshuffling change to the lookup interface, because the text has grown in the past by always inserting paragraphs, and they're not necessarily in a logical overall sequence.)\nNAME I only just now merged the current state of the branch into master, the rebuild just completed.\nSee-Also: Some of what could be the review of this PR has come up in URL\nI don't see any reason to keep this section as pointed out during the last telco. When it is removed some examples i Registration update must be changed. 
The request GET /rd/4521 should change to GET /rd-lookup/ep?href=rd/4521\nApologies, that will not return the links of the endpoint The request GET /rd/4521 should change to GET /rd-lookup/res?ep=/rd/4521 or GET /rd-lookup/res?ep=node1 I am afraid we need to stay with ep name, difficult to use /rd/4521 as ep identifier.\nThe equivalent requests would be GET /rd-lookup/res?ep=node1 and GET /rd-lookup/res?href=/rd/4521 (where the link has neither the ep=node1 nor the href=/rd/4521 link parameter, but its registration does).\nI think that we don't need the GET as long as there are no PATCHes (and I recently saw we don't even define PUT for the registration resource). The discussion has shown that this is one of the points where our understanding of resolved and unresolved addresses is tested: Can we have a resource whose content is \"unresolved link-format\" to operate on? Or do we need to resolve the links even in there, and any PATCH format would need to work semantically from a shared understanding of both the RD and the client? NAME you wanted to meditate on that; have you found deeper truth, or a path there?\nThis is weird indeed: In my understanding this means return all link resource paths with ep name node1 and GET /rd-lookup/res?href=skins/larry/rd/4521 means return all link resource paths with href is skins/larry/rd/4521 and will be empty chrysn wrote on 2017-10-30 13:03:
IMO the alternatives to this would be either having registries set up to partition the names so every query parameter is clearly an endpoint, group or resource attribute (probably not going to happen with resource attributes, as requested and denied in RFC5988bis), or we'd need a more complex lookup language than RFC6690-style search -- something like so we could be explicit there. (See also URL )\nOk, you go faster than my shadow. As long as the text covers the example, and the intention is clear. I will need to read the finished text again. chrysn wrote on 2017-10-30 16:29:
Endpoint types can be passed in the \"et\" query parameter as part of extra-attrs at the Registration step, are shown on endpoint lookups using the \"et\" target attribute, and can be filtered for using \"et\" as a search criterion in resource and endpoint lookup. Multiple endpoint types are given as separate query parameters or link attributes. Note that Endpoint Type differs from Resource Type in that it uses multiple attributes rather than space separated values. As a result, Resource Directory implementations automatically support correct filtering in the lookup interfaces from the rules for unknown endpoint attributes. This specification establishes a new sub-registry under \"CoRE Parameters\" called \"Endpoint Type\". The registry properties (required policy, requirements, template) are identical to those of the Resource Type parameters in RFC6690. The registry is initially empty. 10. Two examples are presented: a Lighting Installation example in lt-ex"}
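The record above describes how multiple "et" values appear as separate query parameters / link attributes and how filtering follows the rules for unknown endpoint attributes. A minimal, purely illustrative sketch (all names here, such as `lookup` and the sample endpoint data, are hypothetical and not from the draft) of that matching rule:

```python
# Illustrative sketch only: one way an RD implementation could filter
# lookups when an attribute such as "et" occurs multiple times on a link.
# Attribute values are modelled as lists; a link matches a criterion if
# any requested value equals any of the link's values for that attribute.

def lookup(links, **criteria):
    """Return links satisfying every search criterion."""
    def matches(link, attr, wanted):
        return any(w in link.get(attr, []) for w in wanted)

    return [link for link in links
            if all(matches(link, a, v) for a, v in criteria.items())]

endpoints = [
    {"href": ["/rd/4521"], "ep": ["node1"],
     "et": ["oic.d.sensor", "tag:example.com,2017:fan"]},
    {"href": ["/rd/4522"], "ep": ["node2"], "et": ["oic.d.light"]},
]

# Rough equivalent of GET /rd-lookup/ep?et=oic.d.sensor
for link in lookup(endpoints, et=["oic.d.sensor"]):
    print(link["ep"][0])
```

Because every attribute is treated uniformly, the same filter also handles "href" or "ep" criteria without a partition of names into endpoint/group/resource attributes, which is the point debated in the comments.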
{"id": "q-en-resource-directory-70e9fc49ac54fbcb5835f08e434fec88033a902d998daac4e6be8fb85b998f3c", "old_text": "RD registration URI (mandatory). This is the location of the RD, as obtained from discovery. Endpoint name (mandatory). The endpoint name is an identifier that MUST be unique within a domain. The maximum length of this parameter is 63 bytes. Domain (optional). The domain to which this endpoint belongs. The maximum length of this parameter is 63 bytes. When this", "comments": "This allows registration without endpoint names, which is important for clients that only keep their absolutely essential cryptographic state (see URL, which this closes). The wording in simple registration was changed to not repeat the individual parameters of registration but to only refer there.\nCurrently, we prescribe that the registering endpoint needs to identify itself with at least an endpoint name, while we allow the RD to be configured to assign a domain based on its configuration. We could allow the endpoint identifier to be absent as well, at least in principle (eg. make it a \"SHOULD be set\" but no syntactical requirement on its presence), and the RD's configuration then would need to set an endpoint name (like it can now do with a domain), or the RD would refuse the registration like it can now do for unauthenticated hosts. (The RD interface right now doesn't allow a 4.01 Unauthorized response, even though the security considerations indicate that that can happen.) This would allow endpoints that have just enough persistent state to have their cryptographic identity in form of a PSK to register at the RD. Note that for those cases (and PSK seems not to be too uncommon in the CoRE security scenarios), the RD already needs to have received (or have a way to fetch) configuration about that key, which easily can contain a default (or the only permissible) endpoint name for that key. 
I'd write a pull request to that respect unless controversial discussion about this starts here. (This was fleshed out from in a discussion with Pekka Nikander during ACM ICN about very constrained devices).\nFirst, domain is optional. The RD should not invent a domain for registrations that did not include a domain parameter. If the RD sets the endpoint name, then a client should be able to learn it from the registration resource, which will then depend on the implementation of read from the rd resource returning the registration parameters in the link for the registration resource, and/or the inclusion of a \"self\" link in the registration resource that includes \"ep\" as a link attribute.\nCurrently, we say that \"When this parameter is elided, the RD MAY associate the endpoint with a configured default domain.\" (And it doesn't say that that default would be the absent domain). It'd need to learn it from the registration resource's metadata (because the registration resource contains right what the host posted, and a self-link would be semantically ambiguous there), but it can be looked up like that: POST /rd: ;rt=\"temp\" Created, Location: /registration/42 GET /rd-lookup/ep?href=/registration/42 Content: ;con=\"coaps://your-address\";ep=\"what-i-decided\" I doubt that those nodes would be really interested in doing that (I reckon they'd even tend to do simple registrations which won't even tell them their registration resource), but they can. Best regards Christian\nTo add to my example, the client could just as well have asked to get the same result, although I'd recommend that a client in that situation should use the query instead. NAME would that lookup interface address your concerns? Unless not, I'd go ahead and write a patch that allows such behavior (but makes it clear that this can only be used where explicitly configured or allowed in a given setting).\nThe changes in implement the proposed. 
With many things now using registration resources rather than names (cf. ), I frankly don't see a reason to mandate names anywhere (Why would a group need a name? The operations don't require it, and we sure can store one if the user desires; same for the end point\u2026), but I won't stir that any further (we should finish rather than starting such discussions; I'd be game for it, but milestones says hurry), as just allowing the name to be configured is sufficient for the ACM ICN use case.", "new_text": "RD registration URI (mandatory). This is the location of the RD, as obtained from discovery. Endpoint name (mostly mandatory). The endpoint name is an identifier that MUST be unique within a domain. The maximum length of this parameter is 63 bytes. If the RD is configured to recognize the endpoint (e.g. based on its security context), the endpoint can elide the endpoint name, and the RD assigns one based on the configuration. Domain (optional). The domain to which this endpoint belongs. The maximum length of this parameter is 63 bytes. When this"}
{"id": "q-en-resource-directory-70e9fc49ac54fbcb5835f08e434fec88033a902d998daac4e6be8fb85b998f3c", "old_text": "requests at the requesting server's default discovery URI to obtain the link-format payload to register. The endpoint MUST include the endpoint name and MAY include the registration parameters d, lt and extra-attrs, in the POST request as per registration. The context of the registration is taken from the requesting server's URI. The endpoints MUST be deleted after the expiration of their lifetime. Additional operations cannot be executed because no registration", "comments": "This allows registration without endpoint names, which is important for clients that only keep their absolutely essential cryptographic state (see URL, which this closes). The wording in simple registration was changed to not repeat the individual parameters of registration but to only refer there.\nCurrently, we prescribe that the registering endpoint needs to identify itself with at least an endpoint name, while we allow the RD to be configured to assign a domain based on its configuration. We could allow the endpoint identifier to be absent as well, at least in principle (eg. make it a \"SHOULD be set\" but no syntactical requirement on its presence), and the RD's configuration then would need to set an endpoint name (like it can now do with a domain), or the RD would refuse the registration like it can now do for unauthenticated hosts. (The RD interface right now doesn't allow a 4.01 Unauthorized response, even though the security considerations indicate that that can happen.) This would allow endpoints that have just enough persistent state to have their cryptographic identity in form of a PSK to register at the RD. 
Note that for those cases (and PSK seems not to be too uncommon in the CoRE security scenarios), the RD already needs to have received (or have a way to fetch) configuration about that key, which easily can contain a default (or the only permissible) endpoint name for that key. I'd write a pull request to that respect unless controversial discussion about this starts here. (This was fleshed out from in a discussion with Pekka Nikander during ACM ICN about very constrained devices).\nFirst, domain is optional. The RD should not invent a domain for registrations that did not include a domain parameter. If the RD sets the endpoint name, then a client should be able to learn it from the registration resource, which will then depend on the implementation of read from the rd resource returning the registration parameters in the link for the registration resource, and/or the inclusion of a \"self\" link in the registration resource that includes \"ep\" as a link attribute.\nCurrently, we say that \"When this parameter is elided, the RD MAY associate the endpoint with a configured default domain.\" (And it doesn't say that that default would be the absent domain). It'd need to learn it from the registration resource's metadata (because the registration resource contains right what the host posted, and a self-link would be semantically ambiguous there), but it can be looked up like that: POST /rd: ;rt=\"temp\" Created, Location: /registration/42 GET /rd-lookup/ep?href=/registration/42 Content: ;con=\"coaps://your-address\";ep=\"what-i-decided\" I doubt that those nodes would be really interested in doing that (I reckon they'd even tend to do simple registrations which won't even tell them their registration resource), but they can. Best regards Christian\nTo add to my example, the client could just as well have asked to get the same result, although I'd recommend that a client in that situation should use the query instead. 
NAME would that lookup interface address your concerns? Unless not, I'd go ahead and write a patch that allows such behavior (but makes it clear that this can only be used where explicitly configured or allowed in a given setting).\nThe changes in implement the proposed. With many things now using registration resources rather than names (cf. ), I frankly don't see a reason to mandate names anywhere (Why would a group need a name? The operations don't require it, and we sure can store one if the user desires; same for the end point\u2026), but I won't stir that any further (we should finish rather than starting such discussions; I'd be game for it, but milestones says hurry), as just allowing the name to be configured is sufficient for the ACM ICN use case.", "new_text": "requests at the requesting server's default discovery URI to obtain the link-format payload to register. The endpoint includes the same registration parameters in the POST request as it would per registration. The context of the registration is taken from the requesting server's URI. The endpoints MUST be deleted after the expiration of their lifetime. Additional operations cannot be executed because no registration"}
{"id": "q-en-rfc-censorship-tech-8f035ac013554c63c3b5f7be8c363071661ff3bf7277ce8d867fa3154339cd3f", "old_text": "Internet Backbone: If a censor controls the gateways into a region, they can filter undesirable traffic that is traveling into and out of the region by sniffing and mirroring at the relevant exchange points. Censorship at this point-of-control is most effective at controlling the flow of information between a region and the rest of the Internet, but is ineffective at identifying content traveling between the users within a region. Internet Service Providers: Internet Service Providers are perhaps the most natural point-of-control. They have a benefit of being easily enumerable by a censor paired with the ability to identify the regional and international traffic of all their users. The censor's filtration mechanisms can be placed on an ISP via", "comments": "From Stephane:\nShouldn't the CA / cert hierarchy be a point of control? For example, KZ is requiring users to install their CA, to allow them to MITM all traffic: URL Another example of CA abuse is ROA whacking, RPKI manipulation, etc.: URL\nAlso got this feedback from Will Scott:", "new_text": "Internet Backbone: If a censor controls the gateways into a region, they can filter undesirable traffic that is traveling into and out of the region by sniffing and mirroring at the relevant exchange points. Censorship at this point of control is most effective at controlling the flow of information between a region and the rest of the Internet, but is ineffective at identifying content traveling between the users within a region. Internet Service Providers: Internet Service Providers are perhaps the most natural point of control. They have a benefit of being easily enumerable by a censor paired with the ability to identify the regional and international traffic of all their users. The censor's filtration mechanisms can be placed on an ISP via"}
{"id": "q-en-rfc-censorship-tech-8f035ac013554c63c3b5f7be8c363071661ff3bf7277ce8d867fa3154339cd3f", "old_text": "than potentially excluding content, users, or uses of their service. At all levels of the network hierarchy, the filtration mechanisms used to detect undesirable traffic are essentially the same: a censor sniffs transmitting packets and identifies undesirable content, and", "comments": "From Stephane:\nShouldn't the CA / cert hierarchy be a point of control? For example, KZ is requiring users to install their CA, to allow them to MITM all traffic: URL Another example of CA abuse is ROA whacking, RPKI manipulation, etc.: URL\nAlso got this feedback from Will Scott:", "new_text": "than potentially excluding content, users, or uses of their service. Certificate Authorities: Authorities that issue cryptographically secured resources can be a significant point of control. Certificate Authorities that issue certificates to domain holders for TLS/HTTPS or Regional/Local Internet Registries that issue Route Origination Authorizations to BGP operators can be forced to issue rogue certificates that may allow compromises in confidentiality guarantees - allowing censorship software to engage in identification and interference where not possible before - or integrity guarantees - allowing, for example, adversarial routing of traffic. At all levels of the network hierarchy, the filtration mechanisms used to detect undesirable traffic are essentially the same: a censor sniffs transmitting packets and identifies undesirable content, and"}
{"id": "q-en-rfc-censorship-tech-8f035ac013554c63c3b5f7be8c363071661ff3bf7277ce8d867fa3154339cd3f", "old_text": "domain all other DNS servers will be unable to properly forward and cache the site. Domain name registration is only really a risk where undesirable content is hosted on TLD controlled by the censoring country, such as .cn or .ru Anderson-2011. 6.3.", "comments": "From Stephane:\nShouldn't the CA / cert hierarchy be a point of control? For example, KZ is requiring users to install their CA, to allow them to MITM all traffic: URL Another example of CA abuse is ROA whacking, RPKI manipulation, etc.: URL\nAlso got this feedback from Will Scott:", "new_text": "domain all other DNS servers will be unable to properly forward and cache the site. Domain name registration is only really a risk where undesirable content is hosted on TLD controlled by the censoring country, such as .cn or .ru Anderson-2011 or where legal processes in countries like the United States result in domain name seizures and/or DNS redirection by the government Kopel-2013. 6.3."}
{"id": "q-en-rfc7807bis-62cfd15a20e1c404ee38a358939f9c21c786b3a27c0b0278f6e6e8c2423c6715", "old_text": "possible, and that when relative URIs are used, they include the full path (e.g., \"/types/123\"). 3.1.2. The \"status\" member is a JSON number indicating the HTTP status code", "comments": "NAME done; see change. P.S. GitHub lets you make suggestions using the little +/- icon in the inline comment UX :)\nPulling this conversation out of : It's clear that there's demand for error codes scoped to the API, without creating documentation pages for each code and linking to them in the field. There are many ways that implementers could do this, from linking to non-existent URLs to URNs to tags URIs, etc. It would be useful if the specification included examples of this, and made a clear recommendation for this case.\nWe (well, mostly NAME have made some good progress on the other issues raised out of but we haven't looked at this one yet. As strawman to get this moving, how about: we register a URN namespace id, and we recommend that non-dereferenceable URIs should be URNs using that, namespaced with an appropriate domain and raw type id, for example: . Would that work? It would give us a clear solution to recommend, which is explicit but easy to read, with minimal extra complexity or overhead for implementers (imo). I don't have examples, but I suspect there are other standards elsewhere that would appreciate having a problem id namespace for URNs as well. I don't love using a domain for namespacing a non-locator, but I think some namespacing is preferable, and I'm not sure what other namespaces would be suitable. Alternatives very welcome :smile:\nWe penciled in the URI prefix for registered types as \"URL\" in , but that was not finalised. I think registering one or more problems as exemplars is a good path forward for this and other issues (e.g., as discussed in ). 
We need to be very careful to set a good example in them, which means we need good candidates; I'm thinking 'JSON Validation Error' and 'XML Validation Error' might be appropriate (perhaps along with corresponding wellformedness errors). If we choose a urn, I suspect there will be considerable pressure to use or something similar, as per .\nmy understanding of the \"urn:ietf:params\" namespace is that it is meant to be used for parameters managed/defined by IETF specs. i think the original's suggestion intent was to have a \"safe prefix\" for a namespace which then would be \"unmanaged\" with some conventions around that, such as using domains names. i think the idea of giving examples (and even a suggested namespace along with conventions for using it) is a good one. that way we can clearly support both scenarios: if you're planning on making documentation available, use a resolvable URI but keep in mind that this best should be managed in a way so that it works for a good number of years. if you're planning on just identifying problem types, use a non-resolvable URI, and here's a recipe for doing it.\n:+1: exactly I think is relevant in that it would be helpful to everybody if we can follow our own recommendations in our own problem id registry, but it's otherwise independent.\nthe difficulty may be that IETF has an institutional approach for managing parameters (urn:ietf:params:http-problems mentioned by NAME above), whereas most orgs minting problem types probably won't have one and won't create one. which may lead to us to saying three things about what kind of URI to use: check out common problem types registered under urn:ietf:params:http-problems and consider reusing one if it fits. use URIs when creating dereferencable problem type identifiers. when using non-dereferencable problem type identifiers be clear about that by using non-dereferencable URIs, and here's an URN-based recipe how to do that.\nAh, I see, ok. 
Yes, that makes sense, and given that the 3 part recommendation seems reasonable to me.\nto be used for parameters managed/defined by IETF specs. No, it's for parameter name spaces managed by IETF documents -- which this would be. We could also give an example that uses a URI.\nthat's what i wanted to say. so these URIs/examples would just cover the common problem types that would be managed by the spec. which then would be for those other types that people/orgs are minting and where they say these are not resolvable, right? i think for those terry proposed a URN namespace instead of tag URIs, but either way it simply would help to state the use case (\"your own types, not intended to be resolvable\") and provide a recipe (\"here's one way to do it\"). the main point seems to be: the spec should mention both \"common types\" which may even get a registry, and \"private types\" which probably are using a different namespace, and we could suggest one pattern for doing that.", "new_text": "possible, and that when relative URIs are used, they include the full path (e.g., \"/types/123\"). The type URI can also be a non-resolvable URI. For example, the tag URI scheme RFC4151 can be used to uniquely identify problem types: Non-resolvable URIs ought not be used when there is some future possibility that it might become desirable to do so. For example, if the URI above were used in an API and later a tool was adopted that resolves type URIs to discover information about the error, taking advantage of that capability would require switching to a resolvable URI, thereby creating a new identity for the problem type and thus introducing a breaking change. 3.1.2. The \"status\" member is a JSON number indicating the HTTP status code"}
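The record above mentions the RFC 4151 tag URI scheme as one recipe for minting non-resolvable problem-type identifiers. A hedged sketch of that pattern (the authority name, date, and `problem_type_tag` helper are all invented for illustration, not taken from the spec):

```python
# Hypothetical illustration of the RFC 4151 "tag" URI layout:
#   tag-URI = "tag:" taggingEntity ":" specific
#   taggingEntity = authorityName "," date
# The minting organisation uses a domain it controlled on the given date.

def problem_type_tag(authority, date, specific):
    return f"tag:{authority},{date}:{specific}"

uri = problem_type_tag("example.com", "2021-09-17", "out-of-stock")
print(uri)  # -> tag:example.com,2021-09-17:out-of-stock
```

Such a URI identifies the problem type uniquely without implying it can be dereferenced, which matches the "private types, not intended to be resolvable" case discussed in the comments.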
{"id": "q-en-rfc8447bis-c97cd9dec30a8c9b9459429f9a7c034b0bfa0414dc4367247a4add0af0576887", "old_text": "Changed the registry policy to: Updated the \"Reference\" to also refer to this document. See expert-pool for additional information about the designated expert pool.", "comments": "Tweaks to add the D values, explanations about D, etc.\nWe might want to check that the subsequent registries all have an appropriate value in the column.", "new_text": "Changed the registry policy to: Updated the \"Reference\" to refer to this document instead of RFC8447. See expert-pool for additional information about the designated expert pool."}
{"id": "q-en-rfc8447bis-c97cd9dec30a8c9b9459429f9a7c034b0bfa0414dc4367247a4add0af0576887", "old_text": "Add a \"Recommended\" column with the contents as listed below. This table has been generated by marking Standards Track RFCs as \"Y\" and all others as \"N\". The \"Recommended\" column is assigned a value of \"N\" unless explicitly requested, and adding a value with a \"Recommended\" value of \"Y\" requires Standards Action RFC8126. IESG Approval is REQUIRED for a Y->N transition. IANA [SHALL update/has added] the following notes: The extensions added by I-D.ietf-tls-rfc8446bis are omitted from the above table; additionally, token_binding is omitted, since RFC8472 specifies the value of the \"Recommended\" column as for this extension. I-D.ietf-tls-rfc8446bis also uses the TLS ExtensionType Values registry originally created in RFC4366. The following text is from", "comments": "Tweaks to add the D values, explanations about D, etc.\nWe might want to check that the subsequent registries all have an appropriate value in the column.", "new_text": "Add a \"Recommended\" column with the contents as listed below. This table has been generated by marking Standards Track RFCs as \"Y\", \"truncated_hmac\" (4) and \"connection_id (deprecated)\" (53) as \"D\", and all others as \"N\". The \"Recommended\" column is assigned a value of \"N\" unless explicitly requested, and adding a value with a \"Recommended\" value of \"Y\" requires Standards Action RFC8126. IESG Approval is REQUIRED for Y->N, Y->D, and D->Y|N transitions. IANA [SHALL update/has added] the following notes: The extensions added by I-D.ietf-tls-rfc8446bis are omitted from the above table.
Likewise, extensions defined after RFC8447 are also not listed in the table as those RFCs specify the value of the extension's \"Recommended\"; extension points defined after RFC8447 include token_binding, compress_certificate, record_size_limit, pwd_protect, pwd_clear, password_salt, ticket_pinning, tls_cert_with_extern_psk, delegated_credentials, supported_ekt_ciphers, connection_id, external_id_hash, external_session_id, quic_transport_parameters, ticket_request, and dnssec_chain. I-D.ietf-tls-rfc8446bis also uses the TLS ExtensionType Values registry originally created in RFC4366. The following text is from"}
{"id": "q-en-rtcweb-overview-bba83896596f671c20373486e41abb34e8dcaeb58a57e71a5fd190817e44aa70", "old_text": "gateway to convert between the signaling on the web side to the SIP signaling may be needed. When a new codec is specified, and the SDP for the new codec is specified in the MMUSIC WG, no other standardization should be required for it to be possible to use that in the web browsers. Adding new codecs which might have new SDP parameters should not change the APIs between the browser and Javascript application. As soon as the browsers support the new codecs, old applications written before the codecs were specified should automatically be able to use the new codecs where appropriate with no changes to the JS applications. The particular choices made for WebRTC, and their implications for the API offered by a browser implementing WebRTC, are described in I-", "comments": "Tweaks to future proof document.\nSpencer had the following comment: I found the reference to MMUSIC WG in When a new codec is specified, and the SDP for the new codec is specified in the MMUSIC WG, no other standardization should be required for it to be possible to use that in the web browsers. to be odd. MMUSIC may be around forever, but this work might be refactored at some point in the future. Is the point that When SDP for a new codec is specified, no other standardization should be required for it to be used in the web browsers. Or is there another way to say this? I'm also wondering if the statement is true for any WebRTC endpoint, not just browsers.", "new_text": "gateway to convert between the signaling on the web side to the SIP signaling may be needed. When an SDP for a new codec is specified, no other standardization should be required for it to be possible to use that in the web browsers. Adding new codecs which might have new SDP parameters should not change the APIs between the browser and Javascript application. 
As soon as the browsers support the new codecs, old applications written before the codecs were specified should automatically be able to use the new codecs where appropriate with no changes to the JS applications. The particular choices made for WebRTC, and their implications for the API offered by a browser implementing WebRTC, are described in I-"}
{"id": "q-en-rtcweb-transport-6cf9162643eb78082e4c92fdb7bf1dd226812eb3f5253ff68fae0cc56c53d450", "old_text": "elements described. TCP RFC0793. This is used for HTTP/WebSockets, as well as for TURN/SSL and ICE-TCP. For both protocols, IPv4 and IPv6 support is assumed.", "comments": "From Ben Campbell's review:\nSecond point is fixed by . I'm not sure the third point is an improvement.\nSSL is used exactly once, as part of the acronym \"TURN/SSL\". This is also the first mention of TURN, but the second was much more convenient to attach the expansion to. Given that it was so awkward to attach them, I included a new section listing the \"used protocols\" instead.", "new_text": "elements described. TCP RFC0793. This is used for HTTP/WebSockets, as well as for TURN/TLS and ICE-TCP. For both protocols, IPv4 and IPv6 support is assumed."}
{"id": "q-en-rtcweb-transport-6cf9162643eb78082e4c92fdb7bf1dd226812eb3f5253ff68fae0cc56c53d450", "old_text": "This specification does not assume that the implementation will have access to ICMP or raw IP. 3.2. Web applications running in a WebRTC browser MUST be able to utilize", "comments": "From Ben Campbell's review:\nSecond point is fixed by . I'm not sure the third point is an improvement.\nSSL is used exactly once, as part of the acronym \"TURN/SSL\". This is also the first mention of TURN, but the second was much more convenient to attach the expansion to. Given that it was so awkward to attach them, I included a new section listing the \"used protocols\" instead.", "new_text": "This specification does not assume that the implementation will have access to ICMP or raw IP. The following protocols may be used, but can be implemented by a WebRTC endpoint, and are therefore not defined as \"system-provided interfaces\": TURN - Traversal Using Relays Around NAT, STUN - Session Traversal Utilities for NAT, ICE - Interactive Connectivity Establishment, RFC5245 TLS - Transport Layer Security, DTLS - Datagram Transport Layer Security, RFC6347. 3.2. Web applications running in a WebRTC browser MUST be able to utilize"}
{"id": "q-en-rtcweb-transport-00b89b3a8bd67d3932ffc86c9dc2af6a3a13f2107bb8204c6f6f012e8bc089e0", "old_text": "The WebRTC implementation MAY support accessing the Internet through an HTTP proxy. If it does so, it MUST support the \"connect\" header as specified in I-D.ietf-httpbis-tunnel-protocol. 3.5.", "comments": "As described in URL\nIn the Dallas presentation on \u201ctransports\u201d you stated that the reference to proxy authentication would be restored (See URL) but it does not seem to have happened. This is an oversight. (Noted by Andy Hutton)", "new_text": "The WebRTC implementation MAY support accessing the Internet through an HTTP proxy. If it does so, it MUST support the \"connect\" header as specified in I-D.ietf-httpbis-tunnel-protocol, and proxy authentication as described in Section 4.3.6 of RFC7231 and RFC7235 MUST also be supported. 3.5."}
{"id": "q-en-security-arch-7c83fcc1ae3544f0966b499e402686d89bbb0f1e1a3dd0e231a8fb82073c5926", "old_text": "4. This section describes a typical RTCWeb session and shows how the various security elements interact and what guarantees are provided to the user. The example in this section is a \"best case\" scenario in which we provide the maximal amount of user authentication and", "comments": "I've found a few typos and fixed some sentences that didn't quite make sense. I don't think the meaning has been changed in any significant way.", "new_text": "4. This section describes a typical WebRTC session and shows how the various security elements interact and what guarantees are provided to the user. The example in this section is a \"best case\" scenario in which we provide the maximal amount of user authentication and"}
{"id": "q-en-security-arch-7c83fcc1ae3544f0966b499e402686d89bbb0f1e1a3dd0e231a8fb82073c5926", "old_text": "the calling service via HTTPS and so know the origin with some level of confidence. They also have accounts with some identity provider. This sort of identity service is becoming increasingly common in the Web environment in technologies such (BrowserID, Federated Google Login, Facebook Connect, OAuth, OpenID, WebFinger), and is often provided as a side effect service of a user's ordinary accounts with some service. In this example, we show Alice and Bob using a separate identity service, though the identity service may be the same entity as the calling service or there may be no identity service at all.", "comments": "I've found a few typos and fixed some sentences that didn't quite make sense. I don't think the meaning has been changed in any significant way.", "new_text": "the calling service via HTTPS and so know the origin with some level of confidence. They also have accounts with some identity provider. This sort of identity service is becoming increasingly common in the Web environment (with technologies such as BrowserID, Federated Google Login, Facebook Connect, OAuth, OpenID, WebFinger), and is often provided as a side effect service of a user's ordinary accounts with some service. In this example, we show Alice and Bob using a separate identity service, though the identity service may be the same entity as the calling service or there may be no identity service at all."}
{"id": "q-en-security-arch-7c83fcc1ae3544f0966b499e402686d89bbb0f1e1a3dd0e231a8fb82073c5926", "old_text": "freshness\", i.e., allowing Alice to verify that Bob is still prepared to receive her communications so that Alice does not continue to send large traffic volumes to entities which went abruptly offline. ICE specifies periodic STUN keepalizes but only if media is not flowing. Because the consent issue is more difficult here, we require RTCWeb implementations to periodically send keepalives. As described in Section 5.3, these keepalives MUST be based on the consent freshness mechanism specified in I-D.muthu-behave-consent-freshness. If a", "comments": "I've found a few typos and fixed some sentences that didn't quite make sense. I don't think the meaning has been changed in any significant way.", "new_text": "freshness\", i.e., allowing Alice to verify that Bob is still prepared to receive her communications so that Alice does not continue to send large traffic volumes to entities which went abruptly offline. ICE specifies periodic STUN keepalives but only if media is not flowing. Because the consent issue is more difficult here, we require WebRTC implementations to periodically send keepalives. As described in Section 5.3, these keepalives MUST be based on the consent freshness mechanism specified in I-D.muthu-behave-consent-freshness. If a"}
{"id": "q-en-security-arch-7c83fcc1ae3544f0966b499e402686d89bbb0f1e1a3dd0e231a8fb82073c5926", "old_text": "NOT be permitted to control the local ufrag and password, though it of course knows it. While continuing consent is required, that ICE RFC5245; Section 10 keepalives STUN Binding Indications are one-way and therefore not sufficient. The current WG consensus is to use ICE Binding Requests for continuing consent freshness. ICE already requires that implementations respond to such requests, so this approach is maximally compatible. A separate document will profile the ICE timers to be used; see I-D.muthu-behave-consent-freshness. 5.4.", "comments": "I've found a few typos and fixed some sentences that didn't quite make sense. I don't think the meaning has been changed in any significant way.", "new_text": "NOT be permitted to control the local ufrag and password, though it of course knows it. While continuing consent is required, the ICE RFC5245; Section 10 keepalives use STUN Binding Indications which are one-way and therefore not sufficient. The current WG consensus is to use ICE Binding Requests for continuing consent freshness. ICE already requires that implementations respond to such requests, so this approach is maximally compatible. A separate document will profile the ICE timers to be used; see I-D.muthu-behave-consent-freshness. 5.4."}
{"id": "q-en-security-arch-7c83fcc1ae3544f0966b499e402686d89bbb0f1e1a3dd0e231a8fb82073c5926", "old_text": "5.6. In a number of cases, it is desirable for the endpoint (i.e., the browser) to be able to directly identity the endpoint on the other side without trusting only the signaling service to which they are connected. For instance, users may be making a call via a federated system where they wish to get direct authentication of the other side. Alternately, they may be making a call on a site which they", "comments": "I've found a few typos and fixed some sentences that didn't quite make sense. I don't think the meaning has been changed in any significant way.", "new_text": "5.6. In a number of cases, it is desirable for the endpoint (i.e., the browser) to be able to directly identify the endpoint on the other side without trusting the signaling service to which they are connected. For instance, users may be making a call via a federated system where they wish to get direct authentication of the other side. Alternately, they may be making a call on a site which they"}
{"id": "q-en-security-arch-7c83fcc1ae3544f0966b499e402686d89bbb0f1e1a3dd0e231a8fb82073c5926", "old_text": "identity on a site they do trust (such as a social network.) Recently, a number of Web-based identity technologies (OAuth, BrowserID, Facebook Connect), etc. have been developed. While the details vary, what these technologies share is that they have a Web- based (i.e., HTTP/HTTPS) identity provider which attests to your identity. For instance, if I have an account at example.org, I could", "comments": "I've found a few typos and fixed some sentences that didn't quite make sense. I don't think the meaning has been changed in any significant way.", "new_text": "identity on a site they do trust (such as a social network.) Recently, a number of Web-based identity technologies (OAuth, BrowserID, Facebook Connect etc.) have been developed. While the details vary, what these technologies share is that they have a Web- based (i.e., HTTP/HTTPS) identity provider which attests to your identity. For instance, if I have an account at example.org, I could"}
{"id": "q-en-security-arch-7c83fcc1ae3544f0966b499e402686d89bbb0f1e1a3dd0e231a8fb82073c5926", "old_text": "The precise information from the signaling message that must be cryptographically bound to the user's identity and a mechanism for carrying assertions in JSEP messages. sec.jsep-binding The interface to the IdP. sec.protocol-details specifies a specific protocol mechanism which allows the use of any identity", "comments": "I've found a few typos and fixed some sentences that didn't quite make sense. I don't think the meaning has been changed in any significant way.", "new_text": "The precise information from the signaling message that must be cryptographically bound to the user's identity and a mechanism for carrying assertions in JSEP messages. This is specified in sec.jsep-binding. The interface to the IdP. sec.protocol-details specifies a specific protocol mechanism which allows the use of any identity"}
{"id": "q-en-security-arch-7c83fcc1ae3544f0966b499e402686d89bbb0f1e1a3dd0e231a8fb82073c5926", "old_text": "The specific IdP protocol which the IdP is using. This is a completely opaque IdP-specific string, but allows an IdP to implement two protocols in parallel. This value may be the empty string. Each IdP MUST serve its initial entry page (i.e., the one loaded by the IdP proxy) from a RFC5785. The well-known URI for an IdP proxy", "comments": "I've found a few typos and fixed some sentences that didn't quite make sense. I don't think the meaning has been changed in any significant way.", "new_text": "The specific IdP protocol which the IdP is using. This is a completely opaque IdP-specific string, but allows an IdP to implement multiple protocols in parallel. This value may be the empty string. Each IdP MUST serve its initial entry page (i.e., the one loaded by the IdP proxy) from a RFC5785. The well-known URI for an IdP proxy"}
{"id": "q-en-security-arch-7c83fcc1ae3544f0966b499e402686d89bbb0f1e1a3dd0e231a8fb82073c5926", "old_text": "who have not implemented WebRTC at all, WebRTC implementations need to be more careful. Consider the case of a call center which accepts calls via RTCWeb. An attacker proxies the call center's front-end and arranges for multiple clients to initiate calls to the call center. Note that this requires user consent in many cases but because the data channel", "comments": "I've found a few typos and fixed some sentences that didn't quite make sense. I don't think the meaning has been changed in any significant way.", "new_text": "who have not implemented WebRTC at all, WebRTC implementations need to be more careful. Consider the case of a call center which accepts calls via WebRTC. An attacker proxies the call center's front-end and arranges for multiple clients to initiate calls to the call center. Note that this requires user consent in many cases but because the data channel"}
{"id": "q-en-security-arch-7c83fcc1ae3544f0966b499e402686d89bbb0f1e1a3dd0e231a8fb82073c5926", "old_text": "6.4.1. Fundamentally, the IdP proxy is just a piece of HTML and JS loaded by the browser, so nothing stops a Web attacker o from creating their own IFRAME, loading the IdP proxy HTML/JS, and requesting a signature. In order to prevent this attack, we require that all signatures be tied to a specific origin (\"rtcweb://...\") which cannot be produced by content JavaScript. Thus, while an attacker can instantiate the IdP proxy, they cannot send messages from an appropriate origin and so cannot create acceptable assertions. I.e., the assertion request must have come from the browser. This origin check is enforced on the relying party side, not on the authenticating party side. The reason for this is to take the burden of knowing which origins are valid off of the IdP, thus making this mechanism extensible to other applications besides WebRTC. The IdP simply needs to gather the origin information (from the posted message) and attach it to the assertion. Note that although this origin check is enforced on the RP side and not at the IdP, it is absolutely imperative that it be done. The", "comments": "I've found a few typos and fixed some sentences that didn't quite make sense. I don't think the meaning has been changed in any significant way.", "new_text": "6.4.1. Fundamentally, the IdP proxy is just a piece of HTML and JS loaded by the browser, so nothing stops a Web attacker from creating their own IFRAME, loading the IdP proxy HTML/JS, and requesting a signature. In order to prevent this attack, we require that all signatures be tied to a specific origin (\"rtcweb://...\") which cannot be produced by content JavaScript. Thus, while an attacker can instantiate the IdP proxy, they cannot send messages from an appropriate origin and so cannot create acceptable assertions. I.e., the assertion request must have come from the browser. 
This origin check is enforced on the relying party side, not on the authenticating party side. The reason for this is to take the burden of knowing which origins are valid off of the IdP, thus making this mechanism extensible to other applications besides WebRTC. The IdP simply needs to gather the origin information (from the posted message) and attach it to the assertion. Note that although this origin check is enforced on the RP side and not at the IdP, it is absolutely imperative that it be done. The"}
{"id": "q-en-security-arch-3c48a6716e405a7123d90a0b36030e19824788fe3d0468037594358c0b0a6a1a", "old_text": "Abstract The Real-Time Communications on the Web (RTCWEB) working group is tasked with standardizing protocols for enabling real-time communications within user-agents using web technologies (commonly called \"WebRTC\"). This document defines the security architecture for WebRTC. 1.", "comments": "Pretty sure we'll get a GEN-ART reviewing say to not refer to the WG by name because it will get closed at some point. I basically stole the intro from overview and modified it slightly.", "new_text": "Abstract This document defines the security architecture for WebRTC, a protocol suite intended for use with real-time applications that can be deployed in browsers - \"real time communication on the Web\". 1."}
{"id": "q-en-senml-spec-9535c44ac31f473216191e92083d4557c8d1d592eded0620efd750357ab429d6", "old_text": "All SenML Records in a Pack MUST have the same version number. This is typically done by adding a Base Version field to only the first Record in the Pack. Systems reading one of the objects MUST check for the Version field. If this value is a version number larger than the version which the", "comments": "As per Adam's comment, clarify that only the element() scheme of the XPointer Framework is supported (as that is the one that is required by RFC 7303).\nLGTM", "new_text": "All SenML Records in a Pack MUST have the same version number. This is typically done by adding a Base Version field to only the first Record in the Pack, or by using the default value. Systems reading one of the objects MUST check for the Version field. If this value is a version number larger than the version which the"}
{"id": "q-en-senml-spec-9535c44ac31f473216191e92083d4557c8d1d592eded0620efd750357ab429d6", "old_text": "4.5.2. If the Record has no Unit, the Base Unit is used as the Unit. Having no Unit and no Base Unit is allowed. 4.5.3.", "comments": "As per Adam's comment, clarify that only the element() scheme of the XPointer Framework is supported (as that is the one that is required by RFC 7303).\nLGTM", "new_text": "4.5.2. If the Record has no Unit, the Base Unit is used as the Unit. Having no Unit and no Base Unit is allowed; any information that may be required about units applicable to the value then needs to be provided by the application context. 4.5.3."}
{"id": "q-en-senml-spec-9535c44ac31f473216191e92083d4557c8d1d592eded0620efd750357ab429d6", "old_text": "9.2. In addition to the SenML Fragment Identifiers described above, with the XML and EXI SenML formats also the syntax defined in XPointerFramework can be used. (This is required by RFC7303 for media types using the \"+xml\" structured syntax suffix. SenML allows this for the EXI formats as well for consistency.)", "comments": "As per Adam's comment, clarify that only the element() scheme of the XPointer Framework is supported (as that is the one that is required by RFC 7303).\nLGTM", "new_text": "9.2. In addition to the SenML Fragment Identifiers described above, with the XML and EXI SenML formats also the syntax defined in the XPointer element() Scheme XPointerElement of the XPointer Framework XPointerFramework can be used. (This is required by RFC7303 for media types using the \"+xml\" structured syntax suffix. SenML allows this for the EXI formats as well for consistency.)"}
{"id": "q-en-senml-spec-5f5a4efe2d5a9e8930728300d6bcc89e4d97671cf9bcfec772ca75854e454f15", "old_text": "Some devices have accurate time while others do not so SenML supports absolute and relative times. Time is represented in floating point as seconds. Values greater than zero represent an absolute time relative to the Unix epoch (1970-01-01T00:00Z in UTC time) and the time is counted same way as the Portable Operating System Interface (POSIX) \"seconds since the epoch\" TIME_T. Values of 0 or less represent a relative time in the past from the current time. A simple sensor with no absolute wall clock time might take a measurement every second, batch up 60 of them, and then send the batch to a server. It would include the relative time each measurement was made compared to the time the batch was sent in each", "comments": "One possible way to address the problem with positive relative times\nNAME Fixed\nLooks like two of you are still working details of phrasing but I like where it is going and all looks about right to me.", "new_text": "Some devices have accurate time while others do not so SenML supports absolute and relative times. Time is represented in floating point as seconds. Values greater than or equal to 2*28 represent an absolute time relative to the Unix epoch. Values less than 2*28 represent time relative to the current time. A simple sensor with no absolute wall clock time might take a measurement every second, batch up 60 of them, and then send the batch to a server. It would include the relative time each measurement was made compared to the time the batch was sent in each"}
{"id": "q-en-senml-spec-5f5a4efe2d5a9e8930728300d6bcc89e4d97671cf9bcfec772ca75854e454f15", "old_text": "All comparisons of text strings are performed byte-by-byte (and therefore necessarily case-sensitive). 4. Each SenML Pack carries a single array that represents a set of", "comments": "One possible way to address the problem with positive relative times\nNAME Fixed\nLooks like two of you are still working details of phrasing but I like where it is going and all looks about right to me.", "new_text": "All comparisons of text strings are performed byte-by-byte (and therefore necessarily case-sensitive). Where arithmetic is used, this specification uses the notation familiar from the programming language C, except that the operator \"**\" stands for exponentiation. 4. Each SenML Pack carries a single array that represents a set of"}
{"id": "q-en-senml-spec-5f5a4efe2d5a9e8930728300d6bcc89e4d97671cf9bcfec772ca75854e454f15", "old_text": "If either the Base Time or Time value is missing, the missing field is considered to have a value of zero. The Base Time and Time values are added together to get the time of measurement. A time of zero indicates that the sensor does not know the absolute time and the measurement was made roughly \"now\". A negative value is used to indicate seconds in the past from roughly \"now\". A positive value is used to indicate the number of seconds, excluding leap seconds, since the start of the year 1970 in UTC. Obviously, \"now\"-referenced SenML records are only useful within a specific communication context (e.g., based on information on when", "comments": "One possible way to address the problem with positive relative times\nNAME Fixed\nLooks like two of you are still working details of phrasing but I like where it is going and all looks about right to me.", "new_text": "If either the Base Time or Time value is missing, the missing field is considered to have a value of zero. The Base Time and Time values are added together to get the time of measurement. Values less than 268,435,456 (2*28) represent time relative to the current time. That is, a time of zero indicates that the sensor does not know the absolute time and the measurement was made roughly \"now\". A negative value indicates seconds in the past from roughly \"now\". Positive values up to 2*28 indicate seconds in the future from \"now\". Positive values can be used, e.g., for actuation use when the desired change should happen in the future but the sender or the receiver does not have accurate time available. Values greater than or equal to 2*28 represent an absolute time relative to the Unix epoch (1970-01-01T00:00Z in UTC time) and the time is counted same way as the Portable Operating System Interface (POSIX) \"seconds since the epoch\" . 
Therefore the smallest absolute time value that can be expressed (2*TIME_T28) is 1978-07-04 21:24:16 UTC. Because time values up to 2*28 are used for presenting time relative to \"now\" and Time and Base Time are added together, care must be taken to ensure that the sum does not inadvertently reach 2*28 (i.e., absolute time) when relative time was intended to be used. Obviously, \"now\"-referenced SenML records are only useful within a specific communication context (e.g., based on information on when"}
{"id": "q-en-senml-spec-41245364f3c3ec8102ef7f053704318112b537b76e0be0c2216d2ebaf12d7a8f", "old_text": "A base value is added to the value found in an entry, similar to Base Time. Version number of media type format. This attribute is an optional positive integer and defaults to 5 if not present. [RFC Editor: change the default value to 10 when this specification is", "comments": "The link was missing from schema. After this is merged, all the example need to be regenerated.\nGiven we want the l to have a defined integer value in the CBOR table, I think a good approach is define the l attribute as a string here then say what goes in it in the other draft that defines link. This is also good for EXI.", "new_text": "A base value is added to the value found in an entry, similar to Base Time. A base sum is added to the sum found in an entry, similar to Base Time. Version number of media type format. This attribute is an optional positive integer and defaults to 5 if not present. [RFC Editor: change the default value to 10 when this specification is"}
{"id": "q-en-senml-spec-41245364f3c3ec8102ef7f053704318112b537b76e0be0c2216d2ebaf12d7a8f", "old_text": "Base Unit is allowed. Value of the entry. Optional if a Sum value is present, otherwise required. Values are represented using three basic data types, Floating point numbers (\"v\" field for \"Value\"), Booleans (\"vb\" for \"Boolean Value\"), Strings (\"vs\" for \"String Value\") and Binary Data (\"vd\" for \"Data Value\") . Exactly one of these four fields MUST appear unless there is Sum field in which case it is allowed to have no Value field or to have \"v\" field. Integrated sum of the values over time. Optional. This attribute is in the units specified in the Unit value multiplied by seconds.", "comments": "The link was missing from schema. After this is merged, all the example need to be regenerated.\nGiven we want the l to have a defined integer value in the CBOR table, I think a good approach is define the l attribute as a string here then say what goes in it in the other draft that defines link. This is also good for EXI.", "new_text": "Base Unit is allowed. Value of the entry. Optional if a Sum value is present, otherwise required. Values are represented using basic data types. This specification defines floating point numbers (\"v\" field for \"Value\"), booleans (\"vb\" for \"Boolean Value\"), strings (\"vs\" for \"String Value\") and binary data (\"vd\" for \"Data Value\"). Exactly one value field MUST appear unless there is Sum field in which case it is allowed to have no Value field. Integrated sum of the values over time. Optional. This attribute is in the units specified in the Unit value multiplied by seconds."}
{"id": "q-en-senml-spec-41245364f3c3ec8102ef7f053704318112b537b76e0be0c2216d2ebaf12d7a8f", "old_text": "attributes to provide better information about the statistical properties of the measurement. A SenML object is referred to as \"expanded\" if it does not contain any base values and has no relative times. 4.4. SenML is designed to carry the minimum dynamic information about measurements, and for efficiency reasons does not carry significant static meta-data about the device, object or sensors. Instead, it is", "comments": "The link was missing from schema. After this is merged, all the example need to be regenerated.\nGiven we want the l to have a defined integer value in the CBOR table, I think a good approach is define the l attribute as a string here then say what goes in it in the other draft that defines link. This is also good for EXI.", "new_text": "attributes to provide better information about the statistical properties of the measurement. 4.4. Sometimes it is useful to be able to refer to a defined normalized format for SenML records. This normalized format tends to get used for big data applications and intermediate forms when converting to other formats. A SenML Record is referred to as \"resolved\" if it does not contain any base values and has no relative times, but the the base values of the SenML Pack (if any) are applied to the Record. That is, name and base name are concatenated, base time is added to the time of the Record, if Record did not contain Unit the Base Unit is applied to the record, etc. In addition the records need to be in chronological order. TODO add example 4.5. SenML is designed to carry the minimum dynamic information about measurements, and for efficiency reasons does not carry significant static meta-data about the device, object or sensors. Instead, it is"}
{"id": "q-en-senml-spec-25ee63c1640e092426f1b45daa2d2a2fc0832cc1bcb1b24bb86a644977673756", "old_text": "A base value is added to the value found in an entry, similar to Base Time. Version number of media type format. This attribute is an optional positive integer and defaults to 5 if not present. [RFC Editor: change the default value to 10 when this specification is", "comments": "Fixes URL\nThis makes some of the examples very long. I preferred the presentation where all attributes of a Record are on a single line, when feasible. Especially for the \"multiple data points\" examples. Anyway, Michael's comment was about indentation, which I agree should be (and is now) fixed.\nSee email from Michael Koster \"Formatting of JSON examples in SenML\"", "new_text": "A base value is added to the value found in an entry, similar to Base Time. A base sum is added to the sum found in an entry, similar to Base Time. Version number of media type format. This attribute is an optional positive integer and defaults to 5 if not present. [RFC Editor: change the default value to 10 when this specification is"}
{"id": "q-en-senml-spec-25ee63c1640e092426f1b45daa2d2a2fc0832cc1bcb1b24bb86a644977673756", "old_text": "Base Unit is allowed. Value of the entry. Optional if a Sum value is present, otherwise required. Values are represented using three basic data types, Floating point numbers (\"v\" field for \"Value\"), Booleans (\"vb\" for \"Boolean Value\"), Strings (\"vs\" for \"String Value\") and Binary Data (\"vd\" for \"Data Value\") . Exactly one of these four fields MUST appear unless there is Sum field in which case it is allowed to have no Value field or to have \"v\" field. Integrated sum of the values over time. Optional. This attribute is in the units specified in the Unit value multiplied by seconds.", "comments": "Fixes URL\nThis makes some of the examples very long. I preferred the presentation where all attributes of a Record are on a single line, when feasible. Especially for the \"multiple data points\" examples. Anyway, Michael's comment was about indentation, which I agree should be (and is now) fixed.\nSee email from Michael Koster \"Formatting of JSON examples in SenML\"", "new_text": "Base Unit is allowed. Value of the entry. Optional if a Sum value is present, otherwise required. Values are represented using basic data types. This specification defines floating point numbers (\"v\" field for \"Value\"), booleans (\"vb\" for \"Boolean Value\"), strings (\"vs\" for \"String Value\") and binary data (\"vd\" for \"Data Value\"). Exactly one value field MUST appear unless there is Sum field in which case it is allowed to have no Value field. Integrated sum of the values over time. Optional. This attribute is in the units specified in the Unit value multiplied by seconds."}
{"id": "q-en-senml-spec-25ee63c1640e092426f1b45daa2d2a2fc0832cc1bcb1b24bb86a644977673756", "old_text": "attributes to provide better information about the statistical properties of the measurement. A SenML object is referred to as \"expanded\" if it does not contain any base values and has no relative times. 4.4. SenML is designed to carry the minimum dynamic information about measurements, and for efficiency reasons does not carry significant static meta-data about the device, object or sensors. Instead, it is", "comments": "Fixes URL\nThis makes some of the examples very long. I preferred the presentation where all attributes of a Record are on a single line, when feasible. Especially for the \"multiple data points\" examples. Anyway, Michael's comment was about indentation, which I agree should be (and is now) fixed.\nSee email from Michael Koster \"Formatting of JSON examples in SenML\"", "new_text": "attributes to provide better information about the statistical properties of the measurement. 4.4. Sometimes it is useful to be able to refer to a defined normalized format for SenML records. This normalized format tends to get used for big data applications and intermediate forms when converting to other formats. A SenML Record is referred to as \"resolved\" if it does not contain any base values and has no relative times, but the the base values of the SenML Pack (if any) are applied to the Record. That is, name and base name are concatenated, base time is added to the time of the Record, if Record did not contain Unit the Base Unit is applied to the record, etc. In addition the records need to be in chronological order. An example of this is show in resolved-ex. 4.5. SenML is designed to carry the minimum dynamic information about measurements, and for efficiency reasons does not carry significant static meta-data about the device, object or sensors. Instead, it is"}
{"id": "q-en-senml-spec-25ee63c1640e092426f1b45daa2d2a2fc0832cc1bcb1b24bb86a644977673756", "old_text": "5.1. TODO - Add example with string, data, boolean, and base value 5.1.1. The following shows a temperature reading taken approximately \"now\"", "comments": "Fixes URL\nThis makes some of the examples very long. I preferred the presentation where all attributes of a Record are on a single line, when feasible. Especially for the \"multiple data points\" examples. Anyway, Michael's comment was about indentation, which I agree should be (and is now) fixed.\nSee email from Michael Koster \"Formatting of JSON examples in SenML\"", "new_text": "5.1. 5.1.1. The following shows a temperature reading taken approximately \"now\""}
{"id": "q-en-senml-spec-25ee63c1640e092426f1b45daa2d2a2fc0832cc1bcb1b24bb86a644977673756", "old_text": "5.1.4. The following example shows how to query one device that can provide multiple measurements. The example assumes that a client has fetched information from a device at 2001:db8::2 by performing a GET", "comments": "Fixes URL\nThis makes some of the examples very long. I preferred the presentation where all attributes of a Record are on a single line, when feasible. Especially for the \"multiple data points\" examples. Anyway, Michael's comment was about indentation, which I agree should be (and is now) fixed.\nSee email from Michael Koster \"Formatting of JSON examples in SenML\"", "new_text": "5.1.4. The following shows the example from the previos section show in resolved format. 5.1.5. The following example shows a sensor that returns different data types. 5.1.6. The following example shows how to query one device that can provide multiple measurements. The example assumes that a client has fetched information from a device at 2001:db8::2 by performing a GET"}
{"id": "q-en-senml-spec-19f841a7cc9cb70cefa2f0c0b3b6cb4bdb5e68b3dc6d9829c377a09016a00581", "old_text": "Characters in the String Value are encoded using a definite length text string (type 3). Octets in the Data Value are encoded using a definite length byte string (type 2) . For compactness, the CBOR representation uses integers for the map keys defined in tbl-cbor-labels. This table is conclusive, i.e., there is no intention to define any additional integer map keys; any extensions will use string map keys. This allows translators converting between CBOR and JSON representations to convert also all future labels without needing to update implementations. For streaming SensML in CBOR representation, the array containing the records SHOULD be an CBOR indefinite length array while for non streaming SenML, a definite length array MUST be used. The following example shows a dump of the CBOR example for the same sensor measurement as in co-ex.", "comments": "Set of small editorial tweaks and aligned fragment text with new way of positioning base values\nLGTM ( other than we need to decide if we use field or attribute but we can do that as separate PR)", "new_text": "Characters in the String Value are encoded using a definite length text string (type 3). Octets in the Data Value are encoded using a definite length byte string (type 2). For compactness, the CBOR representation uses integers for the labels, as defined in tbl-cbor-labels. This table is conclusive, i.e., there is no intention to define any additional integer map keys; any extensions will use string map keys. This allows translators converting between CBOR and JSON representations to convert also all future labels without needing to update implementations. For streaming SensML in CBOR representation, the array containing the records SHOULD be a CBOR indefinite length array while for non-streaming SenML, a definite length array MUST be used. 
The following example shows a dump of the CBOR example for the same sensor measurement as in co-ex."}
{"id": "q-en-senml-spec-19f841a7cc9cb70cefa2f0c0b3b6cb4bdb5e68b3dc6d9829c377a09016a00581", "old_text": "The compressed form, using the byte alignment option of EXI, for the above XML is the following: A small temperature sensor devices that only generates this one EXI file does not really need an full EXI implementation. It can simply hard code the output replacing the 1-wire device ID starting at byte 0x20 and going to byte 0x2F with it's device ID, and replacing the", "comments": "Set of small editorial tweaks and aligned fragment text with new way of positioning base values\nLGTM ( other than we need to decide if we use field or attribute but we can do that as separate PR)", "new_text": "The compressed form, using the byte alignment option of EXI, for the above XML is the following: A small temperature sensor device that only generates this one EXI file does not really need an full EXI implementation. It can simply hard code the output replacing the 1-wire device ID starting at byte 0x20 and going to byte 0x2F with it's device ID, and replacing the"}
{"id": "q-en-senml-spec-19f841a7cc9cb70cefa2f0c0b3b6cb4bdb5e68b3dc6d9829c377a09016a00581", "old_text": "first record is at position 1. A range of records can be selected by giving the first and the last record number separated by a '-' character. Instead of the second number, the '*' character can be used to indicate the last Senml Record in the Pack. A set of records can also be selected using a comma separated list of record positions or ranges. (We use the term \"selecting a record\" for identifying it as part of the fragment, not in the sense of isolating it from the Pack -- the record still needs to be interpreted as part of the Pack, e.g., using the base values defined in record 1.) 9.1.", "comments": "Set of small editorial tweaks and aligned fragment text with new way of positioning base values\nLGTM ( other than we need to decide if we use field or attribute but we can do that as separate PR)", "new_text": "first record is at position 1. A range of records can be selected by giving the first and the last record number separated by a '-' character. Instead of the second number, the '*' character can be used to indicate the last SenML Record in the Pack. A set of records can also be selected using a comma separated list of record positions or ranges. (We use the term \"selecting a record\" for identifying it as part of the fragment, not in the sense of isolating it from the Pack -- the record still needs to be interpreted as part of the Pack, e.g., using the base values defined in earlier records) 9.1."}
{"id": "q-en-senml-spec-19f841a7cc9cb70cefa2f0c0b3b6cb4bdb5e68b3dc6d9829c377a09016a00581", "old_text": "Extensions that add a label that is intended for use with EXI need to create a new XSD Schema that includes all the labels in the IANA registry then allocate a new EXI schemaId value. Moving to the next letter in the alphabet is the suggested way to create the new value for the EXI schemaId. Any labels with previously blank ID values SHOULD be updated in the IANA table to have their ID set to this new schemaId value. Extensions that are mandatory to understand to correctly process the Pack MUST have a label name that ends with the '_' character.", "comments": "Set of small editorial tweaks and aligned fragment text with new way of positioning base values\nLGTM ( other than we need to decide if we use field or attribute but we can do that as separate PR)", "new_text": "Extensions that add a label that is intended for use with EXI need to create a new XSD Schema that includes all the labels in the IANA registry and then allocate a new EXI schemaId value. Moving to the next letter in the alphabet is the suggested way to create the new value for the EXI schemaId. Any labels with previously blank ID values SHOULD be updated in the IANA table to have their ID set to this new schemaId value. Extensions that are mandatory to understand to correctly process the Pack MUST have a label name that ends with the '_' character."}
{"id": "q-en-senml-spec-a475ff1079bb82b4b7b76115bf5e02c821585d73df47121ba8fe5465e2744623", "old_text": "2. The design goal is to be able to send simple sensor measurements in small packets on mesh networks from large numbers of constrained devices. Keeping the total size of payload under 80 bytes makes this easy to use on a wireless mesh network. It is always difficult to define what small code is, but there is a desire to be able to implement this in roughly 1 KB of flash on a 8 bit microprocessor. Experience with power meters and other large scale deployments has indicated that the solution needs to support allowing multiple measurements to be batched into a single HTTP or CoAP request. This \"batch\" upload capability allows the server side to efficiently support a large number of devices. It also conveniently supports batch transfers from proxies and storage devices, even in situations where the sensor itself sends just a single data item at a time. The multiple measurements could be from multiple related sensors or from the same sensor but at different times. The basic design is an array with a series of measurements. The following example shows two measurements made at different times.", "comments": "100 bytes? (Here again I'd remove the text about mesh networks.) mesh networks result in about 80 bytes of usable payload per packet without fragmentation. How about we say keeping it small is good and quote examples of 802.15.4 has about 80 byte payload and that LoRa has 51 byte of applications payload in long range mode and just remove any reference to \"mesh\"\nI'm fine with suggestion above. I'm also fine with just leaving draft as is.\nAddressed by PR", "new_text": "2. The design goal is to be able to send simple sensor measurements in small packets from large numbers of constrained devices. Keeping the total size of payload small makes it easy to use SenML also in constrained networks, e.g., in a 6LoWPAN RFC4944. 
It is always difficult to define what small code is, but there is a desire to be able to implement this in roughly 1 KB of flash on a 8 bit microprocessor. Experience with power meters and other large scale deployments has indicated that the solution needs to support allowing multiple measurements to be batched into a single HTTP or CoAP request. This \"batch\" upload capability allows the server side to efficiently support a large number of devices. It also conveniently supports batch transfers from proxies and storage devices, even in situations where the sensor itself sends just a single data item at a time. The multiple measurements could be from multiple related sensors or from the same sensor but at different times. The basic design is an array with a series of measurements. The following example shows two measurements made at different times."}
{"id": "q-en-senml-spec-12aec5463c17f41b292054d208e5b0b0123dede44719d32be50063806d43fb5c", "old_text": "fields to provide better information about the statistical properties of the measurement. 4.4. Sometimes it is useful to be able to refer to a defined normalized", "comments": "Dave's comments: The issue is that neither of the above have an asterisk by them, so both are recommended and per Carsten\u2019s definition of \u201cunits\u201d during the meeting (i.e., where the reference point is separate from the units), then Kelvin and Celsius have the same units as they only vary by what the reference point is.", "new_text": "fields to provide better information about the statistical properties of the measurement. In summary, the structure of a SenML record is laid out to support a single measurement per record. If multiple data values are measured at the same time (e.g., air pressure and altitude), they are best kept as separate records linked through their Time value; this is even true where one of the data values is more \"meta\" than others (e.g., describes a condition that influences other measurements at the same time). 4.4. Sometimes it is useful to be able to refer to a defined normalized"}
{"id": "q-en-senml-spec-12aec5463c17f41b292054d208e5b0b0123dede44719d32be50063806d43fb5c", "old_text": "as mile, foot, light year are not allowed. For most cases, the SI unit is preferred. Symbol names that could be easily confused with existing common units or units combined with prefixes should be avoided. For example, selecting a unit name of \"mph\" to indicate something that", "comments": "Dave's comments: The issue is that neither of the above have an asterisk by them, so both are recommended and per Carsten\u2019s definition of \u201cunits\u201d during the meeting (i.e., where the reference point is separate from the units), then Kelvin and Celsius have the same units as they only vary by what the reference point is.", "new_text": "as mile, foot, light year are not allowed. For most cases, the SI unit is preferred. (Note that some amount of judgement will be required here, as even SI itself is not entirely consistent in this respect. For instance, for temperature ISO-80000-5 defines a quantity, item 5-1 (thermodynamic temperature), and a corresponding unit 5-1.a (Kelvin), and then goes ahead to define another quantity right besides that, item 5-2 (\"Celsius temperature\"), and the corresponding unit 5-2.a (degree Celsius). The latter quantity is defined such that it gives the thermodynamic temperature as a delta from T0 = 275.15 K. ISO 80000-5 is defining both units side by side, and not really expressing a preference. This level of recognition of the alternative unit degree Celsius is the reason why Celsius temperatures exceptionally seem acceptable in the SenML units list alongside Kelvin.) Symbol names that could be easily confused with existing common units or units combined with prefixes should be avoided. For example, selecting a unit name of \"mph\" to indicate something that"}
{"id": "q-en-senml-spec-12aec5463c17f41b292054d208e5b0b0123dede44719d32be50063806d43fb5c", "old_text": "does not lose any information from the JSON value. EXI uses the XML types. New entries can be added to the registration by Expert Review as defined in RFC8126. Experts should exercise their own good judgment but need to consider that shorter labels should have more strict review. All new SenML labels that have \"base\" semantics (see senml-base) MUST start with character 'b'. Regular labels MUST NOT start with that", "comments": "Dave's comments: The issue is that neither of the above have an asterisk by them, so both are recommended and per Carsten\u2019s definition of \u201cunits\u201d during the meeting (i.e., where the reference point is separate from the units), then Kelvin and Celsius have the same units as they only vary by what the reference point is.", "new_text": "does not lose any information from the JSON value. EXI uses the XML types. New entries can be added to the registration by either Expert Review or IESG Approval as defined in RFC8126. Experts should exercise their own good judgment but need to consider that shorter labels should have more strict review. New entries should not be made that counteract the advice at the end of considerations. All new SenML labels that have \"base\" semantics (see senml-base) MUST start with character 'b'. Regular labels MUST NOT start with that"}
{"id": "q-en-sframe-78cb8e3e27b77edbfed0f03da4df946efe9b0c138232e6ea4510c1cb08140ee8", "old_text": "encryption IV. The frame counter must be unique and monodically increasing to avoid IV reuse. The frame counter itself can be encoded in a variable length format to decrease the overhead, the following encoding schema is used The first byte in the header is fixed and contains the header metadata S 1 bit Signature flag, indicates the payload contains a signature of set. Reserved (4 bits) Reserved bits LEN (3 bits) The length of the CTR fields in bytes. CTR (Variable length) Frame counter up to 8 bytes long 3.2.", "comments": "LGTM\nI have a doubt about why an SRC for the CTR is needed at all and if a global counter for all streams would be equally secure in terms forming the IV or was due to implementation details to be able to have different context for each streams?\nBecause the key is used to encrypt all outgoing stream, we need a global counter for the IV to avoid counter reuse. If we use the frame counter only, then each stream will have their own counter and IV will collide\nIf the frame counter is global for the participant instead that per stream, i.e. increased whenever a new frame is encoded regarding the media stream, there would be no IV collision.\nCorrect. But I imagine this will be more complex to implement, having a single global counter across all media sources including audio and video\nPrepared a PR to change it to a global counter, however I want to make sure I understand how the counter is maintained globally, If the frame N is encrypted in 3 different layers, they will have different counter values right? One potential benefit of having a global counter it will free up some bits in the header which we can use as a keyId (up to 16 keys) which is nice\nYes, each call to the frame encryptor which would require to atomically increment the counter. 
In the case of the simulcast, the same input image will produce N different encoded frames which would have their unique counter. Regarding the free bits, I was thinking in something like this: If x=0 then K is the key id ( 0-7) if x=1 then K is the length of the key that is inserted before the This would allow to avoid the extra byte for conferences with less than 9 participants. Another option would be to always use the KLEN/KEYID and reserve that bit for extensions in the future.\nI like it :-)", "new_text": "encryption IV. The frame counter must be unique and monodically increasing to avoid IV reuse. As each sender will use their own key for encryption, so the SFrame header will include the key id to allow the receiver to identify the key that needs to be used for decrypting. Both the frame counter and the key id are encoded in a variable length format to decrease the overhead, so the first byte in the Sframe header is fixed and contains the header metadata with the following format: Signature flag (S): 1 bit This field indicates the payload contains a signature of set. Counter Length (LEN): 3 bits This field indicates the length of the CTR fields in bytes. Extended Key Id Flag (X): 1 bit Indicates if the key field contains the key id or the key length. Key or Key Length: 3 bits This field containts the key id (KID) if the X flag is set to 0, or the key length (KLEN) if set to 1. If X flag is 0 then the KID is in the range of 0-7 and the frame counter (CTR) is found in the next LEN bytes: Key id (KID): 3 bits The key id (0-7). Frame counter (CTR): (Variable length) Frame counter value up to 8 bytes long. if X flag is 1 then KLEN is the length of the key (KID), that is found after the SFrame header metadata byte. After the key id (KID), the frame counter (CTR) will be found in the next LEN bytes: 0 1 2 3 4 5 6 7 +-+-+-+-+-+-+-+-+----------------------+------------- ---------+ |S|LEN |1|KLEN | KID... (length=KLEN) | CTR... 
(length=LEN) | +-+-+-+-+-+-+-+-+----------------------+----------------------+ Key length (KLEN): 3 bits The key length in bytes. Key id (KID): (Variable length) The key id value up to 8 bytes long. Frame counter (CTR): (Variable length) Frame counter value up to 8 bytes long. 3.2."}
{"id": "q-en-sframe-78cb8e3e27b77edbfed0f03da4df946efe9b0c138232e6ea4510c1cb08140ee8", "old_text": "3- Authentication key to authenticate the encrypted frame and the media metadata The IV is 128 bits long and calculated from the SRC and CTR field of the Frame header: 3.2.2.", "comments": "LGTM\nI have a doubt about why an SRC for the CTR is needed at all and if a global counter for all streams would be equally secure in terms forming the IV or was due to implementation details to be able to have different context for each streams?\nBecause the key is used to encrypt all outgoing stream, we need a global counter for the IV to avoid counter reuse. If we use the frame counter only, then each stream will have their own counter and IV will collide\nIf the frame counter is global for the participant instead that per stream, i.e. increased whenever a new frame is encoded regarding the media stream, there would be no IV collision.\nCorrect. But I imagine this will be more complex to implement, having a single global counter across all media sources including audio and video\nPrepared a PR to change it to a global counter, however I want to make sure I understand how the counter is maintained globally, If the frame N is encrypted in 3 different layers, they will have different counter values right? One potential benefit of having a global counter it will free up some bits in the header which we can use as a keyId (up to 16 keys) which is nice\nYes, each call to the frame encryptor which would require to atomically increment the counter. In the case of the simulcast, the same input image will produce N different encoded frames which would have their unique counter. Regarding the free bits, I was thinking in something like this: If x=0 then K is the key id ( 0-7) if x=1 then K is the length of the key that is inserted before the This would allow to avoid the extra byte for conferences with less than 9 participants. 
Another option would be to always use the KLEN/KEYID and reserve that bit for extensions in the future.\nI like it :-)", "new_text": "3- Authentication key to authenticate the encrypted frame and the media metadata The IV is 128 bits long and calculated from the CTR field of the Frame header: 3.2.2."}
{"id": "q-en-sframe-78cb8e3e27b77edbfed0f03da4df946efe9b0c138232e6ea4510c1cb08140ee8", "old_text": "3.2.3. The sending client maps the outgoing streams and give them unique indices: 0, 1,.. etc. As mentioned above SFrame supports up to 16 outgoing stream. After encoding the frame and before packetizing it, the necessary media metadata will be moved out of the encoded frame buffer, to be used later in the RTP header extension. The encoded frame, the metadata buffer and the stream index are passed to SFrame encryptor which internally keeps track of the number of frames encrypted so far for that stream. The encryptor constructs SFrame header using the stream index and frame counter and derive the encryption IV. The frame is encrypted using the encryption key and the header, encrypted frame and the media metadata are authenticated using the authentication key. The authentication tag is then truncated (If supported by the cipher suite) and prepended at the end of the ciphertext. The encrypted payload is then passed to a generic RTP packetized to construct the RTP packets and encrypts it using SRTP keys for the", "comments": "LGTM\nI have a doubt about why an SRC for the CTR is needed at all and if a global counter for all streams would be equally secure in terms forming the IV or was due to implementation details to be able to have different context for each streams?\nBecause the key is used to encrypt all outgoing stream, we need a global counter for the IV to avoid counter reuse. If we use the frame counter only, then each stream will have their own counter and IV will collide\nIf the frame counter is global for the participant instead that per stream, i.e. increased whenever a new frame is encoded regarding the media stream, there would be no IV collision.\nCorrect. 
But I imagine this will be more complex to implement, having a single global counter across all media sources including audio and video\nPrepared a PR to change it to a global counter, however I want to make sure I understand how the counter is maintained globally, If the frame N is encrypted in 3 different layers, they will have different counter values right? One potential benefit of having a global counter it will free up some bits in the header which we can use as a keyId (up to 16 keys) which is nice\nYes, each call to the frame encryptor which would require to atomically increment the counter. In the case of the simulcast, the same input image will produce N different encoded frames which would have their unique counter. Regarding the free bits, I was thinking in something like this: If x=0 then K is the key id ( 0-7) if x=1 then K is the length of the key that is inserted before the This would allow to avoid the extra byte for conferences with less than 9 participants. Another option would be to always use the KLEN/KEYID and reserve that bit for extensions in the future.\nI like it :-)", "new_text": "3.2.3. After encoding the frame and before packetizing it, the necessary media metadata will be moved out of the encoded frame buffer, to be used later in the RTP header extension. The encoded frame and the metadata buffer are passed to SFrame encryptor which internally keeps track of the number of frames encrypted so far. The encryptor constructs SFrame header using frame counter and key id and derive the encryption IV. The frame is encrypted using the encryption key and the header, encrypted frame and the media metadata are authenticated using the authentication key. The authentication tag is then truncated (If supported by the cipher suite) and prepended at the end of the ciphertext. The encrypted payload is then passed to a generic RTP packetized to construct the RTP packets and encrypts it using SRTP keys for the"}
{"id": "q-en-sframe-78cb8e3e27b77edbfed0f03da4df946efe9b0c138232e6ea4510c1cb08140ee8", "old_text": "5.1.2. The sender of a simulcast stream may use the same SRC for all the simulcast streams from the same media source or use a different SRC for each of them, in any case it is transparent to the SFU which will be able to perform the simulcast layer switching normally. The senders are already able to receive different SRCs from different participants due to LastN and RTP Stream reuse, so supporting simulcast uses same mechanisms. 5.1.3.", "comments": "LGTM\nI have a doubt about why an SRC for the CTR is needed at all and if a global counter for all streams would be equally secure in terms forming the IV or was due to implementation details to be able to have different context for each streams?\nBecause the key is used to encrypt all outgoing stream, we need a global counter for the IV to avoid counter reuse. If we use the frame counter only, then each stream will have their own counter and IV will collide\nIf the frame counter is global for the participant instead that per stream, i.e. increased whenever a new frame is encoded regarding the media stream, there would be no IV collision.\nCorrect. But I imagine this will be more complex to implement, having a single global counter across all media sources including audio and video\nPrepared a PR to change it to a global counter, however I want to make sure I understand how the counter is maintained globally, If the frame N is encrypted in 3 different layers, they will have different counter values right? One potential benefit of having a global counter it will free up some bits in the header which we can use as a keyId (up to 16 keys) which is nice\nYes, each call to the frame encryptor which would require to atomically increment the counter. In the case of the simulcast, the same input image will produce N different encoded frames which would have their unique counter. 
Regarding the free bits, I was thinking in something like this: If x=0 then K is the key id ( 0-7) if x=1 then K is the length of the key that is inserted before the This would allow to avoid the extra byte for conferences with less than 9 participants. Another option would be to always use the KLEN/KEYID and reserve that bit for extensions in the future.\nI like it :-)", "new_text": "5.1.2. When using simulcast, the same input image will produce N different encoded frames (one per simulcat layer) which would be processed inpependently by the frame encryptor and assigned an unique counter for each. 5.1.3."}
{"id": "q-en-sframe-78cb8e3e27b77edbfed0f03da4df946efe9b0c138232e6ea4510c1cb08140ee8", "old_text": "sizes or frames per second. In order to support it, the sender MUST encode each spatial layer of a given picture in a different frame. That is, an RTP frame may contain more than one SFrame encrypted frame with same source (SRC) and incrementing frame counter. 5.2.", "comments": "LGTM\nI have a doubt about why an SRC for the CTR is needed at all and if a global counter for all streams would be equally secure in terms forming the IV or was due to implementation details to be able to have different context for each streams?\nBecause the key is used to encrypt all outgoing stream, we need a global counter for the IV to avoid counter reuse. If we use the frame counter only, then each stream will have their own counter and IV will collide\nIf the frame counter is global for the participant instead that per stream, i.e. increased whenever a new frame is encoded regarding the media stream, there would be no IV collision.\nCorrect. But I imagine this will be more complex to implement, having a single global counter across all media sources including audio and video\nPrepared a PR to change it to a global counter, however I want to make sure I understand how the counter is maintained globally, If the frame N is encrypted in 3 different layers, they will have different counter values right? One potential benefit of having a global counter it will free up some bits in the header which we can use as a keyId (up to 16 keys) which is nice\nYes, each call to the frame encryptor which would require to atomically increment the counter. In the case of the simulcast, the same input image will produce N different encoded frames which would have their unique counter. 
Regarding the free bits, I was thinking in something like this: If x=0 then K is the key id ( 0-7) if x=1 then K is the length of the key that is inserted before the This would allow to avoid the extra byte for conferences with less than 9 participants. Another option would be to always use the KLEN/KEYID and reserve that bit for extensions in the future.\nI like it :-)", "new_text": "sizes or frames per second. In order to support it, the sender MUST encode each spatial layer of a given picture in a different frame. That is, an RTP frame may contain more than one SFrame encrypted frame with an incrementing frame counter. 5.2."}
{"id": "q-en-sframe-33e39c12b7307afd7c74aa976fc10c7c6152caeca45c3306bcab39000d67adec", "old_text": "Secure Frame (SFrame) draft-omara-sframe-01 Abstract", "comments": "This: Changes the name of the draft (I made something up, but you are free to suggest a different name) Adds venue information to the draft (working group, area, stream, github, etc...) Updates some of the boilerplate stuff (README, CONTRIBUTING, .gitignore, Makefile) Adds workflows for CI with GitHub Actions Cleans up trailing whitespace (with ) so that the build runs cleanly Updates the Gemfile (I don't use this, but I tried to make it functional for those that do) From here, you should be able to create a first submission of a draft-ietf-sframe-enc-00 by pushing a tag of draft-ietf-sframe-enc-00 (or making a new release with that name).\nSpec changes look ok since they are editorial/ NAME is this ready to go?\nYou should be able to merge this and use it straight away, if that is what you want to do (the only thing that might need attention is the proposed filename, which I just made up).", "new_text": "Secure Frame (SFrame) draft-ietf-sframe-enc-latest Abstract"}
{"id": "q-en-suit-firmware-encryption-cb59e863845ba0fb42bfbeecde419a9490597874533709df5cd14b801e097e56", "old_text": "4. This specification introduces two extensions to the SUIT envelope and the manifest structure, as motivated in arch. The SUIT envelope is enhanced with a key exchange payload, which is carried inside the suit-protection-wrappers parameter, see envelope- fig. One or multiple SUIT_Encryption_Info payload(s) are carried within the suit-protection-wrappers parameter. The content of the SUIT_Encryption_Info payload is explained in AES-KW (for AES-KW) and in HPKE (for HPKE). When the encryption capability is used, the suit-protection-wrappers parameter MUST be included in the envelope. The manifest is extended with a CEK verification parameter (called suit-cek-verification), see manifest-fig. This parameter is optional and is utilized in environments where battery exhaustion attacks are a concern. Details about the CEK verification can be found in cek- verification. 5.", "comments": "moving into .\nKen, here is the reasoning for moving encryption functionality outside the SUIT manifest: When the firmware author needs to encrypt a payload for a specific recipient it needs to know its public key (for HPKE) or its symmetric key (for AES-KW). Since the firmware author does not typically know the recipients ahead of time, it cannot perform this encryption operation. What he can, however, do is to digitally sign the manifest (and thereby the firmware image) since it only requires its private key. Later the distribution system, before delivering the firmware to the devices, will encrypt the firmware images. It cannot modify the manifest anymore since otherwise the signature previously generated would be invalidated. Hence, any information regarding firmware encryption has to go into the envelope. I hope this makes sense. For this reason, I believe the current text in the document is correct.\nI've misunderstood your intention. Are these correct? 
Author has the raw firmware image and creates by signing the with its signing key. Distribution System receives the raw firmware image and its from the Author, then encrypts the firmware. It appends to (outside the ). Device verifies the and decrypts encrypted firmware image using . My concern is that the would be untrusted because it locates outside the signature. How about this one? The order of suit-protection-wrappers array corresponds to the suit-components array in suit-common it can be if no decryption is required The Device can verify whether the decrypted firmware image is correct using the digest hash of raw firmware image the image digest and the image size are trusted because they are inside the though the s are untrusted because they are outside the The Author MUST aware that the firmware will be encrypted by Distribution System to create appropriate suit-install section\nThe distribution system is not untrusted but rather trusted to do the encryption of the plaintext firmware image and to distribute it to the intended recipients. If the author already knows the recipients ahead of time and has their public keys, then it can also encrypt the firmware image without relying on the distribution system. Hence, I do not believe that the wording that the distribution system is untrusted is correct. Note that the recipient will get to learn that the encryption has been applied by the distribution system since the encryption info is also signed. The recipient (device) can still check the signature of the manifest to verify whether the plaintext image has been unchanged. Hence, I don't see a problem with the approach\nBtw, I agree with you that the manifest needs to carry a hash of the plaintext firmware and it also needs to contain instructions for matching the hash against a specific value. 
The reason why I believe that no commands for processing the encrypted payload are needed is simply because the encryption must happen before any of the other processing steps need to take place. Otherwise nothing can proceed. Is this the right approach? I don't know.\nI wonder whether we should bring this up on the SUIT mailing list since it is an important design decision.\nThe term \"untrusted\" I used was ambiguous, but I meant the part was outside the signature. Before discussing in the mailing list let me confirm this: Distribution System is trusted case A-1) as a firmware server and a device management infrastructure case A-2) as a firmware server and a device management infrastructure, and also a manifest signer The firmware is encrypted by case B-1) the Author case B-2) the Distribution System With case A-2, the Distribution System can just update the manifest and then update the signature because Devices trust its public key. With case B-1, the Author can create encrypted firmware and its manifest, so that there is no need for the Distribution System to modify the manifest. They work well with the current SUIT Manifest, but our case does not, especially with a non-integrated payload. Am I correct?\nI think you are saying that because a non-integrated payload requires a URI to be included in the manifest, which will then be integrity protected. Hence, the distribution system will not be able to change that URL and it will typically not have permissions to replace the content of the referenced file. Is this correct? FYI: I think it would be good to have examples of all these cases in the draft.\nHere is what \u00d8yvind proposes in URL
Here is the CDDL: Currently the text provides no explanation or an example of how this linkage works.\nThis aspect is related to URL and to the mailing list discussion URL\nThis issue is also related to URL\nWith in PR , this issue is solved.\nSUIT_Encryption_Info was defined in the past SUIT Manifest core document describing how to decrypt the firmware image, but in this document it may not be associated with any component. I will create a Pull Request to move SUIT_Encryption_Info under the suit-manifest.\nSee also URL A new idea to leverage SUIT Dependency for Firmware Encryption has come to my mind. Please review and make comments, especially for SUIT Manifest usage and security considerations. The Author creates firmware and its manifest The Distribution System encrypts the firmware and distributes it to Devices The Device fetches and decrypts encrypted firmware, then checks and installs it It introduces a new section suit-security-wrapper holding SUIT_Encryption_Info, which is appended by the Distribution System The Device decrypts the encrypted firmware with SUIT_Encryption_Info and then checks it with condition-image-match Distribution System cannot modify the URI in the manifest where the encrypted firmware is stored SUIT_Encryption_Info is used without verification because it is located outside the Author's signature The Author and the Distribution System create manifests respectively Distribution System's manifest fetches encrypted firmware and decrypts it, and depends on Author's manifest Author's manifest checks the decrypted firmware with condition-image-match The Device MUST trust both Author and Distribution System keys to verify manifests The Distribution System can freely set the URI to distribute encrypted firmware in Distribution System's manifest SUIT_Encryption_Info is inside the signature created by the Distribution System (Only important parts are shown in this example) (Only important parts are shown in this example) Best, Ken\nInternet-Draft
draft-ietf-suit-firmware-encryption-URL is now available. It is a work item of the Software Updates for Internet of Things (SUIT) WG of the IETF. Title: Encrypted Payloads in SUIT Manifests Authors: Hannes Tschofenig Russ Housley Brendan Moran David Brown Ken Takayama Name: draft-ietf-suit-firmware-encryption-URL Pages: 52 Dates: 2024-03-03 Abstract: This document specifies techniques for encrypting software, firmware, machine learning models, and personalization data by utilizing the IETF SUIT manifest. Key agreement is provided by ephemeral-static (ES) Diffie-Hellman (DH) and AES Key Wrap (AES-KW). ES-DH uses public key cryptography while AES-KW uses a pre-shared key. Encryption of the plaintext is accomplished with conventional symmetric key cryptography. The IETF datatracker status page for this Internet-Draft is: URL There is also an HTMLized version available at: URL A diff from the previous version is available at: https://author-URL Internet-Drafts are also available by rsync at: URL", "new_text": "4. This specification introduces two extensions to the SUIT_Parameters, as motivated in arch. The SUIT manifest is enhanced with a key exchange payload, which is carried within the suit-directive-override-parameters or suit-directive-set-parameters for each encrypted payload. One SUIT_Encryption_Info is carried with suit-parameter-encryption-info, see parameter-fig. The content of the SUIT_Encryption_Info is explained in AES-KW (for AES-KW) and in HPKE (for HPKE). When the encryption capability is used, the SUIT_Encryption_Info parameter MUST be included in the SUIT_Directive. A CEK verification parameter (called suit-parameter-cek-verification), see parameter-fig, also extends the manifest. This parameter is optional and is utilized in environments where battery exhaustion attacks are a concern. Details about the CEK verification can be found in cek-verification. 5."}
{"id": "q-en-t2trg-rest-iot-1178825bbb1a9c9cb3c97250b3ae65772341cbbc8165a3fa70a7841584a88249", "old_text": "a RESTful API. Different protocols can be used with RESTful systems, but at the time of writing the most common protocols are HTTP RFC7230 and CoAP RFC7252. Since RESTful APIs are often lightweight and enable loose coupling of system components, they are a good fit for various Internet of Things (IoT) applications, which in general aim at", "comments": "Updating HTTP references to RFC9110 and RFC9112. Moving CoAP reference to normative to be consistent with other references.\n; httpbis-messaging/-semantics/-cache are in the RFC editor queue and will be published before this document. URL\nThey actually have been published.\nThanks!", "new_text": "a RESTful API. Different protocols can be used with RESTful systems, but at the time of writing the most common protocols are HTTP RFC9110 and CoAP RFC7252. Since RESTful APIs are often lightweight and enable loose coupling of system components, they are a good fit for various Internet of Things (IoT) applications, which in general aim at"}
{"id": "q-en-t2trg-rest-iot-1178825bbb1a9c9cb3c97250b3ae65772341cbbc8165a3fa70a7841584a88249", "old_text": "3.5. Section 4.3 of RFC7231 defines the set of methods in HTTP; Section 5.8 of RFC7252 defines the set of methods in CoAP. As part of the Uniform Interface constraint, each method can have certain properties that give guarantees to clients.", "comments": "Updating HTTP references to RFC9110 and RFC9112. Moving CoAP reference to normative to be consistent with other references.\n; httpbis-messaging/-semantics/-cache are in the RFC editor queue and will be published before this document. URL\nThey actually have been published.\nThanks!", "new_text": "3.5. Section 9.3 of RFC9110 defines the set of methods in HTTP; Section 5.8 of RFC7252 defines the set of methods in CoAP. As part of the Uniform Interface constraint, each method can have certain properties that give guarantees to clients."}
{"id": "q-en-t2trg-rest-iot-1178825bbb1a9c9cb3c97250b3ae65772341cbbc8165a3fa70a7841584a88249", "old_text": "3.6. Section 6 of RFC7231 defines a set of Status Codes in HTTP that are assigned by the server to indicate whether a request was understood and satisfied, and how to interpret the answer. Similarly, Section 5.9 of RFC7252 defines the set of Response Codes in CoAP. The status codes consist of three digits (e.g., \"404\" with HTTP or \"4.04\" with CoAP) where the first digit expresses the class of the code. Implementations do not need to understand all status codes, but the class of the code must be understood. Codes starting with 1 are informational; the request was received and being processed (not available in CoAP). Codes starting with 2 indicate a successful request. Codes starting with 3 indicate redirection; further action is needed to complete the request (not available in CoAP). Codes", "comments": "Updating HTTP references to RFC9110 and RFC9112. Moving CoAP reference to normative to be consistent with other references.\n; httpbis-messaging/-semantics/-cache are in the RFC editor queue and will be published before this document. URL\nThey actually have been published.\nThanks!", "new_text": "3.6. Section 15 of RFC9110 defines a set of Status Codes in HTTP that are assigned by the server to indicate whether a request was understood and satisfied, and how to interpret the answer. Similarly, Section 5.9 of RFC7252 defines the set of Response Codes in CoAP. The codes consist of three digits (e.g., \"404\" with HTTP or \"4.04\" with CoAP) where the first digit expresses the class of the code. Implementations do not need to understand all codes, but the class of the code must be understood. Codes starting with 1 are informational; the request was received and being processed (not available in CoAP). Codes starting with 2 indicate a successful request. 
Codes starting with 3 indicate redirection; further action is needed to complete the request (not available in CoAP). Codes"}
{"id": "q-en-t2trg-rest-iot-1178825bbb1a9c9cb3c97250b3ae65772341cbbc8165a3fa70a7841584a88249", "old_text": "Internet X.509 Public Key Infrastructure: HTTP security: Section 9 of RFC7230, Section 9 of RFC7231, etc. CoAP security:", "comments": "Updating HTTP references to RFC9110 and RFC9112. Moving CoAP reference to normative to be consistent with other references.\n; httpbis-messaging/-semantics/-cache are in the RFC editor queue and will be published before this document. URL\nThey actually have been published.\nThanks!", "new_text": "Internet X.509 Public Key Infrastructure: HTTP security: Section 11 of RFC9112, Section 17 of RFC9110, etc. CoAP security:"}
{"id": "q-en-tls-exported-authenticator-d900e4b58ba6fc88ddad370a99ea86e29d27bc6ae00ebf22b6e18a41ad0fef9d", "old_text": "7. This document has no IANA actions. 8.", "comments": "TBH I don't really know if this is actually needed.\nWhoops that was supposed to be about it's good to merge. We need this section to tell IANA to register the exporter.\nNAME do you mind amending this to add:\nAdded section about exporter labels as well.", "new_text": "7. 7.1. IANA is requested to update the entry for server_name(0) in the registry for ExtensionType (defined in TLS13) by replacing the value in the \"TLS 1.3\" column with the value \"CH, EE, CR\". 7.2. IANA is requested to add the following entries to the registry for Exporter Labels (defined in RFC5705): \"EXPORTER-server authenticator handshake context\", \"EXPORTER-client authenticator finished key\" and \"EXPORTER-server authenticator finished key\". 8."}
{"id": "q-en-tls-exported-authenticator-5b61bc0c458a8b068518de7b56cce5e7d2d926a4dfde01f1eaec7760ae783f09", "old_text": "The signatures generated with this API cover the context string \"Exported Authenticator\" and therefore cannot be transplanted into other protocols. ", "comments": "In the TLS 1.3 handshake the client and server do not agree on the client's certificate bytes until after the first server application message. If this is significant to an application, then the server should send an application message before authenticating the client with an EA.\nDone.\nThis is useful information to convey to implementers and users. +1\nNAME the message that started this all is in this . It's the second issue.\nNAME To follow up from the meeting, this is only relevant in the case where the server has requested a client certificate, which is why you can't predict the client . For the Server to send an NST it must therefore have received the entire handshake. Changing this to require the Server to wait for the end of the handshake in not client certificate situations seems a touch heavy handed.\nNAME the text (as shown in the session) says that the server sends something though. But the server can do that without seeing the client certificate. I think that this needs to be something different, perhaps \"the server has received and validated the client's Finished\".\nI'm still trying to work out what threat we are considering here. Is it the server pretending to read the Finished and not or is it the server making an error and not reading the Finished. In the former case, it's not sufficient to send the NST because the client doesn't know what RMS the server has computed. In the latter case, why not just tell the server not to send a request until it's read Finished if there is a client cert?\nAt the end of a mutually authenticated handshake the client doesn't know that the server has agreed on the ClientCertificate's bytes. 
The threat is that the client's last message might have been corrupted in flight (intentionally or otherwise). I was worried that if we put the text \"MUST verify the Finished before sending an EA.\" implementations might ignore it, so the goal was to make the check something externally visible. I think you're right that the current text doesn't achieve the security goal. I can put \"MUST verify the Finished\" if that works better.\nI think that's the only thing that works.\nNAME Just checking to see when this might be updated?\nChanges pushed.\nNAME Does the final change address your issue?\n[Clarifying] The traffic keys do not depend on the client's final flight at all. So, I don't see what the \"first application message\" does here. The client receiving the server's messages doesn't tell the client anything, and receipt of the client's first application data isn't really relevant here. It's the Finished that matters.\nMaybe something about how we rely on the server to not send application data if the client Finished fails to validate (and thus for application data to not be exchanged in case of tampering), rather than talking about \"agree on the client's final flight\"? That lines up nicely with the new MUST-level requirement to not send EA if the client Finished doesn't validate. So something like \"In TLS 1.3 the agreement on the client's final flight is enforced solely by the server. The client Finished hash incorporates the rest of the transcript, which the server uses as an integrity check over the entire transcript, including the client's final flight.\" and then tweak the last sentence to include a clause about \"in order to ensure that the client's final flight has been validated, servers MUST ...\"\nNAME NAME can you please coordinate and come to a resolution that addresses the remaining comments in this thread? 
In particular, I think we're seeking clarity on the conditions for agreement and how that relates to the client's finished message (and not necessarily application data, though that may imply agreement, if I'm understanding the intended text here).\nWhat about 0.5RTT?\nNAME NAME can we please get an update here? Would additional editors here help the process?\nYes, the server can send 0.5RTT before the handshake completes. That's why my focus was on \"we rely on the server\", as the client doesn't always have a great way of telling for sure. The application protocol's semantics might be able to clue that a given response was 1-RTT and not 0.5-RTT, but it might not, too. I probably should have said \"close the connection\" rather than \"not send application data\", or maybe \"not continue to send application data\".\nLGTM.\nWhat do people think about putting \"before sending an EA or processing a received EA.\" instead of \"before sending an EA\"? I realised it's possible for the server to receive an EA before the client Finished as follows: Server sends its Finished message and then immediately an AR. Client receives the Finished, sends its own Finished, and then an EA in response to the AR. The client Finished and EA are reordered by the network (the EA might not even be going over the TLS transport). The server might then accept the EA and do some action, but then reject the Finished message. I think this is already incorrect by the definition of EAs, but I thought it wouldn't hurt to make it super explicit. NAME\nRequesting changes to resolve remaining comments. Hello TLSWG and IESG reviewers, This is a compendium of responses to the various reviews of the document. There are a few remaining open questions to address that I hope we can resolve in this thread. I\u2019ve compiled the changes to the document in response to the comments in Github: URL Responses welcome! 
Open questions for debate Should we define an optional API to generate the certificate_request_context? [ns] Proposed solution: no, let the application decide Should we remove the SHOULDs in Section 7, as they have little to do with interoperability? [ns] Proposed solution: leave the text as is. Are there any security issues caused by the fact that the Exported Authenticator is based on the Exporter Secret, which does not incorporate the entire transcript: Using an exporter value as the handshake context means that any client authentication from the initial handshake is not cryptographically digested into the exporter value. In light of what we ran into with EAP-TLS1.3, do we want to revisit whether we're still happy with that? We should in practice still benefit from the implicit confirmation that anything the server participates in after receiving the Client Finished means the server properly validated it and did not abort the handshake, but it's a bit awkward, especially since the RFC 8446 post-handshake authentication does digest the entire initial handshake transcript. Would we feel any safer if an exporter variant was available that digested the entire initial handshake transcript and we could use that? [ns] This is a very good question. I didn\u2019t follow the EAP-TLS1.3 issue closely, but this does seem to imply an imperfect match with post-handshake authentication in the case of mutually authenticated connections. I would like to suggest that Jonathan Hoyland (cc'd), who did the formal analysis on the protocol, provide his input. A comment from Yaron Scheffer/Benjamin Kaduk on the IANA considerations: My understanding is that the intent of this document is to only allow \"server_name\" to appear in the ClientCertificateRequest structure that otherwise uses the \"CR\" notation in the TLS 1.3 column of the TLS ExtensionType Values registry, but not to allow for \"server_name\" in authenticator CertificateRequests or handshake CertificateRequests.
Assuming that's correct, the easiest way to indicate that would be if there was a \"Comment\" column in the registry that could say \"only used in ClientCertificateRequest certificate requests and no other certificate requests\", but there is not such a column in the registry. I think it could also work to be much more clear in the IANA considerations of this document as to why the \"CR\" is being added and what restrictions there are on its use, even if that's not quite as clean of a solution. [ns] Proposed solution: Add text to section 8.1 to clarify why the CR is being added and what restrictions there are to its use, as is done in URL Alternate solution: Propose a new column, as done in URL Comments and responses Proposals to address IESG comments below are here: URL Martin Duke (No Objection): I come away from this not being entirely sure if this document applies to DTLS or not. There is the one reference in the intro to DTLS, but it's not in the abstract, nor anywhere else. Assuming that the Sec. 1 reference is not some sort of artifact, the document would benefit from a liberal sprinkling of 's/TLS/(D)TLS' (but this would not apply to every instance) [ns] DTLS does not explicitly define a post-handshake authentication, so not all references made sense to update, but I updated all the relevant references. If (D)TLS 1.2 is REQUIRED to implement, then does this document not update those RFCs? [ns] It doesn\u2019t change existing versions of (D)TLS and it is not mandatory to include this functionality, so I don\u2019t think that it updates them. It\u2019s a separate and optional feature built on top of TLS. In any case, the text is changed to make it explicit that this is a minimum requirement. NITS: Sec 1. I think you mean Sec 4.2.6 of RFC 8446, not 4.6.3. [ns] It\u2019s actually neither, it\u2019s 4.6.2. Sec. 4.2.6 defines the extension that the client uses to advertise support for post-handshake authentication. Sec 4 and 5. \"use a secure with...\" ? 
[ns] Added \u201ctransport\u201d Sec 4. s/messages structures/message structures [ns] Fixed Erik Kline's No Objection COMMENT: [ sections 4 & 5 ] \"SHOULD use a secure with\" -> \"SHOULD use a secure communications channel with\" or \"SHOULD use a secure transport with\" or something? [ns] Fixed in previous change \"as its as its\" -> \"as its\" [ns] Fixed [ section 5.2.3 ] I think the first sentence of the first paragraph may not be a grammatically complete sentence? [ns] Fixed [ section 7 ] \"following sections describes APIs\" -> \"following sections describe APIs\" [ns] Fixed Francesca Palombini's No Objection Thank you for the work on this document, and thank you to the doc shepherd for the in-depth background. Please find some comments below. Francesca TLS (or DTLS) version 1.2 [RFC5246] [RFC6347]. or later are REQUIRED to implement the mechanisms described in this document. FP: Does this mechanism applies to DTLS as well? The sentence above, the normative reference to DTLS 1.2, and the IANA section 8.2 would imply so. However, this is the only time DTLS is mentioned in the body of the document, excluding IANA considerations. I would like it to be made explicit that this mechanism can be used for DTLS (if that's the case) both in the abstract and in the introduction, add any useful considerations about using this mechanism with DTLS vs TLS, and possibly add a sentence in the introduction stating that DTLS uses what specified for TLS 1.X. Two examples of why that would be useful are the following sentences: using an exporter as described in Section 4 of [RFC5705] (for TLS 1.2) or Section 7.5 of [TLS13] (for TLS 1.3). For TLS 1.3, the \"signature_algorithms\" and \"signature_algorithms_cert\" extension, as described in Section 4.2.3 of [TLS13] for TLS 1.3 or the \"signature_algorithms\" extension from Sections 7.4.2 and 7.4.6 of [RFC5246] for TLS 1.2. which makes it clear what applies for TLS but not for DTLS. 
[ns] Fixed in previous change used to send the authenticator request SHOULD use a secure with equivalent security to TLS, such as QUIC [QUIC-TLS], as its as its FP: What are the implications of not using such a secure transport protocol? Why is it just RECOMMENDED and not MANDATED? nits: missing word \"use a secure with\" ; remove one of the duplicated \"as its\". (Note: this text appears again with the same typos for the authenticator in section 5) [ns] There are no known security implications of using an insecure transport here, just privacy implications. This point was debated on list (see Yaron Sheffer\u2019s comments) and it was agreed that because there may be alternate deployment scenarios that involve transport outside the original tunnel, TLS-equivalent security is not a mandated requirement. Typos fixed in previous change. extensions (OCSP, SCT, etc.) FP: Please consider adding informative references. [ns] Added Zaheduzzaman Sarker's No Objection Thanks for the work on this document. I found it well written and I have minor comments and Nits Comment : As this document asked for a IANA registration entry with DTLS-OK, hence this mechanism is OK to be used with DTLS. I understand the heavily references to TLS 1.3 as it relay on the mechanisms described there. However, I found it odd not find any reference to DTLS1.3 (we had it on the last formal IESG telechat, it is quite ready to be referenced). Is this intentional? is it supposed to be that this mechanism defined in this document on can be used with DTLS1.2? [ns] This was a common concern, the text has been updated. Section 7.3 & 7.4: is \"active connection\" defined somewhere? it would be good if some descriptive texts are added for clarification as done for the other bullets in the same list. [ns] Wording adjusted. For the API considerations I was expecting a API to generate the certificate_request_context. [ns] An API would likely just return a random number, which isn\u2019t very useful. 
Not defining such an API gives the application freedom to choose to use this value in some other way. I don\u2019t have strong opinions either way. Nits: Post-handshake authentication is not defined in section 4.6.3 of TLS 1.3 [ns] Updated in prior change. Section 4 & 5: likely copy paste error -- s/as its as its/as its [ns] Updated in prior change. Lars Eggert's No Objection All comments below are very minor change suggestions that you may choose to incorporate in some way (or ignore), as you see fit. There is no need to let me know what you did with these suggestions. Section 1, paragraph 9, nit: TLS (or DTLS) version 1.2 [RFC5246] [RFC6347]. or later are REQUIRED - - TLS (or DTLS) version 1.2 [RFC5246][RFC6347] or later are REQUIRED [ns] Updated \u00c9ric Vyncke's No Objection Thank you for the work put into this document. Please find below some non-blocking COMMENT points (but replies would be appreciated), and some nits. I hope that this helps to improve the document, Regards, -\u00e9ric == COMMENTS == I am trusting the WG and the responsible AD for ensuring that the specific section in references are the correct ones. -- Section 2 -- Unsure where the \"terminology\" part is... Also, should \"Client\" and \"Server\" be defined here ? Reading section 3, I am still unsure whether the \"client\" and \"server\" are the \"TLS client\" and \"TLS server\" ... In the same vein, a small figure with all parties involved would be welcome by the reader. [ns] Terminology updated to reference the terms in TLS 1.3. There are only two parties in this protocol, so I\u2019m not sure a diagram will help. == NITS == In some places, 'TLS13' is used as an anchor rather than 'RFC8446'. [ns] Corrected. Sometimes, the document uses 'octet' and sometimes 'bytes'. [ns] Interesting. This is likely an oversight due to RFC 8446\u2019s use of both terms. I\u2019ve adjusted the text to try to be more consistent with that document (octets for on-the-wire, bytes for string values). 
-- Section 1 -- Spurious '.' in the last \u00a7. [ns] Fixed in previous change. -- Section 4 -- \"as its as its\" ? [ns] Fixed in previous change. Ben Kaduk's IESG review + Yes You can view, comment on, or merge this pull request online at: URL Commit Summary Editorial suggestions from IESG review COMMENT: I've made some (mostly) editorial suggestions in a github PR: URL Despite the overall YES ballot, I do have some substantive comments that may be worth some further consideration. Do we want to give any insight or examples into how an application might choose to use the certificate_request_context? [ns] I put a proposed line here. Thanks to Yaron Sheffer for the multiple secdir reviews. My understanding is that the intent of this document is to only allow \"server_name\" to appear in the ClientCertificateRequest structure that otherwise uses the \"CR\" notation in the TLS 1.3 column of the TLS ExtensionType Values registry, but not to allow for \"server_name\" in authenticator CertificateRequests or handshake CertificateRequests. Assuming that's correct, the easiest way to indicate that would be if there was a \"Comment\" column in the registry that could say \"only used in ClientCertificateRequest certificate requests and no other certificate requests\", but there is not such a column in the registry. I think it could also work to be much more clear in the IANA considerations of this document as to why the \"CR\" is being added and what restrictions there are on its use, even if that's not quite as clean of a solution. [ns] I agree that this is an open question. Section 1 multiple identities - Endpoints that are authoritative for multiple identities - but do not have a single certificate that includes all of the identities - can authenticate additional identities over a single connection. IIUC this is only new functionality for server endpoints, since client endpoints can use different certificates for subsequent post-handshake authentication events. [ns] Correct. 
TLS (or DTLS) version 1.2 [RFC5246] [RFC6347]. or later are REQUIRED to implement the mechanisms described in this document. Please clarify whether this is a minimum TLS version required for using EAs, or that this is a binding requirement on implementations of the named protocol versions. (If the latter is intended we may have some discussions about an Updates: tag...) [ns] I meant \u201cminimum version of 1.2\u201d and have changed the text to reflect this. Section 5.1 Using an exporter value as the handshake context means that any client authentication from the initial handshake is not cryptographically digested into the exporter value. In light of what we ran into with EAP-TLS1.3, do we want to revisit whether we're still happy with that? We should in practice still benefit from the implicit confirmation that anything the server participates in after receiving the Client Finished means the server properly validated it and did not abort the handshake, but it's a bit awkward, especially since the RFC 8446 post-handshake authentication does digest the entire initial handshake transcript. Would we feel any safer if an exporter variant was available that digested the entire initial handshake transcript and we could use that? [ns] Addressed in the open question section above. Section 5.2.1 Certificates chosen in the Certificate message MUST conform to the requirements of a Certificate message in the negotiated version of TLS. In particular, the certificate chain MUST be valid for the signature algorithms indicated by the peer in the \"signature_algorithms\" and \"signature_algorithms_cert\" extension, as described in Section 4.2.3 of [TLS13] for TLS 1.3 or the \"signature_algorithms\" extension from Sections 7.4.2 and 7.4.6 of [RFC5246] for TLS 1.2. This seems awkward. Do \"the requirements of a Certificate message in the negotiated version of TLS\" include the actual message structure? 
Because the rest of the document suggests that we always use the TLS 1.3 Certificate, but this line would set us up for using a TLS 1.2 Certificate. [ns] They do not include the actual message structure. The constraints are about the certificate_list. I\u2019ve updated the text. Also, per \u00a71.3 of RFC 8446, \"signature_algorithms_cert\" also applies to TLS 1.2, so the last sentence seems incorrect or at least incomplete. [ns] Updated Section 5.2.1 Otherwise, the Certificate message MUST contain only extensions present in the TLS handshake. Unrecognized extensions in the authenticator request MUST be ignored. At least the first sentence seems redundant with the previous section (but I did not propose removing this in my editorial PR). The second sentence arguably follows from the RFC 8446 requirements, though I don't mind mentioning it again as much as I do for the first sentence. [ns] I don\u2019t mind being explicit here considering there was some confusion about this previously. Section 5.2.2 The authenticator transcript is the hash of the concatenated Handshake Context, authenticator request (if present), and Certificate message: Hash(Handshake Context || authenticator request || Certificate) Where Hash is the authenticator hash defined in section 4.1. If the authenticator request is not present, it is omitted from this construction, i.e., it is zero-length. This construction is injective, because the handshake messages start with a message HandshakeType. Should we say so explicitly in the document? [ns] I\u2019m not sure I understand what injective means in this context or how it helps the reader. If the party that generates the exported authenticator does so with a different connection than the party that is validating it, then the Handshake Context will not match, resulting in a CertificateVerify message that does not validate. This includes situations in which the application data is sent via TLS-terminating proxy. 
Given a failed CertificateVerify validation, it may be helpful for the application to confirm that both peers share the same connection using a value derived from the connection secrets before taking a user-visible action. Is it safe to reuse the Handshake Context itself as the \"value derived from the connection secrets\"? [ns] Yes, it should be safe. Updated. Section 7.4 It returns the identity, such as the certificate chain and its extensions, and a status to indicate whether the authenticator is valid or not after applying the function for validating the certificate chain to the chain contained in the authenticator. I strongly recommend not returning the identity in the case when validation fails. [ns] Fair. Updated. The API should return a failure if the certificate_request_context of the authenticator was used in a previously validated authenticator. Is the intent for this to be a \"distinct previously validated authenticator\" (i.e., that the API returns the same value when given the same inputs subsequently)? That does not seem to be the meaning conveyed by the current text. [ns] Yes, this is about distinct validators. Updated. Section 8.2 IANA is requested to add the following entries to the registry for Exporter Labels (defined in [RFC5705]): \"EXPORTER-server authenticator handshake context\", \"EXPORTER-client authenticator finished key\" and \"EXPORTER-server authenticator finished key\" with Don't we also need \"EXPORTER-client authenticator handshake context\"? [ns] Yes, added. Section 9 I think we should more explicitly incorporate the TLS 1.3 security considerations by reference. [ns] Yes. Section 11.1 I don't think [QUIC-TLS] or RFCs 7250 and 7540 are normative references. [ns] Updated Murray Kucherawy's No Objection Why are the SHOULDs in Section 4 only SHOULDs? Why might an implementer reasonably not do what this says? [ns] Addressed after discussion on list and in reply above. 
Similarly, the SHOULDs in Section 7 seem awkward, as they have little to do with interoperability. [ns] It\u2019s true that this is not about interoperability. I\u2019m open to changing these to lower-case shoulds since they reflect implementation recommendations rather than interoperability. Nick", "new_text": "The signatures generated with this API cover the context string \"Exported Authenticator\" and therefore cannot be transplanted into other protocols. In TLS 1.3 the client cannot explicitly learn from the TLS layer whether its Finished message was accepted. Because the application traffic keys are not dependent on the client's final flight, receiving messages from the server does not prove that the server received the client's Finished. To avoid disagreement between the client and server on the authentication status of EAs, servers MUST verify the client Finished before sending an EA or processing a received EA. "}
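The entry above quotes the authenticator transcript construction from Section 5.2.2: the hash of the concatenated Handshake Context, authenticator request (if present), and Certificate message, with an absent request contributing zero bytes. A minimal sketch of that construction (Python `hashlib`; the function name and the SHA-256 default are illustrative assumptions, since the real hash is the negotiated authenticator hash from section 4.1):

```python
import hashlib

def authenticator_transcript(handshake_context: bytes,
                             certificate_msg: bytes,
                             authenticator_request: bytes = b"",
                             hash_name: str = "sha256") -> bytes:
    """Hash(Handshake Context || authenticator request || Certificate).

    An absent authenticator request is treated as zero-length, per the
    quoted draft text. SHA-256 stands in for the authenticator hash
    negotiated on the connection (an assumption for this sketch).
    """
    h = hashlib.new(hash_name)
    h.update(handshake_context)
    h.update(authenticator_request)  # b"" when no request was sent
    h.update(certificate_msg)
    return h.digest()
```

Note that because the Handshake Context is connection-specific, two peers on different connections compute different transcripts, which is what makes the CertificateVerify fail across a TLS-terminating proxy as discussed above.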
{"id": "q-en-tls-subcerts-712efc365d20c34e191fe35b898c44faf2d8d6b5ecb01c42511e5420c8db212c", "old_text": "Abstract The organizational separation between the operator of a TLS server and the certificate authority can create limitations. For example, the lifetime of certificates, how they may be used, and the algorithms they support are ultimately determined by the certificate authority. This document describes a mechanism by which operators may delegate their own credentials for use in TLS, without breaking compatibility with clients that do not support this specification. 1.", "comments": "Just some nits.\nThe public key name and type have been changed in , as NAME suggested on the mailing list.\nNAME beat me to it!\noh looks like the travis build is failing for some reason\nNAME I have no idea why pushing that worked ...\nawesomesauce. it's probably some temporary failure while trying to fetch the new references or something", "new_text": "Abstract The organizational separation between the operator of a TLS server and the certification authority can create limitations. For example, the lifetime of certificates, how they may be used, and the algorithms they support are ultimately determined by the certification authority. This document describes a mechanism by which operators may delegate their own credentials for use in TLS, without breaking compatibility with clients that do not support this specification. 1."}
{"id": "q-en-tls-subcerts-712efc365d20c34e191fe35b898c44faf2d8d6b5ecb01c42511e5420c8db212c", "old_text": "operators to limit the exposure of keys in cases that they do not realize a compromise has occurred. The risk inherent in cross- organizational transactions makes it operationally infeasible to rely on an external CA for such short-lived credentials. In OCSP stapling, if an operator chooses to talk frequently to the CA to obtain stapled responses, then failure to fetch an OCSP stapled response results only in degraded performance. On the other hand, failure to fetch a potentially large number of short lived certificates would result in the service not being available, which", "comments": "Just some nits.\nThe public key name and type have been changed in , as NAME suggested on the mailing list.\nNAME beat me to it!\noh looks like the travis build is failing for some reason\nNAME I have no idea why pushing that worked ...\nawesomesauce. it's probably some temporary failure while trying to fetch the new references or something", "new_text": "operators to limit the exposure of keys in cases that they do not realize a compromise has occurred. The risk inherent in cross- organizational transactions makes it operationally infeasible to rely on an external CA for such short-lived credentials. In OCSP stapling (i.e., using the Certificate Status extension types ocsp RFC6066 or ocsp_multi RFC6961), if an operator chooses to talk frequently to the CA to obtain stapled responses, then failure to fetch an OCSP stapled response results only in degraded performance. On the other hand, failure to fetch a potentially large number of short lived certificates would result in the service not being available, which"}
{"id": "q-en-tls-subcerts-712efc365d20c34e191fe35b898c44faf2d8d6b5ecb01c42511e5420c8db212c", "old_text": "associated signature algorithm). The signature on the credential indicates a delegation from the certificate that is issued to the TLS server operator. The secret key used to sign a credential corresponds to the public key of the X.509 end-entity certificate. A TLS handshake that uses delegated credentials differs from a normal handshake in a few important ways:", "comments": "Just some nits.\nThe public key name and type have been changed in , as NAME suggested on the mailing list.\nNAME beat me to it!\noh looks like the travis build is failing for some reason\nNAME I have no idea why pushing that worked ...\nawesomesauce. it's probably some temporary failure while trying to fetch the new references or something", "new_text": "associated signature algorithm). The signature on the credential indicates a delegation from the certificate that is issued to the TLS server operator. The secret key used to sign a credential corresponds to the public key of the TLS server's X.509 end-entity certificate. A TLS handshake that uses delegated credentials differs from a normal handshake in a few important ways:"}
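The entry above notes that the key signing a delegated credential corresponds to the public key of the TLS server's end-entity certificate. A sketch of assembling the to-be-signed input, mirroring the TLS 1.3 CertificateVerify style (64 bytes of 0x20 padding, a context string, a separating zero byte); the exact context string and field layout here follow later revisions of the subcerts draft and should be treated as assumptions:

```python
def dc_signature_input(cert_der: bytes, credential_tbs: bytes,
                       context: bytes = b"TLS, server delegated credentials"
                       ) -> bytes:
    """Bytes the delegation certificate's key signs over for a DC.

    Layout mirrors the TLS 1.3 CertificateVerify construction: 64 bytes
    of 0x20 padding, a context string, a zero byte, then the end-entity
    certificate and the credential's serialized fields. The context
    string and ordering are assumptions based on later draft revisions,
    not taken from the quoted text.
    """
    return b"\x20" * 64 + context + b"\x00" + cert_der + credential_tbs
```

The padding-plus-context prefix is what prevents a DC signature from being confused with a signature produced in any other TLS context by the same key.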
{"id": "q-en-tls-subcerts-712efc365d20c34e191fe35b898c44faf2d8d6b5ecb01c42511e5420c8db212c", "old_text": "It was noted in XPROT that certificates in use by servers that support outdated protocols such as SSLv2 can be used to forge signatures for certificates that contain the keyEncipherment KeyUsage (RFC5280 section 4.2.1.3) In order to prevent this type of cross- protocol attack, we define a new DelegationUsage extension to X.509 that permits use of delegated credentials. (See certificate- requirements.)", "comments": "Just some nits.\nThe public key name and type have been changed in , as NAME suggested on the mailing list.\nNAME beat me to it!\noh looks like the travis build is failing for some reason\nNAME I have no idea why pushing that worked ...\nawesomesauce. it's probably some temporary failure while trying to fetch the new references or something", "new_text": "It was noted in XPROT that certificates in use by servers that support outdated protocols such as SSLv2 can be used to forge signatures for certificates that contain the keyEncipherment KeyUsage (RFC5280 section 4.2.1.3). In order to prevent this type of cross- protocol attack, we define a new DelegationUsage extension to X.509 that permits use of delegated credentials. (See certificate- requirements.)"}
{"id": "q-en-tls-subcerts-712efc365d20c34e191fe35b898c44faf2d8d6b5ecb01c42511e5420c8db212c", "old_text": "private key. The mechanism proposed in this document allows the delegation to be done off-line, with no per-transaction latency. The figure below compares the message flows for these two mechanisms with TLS 1.3 I-D.ietf-tls-tls13. These two mechanisms can be complementary. A server could use credentials for clients that support them, while using LURK to", "comments": "Just some nits.\nThe public key name and type have been changed in , as NAME suggested on the mailing list.\nNAME beat me to it!\noh looks like the travis build is failing for some reason\nNAME I have no idea why pushing that worked ...\nawesomesauce. it's probably some temporary failure while trying to fetch the new references or something", "new_text": "private key. The mechanism proposed in this document allows the delegation to be done off-line, with no per-transaction latency. The figure below compares the message flows for these two mechanisms with TLS 1.3 I-D.ietf-tls-tls13, where DC is delegated credentials. These two mechanisms can be complementary. A server could use credentials for clients that support them, while using LURK to"}
{"id": "q-en-tls-subcerts-712efc365d20c34e191fe35b898c44faf2d8d6b5ecb01c42511e5420c8db212c", "old_text": "to match the protocol version that is negotiated by the client and server. The credential's public key, a DER-encoded SubjectPublicKeyInfo as defined in RFC5280. The delegated credential has the following structure:", "comments": "Just some nits.\nThe public key name and type have been changed in , as NAME suggested on the mailing list.\nNAME beat me to it!\noh looks like the travis build is failing for some reason\nNAME I have no idea why pushing that worked ...\nawesomesauce. it's probably some temporary failure while trying to fetch the new references or something", "new_text": "to match the protocol version that is negotiated by the client and server. The credential's public key, a DER-encoded X690 SubjectPublicKeyInfo as defined in RFC5280. The delegated credential has the following structure:"}
{"id": "q-en-tls-subcerts-2de55f8100710bd292f098a3b997d176ede8f1e47fd2f392db9726a098f3d89e", "old_text": "Many of the use cases for delegated credentials can also be addressed using purely server-side mechanisms that do not require changes to client behavior (e.g., LURK I-D.mglt-lurk-tls-requirements). These mechanisms, however, incur per-transaction latency, since the front- end server has to interact with a back-end server that holds a private key. The mechanism proposed in this document allows the delegation to be done off-line, with no per-transaction latency. The figure below compares the message flows for these two mechanisms with TLS 1.3 I-D.ietf-tls-tls13. These two mechanisms can be complementary. A server could use credentials for clients that support them, while using LURK to support legacy clients. It is possible to address the short-lived certificate concerns above by automating certificate issuance, e.g., with ACME I-D.ietf-acme- acme. In addition to requiring frequent operationally-critical interactions with an external party, this makes the server operator dependent on the CA's willingness to issue certificates with sufficiently short lifetimes. It also fails to address the issues with algorithm support. Nonetheless, existing automated issuance APIs like ACME may be useful for provisioning credentials within an operator network. 3. While X.509 forbids end-entity certificates from being used as issuers for other certificates, it is perfectly fine to use them to issue other signed objects as long as the certificate contains the digitalSignature KeyUsage (RFC5280 section 4.2.1.3). We define a new signed object format that would encode only the semantics that are needed for this application. 
The credential has the following structure: Relative time in seconds from the beginning of the delegation", "comments": "Use RFC references\nMaybe use generic language, like \"signing oracle\" :)\nooh, naming things is so difficult :) Maybe \"remote certificate signing offload\"?\nPlease double-check. At least one of these appears to be wrong.\nwere you looking at the editor copy perhaps, does the latest draft appear wrong as well? URL i think the editor copy went out of sync\nAhh. I was seeing Richard as being out of date. I was also wondering if NAME wanted to consider affiliation adjustment also.\nNAME Should we update your affiliation to Mozilla here?\nRequest an extension point from the IANA experts and add it to the document.\nshipit. As these are all editorial changes, I don't think we need a long discussion.", "new_text": "Many of the use cases for delegated credentials can also be addressed using purely server-side mechanisms that do not require changes to client behavior (e.g., a PKCS#11 interface or a remote signing mechanism KEYLESS). These mechanisms, however, incur per-transaction latency, since the front-end server has to interact with a back-end server that holds a private key. The mechanism proposed in this document allows the delegation to be done off-line, with no per- transaction latency. The figure below compares the message flows for these two mechanisms with TLS 1.3 RFC8446. These two mechanisms can be complementary. A server could use credentials for clients that support them, while using LURK to support legacy clients. It is possible to address the short-lived certificate concerns above by automating certificate issuance, e.g., with ACME RFC8555. In addition to requiring frequent operationally-critical interactions with an external party, this makes the server operator dependent on the CA's willingness to issue certificates with sufficiently short lifetimes. It also fails to address the issues with algorithm support. 
Nonetheless, existing automated issuance APIs like ACME may be useful for provisioning credentials within an operator network. 3. While X.509 forbids end-entity certificates from being used as issuers for other certificates, it is perfectly fine to use them to issue other signed objects as long as the certificate contains the digitalSignature KeyUsage (RFC 5280 section 4.2.1.3). We define a new signed object format that would encode only the semantics that are needed for this application. The credential has the following structure: Relative time in seconds from the beginning of the delegation"}
{"id": "q-en-tls-subcerts-2de55f8100710bd292f098a3b997d176ede8f1e47fd2f392db9726a098f3d89e", "old_text": "5.4. Delegated credentials can be valid for 7 days and it is much easier for a service to create delegated credential than a certificate signed by a CA. A service could determine the client time and clock", "comments": "Use RFC references\nMaybe use generic language, like \"signing oracle\" :)\nooh, naming things is so difficult :) Maybe \"remote certificate signing offload\"?\nPlease double-check. At least one of these appears to be wrong.\nwere you looking at the editor copy perhaps, does the latest draft appear wrong as well? URL i think the editor copy went out of sync\nAhh. I was seeing Richard as being out of date. I was also wondering if NAME wanted to consider affiliation adjustment also.\nNAME Should we update your affiliation to Mozilla here?\nRequest an extension point from the IANA experts and add it to the document.\nshipit. As these are all editorial changes, I don't think we need a long discussion.", "new_text": "5.4. If a client decides to cache the certificate chain and re-validate it when resuming a connection, the client SHOULD also cache the associated delegated credential and re-validate it. 5.5. Delegated credentials can be valid for 7 days and it is much easier for a service to create delegated credential than a certificate signed by a CA. A service could determine the client time and clock"}
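The 7-day validity ceiling discussed in the entry above can be enforced mechanically on the client. A sketch of that check (times as Unix seconds; the function and parameter names are illustrative, and the reading of valid_time as relative seconds from the delegation certificate's notBefore follows the credential structure quoted earlier):

```python
MAX_VALIDITY = 7 * 24 * 3600  # the 7-day cap discussed above, in seconds

def dc_is_valid(cert_not_before: int, valid_time: int, now: int) -> bool:
    """Check a delegated credential's validity window.

    valid_time is taken as relative seconds from the delegation
    certificate's notBefore, per the quoted structure; credentials
    claiming more than 7 days are rejected outright. Clock-skew
    allowances are deliberately omitted from this sketch.
    """
    if valid_time > MAX_VALIDITY:
        return False
    return cert_not_before <= now < cert_not_before + valid_time
```

A cached credential should be re-checked against the current time on every resumption, matching the caching guidance in the new 5.4 text.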
{"id": "q-en-tls-subcerts-f467b45eaaf38f201ed552066cd3c945b3c954f5b2e70b715c5f16ef56db57c5", "old_text": "1. Typically, a TLS server uses a certificate provided by some entity other than the operator of the server (a \"Certification Authority\" or CA) RFC8446 RFC5280. This organizational separation makes the TLS", "comments": "Addresses .\nRefactor intro to: Use case Problem description Solution from . Hi Sean, Thanks for the follow-up. I refreshed my memory as much as I could. Please see my responses in line. Yours, Daniel Sean Turner wrote: I think I would rather take the structure: 1 - Use case 2 - Problem description 3 - Solution I am fine with whatever fits best to you. Server operators often deploy TLS termination services in locations such as remote data centers or Content Delivery Networks (CDNs) where it may be difficult to detect key compromises. Short-lived certificates may be used to limit the exposure of keys in these cases. However, short-lived certificates need to be renewed more frequently than long-lived certificates. If an external CA is unable to issue a certificate in time to replace a deployed certificate, the server would no longer be able to present a valid certificate to clients. With short-lived certificates, there is a smaller window of time to renew a certificates and therefore a higher risk that an outage at a CA will negatively affect the uptime of the service. Typically, a TLS server uses a certificate provided by some entity other than the operator of the server (a \"Certification Authority\" or CA) [RFC8446 ] [RFC5280 ]. This organizational separation makes the TLS server operator dependent on the CA for some aspects of its operations, for example: Whenever the server operator wants to deploy a new certificate, it has to interact with the CA. The server operator can only use TLS signature schemes for which the CA will issue credentials. 
To reduce the dependency on external CAs, this document proposes a limited delegation mechanism that allows a TLS peer to issue its own credentials within the scope of a certificate issued by an external CA. These credentials only enable the recipient of the delegation to speak for names that the CA has authorized. Furthermore, this mechanism allows the server to use modern signature algorithms such as Ed25519 [RFC8032 ] even if their CA does not support them. We will refer to the certificate issued by the CA as a \"certificate\", or \"delegation certificate\", and the one issued by the operator as a \"delegated credential\" or \"DC\". I think my concern has been clarified by Ilari. It was resolved by your b). I am fine with both text. works for me. The text does not address my concern and will recap my perspective of the discussion. My initial comment was that the reference [KEYLESS] points to a solution that does not work with TLS 1.3 and currently the only effort compatible with TLS 1.3 is draft-mglt-lurk-tls13. I expect this reference to be mentioned and I do not see it in the text proposed text. If the intention with [KEYLESS] is to mention TLS Handshake Proxying techniques, then I argued that other mechanisms be mentioned and among others draft-mglt-lurk-tls12 and draft-mglt-lurk-tls13 that address different versions of TLS. This was my second concern and I do not see any of these references being mentioned. The way I understand your response is that the paper pointed out by [KEYLESS] is not Cloudflare's commercial product but instead a generic TLS Handshake proxying and that you propose to add a reference to an academic paper that discusses the LURK extension for TLS 1.2. I appreciate that other solutions - and in this case LURK, are being considered. I am wondering however why draft-mglt-lurk-tls12 is being replaced by an academic reference [REF2] and why draft-mglt-lurk-tls13 has been omitted. 
It seems that we are trying to avoid any reference of the work that is happening at the IETF. Is there anything I am missing ? My suggestion for the text OLD: (e.g., a PKCS interface or a remote signing mechanism [KEYLESS]) NEW: (e.g., a PKCS interface or a remote signing mechanism such as [draft-mglt-lurk-tls13] or [draft-mglt-lurk-tls12] ([LURK-TLS12]) or [KEYLESS]) with [KEYLESS] Sullivan, N. and D. Stebila, \"An Analysis of TLS Handshake Proxying\", IEEE TrustCom/BigDataSE/ISPA 2015, 2015. [LURK-TLS12] Boureanu, I., Migault, D., Preda, S., Alamedine, H.A., Mishra, S. Fieau, F., and M. Mannan, \"LURK: Server-Controlled TLS Delegation\u201d, IEEE TrustCom 2020, URL, 2020. [draft-mglt-lurk-tls12] IETF draft [draft-mglt-lurk-tls13] IETF draft It is unclear to me what is generic about the paper pointed by [KEYLESS] or [REF1]. In any case, for clarification, the paper clearly refers to Cloudflare commercial products as opposed to a generic mechanism, and this will remain whatever label will be used as a reference. Typically, Cloudflare co-authors the paper appears 12 times in the 8 page paper. The contribution section mentions: \"\"\" Our work focuses specifically on the TLS handshake proxying system implemented by CloudFlare in their Keyless SSL product [6], [7] \"\"\" The methodology mentions says: \"\"\" We tested TLS key proxying using CloudFlare\u2019s implementation, which was implemented with the following three parts: \"\"\" \"\"\" The different scenarios were set up through CloudFlare\u2019s control panel. In the direct handshake handshake scenario, the site is set up with no reverse proxy. In the scenario where the key is held by the edge server, the same certificate that was used on the origin is uploaded to CloudFlare and the reverse proxy is enabled for the site. \"\"\" Cloudflare appears in at least references URL CloudFlare Inc., \u201cCloudFlare Keyless SSL,\u201d Sep. 2014, URL N. 
Sullivan, \u201cKeyless SSL: The nitty gritty technical details,\u201d Sep. 2014, URL works for me. works for me seems clearer. I suspect earlier clarifications addressed this. thanks, that is way clearer. Daniel Migault Ericsson", "new_text": "1. Server operators often deploy TLS termination services in locations such as remote data centers or Content Delivery Networks (CDNs) where it may be difficult to detect key compromises. Short-lived certificates may be used to limit the exposure of keys in these cases. However, short-lived certificates need to be renewed more frequently than long-lived certificates. If an external CA is unable to issue a certificate in time to replace a deployed certificate, the server would no longer be able to present a valid certificate to clients. With short-lived certificates, there is a smaller window of time to renew a certificate and therefore a higher risk that an outage at a CA will negatively affect the uptime of the service. Typically, a TLS server uses a certificate provided by some entity other than the operator of the server (a \"Certification Authority\" or CA) RFC8446 RFC5280. This organizational separation makes the TLS"
{"id": "q-en-tls-subcerts-f467b45eaaf38f201ed552066cd3c945b3c954f5b2e70b715c5f16ef56db57c5", "old_text": "The server operator can only use TLS signature schemes for which the CA will issue credentials. These dependencies cause problems in practice. Server operators often deploy TLS termination services in locations such as remote data centers or Content Delivery Networks (CDNs) where it may be difficult to detect key compromises. Short-lived certificates may be used to limit the exposure of keys in these cases. However, short-lived certificates need to be renewed more frequently than long-lived certificates. If an external CA is unable to issue a certificate in time to replace a deployed certificate, the server would no longer be able to present a valid certificate to clients. With short-lived certificates, there is a smaller window of time to renew a certificates and therefore a higher risk that an outage at a CA will negatively affect the uptime of the service. To reduce the dependency on external CAs, this document proposes a limited delegation mechanism that allows a TLS peer to issue its own credentials within the scope of a certificate issued by an external", "comments": "Addresses .\nRefactor intro to: Use case Problem description Solution from . Hi Sean, Thanks for the follow-up. I refreshed my memory as much as I could. Please see my responses in line. Yours, Daniel Sean Turner wrote: I think I would rather take the structure: 1 - Use case 2 - Problem description 3 - Solution I am fine with whatever fits best to you. Server operators often deploy TLS termination services in locations such as remote data centers or Content Delivery Networks (CDNs) where it may be difficult to detect key compromises. Short-lived certificates may be used to limit the exposure of keys in these cases. However, short-lived certificates need to be renewed more frequently than long-lived certificates. 
If an external CA is unable to issue a certificate in time to replace a deployed certificate, the server would no longer be able to present a valid certificate to clients. With short-lived certificates, there is a smaller window of time to renew a certificates and therefore a higher risk that an outage at a CA will negatively affect the uptime of the service. Typically, a TLS server uses a certificate provided by some entity other than the operator of the server (a \"Certification Authority\" or CA) [RFC8446 ] [RFC5280 ]. This organizational separation makes the TLS server operator dependent on the CA for some aspects of its operations, for example: Whenever the server operator wants to deploy a new certificate, it has to interact with the CA. The server operator can only use TLS signature schemes for which the CA will issue credentials. To reduce the dependency on external CAs, this document proposes a limited delegation mechanism that allows a TLS peer to issue its own credentials within the scope of a certificate issued by an external CA. These credentials only enable the recipient of the delegation to speak for names that the CA has authorized. Furthermore, this mechanism allows the server to use modern signature algorithms such as Ed25519 [RFC8032 ] even if their CA does not support them. We will refer to the certificate issued by the CA as a \"certificate\", or \"delegation certificate\", and the one issued by the operator as a \"delegated credential\" or \"DC\". I think my concern has been clarified by Ilari. It was resolved by your b). I am fine with both text. works for me. The text does not address my concern and will recap my perspective of the discussion. My initial comment was that the reference [KEYLESS] points to a solution that does not work with TLS 1.3 and currently the only effort compatible with TLS 1.3 is draft-mglt-lurk-tls13. I expect this reference to be mentioned and I do not see it in the text proposed text. 
If the intention with [KEYLESS] is to mention TLS Handshake Proxying techniques, then I argued that other mechanisms be mentioned and among others draft-mglt-lurk-tls12 and draft-mglt-lurk-tls13 that address different versions of TLS. This was my second concern and I do not see any of these references being mentioned. The way I understand your response is that the paper pointed out by [KEYLESS] is not Cloudflare's commercial product but instead a generic TLS Handshake proxying and that you propose to add a reference to an academic paper that discusses the LURK extension for TLS 1.2. I appreciate that other solutions - and in this case LURK, are being considered. I am wondering however why draft-mglt-lurk-tls12 is being replaced by an academic reference [REF2] and why draft-mglt-lurk-tls13 has been omitted. It seems that we are trying to avoid any reference of the work that is happening at the IETF. Is there anything I am missing ? My suggestion for the text OLD: (e.g., a PKCS interface or a remote signing mechanism [KEYLESS]) NEW: (e.g., a PKCS interface or a remote signing mechanism such as [draft-mglt-lurk-tls13] or [draft-mglt-lurk-tls12] ([LURK-TLS12]) or [KEYLESS]) with [KEYLESS] Sullivan, N. and D. Stebila, \"An Analysis of TLS Handshake Proxying\", IEEE TrustCom/BigDataSE/ISPA 2015, 2015. [LURK-TLS12] Boureanu, I., Migault, D., Preda, S., Alamedine, H.A., Mishra, S. Fieau, F., and M. Mannan, \"LURK: Server-Controlled TLS Delegation\u201d, IEEE TrustCom 2020, URL, 2020. [draft-mglt-lurk-tls12] IETF draft [draft-mglt-lurk-tls13] IETF draft It is unclear to me what is generic about the paper pointed by [KEYLESS] or [REF1]. In any case, for clarification, the paper clearly refers to Cloudflare commercial products as opposed to a generic mechanism, and this will remain whatever label will be used as a reference. Typically, Cloudflare co-authors the paper appears 12 times in the 8 page paper. 
The contribution section mentions: \"\"\" Our work focuses specifically on the TLS handshake proxying system implemented by CloudFlare in their Keyless SSL product [6], [7] \"\"\" The methodology mentions says: \"\"\" We tested TLS key proxying using CloudFlare\u2019s implementation, which was implemented with the following three parts: \"\"\" \"\"\" The different scenarios were set up through CloudFlare\u2019s control panel. In the direct handshake handshake scenario, the site is set up with no reverse proxy. In the scenario where the key is held by the edge server, the same certificate that was used on the origin is uploaded to CloudFlare and the reverse proxy is enabled for the site. \"\"\" Cloudflare appears in at least references URL CloudFlare Inc., \u201cCloudFlare Keyless SSL,\u201d Sep. 2014, URL N. Sullivan, \u201cKeyless SSL: The nitty gritty technical details,\u201d Sep. 2014, URL works for me. works for me seems clearer. I suspect earlier clarifications addressed this. thanks that it way clearer. Daniel Migault Ericsson", "new_text": "The server operator can only use TLS signature schemes for which the CA will issue credentials. To reduce the dependency on external CAs, this document proposes a limited delegation mechanism that allows a TLS peer to issue its own credentials within the scope of a certificate issued by an external"}
{"id": "q-en-tls-subcerts-3918dba5054895d90af43da83276a0a4502a94c4abb9c4edc8bb2f3244e7108d", "old_text": "A client which supports this document SHALL send an empty \"delegated_credential\" extension in its ClientHello. If the client receives a delegated credential without indicating support, then the client SHOULD abort with an \"unexpected_message\" alert. If the extension is present, the server MAY send a delegated credential extension. If the extension is not present, the server MUST NOT send a credential. A credential MUST NOT be provided unless a Certificate message is also sent. The server MUST ignore the extension unless TLS 1.2 or later is negotiated. When negotiating TLS 1.3, and using delegated credentials, the server MUST send the delegated credential as an extension in the CertificateEntry of its end-entity certificate; the client SHOULD ignore delegated credentials sent as extensions with any other certificate. When negotiating TLS 1.2, the delegated credential MUST be sent as an extension in the ServerHello. The delegated credential contains a signature from the public key in the end-entity certificate using a signature algorithm advertised by", "comments": "Based on The delegation certificate MAY have the DelegationUsage extension marked critical. This also adds a \"strict\" flag to the extension that, if set, requires the server to offer a DC. An alternative that was discussed is to add an optional TLS-feature extension (per RFC 7633) that would provide a \"must-use-DC\" feature, analogous to \"must-staple-OCSP\". Howver, it's possible for a certificate with such an extension to be used in a handshake without DCs if the client doesn't indicate support for DCs.", "new_text": "A client which supports this document SHALL send an empty \"delegated_credential\" extension in its ClientHello. If the client receives a delegated credential without indicating support, then the client MUST abort with an \"unexpected_message\" alert. 
If the extension is present, the server MAY send a delegated credential extension; if the extension is not present, the server MUST NOT send a delegated credential. A delegated credential MUST NOT be provided unless a Certificate message is also sent. The server MUST ignore the extension unless TLS 1.2 or later is negotiated. When negotiating TLS 1.3, the server MUST send the delegated credential as an extension in the CertificateEntry of its end-entity certificate; the client SHOULD ignore delegated credentials sent as extensions to any other certificate. When negotiating TLS 1.2, the delegated credential MUST be sent as an extension in the ServerHello. The delegated credential contains a signature from the public key in the end-entity certificate using a signature algorithm advertised by"}
{"id": "q-en-tls-subcerts-3918dba5054895d90af43da83276a0a4502a94c4abb9c4edc8bb2f3244e7108d", "old_text": "certificate when the certificate permits the usage of delegated credentials. Conforming CAs MUST mark this extension as non-critical. This allows the certificate to be used by service owners for clients that do not support certificate delegation as well and not need to obtain two certificates. The client MUST NOT accept a delegated credential unless the server's end-entity certificate satisfies the following criteria:", "comments": "Based on The delegation certificate MAY have the DelegationUsage extension marked critical. This also adds a \"strict\" flag to the extension that, if set, requires the server to offer a DC. An alternative that was discussed is to add an optional TLS-feature extension (per RFC 7633) that would provide a \"must-use-DC\" feature, analogous to \"must-staple-OCSP\". However, it's possible for a certificate with such an extension to be used in a handshake without DCs if the client doesn't indicate support for DCs.", "new_text": "certificate when the certificate permits the usage of delegated credentials. The client MUST NOT accept a delegated credential unless the server's end-entity certificate satisfies the following criteria:"}
{"id": "q-en-tls-subcerts-3918dba5054895d90af43da83276a0a4502a94c4abb9c4edc8bb2f3244e7108d", "old_text": "type in RFC5280), but has the keyEncipherment and dataEncipherment usages are disabled. 5. TBD", "comments": "Based on The delegation certificate MAY have the DelegationUsage extension marked critical. This also adds a \"strict\" flag to the extension that, if set, requires the server to offer a DC. An alternative that was discussed is to add an optional TLS-feature extension (per RFC 7633) that would provide a \"must-use-DC\" feature, analogous to \"must-staple-OCSP\". However, it's possible for a certificate with such an extension to be used in a handshake without DCs if the client doesn't indicate support for DCs.", "new_text": "type in RFC5280), but has the keyEncipherment and dataEncipherment usages disabled. The extension MAY be marked critical. (See Section 4.2 of RFC5280.) If the strict boolean is set to true, then the server MUST use a delegated credential in the handshake; if no delegated credential is offered, then the client MUST abort the handshake with an \"illegal_parameter\" alert. 5. TBD"}
{"id": "q-en-tls-subcerts-3918dba5054895d90af43da83276a0a4502a94c4abb9c4edc8bb2f3244e7108d", "old_text": "6.1. Delegated credentials limit the exposure of the TLS private key by limiting its validity. An attacker who compromises the private key of a delegated credential can act as a man in the middle until the", "comments": "Based on The delegation certificate MAY have the DelegationUsage extension marked critical. This also adds a \"strict\" flag to the extension that, if set, requires the server to offer a DC. An alternative that was discussed is to add an optional TLS-feature extension (per RFC 7633) that would provide a \"must-use-DC\" feature, analogous to \"must-staple-OCSP\". However, it's possible for a certificate with such an extension to be used in a handshake without DCs if the client doesn't indicate support for DCs.", "new_text": "6.1. Marking the delegation certificate's DelegationUsage extension non-critical allows the certificate to be used for clients that do not support delegated credentials. However, it may be desirable to ensure that the delegation certificate is only used in handshakes in which a delegated credential is negotiated. It suffices to mark the extension critical and set the strict boolean to true: if the client does not support delegated credentials, then it will abort the handshake if the certificate has the DelegationUsage extension (as per Section 4.2 of RFC5280); if the client indicates support, but the server does not offer a delegated credential, then the client will abort the handshake (as per certificate-requirements). 6.2. Delegated credentials limit the exposure of the TLS private key by limiting its validity. An attacker who compromises the private key of a delegated credential can act as a man in the middle until the"}
{"id": "q-en-tls-subcerts-3918dba5054895d90af43da83276a0a4502a94c4abb9c4edc8bb2f3244e7108d", "old_text": "access control mechanisms SHOULD be used to protect it such as file system controls, physical security or hardware security modules. 6.2. Delegated credentials do not provide any additional form of early revocation. Since it is short lived, the expiry of the delegated", "comments": "Based on The delegation certificate MAY have the DelegationUsage extension marked critical. This also adds a \"strict\" flag to the extension that, if set, requires the server to offer a DC. An alternative that was discussed is to add an optional TLS-feature extension (per RFC 7633) that would provide a \"must-use-DC\" feature, analogous to \"must-staple-OCSP\". However, it's possible for a certificate with such an extension to be used in a handshake without DCs if the client doesn't indicate support for DCs.", "new_text": "access control mechanisms SHOULD be used to protect it such as file system controls, physical security or hardware security modules. 6.3. Delegated credentials do not provide any additional form of early revocation. Since it is short lived, the expiry of the delegated"}
{"id": "q-en-tls-subcerts-3918dba5054895d90af43da83276a0a4502a94c4abb9c4edc8bb2f3244e7108d", "old_text": "private key that signs the delegated credential also implictly revokes the delegated credential. 6.3. Delegated credentials can be valid for 7 days and it is much easier for a service to create delegated credential than a certificate", "comments": "Based on The delegation certificate MAY have the DelegationUsage extension marked critical. This also adds a \"strict\" flag to the extension that, if set, requires the server to offer a DC. An alternative that was discussed is to add an optional TLS-feature extension (per RFC 7633) that would provide a \"must-use-DC\" feature, analogous to \"must-staple-OCSP\". However, it's possible for a certificate with such an extension to be used in a handshake without DCs if the client doesn't indicate support for DCs.", "new_text": "private key that signs the delegated credential also implicitly revokes the delegated credential. 6.4. Delegated credentials can be valid for 7 days and it is much easier for a service to create a delegated credential than a certificate"}
{"id": "q-en-tls-subcerts-312ead9ab8d6d08f0114cb0399887a44c7a7c41f731bda2efaca62185a1d9d37", "old_text": "validity period. In the absence of an application profile standard specifying otherwise, the maximum validity period is set to 7 days. Peers MUST NOT issue credentials with a validity period longer than the maximum validity period. This mechanism is described in detail in client-and-server-behavior. It was noted in XPROT that certificates in use by servers that support outdated protocols such as SSLv2 can be used to forge", "comments": "Added text to address issue\nThere is a potential issue for credential renewal. On the credential owner's side there is a risk that if the owner entirely depends upon the delegated credential lifetime to trigger renewal, it could miss the fact that the underlying certificate has already expired, which will cause clients to fail validation. This can be addressed by adding the following to section 3: \"Peers MUST NOT issue credentials with a validity period longer than the maximum validity period or extend beyond the validity period of the X.509 certificate\" Or there could be an addition to the operational section that talks about renewal. \"5.2 Credential Renewal Credentials must be renewed before they expire and before the issuing X.509 certificate expires. \"\naddressed by pull request", "new_text": "validity period. In the absence of an application profile standard specifying otherwise, the maximum validity period is set to 7 days. Peers MUST NOT issue credentials with a validity period longer than the maximum validity period or that extends beyond the validity period of the delegation certificate. This mechanism is described in detail in client-and-server-behavior. It was noted in XPROT that certificates in use by servers that support outdated protocols such as SSLv2 can be used to forge"}
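The validity rule in the record above (a delegated credential's lifetime is capped at 7 days by default and must not extend beyond the delegation certificate's validity) can be sketched as a simple check. This is an illustrative sketch only; the function and parameter names are not from the draft.

```python
from datetime import datetime, timedelta

MAX_VALIDITY = timedelta(days=7)  # default maximum validity period from the draft

def credential_window_ok(cred_not_before, cred_not_after, cert_not_after):
    """Return True iff the delegated credential's validity period respects
    both limits: no longer than MAX_VALIDITY, and never extending beyond
    the delegation certificate's own expiry."""
    lifetime = cred_not_after - cred_not_before
    if lifetime > MAX_VALIDITY:
        return False  # longer than the maximum validity period
    if cred_not_after > cert_not_after:
        return False  # extends beyond the delegation certificate
    return True
```

A peer issuing credentials would run such a check before signing; a client could apply the same test before accepting one.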
{"id": "q-en-tls13-spec-446ca989481017b27d7e25d52180098de90364725f816f99b010dd4798e4d36f", "old_text": "\"signature_algorithms\" is REQUIRED for certificate authentication. \"supported_groups\" and \"key_share\" are REQUIRED for DHE or ECDHE key exchange. \"pre_shared_key\" is REQUIRED for PSK key agreement.", "comments": "Even when (EC)DHE is in use, the \"supported_groups\" extension is only mandatory in the ClientHello; it can be omitted from the EncryptedExtensions, as per section 4.2.6. Given that, it is not MTI for the server sending to the client, but the client must be prepared to accept the extension in EncryptedExtensions (at least to the point of not crashing or aborting the handshake), so leave the MTI text unchanged. It would be too awkward to try to express this subtlety in such a concise list.", "new_text": "\"signature_algorithms\" is REQUIRED for certificate authentication. \"supported_groups\" is REQUIRED for ClientHello messages using DHE or ECDHE key exchange. \"key_share\" is REQUIRED for DHE or ECDHE key exchange. \"pre_shared_key\" is REQUIRED for PSK key agreement."}
{"id": "q-en-tls13-spec-7f319c42fe953d6f9eb1086843c0347b79d4f765c07a26ff1ab43e1eb4f7abb5", "old_text": "with B and probably retry the request, leading to duplication on the server system as a whole. The first class of attack can be prevented by the mechanism described in this section. Servers need not permit 0-RTT at all, but those which do SHOULD implement either the single-use tickets or ClientHello recording techniques described in the following two sections. The second class of attack cannot be prevented at the TLS layer and MUST be dealt with by any application. Note that any application whose clients implement any kind of retry behavior already needs to implement some sort of anti-replay defense. In normal operation, clients will not know which, if any, of these mechanisms servers actually implement and therefore MUST only send early data which they are willing to have subject to the attacks described in replay-0rtt. 8.1. The simplest form of anti-replay defense is for the server to only", "comments": "This change rephrases the requirements and recommendations from prescribing specific mechanisms of 0-RTT replay protection to prescribing the guarantees expected from implementations, thus letting the implementations choose the way they can provide those guarantees. This change also introduces a MUST-level requirement for protection equivalent to having at least a local strike register.\nI've addressed all editorial nits suggested (except for the comma one, which I leave to editor's discretion).\nThere are some nits noted inline (by me and others), but those are subject to editorial discretion, so I will mark my overall approval.", "new_text": "with B and probably retry the request, leading to duplication on the server system as a whole. The first class of attack can be prevented by sharing state to guarantee that the 0-RTT data is accepted at most once. 
Servers SHOULD provide that level of replay safety by implementing one of the methods described in this section or by equivalent means. It is understood, however, that due to operational concerns not all deployments will maintain state at that level. Therefore, in normal operation, clients will not know which, if any, of these mechanisms servers actually implement and hence MUST only send early data which they deem safe to be replayed. In addition to the direct effects of replays, there is a class of attacks where even operations normally considered idempotent could be exploited by a large number of replays (timing attacks, resource limit exhaustion and others described in replay-0rtt). Those can be mitigated by ensuring that every 0-RTT payload can be replayed only a limited number of times. The server MUST ensure that any instance of it (be it a machine, a thread or any other entity within the relevant serving infrastructure) would accept 0-RTT for the same 0-RTT handshake at most once; this limits the number of replays to the number of server instances in the deployment. Such a guarantee can be accomplished by locally recording data from recently-received ClientHellos and rejecting repeats, or by any other method that provides the same or a stronger guarantee. The \"at most once per server instance\" guarantee is a minimum requirement; servers SHOULD limit 0-RTT replays further when feasible. The second class of attack cannot be prevented at the TLS layer and MUST be dealt with by any application. Note that any application whose clients implement any kind of retry behavior already needs to implement some sort of anti-replay defense. 8.1. The simplest form of anti-replay defense is for the server to only"}
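One way to realize the "at most once per server instance" guarantee described in the record above is a local strike register that remembers a digest of each accepted early-data ClientHello and rejects repeats. This is a minimal in-memory sketch, not the normative mechanism; class and method names are illustrative.

```python
import hashlib

class StrikeRegister:
    """Accept each distinct early-data ClientHello at most once within
    this server instance (minimal in-memory sketch; a real deployment
    would also bound the set by ticket age windows)."""
    def __init__(self):
        self._seen = set()

    def accept_0rtt(self, client_hello_bytes):
        digest = hashlib.sha256(client_hello_bytes).digest()
        if digest in self._seen:
            return False  # replay detected: reject 0-RTT, fall back to 1-RTT
        self._seen.add(digest)
        return True
```

A deployment with N such instances bounds the number of accepted replays of a given 0-RTT handshake to N, which is exactly the limit the rephrased text requires.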
{"id": "q-en-tls13-spec-fdeadd259e5028e267078ecc88a5f1a40b2b9fae7375dc27cb1c2053dfb1d484", "old_text": "CertificateEntry. If the corresponding certificate type extension (\"server_certificate_type\" or \"client_certificate_type\") was not used or the X.509 certificate type was negotiated, then each CertificateEntry contains an X.509 certificate. The sender's certificate MUST come in the first CertificateEntry in the list. Each following certificate SHOULD directly certify one preceding it. Because certificate validation requires that trust anchors be distributed independently, a certificate that specifies a trust anchor MAY be omitted from the chain, provided that supported peers are known to possess any omitted certificates. Note: Prior to TLS 1.3, \"certificate_list\" ordering required each certificate to certify the one immediately preceding it; however,", "comments": "the RFC 7250 semantics that both certificate types are globally negotiated. There is no support for mixed certificates.", "new_text": "CertificateEntry. If the corresponding certificate type extension (\"server_certificate_type\" or \"client_certificate_type\") was not negotiated in Encrypted Extensions, or the X.509 certificate type was negotiated, then each CertificateEntry contains an X.509 certificate. The sender's certificate MUST come in the first CertificateEntry in the list. Each following certificate SHOULD directly certify one preceding it. Because certificate validation requires that trust anchors be distributed independently, a certificate that specifies a trust anchor MAY be omitted from the chain, provided that supported peers are known to possess any omitted certificates. Note: Prior to TLS 1.3, \"certificate_list\" ordering required each certificate to certify the one immediately preceding it; however,"}
{"id": "q-en-tls13-spec-3948f5bf1568a6c897c47458f23a0dfd22006828c151b2d3a007651d68b5ce2f", "old_text": "\"key_share\", \"pre_shared_key\", \"psk_key_exchange_modes\", \"early_data\", \"cookie\", \"supported_versions\", \"certificate_authorities\", \"oid_filters\", \"post_handshake_auth\", and \"signature_algorithms_certs\", extensions with the values defined in this document and the Recommended value of \"Yes\". IANA [SHALL update/has updated] this registry to include a \"TLS", "comments": "the type is called signature_algorithms_cert (singular) everywhere, not signature_algorithms_certs (plural)", "new_text": "\"key_share\", \"pre_shared_key\", \"psk_key_exchange_modes\", \"early_data\", \"cookie\", \"supported_versions\", \"certificate_authorities\", \"oid_filters\", \"post_handshake_auth\", and \"signature_algorithms_cert\", extensions with the values defined in this document and the Recommended value of \"Yes\". IANA [SHALL update/has updated] this registry to include a \"TLS"}
{"id": "q-en-tls13-spec-da6633a418139f781e29a1862312f3ab8626d8aea7e73781e3fbb105b74369f2", "old_text": "associate an SNI value with the ticket. Clients, however, SHOULD store the SNI with the PSK to fulfill the requirements of NSTMessage. Implementor's note: the most straightforward way to implement the PSK/cipher suite matching requirements is to negotiate the cipher suite first and then exclude any incompatible PSKs. Any unknown PSKs (e.g., they are not in the PSK database or are encrypted with an unknown key) SHOULD simply be ignored. If no acceptable PSKs are found, the server SHOULD perform a non-PSK handshake if possible. Prior to accepting PSK key establishment, the server MUST validate the corresponding binder value (see psk-binder below). If this value", "comments": "Given that it's common to select the strongest cipher by default (thus likely using SHA-384 PRF) and that the default Hash associated with a PSK is SHA-256, simple upgrade of server and client to TLS 1.3 may cause handshake failures if only PSK key exchanges were configured and the existing implementor's note is followed.", "new_text": "associate an SNI value with the ticket. Clients, however, SHOULD store the SNI with the PSK to fulfill the requirements of NSTMessage. Implementor's note: when session resumption is the primary use case of PSKs, the most straightforward way to implement the PSK/cipher suite matching requirements is to negotiate the cipher suite first and then exclude any incompatible PSKs. Any unknown PSKs (e.g., they are not in the PSK database or are encrypted with an unknown key) SHOULD simply be ignored. If no acceptable PSKs are found, the server SHOULD perform a non-PSK handshake if possible. If backwards compatibility is important, client-provided, externally established PSKs SHOULD influence cipher suite selection. Prior to accepting PSK key establishment, the server MUST validate the corresponding binder value (see psk-binder below). If this value"}
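The implementor's note in the record above (negotiate the cipher suite first, then exclude incompatible PSKs, silently skipping unknown ones) can be sketched as a small selection routine. The function shape and the PSK-database layout here are assumptions for illustration, not part of the specification.

```python
def select_psk(negotiated_hash, offered_identities, psk_db):
    """Pick the first offered PSK whose associated hash matches the hash
    of the already-negotiated cipher suite. Unknown PSKs (not in the
    database) are simply ignored, per the implementor's note."""
    for identity in offered_identities:
        entry = psk_db.get(identity)
        if entry is None:
            continue  # unknown PSK: skip without error
        if entry["hash"] == negotiated_hash:
            return identity
    return None  # no acceptable PSK: fall back to a non-PSK handshake
```

The record's caveat is that for externally established PSKs (whose default hash is SHA-256), doing suite selection first can strand every PSK if a SHA-384 suite wins, so such PSKs should instead influence the suite choice.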
{"id": "q-en-tls13-spec-fa2c7056d1e3a7b471120eb7520f045b1657c46bf725c93c4a760724ed4d1c56", "old_text": "server has selected for the negotiated key exchange. Servers MUST NOT send a KeyShareEntry for any group not indicated in the \"supported_groups\" extension and MUST NOT send a KeyShareEntry when using the \"psk_ke\" PskKeyExchangeMode. If a HelloRetryRequest was received by the client, the client MUST verify that the selected NamedGroup in the ServerHello is the same as that in the HelloRetryRequest. If this check fails, the client MUST abort the handshake with an \"illegal_parameter\" alert.", "comments": "This clarifies two items relevant to stateless HelloRetryRequests: 1) If a HelloRetryRequest does not contain a \"key_share\" extension, the negotiated group does not have to match the non-existent selected group in the HRR. 2) If a HelloRetryRequest does select a group, the server can still choose to use psk_ke key exchange. When sending a stateless HelloRetryRequest, the server may not know if it will accept a PSK yet. This allows the server to still select a group in the HelloRetryRequest which it would need for a full handshake, but later decide to accept the PSK with psk_ke mode.\nSeems reasonable to me.", "new_text": "server has selected for the negotiated key exchange. Servers MUST NOT send a KeyShareEntry for any group not indicated in the \"supported_groups\" extension and MUST NOT send a KeyShareEntry when using the \"psk_ke\" PskKeyExchangeMode. If using (EC)DHE key establishment, and a HelloRetryRequest containing a \"key_share\" extension was received by the client, the client MUST verify that the selected NamedGroup in the ServerHello is the same as that in the HelloRetryRequest. If this check fails, the client MUST abort the handshake with an \"illegal_parameter\" alert."}
{"id": "q-en-tls13-spec-d8806608ec25dc6448f20f06db21302159bf54353748cd4d2781ee03d5d1b8e5", "old_text": "is not present or does not validate, the server MUST abort the handshake. Servers SHOULD NOT attempt to validate multiple binders; rather they SHOULD select a single PSK and validate solely the binder that corresponds to that PSK. See [client-hello-recording] for the security rationale for this requirement. In order to accept PSK key establishment, the server sends a \"pre_shared_key\" extension indicating the selected identity. Clients MUST verify that the server's selected_identity is within the range supplied by the client, that the server selected a cipher suite", "comments": "As na\u00efve handling of external identities may cause the server to be vulnerable to identity enumeration attacks, refer to the section that discusses that danger in the PSK section itself.", "new_text": "is not present or does not validate, the server MUST abort the handshake. Servers SHOULD NOT attempt to validate multiple binders; rather they SHOULD select a single PSK and validate solely the binder that corresponds to that PSK. See [client-hello-recording] and [psk-identity-exposure] for the security rationale for this requirement. In order to accept PSK key establishment, the server sends a \"pre_shared_key\" extension indicating the selected identity. Clients MUST verify that the server's selected_identity is within the range supplied by the client, that the server selected a cipher suite"}
{"id": "q-en-tls13-spec-0fb215f42f8efa8e3cbbf98027a51eea0ecfefb8e0f7bb4ba4d45d28b17f9b71", "old_text": "This alert notifies the recipient that the sender will not send any more messages on this connection. Any data received after a closure alert has been received MUST be ignored. This alert notifies the recipient that the sender is canceling the handshake for some reason unrelated to a protocol failure. If a", "comments": "close_notify has always used a warning alert level, but this is not actually written down anywhere. It seems to have gotten lost as early as RFC 4346 (TLS 1.1!). In RFC 2246, there is some text that mentions the correct level is warning as an aside in describing something else. I wasn't able to find any other text that discussed this. Then, RFC 4346 dropped the session termination behavior: But in doing so, it dropped any mention of which alert level to use. That text has carried over to RFC 8446 as: In RFC 8446, we said alert levels no longer matter and can be \"safely ignored\", but this still leaves unspecified what the sender should do. Skimming implementations, both BoringSSL and NSS will treat \"fatal\" close_notify as an error, so using \"warning\" is also necessary for interop.", "new_text": "This alert notifies the recipient that the sender will not send any more messages on this connection. Any data received after a closure alert has been received MUST be ignored. This alert MUST be sent with AlertLevel=warning. This alert notifies the recipient that the sender is canceling the handshake for some reason unrelated to a protocol failure. If a"}
{"id": "q-en-tls13-spec-b2174edbaca39bc5aa7d33fcc6c344fb1656d98318b425590186c7cc61c55b8e", "old_text": "not specify how protocols add security with TLS; how to initiate TLS handshaking and how to interpret the authentication certificates exchanged are left to the judgment of the designers and implementors of protocols that run on top of TLS. This document defines TLS version 1.3. While TLS 1.3 is not directly compatible with previous versions, all versions of TLS incorporate a", "comments": "The last few years I have personally experienced several occasions where people in SDOs as well as developers believe that TLS magically gives them authentication. This has resulted in security standards that just write that \"TLS is used for mutual authentication\" without any more details as well as implementations that do not provide any identity verification at all and instead just do a full path validation, when they should have checked the server identity, that the right trust anchor was used, and that the right intermediate CA was used. I think this is a common problem. Reading RFC8446, I think it is quite easy for people without expertise in security protocols to get the understanding that TLS gives you authentication. Suggestions: I think RFC8446 should be clearer that what is provided by the TLS protocol is transport of authentication credentials and proof-of-possession of the private authentication key. I think the requirements on the application using TLS should be clearer. 
Maybe: \"Application protocols using TLS MUST specify how to initiate TLS handshaking, how to do path validation, and how to do identity verification.\" I think it would be very good for readers if draft-ietf-uta-rfc6125bis was given as an informative reference.", "new_text": "not specify how protocols add security with TLS; how to initiate TLS handshaking and how to interpret the authentication certificates exchanged are left to the judgment of the designers and implementors of protocols that run on top of TLS. Application protocols using TLS MUST specify how TLS works with their application protocol, including how and when handshaking occurs, and how to do identity verification. I-D.ietf-uta-rfc6125bis provides useful guidance on integrating TLS with application protocols. This document defines TLS version 1.3. While TLS 1.3 is not directly compatible with previous versions, all versions of TLS incorporate a"}
{"id": "q-en-tls13-spec-d2fa86787c1ff1dac1f69cd04e926980ad142a3ddbfbbe76f5ce5f689bffd1bb", "old_text": "Freeze & deprecate record layer version field. draft-05 Prohibit SSL negotiation for backwards compatibility.", "comments": "Merging PR 100 added to the changelog at draft-04, but we're up to draft-06 now.", "new_text": "Freeze & deprecate record layer version field. Update format of signatures with context. draft-05 Prohibit SSL negotiation for backwards compatibility."}
{"id": "q-en-tls13-spec-d2fa86787c1ff1dac1f69cd04e926980ad142a3ddbfbbe76f5ce5f689bffd1bb", "old_text": "Remove renegotiation. Update format of signatures with context. Remove point format negotiation. draft-03", "comments": "Merging PR 100 added to the changelog at draft-04, but we're up to draft-06 now.", "new_text": "Remove renegotiation. Remove point format negotiation. draft-03"}
{"id": "q-en-tls13-spec-5dc113692ac2601afda3ee6d8c6490c8aa50bc0d7f414ad060d19ea48c540295", "old_text": "1.2. draft-07 - Integration of semi-ephemeral DH proposal. Add initial 0-RTT support", "comments": "Not sure if the deprecation cutoff will be at 224 bits (~112 bit security level) or 255 bits (~128 bit security level) yet, so I've just done the former for now. This drops the old curves under 224 bits, updates the extension example bytes, and deprecates MD5.\nAs noted on-list, SHA1 can't be as easily dealt with here, so that'll be a topic for a separate changeset.\nHere's a PR for this: Question: Are we dropping all curves <224 bits or <255 bits? (was consensus ever called on this specific number?)\nThere was not a specific WG consensus call on this point. I hope we don't need one and that we should keep the curves in that are used and trim the ones that aren't.\nGood. In that case, I'll slash everything other than secp256r1 (prime256v1 / NIST P-256), secp384r1 (NIST P-384), secp521r1 (NIST P-521), and sect571r1 (NIST B-571). (plus, of course, new CFRG curves yet to be added) PR updated. URL URL\nBased on the fact that apparently precisely zero servers actually use SHA-224 for signatures, can we drop that too while we're at it? (not an exaggeration; the scan cited in the last comment shows none)\nI would drop it. spt On Jun 17, 2015, at 20:09, Dave Garrett EMAIL wrote:\nOK. PR updated.\nNAME This can be closed now that PR has been merged.", "new_text": "1.2. draft-08 Remove support for weak and lesser used named curves. Remove support for MD5 and SHA-224 hashes with signatures. draft-07 Integration of semi-ephemeral DH proposal. Add initial 0-RTT support"}
{"id": "q-en-tls13-spec-5dc113692ac2601afda3ee6d8c6490c8aa50bc0d7f414ad060d19ea48c540295", "old_text": "digital signatures. The \"extension_data\" field of this extension contains a \"supported_signature_algorithms\" value. %%% Signature Algorithm Extension enum { none(0), md5(1), sha1(2), sha224(3), sha256(4), sha384(5), sha512(6), (255) } HashAlgorithm; Each SignatureAndHashAlgorithm value lists a single hash/signature pair that the client is willing to verify. The values are indicated", "comments": "Not sure if the deprecation cutoff will be at 224 bits (~112 bit security level) or 255 bits (~128 bit security level) yet, so I've just done the former for now. This drops the old curves under 224 bits, updates the extension example bytes, and deprecates MD5.\nAs noted on-list, SHA1 can't be as easily dealt with here, so that'll be a topic for a separate changeset.\nHere's a PR for this: Question: Are we dropping all curves <224 bits or <255 bits? (was consensus ever called on this specific number?)\nThere was not a specific WG consensus call on this point. I hope we don't need one and that we should keep the curves in that are used and trim the ones that aren't.\nGood. In that case, I'll slash everything other than secp256r1 (prime256v1 / NIST P-256), secp384r1 (NIST P-384), secp521r1 (NIST P-521), and sect571r1 (NIST B-571). (plus, of course, new CFRG curves yet to be added) PR updated. URL URL\nBased on the fact that apparently precisely zero servers actually use SHA-224 for signatures, can we drop that too while we're at it? (not an exaggeration; the scan cited in the last comment shows none)\nI would drop it. spt On Jun 17, 2015, at 20:09, Dave Garrett EMAIL wrote:\nOK. PR updated.\nNAME This can be closed now that PR has been merged.", "new_text": "digital signatures. The \"extension_data\" field of this extension contains a \"supported_signature_algorithms\" value. 
%%% Signature Algorithm Extension enum { none(0), md5_RESERVED(1), sha1(2), sha224_RESERVED(3), sha256(4), sha384(5), sha512(6), (255) } HashAlgorithm; Each SignatureAndHashAlgorithm value lists a single hash/signature pair that the client is willing to verify. The values are indicated"}
{"id": "q-en-tls13-spec-5dc113692ac2601afda3ee6d8c6490c8aa50bc0d7f414ad060d19ea48c540295", "old_text": "SHA-224, SHA-256, SHA-384, and SHA-512 SHS, respectively. The \"none\" value is provided for future extensibility, in case of a signature algorithm which does not require hashing before signing. This field indicates the signature algorithm that may be used.", "comments": "Not sure if the deprecation cutoff will be at 224 bits (~112 bit security level) or 255 bits (~128 bit security level) yet, so I've just done the former for now. This drops the old curves under 224 bits, updates the extension example bytes, and deprecates MD5.\nAs noted on-list, SHA1 can't be as easily dealt with here, so that'll be a topic for a separate changeset.\nHere's a PR for this: Question: Are we dropping all curves <224 bits or <255 bits? (was consensus ever called on this specific number?)\nThere was not a specific WG consensus call on this point. I hope we don't need one and that we should keep the curves in that are used and trim the ones that aren't.\nGood. In that case, I'll slash everything other than secp256r1 (prime256v1 / NIST P-256), secp384r1 (NIST P-384), secp521r1 (NIST P-521), and sect571r1 (NIST B-571). (plus, of course, new CFRG curves yet to be added) PR updated. URL URL\nBased on the fact that apparently precisely zero servers actually use SHA-224 for signatures, can we drop that too while we're at it? (not an exaggeration; the scan cited in the last comment shows none)\nI would drop it. spt On Jun 17, 2015, at 20:09, Dave Garrett EMAIL wrote:\nOK. PR updated.\nNAME This can be closed now that PR has been merged.", "new_text": "SHA-224, SHA-256, SHA-384, and SHA-512 SHS, respectively. The \"none\" value is provided for future extensibility, in case of a signature algorithm which does not require hashing before signing. The usage of MD5 and SHA-224 are deprecated. The md5_RESERVED and sha224_RESERVED values MUST NOT be offered or negotiated by any implementation. 
This field indicates the signature algorithm that may be used."}
{"id": "q-en-tls13-spec-5dc113692ac2601afda3ee6d8c6490c8aa50bc0d7f414ad060d19ea48c540295", "old_text": "appropriate rules. If the client supports only the default hash and signature algorithms (listed in this section), it MAY omit the signature_algorithms extension. If the client does not support the default algorithms, or supports other hash and signature algorithms (and it is willing to use them for verifying messages sent by the server, i.e., server certificates and server key share), it MUST send the signature_algorithms extension, listing the algorithms it is willing to accept. If the client does not send the signature_algorithms extension, the server MUST do the following: If the negotiated key exchange algorithm is one of (DHE_RSA,", "comments": "Not sure if the deprecation cutoff will be at 224 bits (~112 bit security level) or 255 bits (~128 bit security level) yet, so I've just done the former for now. This drops the old curves under 224 bits, updates the extension example bytes, and deprecates MD5.\nAs noted on-list, SHA1 can't be as easily dealt with here, so that'll be a topic for a separate changeset.\nHere's a PR for this: Question: Are we dropping all curves <224 bits or <255 bits? (was consensus ever called on this specific number?)\nThere was not a specific WG consensus call on this point. I hope we don't need one and that we should keep the curves in that are used and trim the ones that aren't.\nGood. In that case, I'll slash everything other than secp256r1 (prime256v1 / NIST P-256), secp384r1 (NIST P-384), secp521r1 (NIST P-521), and sect571r1 (NIST B-571). (plus, of course, new CFRG curves yet to be added) PR updated. URL URL\nBased on the fact that apparently precisely zero servers actually use SHA-224 for signatures, can we drop that too while we're at it? (not an exaggeration; the scan cited in the last comment shows none)\nI would drop it. spt On Jun 17, 2015, at 20:09, Dave Garrett EMAIL wrote:\nOK. PR updated.\nNAME This can be closed now that PR has been merged.", "new_text": "appropriate rules. If the client supports only the default hash and signature algorithms (listed in this section), it MAY omit the \"signature_algorithms\" extension. If the client does not support the default algorithms, or supports other hash and signature algorithms (and it is willing to use them for verifying messages sent by the server, i.e., server certificates and server key share), it MUST send the \"signature_algorithms\" extension, listing the algorithms it is willing to accept. If the client does not send the \"signature_algorithms\" extension, the server MUST do the following: If the negotiated key exchange algorithm is one of (DHE_RSA,"}
{"id": "q-en-tls13-spec-5dc113692ac2601afda3ee6d8c6490c8aa50bc0d7f414ad060d19ea48c540295", "old_text": "The \"extension_data\" field of this extension SHALL contain a \"NamedGroupList\" value: %%% Named Group Extension enum { // Elliptic Curve Groups. sect163k1 (1), sect163r1 (2), sect163r2 (3), sect193r1 (4), sect193r2 (5), sect233k1 (6), sect233r1 (7), sect239k1 (8), sect283k1 (9), sect283r1 (10), sect409k1 (11), sect409r1 (12), sect571k1 (13), sect571r1 (14), secp160k1 (15), secp160r1 (16), secp160r2 (17), secp192k1 (18), secp192r1 (19), secp224k1 (20), secp224r1 (21), secp256k1 (22), secp256r1 (23), secp384r1 (24), secp521r1 (25), Indicates support of the corresponding named curve The named curves defined here are those specified in SEC 2 [13]. Note that many of these curves are also recommended in ANSI X9.62 X962 and FIPS 186-2 DSS. Values 0xFE00 through 0xFEFF are reserved for private use. Values 0xFF01 and 0xFF02 were used in previous versions of TLS but MUST NOT be offered by TLS 1.3 implementations. [[OPEN ISSUE: Triage curve list.]] Indicates support of the corresponding finite field group, defined in Items in named_curve_list are ordered according to the client's preferences (favorite choice first). As an example, a client that only supports secp192r1 (aka NIST P-192; value 19 = 0x0013) and secp224r1 (aka NIST P-224; value 21 = 0x0015) and prefers to use secp192r1 would include a TLS extension consisting of the following octets. Note that the first two octets indicate the extension type (Supported Group Extension):", "comments": "Not sure if the deprecation cutoff will be at 224 bits (~112 bit security level) or 255 bits (~128 bit security level) yet, so I've just done the former for now. This drops the old curves under 224 bits, updates the extension example bytes, and deprecates MD5.\nAs noted on-list, SHA1 can't be as easily dealt with here, so that'll be a topic for a separate changeset.\nHere's a PR for this: Question: Are we dropping all curves <224 bits or <255 bits? (was consensus ever called on this specific number?)\nThere was not a specific WG consensus call on this point. I hope we don't need one and that we should keep the curves in that are used and trim the ones that aren't.\nGood. In that case, I'll slash everything other than secp256r1 (prime256v1 / NIST P-256), secp384r1 (NIST P-384), secp521r1 (NIST P-521), and sect571r1 (NIST B-571). (plus, of course, new CFRG curves yet to be added) PR updated. URL URL\nBased on the fact that apparently precisely zero servers actually use SHA-224 for signatures, can we drop that too while we're at it? (not an exaggeration; the scan cited in the last comment shows none)\nI would drop it. spt On Jun 17, 2015, at 20:09, Dave Garrett EMAIL wrote:\nOK. PR updated.\nNAME This can be closed now that PR has been merged.", "new_text": "The \"extension_data\" field of this extension SHALL contain a \"NamedGroupList\" value: %%% Named Group Extension enum { // Elliptic Curve Groups. obsolete_RESERVED (1..13), sect571r1 (14), obsolete_RESERVED (15..22), secp256r1 (23), secp384r1 (24), secp521r1 (25), Indicates support of the corresponding named curve. Note that some curves are also recommended in ANSI X9.62 X962 and FIPS 186-2 DSS. Values 0xFE00 through 0xFEFF are reserved for private use. Indicates support of the corresponding finite field group, defined in I-D.ietf-tls-negotiated-ff-dhe. Values 0x01FC through 0x01FF are reserved for private use. Values within \"obsolete_RESERVED\" ranges were used in previous versions of TLS and MUST NOT be offered or negotiated by TLS 1.3 implementations. Items in named_curve_list are ordered according to the client's preferences (favorite choice first). As an example, a client that only supports secp256r1 (aka NIST P-256; value 23 = 0x0017) and secp384r1 (aka NIST P-384; value 24 = 0x0018) and prefers to use secp256r1 would include a TLS extension consisting of the following octets. Note that the first two octets indicate the extension type (Supported Group Extension):"}
{"id": "q-en-tls13-spec-d6d12196a428903da664c896d7472a420d56ecffba6994a277cd4a4917338b4d", "old_text": "1.2. draft-09 The maximum permittion record expansion for AEAD has been reduced from 2048 to 256 octets. draft-08 Remove support for weak and lesser used named curves.", "comments": "NAME Quick PR for fix noted for PR plus a couple other small nits. Still on draft 08. Fix a typo. (\"permittion\" is not a word) Consistently use spaces around the \"+\" in \"2^14 + 256\". (some did; some didn't)", "new_text": "1.2. draft-08 Remove support for weak and lesser used named curves."}
{"id": "q-en-tls13-spec-d6d12196a428903da664c896d7472a420d56ecffba6994a277cd4a4917338b4d", "old_text": "Revise list of currently available AEAD cipher suites. draft-07 Integration of semi-ephemeral DH proposal.", "comments": "NAME Quick PR for fix noted for PR plus a couple other small nits. Still on draft 08. Fix a typo. (\"permittion\" is not a word) Consistently use spaces around the \"+\" in \"2^14 + 256\". (some did; some didn't)", "new_text": "Revise list of currently available AEAD cipher suites. Reduce maximum permitted record expansion for AEAD from 2048 to 256 octets. draft-07 Integration of semi-ephemeral DH proposal."}
{"id": "q-en-tls13-spec-d6d12196a428903da664c896d7472a420d56ecffba6994a277cd4a4917338b4d", "old_text": "generated. An AEAD cipher MUST NOT produce an expansion of greater than 256 bytes. An endpoint that receives a record that is larger than 2^14+256 octets MUST generate a fatal \"record_overflow\" alert. As a special case, we define the NULL_NULL AEAD cipher which is simply the identity operation and thus provides no security. This", "comments": "NAME Quick PR for fix noted for PR plus a couple other small nits. Still on draft 08. Fix a typo. (\"permittion\" is not a word) Consistently use spaces around the \"+\" in \"2^14 + 256\". (some did; some didn't)", "new_text": "generated. An AEAD cipher MUST NOT produce an expansion of greater than 256 bytes. An endpoint that receives a record that is larger than 2^14 + 256 octets MUST generate a fatal \"record_overflow\" alert. As a special case, we define the NULL_NULL AEAD cipher which is simply the identity operation and thus provides no security. This"}
{"id": "q-en-tls13-spec-d6d12196a428903da664c896d7472a420d56ecffba6994a277cd4a4917338b4d", "old_text": "A TLSCiphertext record was received that had a length more than 2^14+256 bytes, or a record decrypted to a TLSPlaintext record with more than 2^14 bytes. This alert is always fatal and should never be observed in communication between proper implementations (except when messages were corrupted in the network).", "comments": "NAME Quick PR for fix noted for PR plus a couple other small nits. Still on draft 08. Fix a typo. (\"permittion\" is not a word) Consistently use spaces around the \"+\" in \"2^14 + 256\". (some did; some didn't)", "new_text": "A TLSCiphertext record was received that had a length more than 2^14 + 256 bytes, or a record decrypted to a TLSPlaintext record with more than 2^14 bytes. This alert is always fatal and should never be observed in communication between proper implementations (except when messages were corrupted in the network)."}
{"id": "q-en-tls13-spec-40a39bbb49bbaf3b76a5962a6044b088b632e3eef5d2157ed4c936a923e31e72", "old_text": "When this message will be sent: If this message is sent, it MUST be sent immediately after the ServerHello message. This is the first message that is encrypted under keys derived from ES.", "comments": "I think that the decision was to make this mandatory, since present-but-empty is harder to screw up.\nSection 6.2 Figure 1 and others show messages as non-optional. Section 6.3.3 however states that: It seems that the message could indeed be omitted if the server doesn't need to respond with any extensions, but I may be wrong here. We should fix one of the two places though as they contradict each other.", "new_text": "When this message will be sent: The EncryptedExtensions message MUST be sent immediately after the ServerHello message. This is the first message that is encrypted under keys derived from ES."}
{"id": "q-en-tls13-spec-be3ca54660a83764bb26487f08384ef17303257b98b1a159e7be6bce6bd685f2", "old_text": "below). The client specifies the cryptographic configuration for the 0-RTT data using the \"configuration\", \"cipher_suite\", and \"extensions\" values. For configurations received in-band (in a previous TLS connection) the client MUST:", "comments": "The text & struct have different names for the config ID. Use the one in the struct.", "new_text": "below). The client specifies the cryptographic configuration for the 0-RTT data using the \"configuration_id\", \"cipher_suite\", and \"extensions\" values. For configurations received in-band (in a previous TLS connection) the client MUST:"}
{"id": "q-en-tls13-spec-66f4b50d220860203c0be4fc03b389db3ad8b607126cf91dcafc264044bced08", "old_text": "of the following octets. Note that the first two octets indicate the extension type (\"supported_groups\" extension): [[TODO: IANA Considerations.]] 6.3.2.3.", "comments": "You'll also need to update the last paragraph of the MTI Extensions section (currently section 8.2). Now that we're going to be down to only one client-only extension defined in this document, that whole paragraph can probably be cut in favor of a one-off note in the remaining extension's section (signature_algorithms will be the last).\nSo that the clients can learn about what groups a server supports. The idea here is to allow a client to offer P256 but then learn that a server supports 25519 for next time without the server having to reject.", "new_text": "of the following octets. Note that the first two octets indicate the extension type (\"supported_groups\" extension): As of TLS 1.3, servers are permitted to send the \"supported_groups\" extension to the client. If the server has a group it prefers to the ones in the \"key_share\" extension but is still willing to accept the ClientHello, it SHOULD send \"supported_groups\" to update the client's view of its preferences. Clients MUST NOT act upon any information found in \"supported_groups\" prior to successful completion of the handshake, but MAY use the information learned from a successfully completed handshake to change what groups they offer to a server in subsequent connections. [[TODO: IANA Considerations.]] 6.3.2.3."}
{"id": "q-en-tls13-spec-66f4b50d220860203c0be4fc03b389db3ad8b607126cf91dcafc264044bced08", "old_text": "lacking a \"server_name\" extension with a fatal \"missing_extension\" alert. Some of these extensions exist only for the client to provide additional data to the server in a backwards-compatible way and thus have no meaning when sent from a server. The client-only extensions defined in this document are: \"signature_algorithms\" & \"supported_groups\". Servers MUST NOT send these extensions. Clients receiving any of these extensions MUST respond with a fatal \"unsupported_extension\" alert and close the connection. 9.", "comments": "You'll also need to update the last paragraph of the MTI Extensions section (currently section 8.2). Now that we're going to be down to only one client-only extension defined in this document, that whole paragraph can probably be cut in favor of a one-off note in the remaining extension's section (signature_algorithms will be the last).\nSo that the clients can learn about what groups a server supports. The idea here is to allow a client to offer P256 but then learn that a server supports 25519 for next time without the server having to reject.", "new_text": "lacking a \"server_name\" extension with a fatal \"missing_extension\" alert. Servers MUST NOT send the \"signature_algorithms\" extension; if a client receives this extension it MUST respond with a fatal \"unsupported_extension\" alert and close the connection. 9."}
{"id": "q-en-tls13-spec-18a29b6796ca64895f77ceb03d190c22f40303d4f462c9027ed71e9cd7732376", "old_text": "The TLS handshake establishes one or more input secrets which are combined to create the actual working keying material, as detailed below. The key derivation process makes use of the following functions, based on HKDF RFC5869: Given a set of n InputSecrets, the final \"master secret\" is computed by iteratively invoking HKDF-Extract with InputSecret_1, InputSecret_2, etc. The initial secret is simply a string of 0s as long as the size of the Hash that is the basis for the HKDF. Concretely, for the present version of TLS 1.3, secrets are added in the following order:", "comments": "Some small tweaks and fixes for issue : Move HKDF-Extract citing to a sentence and bring HKDF-Extract with it. Change the Messages arg to HashValue, as the actual hashing is stated in the definition of Derive-Secret which is passing this function a hash, not messages. (fixes issue ) More concise defining of HkdfLabel. Use the word \"zeroes\" instead of \"0s\". Edited to add: Also, appended a commit to fix an extra trailing paren.\nThe other possibility is that a hash of the concatenation of hashes was to be the new HashValue. Simplest change is just to make the arg for what's currently used in the definition and passed in.\nNAME Rebased. Is this ok to merge now?\nIt never actually says that hash_value is H(messages).\nThis should be clear now.", "new_text": "The TLS handshake establishes one or more input secrets which are combined to create the actual working keying material, as detailed below. The key derivation process makes use of the HKDF-Extract and HKDF-Expand functions as defined for HKDF RFC5869, as well as the functions defined below: Given a set of n InputSecrets, the final \"master secret\" is computed by iteratively invoking HKDF-Extract with InputSecret_1, InputSecret_2, etc. The initial secret is simply a string of zeroes as long as the size of the Hash that is the basis for the HKDF. Concretely, for the present version of TLS 1.3, secrets are added in the following order:"}
{"id": "q-en-tls13-spec-64e4c49073b1d3917e575998ce1abfc91ed4ddc928f86d497f2eb9c3cb41dd59", "old_text": "The \"extension_data\" field of this extension contains a \"NamedGroupList\" value: %%% Named Group Extension Indicates support of the corresponding named curve. Note that some curves are also recommended in ANSI X9.62 X962 and FIPS 186-4", "comments": "We have a registry called \"supported groups\", and the extension is \"supported groups\"; but each element is called a \"named group\" and the data passed in the extension is a \"named group list\". be consistent in our terminology.", "new_text": "The \"extension_data\" field of this extension contains a \"NamedGroupList\" value: %%% Supported Groups Extension Indicates support of the corresponding named curve. Note that some curves are also recommended in ANSI X9.62 X962 and FIPS 186-4"}
{"id": "q-en-tls13-spec-e6c6b5677a5be0bcec815708980600eb4f479b99f6c4be3a835597fa31c76fa9", "old_text": "Upon receipt of a HelloRetryRequest, the client MUST verify that the extensions block is not empty and otherwise MUST abort the handshake with a \"decode_error\" alert. Clients SHOULD abort the handshake with an \"unexpected message\" if the HelloRetryRequest would not result in any change in the ClientHello or in response to any second HelloRetryRequest which was sent in the same connection (i.e., where the ClientHello was itself in response to a HelloRetryRequest). Otherwise, the client MUST process all extensions in the HelloRetryRequest and send a second updated ClientHello. The", "comments": "On Tuesday, October 11, 2016 08:14:32 pm Martin Thomson wrote: or missing_extension\nI think that missingextension would be OK, but we would hit the lower-level decodeerror first, since HRR is defined with:", "new_text": "Upon receipt of a HelloRetryRequest, the client MUST verify that the extensions block is not empty and otherwise MUST abort the handshake with a \"decode_error\" alert. Clients MUST abort the handshake with an \"illegal_parameter\" alert if the HelloRetryRequest would not result in any change in the ClientHello. If a client receives a second HelloRetryRequest in the same connection (i.e., where the ClientHello was itself in response to a HelloRetryRequest), it MUST abort the handshake with an \"unexpected_message\" alert. Otherwise, the client MUST process all extensions in the HelloRetryRequest and send a second updated ClientHello. The"}
{"id": "q-en-tls13-spec-dcc27ba01328c991a23c6d78b1c7b1b3826cde9ae3cc1b25dcdbea3d45194862", "old_text": "Fields and variables may be assigned a fixed value using \"=\", as in: 3.7.1. Defined structures may have variants based on some knowledge that is available within the environment. The selector must be an enumerated", "comments": "Variants is not a subsection of Constants; just have them all be subsections of same depth within Presentation Language URL", "new_text": "Fields and variables may be assigned a fixed value using \"=\", as in: 3.8. Defined structures may have variants based on some knowledge that is available within the environment. The selector must be an enumerated"}
{"id": "q-en-tls13-spec-dcc27ba01328c991a23c6d78b1c7b1b3826cde9ae3cc1b25dcdbea3d45194862", "old_text": "For example: 3.8. TLS defines two generic alerts (see alert-protocol) to use upon failure to parse a message. Peers which receive a message which", "comments": "Variants is not a subsection of Constants; just have them all be subsections of same depth within Presentation Language URL", "new_text": "For example: 3.9. TLS defines two generic alerts (see alert-protocol) to use upon failure to parse a message. Peers which receive a message which"}
{"id": "q-en-tls13-spec-9759903d1f7bb23b678beee9f8b153afcf98666dc1a4502ee1eea6100359b32d", "old_text": "The certificate and signing key to be used. A Handshake Context based on the hash of the handshake messages A base key to be used to compute a MAC key.", "comments": "The handshake context here is just the transcript of the handshake messages, not the hash of the transcript. The language makes it seem like the authentication messages are Hash(Hash(handshake messages) + Certificate), etc.", "new_text": "The certificate and signing key to be used. A Handshake Context based on the transcript of the handshake messages A base key to be used to compute a MAC key."}
{"id": "q-en-tls13-spec-fb3a5cb93a8e6f156524b597978bfe14b3f7417062f733edf2deb642bf8172e7", "old_text": "in the cookie, rather than requiring it to export the entire intermediate hash state (see cookie). In general, implementations can implement the transcript by keeping a running transcript hash value based on the negotiated hash. Note, however, that subsequent post-handshake authentications do not", "comments": "Move the list to the new transcript hash section. Note that EndOfEarlyData goes in the client Handshake Context.\nlgtm", "new_text": "in the cookie, rather than requiring it to export the entire intermediate hash state (see cookie). For concreteness, the transcript hash is always taken from the following sequence of handshake messages, starting at the first ClientHello and including only those messages that were sent: ClientHello, HelloRetryRequest, ClientHello, ServerHello, EncryptedExtensions, Server CertificateRequest, Server Certificate, Server CertificateVerify, Server Finished, EndOfEarlyData, Client Certificate, Client CertificateVerify, Client Finished. In general, implementations can implement the transcript by keeping a running transcript hash value based on the negotiated hash. Note, however, that subsequent post-handshake authentications do not"}
{"id": "q-en-tls13-spec-fb3a5cb93a8e6f156524b597978bfe14b3f7417062f733edf2deb642bf8172e7", "old_text": "Secret is called with four distinct transcripts; in a 1-RTT-only exchange with three distinct transcripts. The complete transcript passed to Derive-Secret is always taken from the following sequence of handshake messages, starting at the first ClientHello and including only those messages that were sent: ClientHello, HelloRetryRequest, ClientHello, ServerHello, EncryptedExtensions, Server CertificateRequest, Server Certificate, Server CertificateVerify, Server Finished, EndOfEarlyData, Client Certificate, Client CertificateVerify, Client Finished. If a given secret is not available, then the 0-value consisting of a string of Hash.length zero bytes is used. Note that this does not mean skipping rounds, so if PSK is not in use Early Secret will still", "comments": "Move the list to the new transcript hash section. Note that EndOfEarlyData goes in the client Handshake Context.\nlgtm", "new_text": "Secret is called with four distinct transcripts; in a 1-RTT-only exchange with three distinct transcripts. If a given secret is not available, then the 0-value consisting of a string of Hash.length zero bytes is used. Note that this does not mean skipping rounds, so if PSK is not in use Early Secret will still"}
{"id": "q-en-tls13-spec-86a1cfc8122dc7384978a0f4ffe93d67ce0221d599ea4f47020a14767d37be1c", "old_text": "Add \"post_handshake_auth\" extension to negotiate post-handshake authentication (*). draft-19 Hash context_value input to Exporters (*)", "comments": "Per mailing list discussion, this allows us to have every HKDF-Expand just have one hash block of info. NAME would love your feedback.\nShouldn't \"TLS 1.3, \" be shortened to \"tls13 \"? Without that, several of the labels would seem to blow blocks. Also, \"derived secret\" seems unchanged, and is too long to fit into a block, even with \"tls13 \" prefix.\nOops. Good catch. Script fail.\nNAME fixed?\nYeah, I'll add a note. I still have an issue for the ChangeLog so I will add this as well\nAlso, minimum label size looks to be still 10, presumably should be 7 with 6-byte prefix (as it 10 with 9-byte prefix).", "new_text": "Add \"post_handshake_auth\" extension to negotiate post-handshake authentication (*). Shorten labels for HKDF-Expand-Label so that we can fit within one compression block (*). draft-19 Hash context_value input to Exporters (*)"}
{"id": "q-en-tls13-spec-86a1cfc8122dc7384978a0f4ffe93d67ce0221d599ea4f47020a14767d37be1c", "old_text": "layer headers. Note that in some cases a zero-length HashValue (indicated by \"\") is passed to HKDF-Expand-Label. Given a set of n InputSecrets, the final \"master secret\" is computed by iteratively invoking HKDF-Extract with InputSecret_1, InputSecret_2, etc. The initial secret is simply a string of", "comments": "Per mailing list discussion, this allows us to have every HKDF-Expand just have one hash block of info. NAME would love your feedback.\nShouldn't \"TLS 1.3, \" be shortened to \"tls13 \"? Without that, several of the labels would seem to blow blocks. Also, \"derived secret\" seems unchanged, and is too long to fit into a block, even with \"tls13 \" prefix.\nOops. Good catch. Script fail.\nNAME fixed?\nYeah, I'll add a note. I still have an issue for the ChangeLog so I will add this as well\nAlso, minimum label size looks to be still 10, presumably should be 7 with 6-byte prefix (as it 10 with 9-byte prefix).", "new_text": "layer headers. Note that in some cases a zero-length HashValue (indicated by \"\") is passed to HKDF-Expand-Label. Note: with common hash functions, any label longer than 12 characters requires an additional iteration of the hash function to compute. The labels in this specification have all been chosen to fit within this limit. Given a set of n InputSecrets, the final \"master secret\" is computed by iteratively invoking HKDF-Extract with InputSecret_1, InputSecret_2, etc. The initial secret is simply a string of"}
{"id": "q-en-tls13-spec-86a1cfc8122dc7384978a0f4ffe93d67ce0221d599ea4f47020a14767d37be1c", "old_text": "mean skipping rounds, so if PSK is not in use Early Secret will still be HKDF-Extract(0, 0). For the computation of the binder_secret, the label is \"external psk binder key\" for external PSKs (those provisioned outside of TLS) and \"resumption psk binder key\" for resumption PSKs (those provisioned as the resumption master secret of a previous handshake). The different labels prevent the substitution of one type of PSK for the other. There are multiple potential Early Secret values depending on which PSK the server ultimately selects. The client will need to compute", "comments": "Per mailing list discussion, this allows us to have every HKDF-Expand just have one hash block of info. NAME would love your feedback.\nShouldn't \"TLS 1.3, \" be shortened to \"tls13 \"? Without that, several of the labels would seem to blow blocks. Also, \"derived secret\" seems unchanged, and is too long to fit into a block, even with \"tls13 \" prefix.\nOops. Good catch. Script fail.\nNAME fixed?\nYeah, I'll add a note. I still have an issue for the ChangeLog so I will add this as well\nAlso, minimum label size looks to be still 10, presumably should be 7 with 6-byte prefix (as it 10 with 9-byte prefix).", "new_text": "mean skipping rounds, so if PSK is not in use Early Secret will still be HKDF-Extract(0, 0). For the computation of the binder_secret, the label is \"external psk binder key\" for external PSKs (those provisioned outside of TLS) and \"res binder\" for resumption PSKs (those provisioned as the resumption master secret of a previous handshake). The different labels prevent the substitution of one type of PSK for the other. There are multiple potential Early Secret values depending on which PSK the server ultimately selects. The client will need to compute"}
{"id": "q-en-using-github-7fed6392648a0d6fc52616c1d3c946722132288b6b9eafd88627ac7d1a6a367c", "old_text": "Organizations are a way of forming groups of contributors on GitHub. Each Working Group SHOULD create a new organization for the working group. A working group organization SHOULD be named consistently so that it can be found. For instance, the name could be ietf- or ietf- -wg. A single organization SHOULD NOT be used for all IETF activity, or all activity within an area. Large organizations create too much", "comments": "I was tempted to remove the entire section on administrative policies, though left it as is. NAME what do you think?\nYeah, I figured it best to start small. Let me know if you need anything else before merging!\nNAME Interesting that the circle CI build ran on the merge, but not the PR. I didn't see any build failures reported before it merged, but I just got the email reporting a missed reference. Maybe there are some tweaks you need to apply over there as owner of the org.\nYeah, I'll fix it. Thanks for letting me know!\nI was thinking of being a little more aggressive about removing some of the text, with matching PRs for the other work, but this works.", "new_text": "Organizations are a way of forming groups of contributors on GitHub. Each Working Group SHOULD create a new organization for the working group. A working group organization SHOULD be named consistently so that it can be found. For instance, the name could be ietf- or ietf--wg. A single organization SHOULD NOT be used for all IETF activity, or all activity within an area. Large organizations create too much"}
{"id": "q-en-using-github-7fed6392648a0d6fc52616c1d3c946722132288b6b9eafd88627ac7d1a6a367c", "old_text": "to all repositories and ownership does not grant any other significant privileges. When Area Directors or Working Group Chairs change, teams MUST be updated to reflect the new membership status. When a Working Group is closed, the responsible Area Director is responsible for removing existing members from teams in the organization. Repositories MUST be updated along to indicate that they are no longer under development. 2.2. When an IETF Working Group is closed or when associated mailing lists are closed, mail archives and datatracker information from that work is backed up and accessible. The same applies to GitHub repositories. Any repositories including issues and discussion SHOULD be backed up on IETF resources. It is desirable for those to be accessible via the Working Group's datatracker page. For example, this might be via URLs listed in the More Info section on the Working Group Charter page. The IETF MAY decide to backup information associated with a Working Group's organization periodically. This decision can be made differently per Working Group in consultation with the responsible Area Director. 2.3. Each Working Group MAY set its own policy as to whether and how it uses GitHub. It is important that occasional participants in the WG and others accustomed to IETF tools be able to determine this and", "comments": "I was tempted to remove the entire section on administrative policies, though left it as is. NAME what do you think?\nYeah, I figured it best to start small. Let me know if you need anything else before merging!\nNAME Interesting that the circle CI build ran on the merge, but not the PR. I didn't see any build failures reported before it merged, but I just got the email reporting a missed reference. Maybe there are some tweaks you need to apply over there as owner of the org.\nYeah, I'll fix it. Thanks for letting me know!\nI was thinking of being a little more aggressive about removing some of the text, with matching PRs for the other work, but this works.", "new_text": "to all repositories and ownership does not grant any other significant privileges. Details about creating organizations adhering to these guidelines can be found in GIT-CONFIG. 2.2. Each Working Group MAY set its own policy as to whether and how it uses GitHub. It is important that occasional participants in the WG and others accustomed to IETF tools be able to determine this and"}
{"id": "q-en-using-github-7fed6392648a0d6fc52616c1d3c946722132288b6b9eafd88627ac7d1a6a367c", "old_text": "to those mailing lists should be given. An example of this is at https://datatracker.ietf.org/wg/quic/charter/. 2.3.1. One important policy is the IETF IPR policy (see RFC5378, RFC3979, and RFC4879). Part of this policy requires making contributors aware of the policy. The IETF Trust license file for open source repositories [5] MUST be included prominently in any document repository. Including this information in the CONTRIBUTING file is sufficient. In addition to the boilerplate text there can be a benefit to including pointers to other working group materials, the IETF datatracker, specific drafts, or websites. Adding such text is at the discretion of the Working Group Chairs. 3. A Working Group Chairs are responsible for determining how to best", "comments": "I was tempted to remove the entire section on administrative policies, though left it as is. NAME what do you think?\nYeah, I figured it best to start small. Let me know if you need anything else before merging!\nNAME Interesting that the circle CI build ran on the merge, but not the PR. I didn't see any build failures reported before it merged, but I just got the email reporting a missed reference. Maybe there are some tweaks you need to apply over there as owner of the org.\nYeah, I'll fix it. Thanks for letting me know!\nI was thinking of being a little more aggressive about removing some of the text, with matching PRs for the other work, but this works.", "new_text": "to those mailing lists should be given. An example of this is at https://datatracker.ietf.org/wg/quic/charter/. 3. A Working Group Chairs are responsible for determining how to best"}
{"id": "q-en-using-github-7fed6392648a0d6fc52616c1d3c946722132288b6b9eafd88627ac7d1a6a367c", "old_text": "An alternative is to rely on periodic email summaries of activity, such as those produced by a notification tool like github-notify-ml [6]. This tool has been used effectively in several working groups, though it requires server infrastructure. A working group that uses GitHub MAY provide either facility at the", "comments": "I was tempted to remove the entire section on administrative policies, though left it as is. NAME what do you think?\nYeah, I figured it best to start small. Let me know if you need anything else before merging!\nNAME Interesting that the circle CI build ran on the merge, but not the PR. I didn't see any build failures reported before it merged, but I just got the email reporting a missed reference. Maybe there are some tweaks you need to apply over there as owner of the org.\nYeah, I'll fix it. Thanks for letting me know!\nI was thinking of being a little more aggressive about removing some of the text, with matching PRs for the other work, but this works.", "new_text": "An alternative is to rely on periodic email summaries of activity, such as those produced by a notification tool like github-notify-ml [5]. This tool has been used effectively in several working groups, though it requires server infrastructure. A working group that uses GitHub MAY provide either facility at the"}
{"id": "q-en-using-github-7fed6392648a0d6fc52616c1d3c946722132288b6b9eafd88627ac7d1a6a367c", "old_text": "[4] https://about.gitlab.com/ [5] https://trustee.ietf.org/license-for-open-source- repositories.html [6] https://github.com/dontcallmedom/github-notify-ml ", "comments": "I was tempted to remove the entire section on administrative policies, though left it as is. NAME what do you think?\nYeah, I figured it best to start small. Let me know if you need anything else before merging!\nNAME Interesting that the circle CI build ran on the merge, but not the PR. I didn't see any build failures reported before it merged, but I just got the email reporting a missed reference. Maybe there are some tweaks you need to apply over there as owner of the org.\nYeah, I'll fix it. Thanks for letting me know!\nI was thinking of being a little more aggressive about removing some of the text, with matching PRs for the other work, but this works.", "new_text": "[4] https://about.gitlab.com/ [5] https://github.com/dontcallmedom/github-notify-ml "}
{"id": "q-en-version-negotiation-8e13cc1a46cf15f87cd09d8669cfd672cc2b51cfb2031372a3e80a8ea2b64f71", "old_text": "much later - in that scenario one endpoint might be aware of the compatibility document while the other may not. When a client creates a QUIC connection, its goal is to use an application layer protocol. Therefore, when considering which versions are compatible, clients will only consider versions that support one of the intended application layer protocols. For example, if the client's first flight advertises multiple Application Layer Protocol Negotiation (ALPN) ALPN tokens and multiple compatible versions, the server needs to ensure that the ALPN token that it selects can run over the QUIC version that it selects. 2.3. When the server can parse the client's first flight using the", "comments": "Can an ALPN label be used for multiple QUIC versions? I think that the text should say something like: NAME adds (and I wholly agree):\nI'm merging into this one where NAME said \"The point is that your ALPN offer when you offer QUICvX shouldn't be dependent on if you did VN or not.\"\nLGTM", "new_text": "much later - in that scenario one endpoint might be aware of the compatibility document while the other may not. 2.3. When the server can parse the client's first flight using the"}
{"id": "q-en-version-negotiation-8e13cc1a46cf15f87cd09d8669cfd672cc2b51cfb2031372a3e80a8ea2b64f71", "old_text": "6. In order to facilitate the deployment of future versions of QUIC, designers of future versions SHOULD attempt to design their new version such that commonly deployed versions are compatible with it.", "comments": "Can an ALPN label be used for multiple QUIC versions? I think that the text should say something like: NAME adds (and I wholly agree):\nI'm merging into this one where NAME said \"The point is that your ALPN offer when you offer QUICvX shouldn't be dependent on if you did VN or not.\"\nLGTM", "new_text": "6. When a client creates a QUIC connection, its goal is to use an application layer protocol. Therefore, when considering which versions are compatible, clients will only consider versions that support one of the intended application layer protocols. If the client's first flight advertises multiple Application Layer Protocol Negotiation (ALPN) ALPN tokens and multiple compatible versions, it is possible for some application layer protocols to not be able to run over some of the offered compatible versions. It is the server's responsibility to only select an ALPN token that can run over the compatible QUIC version that it selects. A given ALPN token MUST NOT be used with a new QUIC version different from the version for which the ALPN token was originally defined, unless all the following requirements are met: The new QUIC version supports the transport features required by the application protocol. The new QUIC version supports ALPN. The version of QUIC for which the ALPN token was originally defined is compatible with the new QUIC version. When incompatible version negotiation is in use, the second connection which is created in response to the received version negotiation packet MUST restart its application layer protocol negotiation process without taking into account the original version. 7. In order to facilitate the deployment of future versions of QUIC, designers of future versions SHOULD attempt to design their new version such that commonly deployed versions are compatible with it."}
{"id": "q-en-version-negotiation-8e13cc1a46cf15f87cd09d8669cfd672cc2b51cfb2031372a3e80a8ea2b64f71", "old_text": "widely deployed, this section discusses considerations for future versions to help with compatibility with QUIC version 1. 6.1. QUIC version 1 features Retry packets, which the server can send to validate the client's IP address before parsing the client's first", "comments": "Can an ALPN label be used for multiple QUIC versions? I think that the text should say something like: NAME adds (and I wholly agree):\nI'm merging into this one where NAME said \"The point is that your ALPN offer when you offer QUICvX shouldn't be dependent on if you did VN or not.\"\nLGTM", "new_text": "widely deployed, this section discusses considerations for future versions to help with compatibility with QUIC version 1. 7.1. QUIC version 1 features Retry packets, which the server can send to validate the client's IP address before parsing the client's first"}
{"id": "q-en-version-negotiation-8e13cc1a46cf15f87cd09d8669cfd672cc2b51cfb2031372a3e80a8ea2b64f71", "old_text": "handshake even though the Retry itself was sent using the original version. 6.2. QUIC version 1 uses TLS 1.3, which supports session resumption by sending session tickets in one connection that can be used in a later", "comments": "Can an ALPN label be used for multiple QUIC versions? I think that the text should say something like: NAME adds (and I wholly agree):\nI'm merging into this one where NAME said \"The point is that your ALPN offer when you offer QUICvX shouldn't be dependent on if you did VN or not.\"\nLGTM", "new_text": "handshake even though the Retry itself was sent using the original version. 7.2. QUIC version 1 uses TLS 1.3, which supports session resumption by sending session tickets in one connection that can be used in a later"}
{"id": "q-en-version-negotiation-8e13cc1a46cf15f87cd09d8669cfd672cc2b51cfb2031372a3e80a8ea2b64f71", "old_text": "of QUIC; i.e., require that clients not use them across multiple version and that servers validate this client requirement. 6.3. QUIC version 1 allows sending data from the client to the server during the handshake, by using 0-RTT packets. If a future document", "comments": "Can an ALPN label be used for multiple QUIC versions? I think that the text should say something like: NAME adds (and I wholly agree):\nI'm merging into this one where NAME said \"The point is that your ALPN offer when you offer QUICvX shouldn't be dependent on if you did VN or not.\"\nLGTM", "new_text": "of QUIC; i.e., require that clients not use them across multiple version and that servers validate this client requirement. 7.3. QUIC version 1 allows sending data from the client to the server during the handshake, by using 0-RTT packets. If a future document"}
{"id": "q-en-version-negotiation-8e13cc1a46cf15f87cd09d8669cfd672cc2b51cfb2031372a3e80a8ea2b64f71", "old_text": "packets. That document could specify that compatible version negotiation causes 0-RTT data to be rejected by the server. 7. Because QUIC version 1 was the only IETF Standards Track version of QUIC published before this document, it is handled specially as", "comments": "Can an ALPN label be used for multiple QUIC versions? I think that the text should say something like: NAME adds (and I wholly agree):\nI'm merging into this one where NAME said \"The point is that your ALPN offer when you offer QUICvX shouldn't be dependent on if you did VN or not.\"\nLGTM", "new_text": "packets. That document could specify that compatible version negotiation causes 0-RTT data to be rejected by the server. 8. Because QUIC version 1 was the only IETF Standards Track version of QUIC published before this document, it is handled specially as"}
{"id": "q-en-version-negotiation-8e13cc1a46cf15f87cd09d8669cfd672cc2b51cfb2031372a3e80a8ea2b64f71", "old_text": "version negotiation to negotiate versions other than QUIC version 1 will need to implement this draft. 8. The security of this version negotiation mechanism relies on the authenticity of the Version Information exchanged during the", "comments": "Can an ALPN label be used for multiple QUIC versions? I think that the text should say something like: NAME adds (and I wholly agree):\nI'm merging into this one where NAME said \"The point is that your ALPN offer when you offer QUICvX shouldn't be dependent on if you did VN or not.\"\nLGTM", "new_text": "version negotiation to negotiate versions other than QUIC version 1 will need to implement this draft. 9. The security of this version negotiation mechanism relies on the authenticity of the Version Information exchanged during the"}
{"id": "q-en-version-negotiation-8e13cc1a46cf15f87cd09d8669cfd672cc2b51cfb2031372a3e80a8ea2b64f71", "old_text": "possibility of cross-protocol attacks, but more analysis is still needed here. 9. 9.1. This document registers a new value in the \"QUIC Transport Parameters\" registry maintained at", "comments": "Can an ALPN label be used for multiple QUIC versions? I think that the text should say something like: NAME adds (and I wholly agree):\nI'm merging into this one where NAME said \"The point is that your ALPN offer when you offer QUICvX shouldn't be dependent on if you did VN or not.\"\nLGTM", "new_text": "possibility of cross-protocol attacks, but more analysis is still needed here. 10. 10.1. This document registers a new value in the \"QUIC Transport Parameters\" registry maintained at"}
{"id": "q-en-version-negotiation-8e13cc1a46cf15f87cd09d8669cfd672cc2b51cfb2031372a3e80a8ea2b64f71", "old_text": "of a codepoint in the 0-63 range to replace the provisional codepoint described above. 9.2. This document registers a new value in the \"QUIC Transport Error Codes\" registry maintained at < 10.2. This document registers a new value in the \"QUIC Transport Error Codes\" registry maintained at < Note that this separation across two connections is conceptual: it applies to normative requirements on QUIC connections, but does not require implementations to internally use two distinct connection objects. 2.5. When the client picks its original version, it will try to avoid"}
{"id": "q-en-version-negotiation-ef8166e63839cd09eeae85fc8b391d01d274afbd25c1b8bb157ab561871cc5a0", "old_text": "also ignores a Version Negotiation packet that contains incorrect connection ID fields; see QUIC-INVARIANTS. Upon receiving the Version Negotiation packet, the client will search for a version it supports in the list provided by the server. If it doesn't find one, it aborts the connection attempt. Otherwise, it selects a mutually supported version and sends a new first flight with that version - this version is now the negotiated version. The new first flight will allow the endpoints to establish a connection using the negotiated version. The handshake of the", "comments": "URL in this example, it would be very stupid of the client to select version C and start another negotiation after the sever has indicated D is what it prefers. The server can again select D as preferred and there exits a potential loop. However, it is possible. What are the rule we have in the specification that stops this kind of loop?\nI'm not sure that I understand this question. A client makes a choice about what version to attempt (and offer) when it creates a connection. It might learn that its versions are incompatible with a server (with a Version Negotiation packet) and try again, but that retry would be a separate action. But if the connection is successfully established, the negotiation is done.\nThere is at most one round of incompatible VN followed by one connection that may or may not use compatible VN to switch versions again. There is no potential for a loop here. NAME can you clarify what you meant?\nThe scenario i was thinking is a follows - Chosen = A, Other Versions = (A, B) -----------------> <----------------- Chosen = G, Other Versions = (G,H) ........ the draft only says the client's first flight contain This contains the list of versions that the client knows its first flight is compatible with (A,B). It does not force the client to list ALL the compatible versions, so it might not list all the compatible versions. I think, there is also no requirement on the server to list ALL the compatible versions. Now server sends back another list of compatible versions (D,C). This new list triggers the client to realise it knows about versions that are compatible (E,F), and sends it back to the client as it has not interest in D or C. That trigger the server to reply with new list of compatible version (G,H).. and this goes on. It is also possible that client made a mistake by not listing all the compatible version (well it does not have to) and as it is not interested in D or C, it wants to list (E,F) and keep in negotiating. I would like to understand the with current specification this kind of interaction is not possible and the negotiation should not go on like that.\nThe scenarios you describe aren't allowed: if the server performs compatible version negotiation it MUST select a that was included in the client's :\nyes, but servers are allowed to do incompatible version negotiation. in my 1st scenario, the server initiates incompatible version negotiation, the server selects D instead of C as chosen by client. The expectation here is that D will be the final negotiated version, and the version negotiation will end. But is there anything stopping the client to try to enforce it's chosen one (C) again? shall we have some to avoid this case? in the 2nd scenario, it is not possible if A, B, C,D, E,F all are compatible. if the server starts incompatible version negotiation with (C,D) and client does not find any supported version, it aborts the connection. I just realized - there is no normative text for aborting the connection attempt.. As the server expresses all the versions it support in the supported versions list, the client knows there is not point on continuing with another attempt with (E,F). Unless I am mistaken there is nothing now prohibiting the client to do so. It can continue to do so.\nThat doesn't make sense. The server initiates incompatible version negotiation by sending a VN packet, there's no notion of chosen version in the VN packet. There's no need for normative text when there's nothing else a client can do, are you saying the client could do something wrong as opposed to aborting due to lack of versions? The client will ignore any VN packet apart from the first one:\nright, my bad. I think I have drawn the picture wrong. Server send (D,C) and the client choses C but the server choses D. does this make sense? and the main issue goes away? sniped from my previous comment. >The expectation here is that D will be the final negotiated version, and the version negotiation will end. But is there anything stopping the client to try to enforce it's chosen one (C) again? shall we have some to avoid this case? yes . and should we try to prevent it? OK, I see the client ignores, but what does happen then ? does it abort the connection attempt ? Is it possible for the client to start another connection attempt with complete new original versions and a new set of other versions?\nZahed and I discussed this first scenario offline. In this scenario, the client indicated that it was happy with D by sending it in its Other Versions, so the client will be happy when D is negotiated even if it preferred C. If the client didn't like D, it could have removed D from Other Versions and then the connection would have negotiated C. Fair enough, this can be fixed with a small tweak - see below. Zahed and I discussed this second scenario offline as well. I wrote up to make the incompatible VN text more normative and that addresses Zahed's concern here. Now we have normative text to guarantee a failure in that scenario. If the client wants to start a separate connection attempt later, it will restart the algorithm from the top. We explicitly decided as a WG to avoid telling the client to remember information from previous connection attempts because VN packets don't carry an expiration date so caching them is fraught.\nLGTM", "new_text": "also ignores a Version Negotiation packet that contains incorrect connection ID fields; see QUIC-INVARIANTS. Upon receiving the Version Negotiation packet, the client SHALL search for a version it supports in the list provided by the server. If it doesn't find one, it SHALL abort the connection attempt. Otherwise, it SHALL select a mutually supported version and sends a new first flight with that version - this version is now the negotiated version. The new first flight will allow the endpoints to establish a connection using the negotiated version. The handshake of the"}
{"id": "q-en-version-negotiation-b2f5214e8941bdc237382d9784438eaacd2cfc4ba841837d72af65f9c5dc818e", "old_text": "attacker injected packets in order to influence the version negotiation process, see downgrade. 2.2. If A and B are two distinct versions of QUIC, A is said to be", "comments": "This was raised during the Reviewer: Qin Wu Review result: Has Issues I have reviewed this document as part of the Operational directorate's ongoing effort to review all IETF documents being processed by the IESG. These comments were written with the intent of improving the operational aspects of the IETF drafts. Comments that are not addressed in last call may be included in AD reviews during the IESG review. Document editors and WG chairs should treat these comments just like any other last call comments. This document describes a version negotiation mechanism that allows a client and server to select a mutually supported version. The mechanism can be further broke down into incompatible Version Negotiation and compatible Version Negotiation. I believe this document is on the right track and but worth further polished with the following editorial comments. Major Issues: No Minor Issues: 1. Section 4 introduce verson downgrade term [Qin]What the version downgrade is? I feel confused, does it mean when I currently use new version, I should not fall back to the old version? Can you explain how version downgrade is different from version incompatible? It will be great to give a definition of version downgrade in the first place or provide an example to explain it? 2. Section 9 said: \" The requirement that versions not be assumed compatible mitigates the possibility of cross-protocol attacks, but more analysis is still needed here.\" [Qin] It looks this paragraph is incomplete, what else analysis should we provide to make this paragraph complete? 3. Section 10 [Qin]: I am not clear why not request permanent allocation of a codepoint in the 0-63 range directly in this document. In my understanding, provisional codepoint can be requested without any dependent document? e.g., Each vendor can request IANA for Vendor specific range. Secondly, if replacing provisional codepoint with permanent allocation, requires make bis for this document, I don't think it is reasonable. Nits: 1. Section 2 said: \" For instance, if the client initiates a connection with version A and the server starts incompatible version negotiation and the client then initiates a new connection with .... \" [Qin]Can the client starts incompatible version negotiation? if not, I think we should call this out, e.g., using RFC2119 language. 2. Section 2, last paragraph [Qin] This paragraph is a little bit vague to me, how do we define not fully support? e.g., the server can interpret version A, B,C, but the server only fully implements version A,B, regarding version C, the server can read this version, but can not use this version, in other words, it is partially implemented, is my understanding correct? 3.Section 2.1 the new term \"offered version\" [Qin] Can you add one definition of offered versions in section 1.2, terminology section? To be honest, I still not clear how offered version is different from negotiated version? Also I suggest to add definitions of accepted version, deployed version as well in section 1.2? Too many terminologies, hard to track them in the first place. 4. Section 6 said: \" it is possible for some application layer protocols to not be able to run over some of the offered compatible versions. \" [Qin]I believe compatible versions is not pertaining to any application layer protocol, if yes, s/compatible versions/compatible QUIC versions 7.1 said: \"For example, that could be accomplished by having the server send a Retry packet in the original version first thereby validating the client's IP address before\" [Qin] Is Version first Version 1? If the answer is yes, please be consistent and keep using either of them instead of inter-exchange to use them. s/version first/version 1", "new_text": "attacker injected packets in order to influence the version negotiation process, see downgrade. Only servers can start incompatible version negotiation: clients MUST NOT send Version Negotiation packets and servers MUST ignore all received Version Negotiation packets. 2.2. If A and B are two distinct versions of QUIC, A is said to be"}
{"id": "q-en-version-negotiation-8d5fc5cb4c37d6f27d718c5cdb6a88a0e803bca9af1a4c9650ce32e84d2a2d52", "old_text": "differ after the handshake, then A is compatible with B and B is compatible with A. Version compatibility is not bijective: it is possible for version A to be compatible with version B and for B not to be compatible with A. This could happen for example if version B is a strict superset of version A.", "comments": "Bijective should be symmetric. We're describing a relation not a function. This is a very minor comment, will send a PR hopefully soon.\nIndeed, my math is pretty rusty :) Totally agree with you, and thanks for offering to write the PR!\nNAME any luck with the PR?\nI completely forgot about this!\nURL opened\nClosing now that has been merged.", "new_text": "differ after the handshake, then A is compatible with B and B is compatible with A. Version compatibility is not symmetric: it is possible for version A to be compatible with version B and for B not to be compatible with A. This could happen for example if version B is a strict superset of version A."}
{"id": "q-en-version-negotiation-33a1b353eba25e5d55b1cc725fde52a50b9a50c1faffa62a8568326850a76b30", "old_text": "6. Clients MUST ignore any received Version Negotiation packets that contain the version that they initially attempted. Once a client has reacted to a Version Negotiation packet, it MUST drop all subsequent Version Negotiation packets on that connection. Both endpoints MUST parse their peer's Version Information during the handshake. If parsing the Version Information failed (for example,", "comments": "The draft says explicitly that receiving Version Negotiation terminates a connection attempt. This isn't consistent with that. This probably needs to say something like this instead:\nThat sounds good to me, can you make this a PR?", "new_text": "6. Clients MUST ignore any received Version Negotiation packets that contain the version that they initially attempted. A client that makes a connection attempt based on information received from a Version Negotiation packet MUST ignore any Version Negotiation packets it receives in response to that connection attempt. Both endpoints MUST parse their peer's Version Information during the handshake. If parsing the Version Information failed (for example,"}
{"id": "q-en-webpush-protocol-41bb731bd11482c4d272e87319747e521edd79021844cf6f53b49c3874974c9b", "old_text": "topic is used to correlate push messages sent to the same subscription and does not convey any other semantics. The grammar for the Topic header field uses the \"token\" and \"quoted- string\" rules defined in RFC7230. Any double quotes from the \"quoted-string\" form are removed before comparing topics for equality. For use with this protocol, the Topic header field MUST be restricted to no more than 32 characters from the URL and filename safe Base 64", "comments": "NAME suggested this. It's a simplification that should reduce errors. It prevents topics like \"Hello world!\", but that's probably a good thing on balance.", "new_text": "topic is used to correlate push messages sent to the same subscription and does not convey any other semantics. The grammar for the Topic header field uses the \"token\" rule defined in RFC7230. For use with this protocol, the Topic header field MUST be restricted to no more than 32 characters from the URL and filename safe Base 64"}
{"id": "q-en-webpush-protocol-810f3358733a5d7d0740a45c08feb4dd256b9717025160b2f1d54c788efa9219", "old_text": "A device and software that is the recipient of push messages. Examples in this document use the RFC7230. Many of the exchanges can be completed using HTTP/1.1. Where HTTP/2 is necessary, the more verbose frame format from RFC7540 is used. Examples do not include specific methods for push message encryption or application server authentication because the protocol does not", "comments": "This draft attempts to address all of the issues identified during with the exception of Dealing with overload situations where further guidance has been requested from early implementers of webpush. Hi, This review is an early, i.e. pre IETF Last Call TSV-ART review. The TSV-ART reviews document that has potential transport implications or transport related subjects. Please treat it as an IETF Last call comment if you don't want to handled it during the AD review. Document: draft-ietf-webpush-protocol-09 Title: Generic Event Delivery Using HTTP Push Reviewer: M. Westerlund Review Date: Sept 27, 2016 Below are a number of comments based on my review. The one with transport related subject is the overload handling in comment 4. Section 3: So the Security Considerations do require use of HTTP over TLS. However, I do wonder if there would be a point to move that requirement up into the relevant connection sections, like 3. Especially as when one reads the confidentiality requirement in Section 4 for the message one wonders a bit why it is not explicitly required in section 3. Section 5.2: TTL = 1*DIGIT Shouldn't the upper range for allowed values be specified here. At least to ensure that one doesn't get interoperability issues. Section 7.2: To limit the number of stored push messages, the push service MAY either expire messages prior to their advertised Time-To-Live or reduce their advertised Time-To-Live. 
Do I understand this correctly, that theses options are push service side actions that is not notified at that point to the application server? Instead it will have to note that the message was early expired if it subscribes to delivery receipts? Dealing with overload situations Reviewing Section 7.1 and other parts I find the discussion of how overload situations of various kinds are dealt with missing some cases. So the general handling of subscriptions are covered in Section 7.1 and with a mitigation of redirecting to another server to handle the new subscription. What I lack discussion of are how any form of push back are handled when first the deliver of push service to UA link is overloaded. Is the assumption here that as the push service can store messages the delivery will catch up eventually, or the message expires? How does one handle a 0-RTT messages when one has a queue of messages to deliver, despite having a UA to Push service connection? The second case is how the push service server can push back on accepting new message when it is overloaded. To me it appears that the load on a push service can be very dynamic. Thus overload can really be periods. Thus, what push back to application servers is intended here? Just, do a 503 response on the request from the application server? I do note that RFC 7231 does not appear to have any guidance one how one sets the Retry-After header to spread the load of the retries. This is a known issue. And in this context with automated or external event triggered message creations the push back mechanism needs to be robust and reduce the issues, not make them worse. I would recommend that some recommendations are actually included here for handling overload from application server message submissions. Life of subscription in relation to transports I find myself a bit perplexed of if there are any relation between the subscription and the relation to an existing transport connection and outstanding request, i.e. 
the availability to receive messages over push. I would guess, that there are no strict need for terminating the subscription even if there are no receiver for an extended time. However, from an application perspective there could exists reason for why a subscription would not be terminated as long as the UA is connected, while any longer duration of no connections could motivate the subscription termination. I personally would like a bit more clarification on this, but seeing how these issues are usually handled around HTTP, I would guess the answer, it will be implementation specific? Still there appear to be assumptions around this, why it wouldn't matter that much, but that is only given that one follows these assumptions. Unclarities which requests that require HTTP/2. The specification is explicit that some request can be performed over HTTP/1.x. However, it is not particular clear which requests that require HTTP/2. I assume all the GET requests that will hang to enable PUSH promises needs to be HTTP/2? Maybe this should be clarified? Cheers Magnus Westerlund Services, Media and Network features, Ericsson Research EAB/TXM Ericsson AB | Phone +46 10 7148287 F\u00e4r\u00f6gatan 6 | Mobile +46 73 0949079 SE-164 80 Stockholm, Sweden | mailto: EMAIL", "new_text": "A device and software that is the recipient of push messages. Examples in this document use the RFC7230. Many of the exchanges can be completed using HTTP/1.1: message_subscription send replace acknowledge_message When an example depends on HTTP/2 server push, the more verbose frame format from RFC7540 is used: monitor-subscription monitor-set receive_receipt Examples do not include specific methods for push message encryption or application server authentication because the protocol does not"}
{"id": "q-en-webpush-protocol-810f3358733a5d7d0740a45c08feb4dd256b9717025160b2f1d54c788efa9219", "old_text": "3. The push service shares the same default port number (443/TCP) with HTTPS, but MAY also advertise the IANA allocated TCP System Port 1001 using HTTP alternative services RFC7838. While the default port (443) offers broad reachability characteristics, it is most often used for web browsing scenarios", "comments": "This draft attempts to address all of the issues identified during with the exception of Dealing with overload situations where further guidance has been requested from early implementers of webpush. Hi, This review is an early, i.e. pre IETF Last Call TSV-ART review. The TSV-ART reviews document that has potential transport implications or transport related subjects. Please treat it as an IETF Last call comment if you don't want to handled it during the AD review. Document: draft-ietf-webpush-protocol-09 Title: Generic Event Delivery Using HTTP Push Reviewer: M. Westerlund Review Date: Sept 27, 2016 Below are a number of comments based on my review. The one with transport related subject is the overload handling in comment 4. Section 3: So the Security Considerations do require use of HTTP over TLS. However, I do wonder if there would be a point to move that requirement up into the relevant connection sections, like 3. Especially as when one reads the confidentiality requirement in Section 4 for the message one wonders a bit why it is not explicitly required in section 3. Section 5.2: TTL = 1*DIGIT Shouldn't the upper range for allowed values be specified here. At least to ensure that one doesn't get interoperability issues. Section 7.2: To limit the number of stored push messages, the push service MAY either expire messages prior to their advertised Time-To-Live or reduce their advertised Time-To-Live. Do I understand this correctly, that theses options are push service side actions that is not notified at that point to the application server? 
Instead it will have to note that the message was early expired if it subscribes to delivery receipts? Dealing with overload situations Reviewing Section 7.1 and other parts I find the discussion of how overload situations of various kinds are dealt with missing some cases. So the general handling of subscriptions are covered in Section 7.1 and with a mitigation of redirecting to another server to handle the new subscription. What I lack discussion of are how any form of push back are handled when first the deliver of push service to UA link is overloaded. Is the assumption here that as the push service can store messages the delivery will catch up eventually, or the message expires? How does one handle a 0-RTT messages when one has a queue of messages to deliver, despite having a UA to Push service connection? The second case is how the push service server can push back on accepting new message when it is overloaded. To me it appears that the load on a push service can be very dynamic. Thus overload can really be periods. Thus, what push back to application servers is intended here? Just, do a 503 response on the request from the application server? I do note that RFC 7231 does not appear to have any guidance one how one sets the Retry-After header to spread the load of the retries. This is a known issue. And in this context with automated or external event triggered message creations the push back mechanism needs to be robust and reduce the issues, not make them worse. I would recommend that some recommendations are actually included here for handling overload from application server message submissions. Life of subscription in relation to transports I find myself a bit perplexed of if there are any relation between the subscription and the relation to an existing transport connection and outstanding request, i.e. the availability to receive messages over push. 
I would guess, that there are no strict need for terminating the subscription even if there are no receiver for an extended time. However, from an application perspective there could exists reason for why a subscription would not be terminated as long as the UA is connected, while any longer duration of no connections could motivate the subscription termination. I personally would like a bit more clarification on this, but seeing how these issues are usually handled around HTTP, I would guess the answer, it will be implementation specific? Still there appear to be assumptions around this, why it wouldn't matter that much, but that is only given that one follows these assumptions. Unclarities which requests that require HTTP/2. The specification is explicit that some request can be performed over HTTP/1.x. However, it is not particular clear which requests that require HTTP/2. I assume all the GET requests that will hang to enable PUSH promises needs to be HTTP/2? Maybe this should be clarified? Cheers Magnus Westerlund Services, Media and Network features, Ericsson Research EAB/TXM Ericsson AB | Phone +46 10 7148287 F\u00e4r\u00f6gatan 6 | Mobile +46 73 0949079 SE-164 80 Stockholm, Sweden | mailto: EMAIL", "new_text": "3. The push service MUST use HTTP over TLS [RFC2818] following the recommendations in [RFC7525]. The push service shares the same default port number (443/TCP) with HTTPS, but MAY also advertise the IANA allocated TCP System Port 1001 using HTTP alternative services RFC7838. While the default port (443) offers broad reachability characteristics, it is most often used for web browsing scenarios"}
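The record above has the push service advertise the alternate TCP port 1001 via HTTP alternative services (RFC 7838), i.e. an `Alt-Svc` header such as `h2=":1001"`. A deliberately simplified parser for that header shape is sketched below; it ignores parameters such as `ma=` and does not handle quoted commas, so it is illustrative only.

```python
import re

def parse_alt_svc(value: str) -> list[tuple[str, str]]:
    """Extract (protocol-id, alt-authority) pairs from a simple
    Alt-Svc header value like 'h2=":1001"'.

    Sketch only: real Alt-Svc values may carry parameters and quoting
    this parser does not cover.
    """
    entries = []
    for part in value.split(","):
        m = re.match(r'\s*([^=\s]+)="([^"]*)"', part)
        if m:
            entries.append((m.group(1), m.group(2)))
    return entries
```

A client could use such a result to prefer the dedicated webpush port while falling back to 443 for reachability.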
{"id": "q-en-webpush-protocol-810f3358733a5d7d0740a45c08feb4dd256b9717025160b2f1d54c788efa9219", "old_text": "to an entity body that is 4096 bytes or less in size. To limit the number of stored push messages, the push service MAY either expire messages prior to their advertised Time-To-Live or reduce their advertised Time-To-Live. 7.3.", "comments": "This draft attempts to address all of the issues identified during with the exception of Dealing with overload situations where further guidance has been requested from early implementers of webpush. Hi, This review is an early, i.e. pre IETF Last Call TSV-ART review. The TSV-ART reviews document that has potential transport implications or transport related subjects. Please treat it as an IETF Last call comment if you don't want to handled it during the AD review. Document: draft-ietf-webpush-protocol-09 Title: Generic Event Delivery Using HTTP Push Reviewer: M. Westerlund Review Date: Sept 27, 2016 Below are a number of comments based on my review. The one with transport related subject is the overload handling in comment 4. Section 3: So the Security Considerations do require use of HTTP over TLS. However, I do wonder if there would be a point to move that requirement up into the relevant connection sections, like 3. Especially as when one reads the confidentiality requirement in Section 4 for the message one wonders a bit why it is not explicitly required in section 3. Section 5.2: TTL = 1*DIGIT Shouldn't the upper range for allowed values be specified here. At least to ensure that one doesn't get interoperability issues. Section 7.2: To limit the number of stored push messages, the push service MAY either expire messages prior to their advertised Time-To-Live or reduce their advertised Time-To-Live. Do I understand this correctly, that theses options are push service side actions that is not notified at that point to the application server? 
Instead it will have to note that the message was early expired if it subscribes to delivery receipts? Dealing with overload situations Reviewing Section 7.1 and other parts I find the discussion of how overload situations of various kinds are dealt with missing some cases. So the general handling of subscriptions are covered in Section 7.1 and with a mitigation of redirecting to another server to handle the new subscription. What I lack discussion of are how any form of push back are handled when first the deliver of push service to UA link is overloaded. Is the assumption here that as the push service can store messages the delivery will catch up eventually, or the message expires? How does one handle a 0-RTT messages when one has a queue of messages to deliver, despite having a UA to Push service connection? The second case is how the push service server can push back on accepting new message when it is overloaded. To me it appears that the load on a push service can be very dynamic. Thus overload can really be periods. Thus, what push back to application servers is intended here? Just, do a 503 response on the request from the application server? I do note that RFC 7231 does not appear to have any guidance one how one sets the Retry-After header to spread the load of the retries. This is a known issue. And in this context with automated or external event triggered message creations the push back mechanism needs to be robust and reduce the issues, not make them worse. I would recommend that some recommendations are actually included here for handling overload from application server message submissions. Life of subscription in relation to transports I find myself a bit perplexed of if there are any relation between the subscription and the relation to an existing transport connection and outstanding request, i.e. the availability to receive messages over push. 
I would guess, that there are no strict need for terminating the subscription even if there are no receiver for an extended time. However, from an application perspective there could exists reason for why a subscription would not be terminated as long as the UA is connected, while any longer duration of no connections could motivate the subscription termination. I personally would like a bit more clarification on this, but seeing how these issues are usually handled around HTTP, I would guess the answer, it will be implementation specific? Still there appear to be assumptions around this, why it wouldn't matter that much, but that is only given that one follows these assumptions. Unclarities which requests that require HTTP/2. The specification is explicit that some request can be performed over HTTP/1.x. However, it is not particular clear which requests that require HTTP/2. I assume all the GET requests that will hang to enable PUSH promises needs to be HTTP/2? Maybe this should be clarified? Cheers Magnus Westerlund Services, Media and Network features, Ericsson Research EAB/TXM Ericsson AB | Phone +46 10 7148287 F\u00e4r\u00f6gatan 6 | Mobile +46 73 0949079 SE-164 80 Stockholm, Sweden | mailto: EMAIL", "new_text": "to an entity body that is 4096 bytes or less in size. To limit the number of stored push messages, the push service MAY respond with a shorter Time-To-Live than proposed by the application server in its request for push message delivery (ttl). Once a message has been accepted, the push service MAY later expire the message prior to its advertised Time-To-Live. If the application server requested a delivery receipt, the push service MUST return a failure response (acknowledge_message). 7.3."}
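The revised text in the record above lets the push service respond with a shorter Time-To-Live than the application server proposed, and later expire stored messages early. A one-line policy sketch is below; the 24-hour cap is an illustrative service policy, not a value from the draft.

```python
def effective_ttl(requested_ttl: int, max_ttl: int = 86400) -> int:
    """Cap the advertised Time-To-Live (seconds) at a service policy
    limit, clamping negatives to zero. max_ttl is an assumed example
    value; the draft leaves the limit to the push service.
    """
    return min(max(requested_ttl, 0), max_ttl)
```

The capped value is what the service would advertise back to the application server in its response.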
{"id": "q-en-webpush-protocol-9bf1e1716274ed23bee76afdf13d829f3c44608897629856a37b644faf290806", "old_text": "Generic Event Delivery Using HTTP Push draft-thomson-webpush-protocol-latest Abstract", "comments": "closed closed renamed draft-thomson-webpush-URL to draft-ietf-webpush-URL updated the HTTP/2 reference updated the alt-svc reference\nFrom Darshak Thakore feedback: Add Push-Receipt header to Section 5 example Reference in Section 5.1\nFrom Darshak Thakore review: Section 5 - In the example, Is the post operation to the right URL? Is that just an editorial oversight or is that the intended URL? Current example: POST /p/LBhhw0OohO-Wl4Oi971UGsB7sdQGUibx HTTP/1.1 Probably should be: POST /p/JzLQ3raZJfFBR0aqvOMsLrt54w4rJUsV HTTP/1.1 The current example is incorrect. It should match the urn:ietf:params:push link relation returned in the Subscription response - /p/JzLQ3raZJfFBR0aqvOMsLrt54w4rJUsV - and distributed to the application server.", "new_text": "Generic Event Delivery Using HTTP Push draft-ietf-webpush-protocol-latest Abstract"}
{"id": "q-en-webpush-protocol-9bf1e1716274ed23bee76afdf13d829f3c44608897629856a37b644faf290806", "old_text": "Examples in this document use the RFC7230. Many of the exchanges can be completed using HTTP/1.1, where HTTP/2 is necessary, the more verbose frame format from I-D.ietf-httpbis-http2 is used. 2.", "comments": "closed closed renamed draft-thomson-webpush-URL to draft-ietf-webpush-URL updated the HTTP/2 reference updated the alt-svc reference\nFrom Darshak Thakore feedback: Add Push-Receipt header to Section 5 example Reference in Section 5.1\nFrom Darshak Thakore review: Section 5 - In the example, Is the post operation to the right URL? Is that just an editorial oversight or is that the intended URL? Current example: POST /p/LBhhw0OohO-Wl4Oi971UGsB7sdQGUibx HTTP/1.1 Probably should be: POST /p/JzLQ3raZJfFBR0aqvOMsLrt54w4rJUsV HTTP/1.1 The current example is incorrect. It should match the urn:ietf:params:push link relation returned in the Subscription response - /p/JzLQ3raZJfFBR0aqvOMsLrt54w4rJUsV - and distributed to the application server.", "new_text": "Examples in this document use the RFC7230. Many of the exchanges can be completed using HTTP/1.1, where HTTP/2 is necessary, the more verbose frame format from RFC7540 is used. 2."}
{"id": "q-en-webpush-protocol-9bf1e1716274ed23bee76afdf13d829f3c44608897629856a37b644faf290806", "old_text": "A user agent requests the delivery of new push messages by making a GET request to a push message subscription resource. The push service does not respond to this request, it instead uses I-D.ietf- httpbis-http2 to send the contents of push messages as they are sent by application servers. Each push message is pushed in response to a synthesized GET request. The GET request is made to the push message resource that was created", "comments": "closed closed renamed draft-thomson-webpush-URL to draft-ietf-webpush-URL updated the HTTP/2 reference updated the alt-svc reference\nFrom Darshak Thakore feedback: Add Push-Receipt header to Section 5 example Reference in Section 5.1\nFrom Darshak Thakore review: Section 5 - In the example, Is the post operation to the right URL? Is that just an editorial oversight or is that the intended URL? Current example: POST /p/LBhhw0OohO-Wl4Oi971UGsB7sdQGUibx HTTP/1.1 Probably should be: POST /p/JzLQ3raZJfFBR0aqvOMsLrt54w4rJUsV HTTP/1.1 The current example is incorrect. It should match the urn:ietf:params:push link relation returned in the Subscription response - /p/JzLQ3raZJfFBR0aqvOMsLrt54w4rJUsV - and distributed to the application server.", "new_text": "A user agent requests the delivery of new push messages by making a GET request to a push message subscription resource. The push service does not respond to this request, it instead uses RFC7540 to send the contents of push messages as they are sent by application servers. Each push message is pushed in response to a synthesized GET request. The GET request is made to the push message resource that was created"}
{"id": "q-en-webpush-protocol-9bf1e1716274ed23bee76afdf13d829f3c44608897629856a37b644faf290806", "old_text": "The application server requests the delivery of receipts from the push server by making a HTTP GET request to the receipt subscription resource. The push service does not respond to this request, it instead uses I-D.ietf-httpbis-http2 to send push receipts when messages are acknowledged (acknowledge_message) by the user agent. Each receipt is pushed in response to a synthesized GET request. The GET request is made to the same push message resource that was", "comments": "closed closed renamed draft-thomson-webpush-URL to draft-ietf-webpush-URL updated the HTTP/2 reference updated the alt-svc reference\nFrom Darshak Thakore feedback: Add Push-Receipt header to Section 5 example Reference in Section 5.1\nFrom Darshak Thakore review: Section 5 - In the example, Is the post operation to the right URL? Is that just an editorial oversight or is that the intended URL? Current example: POST /p/LBhhw0OohO-Wl4Oi971UGsB7sdQGUibx HTTP/1.1 Probably should be: POST /p/JzLQ3raZJfFBR0aqvOMsLrt54w4rJUsV HTTP/1.1 The current example is incorrect. It should match the urn:ietf:params:push link relation returned in the Subscription response - /p/JzLQ3raZJfFBR0aqvOMsLrt54w4rJUsV - and distributed to the application server.", "new_text": "The application server requests the delivery of receipts from the push server by making a HTTP GET request to the receipt subscription resource. The push service does not respond to this request, it instead uses RFC7540 to send push receipts when messages are acknowledged (acknowledge_message) by the user agent. Each receipt is pushed in response to a synthesized GET request. The GET request is made to the same push message resource that was"}
{"id": "q-en-webrtc-http-ingest-protocol-1cf6cad8cd77ed2ba14cf27cd0770fcc766aa693ab86bed2c202c39090e78456", "old_text": "its adoption in the broadcasting/streaming industry is lagging behind. Currently, there is no standard protocol designed for ingesting media into a streaming service using WebRTC and so content providers still rely heavily on protocols like RTMP for doing so. These protocols are much older than WebRTC and by default lack some important security and resilience features provided by WebRTC with minimal overhead and additional latency. The media codecs used for ingestion in older protocols tend to be limited and not negotiated. WebRTC includes support for negotiation of codecs, potentially alleviating transcoding on the ingest node (which can introduce delay and degrade media quality). Server side transcoding that has traditionally been done to present multiple renditions in Adaptive Bit Rate Streaming (ABR) implementations can be replaced with Simulcast RFC8853 and SVC codecs that are well supported by WebRTC clients. In addition, WebRTC clients can adjust client-side encoding parameters based on RTCP feedback to maximize encoding quality. Encryption is mandatory in WebRTC, therefore secure transport of media is implicit. The IETF RTCWEB working group standardized JSEP (RFC8829), a mechanism used to control the setup, management, and teardown of a multimedia session. It also describes how to negotiate media flows", "comments": "The text says: \"The media codecs used in older protocols do not always match those being used in WebRTC, mandating transcoding on the ingest node,\" I don't understand this sentence. 
If I use an \"older protocol\", e.g, RTMP, why do I need to care about WebRTC codecs?\nThat refers if you use RTMP for ingest and WebRTC for viewing.\nrewriten the editorial in\nThe text says: \"These protocols are much older than webrtc and lack by default some important security and resilience features provided by webrtc with minimal delay.\" I assume you refer to RTMP, which is mentioned in the previous Section. Doesn't RTMPS solve some of the security issues? But in general, if you want to put up WHIP against RTMP, I think you need a little more text. Also, as WHIP is supposed to be an \"easy to implement\" protocol, I think it would be fair to also list some of the functions NOT supported compared to e.g., RTMP. As I have mentioned before, there is no command for start the ingestion - it is assumed to have begun when the connection has been established. Also, there is no PAUSE/RESUME.\nIt is more referring to RTSP which lacks DTLS support. no command for start the ingestion - it is assumed to have begun when the connection has been established. Also, there is no PAUSE/RESUME. RTMP protocol was intended as an RPC mechanism between the flash player and server. It allowed to create custom apps both on client and server side, which supported a wide range of methods using AMF. However, current \"RTMP protocol\" implemented for ingesting is a reversed engineered version of the implementation done by the FMS and it lacks support for PAUSE/RESUME and the publication is started from the start of the RTMP connection (same as whip).\nrewriten the editorial in", "new_text": "its adoption in the broadcasting/streaming industry is lagging behind. The IETF RTCWEB working group standardized JSEP (RFC8829), a mechanism used to control the setup, management, and teardown of a multimedia session. It also describes how to negotiate media flows"}
{"id": "q-en-webrtc-http-ingest-protocol-1cf6cad8cd77ed2ba14cf27cd0770fcc766aa693ab86bed2c202c39090e78456", "old_text": "based on RTP and may be the closest in terms of features to WebRTC, is not compatible with the SDP offer/answer model RFC3264. In the specific case of media ingestion into a streaming service, some assumptions can be made about the server-side which simplifies the WebRTC compliance burden, as detailed in WebRTC-gateway document draft-ietf-rtcweb-gateways. This document proposes a simple protocol for supporting WebRTC as media ingestion method which:", "comments": "The text says: \"The media codecs used in older protocols do not always match those being used in WebRTC, mandating transcoding on the ingest node,\" I don't understand this sentence. If I use an \"older protocol\", e.g, RTMP, why do I need to care about WebRTC codecs?\nThat refers if you use RTMP for ingest and WebRTC for viewing.\nrewriten the editorial in\nThe text says: \"These protocols are much older than webrtc and lack by default some important security and resilience features provided by webrtc with minimal delay.\" I assume you refer to RTMP, which is mentioned in the previous Section. Doesn't RTMPS solve some of the security issues? But in general, if you want to put up WHIP against RTMP, I think you need a little more text. Also, as WHIP is supposed to be an \"easy to implement\" protocol, I think it would be fair to also list some of the functions NOT supported compared to e.g., RTMP. As I have mentioned before, there is no command for start the ingestion - it is assumed to have begun when the connection has been established. Also, there is no PAUSE/RESUME.\nIt is more referring to RTSP which lacks DTLS support. no command for start the ingestion - it is assumed to have begun when the connection has been established. Also, there is no PAUSE/RESUME. RTMP protocol was intended as an RPC mechanism between the flash player and server. 
It allowed to create custom apps both on client and server side, which supported a wide range of methods using AMF. However, current \"RTMP protocol\" implemented for ingesting is a reversed engineered version of the implementation done by the FMS and it lacks support for PAUSE/RESUME and the publication is started from the start of the RTMP connection (same as whip).\nrewriten the editorial in", "new_text": "based on RTP and may be the closest in terms of features to WebRTC, is not compatible with the SDP offer/answer model RFC3264. So, currently, there is no standard protocol designed for ingesting media into a streaming service using WebRTC and so content providers still rely heavily on protocols like RTMP for doing so. Most of those protocols are not RTP based, requiring media protocol translation when doing egress via WebRTC. Avoiding this media protocol translation is desirable as there is no functional parity between those protocols and WebRTC and it increases the implementation complexity at the media server side. Also, the media codecs used in those protocols tend to be limited and not negotiated, not always matching the media codecs supported in WebRTC. This requires performing transcoding on the ingest node, which introduces delay, degrades media quality and increases the processing workload required on the server side. Server side transcoding that has traditionally been done to present multiple renditions in Adaptive Bit Rate Streaming (ABR) implementations can be replaced with Simulcast RFC8853 and SVC codecs that are well supported by WebRTC clients. In addition, WebRTC clients can adjust client-side encoding parameters based on RTCP feedback to maximize encoding quality. This document proposes a simple protocol for supporting WebRTC as media ingestion method which:"}
{"id": "q-en-webrtc-http-ingest-protocol-1cf6cad8cd77ed2ba14cf27cd0770fcc766aa693ab86bed2c202c39090e78456", "old_text": "4.2. In order to reduce the complexity of implementing WHIP in both clients and media servers, WHIP imposes the following restrictions regarding WebRTC usage:", "comments": "The text says: \"The media codecs used in older protocols do not always match those being used in WebRTC, mandating transcoding on the ingest node,\" I don't understand this sentence. If I use an \"older protocol\", e.g, RTMP, why do I need to care about WebRTC codecs?\nThat refers if you use RTMP for ingest and WebRTC for viewing.\nrewriten the editorial in\nThe text says: \"These protocols are much older than webrtc and lack by default some important security and resilience features provided by webrtc with minimal delay.\" I assume you refer to RTMP, which is mentioned in the previous Section. Doesn't RTMPS solve some of the security issues? But in general, if you want to put up WHIP against RTMP, I think you need a little more text. Also, as WHIP is supposed to be an \"easy to implement\" protocol, I think it would be fair to also list some of the functions NOT supported compared to e.g., RTMP. As I have mentioned before, there is no command for start the ingestion - it is assumed to have begun when the connection has been established. Also, there is no PAUSE/RESUME.\nIt is more referring to RTSP which lacks DTLS support. no command for start the ingestion - it is assumed to have begun when the connection has been established. Also, there is no PAUSE/RESUME. RTMP protocol was intended as an RPC mechanism between the flash player and server. It allowed to create custom apps both on client and server side, which supported a wide range of methods using AMF. 
However, current \"RTMP protocol\" implemented for ingesting is a reversed engineered version of the implementation done by the FMS and it lacks support for PAUSE/RESUME and the publication is started from the start of the RTMP connection (same as whip).\nrewriten the editorial in", "new_text": "4.2. In the specific case of media ingestion into a streaming service, some assumptions can be made about the server-side which simplifies the WebRTC compliance burden, as detailed in WebRTC-gateway document draft-ietf-rtcweb-gateways. In order to reduce the complexity of implementing WHIP in both clients and media servers, WHIP imposes the following restrictions regarding WebRTC usage:"}
{"id": "q-en-webrtc-http-ingest-protocol-e23bd94e323460c90cffe5aaf00f5d857e011651bca3d2e55b3b8f5dcf2e2a75", "old_text": "The initial offer by the WHIP client MAY be sent after the full ICE gathering is complete with the full list of ICE candidates, or it MAY only contain local candidates (or even an empty list of candidates). In order to simplify the protocol, there is no support for exchanging gathered trickle candidates from Media Server ICE candidates once the", "comments": "NAME ping?\nSection 4.1 \u201cThe WHIP client MAY perform trickle ICE or ICE restarts ] by sending an HTTP PATCH request\u2026\u201d Why is there a reference to RFC8863, which specifies ICE PAC, here? In general a reference to ICE PAC is useful, because this draft allows an offer without any candidates. So maybe that reference should go at the end of: \u201cThe initial offer by the WHIP client MAY be sent after the full ICE gathering is complete with the full list of ICE candidates, or it MAY only contain local candidates (or even an empty list of candidates).\u201d\nI think it was a typo.\nI created a PR to fix the tricke ice reference and add the pac one as suggested.", "new_text": "The initial offer by the WHIP client MAY be sent after the full ICE gathering is complete with the full list of ICE candidates, or it MAY only contain local candidates (or even an empty list of candidates) as per RFC8863. In order to simplify the protocol, there is no support for exchanging gathered trickle candidates from Media Server ICE candidates once the"}
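The record above allows the WHIP client's initial offer to carry an empty candidate list, which is why the ICE PAC reference (RFC 8863) belongs there. A trivial, illustrative check a Media Server might apply to decide whether an offer already carries candidates is sketched below; the helper name is hypothetical.

```python
def offer_has_candidates(sdp: str) -> bool:
    """Return True if the SDP offer contains any a=candidate line.
    Illustrative helper; a candidate-less offer is valid per RFC 8863
    and simply means candidates may arrive later (e.g. via trickle).
    """
    return any(
        line.startswith("a=candidate:")
        for line in sdp.replace("\r\n", "\n").split("\n")
    )
```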
{"id": "q-en-webrtc-http-ingest-protocol-e23bd94e323460c90cffe5aaf00f5d857e011651bca3d2e55b3b8f5dcf2e2a75", "old_text": "candidates of the Media Server. The Media Server MAY use ICE lite, while the WHIP client MUST implement full ICE. The WHIP client MAY perform trickle ICE or ICE restarts RFC8863 by sending an HTTP PATCH request to the WHIP resource URL with a body containing a SDP fragment with MIME type \"application/trickle-ice- sdpfrag\" as specified in RFC8840. When used for trickle ICE, the body of this PATCH message will contain the new ICE candidate; when used for ICE restarts, it will contain a new ICE ufrag/pwd pair. Trickle ICE and ICE restart support is OPTIONAL for a WHIP resource. If the WHIP resource supports either Trickle ICE or ICE restarts, but", "comments": "NAME ping?\nSection 4.1 \u201cThe WHIP client MAY perform trickle ICE or ICE restarts ] by sending an HTTP PATCH request\u2026\u201d Why is there a reference to RFC8863, which specifies ICE PAC, here? In general a reference to ICE PAC is useful, because this draft allows an offer without any candidates. So maybe that reference should go at the end of: \u201cThe initial offer by the WHIP client MAY be sent after the full ICE gathering is complete with the full list of ICE candidates, or it MAY only contain local candidates (or even an empty list of candidates).\u201d\nI think it was a typo.\nI created a PR to fix the tricke ice reference and add the pac one as suggested.", "new_text": "candidates of the Media Server. The Media Server MAY use ICE lite, while the WHIP client MUST implement full ICE. The WHIP client MAY perform trickle ICE or ICE restarts as per RFC8838 by sending an HTTP PATCH request to the WHIP resource URL with a body containing an SDP fragment with MIME type \"application/trickle-ice-sdpfrag\" as specified in RFC8840. 
When used for trickle ICE, the body of this PATCH message will contain the new ICE candidate; when used for ICE restarts, it will contain a new ICE ufrag/pwd pair. Trickle ICE and ICE restart support is OPTIONAL for a WHIP resource. If the WHIP resource supports either Trickle ICE or ICE restarts, but"}
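The record above describes the PATCH body for trickle ICE as an `application/trickle-ice-sdpfrag` fragment (RFC 8840) carrying the new candidate along with the current ufrag/pwd pair. A minimal builder for such a body is sketched below; the media line and `a=mid` value are illustrative placeholders, and real fragments may carry more attributes.

```python
def trickle_patch_body(ufrag: str, pwd: str, candidate: str) -> str:
    """Build an illustrative application/trickle-ice-sdpfrag body
    (RFC 8840 style) carrying one new ICE candidate.

    The m-line and mid below are example values, not mandated by WHIP.
    """
    return (
        f"a=ice-ufrag:{ufrag}\r\n"
        f"a=ice-pwd:{pwd}\r\n"
        "m=audio 9 UDP/TLS/RTP/SAVPF 0\r\n"
        "a=mid:0\r\n"
        f"a={candidate}\r\n"
    )
```

For an ICE restart, the same shape would be sent with a fresh ufrag/pwd pair and no candidate lines yet.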