// paritytech/litep2p — src/transport/manager/peer_state.rs
// https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/transport/manager/peer_state.rs
// Copyright 2024 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Peer state management.
use crate::{
transport::{
manager::{SupportedTransport, LOG_TARGET},
Endpoint,
},
types::ConnectionId,
PeerId,
};
use multiaddr::{Multiaddr, Protocol};
use std::collections::HashSet;
/// The peer state that tracks connections and dialing attempts.
///
/// # State Machine
///
/// ## [`PeerState::Disconnected`]
///
/// Initially, the peer is in the [`PeerState::Disconnected`] state without a
/// [`PeerState::Disconnected::dial_record`]. This means the peer is fully disconnected.
///
/// Next states:
/// - [`PeerState::Disconnected`] -> [`PeerState::Dialing`] (via [`PeerState::dial_single_address`])
/// - [`PeerState::Disconnected`] -> [`PeerState::Opening`] (via [`PeerState::dial_addresses`])
///
/// ## [`PeerState::Dialing`]
///
/// The peer transitions to the [`PeerState::Dialing`] state when a dialing attempt is
/// initiated. This happens either when the peer is dialed on a single address via
/// [`PeerState::dial_single_address`], or when a socket connection established in the
/// [`PeerState::Opening`] state is upgraded to the noise and yamux negotiation phase.
///
/// In the dialing state, the peer has been reached on the provided socket address and the
/// noise and yamux protocols are being negotiated.
///
/// Next states:
/// - [`PeerState::Dialing`] -> [`PeerState::Connected`] (via
/// [`PeerState::on_connection_established`])
/// - [`PeerState::Dialing`] -> [`PeerState::Disconnected`] (via [`PeerState::on_dial_failure`])
///
/// ## [`PeerState::Opening`]
///
/// The peer can transition to the [`PeerState::Opening`] state when a dialing attempt is
/// initiated on multiple addresses via [`PeerState::dial_addresses`]. This takes into account
/// the parallelism factor (8 maximum) of the dialing attempts.
///
/// The opening state holds information about which protocol is being dialed to properly report back
/// errors.
///
/// The opening state is similar to the dialing state; however, the peer is only reached on a
/// socket address and the noise and yamux protocols have not been negotiated yet. This state
/// transitions to [`PeerState::Dialing`] for the final part of the negotiation. Note that it
/// would be wasteful to negotiate the noise and yamux protocols on all addresses, since only
/// one connection is kept around.
///
/// Next states:
/// - [`PeerState::Opening`] -> [`PeerState::Dialing`] (via transport manager
/// `on_connection_opened`)
/// - [`PeerState::Opening`] -> [`PeerState::Disconnected`] (via transport manager
/// `on_connection_opened` if negotiation cannot be started or via `on_open_failure`)
/// - [`PeerState::Opening`] -> [`PeerState::Connected`] (via transport manager
/// `on_connection_established` when an incoming connection is accepted)
#[derive(Debug, Clone, PartialEq)]
pub enum PeerState {
/// `Litep2p` is connected to peer.
Connected {
/// The established record of the connection.
record: ConnectionRecord,
/// Secondary record; this can be either a dial record or an established connection.
///
/// While the local node was dialing a remote peer, the remote peer might have dialed
/// the local node and a connection was established successfully. The original dial
/// address is stored for later processing, when the dial attempt concludes as either
/// successful or failed.
secondary: Option<SecondaryOrDialing>,
},
/// Connection to peer is opening over one or more addresses.
Opening {
/// Address records used for dialing.
addresses: HashSet<Multiaddr>,
/// Connection ID.
connection_id: ConnectionId,
/// Active transports.
transports: HashSet<SupportedTransport>,
},
/// Peer is being dialed.
Dialing {
/// Address record.
dial_record: ConnectionRecord,
},
/// `Litep2p` is not connected to peer.
Disconnected {
/// Dial address, if it exists.
///
/// While the local node was dialing a remote peer, the remote peer might have dialed
/// the local node and a connection was established successfully. The connection might
/// have been closed before the dial concluded, which means that
/// [`crate::transport::manager::TransportManager`] must be prepared to handle the dial
/// failure even after the connection has been closed.
dial_record: Option<ConnectionRecord>,
},
}
/// The state of the secondary connection.
#[derive(Debug, Clone, PartialEq)]
pub enum SecondaryOrDialing {
/// The secondary connection is established.
Secondary(ConnectionRecord),
/// The primary connection is established, but the secondary connection is still dialing.
Dialing(ConnectionRecord),
}
/// Result of initiating a dial.
#[derive(Debug, Clone, PartialEq)]
pub enum StateDialResult {
/// The peer is already connected.
AlreadyConnected,
/// The dialing state is already in progress.
DialingInProgress,
/// The peer is disconnected, start dialing.
Ok,
}
impl PeerState {
/// Check if the peer can be dialed.
pub fn can_dial(&self) -> StateDialResult {
match self {
// The peer is already connected, no need to dial again.
Self::Connected { .. } => StateDialResult::AlreadyConnected,
// The dialing state is already in progress, an event will be emitted later.
Self::Dialing { .. }
| Self::Opening { .. }
| Self::Disconnected {
dial_record: Some(_),
} => StateDialResult::DialingInProgress,
Self::Disconnected { dial_record: None } => StateDialResult::Ok,
}
}
/// Dial the peer on a single address.
pub fn dial_single_address(&mut self, dial_record: ConnectionRecord) -> StateDialResult {
match self.can_dial() {
StateDialResult::Ok => {
*self = PeerState::Dialing { dial_record };
StateDialResult::Ok
}
reason => reason,
}
}
/// Dial the peer on multiple addresses.
pub fn dial_addresses(
&mut self,
connection_id: ConnectionId,
addresses: HashSet<Multiaddr>,
transports: HashSet<SupportedTransport>,
) -> StateDialResult {
match self.can_dial() {
StateDialResult::Ok => {
*self = PeerState::Opening {
addresses,
connection_id,
transports,
};
StateDialResult::Ok
}
reason => reason,
}
}
/// Handle dial failure.
///
/// # Transitions
///
/// - [`PeerState::Dialing`] (with record) -> [`PeerState::Disconnected`]
/// - [`PeerState::Connected`] (with dial record) -> [`PeerState::Connected`]
/// - [`PeerState::Disconnected`] (with dial record) -> [`PeerState::Disconnected`]
///
/// Returns `true` if the dial failure was handled.
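///
/// A hedged sketch of the expected behavior (illustrative record, mirroring the
/// transitions above):
///
/// ```ignore
/// let mut state = PeerState::Dialing { dial_record: record.clone() };
/// // A failure for an unrelated connection ID leaves the state untouched.
/// assert!(!state.on_dial_failure(ConnectionId::from(99)));
/// // A failure for the tracked connection ID clears the dial record.
/// assert!(state.on_dial_failure(record.connection_id));
/// assert_eq!(state, PeerState::Disconnected { dial_record: None });
/// ```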
pub fn on_dial_failure(&mut self, connection_id: ConnectionId) -> bool {
match self {
// Clear the dial record if the connection ID matches.
Self::Dialing { dial_record } =>
if dial_record.connection_id == connection_id {
*self = Self::Disconnected { dial_record: None };
return true;
},
Self::Connected {
record,
secondary: Some(SecondaryOrDialing::Dialing(dial_record)),
} =>
if dial_record.connection_id == connection_id {
*self = Self::Connected {
record: record.clone(),
secondary: None,
};
return true;
},
Self::Disconnected {
dial_record: Some(dial_record),
} =>
if dial_record.connection_id == connection_id {
*self = Self::Disconnected { dial_record: None };
return true;
},
Self::Opening { .. } | Self::Connected { .. } | Self::Disconnected { .. } =>
return false,
};
false
}
/// Returns `true` if the connection should be accepted by the transport manager.
pub fn on_connection_established(&mut self, connection: ConnectionRecord) -> bool {
match self {
// Transform the dial record into a secondary connection.
Self::Connected {
record,
secondary: Some(SecondaryOrDialing::Dialing(dial_record)),
} =>
if dial_record.connection_id == connection.connection_id {
*self = Self::Connected {
record: record.clone(),
secondary: Some(SecondaryOrDialing::Secondary(connection)),
};
return true;
},
// There's room for a secondary connection.
Self::Connected {
record,
secondary: None,
} => {
*self = Self::Connected {
record: record.clone(),
secondary: Some(SecondaryOrDialing::Secondary(connection)),
};
return true;
}
// Convert the dial record into a primary connection or preserve it.
Self::Dialing { dial_record }
| Self::Disconnected {
dial_record: Some(dial_record),
} =>
if dial_record.connection_id == connection.connection_id {
*self = Self::Connected {
record: connection.clone(),
secondary: None,
};
return true;
} else {
*self = Self::Connected {
record: connection,
secondary: Some(SecondaryOrDialing::Dialing(dial_record.clone())),
};
return true;
},
Self::Disconnected { dial_record: None } => {
*self = Self::Connected {
record: connection,
secondary: None,
};
return true;
}
// Accept the incoming connection.
Self::Opening {
addresses,
connection_id,
..
} => {
tracing::trace!(
target: LOG_TARGET,
?connection,
opening_addresses = ?addresses,
opening_connection_id = ?connection_id,
"Connection established while opening"
);
*self = Self::Connected {
record: connection,
secondary: None,
};
return true;
}
_ => {}
};
false
}
/// Returns `true` if the peer transitioned to [`PeerState::Disconnected`] as a result of the
/// close; `false` if the peer remains connected (e.g. a secondary connection was promoted)
/// or the connection ID was not tracked.
pub fn on_connection_closed(&mut self, connection_id: ConnectionId) -> bool {
match self {
Self::Connected { record, secondary } => {
// Primary connection closed.
if record.connection_id == connection_id {
match secondary {
// Promote secondary connection to primary.
Some(SecondaryOrDialing::Secondary(secondary)) => {
*self = Self::Connected {
record: secondary.clone(),
secondary: None,
};
}
// Preserve the dial record.
Some(SecondaryOrDialing::Dialing(dial_record)) => {
*self = Self::Disconnected {
dial_record: Some(dial_record.clone()),
};
return true;
}
None => {
*self = Self::Disconnected { dial_record: None };
return true;
}
};
return false;
}
match secondary {
// Secondary connection closed.
Some(SecondaryOrDialing::Secondary(secondary))
if secondary.connection_id == connection_id =>
{
*self = Self::Connected {
record: record.clone(),
secondary: None,
};
}
_ => (),
}
}
_ => (),
}
false
}
/// Returns `true` if the last transport failed to open.
pub fn on_open_failure(&mut self, transport: SupportedTransport) -> bool {
match self {
Self::Opening { transports, .. } => {
transports.remove(&transport);
if transports.is_empty() {
*self = Self::Disconnected { dial_record: None };
return true;
}
false
}
_ => false,
}
}
/// Returns `true` if the connection was opened.
pub fn on_connection_opened(&mut self, record: ConnectionRecord) -> bool {
match self {
Self::Opening {
addresses,
connection_id,
..
} => {
if record.connection_id != *connection_id || !addresses.contains(&record.address) {
tracing::warn!(
target: LOG_TARGET,
?record,
?addresses,
?connection_id,
"Connection opened for unknown address or connection ID",
);
}
*self = Self::Dialing {
dial_record: record.clone(),
};
true
}
_ => false,
}
}
}
/// The connection record keeps track of the connection ID and the address of the connection.
///
/// The connection ID is used to track the connection in the transport layer.
/// While the address is used to keep a healthy view of the network for dialing purposes.
///
/// # Note
///
/// The structure is used to keep track of:
///
/// - dialing state for outbound connections.
/// - established outbound connections via [`PeerState::Connected`].
/// - established inbound connections via `PeerContext::secondary_connection`.
#[derive(Debug, Clone, Hash, PartialEq)]
pub struct ConnectionRecord {
/// Address of the connection.
///
/// The address must contain the peer ID extension `/p2p/<peer_id>`.
pub address: Multiaddr,
/// Connection ID resulting from the dial.
pub connection_id: ConnectionId,
}
impl ConnectionRecord {
/// Construct a new connection record.
pub fn new(peer: PeerId, address: Multiaddr, connection_id: ConnectionId) -> Self {
Self {
address: Self::ensure_peer_id(peer, address),
connection_id,
}
}
/// Create a new connection record from the peer ID and the endpoint.
pub fn from_endpoint(peer: PeerId, endpoint: &Endpoint) -> Self {
Self {
address: Self::ensure_peer_id(peer, endpoint.address().clone()),
connection_id: endpoint.connection_id(),
}
}
/// Ensures the peer ID is present in the address.
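///
/// A hedged sketch of the intent (illustrative address; `peer` is assumed in scope):
///
/// ```ignore
/// let addr = ConnectionRecord::ensure_peer_id(peer, "/ip4/1.1.1.1/tcp/80".parse().unwrap());
/// // The address now ends with the `/p2p/<peer_id>` extension; a mismatching
/// // trailing peer ID would have been replaced with the provided one.
/// ```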
fn ensure_peer_id(peer: PeerId, mut address: Multiaddr) -> Multiaddr {
if let Some(Protocol::P2p(multihash)) = address.iter().last() {
if multihash != *peer.as_ref() {
tracing::warn!(
target: LOG_TARGET,
?address,
?peer,
"Peer ID mismatch in address",
);
address.pop();
address.push(Protocol::P2p(*peer.as_ref()));
}
address
} else {
address.with(Protocol::P2p(*peer.as_ref()))
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn state_can_dial() {
let state = PeerState::Disconnected { dial_record: None };
assert_eq!(state.can_dial(), StateDialResult::Ok);
let record = ConnectionRecord::new(
PeerId::random(),
"/ip4/1.1.1.1/tcp/80".parse().unwrap(),
ConnectionId::from(0),
);
let state = PeerState::Disconnected {
dial_record: Some(record.clone()),
};
assert_eq!(state.can_dial(), StateDialResult::DialingInProgress);
let state = PeerState::Dialing {
dial_record: record.clone(),
};
assert_eq!(state.can_dial(), StateDialResult::DialingInProgress);
let state = PeerState::Opening {
addresses: Default::default(),
connection_id: ConnectionId::from(0),
transports: Default::default(),
};
assert_eq!(state.can_dial(), StateDialResult::DialingInProgress);
let state = PeerState::Connected {
record,
secondary: None,
};
assert_eq!(state.can_dial(), StateDialResult::AlreadyConnected);
}
#[test]
fn state_dial_single_address() {
let record = ConnectionRecord::new(
PeerId::random(),
"/ip4/1.1.1.1/tcp/80".parse().unwrap(),
ConnectionId::from(0),
);
let mut state = PeerState::Disconnected { dial_record: None };
assert_eq!(
state.dial_single_address(record.clone()),
StateDialResult::Ok
);
assert_eq!(
state,
PeerState::Dialing {
dial_record: record
}
);
}
#[test]
fn state_dial_addresses() {
let mut state = PeerState::Disconnected { dial_record: None };
assert_eq!(
state.dial_addresses(
ConnectionId::from(0),
Default::default(),
Default::default()
),
StateDialResult::Ok
);
assert_eq!(
state,
PeerState::Opening {
addresses: Default::default(),
connection_id: ConnectionId::from(0),
transports: Default::default()
}
);
}
#[test]
fn check_dial_failure() {
let record = ConnectionRecord::new(
PeerId::random(),
"/ip4/1.1.1.1/tcp/80".parse().unwrap(),
ConnectionId::from(0),
);
// Check from the dialing state.
{
let mut state = PeerState::Dialing {
dial_record: record.clone(),
};
let previous_state = state.clone();
// Check with different connection ID.
state.on_dial_failure(ConnectionId::from(1));
assert_eq!(state, previous_state);
// Check with the same connection ID.
state.on_dial_failure(ConnectionId::from(0));
assert_eq!(state, PeerState::Disconnected { dial_record: None });
}
// Check from the connected state without dialing state.
{
let mut state = PeerState::Connected {
record: record.clone(),
secondary: None,
};
let previous_state = state.clone();
// Check with different connection ID.
state.on_dial_failure(ConnectionId::from(1));
assert_eq!(state, previous_state);
// Check with the same connection ID.
// The connection ID is checked against dialing records, not established connections.
state.on_dial_failure(ConnectionId::from(0));
assert_eq!(state, previous_state);
}
// Check from the connected state with dialing state.
{
let mut state = PeerState::Connected {
record: record.clone(),
secondary: Some(SecondaryOrDialing::Dialing(record.clone())),
};
let previous_state = state.clone();
// Check with different connection ID.
state.on_dial_failure(ConnectionId::from(1));
assert_eq!(state, previous_state);
// Check with the same connection ID.
// Dial record is cleared.
state.on_dial_failure(ConnectionId::from(0));
assert_eq!(
state,
PeerState::Connected {
record: record.clone(),
secondary: None,
}
);
}
// Check from the disconnected state.
{
let mut state = PeerState::Disconnected {
dial_record: Some(record.clone()),
};
let previous_state = state.clone();
// Check with different connection ID.
state.on_dial_failure(ConnectionId::from(1));
assert_eq!(state, previous_state);
// Check with the same connection ID.
state.on_dial_failure(ConnectionId::from(0));
assert_eq!(state, PeerState::Disconnected { dial_record: None });
}
}
#[test]
fn check_connection_established() {
let record = ConnectionRecord::new(
PeerId::random(),
"/ip4/1.1.1.1/tcp/80".parse().unwrap(),
ConnectionId::from(0),
);
let second_record = ConnectionRecord::new(
PeerId::random(),
"/ip4/1.1.1.1/tcp/80".parse().unwrap(),
ConnectionId::from(1),
);
// Check from the connected state without secondary connection.
{
let mut state = PeerState::Connected {
record: record.clone(),
secondary: None,
};
// Secondary is established.
assert!(state.on_connection_established(record.clone()));
assert_eq!(
state,
PeerState::Connected {
record: record.clone(),
secondary: Some(SecondaryOrDialing::Secondary(record.clone())),
}
);
}
// Check from the connected state with secondary dialing connection.
{
let mut state = PeerState::Connected {
record: record.clone(),
secondary: Some(SecondaryOrDialing::Dialing(record.clone())),
};
// Promote the secondary connection.
assert!(state.on_connection_established(record.clone()));
assert_eq!(
state,
PeerState::Connected {
record: record.clone(),
secondary: Some(SecondaryOrDialing::Secondary(record.clone())),
}
);
}
// Check from the connected state with secondary established connection.
{
let mut state = PeerState::Connected {
record: record.clone(),
secondary: Some(SecondaryOrDialing::Secondary(record.clone())),
};
// No state to advance.
assert!(!state.on_connection_established(record.clone()));
}
// Opening state is completely wiped out.
{
let mut state = PeerState::Opening {
addresses: Default::default(),
connection_id: ConnectionId::from(0),
transports: Default::default(),
};
assert!(state.on_connection_established(record.clone()));
assert_eq!(
state,
PeerState::Connected {
record: record.clone(),
secondary: None,
}
);
}
// Disconnected state with dial record.
{
let mut state = PeerState::Disconnected {
dial_record: Some(record.clone()),
};
assert!(state.on_connection_established(record.clone()));
assert_eq!(
state,
PeerState::Connected {
record: record.clone(),
secondary: None,
}
);
}
// Disconnected with different dial record.
{
let mut state = PeerState::Disconnected {
dial_record: Some(record.clone()),
};
assert!(state.on_connection_established(second_record.clone()));
assert_eq!(
state,
PeerState::Connected {
record: second_record.clone(),
secondary: Some(SecondaryOrDialing::Dialing(record.clone()))
}
);
}
// Disconnected without dial record.
{
let mut state = PeerState::Disconnected { dial_record: None };
assert!(state.on_connection_established(record.clone()));
assert_eq!(
state,
PeerState::Connected {
record: record.clone(),
secondary: None,
}
);
}
// Dialing with different dial record.
{
let mut state = PeerState::Dialing {
dial_record: record.clone(),
};
assert!(state.on_connection_established(second_record.clone()));
assert_eq!(
state,
PeerState::Connected {
record: second_record.clone(),
secondary: Some(SecondaryOrDialing::Dialing(record.clone()))
}
);
}
// Dialing with the same dial record.
{
let mut state = PeerState::Dialing {
dial_record: record.clone(),
};
assert!(state.on_connection_established(record.clone()));
assert_eq!(
state,
PeerState::Connected {
record: record.clone(),
secondary: None,
}
);
}
}
#[test]
fn check_connection_closed() {
let record = ConnectionRecord::new(
PeerId::random(),
"/ip4/1.1.1.1/tcp/80".parse().unwrap(),
ConnectionId::from(0),
);
let second_record = ConnectionRecord::new(
PeerId::random(),
"/ip4/1.1.1.1/tcp/80".parse().unwrap(),
ConnectionId::from(1),
);
// Primary is closed
{
let mut state = PeerState::Connected {
record: record.clone(),
secondary: None,
};
assert!(state.on_connection_closed(ConnectionId::from(0)));
assert_eq!(state, PeerState::Disconnected { dial_record: None });
}
// Primary is closed with secondary promoted
{
let mut state = PeerState::Connected {
record: record.clone(),
secondary: Some(SecondaryOrDialing::Secondary(second_record.clone())),
};
// Peer is still connected.
assert!(!state.on_connection_closed(ConnectionId::from(0)));
assert_eq!(
state,
PeerState::Connected {
record: second_record.clone(),
secondary: None,
}
);
}
// Primary is closed with secondary dial record
{
let mut state = PeerState::Connected {
record: record.clone(),
secondary: Some(SecondaryOrDialing::Dialing(second_record.clone())),
};
assert!(state.on_connection_closed(ConnectionId::from(0)));
assert_eq!(
state,
PeerState::Disconnected {
dial_record: Some(second_record.clone())
}
);
}
}
#[test]
fn check_open_failure() {
let mut state = PeerState::Opening {
addresses: Default::default(),
connection_id: ConnectionId::from(0),
transports: [SupportedTransport::Tcp].into_iter().collect(),
};
// This is the last transport.
assert!(state.on_open_failure(SupportedTransport::Tcp));
assert_eq!(state, PeerState::Disconnected { dial_record: None });
}
#[test]
fn check_open_connection() {
let record = ConnectionRecord::new(
PeerId::random(),
"/ip4/1.1.1.1/tcp/80".parse().unwrap(),
ConnectionId::from(0),
);
let mut state = PeerState::Opening {
addresses: Default::default(),
connection_id: ConnectionId::from(0),
transports: [SupportedTransport::Tcp].into_iter().collect(),
};
assert!(state.on_connection_opened(record.clone()));
}
#[test]
fn check_full_lifecycle() {
let record = ConnectionRecord::new(
PeerId::random(),
"/ip4/1.1.1.1/tcp/80".parse().unwrap(),
ConnectionId::from(0),
);
let mut state = PeerState::Disconnected { dial_record: None };
// Dialing.
assert_eq!(
state.dial_single_address(record.clone()),
StateDialResult::Ok
);
assert_eq!(
state,
PeerState::Dialing {
dial_record: record.clone()
}
);
// Dialing failed.
state.on_dial_failure(ConnectionId::from(0));
assert_eq!(state, PeerState::Disconnected { dial_record: None });
// Opening.
assert_eq!(
state.dial_addresses(
ConnectionId::from(0),
Default::default(),
Default::default()
),
StateDialResult::Ok
);
// Open failure.
assert!(state.on_open_failure(SupportedTransport::Tcp));
assert_eq!(state, PeerState::Disconnected { dial_record: None });
// Dial again.
assert_eq!(
state.dial_single_address(record.clone()),
StateDialResult::Ok
);
assert_eq!(
state,
PeerState::Dialing {
dial_record: record.clone()
}
);
// Successful dial.
assert!(state.on_connection_established(record.clone()));
assert_eq!(
state,
PeerState::Connected {
record: record.clone(),
secondary: None
}
);
}
}
// paritytech/litep2p — src/transport/manager/handle.rs
// https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/transport/manager/handle.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
addresses::PublicAddresses,
crypto::ed25519::Keypair,
error::ImmediateDialError,
executor::Executor,
protocol::ProtocolSet,
transport::manager::{
address::AddressRecord,
peer_state::StateDialResult,
types::{PeerContext, SupportedTransport},
ProtocolContext, TransportManagerEvent, LOG_TARGET,
},
types::{protocol::ProtocolName, ConnectionId},
BandwidthSink, PeerId,
};
use multiaddr::{Multiaddr, Protocol};
use parking_lot::RwLock;
use tokio::sync::mpsc::{error::TrySendError, Sender};
use std::{
collections::{HashMap, HashSet},
net::IpAddr,
sync::{
atomic::{AtomicUsize, Ordering},
Arc,
},
};
/// Inner commands sent from [`TransportManagerHandle`] to
/// [`crate::transport::manager::TransportManager`].
pub enum InnerTransportManagerCommand {
/// Dial peer.
DialPeer {
/// Remote peer ID.
peer: PeerId,
},
/// Dial address.
DialAddress {
/// Remote address.
address: Multiaddr,
},
/// Unregister protocol.
UnregisterProtocol {
/// Protocol name.
protocol: ProtocolName,
},
}
/// Handle for communicating with [`crate::transport::manager::TransportManager`].
#[derive(Debug, Clone)]
pub struct TransportManagerHandle {
/// Local peer ID.
local_peer_id: PeerId,
/// Peers.
peers: Arc<RwLock<HashMap<PeerId, PeerContext>>>,
/// TX channel for sending commands to [`crate::transport::manager::TransportManager`].
cmd_tx: Sender<InnerTransportManagerCommand>,
/// Supported transports.
supported_transport: HashSet<SupportedTransport>,
/// Local listen addresses.
listen_addresses: Arc<RwLock<HashSet<Multiaddr>>>,
/// Public addresses.
public_addresses: PublicAddresses,
}
impl TransportManagerHandle {
/// Create new [`TransportManagerHandle`].
pub fn new(
local_peer_id: PeerId,
peers: Arc<RwLock<HashMap<PeerId, PeerContext>>>,
cmd_tx: Sender<InnerTransportManagerCommand>,
supported_transport: HashSet<SupportedTransport>,
listen_addresses: Arc<RwLock<HashSet<Multiaddr>>>,
public_addresses: PublicAddresses,
) -> Self {
Self {
peers,
cmd_tx,
local_peer_id,
supported_transport,
listen_addresses,
public_addresses,
}
}
/// Register new transport to [`TransportManagerHandle`].
pub(crate) fn register_transport(&mut self, transport: SupportedTransport) {
self.supported_transport.insert(transport);
}
/// Get the list of public addresses of the node.
pub(crate) fn public_addresses(&self) -> PublicAddresses {
self.public_addresses.clone()
}
/// Get the list of listen addresses of the node.
pub(crate) fn listen_addresses(&self) -> HashSet<Multiaddr> {
self.listen_addresses.read().clone()
}
/// Check if `address` is supported by one of the enabled transports.
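///
/// For example (a hedged sketch, assuming only the TCP transport is enabled):
///
/// ```ignore
/// // `/ip4/1.1.1.1/tcp/80/p2p/<peer_id>` would be supported, while unspecified
/// // addresses such as `/ip4/0.0.0.0/...` are always rejected:
/// assert!(!handle.supported_transport(&"/ip4/0.0.0.0/tcp/80".parse().unwrap()));
/// ```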
pub fn supported_transport(&self, address: &Multiaddr) -> bool {
let mut iter = address.iter();
match iter.next() {
Some(Protocol::Ip4(address)) =>
if address.is_unspecified() {
return false;
},
Some(Protocol::Ip6(address)) =>
if address.is_unspecified() {
return false;
},
Some(Protocol::Dns(_)) | Some(Protocol::Dns4(_)) | Some(Protocol::Dns6(_)) => {}
_ => return false,
}
match iter.next() {
None => false,
Some(Protocol::Tcp(_)) => match (iter.next(), iter.next(), iter.next()) {
(Some(Protocol::P2p(_)), None, None) =>
self.supported_transport.contains(&SupportedTransport::Tcp),
#[cfg(feature = "websocket")]
(Some(Protocol::Ws(_)), Some(Protocol::P2p(_)), None) =>
self.supported_transport.contains(&SupportedTransport::WebSocket),
#[cfg(feature = "websocket")]
(Some(Protocol::Wss(_)), Some(Protocol::P2p(_)), None) =>
self.supported_transport.contains(&SupportedTransport::WebSocket),
_ => false,
},
#[cfg(feature = "quic")]
Some(Protocol::Udp(_)) => match (iter.next(), iter.next(), iter.next()) {
(Some(Protocol::QuicV1), Some(Protocol::P2p(_)), None) =>
self.supported_transport.contains(&SupportedTransport::Quic),
_ => false,
},
_ => false,
}
}
/// Helper to extract the IP address and port from a `Multiaddr`.
fn extract_ip_port(addr: &Multiaddr) -> Option<(IpAddr, u16)> {
let mut iter = addr.iter();
let ip = match iter.next() {
Some(Protocol::Ip4(i)) => IpAddr::V4(i),
Some(Protocol::Ip6(i)) => IpAddr::V6(i),
_ => return None,
};
let port = match iter.next() {
Some(Protocol::Tcp(p)) | Some(Protocol::Udp(p)) => p,
_ => return None,
};
Some((ip, port))
}
/// Check if the address is one of the local listen addresses (such addresses are discarded
/// by the caller).
fn is_local_address(&self, address: &Multiaddr) -> bool {
// Strip the peer ID if present.
let address: Multiaddr = address
.iter()
.take_while(|protocol| !std::matches!(protocol, Protocol::P2p(_)))
.collect();
// Check for the exact match.
let listen_addresses = self.listen_addresses.read();
if listen_addresses.contains(&address) {
return true;
}
let Some((ip, port)) = Self::extract_ip_port(&address) else {
return false;
};
for listen_address in listen_addresses.iter() {
let Some((listen_ip, listen_port)) = Self::extract_ip_port(listen_address) else {
continue;
};
if port == listen_port {
// Exact IP match.
if listen_ip == ip {
return true;
}
// Check if the listener is binding to any (0.0.0.0) interface
// and the incoming is a loopback address.
if listen_ip.is_unspecified() && ip.is_loopback() {
return true;
}
// Check for ipv4/ipv6 loopback equivalence.
if listen_ip.is_loopback() && ip.is_loopback() {
return true;
}
}
}
false
}
/// Add one or more known addresses for peer.
///
/// If the peer doesn't exist, it will be added to the known peers.
///
/// Returns the number of added addresses after non-supported transports were filtered out.
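///
/// A hedged usage sketch (illustrative address; unsupported and local addresses are
/// silently filtered out):
///
/// ```ignore
/// let added = handle.add_known_address(
///     &peer,
///     std::iter::once("/ip4/1.1.1.1/tcp/80".parse().unwrap()),
/// );
/// // `added` counts the addresses that survived filtering; the peer ID
/// // extension is appended to each address when missing.
/// ```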
pub fn add_known_address(
&mut self,
peer: &PeerId,
addresses: impl Iterator<Item = Multiaddr>,
) -> usize {
let mut peer_addresses = HashSet::new();
for address in addresses {
// No supported transport is configured that can dial this address.
if !self.supported_transport(&address) {
continue;
}
if self.is_local_address(&address) {
continue;
}
// Check the peer ID if present.
if let Some(Protocol::P2p(multihash)) = address.iter().last() {
// This can correspond to the provided peer ID or to a different one.
if multihash != *peer.as_ref() {
tracing::debug!(
target: LOG_TARGET,
?peer,
?address,
"Refusing to add known address that corresponds to a different peer ID",
);
continue;
}
peer_addresses.insert(address);
} else {
// Add the provided peer ID to the address.
let address = address.with(Protocol::P2p(multihash::Multihash::from(*peer)));
peer_addresses.insert(address);
}
}
let num_added = peer_addresses.len();
tracing::trace!(
target: LOG_TARGET,
?peer,
?peer_addresses,
"add known addresses",
);
let mut peers = self.peers.write();
let entry = peers.entry(*peer).or_default();
// All addresses should be valid at this point, since the peer ID was either added or
// double checked.
entry
.addresses
.extend(peer_addresses.into_iter().filter_map(AddressRecord::from_multiaddr));
num_added
}
/// Dial peer using `PeerId`.
///
/// Returns an error if the peer is unknown or the peer is already connected.
pub fn dial(&self, peer: &PeerId) -> Result<(), ImmediateDialError> {
if peer == &self.local_peer_id {
return Err(ImmediateDialError::TriedToDialSelf);
}
{
let peers = self.peers.read();
let Some(PeerContext { state, addresses }) = peers.get(peer) else {
return Err(ImmediateDialError::NoAddressAvailable);
};
match state.can_dial() {
StateDialResult::AlreadyConnected =>
return Err(ImmediateDialError::AlreadyConnected),
StateDialResult::DialingInProgress => return Ok(()),
StateDialResult::Ok => {}
};
// Check if we have enough addresses to dial.
if addresses.is_empty() {
return Err(ImmediateDialError::NoAddressAvailable);
}
}
self.cmd_tx
.try_send(InnerTransportManagerCommand::DialPeer { peer: *peer })
.map_err(|error| match error {
TrySendError::Full(_) => ImmediateDialError::ChannelClogged,
TrySendError::Closed(_) => ImmediateDialError::TaskClosed,
})
}
/// Dial peer using `Multiaddr`.
///
    /// Returns an error if the address is not valid.
pub fn dial_address(&self, address: Multiaddr) -> Result<(), ImmediateDialError> {
if !address.iter().any(|protocol| std::matches!(protocol, Protocol::P2p(_))) {
return Err(ImmediateDialError::PeerIdMissing);
}
self.cmd_tx
.try_send(InnerTransportManagerCommand::DialAddress { address })
.map_err(|error| match error {
TrySendError::Full(_) => ImmediateDialError::ChannelClogged,
TrySendError::Closed(_) => ImmediateDialError::TaskClosed,
})
}
/// Dynamically unregister a protocol.
///
    /// This must be called when a protocol is no longer needed (e.g. the user dropped the
    /// protocol handle).
pub fn unregister_protocol(&self, protocol: ProtocolName) {
tracing::info!(
target: LOG_TARGET,
?protocol,
"Unregistering user protocol on handle drop"
);
if let Err(err) = self
.cmd_tx
.try_send(InnerTransportManagerCommand::UnregisterProtocol { protocol })
{
tracing::error!(
target: LOG_TARGET,
?err,
"Failed to unregister protocol"
);
}
}
}
pub struct TransportHandle {
pub keypair: Keypair,
pub tx: Sender<TransportManagerEvent>,
pub protocols: HashMap<ProtocolName, ProtocolContext>,
pub next_connection_id: Arc<AtomicUsize>,
pub next_substream_id: Arc<AtomicUsize>,
pub bandwidth_sink: BandwidthSink,
pub executor: Arc<dyn Executor>,
}
impl TransportHandle {
pub fn protocol_set(&self, connection_id: ConnectionId) -> ProtocolSet {
ProtocolSet::new(
connection_id,
self.tx.clone(),
self.next_substream_id.clone(),
self.protocols.clone(),
)
}
/// Get next connection ID.
pub fn next_connection_id(&mut self) -> ConnectionId {
let connection_id = self.next_connection_id.fetch_add(1usize, Ordering::Relaxed);
ConnectionId::from(connection_id)
}
}
#[cfg(test)]
mod tests {
use crate::transport::manager::{
address::AddressStore,
peer_state::{ConnectionRecord, PeerState},
};
use super::*;
use multihash::Multihash;
use parking_lot::lock_api::RwLock;
use tokio::sync::mpsc::{channel, Receiver};
fn make_transport_manager_handle() -> (
TransportManagerHandle,
Receiver<InnerTransportManagerCommand>,
) {
let (cmd_tx, cmd_rx) = channel(64);
let local_peer_id = PeerId::random();
(
TransportManagerHandle {
local_peer_id,
cmd_tx,
peers: Default::default(),
supported_transport: HashSet::new(),
listen_addresses: Default::default(),
public_addresses: PublicAddresses::new(local_peer_id),
},
cmd_rx,
)
}
#[tokio::test]
async fn tcp_supported() {
let (mut handle, _rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::Tcp);
let address =
"/dns4/google.com/tcp/24928/p2p/12D3KooWKrUnV42yDR7G6DewmgHtFaVCJWLjQRi2G9t5eJD3BvTy"
.parse()
.unwrap();
assert!(handle.supported_transport(&address));
}
#[tokio::test]
async fn tcp_unsupported() {
let (handle, _rx) = make_transport_manager_handle();
let address =
"/dns4/google.com/tcp/24928/p2p/12D3KooWKrUnV42yDR7G6DewmgHtFaVCJWLjQRi2G9t5eJD3BvTy"
.parse()
.unwrap();
assert!(!handle.supported_transport(&address));
}
#[tokio::test]
async fn tcp_non_terminal_unsupported() {
let (mut handle, _rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::Tcp);
let address =
"/dns4/google.com/tcp/24928/p2p/12D3KooWKrUnV42yDR7G6DewmgHtFaVCJWLjQRi2G9t5eJD3BvTy/p2p-circuit"
.parse()
.unwrap();
assert!(!handle.supported_transport(&address));
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn websocket_supported() {
let (mut handle, _rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::WebSocket);
let address =
"/dns4/google.com/tcp/24928/ws/p2p/12D3KooWKrUnV42yDR7G6DewmgHtFaVCJWLjQRi2G9t5eJD3BvTy"
.parse()
.unwrap();
assert!(handle.supported_transport(&address));
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn websocket_unsupported() {
let (handle, _rx) = make_transport_manager_handle();
let address =
"/dns4/google.com/tcp/24928/ws/p2p/12D3KooWKrUnV42yDR7G6DewmgHtFaVCJWLjQRi2G9t5eJD3BvTy"
.parse()
.unwrap();
assert!(!handle.supported_transport(&address));
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn websocket_non_terminal_unsupported() {
let (mut handle, _rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::WebSocket);
let address =
"/dns4/google.com/tcp/24928/ws/p2p/12D3KooWKrUnV42yDR7G6DewmgHtFaVCJWLjQRi2G9t5eJD3BvTy/p2p-circuit"
.parse()
.unwrap();
assert!(!handle.supported_transport(&address));
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn wss_supported() {
let (mut handle, _rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::WebSocket);
let address =
"/dns4/google.com/tcp/24928/wss/p2p/12D3KooWKrUnV42yDR7G6DewmgHtFaVCJWLjQRi2G9t5eJD3BvTy"
.parse()
.unwrap();
assert!(handle.supported_transport(&address));
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn wss_unsupported() {
let (handle, _rx) = make_transport_manager_handle();
let address =
"/dns4/google.com/tcp/24928/wss/p2p/12D3KooWKrUnV42yDR7G6DewmgHtFaVCJWLjQRi2G9t5eJD3BvTy"
.parse()
.unwrap();
assert!(!handle.supported_transport(&address));
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn wss_non_terminal_unsupported() {
let (mut handle, _rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::WebSocket);
let address =
"/dns4/google.com/tcp/24928/wss/p2p/12D3KooWKrUnV42yDR7G6DewmgHtFaVCJWLjQRi2G9t5eJD3BvTy/p2p-circuit"
.parse()
.unwrap();
assert!(!handle.supported_transport(&address));
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn quic_supported() {
let (mut handle, _rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::Quic);
let address =
"/dns4/google.com/udp/24928/quic-v1/p2p/12D3KooWKrUnV42yDR7G6DewmgHtFaVCJWLjQRi2G9t5eJD3BvTy"
.parse()
.unwrap();
assert!(handle.supported_transport(&address));
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn quic_unsupported() {
let (handle, _rx) = make_transport_manager_handle();
let address =
"/dns4/google.com/udp/24928/quic-v1/p2p/12D3KooWKrUnV42yDR7G6DewmgHtFaVCJWLjQRi2G9t5eJD3BvTy"
.parse()
.unwrap();
assert!(!handle.supported_transport(&address));
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn quic_non_terminal_unsupported() {
let (mut handle, _rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::Quic);
let address =
"/dns4/google.com/udp/24928/quic-v1/p2p/12D3KooWKrUnV42yDR7G6DewmgHtFaVCJWLjQRi2G9t5eJD3BvTy/p2p-circuit"
.parse()
.unwrap();
assert!(!handle.supported_transport(&address));
}
#[test]
fn transport_not_supported() {
let (handle, _rx) = make_transport_manager_handle();
        // Only a peer ID (sometimes used by Polkadot).
assert!(!handle.supported_transport(
&Multiaddr::empty().with(Protocol::P2p(Multihash::from(PeerId::random())))
));
// only one transport
assert!(!handle.supported_transport(
&Multiaddr::empty().with(Protocol::Ip4(std::net::Ipv4Addr::new(127, 0, 0, 1)))
));
// any udp-based protocol other than quic
assert!(!handle.supported_transport(
&Multiaddr::empty()
.with(Protocol::Ip4(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Udp(8888))
.with(Protocol::Utp)
));
        // any stream protocol other than tcp
assert!(!handle.supported_transport(
&Multiaddr::empty()
.with(Protocol::Ip4(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Sctp(8888))
));
}
#[test]
fn zero_addresses_added() {
let (mut handle, _rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::Tcp);
assert!(
handle.add_known_address(
&PeerId::random(),
vec![
Multiaddr::empty()
.with(Protocol::Ip4(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Udp(8888))
.with(Protocol::Utp),
Multiaddr::empty()
.with(Protocol::Ip4(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Tcp(8888))
.with(Protocol::Wss(std::borrow::Cow::Owned("/".to_string()))),
]
.into_iter()
) == 0usize
);
}
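    // A hypothetical companion test (a sketch, not part of the original suite): an address
    // that embeds a different peer ID must be filtered out by `add_known_address`, so the
    // call reports zero added addresses.
    #[test]
    fn mismatched_peer_id_filtered_out() {
        let (mut handle, _rx) = make_transport_manager_handle();
        handle.supported_transport.insert(SupportedTransport::Tcp);
        let address = Multiaddr::empty()
            .with(Protocol::Ip4(std::net::Ipv4Addr::new(192, 168, 1, 7)))
            .with(Protocol::Tcp(8888))
            .with(Protocol::P2p(Multihash::from(PeerId::random())));
        // The embedded peer ID differs from the target peer, so nothing is added.
        assert_eq!(
            handle.add_known_address(&PeerId::random(), vec![address].into_iter()),
            0usize
        );
    }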
#[tokio::test]
async fn dial_already_connected_peer() {
let (mut handle, _rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::Tcp);
let peer = {
let peer = PeerId::random();
let mut peers = handle.peers.write();
peers.insert(
peer,
PeerContext {
state: PeerState::Connected {
record: ConnectionRecord {
address: Multiaddr::empty()
.with(Protocol::Ip4(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Tcp(8888))
.with(Protocol::P2p(Multihash::from(peer))),
connection_id: ConnectionId::from(0),
},
secondary: None,
},
addresses: AddressStore::from_iter(
vec![Multiaddr::empty()
.with(Protocol::Ip4(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Tcp(8888))
.with(Protocol::P2p(Multihash::from(peer)))]
.into_iter(),
),
},
);
drop(peers);
peer
};
match handle.dial(&peer) {
Err(ImmediateDialError::AlreadyConnected) => {}
_ => panic!("invalid return value"),
}
}
#[tokio::test]
async fn peer_already_being_dialed() {
let (mut handle, _rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::Tcp);
let peer = {
let peer = PeerId::random();
let mut peers = handle.peers.write();
peers.insert(
peer,
PeerContext {
state: PeerState::Dialing {
dial_record: ConnectionRecord {
address: Multiaddr::empty()
.with(Protocol::Ip4(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Tcp(8888))
.with(Protocol::P2p(Multihash::from(peer))),
connection_id: ConnectionId::from(0),
},
},
addresses: AddressStore::from_iter(
vec![Multiaddr::empty()
.with(Protocol::Ip4(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Tcp(8888))
.with(Protocol::P2p(Multihash::from(peer)))]
.into_iter(),
),
},
);
drop(peers);
peer
};
match handle.dial(&peer) {
Ok(()) => {}
_ => panic!("invalid return value"),
}
}
#[tokio::test]
async fn no_address_available_for_peer() {
let (mut handle, _rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::Tcp);
let peer = {
let peer = PeerId::random();
let mut peers = handle.peers.write();
peers.insert(
peer,
PeerContext {
state: PeerState::Disconnected { dial_record: None },
addresses: AddressStore::new(),
},
);
drop(peers);
peer
};
let err = handle.dial(&peer).unwrap_err();
assert!(matches!(err, ImmediateDialError::NoAddressAvailable));
}
#[tokio::test]
async fn pending_connection_for_disconnected_peer() {
let (mut handle, mut rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::Tcp);
let peer = {
let peer = PeerId::random();
let mut peers = handle.peers.write();
peers.insert(
peer,
PeerContext {
state: PeerState::Disconnected {
dial_record: Some(ConnectionRecord::new(
peer,
Multiaddr::empty()
.with(Protocol::Ip4(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Tcp(8888))
.with(Protocol::P2p(Multihash::from(peer))),
ConnectionId::from(0),
)),
},
addresses: AddressStore::from_iter(
vec![Multiaddr::empty()
.with(Protocol::Ip4(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Tcp(8888))
.with(Protocol::P2p(Multihash::from(peer)))]
.into_iter(),
),
},
);
drop(peers);
peer
};
match handle.dial(&peer) {
Ok(()) => {}
_ => panic!("invalid return value"),
}
assert!(rx.try_recv().is_err());
}
#[tokio::test]
async fn try_to_dial_self() {
let (mut handle, mut rx) = make_transport_manager_handle();
handle.supported_transport.insert(SupportedTransport::Tcp);
let err = handle.dial(&handle.local_peer_id).unwrap_err();
assert_eq!(err, ImmediateDialError::TriedToDialSelf);
assert!(rx.try_recv().is_err());
}
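    // A hypothetical companion test (a sketch, not part of the original suite):
    // `dial_address` rejects a multiaddress lacking a `/p2p/...` component before any
    // command is queued.
    #[tokio::test]
    async fn dial_address_without_peer_id() {
        let (handle, mut rx) = make_transport_manager_handle();
        let err = handle
            .dial_address("/ip4/127.0.0.1/tcp/8888".parse().unwrap())
            .unwrap_err();
        assert_eq!(err, ImmediateDialError::PeerIdMissing);
        // Nothing was sent to the transport manager.
        assert!(rx.try_recv().is_err());
    }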
#[test]
fn is_local_address() {
let (cmd_tx, _cmd_rx) = channel(64);
let local_peer_id = PeerId::random();
        let ipv6_bind: Multiaddr = "/ip6/::1/tcp/8888".parse().expect("valid multiaddress");
        let ipv4_bind: Multiaddr = "/ip4/127.0.0.1/tcp/8888".parse().expect("valid multiaddress");
        let wildcard_bind: Multiaddr = "/ip4/0.0.0.0/tcp/9000".parse().unwrap();
        let listen_addresses = Arc::new(RwLock::new(
            [ipv6_bind, wildcard_bind, ipv4_bind].into_iter().collect(),
        ));
));
let handle = TransportManagerHandle {
local_peer_id,
cmd_tx,
peers: Default::default(),
supported_transport: HashSet::new(),
listen_addresses,
public_addresses: PublicAddresses::new(local_peer_id),
};
// Exact matches
assert!(handle
.is_local_address(&"/ip4/127.0.0.1/tcp/8888".parse().expect("valid multiaddress")));
assert!(handle.is_local_address(
&"/ip6/::1/tcp/8888".parse::<Multiaddr>().expect("valid multiaddress")
));
// Peer ID stripping
assert!(handle.is_local_address(
&"/ip6/::1/tcp/8888/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress")
));
assert!(handle.is_local_address(
&"/ip4/127.0.0.1/tcp/8888/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress")
));
// same address but different peer id
assert!(handle.is_local_address(
&"/ip6/::1/tcp/8888/p2p/12D3KooWPGxxxQiBEBZ52RY31Z2chn4xsDrGCMouZ88izJrak2T1"
.parse::<Multiaddr>()
.expect("valid multiaddress")
));
assert!(handle.is_local_address(
&"/ip4/127.0.0.1/tcp/8888/p2p/12D3KooWPGxxxQiBEBZ52RY31Z2chn4xsDrGCMouZ88izJrak2T1"
.parse()
.expect("valid multiaddress")
));
// Port collision protection: we listen on 0.0.0.0:9000 and should match any loopback
// address on port 9000.
assert!(
handle.is_local_address(&"/ip4/127.0.0.1/tcp/9000".parse().unwrap()),
"Loopback input should satisfy Wildcard (0.0.0.0) listener"
);
// 8.8.8.8 is a different IP.
assert!(
!handle.is_local_address(&"/ip4/8.8.8.8/tcp/9000".parse().unwrap()),
"Remote IP with same port should NOT be considered local against Wildcard listener"
);
// Port mismatches
assert!(
!handle.is_local_address(&"/ip4/127.0.0.1/tcp/1234".parse().unwrap()),
"Same IP but different port should fail"
);
assert!(
!handle.is_local_address(&"/ip4/0.0.0.0/tcp/1234".parse().unwrap()),
"Wildcard IP but different port should fail"
);
assert!(!handle
.is_local_address(&"/ip4/127.0.0.1/tcp/9999".parse().expect("valid multiaddress")));
assert!(!handle
.is_local_address(&"/ip4/127.0.0.1/tcp/7777".parse().expect("valid multiaddress")));
}
}
// Copyright 2024 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Shared socket listener between TCP and WebSocket.
use crate::{
error::{AddressError, DnsError},
PeerId,
};
use futures::Stream;
use hickory_resolver::TokioResolver;
use multiaddr::{Multiaddr, Protocol};
use network_interface::{Addr, NetworkInterface, NetworkInterfaceConfig};
use socket2::{Domain, Socket, Type};
use tokio::net::{TcpListener as TokioTcpListener, TcpStream};
use std::{
io,
net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr},
pin::Pin,
sync::Arc,
task::{Context, Poll},
};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::transport::listener";
/// Address type.
#[derive(Debug)]
pub enum AddressType {
/// Socket address.
Socket(SocketAddr),
/// DNS address.
Dns {
address: String,
port: u16,
dns_type: DnsType,
},
}
/// The DNS type of the address.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum DnsType {
/// DNS supports both IPv4 and IPv6.
Dns,
/// DNS supports only IPv4.
Dns4,
/// DNS supports only IPv6.
Dns6,
}
impl AddressType {
/// Resolve the address to a concrete IP.
pub async fn lookup_ip(self, resolver: Arc<TokioResolver>) -> Result<SocketAddr, DnsError> {
let (url, port, dns_type) = match self {
// We already have the IP address.
AddressType::Socket(address) => return Ok(address),
AddressType::Dns {
address,
port,
dns_type,
} => (address, port, dns_type),
};
let lookup = match resolver.lookup_ip(url.clone()).await {
Ok(lookup) => lookup,
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?error,
"failed to resolve DNS address `{}`",
url
);
return Err(DnsError::ResolveError(url));
}
};
let Some(ip) = lookup.iter().find(|ip| match dns_type {
DnsType::Dns => true,
DnsType::Dns4 => ip.is_ipv4(),
DnsType::Dns6 => ip.is_ipv6(),
}) else {
tracing::debug!(
target: LOG_TARGET,
"Multiaddr DNS type does not match IP version `{}`",
url
);
return Err(DnsError::IpVersionMismatch);
};
Ok(SocketAddr::new(ip, port))
}
}
/// Local addresses to use for outbound connections.
#[derive(Clone, Default)]
pub enum DialAddresses {
/// Reuse port from listen addresses.
Reuse {
listen_addresses: Arc<Vec<SocketAddr>>,
},
/// Do not reuse port.
#[default]
NoReuse,
}
impl DialAddresses {
/// Get local dial address for an outbound connection.
pub fn local_dial_address(&self, remote_address: &IpAddr) -> Result<Option<SocketAddr>, ()> {
match self {
DialAddresses::Reuse { listen_addresses } => {
for address in listen_addresses.iter() {
if remote_address.is_ipv4() == address.is_ipv4()
&& remote_address.is_loopback() == address.ip().is_loopback()
{
if remote_address.is_ipv4() {
return Ok(Some(SocketAddr::new(
IpAddr::V4(Ipv4Addr::UNSPECIFIED),
address.port(),
)));
} else {
return Ok(Some(SocketAddr::new(
IpAddr::V6(Ipv6Addr::UNSPECIFIED),
address.port(),
)));
}
}
}
Err(())
}
DialAddresses::NoReuse => Ok(None),
}
}
}
/// Socket listening to zero or more addresses.
pub struct SocketListener {
/// Listeners.
listeners: Vec<TokioTcpListener>,
/// The index in the listeners from which the polling is resumed.
poll_index: usize,
}
/// Trait to convert between `Multiaddr` and `SocketAddr`.
pub trait GetSocketAddr {
/// Convert `Multiaddr` to `SocketAddr`.
///
/// # Note
///
/// This method is called from two main code paths:
/// - When creating a new `SocketListener` to bind to a specific address.
/// - When dialing a new connection to a remote address.
///
/// The `AddressType` is either `SocketAddr` or a `Dns` address.
    /// For `Dns` addresses, the concrete IP address is resolved later in our code.
///
/// The `PeerId` is optional and may not be present.
fn multiaddr_to_socket_address(
address: &Multiaddr,
) -> Result<(AddressType, Option<PeerId>), AddressError>;
/// Convert concrete `SocketAddr` to `Multiaddr`.
fn socket_address_to_multiaddr(address: &SocketAddr) -> Multiaddr;
}
/// TCP helper to convert between `Multiaddr` and `SocketAddr`.
pub struct TcpAddress;
impl GetSocketAddr for TcpAddress {
fn multiaddr_to_socket_address(
address: &Multiaddr,
) -> Result<(AddressType, Option<PeerId>), AddressError> {
multiaddr_to_socket_address(address, SocketListenerType::Tcp)
}
fn socket_address_to_multiaddr(address: &SocketAddr) -> Multiaddr {
Multiaddr::empty()
.with(Protocol::from(address.ip()))
.with(Protocol::Tcp(address.port()))
}
}
/// WebSocket helper to convert between `Multiaddr` and `SocketAddr`.
#[cfg(feature = "websocket")]
pub struct WebSocketAddress;
#[cfg(feature = "websocket")]
impl GetSocketAddr for WebSocketAddress {
fn multiaddr_to_socket_address(
address: &Multiaddr,
) -> Result<(AddressType, Option<PeerId>), AddressError> {
multiaddr_to_socket_address(address, SocketListenerType::WebSocket)
}
fn socket_address_to_multiaddr(address: &SocketAddr) -> Multiaddr {
Multiaddr::empty()
.with(Protocol::from(address.ip()))
.with(Protocol::Tcp(address.port()))
.with(Protocol::Ws(std::borrow::Cow::Borrowed("/")))
}
}
impl SocketListener {
/// Create new [`SocketListener`]
pub fn new<T: GetSocketAddr>(
addresses: Vec<Multiaddr>,
reuse_port: bool,
nodelay: bool,
) -> (Self, Vec<Multiaddr>, DialAddresses) {
let (listeners, listen_addresses): (_, Vec<Vec<_>>) = addresses
.into_iter()
.filter_map(|address| {
let address = match T::multiaddr_to_socket_address(&address).ok()?.0 {
AddressType::Dns { address, port, .. } => {
tracing::debug!(
target: LOG_TARGET,
?address,
?port,
"dns not supported as bind address"
);
return None;
}
AddressType::Socket(address) => address,
};
let socket = if address.is_ipv4() {
Socket::new(Domain::IPV4, Type::STREAM, Some(socket2::Protocol::TCP)).ok()?
} else {
let socket =
Socket::new(Domain::IPV6, Type::STREAM, Some(socket2::Protocol::TCP))
.ok()?;
socket.set_only_v6(true).ok()?;
socket
};
socket.set_nodelay(nodelay).ok()?;
socket.set_nonblocking(true).ok()?;
socket.set_reuse_address(true).ok()?;
#[cfg(unix)]
if reuse_port {
socket.set_reuse_port(true).ok()?;
}
socket.bind(&address.into()).ok()?;
socket.listen(1024).ok()?;
let socket: std::net::TcpListener = socket.into();
let listener = TokioTcpListener::from_std(socket).ok()?;
let local_address = listener.local_addr().ok()?;
let listen_addresses = if address.ip().is_unspecified() {
match NetworkInterface::show() {
Ok(ifaces) => ifaces
.into_iter()
.flat_map(|record| {
record.addr.into_iter().filter_map(|iface_address| {
match (iface_address, address.is_ipv4()) {
(Addr::V4(inner), true) => Some(SocketAddr::new(
IpAddr::V4(inner.ip),
local_address.port(),
)),
(Addr::V6(inner), false) => {
match inner.ip.segments().first() {
Some(0xfe80) => None,
_ => Some(SocketAddr::new(
IpAddr::V6(inner.ip),
local_address.port(),
)),
}
}
_ => None,
}
})
})
.collect(),
Err(error) => {
tracing::warn!(
target: LOG_TARGET,
?error,
"failed to fetch network interfaces",
);
return None;
}
}
} else {
vec![local_address]
};
Some((listener, listen_addresses))
})
.unzip();
let listen_addresses = listen_addresses.into_iter().flatten().collect::<Vec<_>>();
let listen_multi_addresses =
listen_addresses.iter().map(T::socket_address_to_multiaddr).collect();
let dial_addresses = if reuse_port {
DialAddresses::Reuse {
listen_addresses: Arc::new(listen_addresses),
}
} else {
DialAddresses::NoReuse
};
(
Self {
listeners,
poll_index: 0,
},
listen_multi_addresses,
dial_addresses,
)
}
}
/// The type of the socket listener.
#[derive(Clone, Copy, PartialEq, Eq)]
enum SocketListenerType {
/// Listener for TCP.
Tcp,
/// Listener for WebSocket.
#[cfg(feature = "websocket")]
WebSocket,
}
/// Extract socket address and `PeerId`, if found, from `address`.
fn multiaddr_to_socket_address(
address: &Multiaddr,
ty: SocketListenerType,
) -> Result<(AddressType, Option<PeerId>), AddressError> {
tracing::trace!(target: LOG_TARGET, ?address, "parse multi address");
let mut iter = address.iter();
// Small helper to handle DNS types.
let handle_dns_type =
|address: String, dns_type: DnsType, protocol: Option<Protocol>| match protocol {
Some(Protocol::Tcp(port)) => Ok(AddressType::Dns {
address,
port,
dns_type,
}),
protocol => {
tracing::error!(
target: LOG_TARGET,
?protocol,
"invalid transport protocol, expected `Tcp`",
);
Err(AddressError::InvalidProtocol)
}
};
let socket_address = match iter.next() {
Some(Protocol::Ip6(address)) => match iter.next() {
Some(Protocol::Tcp(port)) =>
AddressType::Socket(SocketAddr::new(IpAddr::V6(address), port)),
protocol => {
tracing::error!(
target: LOG_TARGET,
?protocol,
"invalid transport protocol, expected `Tcp`",
);
return Err(AddressError::InvalidProtocol);
}
},
Some(Protocol::Ip4(address)) => match iter.next() {
Some(Protocol::Tcp(port)) =>
AddressType::Socket(SocketAddr::new(IpAddr::V4(address), port)),
protocol => {
tracing::error!(
target: LOG_TARGET,
?protocol,
"invalid transport protocol, expected `Tcp`",
);
return Err(AddressError::InvalidProtocol);
}
},
Some(Protocol::Dns(address)) => handle_dns_type(address.into(), DnsType::Dns, iter.next())?,
Some(Protocol::Dns4(address)) =>
handle_dns_type(address.into(), DnsType::Dns4, iter.next())?,
Some(Protocol::Dns6(address)) =>
handle_dns_type(address.into(), DnsType::Dns6, iter.next())?,
protocol => {
tracing::error!(target: LOG_TARGET, ?protocol, "invalid transport protocol");
return Err(AddressError::InvalidProtocol);
}
};
match ty {
SocketListenerType::Tcp => (),
#[cfg(feature = "websocket")]
SocketListenerType::WebSocket => {
// verify that `/ws`/`/wss` is part of the multi address
match iter.next() {
Some(Protocol::Ws(_address)) => {}
Some(Protocol::Wss(_address)) => {}
protocol => {
tracing::error!(
target: LOG_TARGET,
?protocol,
"invalid protocol, expected `Ws` or `Wss`"
);
return Err(AddressError::InvalidProtocol);
}
};
}
}
let maybe_peer = match iter.next() {
Some(Protocol::P2p(multihash)) =>
Some(PeerId::from_multihash(multihash).map_err(AddressError::InvalidPeerId)?),
None => None,
protocol => {
tracing::error!(
target: LOG_TARGET,
?protocol,
"invalid protocol, expected `P2p` or `None`"
);
return Err(AddressError::InvalidProtocol);
}
};
Ok((socket_address, maybe_peer))
}
impl Stream for SocketListener {
type Item = io::Result<(TcpStream, SocketAddr)>;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
if self.listeners.is_empty() {
return Poll::Pending;
}
let len = self.listeners.len();
for index in 0..len {
let current = (self.poll_index + index) % len;
let listener = &mut self.listeners[current];
match listener.poll_accept(cx) {
Poll::Pending => {}
Poll::Ready(Err(error)) => {
self.poll_index = (self.poll_index + 1) % len;
return Poll::Ready(Some(Err(error)));
}
Poll::Ready(Ok((stream, address))) => {
self.poll_index = (self.poll_index + 1) % len;
return Poll::Ready(Some(Ok((stream, address))));
}
}
}
self.poll_index = (self.poll_index + 1) % len;
Poll::Pending
}
}
#[cfg(test)]
mod tests {
use super::*;
use futures::StreamExt;
#[test]
fn parse_multiaddresses_tcp() {
assert!(multiaddr_to_socket_address(
&"/ip6/::1/tcp/8888".parse().expect("valid multiaddress"),
SocketListenerType::Tcp,
)
.is_ok());
assert!(multiaddr_to_socket_address(
&"/ip4/127.0.0.1/tcp/8888".parse().expect("valid multiaddress"),
SocketListenerType::Tcp,
)
.is_ok());
assert!(multiaddr_to_socket_address(
&"/ip6/::1/tcp/8888/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::Tcp,
)
.is_ok());
assert!(multiaddr_to_socket_address(
&"/ip4/127.0.0.1/tcp/8888/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::Tcp,
)
.is_ok());
assert!(multiaddr_to_socket_address(
&"/ip6/::1/udp/8888/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::Tcp,
)
.is_err());
assert!(multiaddr_to_socket_address(
&"/ip4/127.0.0.1/udp/8888/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::Tcp,
)
.is_err());
}
#[cfg(feature = "websocket")]
#[test]
fn parse_multiaddresses_websocket() {
assert!(multiaddr_to_socket_address(
&"/ip6/::1/tcp/8888/ws".parse().expect("valid multiaddress"),
SocketListenerType::WebSocket,
)
.is_ok());
assert!(multiaddr_to_socket_address(
&"/ip4/127.0.0.1/tcp/8888/ws".parse().expect("valid multiaddress"),
SocketListenerType::WebSocket,
)
.is_ok());
assert!(multiaddr_to_socket_address(
&"/ip6/::1/tcp/8888/ws/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::WebSocket,
)
.is_ok());
assert!(multiaddr_to_socket_address(
&"/ip4/127.0.0.1/tcp/8888/ws/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::WebSocket,
)
.is_ok());
assert!(multiaddr_to_socket_address(
&"/ip6/::1/udp/8888/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::WebSocket,
)
.is_err());
assert!(multiaddr_to_socket_address(
&"/ip4/127.0.0.1/udp/8888/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::WebSocket,
)
.is_err());
assert!(multiaddr_to_socket_address(
&"/ip4/127.0.0.1/tcp/8888/ws/utp".parse().expect("valid multiaddress"),
SocketListenerType::WebSocket,
)
.is_err());
assert!(multiaddr_to_socket_address(
&"/ip6/::1/tcp/8888/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::WebSocket,
)
.is_err());
assert!(multiaddr_to_socket_address(
&"/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::WebSocket,
)
.is_err());
assert!(multiaddr_to_socket_address(
&"/dns/hello.world/tcp/8888/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::WebSocket,
)
.is_err());
assert!(multiaddr_to_socket_address(
&"/dns6/hello.world/tcp/8888/ws/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress")
,SocketListenerType::WebSocket,
)
.is_ok());
assert!(multiaddr_to_socket_address(
&"/dns4/hello.world/tcp/8888/ws/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::WebSocket,
)
.is_ok());
assert!(multiaddr_to_socket_address(
&"/dns6/hello.world/tcp/8888/ws/p2p/12D3KooWT2ouvz5uMmCvHJGzAGRHiqDts5hzXR7NdoQ27pGdzp9Q"
.parse()
.expect("valid multiaddress"),
SocketListenerType::WebSocket,
)
.is_ok());
}
#[tokio::test]
async fn no_listeners_tcp() {
let (mut listener, _, _) = SocketListener::new::<TcpAddress>(Vec::new(), true, false);
futures::future::poll_fn(|cx| match listener.poll_next_unpin(cx) {
Poll::Pending => Poll::Ready(()),
event => panic!("unexpected event: {event:?}"),
})
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn no_listeners_websocket() {
let (mut listener, _, _) = SocketListener::new::<WebSocketAddress>(Vec::new(), true, false);
futures::future::poll_fn(|cx| match listener.poll_next_unpin(cx) {
Poll::Pending => Poll::Ready(()),
event => panic!("unexpected event: {event:?}"),
})
.await;
}
#[tokio::test]
async fn one_listener_tcp() {
let address: Multiaddr = "/ip6/::1/tcp/0".parse().unwrap();
let (mut listener, listen_addresses, _) =
SocketListener::new::<TcpAddress>(vec![address.clone()], true, false);
let Some(Protocol::Tcp(port)) = listen_addresses.first().unwrap().clone().iter().nth(1)
else {
panic!("invalid address");
};
let (res1, res2) =
tokio::join!(listener.next(), TcpStream::connect(format!("[::1]:{port}")));
assert!(res1.unwrap().is_ok() && res2.is_ok());
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn one_listener_websocket() {
let address: Multiaddr = "/ip6/::1/tcp/0/ws".parse().unwrap();
let (mut listener, listen_addresses, _) =
SocketListener::new::<WebSocketAddress>(vec![address.clone()], true, false);
let Some(Protocol::Tcp(port)) = listen_addresses.first().unwrap().clone().iter().nth(1)
else {
panic!("invalid address");
};
let (res1, res2) =
tokio::join!(listener.next(), TcpStream::connect(format!("[::1]:{port}")));
assert!(res1.unwrap().is_ok() && res2.is_ok());
}
#[tokio::test]
async fn two_listeners_tcp() {
let address1: Multiaddr = "/ip6/::1/tcp/0".parse().unwrap();
let address2: Multiaddr = "/ip4/127.0.0.1/tcp/0".parse().unwrap();
let (mut listener, listen_addresses, _) =
SocketListener::new::<TcpAddress>(vec![address1, address2], true, false);
let Some(Protocol::Tcp(port1)) = listen_addresses.first().unwrap().clone().iter().nth(1)
else {
panic!("invalid address");
};
let Some(Protocol::Tcp(port2)) =
listen_addresses.iter().nth(1).unwrap().clone().iter().nth(1)
else {
panic!("invalid address");
};
tokio::spawn(async move { while listener.next().await.is_some() {} });
let (res1, res2) = tokio::join!(
TcpStream::connect(format!("[::1]:{port1}")),
TcpStream::connect(format!("127.0.0.1:{port2}"))
);
assert!(res1.is_ok() && res2.is_ok());
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn two_listeners_websocket() {
let address1: Multiaddr = "/ip6/::1/tcp/0/ws".parse().unwrap();
let address2: Multiaddr = "/ip4/127.0.0.1/tcp/0/ws".parse().unwrap();
let (mut listener, listen_addresses, _) =
SocketListener::new::<WebSocketAddress>(vec![address1, address2], true, false);
let Some(Protocol::Tcp(port1)) = listen_addresses.first().unwrap().clone().iter().nth(1)
else {
panic!("invalid address");
};
let Some(Protocol::Tcp(port2)) =
listen_addresses.iter().nth(1).unwrap().clone().iter().nth(1)
else {
panic!("invalid address");
};
tokio::spawn(async move { while listener.next().await.is_some() {} });
let (res1, res2) = tokio::join!(
TcpStream::connect(format!("[::1]:{port1}")),
TcpStream::connect(format!("127.0.0.1:{port2}"))
);
assert!(res1.is_ok() && res2.is_ok());
}
#[tokio::test]
async fn local_dial_address() {
let dial_addresses = DialAddresses::Reuse {
listen_addresses: Arc::new(vec![
"[2001:7d0:84aa:3900:2a5d:9e85::]:8888".parse().unwrap(),
"92.168.127.1:9999".parse().unwrap(),
]),
};
assert_eq!(
dial_addresses.local_dial_address(&IpAddr::V4(Ipv4Addr::new(192, 168, 0, 1))),
Ok(Some(SocketAddr::new(
IpAddr::V4(Ipv4Addr::UNSPECIFIED),
9999
))),
);
assert_eq!(
dial_addresses.local_dial_address(&IpAddr::V6(Ipv6Addr::new(0, 1, 2, 3, 4, 5, 6, 7))),
Ok(Some(SocketAddr::new(
IpAddr::V6(Ipv6Addr::UNSPECIFIED),
8888
))),
);
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/transport/common/mod.rs | src/transport/common/mod.rs |
// Copyright 2024 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Shared transport protocol implementation
pub mod listener;
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/transport/webrtc/config.rs | src/transport/webrtc/config.rs |
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! WebRTC transport configuration.
use multiaddr::Multiaddr;
/// WebRTC transport configuration.
#[derive(Debug)]
pub struct Config {
/// WebRTC listening address.
pub listen_addresses: Vec<Multiaddr>,
/// Connection datagram buffer size.
///
/// How many datagrams the buffer between `WebRtcTransport` and a connection handler can hold.
pub datagram_buffer_size: usize,
}
impl Default for Config {
fn default() -> Self {
Self {
listen_addresses: vec!["/ip4/127.0.0.1/udp/8888/webrtc-direct"
.parse()
.expect("valid multiaddress")],
datagram_buffer_size: 2048,
}
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/transport/webrtc/opening.rs | src/transport/webrtc/opening.rs |
// Copyright 2023-2024 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! WebRTC handshaking code for an opening connection.
use crate::{
config::Role,
crypto::{ed25519::Keypair, noise::NoiseContext},
transport::{webrtc::util::WebRtcMessage, Endpoint},
types::ConnectionId,
Error, PeerId,
};
use multiaddr::{multihash::Multihash, Multiaddr, Protocol};
use str0m::{
channel::ChannelId,
config::Fingerprint,
net::{DatagramRecv, DatagramSend, Protocol as Str0mProtocol, Receive},
Event, IceConnectionState, Input, Output, Rtc,
};
use std::{net::SocketAddr, time::Instant};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::webrtc::connection";
/// Create Noise prologue.
fn noise_prologue(local_fingerprint: Vec<u8>, remote_fingerprint: Vec<u8>) -> Vec<u8> {
const PREFIX: &[u8] = b"libp2p-webrtc-noise:";
let mut prologue =
Vec::with_capacity(PREFIX.len() + local_fingerprint.len() + remote_fingerprint.len());
prologue.extend_from_slice(PREFIX);
prologue.extend_from_slice(&remote_fingerprint);
prologue.extend_from_slice(&local_fingerprint);
prologue
}
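The prologue layout above is easy to get backwards, so here is a standalone sketch (stdlib only, with toy two-byte fingerprints in place of real 32-byte SHA-256 digests) demonstrating that the remote fingerprint is appended before the local one:

```rust
// Same construction as `noise_prologue` above, reproduced standalone:
// "libp2p-webrtc-noise:" || remote_fingerprint || local_fingerprint.
fn noise_prologue(local_fingerprint: Vec<u8>, remote_fingerprint: Vec<u8>) -> Vec<u8> {
    const PREFIX: &[u8] = b"libp2p-webrtc-noise:";
    let mut prologue =
        Vec::with_capacity(PREFIX.len() + local_fingerprint.len() + remote_fingerprint.len());
    prologue.extend_from_slice(PREFIX);
    prologue.extend_from_slice(&remote_fingerprint);
    prologue.extend_from_slice(&local_fingerprint);
    prologue
}

fn main() {
    // Toy "fingerprints"; real ones are 32-byte SHA-256 digests.
    let local = vec![0xAA, 0xBB];
    let remote = vec![0x11, 0x22];
    let prologue = noise_prologue(local, remote);

    // The prefix is 20 bytes, then the REMOTE fingerprint, then the local one.
    assert!(prologue.starts_with(b"libp2p-webrtc-noise:"));
    assert_eq!(&prologue[20..22], &[0x11, 0x22]);
    assert_eq!(&prologue[22..24], &[0xAA, 0xBB]);
    println!("prologue length: {}", prologue.len());
}
```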
/// WebRTC connection event.
#[derive(Debug)]
pub enum WebRtcEvent {
/// Register timeout for the connection.
Timeout {
/// Timeout.
timeout: Instant,
},
/// Transmit data to remote peer.
Transmit {
/// Destination.
destination: SocketAddr,
/// Datagram to transmit.
datagram: DatagramSend,
},
/// Connection closed.
ConnectionClosed,
/// Connection established.
ConnectionOpened {
/// Remote peer ID.
peer: PeerId,
/// Endpoint.
endpoint: Endpoint,
},
}
/// Opening WebRTC connection.
///
/// This object is used to track an opening connection which starts with a Noise handshake.
/// After the handshake is done, this object is destroyed and a new WebRTC connection object
/// is created which implements a normal connection event loop dealing with substreams.
pub struct OpeningWebRtcConnection {
/// WebRTC object
rtc: Rtc,
/// Connection state.
state: State,
/// Connection ID.
connection_id: ConnectionId,
/// Noise channel ID.
noise_channel_id: ChannelId,
/// Local keypair.
id_keypair: Keypair,
/// Peer address
peer_address: SocketAddr,
/// Local address.
local_address: SocketAddr,
}
/// Connection state.
#[derive(Debug)]
enum State {
/// Connection is poisoned.
Poisoned,
/// Connection is closed.
Closed,
/// Connection has been opened.
Opened {
/// Noise context.
context: NoiseContext,
},
/// Local Noise handshake has been sent to peer and the connection
/// is waiting for an answer.
HandshakeSent {
/// Noise context.
context: NoiseContext,
},
/// Response to local Noise handshake has been received and the connection
/// is being validated by `TransportManager`.
Validating {
/// Noise context.
context: NoiseContext,
},
}
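The variants above form a linear handshake progression. As an illustration only (not the crate's actual types), the transitions can be sketched with a consuming `advance` step that mirrors the `std::mem::replace(&mut self.state, State::Poisoned)` pattern used below:

```rust
// Illustrative state machine for the opening handshake; each step consumes
// the previous state, just as the real code takes ownership via mem::replace.
#[derive(Debug, PartialEq)]
enum State {
    Closed,
    Opened,
    HandshakeSent,
    Validating,
}

impl State {
    fn advance(self) -> Result<State, &'static str> {
        match self {
            State::Closed => Ok(State::Opened),            // DTLS connected
            State::Opened => Ok(State::HandshakeSent),     // first Noise message sent
            State::HandshakeSent => Ok(State::Validating), // remote reply parsed
            State::Validating => Err("waiting for TransportManager accept/reject"),
        }
    }
}

fn main() {
    let state = State::Closed.advance().and_then(State::advance).and_then(State::advance);
    assert_eq!(state, Ok(State::Validating));
    println!("reached {:?}", state);
}
```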
impl OpeningWebRtcConnection {
/// Create new [`OpeningWebRtcConnection`].
pub fn new(
rtc: Rtc,
connection_id: ConnectionId,
noise_channel_id: ChannelId,
id_keypair: Keypair,
peer_address: SocketAddr,
local_address: SocketAddr,
) -> OpeningWebRtcConnection {
tracing::trace!(
target: LOG_TARGET,
?connection_id,
?peer_address,
"new connection opened",
);
Self {
rtc,
state: State::Closed,
connection_id,
noise_channel_id,
id_keypair,
peer_address,
local_address,
}
}
/// Get remote fingerprint to bytes.
fn remote_fingerprint(&mut self) -> Vec<u8> {
let fingerprint = self
.rtc
.direct_api()
.remote_dtls_fingerprint()
.expect("fingerprint to exist")
.clone();
Self::fingerprint_to_bytes(&fingerprint)
}
/// Get local fingerprint as bytes.
fn local_fingerprint(&mut self) -> Vec<u8> {
Self::fingerprint_to_bytes(self.rtc.direct_api().local_dtls_fingerprint())
}
/// Convert `Fingerprint` to bytes.
fn fingerprint_to_bytes(fingerprint: &Fingerprint) -> Vec<u8> {
const MULTIHASH_SHA256_CODE: u64 = 0x12;
Multihash::wrap(MULTIHASH_SHA256_CODE, &fingerprint.bytes)
.expect("fingerprint's len to be 32 bytes")
.to_bytes()
}
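For intuition, the multihash wrapping performed by `fingerprint_to_bytes` yields a two-byte prefix followed by the digest: the sha2-256 multihash code (`0x12`) and the digest length (`0x20`), both of which fit in a single varint byte. A hand-rolled sketch that avoids the `multihash` crate entirely:

```rust
// Wire layout of a sha2-256 multihash: <varint code><varint length><digest>.
// For sha2-256 both varints are a single byte (0x12 and 0x20), so no real
// varint encoder is needed in this sketch.
fn sha256_multihash(digest: &[u8; 32]) -> Vec<u8> {
    let mut out = Vec::with_capacity(2 + digest.len());
    out.push(0x12); // multihash code for sha2-256
    out.push(0x20); // digest length: 32 bytes
    out.extend_from_slice(digest);
    out
}

fn main() {
    let digest = [0u8; 32];
    let encoded = sha256_multihash(&digest);
    assert_eq!(encoded.len(), 34);
    assert_eq!(&encoded[..2], &[0x12, 0x20]);
    println!("encoded {} bytes", encoded.len());
}
```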
/// Once a Noise data channel has been opened, even though the light client was the dialer,
/// the WebRTC server will act as the dialer as per the specification.
///
/// Create the first Noise handshake message and send it to remote peer.
fn on_noise_channel_open(&mut self) -> crate::Result<()> {
tracing::trace!(target: LOG_TARGET, "send initial noise handshake");
let State::Opened { mut context } = std::mem::replace(&mut self.state, State::Poisoned)
else {
return Err(Error::InvalidState);
};
// create first noise handshake and send it to remote peer
let payload = WebRtcMessage::encode(context.first_message(Role::Dialer)?);
self.rtc
.channel(self.noise_channel_id)
.ok_or(Error::ChannelDoesntExist)?
.write(true, payload.as_slice())
.map_err(Error::WebRtc)?;
self.state = State::HandshakeSent { context };
Ok(())
}
/// Handle timeout.
pub fn on_timeout(&mut self) -> crate::Result<()> {
if let Err(error) = self.rtc.handle_input(Input::Timeout(Instant::now())) {
tracing::error!(
target: LOG_TARGET,
?error,
"failed to handle timeout for `Rtc`"
);
self.rtc.disconnect();
return Err(Error::Disconnected);
}
Ok(())
}
/// Handle Noise handshake response.
///
/// The message contains the remote peer's ID, which is used by the `TransportManager` to validate
/// the connection. Note that the Noise handshake requires one more message to be sent by the dialer
/// (us), but the inbound connection must first be verified by the `TransportManager`, which will
/// either accept or reject the connection.
///
/// If the peer is accepted, [`OpeningWebRtcConnection::on_accept()`] is called which creates
/// the final Noise message and sends it to the remote peer, concluding the handshake.
fn on_noise_channel_data(&mut self, data: Vec<u8>) -> crate::Result<WebRtcEvent> {
tracing::trace!(target: LOG_TARGET, "handle noise handshake reply");
let State::HandshakeSent { mut context } =
std::mem::replace(&mut self.state, State::Poisoned)
else {
return Err(Error::InvalidState);
};
let message = WebRtcMessage::decode(&data)?.payload.ok_or(Error::InvalidData)?;
let remote_peer_id = context.get_remote_peer_id(&message)?;
tracing::trace!(
target: LOG_TARGET,
?remote_peer_id,
"remote reply parsed successfully",
);
self.state = State::Validating { context };
let remote_fingerprint = self
.rtc
.direct_api()
.remote_dtls_fingerprint()
.expect("fingerprint to exist")
.clone()
.bytes;
const MULTIHASH_SHA256_CODE: u64 = 0x12;
let certificate = Multihash::wrap(MULTIHASH_SHA256_CODE, &remote_fingerprint)
.expect("fingerprint's len to be 32 bytes");
let address = Multiaddr::empty()
.with(Protocol::from(self.peer_address.ip()))
.with(Protocol::Udp(self.peer_address.port()))
.with(Protocol::WebRTC)
.with(Protocol::Certhash(certificate))
.with(Protocol::P2p(remote_peer_id.into()));
Ok(WebRtcEvent::ConnectionOpened {
peer: remote_peer_id,
endpoint: Endpoint::listener(address, self.connection_id),
})
}
/// Accept connection by sending the final Noise handshake message
/// and return the `Rtc` object for further use.
pub fn on_accept(mut self) -> crate::Result<Rtc> {
tracing::trace!(target: LOG_TARGET, "accept webrtc connection");
let State::Validating { mut context } = std::mem::replace(&mut self.state, State::Poisoned)
else {
return Err(Error::InvalidState);
};
// create second noise handshake message and send it to remote
let payload = WebRtcMessage::encode(context.second_message()?);
let mut channel =
self.rtc.channel(self.noise_channel_id).ok_or(Error::ChannelDoesntExist)?;
channel.write(true, payload.as_slice()).map_err(Error::WebRtc)?;
self.rtc.direct_api().close_data_channel(self.noise_channel_id);
Ok(self.rtc)
}
/// Handle input from peer.
pub fn on_input(&mut self, buffer: DatagramRecv) -> crate::Result<()> {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer_address,
"handle input from peer",
);
let message = Input::Receive(
Instant::now(),
Receive {
source: self.peer_address,
proto: Str0mProtocol::Udp,
destination: self.local_address,
contents: buffer,
},
);
match self.rtc.accepts(&message) {
true => self.rtc.handle_input(message).map_err(|error| {
tracing::debug!(target: LOG_TARGET, source = ?self.peer_address, ?error, "failed to handle data");
Error::InputRejected
}),
false => {
tracing::warn!(
target: LOG_TARGET,
peer = ?self.peer_address,
"input rejected",
);
Err(Error::InputRejected)
}
}
}
/// Progress the state of [`OpeningWebRtcConnection`].
pub fn poll_process(&mut self) -> WebRtcEvent {
if !self.rtc.is_alive() {
tracing::debug!(
target: LOG_TARGET,
"`Rtc` is not alive, closing `WebRtcConnection`"
);
return WebRtcEvent::ConnectionClosed;
}
loop {
let output = match self.rtc.poll_output() {
Ok(output) => output,
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
connection_id = ?self.connection_id,
?error,
"`WebRtcConnection::poll_process()` failed",
);
return WebRtcEvent::ConnectionClosed;
}
};
match output {
Output::Transmit(transmit) => {
tracing::trace!(
target: LOG_TARGET,
"transmit data",
);
return WebRtcEvent::Transmit {
destination: transmit.destination,
datagram: transmit.contents,
};
}
Output::Timeout(timeout) => return WebRtcEvent::Timeout { timeout },
Output::Event(e) => match e {
Event::IceConnectionStateChange(v) =>
if v == IceConnectionState::Disconnected {
tracing::trace!(target: LOG_TARGET, "ice connection closed");
return WebRtcEvent::ConnectionClosed;
},
Event::ChannelOpen(channel_id, name) => {
tracing::trace!(
target: LOG_TARGET,
connection_id = ?self.connection_id,
?channel_id,
?name,
"channel opened",
);
if channel_id != self.noise_channel_id {
tracing::warn!(
target: LOG_TARGET,
connection_id = ?self.connection_id,
?channel_id,
"ignoring opened channel",
);
continue;
}
// TODO: https://github.com/paritytech/litep2p/issues/350 no expect
self.on_noise_channel_open().expect("to succeed");
}
Event::ChannelData(data) => {
tracing::trace!(
target: LOG_TARGET,
"data received over channel",
);
if data.id != self.noise_channel_id {
tracing::warn!(
target: LOG_TARGET,
channel_id = ?data.id,
connection_id = ?self.connection_id,
"ignoring data from channel",
);
continue;
}
// TODO: https://github.com/paritytech/litep2p/issues/350 no expect
return self.on_noise_channel_data(data.data).expect("to succeed");
}
Event::ChannelClose(channel_id) => {
tracing::debug!(target: LOG_TARGET, ?channel_id, "channel closed");
}
Event::Connected => match std::mem::replace(&mut self.state, State::Poisoned) {
State::Closed => {
let remote_fingerprint = self.remote_fingerprint();
let local_fingerprint = self.local_fingerprint();
let context = match NoiseContext::with_prologue(
&self.id_keypair,
noise_prologue(local_fingerprint, remote_fingerprint),
) {
Ok(context) => context,
Err(err) => {
tracing::error!(
target: LOG_TARGET,
peer = ?self.peer_address,
"NoiseContext failed with error {err}",
);
return WebRtcEvent::ConnectionClosed;
}
};
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer_address,
"connection opened",
);
self.state = State::Opened { context };
}
state => {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer_address,
?state,
"invalid state for connection"
);
return WebRtcEvent::ConnectionClosed;
}
},
event => {
tracing::warn!(target: LOG_TARGET, ?event, "unhandled event");
}
},
}
}
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/transport/webrtc/connection.rs | src/transport/webrtc/connection.rs |
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
error::{Error, ParseError, SubstreamError},
multistream_select::{
webrtc_listener_negotiate, HandshakeResult, ListenerSelectResult, WebRtcDialerState,
},
protocol::{Direction, Permit, ProtocolCommand, ProtocolSet},
substream::Substream,
transport::{
webrtc::{
substream::{Event as SubstreamEvent, Substream as WebRtcSubstream, SubstreamHandle},
util::WebRtcMessage,
},
Endpoint,
},
types::{protocol::ProtocolName, SubstreamId},
PeerId,
};
use futures::{Stream, StreamExt};
use indexmap::IndexMap;
use str0m::{
channel::{ChannelConfig, ChannelId},
net::{Protocol as Str0mProtocol, Receive},
Event, IceConnectionState, Input, Output, Rtc,
};
use tokio::{net::UdpSocket, sync::mpsc::Receiver};
use std::{
collections::HashMap,
net::SocketAddr,
pin::Pin,
sync::Arc,
task::{Context, Poll},
time::Instant,
};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::webrtc::connection";
/// Channel context.
#[derive(Debug)]
struct ChannelContext {
/// Protocol name.
protocol: ProtocolName,
/// Fallback names.
fallback_names: Vec<ProtocolName>,
/// Substream ID.
substream_id: SubstreamId,
/// Permit which keeps the connection open.
permit: Permit,
}
/// Set of [`SubstreamHandle`]s.
struct SubstreamHandleSet {
/// Current index.
index: usize,
/// Substream handles.
handles: IndexMap<ChannelId, SubstreamHandle>,
}
impl SubstreamHandleSet {
/// Create new [`SubstreamHandleSet`].
pub fn new() -> Self {
Self {
index: 0usize,
handles: IndexMap::new(),
}
}
/// Get mutable access to `SubstreamHandle`.
pub fn get_mut(&mut self, key: &ChannelId) -> Option<&mut SubstreamHandle> {
self.handles.get_mut(key)
}
/// Insert new handle to [`SubstreamHandleSet`].
pub fn insert(&mut self, key: ChannelId, handle: SubstreamHandle) {
assert!(self.handles.insert(key, handle).is_none());
}
/// Remove handle from [`SubstreamHandleSet`].
pub fn remove(&mut self, key: &ChannelId) -> Option<SubstreamHandle> {
self.handles.shift_remove(key)
}
}
impl Stream for SubstreamHandleSet {
type Item = (ChannelId, Option<SubstreamEvent>);
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let len = match self.handles.len() {
0 => return Poll::Pending,
len => len,
};
let start_index = self.index;
loop {
let index = self.index % len;
self.index += 1;
let (key, stream) = self.handles.get_index_mut(index).expect("handle to exist");
match stream.poll_next_unpin(cx) {
Poll::Pending => {}
Poll::Ready(event) => return Poll::Ready(Some((*key, event))),
}
if self.index == start_index + len {
break Poll::Pending;
}
}
}
}
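The `poll_next` implementation above scans the handles round-robin: it resumes from a remembered index, advances the index on every probe, and gives up after one full lap. The same index arithmetic can be shown with plain closures standing in for streams (a sketch, not the real `Stream` machinery):

```rust
// Round-robin scan over a set of sources: start at a remembered index,
// advance on every probe, stop after one full lap so no source starves.
struct RoundRobin {
    index: usize,
}

impl RoundRobin {
    // A source yields Some(item) when ready and None when pending (a stand-in
    // for a stream's Poll). Returns the first ready (position, item), if any.
    fn scan<T>(&mut self, sources: &mut [Box<dyn FnMut() -> Option<T>>]) -> Option<(usize, T)> {
        let len = sources.len();
        if len == 0 {
            return None;
        }
        let start = self.index;
        loop {
            let i = self.index % len;
            self.index += 1;
            if let Some(item) = sources[i]() {
                return Some((i, item));
            }
            if self.index == start + len {
                return None; // one full lap, everything pending
            }
        }
    }
}

fn main() {
    let mut rr = RoundRobin { index: 0 };
    let mut sources: Vec<Box<dyn FnMut() -> Option<u32>>> =
        vec![Box::new(|| Some(1)), Box::new(|| Some(2))];
    // Because the index rotates past each ready source, consecutive scans
    // alternate between them instead of always favouring the first.
    assert_eq!(rr.scan(&mut sources), Some((0, 1)));
    assert_eq!(rr.scan(&mut sources), Some((1, 2)));
    assert_eq!(rr.scan(&mut sources), Some((0, 1)));
    println!("round-robin ok");
}
```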
/// Channel state.
#[derive(Debug)]
enum ChannelState {
/// Channel is closing.
Closing,
/// Inbound channel is opening.
InboundOpening,
/// Outbound channel is opening.
OutboundOpening {
/// Channel context.
context: ChannelContext,
/// `multistream-select` dialer state.
dialer_state: WebRtcDialerState,
},
/// Channel is open.
Open {
/// Substream ID.
substream_id: SubstreamId,
/// Channel ID.
channel_id: ChannelId,
/// Connection permit.
permit: Permit,
},
}
/// WebRTC connection.
pub struct WebRtcConnection {
/// `str0m` WebRTC object.
rtc: Rtc,
/// Protocol set.
protocol_set: ProtocolSet,
/// Remote peer ID.
peer: PeerId,
/// Endpoint.
endpoint: Endpoint,
/// Peer address
peer_address: SocketAddr,
/// Local address.
local_address: SocketAddr,
/// Transport socket.
socket: Arc<UdpSocket>,
/// RX channel for receiving datagrams from the transport.
dgram_rx: Receiver<Vec<u8>>,
/// Pending outbound channels.
pending_outbound: HashMap<ChannelId, ChannelContext>,
/// Open channels.
channels: HashMap<ChannelId, ChannelState>,
/// Substream handles.
handles: SubstreamHandleSet,
}
impl WebRtcConnection {
/// Create new [`WebRtcConnection`].
pub fn new(
rtc: Rtc,
peer: PeerId,
peer_address: SocketAddr,
local_address: SocketAddr,
socket: Arc<UdpSocket>,
protocol_set: ProtocolSet,
endpoint: Endpoint,
dgram_rx: Receiver<Vec<u8>>,
) -> Self {
Self {
rtc,
protocol_set,
peer,
peer_address,
local_address,
socket,
endpoint,
dgram_rx,
pending_outbound: HashMap::new(),
channels: HashMap::new(),
handles: SubstreamHandleSet::new(),
}
}
/// Handle opened channel.
///
/// If the channel is inbound, nothing is done, because the `multistream-select`
/// handshake must first be received from the remote peer before anything
/// else can be done.
///
/// If the channel is outbound, send the `multistream-select` handshake to the remote peer.
async fn on_channel_opened(
&mut self,
channel_id: ChannelId,
channel_name: String,
) -> crate::Result<()> {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
?channel_name,
"channel opened",
);
let Some(mut context) = self.pending_outbound.remove(&channel_id) else {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
"inbound channel opened, wait for `multistream-select` message",
);
self.channels.insert(channel_id, ChannelState::InboundOpening);
return Ok(());
};
let fallback_names = std::mem::take(&mut context.fallback_names);
let (dialer_state, message) =
WebRtcDialerState::propose(context.protocol.clone(), fallback_names)?;
let message = WebRtcMessage::encode(message);
self.rtc
.channel(channel_id)
.ok_or(Error::ChannelDoesntExist)?
.write(true, message.as_ref())
.map_err(Error::WebRtc)?;
self.channels.insert(
channel_id,
ChannelState::OutboundOpening {
context,
dialer_state,
},
);
Ok(())
}
/// Handle closed channel.
async fn on_channel_closed(&mut self, channel_id: ChannelId) -> crate::Result<()> {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
"channel closed",
);
self.pending_outbound.remove(&channel_id);
self.channels.remove(&channel_id);
self.handles.remove(&channel_id);
Ok(())
}
/// Handle data received to an opening inbound channel.
///
/// The first message received over an inbound channel is the `multistream-select` handshake.
/// This handshake contains the protocol (and potentially fallbacks for that protocol) that
/// remote peer wants to use for this channel. Parse the handshake and check if any of the
/// proposed protocols are supported by the local node. If not, send a rejection to the remote
/// peer and close the channel. If the local node supports one of the protocols, send a
/// confirmation for the protocol to the remote peer and report an opened substream to the
/// selected protocol.
async fn on_inbound_opening_channel_data(
&mut self,
channel_id: ChannelId,
data: Vec<u8>,
) -> crate::Result<(SubstreamId, SubstreamHandle, Permit)> {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
"handle opening inbound substream",
);
let payload = WebRtcMessage::decode(&data)?.payload.ok_or(Error::InvalidData)?;
let (response, negotiated) = match webrtc_listener_negotiate(
&mut self.protocol_set.protocols().iter(),
payload.into(),
)? {
ListenerSelectResult::Accepted { protocol, message } => (message, Some(protocol)),
ListenerSelectResult::Rejected { message } => (message, None),
};
self.rtc
.channel(channel_id)
.ok_or(Error::ChannelDoesntExist)?
.write(true, WebRtcMessage::encode(response.to_vec()).as_ref())
.map_err(Error::WebRtc)?;
let protocol = negotiated.ok_or(Error::SubstreamDoesntExist)?;
let substream_id = self.protocol_set.next_substream_id();
let codec = self.protocol_set.protocol_codec(&protocol);
let permit = self.protocol_set.try_get_permit().ok_or(Error::ConnectionClosed)?;
let (substream, handle) = WebRtcSubstream::new();
let substream = Substream::new_webrtc(self.peer, substream_id, substream, codec);
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
?substream_id,
?protocol,
"inbound substream opened",
);
self.protocol_set
.report_substream_open(self.peer, protocol.clone(), Direction::Inbound, substream)
.await
.map(|_| (substream_id, handle, permit))
.map_err(Into::into)
}
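The listener-side negotiation described above boils down to: pick the first proposed protocol the local node supports, otherwise reject. A toy sketch (`negotiate` and the protocol strings here are made up for illustration; multistream-select's actual rejection token is `na`):

```rust
// Listener-side protocol selection: the dialer proposes its preferred
// protocol first, then any fallbacks; the listener accepts the first one
// it supports, or answers with the rejection token "na".
fn negotiate<'a>(supported: &[&'a str], proposed: &[&'a str]) -> Result<&'a str, &'static str> {
    proposed
        .iter()
        .find(|p| supported.contains(p))
        .copied()
        .ok_or("na")
}

fn main() {
    let supported = ["/ipfs/ping/1.0.0", "/chat/1.1.0"];
    // Preferred protocol is unsupported, so the fallback is selected.
    assert_eq!(
        negotiate(&supported, &["/chat/2.0.0", "/chat/1.1.0"]),
        Ok("/chat/1.1.0")
    );
    // Nothing matches: the listener rejects.
    assert_eq!(negotiate(&supported, &["/chat/3.0.0"]), Err("na"));
    println!("negotiation sketch ok");
}
```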
/// Handle data received to an opening outbound channel.
///
/// When an outbound channel is opened, the first message the local node sends is the
/// `multistream-select` handshake, which contains the protocol (and any fallbacks for that
/// protocol) that the local node wants to negotiate for the channel. When a message is
/// received from a remote peer for a channel in state [`ChannelState::OutboundOpening`], parse
/// the `multistream-select` handshake response. The response either contains a rejection which
/// causes the substream to be closed, a partial response, or a full response. If only a partial
/// response is received, e.g., just the header line, the handshake cannot be concluded
/// and the channel is placed back in the [`ChannelState::OutboundOpening`] state to wait for
/// the rest of the handshake. If a full response is received (or rest of the partial response),
/// the protocol confirmation is verified and the substream is reported to the protocol.
///
/// If the substream fails to open for whatever reason, since this is an outbound substream,
/// the protocol is notified of the failure.
async fn on_outbound_opening_channel_data(
&mut self,
channel_id: ChannelId,
data: Vec<u8>,
mut dialer_state: WebRtcDialerState,
context: ChannelContext,
) -> Result<Option<(SubstreamId, SubstreamHandle, Permit)>, SubstreamError> {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
data_len = ?data.len(),
"handle opening outbound substream",
);
let rtc_message = WebRtcMessage::decode(&data)
.map_err(|err| SubstreamError::NegotiationError(err.into()))?;
let message = rtc_message.payload.ok_or(SubstreamError::NegotiationError(
ParseError::InvalidData.into(),
))?;
let HandshakeResult::Succeeded(protocol) = dialer_state.register_response(message)? else {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
"multistream-select handshake not ready",
);
self.channels.insert(
channel_id,
ChannelState::OutboundOpening {
context,
dialer_state,
},
);
return Ok(None);
};
let ChannelContext {
substream_id,
permit,
..
} = context;
let codec = self.protocol_set.protocol_codec(&protocol);
let (substream, handle) = WebRtcSubstream::new();
let substream = Substream::new_webrtc(self.peer, substream_id, substream, codec);
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
?substream_id,
?protocol,
"outbound substream opened",
);
self.protocol_set
.report_substream_open(
self.peer,
protocol.clone(),
Direction::Outbound(substream_id),
substream,
)
.await
.map(|_| Some((substream_id, handle, permit)))
}
/// Handle data received from an open channel.
async fn on_open_channel_data(
&mut self,
channel_id: ChannelId,
data: Vec<u8>,
) -> crate::Result<()> {
let message = WebRtcMessage::decode(&data)?;
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
flags = message.flags,
data_len = message.payload.as_ref().map_or(0usize, |payload| payload.len()),
"handle inbound message",
);
self.handles
.get_mut(&channel_id)
.ok_or_else(|| {
tracing::warn!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
"data received from an unknown channel",
);
debug_assert!(false);
Error::InvalidState
})?
.on_message(message)
.await
}
/// Handle data received from a channel.
async fn on_inbound_data(&mut self, channel_id: ChannelId, data: Vec<u8>) -> crate::Result<()> {
let Some(state) = self.channels.remove(&channel_id) else {
tracing::warn!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
"data received over a channel that doesn't exist",
);
debug_assert!(false);
return Err(Error::InvalidState);
};
match state {
ChannelState::InboundOpening => {
match self.on_inbound_opening_channel_data(channel_id, data).await {
Ok((substream_id, handle, permit)) => {
self.handles.insert(channel_id, handle);
self.channels.insert(
channel_id,
ChannelState::Open {
substream_id,
channel_id,
permit,
},
);
}
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
?error,
"failed to handle opening inbound substream",
);
self.channels.insert(channel_id, ChannelState::Closing);
self.rtc.direct_api().close_data_channel(channel_id);
}
}
}
ChannelState::OutboundOpening {
context,
dialer_state,
} => {
let protocol = context.protocol.clone();
let substream_id = context.substream_id;
match self
.on_outbound_opening_channel_data(channel_id, data, dialer_state, context)
.await
{
Ok(Some((substream_id, handle, permit))) => {
self.handles.insert(channel_id, handle);
self.channels.insert(
channel_id,
ChannelState::Open {
substream_id,
channel_id,
permit,
},
);
}
Ok(None) => {}
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
?error,
"failed to handle opening outbound substream",
);
let _ = self
.protocol_set
.report_substream_open_failure(protocol, substream_id, error)
.await;
self.rtc.direct_api().close_data_channel(channel_id);
self.channels.insert(channel_id, ChannelState::Closing);
}
}
}
ChannelState::Open {
substream_id,
channel_id,
permit,
} => match self.on_open_channel_data(channel_id, data).await {
Ok(()) => {
self.channels.insert(
channel_id,
ChannelState::Open {
substream_id,
channel_id,
permit,
},
);
}
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
?error,
"failed to handle data for an open channel",
);
self.rtc.direct_api().close_data_channel(channel_id);
self.channels.insert(channel_id, ChannelState::Closing);
}
},
ChannelState::Closing => {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
"channel closing, discarding received data",
);
self.channels.insert(channel_id, ChannelState::Closing);
}
}
Ok(())
}
/// Handle outbound data.
fn on_outbound_data(&mut self, channel_id: ChannelId, data: Vec<u8>) -> crate::Result<()> {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
data_len = ?data.len(),
"send data",
);
self.rtc
.channel(channel_id)
.ok_or(Error::ChannelDoesntExist)?
.write(true, WebRtcMessage::encode(data).as_ref())
.map_err(Error::WebRtc)
.map(|_| ())
}
/// Open outbound substream.
fn on_open_substream(
&mut self,
protocol: ProtocolName,
fallback_names: Vec<ProtocolName>,
substream_id: SubstreamId,
permit: Permit,
) {
let channel_id = self.rtc.direct_api().create_data_channel(ChannelConfig {
label: "".to_string(),
ordered: false,
reliability: Default::default(),
negotiated: None,
protocol: protocol.to_string(),
});
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
?substream_id,
?protocol,
?fallback_names,
"open data channel",
);
self.pending_outbound.insert(
channel_id,
ChannelContext {
protocol,
fallback_names,
substream_id,
permit,
},
);
}
/// Connection to peer has been closed.
async fn on_connection_closed(&mut self) {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
"connection closed",
);
let _ = self
.protocol_set
.report_connection_closed(self.peer, self.endpoint.connection_id())
.await;
}
/// Start running event loop of [`WebRtcConnection`].
pub async fn run(mut self) {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
"start webrtc connection event loop",
);
let _ = self
.protocol_set
.report_connection_established(self.peer, self.endpoint.clone())
.await;
loop {
// poll output until we get a timeout
let timeout = match self.rtc.poll_output().unwrap() {
Output::Timeout(v) => v,
Output::Transmit(v) => {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
datagram_len = ?v.contents.len(),
"transmit data",
);
self.socket.try_send_to(&v.contents, v.destination).unwrap();
continue;
}
Output::Event(v) => match v {
Event::IceConnectionStateChange(IceConnectionState::Disconnected) => {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
                            "ice connection state changed to disconnected",
);
return self.on_connection_closed().await;
}
Event::ChannelOpen(channel_id, name) => {
if let Err(error) = self.on_channel_opened(channel_id, name).await {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
?error,
"failed to handle opened channel",
);
}
continue;
}
Event::ChannelClose(channel_id) => {
if let Err(error) = self.on_channel_closed(channel_id).await {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
?error,
"failed to handle closed channel",
);
}
continue;
}
Event::ChannelData(info) => {
if let Err(error) = self.on_inbound_data(info.id, info.data).await {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer,
channel_id = ?info.id,
?error,
"failed to handle channel data",
);
}
continue;
}
event => {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer,
?event,
"unhandled event",
);
continue;
}
},
};
let duration = timeout - Instant::now();
if duration.is_zero() {
self.rtc.handle_input(Input::Timeout(Instant::now())).unwrap();
continue;
}
tokio::select! {
biased;
datagram = self.dgram_rx.recv() => match datagram {
Some(datagram) => {
let input = Input::Receive(
Instant::now(),
Receive {
proto: Str0mProtocol::Udp,
source: self.peer_address,
destination: self.local_address,
contents: datagram.as_slice().try_into().unwrap(),
},
);
self.rtc.handle_input(input).unwrap();
}
None => {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
"read `None` from `dgram_rx`",
);
return self.on_connection_closed().await;
}
},
event = self.handles.next() => match event {
None => unreachable!(),
Some((channel_id, None | Some(SubstreamEvent::Close))) => {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?channel_id,
"channel closed",
);
self.rtc.direct_api().close_data_channel(channel_id);
self.channels.insert(channel_id, ChannelState::Closing);
self.handles.remove(&channel_id);
}
Some((channel_id, Some(SubstreamEvent::Message(data)))) => {
if let Err(error) = self.on_outbound_data(channel_id, data) {
tracing::debug!(
target: LOG_TARGET,
?channel_id,
?error,
"failed to send data to remote peer",
);
}
}
Some((_, Some(SubstreamEvent::RecvClosed))) => {}
},
command = self.protocol_set.next() => match command {
None | Some(ProtocolCommand::ForceClose) => {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?command,
"`ProtocolSet` instructed to close connection",
);
return self.on_connection_closed().await;
}
Some(ProtocolCommand::OpenSubstream { protocol, fallback_names, substream_id, permit, .. }) => {
self.on_open_substream(protocol, fallback_names, substream_id, permit);
}
},
_ = tokio::time::sleep(duration) => {
self.rtc.handle_input(Input::Timeout(Instant::now())).unwrap();
}
}
}
}
}
// src/transport/webrtc/util.rs

// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{codec::unsigned_varint::UnsignedVarint, error::ParseError, transport::webrtc::schema};
use prost::Message;
use tokio_util::codec::{Decoder, Encoder};
/// WebRTC message.
#[derive(Debug)]
pub struct WebRtcMessage {
/// Payload.
pub payload: Option<Vec<u8>>,
    /// Flags.
pub flags: Option<i32>,
}
impl WebRtcMessage {
/// Encode WebRTC message.
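    ///
    /// An illustrative round trip (a sketch mirroring the unit tests in this file;
    /// not a doc-test): the payload is wrapped in a `webrtc.Message` protobuf and
    /// prefixed with an unsigned-varint length.
    ///
    /// ```ignore
    /// let framed = WebRtcMessage::encode(b"hello".to_vec());
    /// let decoded = WebRtcMessage::decode(&framed).unwrap();
    /// assert_eq!(decoded.payload, Some(b"hello".to_vec()));
    /// assert_eq!(decoded.flags, None);
    /// ```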
pub fn encode(payload: Vec<u8>) -> Vec<u8> {
let protobuf_payload = schema::webrtc::Message {
message: (!payload.is_empty()).then_some(payload),
flag: None,
};
let mut payload = Vec::with_capacity(protobuf_payload.encoded_len());
protobuf_payload
.encode(&mut payload)
.expect("Vec<u8> to provide needed capacity");
let mut out_buf = bytes::BytesMut::with_capacity(payload.len() + 4);
let mut codec = UnsignedVarint::new(None);
let _result = codec.encode(payload.into(), &mut out_buf);
out_buf.into()
}
/// Encode WebRTC message with flags.
#[allow(unused)]
pub fn encode_with_flags(payload: Vec<u8>, flags: i32) -> Vec<u8> {
let protobuf_payload = schema::webrtc::Message {
message: (!payload.is_empty()).then_some(payload),
flag: Some(flags),
};
let mut payload = Vec::with_capacity(protobuf_payload.encoded_len());
protobuf_payload
.encode(&mut payload)
.expect("Vec<u8> to provide needed capacity");
let mut out_buf = bytes::BytesMut::with_capacity(payload.len() + 4);
let mut codec = UnsignedVarint::new(None);
let _result = codec.encode(payload.into(), &mut out_buf);
out_buf.into()
}
/// Decode payload into [`WebRtcMessage`].
pub fn decode(payload: &[u8]) -> Result<Self, ParseError> {
// TODO: https://github.com/paritytech/litep2p/issues/352 set correct size
let mut codec = UnsignedVarint::new(None);
let mut data = bytes::BytesMut::from(payload);
let result = codec
.decode(&mut data)
.map_err(|_| ParseError::InvalidData)?
.ok_or(ParseError::InvalidData)?;
match schema::webrtc::Message::decode(result) {
Ok(message) => Ok(Self {
payload: message.message,
flags: message.flag,
}),
Err(_) => Err(ParseError::InvalidData),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn with_payload_no_flags() {
let message = WebRtcMessage::encode("Hello, world!".as_bytes().to_vec());
let decoded = WebRtcMessage::decode(&message).unwrap();
assert_eq!(decoded.payload, Some("Hello, world!".as_bytes().to_vec()));
assert_eq!(decoded.flags, None);
}
#[test]
fn with_payload_and_flags() {
let message = WebRtcMessage::encode_with_flags("Hello, world!".as_bytes().to_vec(), 1i32);
let decoded = WebRtcMessage::decode(&message).unwrap();
assert_eq!(decoded.payload, Some("Hello, world!".as_bytes().to_vec()));
assert_eq!(decoded.flags, Some(1i32));
}
#[test]
fn no_payload_with_flags() {
let message = WebRtcMessage::encode_with_flags(vec![], 2i32);
let decoded = WebRtcMessage::decode(&message).unwrap();
assert_eq!(decoded.payload, None);
assert_eq!(decoded.flags, Some(2i32));
}
}
// src/transport/webrtc/mod.rs

// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! WebRTC transport.
use crate::{
error::{AddressError, Error},
transport::{
manager::TransportHandle,
webrtc::{config::Config, connection::WebRtcConnection, opening::OpeningWebRtcConnection},
Endpoint, Transport, TransportBuilder, TransportEvent,
},
types::ConnectionId,
PeerId,
};
use futures::{future::BoxFuture, Future, Stream};
use futures_timer::Delay;
use hickory_resolver::TokioResolver;
use multiaddr::{multihash::Multihash, Multiaddr, Protocol};
use socket2::{Domain, Socket, Type};
use str0m::{
channel::{ChannelConfig, ChannelId},
config::{CryptoProvider, DtlsCert, DtlsCertOptions},
ice::IceCreds,
net::{DatagramRecv, Protocol as Str0mProtocol, Receive},
Candidate, DtlsCertConfig, Input, Rtc,
};
use tokio::{
io::ReadBuf,
net::UdpSocket,
sync::mpsc::{channel, error::TrySendError, Sender},
};
use std::{
collections::{HashMap, VecDeque},
net::{IpAddr, SocketAddr},
pin::Pin,
sync::Arc,
task::{Context, Poll},
time::{Duration, Instant},
};
pub(crate) use substream::Substream;
mod connection;
mod opening;
mod substream;
mod util;
pub mod config;
pub(super) mod schema {
pub(super) mod webrtc {
include!(concat!(env!("OUT_DIR"), "/webrtc.rs"));
}
pub(super) mod noise {
include!(concat!(env!("OUT_DIR"), "/noise.rs"));
}
}
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::webrtc";
/// Hardcoded remote fingerprint.
const REMOTE_FINGERPRINT: &str =
"sha-256 FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF:FF";
/// Connection context.
struct ConnectionContext {
/// Remote peer ID.
peer: PeerId,
/// Connection ID.
connection_id: ConnectionId,
/// TX channel for sending datagrams to the connection event loop.
tx: Sender<Vec<u8>>,
}
/// Events received from opening connections that are handled
/// by the [`WebRtcTransport`] event loop.
enum ConnectionEvent {
/// Connection established.
ConnectionEstablished {
/// Remote peer ID.
peer: PeerId,
/// Endpoint.
endpoint: Endpoint,
},
/// Connection to peer closed.
ConnectionClosed,
/// Timeout.
Timeout {
/// Timeout duration.
duration: Duration,
},
}
/// WebRTC transport.
pub(crate) struct WebRtcTransport {
/// Transport context.
context: TransportHandle,
/// UDP socket.
socket: Arc<UdpSocket>,
/// DTLS certificate.
dtls_cert: DtlsCert,
    /// Assigned listen address.
listen_address: SocketAddr,
/// Datagram buffer size.
datagram_buffer_size: usize,
/// Connected peers.
open: HashMap<SocketAddr, ConnectionContext>,
    /// Opening WebRTC connections.
opening: HashMap<SocketAddr, OpeningWebRtcConnection>,
/// `ConnectionId -> SocketAddr` mappings.
connections: HashMap<ConnectionId, (PeerId, SocketAddr, Endpoint)>,
/// Pending timeouts.
timeouts: HashMap<SocketAddr, BoxFuture<'static, ()>>,
/// Pending events.
pending_events: VecDeque<TransportEvent>,
}
impl WebRtcTransport {
/// Extract socket address and `PeerId`, if found, from `address`.
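    ///
    /// An illustrative sketch (not a doc-test; the address literal is assumed):
    ///
    /// ```ignore
    /// let address: Multiaddr = "/ip4/192.168.1.5/udp/9999/webrtc".parse().unwrap();
    /// let (socket_address, maybe_peer) = WebRtcTransport::get_socket_address(&address)?;
    /// // `socket_address` is `192.168.1.5:9999`; without a trailing `/p2p/<peer>`
    /// // component, `maybe_peer` is `None`.
    /// ```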
fn get_socket_address(address: &Multiaddr) -> crate::Result<(SocketAddr, Option<PeerId>)> {
tracing::trace!(target: LOG_TARGET, ?address, "parse multi address");
let mut iter = address.iter();
let socket_address = match iter.next() {
Some(Protocol::Ip6(address)) => match iter.next() {
Some(Protocol::Udp(port)) => SocketAddr::new(IpAddr::V6(address), port),
protocol => {
tracing::error!(
target: LOG_TARGET,
?protocol,
                        "invalid transport protocol, expected `Udp`",
);
return Err(Error::AddressError(AddressError::InvalidProtocol));
}
},
Some(Protocol::Ip4(address)) => match iter.next() {
Some(Protocol::Udp(port)) => SocketAddr::new(IpAddr::V4(address), port),
protocol => {
tracing::error!(
target: LOG_TARGET,
?protocol,
"invalid transport protocol, expected `Udp`",
);
return Err(Error::AddressError(AddressError::InvalidProtocol));
}
},
protocol => {
tracing::error!(target: LOG_TARGET, ?protocol, "invalid transport protocol");
return Err(Error::AddressError(AddressError::InvalidProtocol));
}
};
match iter.next() {
Some(Protocol::WebRTC) => {}
protocol => {
tracing::error!(
target: LOG_TARGET,
?protocol,
"invalid protocol, expected `WebRTC`"
);
return Err(Error::AddressError(AddressError::InvalidProtocol));
}
}
let maybe_peer = match iter.next() {
Some(Protocol::P2p(multihash)) => Some(PeerId::from_multihash(multihash)?),
None => None,
protocol => {
tracing::error!(
target: LOG_TARGET,
?protocol,
"invalid protocol, expected `P2p` or `None`"
);
return Err(Error::AddressError(AddressError::InvalidProtocol));
}
};
Ok((socket_address, maybe_peer))
}
/// Create RTC client and open channel for Noise handshake.
fn make_rtc_client(
&self,
ufrag: &str,
pass: &str,
source: SocketAddr,
destination: SocketAddr,
) -> (Rtc, ChannelId) {
let mut rtc = Rtc::builder()
.set_ice_lite(true)
.set_dtls_cert_config(DtlsCertConfig::PregeneratedCert(self.dtls_cert.clone()))
.set_fingerprint_verification(false)
.build();
rtc.add_local_candidate(Candidate::host(destination, Str0mProtocol::Udp).unwrap());
rtc.add_remote_candidate(Candidate::host(source, Str0mProtocol::Udp).unwrap());
rtc.direct_api()
.set_remote_fingerprint(REMOTE_FINGERPRINT.parse().expect("parse() to succeed"));
rtc.direct_api().set_remote_ice_credentials(IceCreds {
ufrag: ufrag.to_owned(),
pass: pass.to_owned(),
});
rtc.direct_api().set_local_ice_credentials(IceCreds {
ufrag: ufrag.to_owned(),
pass: pass.to_owned(),
});
rtc.direct_api().set_ice_controlling(false);
rtc.direct_api().start_dtls(false).unwrap();
rtc.direct_api().start_sctp(false);
let noise_channel_id = rtc.direct_api().create_data_channel(ChannelConfig {
label: "noise".to_string(),
ordered: false,
reliability: Default::default(),
negotiated: Some(0),
protocol: "".to_string(),
});
(rtc, noise_channel_id)
}
/// Poll opening connection.
fn poll_connection(&mut self, source: &SocketAddr) -> ConnectionEvent {
let Some(connection) = self.opening.get_mut(source) else {
tracing::warn!(
target: LOG_TARGET,
?source,
"connection doesn't exist",
);
return ConnectionEvent::ConnectionClosed;
};
loop {
match connection.poll_process() {
opening::WebRtcEvent::Timeout { timeout } => {
let duration = timeout - Instant::now();
match duration.is_zero() {
true => match connection.on_timeout() {
Ok(()) => continue,
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?source,
?error,
"failed to handle timeout",
);
return ConnectionEvent::ConnectionClosed;
}
},
false => return ConnectionEvent::Timeout { duration },
}
}
opening::WebRtcEvent::Transmit {
destination,
datagram,
} =>
if let Err(error) = self.socket.try_send_to(&datagram, destination) {
tracing::warn!(
target: LOG_TARGET,
?source,
?error,
"failed to send datagram",
);
},
opening::WebRtcEvent::ConnectionClosed => return ConnectionEvent::ConnectionClosed,
opening::WebRtcEvent::ConnectionOpened { peer, endpoint } => {
return ConnectionEvent::ConnectionEstablished { peer, endpoint };
}
}
}
}
/// Handle socket input.
///
    /// If the datagram was received from an active client, it's dispatched to the connection
    /// handler, provided there is space in the queue. If the datagram opened a new connection
    /// or belongs to a client that is still opening, the event loop is instructed to poll the
    /// client until it times out.
///
/// Returns `true` if the client should be polled.
fn on_socket_input(&mut self, source: SocketAddr, buffer: Vec<u8>) -> crate::Result<bool> {
if let Some(ConnectionContext {
peer,
connection_id,
tx,
}) = self.open.get_mut(&source)
{
match tx.try_send(buffer) {
Ok(_) => return Ok(false),
Err(TrySendError::Full(_)) => {
tracing::warn!(
target: LOG_TARGET,
?source,
?peer,
?connection_id,
"channel full, dropping datagram",
);
return Ok(false);
}
Err(TrySendError::Closed(_)) => return Ok(false),
}
}
if buffer.is_empty() {
// str0m crate panics if the buffer doesn't contain at least one byte:
// https://github.com/algesten/str0m/blob/2c5dc8ee8ddead08699dd6852a27476af6992a5c/src/io/mod.rs#L222
return Err(Error::InvalidData);
}
// if the peer doesn't exist, decode the message and expect to receive `Stun`
// so that a new connection can be initialized
let contents: DatagramRecv =
buffer.as_slice().try_into().map_err(|_| Error::InvalidData)?;
// Handle non stun packets.
if !is_stun_packet(&buffer) {
tracing::debug!(
target: LOG_TARGET,
?source,
"received non-stun message"
);
if let Err(error) = self.opening.get_mut(&source).expect("to exist").on_input(contents)
{
tracing::error!(
target: LOG_TARGET,
?error,
?source,
"failed to handle inbound datagram"
);
}
return Ok(true);
}
let stun_message =
str0m::ice::StunMessage::parse(&buffer).map_err(|_| Error::InvalidData)?;
let Some((ufrag, pass)) = stun_message.split_username() else {
tracing::warn!(
target: LOG_TARGET,
?source,
"failed to split username/password",
);
return Err(Error::InvalidData);
};
tracing::debug!(
target: LOG_TARGET,
?source,
?ufrag,
?pass,
"received stun message"
);
// create new `Rtc` object for the peer and give it the received STUN message
let (mut rtc, noise_channel_id) =
self.make_rtc_client(ufrag, pass, source, self.socket.local_addr().unwrap());
rtc.handle_input(Input::Receive(
Instant::now(),
Receive {
source,
proto: Str0mProtocol::Udp,
destination: self.socket.local_addr().unwrap(),
contents,
},
))
.expect("client to handle input successfully");
let connection_id = self.context.next_connection_id();
let connection = OpeningWebRtcConnection::new(
rtc,
connection_id,
noise_channel_id,
self.context.keypair.clone(),
source,
self.listen_address,
);
self.opening.insert(source, connection);
Ok(true)
}
}
impl TransportBuilder for WebRtcTransport {
type Config = Config;
type Transport = WebRtcTransport;
/// Create new [`Transport`] object.
fn new(
context: TransportHandle,
config: Self::Config,
_resolver: Arc<TokioResolver>,
) -> crate::Result<(Self, Vec<Multiaddr>)>
where
Self: Sized,
{
tracing::info!(
target: LOG_TARGET,
listen_addresses = ?config.listen_addresses,
"start webrtc transport",
);
let (listen_address, _) = Self::get_socket_address(&config.listen_addresses[0])?;
let socket = if listen_address.is_ipv4() {
let socket = Socket::new(Domain::IPV4, Type::DGRAM, Some(socket2::Protocol::UDP))?;
socket.bind(&listen_address.into())?;
socket
} else {
let socket = Socket::new(Domain::IPV6, Type::DGRAM, Some(socket2::Protocol::UDP))?;
socket.set_only_v6(true)?;
socket.bind(&listen_address.into())?;
socket
};
socket.set_reuse_address(true)?;
socket.set_nonblocking(true)?;
#[cfg(unix)]
socket.set_reuse_port(true)?;
let socket = UdpSocket::from_std(socket.into())?;
let listen_address = socket.local_addr()?;
let dtls_cert = DtlsCert::new(CryptoProvider::OpenSsl, DtlsCertOptions::default());
let listen_multi_addresses = {
let fingerprint = dtls_cert.fingerprint().bytes;
const MULTIHASH_SHA256_CODE: u64 = 0x12;
let certificate = Multihash::wrap(MULTIHASH_SHA256_CODE, &fingerprint)
.expect("fingerprint's len to be 32 bytes");
vec![Multiaddr::empty()
.with(Protocol::from(listen_address.ip()))
.with(Protocol::Udp(listen_address.port()))
.with(Protocol::WebRTC)
.with(Protocol::Certhash(certificate))]
};
Ok((
Self {
context,
dtls_cert,
listen_address,
open: HashMap::new(),
opening: HashMap::new(),
connections: HashMap::new(),
socket: Arc::new(socket),
timeouts: HashMap::new(),
pending_events: VecDeque::new(),
datagram_buffer_size: config.datagram_buffer_size,
},
listen_multi_addresses,
))
}
}
impl Transport for WebRtcTransport {
fn dial(&mut self, connection_id: ConnectionId, address: Multiaddr) -> crate::Result<()> {
tracing::warn!(
target: LOG_TARGET,
?connection_id,
?address,
"webrtc cannot dial",
);
debug_assert!(false);
Err(Error::NotSupported("webrtc cannot dial peers".to_string()))
}
fn accept_pending(&mut self, connection_id: ConnectionId) -> crate::Result<()> {
tracing::trace!(
target: LOG_TARGET,
?connection_id,
"webrtc cannot accept pending connections",
);
debug_assert!(false);
Err(Error::NotSupported(
"webrtc cannot accept pending connections".to_string(),
))
}
fn reject_pending(&mut self, connection_id: ConnectionId) -> crate::Result<()> {
tracing::trace!(
target: LOG_TARGET,
?connection_id,
"webrtc cannot reject pending connections",
);
debug_assert!(false);
Err(Error::NotSupported(
"webrtc cannot reject pending connections".to_string(),
))
}
fn accept(&mut self, connection_id: ConnectionId) -> crate::Result<()> {
tracing::trace!(
target: LOG_TARGET,
?connection_id,
"inbound connection accepted",
);
let (peer, source, endpoint) =
self.connections.remove(&connection_id).ok_or_else(|| {
tracing::warn!(
target: LOG_TARGET,
?connection_id,
                    "pending connection doesn't exist",
);
Error::InvalidState
})?;
let connection = self.opening.remove(&source).ok_or_else(|| {
tracing::warn!(
target: LOG_TARGET,
?connection_id,
                "pending connection doesn't exist",
);
Error::InvalidState
})?;
let rtc = connection.on_accept()?;
let (tx, rx) = channel(self.datagram_buffer_size);
let protocol_set = self.context.protocol_set(connection_id);
let connection_id = endpoint.connection_id();
let connection = WebRtcConnection::new(
rtc,
peer,
source,
self.listen_address,
Arc::clone(&self.socket),
protocol_set,
endpoint,
rx,
);
self.open.insert(
source,
ConnectionContext {
tx,
peer,
connection_id,
},
);
self.context.executor.run(Box::pin(async move {
connection.run().await;
}));
Ok(())
}
fn reject(&mut self, connection_id: ConnectionId) -> crate::Result<()> {
tracing::trace!(
target: LOG_TARGET,
?connection_id,
"inbound connection rejected",
);
let (_, source, _) = self.connections.remove(&connection_id).ok_or_else(|| {
tracing::warn!(
target: LOG_TARGET,
?connection_id,
                "pending connection doesn't exist",
);
Error::InvalidState
})?;
self.opening
.remove(&source)
.ok_or_else(|| {
tracing::warn!(
target: LOG_TARGET,
?connection_id,
                    "pending connection doesn't exist",
);
Error::InvalidState
})
.map(|_| ())
}
fn open(
&mut self,
_connection_id: ConnectionId,
_addresses: Vec<Multiaddr>,
) -> crate::Result<()> {
Ok(())
}
fn negotiate(&mut self, _connection_id: ConnectionId) -> crate::Result<()> {
Ok(())
}
fn cancel(&mut self, _connection_id: ConnectionId) {}
}
impl Stream for WebRtcTransport {
type Item = TransportEvent;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let this = Pin::into_inner(self);
if let Some(event) = this.pending_events.pop_front() {
return Poll::Ready(Some(event));
}
loop {
let mut buf = vec![0u8; 16384];
let mut read_buf = ReadBuf::new(&mut buf);
match this.socket.poll_recv_from(cx, &mut read_buf) {
Poll::Pending => break,
Poll::Ready(Err(error)) => {
tracing::info!(
target: LOG_TARGET,
?error,
"webrtc udp socket closed",
);
return Poll::Ready(None);
}
Poll::Ready(Ok(source)) => {
let nread = read_buf.filled().len();
buf.truncate(nread);
match this.on_socket_input(source, buf) {
Ok(false) => {}
Ok(true) => loop {
match this.poll_connection(&source) {
ConnectionEvent::ConnectionEstablished { peer, endpoint } => {
this.connections.insert(
endpoint.connection_id(),
(peer, source, endpoint.clone()),
);
// keep polling the connection until it registers a timeout
this.pending_events.push_back(
TransportEvent::ConnectionEstablished { peer, endpoint },
);
}
ConnectionEvent::ConnectionClosed => {
this.opening.remove(&source);
this.timeouts.remove(&source);
break;
}
ConnectionEvent::Timeout { duration } => {
this.timeouts.insert(
source,
Box::pin(async move { Delay::new(duration).await }),
);
break;
}
}
},
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?source,
?error,
"failed to handle datagram",
);
}
}
}
}
}
// go over all pending timeouts to see if any of them have expired
// and if any of them have, poll the connection until it registers another timeout
let pending_events = this
.timeouts
.iter_mut()
.filter_map(|(source, mut delay)| match Pin::new(&mut delay).poll(cx) {
Poll::Pending => None,
Poll::Ready(_) => Some(*source),
})
.collect::<Vec<_>>()
.into_iter()
.filter_map(|source| {
let mut pending_event = None;
loop {
match this.poll_connection(&source) {
ConnectionEvent::ConnectionEstablished { peer, endpoint } => {
this.connections
.insert(endpoint.connection_id(), (peer, source, endpoint.clone()));
// keep polling the connection until it registers a timeout
pending_event =
Some(TransportEvent::ConnectionEstablished { peer, endpoint });
}
ConnectionEvent::ConnectionClosed => {
this.opening.remove(&source);
return None;
}
ConnectionEvent::Timeout { duration } => {
this.timeouts.insert(
source,
                            Box::pin(async move { Delay::new(duration).await }),
);
break;
}
}
}
pending_event
})
.collect::<VecDeque<_>>();
this.timeouts.retain(|source, _| this.opening.contains_key(source));
this.pending_events.extend(pending_events);
this.pending_events
.pop_front()
.map_or(Poll::Pending, |event| Poll::Ready(Some(event)))
}
}
/// Check if the packet received is STUN.
///
/// Extracted from the STUN RFC 5389 (<https://datatracker.ietf.org/doc/html/rfc5389#page-10>):
/// All STUN messages MUST start with a 20-byte header followed by zero
/// or more Attributes. The STUN header contains a STUN message type,
/// magic cookie, transaction ID, and message length.
///
/// ```ignore
/// 0 1 2 3
/// 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
/// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/// |0 0| STUN Message Type | Message Length |
/// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/// | Magic Cookie |
/// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/// | |
/// | Transaction ID (96 bits) |
/// | |
/// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
/// ```
fn is_stun_packet(bytes: &[u8]) -> bool {
// 20 bytes for the header, then follows attributes.
bytes.len() >= 20 && bytes[0] < 2
}
// src/transport/webrtc/substream.rs

// Copyright 2024 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
transport::webrtc::{schema::webrtc::message::Flag, util::WebRtcMessage},
Error,
};
use bytes::{Buf, BufMut, BytesMut};
use futures::Stream;
use parking_lot::Mutex;
use tokio::sync::mpsc::{channel, Receiver, Sender};
use tokio_util::sync::PollSender;
use std::{
pin::Pin,
sync::Arc,
task::{Context, Poll},
};
/// Maximum frame size.
const MAX_FRAME_SIZE: usize = 16384;
/// Substream event.
#[derive(Debug, PartialEq, Eq)]
pub enum Event {
/// Receiver closed.
RecvClosed,
/// Send/receive message.
Message(Vec<u8>),
/// Close substream.
Close,
}
/// Substream stream.
enum State {
/// Substream is fully open.
Open,
/// Remote is no longer interested in receiving anything.
SendClosed,
}
/// Channel-backed substream. Must be owned and polled by exactly one task at a time.
pub struct Substream {
/// Substream state.
state: Arc<Mutex<State>>,
/// Read buffer.
read_buffer: BytesMut,
/// TX channel for sending messages to `peer`, wrapped in a [`PollSender`]
/// so that backpressure is driven by the caller's waker.
tx: PollSender<Event>,
/// RX channel for receiving messages from `peer`.
rx: Receiver<Event>,
}
impl Substream {
/// Create new [`Substream`].
pub fn new() -> (Self, SubstreamHandle) {
let (outbound_tx, outbound_rx) = channel(256);
let (inbound_tx, inbound_rx) = channel(256);
let state = Arc::new(Mutex::new(State::Open));
let handle = SubstreamHandle {
tx: inbound_tx,
rx: outbound_rx,
state: Arc::clone(&state),
};
(
Self {
state,
tx: PollSender::new(outbound_tx),
rx: inbound_rx,
read_buffer: BytesMut::new(),
},
handle,
)
}
}
/// Substream handle that is given to the WebRTC transport backend.
pub struct SubstreamHandle {
state: Arc<Mutex<State>>,
/// TX channel for sending inbound messages from `peer` to the associated `Substream`.
tx: Sender<Event>,
/// RX channel for receiving outbound messages to `peer` from the associated `Substream`.
rx: Receiver<Event>,
}
impl SubstreamHandle {
/// Handle message received from a remote peer.
///
/// If the message contains any flags, handle them first and close the appropriate
/// side of the substream. If the message contains a payload, forward it to the protocol
/// for further processing.
pub async fn on_message(&self, message: WebRtcMessage) -> crate::Result<()> {
if let Some(flags) = message.flags {
if flags == Flag::Fin as i32 {
self.tx.send(Event::RecvClosed).await?;
}
if flags == Flag::StopSending as i32 {
*self.state.lock() = State::SendClosed;
}
if flags == Flag::ResetStream as i32 {
return Err(Error::ConnectionClosed);
}
}
if let Some(payload) = message.payload {
if !payload.is_empty() {
return self.tx.send(Event::Message(payload)).await.map_err(From::from);
}
}
Ok(())
}
}
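The flag handling in `on_message` maps each schema flag to a single state transition. Below is a standalone sketch of that dispatch, assuming the flag discriminants from the WebRTC message schema (`Fin = 0`, `StopSending = 1`, `ResetStream = 2`); it uses plain `std` and hypothetical names, not the crate's types:

```rust
// Hedged sketch of the flag dispatch: each flag value selects one action.
// The discriminants (Fin = 0, StopSending = 1, ResetStream = 2) are an
// assumption based on the WebRTC message schema.
#[derive(Debug, PartialEq)]
enum Action {
    RecvClosed, // Fin: remote finished sending, close our receive side
    SendClosed, // StopSending: remote rejects our data, close our send side
    Reset,      // ResetStream: tear down the whole substream
    Ignore,     // unknown flag: no state change
}

fn dispatch(flags: i32) -> Action {
    match flags {
        0 => Action::RecvClosed,
        1 => Action::SendClosed,
        2 => Action::Reset,
        _ => Action::Ignore,
    }
}

fn main() {
    assert_eq!(dispatch(0), Action::RecvClosed);
    assert_eq!(dispatch(1), Action::SendClosed);
    assert_eq!(dispatch(2), Action::Reset);
}
```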
impl Stream for SubstreamHandle {
type Item = Event;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
self.rx.poll_recv(cx)
}
}
impl tokio::io::AsyncRead for Substream {
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>,
) -> Poll<std::io::Result<()>> {
// if there are any remaining bytes from a previous read, consume them first
if self.read_buffer.remaining() > 0 {
let num_bytes = std::cmp::min(self.read_buffer.remaining(), buf.remaining());
buf.put_slice(&self.read_buffer[..num_bytes]);
self.read_buffer.advance(num_bytes);
// TODO: optimize by trying to read more data from substream and not exiting early
return Poll::Ready(Ok(()));
}
match futures::ready!(self.rx.poll_recv(cx)) {
None | Some(Event::Close) | Some(Event::RecvClosed) =>
Poll::Ready(Err(std::io::ErrorKind::BrokenPipe.into())),
Some(Event::Message(message)) => {
if message.len() > MAX_FRAME_SIZE {
return Poll::Ready(Err(std::io::ErrorKind::PermissionDenied.into()));
}
match buf.remaining() >= message.len() {
true => buf.put_slice(&message),
false => {
let remaining = buf.remaining();
buf.put_slice(&message[..remaining]);
self.read_buffer.put_slice(&message[remaining..]);
}
}
Poll::Ready(Ok(()))
}
}
}
}
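When the caller's buffer is smaller than an incoming frame, `poll_read` stores the surplus in `read_buffer` and serves it on the next call. A minimal sketch of that carry-over, with a plain `Vec<u8>` standing in for `BytesMut`:

```rust
// Sketch of the read-buffer carry-over: copy as much of the pending frame
// as fits, keep the surplus buffered for the next read.
fn read_into(frame: &mut Vec<u8>, buf: &mut [u8]) -> usize {
    let n = frame.len().min(buf.len());
    buf[..n].copy_from_slice(&frame[..n]);
    frame.drain(..n); // leftover bytes stay buffered for the next call
    n
}

fn main() {
    let mut pending = vec![1u8; 512];
    let mut buf = [0u8; 256];
    assert_eq!(read_into(&mut pending, &mut buf), 256);
    assert_eq!(pending.len(), 256); // surplus kept for the next read
    assert_eq!(read_into(&mut pending, &mut buf), 256);
    assert!(pending.is_empty());
}
```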
impl tokio::io::AsyncWrite for Substream {
fn poll_write(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<Result<usize, std::io::Error>> {
if let State::SendClosed = *self.state.lock() {
return Poll::Ready(Err(std::io::ErrorKind::BrokenPipe.into()));
}
match futures::ready!(self.tx.poll_reserve(cx)) {
Ok(()) => {}
Err(_) => return Poll::Ready(Err(std::io::ErrorKind::BrokenPipe.into())),
};
let num_bytes = std::cmp::min(MAX_FRAME_SIZE, buf.len());
let frame = buf[..num_bytes].to_vec();
match self.tx.send_item(Event::Message(frame)) {
Ok(()) => Poll::Ready(Ok(num_bytes)),
Err(_) => Poll::Ready(Err(std::io::ErrorKind::BrokenPipe.into())),
}
}
fn poll_flush(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Result<(), std::io::Error>> {
Poll::Ready(Ok(()))
}
fn poll_shutdown(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<(), std::io::Error>> {
match futures::ready!(self.tx.poll_reserve(cx)) {
Ok(()) => {}
Err(_) => return Poll::Ready(Err(std::io::ErrorKind::BrokenPipe.into())),
};
match self.tx.send_item(Event::Close) {
Ok(()) => Poll::Ready(Ok(())),
Err(_) => Poll::Ready(Err(std::io::ErrorKind::BrokenPipe.into())),
}
}
}
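Since `poll_write` accepts at most `MAX_FRAME_SIZE` bytes per call, a `write_all` of a larger buffer is split into several `Event::Message` frames. A std-only sketch of the resulting framing, mirroring the `write_large_frame` test in this module:

```rust
// Sketch of how an oversized write is framed: one chunk per `poll_write`
// call, each at most MAX_FRAME_SIZE bytes (value taken from this module).
const MAX_FRAME_SIZE: usize = 16384;

fn frame(buf: &[u8]) -> Vec<Vec<u8>> {
    buf.chunks(MAX_FRAME_SIZE).map(|chunk| chunk.to_vec()).collect()
}

fn main() {
    // 2 * MAX_FRAME_SIZE + 1 bytes yields two full frames and one
    // single-byte frame.
    let frames = frame(&vec![0u8; 2 * MAX_FRAME_SIZE + 1]);
    assert_eq!(frames.len(), 3);
    assert_eq!(frames[0].len(), MAX_FRAME_SIZE);
    assert_eq!(frames[2].len(), 1);
}
```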
#[cfg(test)]
mod tests {
use super::*;
use futures::StreamExt;
use tokio::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt, ReadBuf};
#[tokio::test]
async fn write_small_frame() {
let (mut substream, mut handle) = Substream::new();
substream.write_all(&vec![0u8; 1337]).await.unwrap();
assert_eq!(handle.next().await, Some(Event::Message(vec![0u8; 1337])));
futures::future::poll_fn(|cx| match handle.poll_next_unpin(cx) {
Poll::Pending => Poll::Ready(()),
Poll::Ready(_) => panic!("invalid event"),
})
.await;
}
#[tokio::test]
async fn write_large_frame() {
let (mut substream, mut handle) = Substream::new();
substream.write_all(&vec![0u8; (2 * MAX_FRAME_SIZE) + 1]).await.unwrap();
assert_eq!(
handle.rx.recv().await,
Some(Event::Message(vec![0u8; MAX_FRAME_SIZE]))
);
assert_eq!(
handle.rx.recv().await,
Some(Event::Message(vec![0u8; MAX_FRAME_SIZE]))
);
assert_eq!(handle.rx.recv().await, Some(Event::Message(vec![0u8; 1])));
futures::future::poll_fn(|cx| match handle.poll_next_unpin(cx) {
Poll::Pending => Poll::Ready(()),
Poll::Ready(_) => panic!("invalid event"),
})
.await;
}
#[tokio::test]
async fn try_to_write_to_closed_substream() {
let (mut substream, handle) = Substream::new();
*handle.state.lock() = State::SendClosed;
match substream.write_all(&vec![0u8; 1337]).await {
Err(error) => assert_eq!(error.kind(), std::io::ErrorKind::BrokenPipe),
_ => panic!("invalid event"),
}
}
#[tokio::test]
async fn substream_shutdown() {
let (mut substream, mut handle) = Substream::new();
substream.write_all(&vec![1u8; 1337]).await.unwrap();
substream.shutdown().await.unwrap();
assert_eq!(handle.next().await, Some(Event::Message(vec![1u8; 1337])));
assert_eq!(handle.next().await, Some(Event::Close));
}
#[tokio::test]
async fn try_to_read_from_closed_substream() {
let (mut substream, handle) = Substream::new();
handle
.on_message(WebRtcMessage {
payload: None,
flags: Some(0i32),
})
.await
.unwrap();
match substream.read(&mut vec![0u8; 256]).await {
Err(error) => assert_eq!(error.kind(), std::io::ErrorKind::BrokenPipe),
_ => panic!("invalid event"),
}
}
#[tokio::test]
async fn read_small_frame() {
let (mut substream, handle) = Substream::new();
handle.tx.send(Event::Message(vec![1u8; 256])).await.unwrap();
let mut buf = vec![0u8; 2048];
match substream.read(&mut buf).await {
Ok(nread) => {
assert_eq!(nread, 256);
assert_eq!(buf[..nread], vec![1u8; 256]);
}
Err(error) => panic!("invalid event: {error:?}"),
}
let mut read_buf = ReadBuf::new(&mut buf);
futures::future::poll_fn(|cx| {
match Pin::new(&mut substream).poll_read(cx, &mut read_buf) {
Poll::Pending => Poll::Ready(()),
_ => panic!("invalid event"),
}
})
.await;
}
#[tokio::test]
async fn read_small_frame_in_two_reads() {
let (mut substream, handle) = Substream::new();
let mut first = vec![1u8; 256];
first.extend_from_slice(&vec![2u8; 256]);
handle.tx.send(Event::Message(first)).await.unwrap();
let mut buf = vec![0u8; 256];
match substream.read(&mut buf).await {
Ok(nread) => {
assert_eq!(nread, 256);
assert_eq!(buf[..nread], vec![1u8; 256]);
}
Err(error) => panic!("invalid event: {error:?}"),
}
match substream.read(&mut buf).await {
Ok(nread) => {
assert_eq!(nread, 256);
assert_eq!(buf[..nread], vec![2u8; 256]);
}
Err(error) => panic!("invalid event: {error:?}"),
}
let mut read_buf = ReadBuf::new(&mut buf);
futures::future::poll_fn(|cx| {
match Pin::new(&mut substream).poll_read(cx, &mut read_buf) {
Poll::Pending => Poll::Ready(()),
_ => panic!("invalid event"),
}
})
.await;
}
#[tokio::test]
async fn read_frames() {
let (mut substream, handle) = Substream::new();
let mut first = vec![1u8; 256];
first.extend_from_slice(&vec![2u8; 256]);
handle.tx.send(Event::Message(first)).await.unwrap();
handle.tx.send(Event::Message(vec![4u8; 2048])).await.unwrap();
let mut buf = vec![0u8; 256];
match substream.read(&mut buf).await {
Ok(nread) => {
assert_eq!(nread, 256);
assert_eq!(buf[..nread], vec![1u8; 256]);
}
Err(error) => panic!("invalid event: {error:?}"),
}
let mut buf = vec![0u8; 128];
match substream.read(&mut buf).await {
Ok(nread) => {
assert_eq!(nread, 128);
assert_eq!(buf[..nread], vec![2u8; 128]);
}
Err(error) => panic!("invalid event: {error:?}"),
}
let mut buf = vec![0u8; 128];
match substream.read(&mut buf).await {
Ok(nread) => {
assert_eq!(nread, 128);
assert_eq!(buf[..nread], vec![2u8; 128]);
}
Err(error) => panic!("invalid event: {error:?}"),
}
let mut buf = vec![0u8; MAX_FRAME_SIZE];
match substream.read(&mut buf).await {
Ok(nread) => {
assert_eq!(nread, 2048);
assert_eq!(buf[..nread], vec![4u8; 2048]);
}
Err(error) => panic!("invalid event: {error:?}"),
}
let mut read_buf = ReadBuf::new(&mut buf);
futures::future::poll_fn(|cx| {
match Pin::new(&mut substream).poll_read(cx, &mut read_buf) {
Poll::Pending => Poll::Ready(()),
_ => panic!("invalid event"),
}
})
.await;
}
#[tokio::test]
async fn backpressure_works() {
let (mut substream, _handle) = Substream::new();
// fill the channel to capacity, which by default is 256 frames (up to `256 * MAX_FRAME_SIZE` bytes)
for _ in 0..128 {
substream.write_all(&vec![0u8; 2 * MAX_FRAME_SIZE]).await.unwrap();
}
// try to write one more byte but since all available bandwidth
// is taken the call will block
futures::future::poll_fn(
|cx| match Pin::new(&mut substream).poll_write(cx, &[0u8; 1]) {
Poll::Pending => Poll::Ready(()),
_ => panic!("invalid event"),
},
)
.await;
}
#[tokio::test]
async fn backpressure_released_wakes_blocked_writer() {
use tokio::time::{sleep, timeout, Duration};
let (mut substream, mut handle) = Substream::new();
// Fill the channel to capacity, same pattern as `backpressure_works`.
for _ in 0..128 {
substream.write_all(&vec![0u8; 2 * MAX_FRAME_SIZE]).await.unwrap();
}
// Spawn a writer task that will try to write once more. This should initially block
// because the channel is full, relying on `PollSender::poll_reserve` to register the
// task's waker so it can be woken later.
let writer = tokio::spawn(async move {
substream
.write_all(&vec![1u8; MAX_FRAME_SIZE])
.await
.expect("write should eventually succeed");
});
// Give the writer a short moment to reach the blocked (Pending) state.
sleep(Duration::from_millis(10)).await;
assert!(
!writer.is_finished(),
"writer should be blocked by backpressure"
);
// Now consume a single message from the receiving side. This will:
// - free capacity in the channel
// - wake the waker registered by the blocked `poll_reserve`
//
// That wake must cause the blocked writer to be polled again and complete its write.
let _ = handle.next().await.expect("expected at least one outbound message");
// The writer should now complete in a timely fashion, proving that:
// - the waker registered by `poll_reserve` is not lost
// - freeing capacity in the channel correctly unblocks the writer.
timeout(Duration::from_secs(1), writer)
.await
.expect("writer task did not complete after capacity was freed")
.expect("writer task panicked");
}
}
// src/transport/s2n-quic/config.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! QUIC transport configuration.
use multiaddr::Multiaddr;
/// QUIC transport configuration.
#[derive(Debug, Clone)]
pub struct Config {
/// Listen address for the transport.
pub listen_address: Multiaddr,
}
// src/transport/s2n-quic/connection.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
codec::{
generic::Unspecified, identity::Identity, unsigned_varint::UnsignedVarint, ProtocolCodec,
},
config::Role,
error::Error,
multistream_select::{dialer_select_proto, listener_select_proto, Negotiated, Version},
protocol::{Direction, Permit, ProtocolCommand, ProtocolSet},
substream::Substream as SubstreamT,
transport::substream::Substream,
types::{protocol::ProtocolName, ConnectionId, SubstreamId},
PeerId,
};
use futures::{future::BoxFuture, stream::FuturesUnordered, AsyncRead, AsyncWrite, StreamExt};
use s2n_quic::{
connection::{Connection, Handle},
stream::BidirectionalStream,
};
use tokio_util::codec::Framed;
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::quic::connection";
/// QUIC connection error.
#[derive(Debug)]
enum ConnectionError {
/// Timeout
Timeout {
/// Protocol.
protocol: Option<ProtocolName>,
/// Substream ID.
substream_id: Option<SubstreamId>,
},
/// Failed to negotiate connection/substream.
FailedToNegotiate {
/// Protocol.
protocol: Option<ProtocolName>,
/// Substream ID.
substream_id: Option<SubstreamId>,
/// Error.
error: Error,
},
}
/// QUIC connection.
pub(crate) struct QuicConnection {
/// Inner QUIC connection.
connection: Connection,
/// Remote peer ID.
peer: PeerId,
/// Connection ID.
connection_id: ConnectionId,
/// Transport context.
protocol_set: ProtocolSet,
/// Pending substreams.
pending_substreams:
FuturesUnordered<BoxFuture<'static, Result<NegotiatedSubstream, ConnectionError>>>,
}
#[derive(Debug)]
pub struct NegotiatedSubstream {
/// Substream direction.
direction: Direction,
/// Protocol name.
protocol: ProtocolName,
/// `s2n-quic` stream.
io: BidirectionalStream,
/// Permit.
permit: Permit,
}
impl QuicConnection {
/// Create new [`QuicConnection`].
pub(crate) fn new(
peer: PeerId,
protocol_set: ProtocolSet,
connection: Connection,
connection_id: ConnectionId,
) -> Self {
Self {
peer,
connection,
connection_id,
pending_substreams: FuturesUnordered::new(),
protocol_set,
}
}
/// Negotiate protocol.
async fn negotiate_protocol<S: AsyncRead + AsyncWrite + Unpin>(
stream: S,
role: &Role,
protocols: Vec<&str>,
) -> crate::Result<(Negotiated<S>, ProtocolName)> {
tracing::trace!(target: LOG_TARGET, ?protocols, "negotiating protocols");
let (protocol, socket) = match role {
Role::Dialer => dialer_select_proto(stream, protocols, Version::V1).await?,
Role::Listener => listener_select_proto(stream, protocols).await?,
};
tracing::trace!(target: LOG_TARGET, ?protocol, "protocol negotiated");
Ok((socket, ProtocolName::from(protocol.to_string())))
}
/// Open substream for `protocol`.
pub async fn open_substream(
mut handle: Handle,
permit: Permit,
direction: Direction,
protocol: ProtocolName,
fallback_names: Vec<ProtocolName>,
) -> crate::Result<NegotiatedSubstream> {
tracing::debug!(target: LOG_TARGET, ?protocol, ?direction, "open substream");
let stream = match handle.open_bidirectional_stream().await {
Ok(stream) => {
tracing::trace!(
target: LOG_TARGET,
?protocol,
?direction,
id = ?stream.id(),
"substream opened"
);
stream
}
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?direction,
?error,
"failed to open substream"
);
return Err(Error::Unknown);
}
};
// TODO: https://github.com/paritytech/litep2p/issues/346 protocols don't change after
// they've been initialized so this should be done only once.
let protocols = std::iter::once(&*protocol)
.chain(fallback_names.iter().map(|protocol| &**protocol))
.collect();
let (io, protocol) = Self::negotiate_protocol(stream, &Role::Dialer, protocols).await?;
Ok(NegotiatedSubstream {
io: io.inner(),
direction,
permit,
protocol,
})
}
/// Accept substream.
pub async fn accept_substream(
stream: BidirectionalStream,
permit: Permit,
substream_id: SubstreamId,
protocols: Vec<ProtocolName>,
) -> crate::Result<NegotiatedSubstream> {
tracing::trace!(
target: LOG_TARGET,
?substream_id,
quic_id = ?stream.id(),
"accept inbound substream"
);
let protocols = protocols.iter().map(|protocol| &**protocol).collect::<Vec<&str>>();
let (io, protocol) = Self::negotiate_protocol(stream, &Role::Listener, protocols).await?;
tracing::trace!(
target: LOG_TARGET,
?substream_id,
?protocol,
"substream accepted and negotiated"
);
Ok(NegotiatedSubstream {
io: io.inner(),
direction: Direction::Inbound,
protocol,
permit,
})
}
/// Start [`QuicConnection`] event loop.
pub(crate) async fn start(mut self) -> crate::Result<()> {
tracing::debug!(target: LOG_TARGET, "starting quic connection handler");
loop {
tokio::select! {
substream = self.connection.accept_bidirectional_stream() => match substream {
Ok(Some(stream)) => {
let substream = self.protocol_set.next_substream_id();
let protocols = self.protocol_set.protocols();
let permit = self.protocol_set.try_get_permit().ok_or(Error::ConnectionClosed)?;
self.pending_substreams.push(Box::pin(async move {
match tokio::time::timeout(
std::time::Duration::from_secs(5), // TODO: https://github.com/paritytech/litep2p/issues/348 make this configurable
Self::accept_substream(stream, permit, substream, protocols),
)
.await
{
Ok(Ok(substream)) => Ok(substream),
Ok(Err(error)) => Err(ConnectionError::FailedToNegotiate {
protocol: None,
substream_id: None,
error,
}),
Err(_) => Err(ConnectionError::Timeout {
protocol: None,
substream_id: None
}),
}
}));
}
Ok(None) => {
tracing::debug!(target: LOG_TARGET, peer = ?self.peer, "connection closed");
self.protocol_set.report_connection_closed(self.peer, self.connection_id).await?;
return Ok(())
}
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer,
?error,
"connection closed with error"
);
self.protocol_set.report_connection_closed(self.peer, self.connection_id).await?;
return Ok(())
}
},
substream = self.pending_substreams.select_next_some(), if !self.pending_substreams.is_empty() => {
match substream {
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?error,
"failed to accept/open substream",
);
let (protocol, substream_id, error) = match error {
ConnectionError::Timeout { protocol, substream_id } => {
(protocol, substream_id, Error::Timeout)
}
ConnectionError::FailedToNegotiate { protocol, substream_id, error } => {
(protocol, substream_id, error)
}
};
if let (Some(protocol), Some(substream_id)) = (protocol, substream_id) {
if let Err(error) = self.protocol_set
.report_substream_open_failure(protocol, substream_id, error)
.await
{
tracing::error!(
target: LOG_TARGET,
?error,
"failed to register opened substream to protocol"
);
}
}
}
Ok(substream) => {
let protocol = substream.protocol.clone();
let direction = substream.direction;
let substream = Substream::new(substream.io, substream.permit);
let substream: Box<dyn SubstreamT> = match self.protocol_set.protocol_codec(&protocol) {
ProtocolCodec::Identity(payload_size) => {
Box::new(Framed::new(substream, Identity::new(payload_size)))
}
ProtocolCodec::UnsignedVarint(max_size) => {
Box::new(Framed::new(substream, UnsignedVarint::new(max_size)))
}
ProtocolCodec::Unspecified => {
Box::new(Framed::new(substream, Unspecified::new()))
}
};
if let Err(error) = self.protocol_set
.report_substream_open(self.peer, protocol, direction, substream)
.await
{
tracing::error!(
target: LOG_TARGET,
?error,
"failed to register opened substream to protocol"
);
}
}
}
}
protocol = self.protocol_set.next_event() => match protocol {
Some(ProtocolCommand::OpenSubstream { protocol, fallback_names, substream_id, permit, .. }) => {
let handle = self.connection.handle();
tracing::trace!(
target: LOG_TARGET,
?protocol,
?fallback_names,
?substream_id,
"open substream"
);
self.pending_substreams.push(Box::pin(async move {
match tokio::time::timeout(
std::time::Duration::from_secs(5), // TODO: https://github.com/paritytech/litep2p/issues/348 make this configurable
Self::open_substream(
handle,
permit,
Direction::Outbound(substream_id),
protocol.clone(),
fallback_names
),
)
.await
{
Ok(Ok(substream)) => Ok(substream),
Ok(Err(error)) => Err(ConnectionError::FailedToNegotiate {
protocol: Some(protocol),
substream_id: Some(substream_id),
error,
}),
Err(_) => Err(ConnectionError::Timeout {
protocol: Some(protocol),
substream_id: Some(substream_id)
}),
}
}));
}
None => {
tracing::debug!(target: LOG_TARGET, "protocols have exited, shutting down connection");
return self.protocol_set.report_connection_closed(self.peer, self.connection_id).await
}
}
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{
crypto::{
ed25519::Keypair,
tls::{certificate::generate, TlsProvider},
PublicKey,
},
protocol::{Transport, TransportEvent},
transport::manager::{SupportedTransport, TransportManager, TransportManagerEvent},
};
use multiaddr::Multiaddr;
use s2n_quic::{client::Connect, Client, Server};
use tokio::sync::mpsc::{channel, Receiver};
// context for testing
struct QuicContext {
manager: TransportManager,
peer: PeerId,
server: Server,
client: Client,
rx: Receiver<PeerId>,
connect: Connect,
}
// prepare quic context for testing
fn prepare_quic_context() -> QuicContext {
let keypair = Keypair::generate();
let (certificate, key) = generate(&keypair).unwrap();
let (tx, rx) = channel(1);
let peer = PeerId::from_public_key(&PublicKey::Ed25519(keypair.public()));
let provider = TlsProvider::new(key, certificate, None, Some(tx.clone()));
let server = Server::builder()
.with_tls(provider)
.expect("TLS provider to be enabled successfully")
.with_io("127.0.0.1:0")
.unwrap()
.start()
.unwrap();
let listen_address = server.local_addr().unwrap();
let keypair = Keypair::generate();
let (certificate, key) = generate(&keypair).unwrap();
let provider = TlsProvider::new(key, certificate, Some(peer), None);
let client = Client::builder()
.with_tls(provider)
.expect("TLS provider to be enabled successfully")
.with_io("0.0.0.0:0")
.unwrap()
.start()
.unwrap();
let connect = Connect::new(listen_address).with_server_name("localhost");
let (manager, _handle) = TransportManager::new(keypair.clone());
QuicContext {
manager,
peer,
server,
client,
connect,
rx,
}
}
#[tokio::test]
async fn connection_closed() {
let QuicContext {
mut manager,
mut server,
peer,
client,
connect,
rx: _rx,
} = prepare_quic_context();
let res = tokio::join!(server.accept(), client.connect(connect));
let (Some(connection1), Ok(connection2)) = res else {
panic!("failed to establish connection");
};
let mut service1 = manager.register_protocol(
ProtocolName::from("/notif/1"),
Vec::new(),
ProtocolCodec::UnsignedVarint(None),
);
let mut service2 = manager.register_protocol(
ProtocolName::from("/notif/2"),
Vec::new(),
ProtocolCodec::UnsignedVarint(None),
);
let transport_handle = manager.register_transport(SupportedTransport::Quic);
let mut protocol_set = transport_handle.protocol_set();
protocol_set
.report_connection_established(ConnectionId::from(0usize), peer, Multiaddr::empty())
.await
.unwrap();
// ignore connection established events
let _ = service1.next_event().await.unwrap();
let _ = service2.next_event().await.unwrap();
let _ = manager.next().await.unwrap();
tokio::spawn(async move {
let _ =
QuicConnection::new(peer, protocol_set, connection1, ConnectionId::from(0usize))
.start()
.await;
});
// drop connection and verify that both protocols are notified of it
drop(connection2);
let (
Some(TransportEvent::ConnectionClosed { .. }),
Some(TransportEvent::ConnectionClosed { .. }),
) = tokio::join!(service1.next_event(), service2.next_event())
else {
panic!("invalid event received");
};
// verify that the `TransportManager` is also notified about the closed connection
let Some(TransportManagerEvent::ConnectionClosed { .. }) = manager.next().await else {
panic!("invalid event received");
};
}
#[tokio::test]
async fn outbound_substream_timeouts() {
let QuicContext {
mut manager,
mut server,
peer,
client,
connect,
rx: _rx,
} = prepare_quic_context();
let res = tokio::join!(server.accept(), client.connect(connect));
let (Some(connection1), Ok(_connection2)) = res else {
panic!("failed to establish connection");
};
let mut service1 = manager.register_protocol(
ProtocolName::from("/notif/1"),
Vec::new(),
ProtocolCodec::UnsignedVarint(None),
);
let mut service2 = manager.register_protocol(
ProtocolName::from("/notif/2"),
Vec::new(),
ProtocolCodec::UnsignedVarint(None),
);
let transport_handle = manager.register_transport(SupportedTransport::Quic);
let mut protocol_set = transport_handle.protocol_set();
protocol_set
.report_connection_established(ConnectionId::from(0usize), peer, Multiaddr::empty())
.await
.unwrap();
// ignore connection established events
let _ = service1.next_event().await.unwrap();
let _ = service2.next_event().await.unwrap();
let _ = manager.next().await.unwrap();
tokio::spawn(async move {
let _ =
QuicConnection::new(peer, protocol_set, connection1, ConnectionId::from(0usize))
.start()
.await;
});
let _ = service1.open_substream(peer).await.unwrap();
let Some(TransportEvent::SubstreamOpenFailure { .. }) = service1.next_event().await else {
panic!("invalid event received");
};
}
#[tokio::test]
async fn outbound_substream_protocol_not_supported() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let QuicContext {
mut manager,
mut server,
peer,
client,
connect,
rx: _rx,
} = prepare_quic_context();
let res = tokio::join!(server.accept(), client.connect(connect));
let (Some(connection1), Ok(mut connection2)) = res else {
panic!("failed to establish connection");
};
let mut service1 = manager.register_protocol(
ProtocolName::from("/notif/1"),
Vec::new(),
ProtocolCodec::UnsignedVarint(None),
);
let mut service2 = manager.register_protocol(
ProtocolName::from("/notif/2"),
Vec::new(),
ProtocolCodec::UnsignedVarint(None),
);
let transport_handle = manager.register_transport(SupportedTransport::Quic);
let mut protocol_set = transport_handle.protocol_set();
protocol_set
.report_connection_established(ConnectionId::from(0usize), peer, Multiaddr::empty())
.await
.unwrap();
// ignore connection established events
let _ = service1.next_event().await.unwrap();
let _ = service2.next_event().await.unwrap();
let _ = manager.next().await.unwrap();
tokio::spawn(async move {
let _ =
QuicConnection::new(peer, protocol_set, connection1, ConnectionId::from(0usize))
.start()
.await;
});
let _ = service1.open_substream(peer).await.unwrap();
let stream = connection2.accept_bidirectional_stream().await.unwrap().unwrap();
assert!(
listener_select_proto(stream, vec!["/unsupported/1", "/unsupported/2"])
.await
.is_err()
);
let Some(TransportEvent::SubstreamOpenFailure { .. }) = service1.next_event().await else {
panic!("invalid event received");
};
}
#[tokio::test]
async fn connection_closed_while_negotiating_protocol() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let QuicContext {
mut manager,
mut server,
peer,
client,
connect,
rx: _rx,
} = prepare_quic_context();
let res = tokio::join!(server.accept(), client.connect(connect));
let (Some(connection1), Ok(mut connection2)) = res else {
panic!("failed to establish connection");
};
let mut service1 = manager.register_protocol(
ProtocolName::from("/notif/1"),
Vec::new(),
ProtocolCodec::UnsignedVarint(None),
);
let mut service2 = manager.register_protocol(
ProtocolName::from("/notif/2"),
Vec::new(),
ProtocolCodec::UnsignedVarint(None),
);
let transport_handle = manager.register_transport(SupportedTransport::Quic);
let mut protocol_set = transport_handle.protocol_set();
protocol_set
.report_connection_established(ConnectionId::from(0usize), peer, Multiaddr::empty())
.await
.unwrap();
// ignore connection established events
let _ = service1.next_event().await.unwrap();
let _ = service2.next_event().await.unwrap();
let _ = manager.next().await.unwrap();
tokio::spawn(async move {
let _ =
QuicConnection::new(peer, protocol_set, connection1, ConnectionId::from(0usize))
.start()
.await;
});
let _ = service1.open_substream(peer).await.unwrap();
let stream = connection2.accept_bidirectional_stream().await.unwrap().unwrap();
drop(stream);
drop(connection2);
let Some(TransportEvent::SubstreamOpenFailure { .. }) = service1.next_event().await else {
panic!("invalid event received");
};
}
#[tokio::test]
async fn outbound_substream_opened_and_negotiated() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let QuicContext {
mut manager,
mut server,
peer,
client,
connect,
rx: _rx,
} = prepare_quic_context();
let res = tokio::join!(server.accept(), client.connect(connect));
let (Some(connection1), Ok(mut connection2)) = res else {
panic!("failed to establish connection");
};
let mut service1 = manager.register_protocol(
ProtocolName::from("/notif/1"),
Vec::new(),
ProtocolCodec::UnsignedVarint(None),
);
let mut service2 = manager.register_protocol(
ProtocolName::from("/notif/2"),
Vec::new(),
ProtocolCodec::UnsignedVarint(None),
);
let transport_handle = manager.register_transport(SupportedTransport::Quic);
let mut protocol_set = transport_handle.protocol_set();
protocol_set
.report_connection_established(ConnectionId::from(0usize), peer, Multiaddr::empty())
.await
.unwrap();
// ignore connection established events
let _ = service1.next_event().await.unwrap();
let _ = service2.next_event().await.unwrap();
let _ = manager.next().await.unwrap();
tokio::spawn(async move {
let _ =
QuicConnection::new(peer, protocol_set, connection1, ConnectionId::from(0usize))
.start()
.await;
});
let _ = service1.open_substream(peer).await.unwrap();
let stream = connection2.accept_bidirectional_stream().await.unwrap().unwrap();
let (_io, _proto) =
listener_select_proto(stream, vec!["/notif/1", "/notif/2"]).await.unwrap();
let Some(TransportEvent::SubstreamOpened { .. }) = service1.next_event().await else {
panic!("invalid event received");
};
}
}
// src/transport/s2n-quic/mod.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! QUIC transport.
use crate::{
crypto::tls::{certificate::generate, TlsProvider},
error::{AddressError, Error},
transport::{
manager::{TransportHandle, TransportManagerCommand},
quic::{config::Config, connection::QuicConnection},
Transport,
},
types::ConnectionId,
PeerId,
};
use futures::{future::BoxFuture, stream::FuturesUnordered, StreamExt};
use multiaddr::{Multiaddr, Protocol};
use multihash::Multihash;
use s2n_quic::{
client::Connect,
connection::{Connection, Error as ConnectionError},
Client, Server,
};
use tokio::sync::mpsc::{channel, Receiver, Sender};
use std::{
collections::HashMap,
net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr},
};
mod connection;
pub mod config;
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::quic";
/// Convert `SocketAddr` to `Multiaddr`
fn socket_addr_to_multi_addr(address: &SocketAddr) -> Multiaddr {
let mut multiaddr = Multiaddr::from(address.ip());
multiaddr.push(Protocol::Udp(address.port()));
multiaddr.push(Protocol::QuicV1);
multiaddr
}
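The mapping above appends `/udp/<port>/quic-v1` to the IP component. As a dependency-free illustration, the hypothetical helper below (not part of this crate) reproduces the same textual form with `format!` only:

```rust
use std::net::{IpAddr, SocketAddr};

/// Illustrative stand-in for `socket_addr_to_multi_addr`: produces the
/// `/ipX/<addr>/udp/<port>/quic-v1` string without the `multiaddr` crate.
fn socket_addr_to_multiaddr_string(address: &SocketAddr) -> String {
    let ip_part = match address.ip() {
        IpAddr::V4(ip) => format!("/ip4/{ip}"),
        IpAddr::V6(ip) => format!("/ip6/{ip}"),
    };
    format!("{}/udp/{}/quic-v1", ip_part, address.port())
}

fn main() {
    let address: SocketAddr = "127.0.0.1:8888".parse().unwrap();
    // Same shape the transport advertises as its listen address.
    println!("{}", socket_addr_to_multiaddr_string(&address));
}
```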
/// QUIC transport object.
#[derive(Debug)]
pub(crate) struct QuicTransport {
/// QUIC server.
server: Server,
/// Transport context.
context: TransportHandle,
/// Assigned listen address.
listen_address: SocketAddr,
/// Listen address assigned for clients.
client_listen_address: SocketAddr,
/// Pending dials.
pending_dials: HashMap<ConnectionId, Multiaddr>,
/// Pending connections.
pending_connections: FuturesUnordered<
BoxFuture<'static, (ConnectionId, PeerId, Result<Connection, ConnectionError>)>,
>,
/// RX channel for receiving the client `PeerId`.
rx: Receiver<PeerId>,
/// TX channel for sending the client `PeerId` to the server.
_tx: Sender<PeerId>,
}
impl QuicTransport {
/// Extract socket address and `PeerId`, if found, from `address`.
fn get_socket_address(address: &Multiaddr) -> crate::Result<(SocketAddr, Option<PeerId>)> {
tracing::trace!(target: LOG_TARGET, ?address, "parse multi address");
let mut iter = address.iter();
let socket_address = match iter.next() {
Some(Protocol::Ip6(address)) => match iter.next() {
Some(Protocol::Udp(port)) => SocketAddr::new(IpAddr::V6(address), port),
protocol => {
tracing::error!(
target: LOG_TARGET,
?protocol,
"invalid transport protocol, expected `Udp`",
);
return Err(Error::AddressError(AddressError::InvalidProtocol));
}
},
Some(Protocol::Ip4(address)) => match iter.next() {
Some(Protocol::Udp(port)) => SocketAddr::new(IpAddr::V4(address), port),
protocol => {
tracing::error!(
target: LOG_TARGET,
?protocol,
"invalid transport protocol, expected `Udp`",
);
return Err(Error::AddressError(AddressError::InvalidProtocol));
}
},
protocol => {
tracing::error!(target: LOG_TARGET, ?protocol, "invalid transport protocol");
return Err(Error::AddressError(AddressError::InvalidProtocol));
}
};
// verify that the next protocol is `QuicV1`
match iter.next() {
Some(Protocol::QuicV1) => {}
_ => return Err(Error::AddressError(AddressError::InvalidProtocol)),
}
let maybe_peer = match iter.next() {
Some(Protocol::P2p(multihash)) => Some(PeerId::from_multihash(multihash)?),
None => None,
protocol => {
tracing::error!(
target: LOG_TARGET,
?protocol,
"invalid protocol, expected `P2p` or `None`"
);
return Err(Error::AddressError(AddressError::InvalidProtocol));
}
};
Ok((socket_address, maybe_peer))
}
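`get_socket_address` walks the multiaddr protocol by protocol: IP, then `Udp`, then `QuicV1`, then an optional `P2p` peer ID. The same walk can be mirrored at the string level with only the standard library; `parse_quic_multiaddr` below is a hypothetical illustration, not the crate's API:

```rust
/// Illustrative string-level version of the protocol walk in
/// `get_socket_address`: returns `(ip, port, optional peer id)`.
fn parse_quic_multiaddr(addr: &str) -> Option<(String, u16, Option<String>)> {
    let mut parts = addr.split('/').skip(1); // skip the empty segment before the leading '/'
    let ip = match parts.next()? {
        "ip4" | "ip6" => parts.next()?.to_string(),
        _ => return None,
    };
    match parts.next()? {
        "udp" => {}
        _ => return None,
    }
    let port: u16 = parts.next()?.parse().ok()?;
    match parts.next()? {
        "quic-v1" => {}
        _ => return None,
    }
    let peer = match parts.next() {
        Some("p2p") => Some(parts.next()?.to_string()),
        None => None,
        _ => return None,
    };
    Some((ip, port, peer))
}

fn main() {
    let parsed = parse_quic_multiaddr("/ip4/127.0.0.1/udp/8888/quic-v1/p2p/12D3KooWExample");
    println!("{parsed:?}");
}
```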
/// Accept QUIC connection.
async fn accept_connection(&mut self, connection: Connection) -> crate::Result<()> {
let connection_id = self.context.next_connection_id();
let address = socket_addr_to_multi_addr(
&connection.remote_addr().expect("remote address to be known"),
);
let Ok(peer) = self.rx.try_recv() else {
tracing::error!(target: LOG_TARGET, "failed to receive client `PeerId` from tls verifier");
return Ok(());
};
tracing::info!(target: LOG_TARGET, ?address, ?peer, "accepted connection from remote peer");
// TODO: https://github.com/paritytech/litep2p/issues/349 verify that the peer can actually be accepted
let mut protocol_set = self.context.protocol_set();
protocol_set.report_connection_established(connection_id, peer, address).await?;
tokio::spawn(async move {
let quic_connection =
QuicConnection::new(peer, protocol_set, connection, connection_id);
if let Err(error) = quic_connection.start().await {
tracing::debug!(target: LOG_TARGET, ?error, "quic connection exited with an error");
}
});
Ok(())
}
/// Handle established connection.
async fn on_connection_established(
&mut self,
peer: PeerId,
connection_id: ConnectionId,
result: Result<Connection, ConnectionError>,
) -> crate::Result<()> {
match result {
Ok(connection) => {
let address = match self.pending_dials.remove(&connection_id) {
Some(address) => address,
None => {
let address = connection
.remote_addr()
.map_err(|_| Error::AddressError(AddressError::AddressNotAvailable))?;
Multiaddr::empty()
.with(Protocol::from(address.ip()))
.with(Protocol::Udp(address.port()))
.with(Protocol::QuicV1)
.with(Protocol::P2p(
Multihash::from_bytes(&peer.to_bytes()).unwrap(),
))
}
};
let mut protocol_set = self.context.protocol_set();
protocol_set.report_connection_established(connection_id, peer, address).await?;
tokio::spawn(async move {
let quic_connection =
QuicConnection::new(peer, protocol_set, connection, connection_id);
if let Err(error) = quic_connection.start().await {
tracing::debug!(target: LOG_TARGET, ?error, "quic connection exited with an error");
}
});
Ok(())
}
Err(error) => match self.pending_dials.remove(&connection_id) {
Some(address) => {
let error = if std::matches!(
error,
ConnectionError::MaxHandshakeDurationExceeded { .. }
) {
Error::Timeout
} else {
Error::TransportError(error.to_string())
};
self.context.report_dial_failure(connection_id, address, error).await;
Ok(())
}
None => {
tracing::debug!(
target: LOG_TARGET,
?error,
"failed to establish connection"
);
Ok(())
}
},
}
}
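The `Err` branch above maps a handshake timeout to `Error::Timeout` and every other connection error to a stringly-typed transport error. A std-only sketch of that mapping, with `ConnError`/`DialError` as illustrative stand-ins for `ConnectionError` and `Error`:

```rust
/// Stand-in for the transport's dial error type.
#[derive(Debug, PartialEq)]
enum DialError {
    Timeout,
    Transport(String),
}

/// Stand-in for `s2n_quic::connection::Error`.
#[derive(Debug)]
enum ConnError {
    MaxHandshakeDurationExceeded,
    Refused,
}

/// Handshake timeouts become `Timeout`; everything else keeps its message.
fn map_error(error: ConnError) -> DialError {
    match error {
        ConnError::MaxHandshakeDurationExceeded => DialError::Timeout,
        other => DialError::Transport(format!("{other:?}")),
    }
}

fn main() {
    println!("{:?}", map_error(ConnError::MaxHandshakeDurationExceeded));
    println!("{:?}", map_error(ConnError::Refused));
}
```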
/// Dial remote peer.
async fn on_dial_peer(
&mut self,
address: Multiaddr,
connection: ConnectionId,
) -> crate::Result<()> {
tracing::debug!(target: LOG_TARGET, ?address, "open connection");
let Ok((socket_address, Some(peer))) = Self::get_socket_address(&address) else {
return Err(Error::AddressError(AddressError::PeerIdMissing));
};
let (certificate, key) = generate(&self.context.keypair).unwrap();
let provider = TlsProvider::new(key, certificate, Some(peer), None);
let client = Client::builder()
.with_tls(provider)
.expect("TLS provider to be enabled successfully")
.with_io(self.client_listen_address)?
.start()?;
let connect = Connect::new(socket_address).with_server_name("localhost");
self.pending_dials.insert(connection, address);
self.pending_connections.push(Box::pin(async move {
(connection, peer, client.connect(connect).await)
}));
Ok(())
}
}
#[async_trait::async_trait]
impl Transport for QuicTransport {
type Config = Config;
/// Create new [`QuicTransport`] object.
async fn new(context: TransportHandle, config: Self::Config) -> crate::Result<Self>
where
Self: Sized,
{
tracing::info!(
target: LOG_TARGET,
listen_address = ?config.listen_address,
"start quic transport",
);
let (listen_address, _) = Self::get_socket_address(&config.listen_address)?;
let (certificate, key) = generate(&context.keypair)?;
let (_tx, rx) = channel(1);
let provider = TlsProvider::new(key, certificate, None, Some(_tx.clone()));
let server = Server::builder()
.with_tls(provider)
.expect("TLS provider to be enabled successfully")
.with_io(listen_address)?
.start()?;
let listen_address = server.local_addr()?;
let client_listen_address = match listen_address.ip() {
std::net::IpAddr::V4(_) => SocketAddr::new(IpAddr::V4(Ipv4Addr::UNSPECIFIED), 0),
std::net::IpAddr::V6(_) => SocketAddr::new(IpAddr::V6(Ipv6Addr::UNSPECIFIED), 0),
};
Ok(Self {
rx,
_tx,
server,
context,
listen_address,
client_listen_address,
pending_dials: HashMap::new(),
pending_connections: FuturesUnordered::new(),
})
}
/// Get assigned listen address.
fn listen_address(&self) -> Multiaddr {
socket_addr_to_multi_addr(&self.listen_address)
}
/// Start [`QuicTransport`] event loop.
async fn start(mut self) -> crate::Result<()> {
loop {
tokio::select! {
connection = self.server.accept() => match connection {
Some(connection) => if let Err(error) = self.accept_connection(connection).await {
tracing::error!(target: LOG_TARGET, ?error, "failed to accept quic connection");
return Err(error);
},
None => {
tracing::error!(target: LOG_TARGET, "failed to accept connection, closing quic transport");
return Ok(())
}
},
connection = self.pending_connections.select_next_some(), if !self.pending_connections.is_empty() => {
let (connection_id, peer, result) = connection;
if let Err(error) = self.on_connection_established(peer, connection_id, result).await {
tracing::debug!(target: LOG_TARGET, ?peer, ?error, "failed to handle established connection");
}
}
command = self.context.next() => match command.ok_or(Error::EssentialTaskClosed)? {
TransportManagerCommand::Dial { address, connection } => {
if let Err(error) = self.on_dial_peer(address.clone(), connection).await {
tracing::debug!(target: LOG_TARGET, ?address, ?connection, "failed to dial peer");
let _ = self.context.report_dial_failure(connection, address, error).await;
}
}
}
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{
codec::ProtocolCodec,
crypto::{ed25519::Keypair, PublicKey},
transport::manager::{
ProtocolContext, SupportedTransport, TransportHandle, TransportManager,
TransportManagerCommand, TransportManagerEvent,
},
types::protocol::ProtocolName,
};
use tokio::sync::mpsc::channel;
#[tokio::test]
async fn connect_and_accept_works() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let keypair1 = Keypair::generate();
let (tx1, _rx1) = channel(64);
let (event_tx1, mut event_rx1) = channel(64);
let (_command_tx1, command_rx1) = channel(64);
let handle1 = TransportHandle {
protocol_names: Vec::new(),
next_substream_id: Default::default(),
next_connection_id: Default::default(),
tx: event_tx1,
rx: command_rx1,
keypair: keypair1.clone(),
protocols: HashMap::from_iter([(
ProtocolName::from("/notif/1"),
ProtocolContext {
tx: tx1,
codec: ProtocolCodec::Identity(32),
fallback_names: Vec::new(),
},
)]),
};
let transport_config1 = config::Config {
listen_address: "/ip4/127.0.0.1/udp/0/quic-v1".parse().unwrap(),
};
let transport1 = QuicTransport::new(handle1, transport_config1).await.unwrap();
let _peer1: PeerId = PeerId::from_public_key(&PublicKey::Ed25519(keypair1.public()));
let listen_address = Transport::listen_address(&transport1).to_string();
let listen_address: Multiaddr =
format!("{}/p2p/{}", listen_address, _peer1.to_string()).parse().unwrap();
tokio::spawn(transport1.start());
let keypair2 = Keypair::generate();
let (tx2, _rx2) = channel(64);
let (event_tx2, mut event_rx2) = channel(64);
let (command_tx2, command_rx2) = channel(64);
let handle2 = TransportHandle {
protocol_names: Vec::new(),
next_substream_id: Default::default(),
next_connection_id: Default::default(),
tx: event_tx2,
rx: command_rx2,
keypair: keypair2.clone(),
protocols: HashMap::from_iter([(
ProtocolName::from("/notif/1"),
ProtocolContext {
tx: tx2,
codec: ProtocolCodec::Identity(32),
fallback_names: Vec::new(),
},
)]),
};
let transport_config2 = config::Config {
listen_address: "/ip4/127.0.0.1/udp/0/quic-v1".parse().unwrap(),
};
let transport2 = QuicTransport::new(handle2, transport_config2).await.unwrap();
tokio::spawn(transport2.start());
command_tx2
.send(TransportManagerCommand::Dial {
address: listen_address,
connection: ConnectionId::new(),
})
.await
.unwrap();
let (res1, res2) = tokio::join!(event_rx1.recv(), event_rx2.recv());
assert!(std::matches!(
res1,
Some(TransportManagerEvent::ConnectionEstablished { .. })
));
assert!(std::matches!(
res2,
Some(TransportManagerEvent::ConnectionEstablished { .. })
));
}
#[tokio::test]
async fn dial_peer_id_missing() {
let (mut manager, _handle) = TransportManager::new(Keypair::generate());
let handle = manager.register_transport(SupportedTransport::Quic);
let mut transport = QuicTransport::new(
handle,
Config {
listen_address: "/ip4/127.0.0.1/udp/0/quic-v1".parse().unwrap(),
},
)
.await
.unwrap();
let address = Multiaddr::empty()
.with(Protocol::Ip4(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Udp(8888));
match transport.on_dial_peer(address, ConnectionId::from(0usize)).await {
Err(Error::AddressError(AddressError::PeerIdMissing)) => {}
_ => panic!("invalid result for `on_dial_peer()`"),
}
}
#[tokio::test]
async fn dial_failure() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut manager, _handle) = TransportManager::new(Keypair::generate());
let handle = manager.register_transport(SupportedTransport::Quic);
let mut transport = QuicTransport::new(
handle,
Config {
listen_address: "/ip4/127.0.0.1/udp/0/quic-v1".parse().unwrap(),
},
)
.await
.unwrap();
let peer = PeerId::random();
let address = Multiaddr::empty()
.with(Protocol::from(std::net::Ipv4Addr::new(255, 254, 253, 252)))
.with(Protocol::Udp(8888))
.with(Protocol::QuicV1)
.with(Protocol::P2p(
Multihash::from_bytes(&peer.to_bytes()).unwrap(),
));
manager.dial_address(address.clone()).await.unwrap();
assert!(transport.pending_dials.is_empty());
match transport.on_dial_peer(address, ConnectionId::from(0usize)).await {
Ok(()) => {}
_ => panic!("invalid result for `on_dial_peer()`"),
}
assert!(!transport.pending_dials.is_empty());
tokio::spawn(transport.start());
std::matches!(
manager.next().await,
Some(TransportManagerEvent::DialFailure { .. })
);
}
#[tokio::test]
async fn pending_dial_is_cleaned() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let keypair = Keypair::generate();
let (mut manager, _handle) = TransportManager::new(keypair.clone());
let handle = manager.register_transport(SupportedTransport::Quic);
let mut transport = QuicTransport::new(
handle,
Config {
listen_address: "/ip4/127.0.0.1/udp/0/quic-v1".parse().unwrap(),
},
)
.await
.unwrap();
let peer = PeerId::random();
let address = Multiaddr::empty()
.with(Protocol::from(std::net::Ipv4Addr::new(255, 254, 253, 252)))
.with(Protocol::Udp(8888))
.with(Protocol::QuicV1)
.with(Protocol::P2p(
Multihash::from_bytes(&peer.to_bytes()).unwrap(),
));
assert!(transport.pending_dials.is_empty());
match transport.on_dial_peer(address.clone(), ConnectionId::from(0usize)).await {
Ok(()) => {}
_ => panic!("invalid result for `on_dial_peer()`"),
}
assert!(!transport.pending_dials.is_empty());
let Ok((socket_address, Some(peer))) = QuicTransport::get_socket_address(&address) else {
panic!("invalid address");
};
let (certificate, key) = generate(&keypair).unwrap();
let provider = TlsProvider::new(key, certificate, Some(peer), None);
let client = Client::builder()
.with_tls(provider)
.expect("TLS provider to be enabled successfully")
.with_io("0.0.0.0:0")
.unwrap()
.start()
.unwrap();
let connect = Connect::new(socket_address).with_server_name("localhost");
let _ = transport
.on_connection_established(
peer,
ConnectionId::from(0usize),
client.connect(connect).await,
)
.await;
assert!(transport.pending_dials.is_empty());
}
}
// File: src/transport/websocket/config.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! WebSocket transport configuration.
use crate::{
crypto::noise::{MAX_READ_AHEAD_FACTOR, MAX_WRITE_BUFFER_SIZE},
transport::{CONNECTION_OPEN_TIMEOUT, SUBSTREAM_OPEN_TIMEOUT},
};
/// WebSocket transport configuration.
#[derive(Debug)]
pub struct Config {
/// Listen addresses for the transport.
///
/// Default listen addresses are ["/ip4/0.0.0.0/tcp/0/ws", "/ip6/::/tcp/0/ws"].
pub listen_addresses: Vec<multiaddr::Multiaddr>,
/// Whether to set `SO_REUSEPORT` and bind a socket to the listen address port for outbound
/// connections.
///
/// Note that `SO_REUSEADDR` is always set on listening sockets.
///
/// Defaults to `true`.
pub reuse_port: bool,
/// Enable `TCP_NODELAY`.
///
/// Defaults to `false`.
pub nodelay: bool,
/// Yamux configuration.
pub yamux_config: crate::yamux::Config,
/// Noise read-ahead frame count.
///
/// Specifies how many Noise frames are read per call to the underlying socket.
///
/// By default this is configured to `5`, so each call to the underlying socket can read
/// up to `5` Noise frames. Fewer frames may be read if there isn't enough data in the
/// socket. Each Noise frame is `65 KB`, so the default setting allocates `65 KB * 5 = 325 KB`
/// per connection.
pub noise_read_ahead_frame_count: usize,
/// Noise write buffer size.
///
/// Specifies how many Noise frames the transport attempts to coalesce into a single system call.
/// By default the value is set to `2` which means that the `NoiseSocket` will allocate
/// `130 KB` for each outgoing connection.
///
/// The write buffer size is separate from the read-ahead frame count so by default
/// the Noise code will allocate `2 * 65 KB + 5 * 65 KB = 455 KB` per connection.
pub noise_write_buffer_size: usize,
/// Connection open timeout.
///
/// How long should litep2p wait for a connection to be opened before the host
/// is deemed unreachable.
pub connection_open_timeout: std::time::Duration,
/// Substream open timeout.
///
/// How long should litep2p wait for a substream to be opened before considering
/// the substream rejected.
pub substream_open_timeout: std::time::Duration,
}
impl Default for Config {
fn default() -> Self {
Self {
listen_addresses: vec![
"/ip4/0.0.0.0/tcp/0/ws".parse().expect("valid address"),
"/ip6/::/tcp/0/ws".parse().expect("valid address"),
],
reuse_port: true,
nodelay: false,
yamux_config: Default::default(),
noise_read_ahead_frame_count: MAX_READ_AHEAD_FACTOR,
noise_write_buffer_size: MAX_WRITE_BUFFER_SIZE,
connection_open_timeout: CONNECTION_OPEN_TIMEOUT,
substream_open_timeout: SUBSTREAM_OPEN_TIMEOUT,
}
}
}
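The field docs above state the per-connection allocation arithmetic: read-ahead frames plus write-buffer frames, 65 KB each. A quick check of those numbers (all names illustrative):

```rust
// Verify the per-connection memory figures quoted in the `Config` docs.
fn main() {
    const NOISE_FRAME_KB: usize = 65;
    let read_ahead_frames = 5; // default `noise_read_ahead_frame_count`
    let write_buffer_frames = 2; // default `noise_write_buffer_size`
    let read_kb = NOISE_FRAME_KB * read_ahead_frames;
    let write_kb = NOISE_FRAME_KB * write_buffer_frames;
    println!("{read_kb} KB + {write_kb} KB = {} KB per connection", read_kb + write_kb);
}
```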
// File: src/transport/websocket/stream.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Stream implementation for `tokio_tungstenite::WebSocketStream` that implements
//! `AsyncRead + AsyncWrite`
use bytes::{Buf, Bytes};
use futures::{SinkExt, StreamExt};
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_tungstenite::{tungstenite::Message, WebSocketStream};
use std::{
pin::Pin,
task::{Context, Poll},
};
const LOG_TARGET: &str = "litep2p::transport::websocket::stream";
/// Buffered stream which implements `AsyncRead + AsyncWrite`
#[derive(Debug)]
pub(super) struct BufferedStream<S: AsyncRead + AsyncWrite + Unpin> {
/// Read buffer.
///
/// The buffer is taken directly from the WebSocket stream.
read_buffer: Bytes,
/// Underlying WebSocket stream.
stream: WebSocketStream<S>,
}
impl<S: AsyncRead + AsyncWrite + Unpin> BufferedStream<S> {
/// Create new [`BufferedStream`].
pub(super) fn new(stream: WebSocketStream<S>) -> Self {
Self {
read_buffer: Bytes::new(),
stream,
}
}
}
impl<S: AsyncRead + AsyncWrite + Unpin> futures::AsyncWrite for BufferedStream<S> {
fn poll_write(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<std::io::Result<usize>> {
match futures::ready!(self.stream.poll_ready_unpin(cx)) {
Ok(()) => {
let message = Message::Binary(Bytes::copy_from_slice(buf));
if let Err(err) = self.stream.start_send_unpin(message) {
tracing::debug!(target: LOG_TARGET, "Error during start send: {:?}", err);
return Poll::Ready(Err(std::io::ErrorKind::UnexpectedEof.into()));
}
Poll::Ready(Ok(buf.len()))
}
Err(err) => {
tracing::debug!(target: LOG_TARGET, "Error during poll ready: {:?}", err);
Poll::Ready(Err(std::io::ErrorKind::UnexpectedEof.into()))
}
}
}
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<std::io::Result<()>> {
self.stream.poll_flush_unpin(cx).map_err(|err| {
tracing::debug!(target: LOG_TARGET, "Error during poll flush: {:?}", err);
std::io::ErrorKind::UnexpectedEof.into()
})
}
fn poll_close(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<std::io::Result<()>> {
self.stream.poll_close_unpin(cx).map_err(|err| {
tracing::debug!(target: LOG_TARGET, "Error during poll close: {:?}", err);
std::io::ErrorKind::PermissionDenied.into()
})
}
}
impl<S: AsyncRead + AsyncWrite + Unpin> futures::AsyncRead for BufferedStream<S> {
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut [u8],
) -> Poll<std::io::Result<usize>> {
loop {
if self.read_buffer.is_empty() {
let next_chunk = match self.stream.poll_next_unpin(cx) {
Poll::Ready(Some(Ok(chunk))) => match chunk {
Message::Binary(chunk) => chunk,
_event => return Poll::Ready(Err(std::io::ErrorKind::Unsupported.into())),
},
Poll::Ready(Some(Err(_error))) =>
return Poll::Ready(Err(std::io::ErrorKind::UnexpectedEof.into())),
Poll::Ready(None) => return Poll::Ready(Ok(0)),
Poll::Pending => return Poll::Pending,
};
self.read_buffer = next_chunk;
continue;
}
let len = std::cmp::min(self.read_buffer.len(), buf.len());
buf[..len].copy_from_slice(&self.read_buffer[..len]);
self.read_buffer.advance(len);
return Poll::Ready(Ok(len));
}
}
}
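The draining step in `poll_read` copies at most `min(buffered, requested)` bytes and keeps the remainder for the next call. That logic can be shown synchronously; `ReadBuffer` below is an illustrative stand-in for the `Bytes` read buffer, not the real type:

```rust
/// Synchronous sketch of the read-buffer draining in `poll_read`.
struct ReadBuffer {
    data: Vec<u8>,
    offset: usize,
}

impl ReadBuffer {
    /// Copy as much buffered data as fits into `buf`, advancing the offset
    /// so the remainder is returned by the next call.
    fn read(&mut self, buf: &mut [u8]) -> usize {
        let remaining = &self.data[self.offset..];
        let len = remaining.len().min(buf.len());
        buf[..len].copy_from_slice(&remaining[..len]);
        self.offset += len;
        len
    }
}

fn main() {
    let mut rb = ReadBuffer { data: b"hello world".to_vec(), offset: 0 };
    let mut out = [0u8; 5];
    // Two 5-byte reads drain "hello" and then " worl".
    let n1 = rb.read(&mut out);
    println!("{}:{:?}", n1, std::str::from_utf8(&out[..n1]).unwrap());
    let n2 = rb.read(&mut out);
    println!("{}:{:?}", n2, std::str::from_utf8(&out[..n2]).unwrap());
}
```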
#[cfg(test)]
mod tests {
use super::*;
use futures::{AsyncRead, AsyncReadExt, AsyncWriteExt};
use tokio::io::DuplexStream;
use tokio_tungstenite::{tungstenite::protocol::Role, WebSocketStream};
async fn create_test_stream() -> (BufferedStream<DuplexStream>, BufferedStream<DuplexStream>) {
let (client, server) = tokio::io::duplex(1024);
(
BufferedStream::new(WebSocketStream::from_raw_socket(client, Role::Client, None).await),
BufferedStream::new(WebSocketStream::from_raw_socket(server, Role::Server, None).await),
)
}
#[tokio::test]
async fn test_write_to_buffer() {
let (mut stream, mut _server) = create_test_stream().await;
let data = b"hello";
let bytes_written = stream.write(data).await.unwrap();
assert_eq!(bytes_written, data.len());
}
#[tokio::test]
async fn test_flush_empty_buffer() {
let (mut stream, mut _server) = create_test_stream().await;
assert!(stream.flush().await.is_ok());
}
#[tokio::test]
async fn test_write_and_flush() {
let (mut stream, mut _server) = create_test_stream().await;
let data = b"hello world";
stream.write_all(data).await.unwrap();
assert!(stream.flush().await.is_ok());
}
#[tokio::test]
async fn test_close_stream() {
let (mut stream, mut _server) = create_test_stream().await;
assert!(stream.close().await.is_ok());
}
#[tokio::test]
async fn test_ping_pong_stream() {
let (mut stream, mut server) = create_test_stream().await;
stream.write(b"hello").await.unwrap();
assert!(stream.flush().await.is_ok());
let mut message = [0u8; 5];
server.read(&mut message).await.unwrap();
assert_eq!(&message, b"hello");
server.write(b"world").await.unwrap();
assert!(server.flush().await.is_ok());
stream.read(&mut message).await.unwrap();
assert_eq!(&message, b"world");
assert!(stream.close().await.is_ok());
drop(stream);
assert!(server.write(b"world").await.is_ok());
match server.flush().await {
Err(error) => assert_eq!(error.kind(), std::io::ErrorKind::UnexpectedEof),
state => panic!("Unexpected state {state:?}"),
};
}
#[tokio::test]
async fn test_read_poll_pending() {
let (mut stream, mut _server) = create_test_stream().await;
let mut buffer = [0u8; 10];
let mut cx = std::task::Context::from_waker(futures::task::noop_waker_ref());
let pin_stream = Pin::new(&mut stream);
assert!(matches!(
pin_stream.poll_read(&mut cx, &mut buffer),
Poll::Pending
));
}
#[tokio::test]
async fn test_read_from_internal_buffers() {
let (mut stream, server) = create_test_stream().await;
drop(server);
stream.read_buffer = Bytes::from_static(b"hello world");
let mut buffer = [0u8; 32];
let bytes_read = stream.read(&mut buffer).await.unwrap();
assert_eq!(bytes_read, 11);
assert_eq!(&buffer[..bytes_read], b"hello world");
}
}
// File: src/transport/websocket/connection.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
config::Role,
crypto::{
ed25519::Keypair,
noise::{self, NoiseSocket},
},
error::{Error, NegotiationError, SubstreamError},
multistream_select::{dialer_select_proto, listener_select_proto, Negotiated, Version},
protocol::{Direction, Permit, ProtocolCommand, ProtocolSet},
substream,
transport::{
websocket::{stream::BufferedStream, substream::Substream},
Endpoint,
},
types::{protocol::ProtocolName, ConnectionId, SubstreamId},
BandwidthSink, PeerId,
};
use futures::{future::BoxFuture, stream::FuturesUnordered, AsyncRead, AsyncWrite, StreamExt};
use multiaddr::{multihash::Multihash, Multiaddr, Protocol};
use tokio::net::TcpStream;
use tokio_tungstenite::{MaybeTlsStream, WebSocketStream};
use tokio_util::compat::FuturesAsyncReadCompatExt;
use url::Url;
use std::time::Duration;
mod schema {
pub(super) mod noise {
include!(concat!(env!("OUT_DIR"), "/noise.rs"));
}
}
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::websocket::connection";
/// Negotiated substream and its context.
pub struct NegotiatedSubstream {
/// Substream direction.
direction: Direction,
/// Substream ID.
substream_id: SubstreamId,
/// Protocol name.
protocol: ProtocolName,
/// Yamux substream.
io: crate::yamux::Stream,
/// Permit.
permit: Permit,
}
/// WebSocket connection error.
#[derive(Debug)]
enum ConnectionError {
/// Timeout
Timeout {
/// Protocol.
protocol: Option<ProtocolName>,
/// Substream ID.
substream_id: Option<SubstreamId>,
},
/// Failed to negotiate connection/substream.
FailedToNegotiate {
/// Protocol.
protocol: Option<ProtocolName>,
/// Substream ID.
substream_id: Option<SubstreamId>,
/// Error.
error: SubstreamError,
},
}
/// Negotiated connection.
pub(super) struct NegotiatedConnection {
/// Remote peer ID.
peer: PeerId,
/// Endpoint.
endpoint: Endpoint,
/// Yamux connection.
connection:
crate::yamux::ControlledConnection<NoiseSocket<BufferedStream<MaybeTlsStream<TcpStream>>>>,
/// Yamux control.
control: crate::yamux::Control,
}
impl std::fmt::Debug for NegotiatedConnection {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("NegotiatedConnection")
.field("peer", &self.peer)
.field("endpoint", &self.endpoint)
.finish()
}
}
impl NegotiatedConnection {
/// Get `ConnectionId` of the negotiated connection.
pub fn connection_id(&self) -> ConnectionId {
self.endpoint.connection_id()
}
/// Get `PeerId` of the negotiated connection.
pub fn peer(&self) -> PeerId {
self.peer
}
/// Get `Endpoint` of the negotiated connection.
pub fn endpoint(&self) -> Endpoint {
self.endpoint.clone()
}
}
/// WebSocket connection.
pub(crate) struct WebSocketConnection {
/// Protocol context.
protocol_set: ProtocolSet,
/// Yamux connection.
connection:
crate::yamux::ControlledConnection<NoiseSocket<BufferedStream<MaybeTlsStream<TcpStream>>>>,
/// Yamux control.
control: crate::yamux::Control,
/// Remote peer ID.
peer: PeerId,
/// Endpoint.
endpoint: Endpoint,
/// Substream open timeout.
substream_open_timeout: Duration,
/// Connection ID.
connection_id: ConnectionId,
/// Bandwidth sink.
bandwidth_sink: BandwidthSink,
/// Pending substreams.
pending_substreams:
FuturesUnordered<BoxFuture<'static, Result<NegotiatedSubstream, ConnectionError>>>,
}
impl WebSocketConnection {
/// Create new [`WebSocketConnection`].
pub(super) fn new(
connection: NegotiatedConnection,
protocol_set: ProtocolSet,
bandwidth_sink: BandwidthSink,
substream_open_timeout: Duration,
) -> Self {
let NegotiatedConnection {
peer,
endpoint,
connection,
control,
} = connection;
Self {
connection_id: endpoint.connection_id(),
protocol_set,
connection,
control,
peer,
endpoint,
bandwidth_sink,
substream_open_timeout,
pending_substreams: FuturesUnordered::new(),
}
}
/// Negotiate protocol.
async fn negotiate_protocol<S: AsyncRead + AsyncWrite + Unpin>(
stream: S,
role: &Role,
protocols: Vec<&str>,
substream_open_timeout: Duration,
) -> Result<(Negotiated<S>, ProtocolName), NegotiationError> {
tracing::trace!(target: LOG_TARGET, ?protocols, "negotiating protocols");
match tokio::time::timeout(substream_open_timeout, async move {
match role {
Role::Dialer => dialer_select_proto(stream, protocols, Version::V1).await,
Role::Listener => listener_select_proto(stream, protocols).await,
}
})
.await
{
Err(_) => Err(NegotiationError::Timeout),
Ok(Err(error)) => Err(NegotiationError::MultistreamSelectError(error)),
Ok(Ok((protocol, socket))) => {
tracing::trace!(target: LOG_TARGET, ?protocol, "protocol negotiated");
Ok((socket, ProtocolName::from(protocol.to_string())))
}
}
}
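Multistream-select is a full proposal/acceptance protocol; stripped of the wire exchange, the listener's side reduces to picking the first dialer-proposed protocol it supports locally. A minimal sketch of that selection policy only (function name is illustrative):

```rust
/// Pick the first proposed protocol that is also supported locally,
/// mirroring the listener-side choice `listener_select_proto` makes.
fn select_protocol<'a>(local: &[&'a str], proposed: &[&str]) -> Option<&'a str> {
    proposed.iter().find_map(|p| local.iter().copied().find(|l| l == p))
}

fn main() {
    let local = ["/noise", "/notif/1"];
    // "/notif/2" is unknown locally, so the dialer's second choice wins.
    let chosen = select_protocol(&local, &["/notif/2", "/notif/1"]);
    println!("{chosen:?}");
}
```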
/// Open WebSocket connection.
pub(super) async fn open_connection(
connection_id: ConnectionId,
keypair: Keypair,
stream: WebSocketStream<MaybeTlsStream<TcpStream>>,
address: Multiaddr,
dialed_peer: PeerId,
ws_address: Url,
yamux_config: crate::yamux::Config,
max_read_ahead_factor: usize,
max_write_buffer_size: usize,
substream_open_timeout: Duration,
) -> Result<NegotiatedConnection, NegotiationError> {
tracing::trace!(
target: LOG_TARGET,
?address,
?ws_address,
?connection_id,
"open connection to remote peer",
);
Self::negotiate_connection(
stream,
Some(dialed_peer),
Role::Dialer,
address,
connection_id,
keypair,
yamux_config,
max_read_ahead_factor,
max_write_buffer_size,
substream_open_timeout,
)
.await
}
/// Accept WebSocket connection.
pub(super) async fn accept_connection(
stream: TcpStream,
connection_id: ConnectionId,
keypair: Keypair,
address: Multiaddr,
yamux_config: crate::yamux::Config,
max_read_ahead_factor: usize,
max_write_buffer_size: usize,
substream_open_timeout: Duration,
) -> Result<NegotiatedConnection, NegotiationError> {
let stream = MaybeTlsStream::Plain(stream);
Self::negotiate_connection(
tokio_tungstenite::accept_async(stream)
.await
.map_err(NegotiationError::WebSocket)?,
None,
Role::Listener,
address,
connection_id,
keypair,
yamux_config,
max_read_ahead_factor,
max_write_buffer_size,
substream_open_timeout,
)
.await
}
/// Negotiate WebSocket connection.
pub(super) async fn negotiate_connection(
stream: WebSocketStream<MaybeTlsStream<TcpStream>>,
dialed_peer: Option<PeerId>,
role: Role,
address: Multiaddr,
connection_id: ConnectionId,
keypair: Keypair,
yamux_config: crate::yamux::Config,
max_read_ahead_factor: usize,
max_write_buffer_size: usize,
substream_open_timeout: Duration,
) -> Result<NegotiatedConnection, NegotiationError> {
tracing::trace!(
target: LOG_TARGET,
?connection_id,
?address,
?role,
?dialed_peer,
"negotiate connection"
);
let stream = BufferedStream::new(stream);
// negotiate `noise`
let (stream, _) =
Self::negotiate_protocol(stream, &role, vec!["/noise"], substream_open_timeout).await?;
tracing::trace!(
target: LOG_TARGET,
"`multistream-select` and `noise` negotiated"
);
// perform noise handshake
let (stream, peer) = noise::handshake(
stream.inner(),
&keypair,
role,
max_read_ahead_factor,
max_write_buffer_size,
substream_open_timeout,
noise::HandshakeTransport::WebSocket,
)
.await?;
if let Some(dialed_peer) = dialed_peer {
if peer != dialed_peer {
return Err(NegotiationError::PeerIdMismatch(dialed_peer, peer));
}
}
let stream: NoiseSocket<BufferedStream<_>> = stream;
tracing::trace!(target: LOG_TARGET, "noise handshake done");
// negotiate `yamux`
let (stream, _) =
Self::negotiate_protocol(stream, &role, vec!["/yamux/1.0.0"], substream_open_timeout)
.await?;
tracing::trace!(target: LOG_TARGET, "`yamux` negotiated");
let connection = crate::yamux::Connection::new(stream.inner(), yamux_config, role.into());
let (control, connection) = crate::yamux::Control::new(connection);
let address = match role {
Role::Dialer => address,
Role::Listener => address.with(Protocol::P2p(Multihash::from(peer))),
};
Ok(NegotiatedConnection {
peer,
control,
connection,
endpoint: match role {
Role::Dialer => Endpoint::dialer(address, connection_id),
Role::Listener => Endpoint::listener(address, connection_id),
},
})
}
/// Accept substream.
pub async fn accept_substream(
stream: crate::yamux::Stream,
permit: Permit,
substream_id: SubstreamId,
protocols: Vec<ProtocolName>,
substream_open_timeout: Duration,
) -> Result<NegotiatedSubstream, NegotiationError> {
tracing::trace!(
target: LOG_TARGET,
?substream_id,
"accept inbound substream"
);
let protocols = protocols.iter().map(|protocol| &**protocol).collect::<Vec<&str>>();
let (io, protocol) =
Self::negotiate_protocol(stream, &Role::Listener, protocols, substream_open_timeout)
.await?;
tracing::trace!(
target: LOG_TARGET,
?substream_id,
"substream accepted and negotiated"
);
Ok(NegotiatedSubstream {
io: io.inner(),
direction: Direction::Inbound,
substream_id,
protocol,
permit,
})
}
/// Open substream for `protocol`.
pub async fn open_substream(
mut control: crate::yamux::Control,
permit: Permit,
substream_id: SubstreamId,
protocol: ProtocolName,
fallback_names: Vec<ProtocolName>,
substream_open_timeout: Duration,
) -> Result<NegotiatedSubstream, SubstreamError> {
tracing::debug!(target: LOG_TARGET, ?protocol, ?substream_id, "open substream");
let stream = match control.open_stream().await {
Ok(stream) => {
tracing::trace!(target: LOG_TARGET, ?substream_id, "substream opened");
stream
}
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?substream_id,
?error,
"failed to open substream"
);
return Err(SubstreamError::YamuxError(
error,
Direction::Outbound(substream_id),
));
}
};
// TODO: https://github.com/paritytech/litep2p/issues/346 protocols don't change after
// they've been initialized so this should be done only once
let protocols = std::iter::once(&*protocol)
.chain(fallback_names.iter().map(|protocol| &**protocol))
.collect();
let (io, protocol) =
Self::negotiate_protocol(stream, &Role::Dialer, protocols, substream_open_timeout)
.await?;
Ok(NegotiatedSubstream {
io: io.inner(),
substream_id,
direction: Direction::Outbound(substream_id),
protocol,
permit,
})
}
/// Start connection event loop.
pub(crate) async fn start(mut self) -> crate::Result<()> {
self.protocol_set
.report_connection_established(self.peer, self.endpoint)
.await?;
loop {
tokio::select! {
substream = self.connection.next() => match substream {
Some(Ok(stream)) => {
let substream = self.protocol_set.next_substream_id();
let protocols = self.protocol_set.protocols();
let permit = self.protocol_set.try_get_permit().ok_or(Error::ConnectionClosed)?;
let substream_open_timeout = self.substream_open_timeout;
self.pending_substreams.push(Box::pin(async move {
match tokio::time::timeout(
substream_open_timeout,
Self::accept_substream(stream, permit, substream, protocols, substream_open_timeout),
)
.await
{
Ok(Ok(substream)) => Ok(substream),
Ok(Err(error)) => Err(ConnectionError::FailedToNegotiate {
protocol: None,
substream_id: None,
error: SubstreamError::NegotiationError(error),
}),
Err(_) => Err(ConnectionError::Timeout {
protocol: None,
substream_id: None
}),
}
}));
},
Some(Err(error)) => {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer,
?error,
"connection closed with error"
);
self.protocol_set.report_connection_closed(self.peer, self.connection_id).await?;
return Ok(())
}
None => {
tracing::debug!(target: LOG_TARGET, peer = ?self.peer, "connection closed");
self.protocol_set.report_connection_closed(self.peer, self.connection_id).await?;
return Ok(())
}
},
substream = self.pending_substreams.select_next_some(), if !self.pending_substreams.is_empty() => {
match substream {
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?error,
"failed to accept/open substream",
);
let (protocol, substream_id, error) = match error {
ConnectionError::Timeout { protocol, substream_id } => {
(protocol, substream_id, SubstreamError::NegotiationError(NegotiationError::Timeout))
}
ConnectionError::FailedToNegotiate { protocol, substream_id, error } => {
(protocol, substream_id, error)
}
};
if let (Some(protocol), Some(substream_id)) = (protocol, substream_id) {
self.protocol_set
.report_substream_open_failure(protocol, substream_id, error)
.await?;
}
}
Ok(substream) => {
let protocol = substream.protocol.clone();
let direction = substream.direction;
let substream_id = substream.substream_id;
let socket = FuturesAsyncReadCompatExt::compat(substream.io);
let bandwidth_sink = self.bandwidth_sink.clone();
let substream = substream::Substream::new_websocket(
self.peer,
substream_id,
Substream::new(socket, bandwidth_sink, substream.permit),
self.protocol_set.protocol_codec(&protocol)
);
self.protocol_set
.report_substream_open(self.peer, protocol, direction, substream)
.await?;
}
}
}
protocol = self.protocol_set.next() => match protocol {
Some(ProtocolCommand::OpenSubstream { protocol, fallback_names, substream_id, permit, .. }) => {
let control = self.control.clone();
let substream_open_timeout = self.substream_open_timeout;
tracing::trace!(
target: LOG_TARGET,
?protocol,
?substream_id,
"open substream"
);
self.pending_substreams.push(Box::pin(async move {
match tokio::time::timeout(
substream_open_timeout,
Self::open_substream(
control,
permit,
substream_id,
protocol.clone(),
fallback_names,
substream_open_timeout
),
)
.await
{
Ok(Ok(substream)) => Ok(substream),
Ok(Err(error)) => Err(ConnectionError::FailedToNegotiate {
protocol: Some(protocol),
substream_id: Some(substream_id),
error,
}),
Err(_) => Err(ConnectionError::Timeout {
protocol: Some(protocol),
substream_id: Some(substream_id)
}),
}
}));
}
Some(ProtocolCommand::ForceClose) => {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer,
connection_id = ?self.connection_id,
"force closing connection",
);
return self.protocol_set.report_connection_closed(self.peer, self.connection_id).await
}
None => {
tracing::debug!(target: LOG_TARGET, "protocols have exited, shutting down connection");
return self.protocol_set.report_connection_closed(self.peer, self.connection_id).await
}
}
}
}
}
}
#[cfg(test)]
mod tests {
use crate::transport::websocket::WebSocketTransport;
use super::*;
use futures::AsyncWriteExt;
use hickory_resolver::TokioResolver;
use std::sync::Arc;
use tokio::net::TcpListener;
#[tokio::test]
async fn multistream_select_not_supported_dialer() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let listener = TcpListener::bind("[::1]:0").await.unwrap();
let address = listener.local_addr().unwrap();
tokio::spawn(async move {
let (stream, _) = listener.accept().await.unwrap();
// Negotiate websocket.
let stream = tokio_tungstenite::accept_async(stream).await.unwrap();
let mut stream = BufferedStream::new(stream);
stream.write_all(&vec![0x12u8; 256]).await.unwrap();
});
let peer_id = PeerId::random();
let address = Multiaddr::empty()
.with(Protocol::from(address.ip()))
.with(Protocol::Tcp(address.port()))
.with(Protocol::Ws(std::borrow::Cow::Borrowed("/")))
.with(Protocol::P2p(peer_id.into()));
let (url, peer) = WebSocketTransport::multiaddr_into_url(address.clone()).unwrap();
let (_, stream) = WebSocketTransport::dial_peer(
address.clone(),
Default::default(),
Duration::from_secs(10),
false,
Arc::new(TokioResolver::builder_tokio().unwrap().build()),
)
.await
.unwrap();
match WebSocketConnection::open_connection(
ConnectionId::from(0usize),
Keypair::generate(),
stream,
address.clone(),
peer,
url,
Default::default(),
5,
2,
Duration::from_secs(10),
)
.await
{
Ok(_) => panic!("connection was supposed to fail"),
Err(NegotiationError::MultistreamSelectError(
crate::multistream_select::NegotiationError::ProtocolError(_),
)) => {}
Err(error) => panic!("invalid error: {error:?}"),
}
}
#[tokio::test]
async fn multistream_select_not_supported_listener() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let listener = TcpListener::bind("[::1]:0").await.unwrap();
let address = listener.local_addr().unwrap();
let (Ok(dialer), Ok((stream, dialer_address))) =
tokio::join!(TcpStream::connect(address), listener.accept(),)
else {
panic!("failed to establish connection");
};
let peer_id = PeerId::random();
let dialer_address = Multiaddr::empty()
.with(Protocol::from(dialer_address.ip()))
.with(Protocol::Tcp(dialer_address.port()))
.with(Protocol::Ws(std::borrow::Cow::Borrowed("/")))
.with(Protocol::P2p(peer_id.into()));
let (url, _peer) = WebSocketTransport::multiaddr_into_url(dialer_address.clone()).unwrap();
tokio::spawn(async move {
// Negotiate websocket.
let stream = tokio_tungstenite::client_async_tls(url, dialer).await.unwrap().0;
let mut dialer = BufferedStream::new(stream);
let _ = dialer.write_all(&vec![0x12u8; 256]).await;
});
match WebSocketConnection::accept_connection(
stream,
ConnectionId::from(0usize),
Keypair::generate(),
dialer_address,
Default::default(),
5,
2,
Duration::from_secs(10),
)
.await
{
Ok(_) => panic!("connection was supposed to fail"),
Err(NegotiationError::MultistreamSelectError(
crate::multistream_select::NegotiationError::ProtocolError(_),
)) => {}
Err(error) => panic!("invalid error: {error:?}"),
}
}
#[tokio::test]
async fn noise_not_supported_dialer() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let listener = TcpListener::bind("[::1]:0").await.unwrap();
let address = listener.local_addr().unwrap();
tokio::spawn(async move {
let (stream, _) = listener.accept().await.unwrap();
let stream = tokio_tungstenite::accept_async(stream).await.unwrap();
let stream = BufferedStream::new(stream);
// attempt to negotiate yamux, skipping noise entirely
assert!(WebSocketConnection::negotiate_protocol(
stream,
&Role::Listener,
vec!["/yamux/1.0.0"],
std::time::Duration::from_secs(10),
)
.await
.is_err());
});
let peer_id = PeerId::random();
let address = Multiaddr::empty()
.with(Protocol::from(address.ip()))
.with(Protocol::Tcp(address.port()))
.with(Protocol::Ws(std::borrow::Cow::Borrowed("/")))
.with(Protocol::P2p(peer_id.into()));
let (url, peer) = WebSocketTransport::multiaddr_into_url(address.clone()).unwrap();
let (_, stream) = WebSocketTransport::dial_peer(
address.clone(),
Default::default(),
Duration::from_secs(10),
false,
Arc::new(TokioResolver::builder_tokio().unwrap().build()),
)
.await
.unwrap();
match WebSocketConnection::open_connection(
ConnectionId::from(0usize),
Keypair::generate(),
stream,
address.clone(),
peer,
url,
Default::default(),
5,
2,
Duration::from_secs(10),
)
.await
{
Ok(_) => panic!("connection was supposed to fail"),
Err(NegotiationError::MultistreamSelectError(
crate::multistream_select::NegotiationError::Failed,
)) => {}
Err(error) => panic!("invalid error: {error:?}"),
}
}
#[tokio::test]
async fn noise_not_supported_listener() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let listener = TcpListener::bind("[::1]:0").await.unwrap();
let address = listener.local_addr().unwrap();
let (Ok(dialer), Ok((stream, dialer_address))) =
tokio::join!(TcpStream::connect(address), listener.accept(),)
else {
panic!("failed to establish connection");
};
let peer_id = PeerId::random();
let dialer_address = Multiaddr::empty()
.with(Protocol::from(dialer_address.ip()))
.with(Protocol::Tcp(dialer_address.port()))
.with(Protocol::Ws(std::borrow::Cow::Borrowed("/")))
.with(Protocol::P2p(peer_id.into()));
let (url, _peer) = WebSocketTransport::multiaddr_into_url(dialer_address.clone()).unwrap();
tokio::spawn(async move {
// Negotiate websocket.
let stream = tokio_tungstenite::client_async_tls(url, dialer).await.unwrap().0;
let dialer = BufferedStream::new(stream);
// attempt to negotiate yamux, skipping noise entirely
assert!(WebSocketConnection::negotiate_protocol(
dialer,
&Role::Dialer,
vec!["/yamux/1.0.0"],
std::time::Duration::from_secs(10),
)
.await
.is_err());
});
match WebSocketConnection::accept_connection(
stream,
ConnectionId::from(0usize),
Keypair::generate(),
dialer_address,
Default::default(),
5,
2,
Duration::from_secs(10),
)
.await
{
Ok(_) => panic!("connection was supposed to fail"),
Err(NegotiationError::MultistreamSelectError(
crate::multistream_select::NegotiationError::Failed,
)) => {}
Err(error) => panic!("invalid error: {error:?}"),
}
}
#[tokio::test]
async fn noise_timeout_listener() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let listener = TcpListener::bind("[::1]:0").await.unwrap();
let address = listener.local_addr().unwrap();
let (Ok(dialer), Ok((stream, dialer_address))) =
tokio::join!(TcpStream::connect(address), listener.accept(),)
else {
panic!("failed to establish connection");
};
let keypair = Keypair::generate();
let peer_id = PeerId::from_public_key(&keypair.public().into());
let dialer_address = Multiaddr::empty()
.with(Protocol::from(dialer_address.ip()))
.with(Protocol::Tcp(dialer_address.port()))
.with(Protocol::Ws(std::borrow::Cow::Borrowed("/")))
.with(Protocol::P2p(peer_id.into()));
let (url, _peer) = WebSocketTransport::multiaddr_into_url(dialer_address.clone()).unwrap();
tokio::spawn(async move {
// Negotiate websocket.
let stream = tokio_tungstenite::client_async_tls(url, dialer).await.unwrap().0;
let dialer = BufferedStream::new(stream);
        // Complete `/noise`, then stall so the listener's `/yamux` negotiation times out.
let (stream, _proto) = WebSocketConnection::negotiate_protocol(
dialer,
&Role::Dialer,
vec!["/noise"],
std::time::Duration::from_secs(10),
)
.await
.unwrap();
let (_stream, _peer) = noise::handshake(
stream.inner(),
&keypair,
Role::Dialer,
5,
2,
std::time::Duration::from_secs(10),
noise::HandshakeTransport::WebSocket,
)
.await
.unwrap();
tokio::time::sleep(std::time::Duration::from_secs(60)).await;
});
match WebSocketConnection::accept_connection(
stream,
ConnectionId::from(0usize),
Keypair::generate(),
dialer_address,
Default::default(),
5,
2,
Duration::from_secs(10),
)
.await
{
Ok(_) => panic!("connection was supposed to fail"),
Err(NegotiationError::Timeout) => {}
Err(error) => panic!("invalid error: {error:?}"),
}
}
#[tokio::test]
async fn noise_wrong_handshake_listener() {
let _ = tracing_subscriber::fmt()
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | true |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/transport/websocket/mod.rs | src/transport/websocket/mod.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rigts to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! WebSocket transport.
use crate::{
error::{AddressError, Error, NegotiationError},
transport::{
common::listener::{DialAddresses, GetSocketAddr, SocketListener, WebSocketAddress},
manager::TransportHandle,
websocket::{
config::Config,
connection::{NegotiatedConnection, WebSocketConnection},
},
Transport, TransportBuilder, TransportEvent,
},
types::ConnectionId,
utils::futures_stream::FuturesStream,
DialError, PeerId,
};
use futures::{
future::BoxFuture,
stream::{AbortHandle, FuturesUnordered},
Stream, StreamExt, TryFutureExt,
};
use hickory_resolver::TokioResolver;
use multiaddr::{Multiaddr, Protocol};
use socket2::{Domain, Socket, Type};
use std::{net::SocketAddr, sync::Arc};
use tokio::net::TcpStream;
use tokio_tungstenite::{MaybeTlsStream, WebSocketStream};
use url::Url;
use std::{
collections::HashMap,
pin::Pin,
task::{Context, Poll},
time::Duration,
};
pub(crate) use substream::Substream;
mod connection;
mod stream;
mod substream;
pub mod config;
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::websocket";
/// Pending inbound connection.
struct PendingInboundConnection {
    /// Inbound TCP connection from the remote peer.
connection: TcpStream,
/// Address of the remote peer.
address: SocketAddr,
}
#[derive(Debug)]
enum RawConnectionResult {
/// The first successful connection.
Connected {
negotiated: NegotiatedConnection,
errors: Vec<(Multiaddr, DialError)>,
},
/// All connection attempts failed.
Failed {
connection_id: ConnectionId,
errors: Vec<(Multiaddr, DialError)>,
},
/// Future was canceled.
Canceled { connection_id: ConnectionId },
}
/// WebSocket transport.
pub(crate) struct WebSocketTransport {
/// Transport context.
context: TransportHandle,
/// Transport configuration.
config: Config,
/// WebSocket listener.
listener: SocketListener,
/// Dial addresses.
dial_addresses: DialAddresses,
/// Pending dials.
pending_dials: HashMap<ConnectionId, Multiaddr>,
/// Pending inbound connections.
pending_inbound_connections: HashMap<ConnectionId, PendingInboundConnection>,
/// Pending connections.
pending_connections:
FuturesStream<BoxFuture<'static, Result<NegotiatedConnection, (ConnectionId, DialError)>>>,
/// Pending raw, unnegotiated connections.
pending_raw_connections: FuturesStream<BoxFuture<'static, RawConnectionResult>>,
    /// Opened raw connections, waiting for approval/rejection from `TransportManager`.
opened: HashMap<ConnectionId, NegotiatedConnection>,
    /// Abort handles for raw connection futures.
    ///
    /// These cancel futures in `Self::pending_raw_connections`.
cancel_futures: HashMap<ConnectionId, AbortHandle>,
/// Negotiated connections waiting validation.
pending_open: HashMap<ConnectionId, NegotiatedConnection>,
/// DNS resolver.
resolver: Arc<TokioResolver>,
}
impl WebSocketTransport {
/// Handle inbound connection.
fn on_inbound_connection(
&mut self,
connection_id: ConnectionId,
connection: TcpStream,
address: SocketAddr,
) {
let keypair = self.context.keypair.clone();
let yamux_config = self.config.yamux_config.clone();
let connection_open_timeout = self.config.connection_open_timeout;
let max_read_ahead_factor = self.config.noise_read_ahead_frame_count;
let max_write_buffer_size = self.config.noise_write_buffer_size;
let substream_open_timeout = self.config.substream_open_timeout;
let address = Multiaddr::empty()
.with(Protocol::from(address.ip()))
.with(Protocol::Tcp(address.port()))
.with(Protocol::Ws(std::borrow::Cow::Borrowed("/")));
self.pending_connections.push(Box::pin(async move {
match tokio::time::timeout(connection_open_timeout, async move {
WebSocketConnection::accept_connection(
connection,
connection_id,
keypair,
address,
yamux_config,
max_read_ahead_factor,
max_write_buffer_size,
substream_open_timeout,
)
.await
.map_err(|error| (connection_id, error.into()))
})
.await
{
Err(_) => Err((connection_id, DialError::Timeout)),
Ok(Err(error)) => Err(error),
Ok(Ok(result)) => Ok(result),
}
}));
}
    /// Convert `Multiaddr` into `url::Url`.
fn multiaddr_into_url(address: Multiaddr) -> Result<(Url, PeerId), AddressError> {
let mut protocol_stack = address.iter();
let dial_address = match protocol_stack.next().ok_or(AddressError::InvalidProtocol)? {
Protocol::Ip4(address) => address.to_string(),
Protocol::Ip6(address) => format!("[{address}]"),
Protocol::Dns(address) | Protocol::Dns4(address) | Protocol::Dns6(address) =>
address.to_string(),
_ => return Err(AddressError::InvalidProtocol),
};
let url = match protocol_stack.next().ok_or(AddressError::InvalidProtocol)? {
Protocol::Tcp(port) => match protocol_stack.next() {
Some(Protocol::Ws(_)) => format!("ws://{dial_address}:{port}/"),
Some(Protocol::Wss(_)) => format!("wss://{dial_address}:{port}/"),
_ => return Err(AddressError::InvalidProtocol),
},
_ => return Err(AddressError::InvalidProtocol),
};
let peer = match protocol_stack.next() {
Some(Protocol::P2p(multihash)) => PeerId::from_multihash(multihash)?,
protocol => {
tracing::warn!(
target: LOG_TARGET,
?protocol,
                    "invalid protocol, expected `Protocol::P2p`",
);
return Err(AddressError::PeerIdMissing);
}
};
tracing::trace!(target: LOG_TARGET, ?url, "parse address");
url::Url::parse(&url)
.map(|url| (url, peer))
.map_err(|_| AddressError::InvalidUrl)
}
/// Dial remote peer over `address`.
async fn dial_peer(
address: Multiaddr,
dial_addresses: DialAddresses,
connection_open_timeout: Duration,
nodelay: bool,
resolver: Arc<TokioResolver>,
) -> Result<(Multiaddr, WebSocketStream<MaybeTlsStream<TcpStream>>), DialError> {
let (url, _) = Self::multiaddr_into_url(address.clone())?;
let (socket_address, _) = WebSocketAddress::multiaddr_to_socket_address(&address)?;
let remote_address =
match tokio::time::timeout(connection_open_timeout, socket_address.lookup_ip(resolver))
.await
{
Err(_) => return Err(DialError::Timeout),
Ok(Err(error)) => return Err(error.into()),
Ok(Ok(address)) => address,
};
let domain = match remote_address.is_ipv4() {
true => Domain::IPV4,
false => Domain::IPV6,
};
let socket = Socket::new(domain, Type::STREAM, Some(socket2::Protocol::TCP))?;
if remote_address.is_ipv6() {
socket.set_only_v6(true)?;
}
socket.set_nonblocking(true)?;
socket.set_nodelay(nodelay)?;
match dial_addresses.local_dial_address(&remote_address.ip()) {
Ok(Some(dial_address)) => {
socket.set_reuse_address(true)?;
#[cfg(unix)]
socket.set_reuse_port(true)?;
socket.bind(&dial_address.into())?;
}
Ok(None) => {}
Err(()) => {
tracing::debug!(
target: LOG_TARGET,
?remote_address,
"tcp listener not enabled for remote address, using ephemeral port",
);
}
}
let future = async move {
match socket.connect(&remote_address.into()) {
Ok(()) => {}
Err(error) if error.raw_os_error() == Some(libc::EINPROGRESS) => {}
Err(error) if error.kind() == std::io::ErrorKind::WouldBlock => {}
Err(err) => return Err(DialError::from(err)),
}
let stream = TcpStream::try_from(Into::<std::net::TcpStream>::into(socket))?;
stream.writable().await?;
if let Some(e) = stream.take_error()? {
return Err(DialError::from(e));
}
Ok((
address,
tokio_tungstenite::client_async_tls(url, stream)
.await
.map_err(NegotiationError::WebSocket)?
.0,
))
};
match tokio::time::timeout(connection_open_timeout, future).await {
Err(_) => Err(DialError::Timeout),
Ok(Err(error)) => Err(error),
Ok(Ok((address, stream))) => Ok((address, stream)),
}
}
}
impl TransportBuilder for WebSocketTransport {
type Config = Config;
type Transport = WebSocketTransport;
/// Create new [`Transport`] object.
fn new(
context: TransportHandle,
mut config: Self::Config,
resolver: Arc<TokioResolver>,
) -> crate::Result<(Self, Vec<Multiaddr>)>
where
Self: Sized,
{
tracing::debug!(
target: LOG_TARGET,
listen_addresses = ?config.listen_addresses,
"start websocket transport",
);
let (listener, listen_addresses, dial_addresses) = SocketListener::new::<WebSocketAddress>(
std::mem::take(&mut config.listen_addresses),
config.reuse_port,
config.nodelay,
);
Ok((
Self {
listener,
config,
context,
dial_addresses,
opened: HashMap::new(),
pending_open: HashMap::new(),
pending_dials: HashMap::new(),
pending_inbound_connections: HashMap::new(),
pending_connections: FuturesStream::new(),
pending_raw_connections: FuturesStream::new(),
cancel_futures: HashMap::new(),
resolver,
},
listen_addresses,
))
}
}
impl Transport for WebSocketTransport {
fn dial(&mut self, connection_id: ConnectionId, address: Multiaddr) -> crate::Result<()> {
let yamux_config = self.config.yamux_config.clone();
let keypair = self.context.keypair.clone();
let (ws_address, peer) = Self::multiaddr_into_url(address.clone())?;
let connection_open_timeout = self.config.connection_open_timeout;
let max_read_ahead_factor = self.config.noise_read_ahead_frame_count;
let max_write_buffer_size = self.config.noise_write_buffer_size;
let substream_open_timeout = self.config.substream_open_timeout;
let dial_addresses = self.dial_addresses.clone();
let nodelay = self.config.nodelay;
let resolver = self.resolver.clone();
self.pending_dials.insert(connection_id, address.clone());
tracing::debug!(target: LOG_TARGET, ?connection_id, ?address, "open connection");
let future = async move {
let (_, stream) = WebSocketTransport::dial_peer(
address.clone(),
dial_addresses,
connection_open_timeout,
nodelay,
resolver,
)
.await
.map_err(|error| (connection_id, error))?;
WebSocketConnection::open_connection(
connection_id,
keypair,
stream,
address,
peer,
ws_address,
yamux_config,
max_read_ahead_factor,
max_write_buffer_size,
substream_open_timeout,
)
.await
.map_err(|error| (connection_id, error.into()))
};
self.pending_connections.push(Box::pin(async move {
match tokio::time::timeout(connection_open_timeout, future).await {
Err(_) => Err((connection_id, DialError::Timeout)),
Ok(Err(error)) => Err(error),
Ok(Ok(result)) => Ok(result),
}
}));
Ok(())
}
fn accept(&mut self, connection_id: ConnectionId) -> crate::Result<()> {
let context = self
.pending_open
.remove(&connection_id)
.ok_or(Error::ConnectionDoesntExist(connection_id))?;
let protocol_set = self.context.protocol_set(connection_id);
let bandwidth_sink = self.context.bandwidth_sink.clone();
let substream_open_timeout = self.config.substream_open_timeout;
tracing::trace!(
target: LOG_TARGET,
?connection_id,
"start connection",
);
self.context.executor.run(Box::pin(async move {
if let Err(error) = WebSocketConnection::new(
context,
protocol_set,
bandwidth_sink,
substream_open_timeout,
)
.start()
.await
{
tracing::debug!(
target: LOG_TARGET,
?connection_id,
?error,
"connection exited with error",
);
}
}));
Ok(())
}
fn reject(&mut self, connection_id: ConnectionId) -> crate::Result<()> {
self.pending_open
.remove(&connection_id)
.map_or(Err(Error::ConnectionDoesntExist(connection_id)), |_| Ok(()))
}
fn accept_pending(&mut self, connection_id: ConnectionId) -> crate::Result<()> {
let pending = self.pending_inbound_connections.remove(&connection_id).ok_or_else(|| {
tracing::error!(
target: LOG_TARGET,
?connection_id,
                "Cannot accept non-existent pending connection",
);
Error::ConnectionDoesntExist(connection_id)
})?;
self.on_inbound_connection(connection_id, pending.connection, pending.address);
Ok(())
}
fn reject_pending(&mut self, connection_id: ConnectionId) -> crate::Result<()> {
self.pending_inbound_connections.remove(&connection_id).map_or_else(
|| {
tracing::error!(
target: LOG_TARGET,
?connection_id,
                    "Cannot reject non-existent pending connection",
);
Err(Error::ConnectionDoesntExist(connection_id))
},
|_| Ok(()),
)
}
fn open(
&mut self,
connection_id: ConnectionId,
addresses: Vec<Multiaddr>,
) -> crate::Result<()> {
let num_addresses = addresses.len();
let mut futures: FuturesUnordered<_> = addresses
.into_iter()
.map(|address| {
let yamux_config = self.config.yamux_config.clone();
let keypair = self.context.keypair.clone();
let connection_open_timeout = self.config.connection_open_timeout;
let max_read_ahead_factor = self.config.noise_read_ahead_frame_count;
let max_write_buffer_size = self.config.noise_write_buffer_size;
let substream_open_timeout = self.config.substream_open_timeout;
let dial_addresses = self.dial_addresses.clone();
let nodelay = self.config.nodelay;
let resolver = self.resolver.clone();
async move {
let (address, stream) = WebSocketTransport::dial_peer(
address.clone(),
dial_addresses,
connection_open_timeout,
nodelay,
resolver,
)
.await
.map_err(|error| (address, error))?;
let open_address = address.clone();
let (ws_address, peer) = Self::multiaddr_into_url(address.clone())
.map_err(|error| (address.clone(), error.into()))?;
WebSocketConnection::open_connection(
connection_id,
keypair,
stream,
address,
peer,
ws_address,
yamux_config,
max_read_ahead_factor,
max_write_buffer_size,
substream_open_timeout,
)
.await
.map_err(|error| (open_address, error.into()))
}
})
.collect();
// Future that will resolve to the first successful connection.
let future = async move {
let mut errors = Vec::with_capacity(num_addresses);
while let Some(result) = futures.next().await {
match result {
Ok(negotiated) => return RawConnectionResult::Connected { negotiated, errors },
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?connection_id,
?error,
"failed to open connection",
);
errors.push(error)
}
}
}
RawConnectionResult::Failed {
connection_id,
errors,
}
};
let (fut, handle) = futures::future::abortable(future);
let fut = fut.unwrap_or_else(move |_| RawConnectionResult::Canceled { connection_id });
self.pending_raw_connections.push(Box::pin(fut));
self.cancel_futures.insert(connection_id, handle);
Ok(())
}
fn negotiate(&mut self, connection_id: ConnectionId) -> crate::Result<()> {
let negotiated = self
.opened
.remove(&connection_id)
.ok_or(Error::ConnectionDoesntExist(connection_id))?;
self.pending_connections.push(Box::pin(async move { Ok(negotiated) }));
Ok(())
}
fn cancel(&mut self, connection_id: ConnectionId) {
// Cancel the future if it exists.
// State clean-up happens inside the `poll_next`.
if let Some(handle) = self.cancel_futures.get(&connection_id) {
handle.abort();
}
}
}
impl Stream for WebSocketTransport {
type Item = TransportEvent;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
if let Poll::Ready(event) = self.listener.poll_next_unpin(cx) {
return match event {
None => {
tracing::error!(
target: LOG_TARGET,
"Websocket listener terminated, ignore if the node is stopping",
);
Poll::Ready(None)
}
Some(Err(error)) => {
tracing::error!(
target: LOG_TARGET,
?error,
"Websocket listener terminated with error",
);
Poll::Ready(None)
}
Some(Ok((connection, address))) => {
let connection_id = self.context.next_connection_id();
tracing::trace!(
target: LOG_TARGET,
?connection_id,
?address,
"pending inbound Websocket connection",
);
self.pending_inbound_connections.insert(
connection_id,
PendingInboundConnection {
connection,
address,
},
);
Poll::Ready(Some(TransportEvent::PendingInboundConnection {
connection_id,
}))
}
};
}
while let Poll::Ready(Some(result)) = self.pending_raw_connections.poll_next_unpin(cx) {
tracing::trace!(target: LOG_TARGET, ?result, "raw connection result");
match result {
RawConnectionResult::Connected { negotiated, errors } => {
let Some(handle) = self.cancel_futures.remove(&negotiated.connection_id())
else {
tracing::warn!(
target: LOG_TARGET,
connection_id = ?negotiated.connection_id(),
address = ?negotiated.endpoint().address(),
?errors,
"raw connection without a cancel handle",
);
continue;
};
if !handle.is_aborted() {
let connection_id = negotiated.connection_id();
let address = negotiated.endpoint().address().clone();
self.opened.insert(connection_id, negotiated);
return Poll::Ready(Some(TransportEvent::ConnectionOpened {
connection_id,
address,
}));
}
}
RawConnectionResult::Failed {
connection_id,
errors,
} => {
let Some(handle) = self.cancel_futures.remove(&connection_id) else {
tracing::warn!(
target: LOG_TARGET,
?connection_id,
?errors,
"raw connection without a cancel handle",
);
continue;
};
if !handle.is_aborted() {
return Poll::Ready(Some(TransportEvent::OpenFailure {
connection_id,
errors,
}));
}
}
RawConnectionResult::Canceled { connection_id } => {
if self.cancel_futures.remove(&connection_id).is_none() {
tracing::warn!(
target: LOG_TARGET,
?connection_id,
"raw cancelled connection without a cancel handle",
);
}
}
}
}
while let Poll::Ready(Some(connection)) = self.pending_connections.poll_next_unpin(cx) {
match connection {
Ok(connection) => {
let peer = connection.peer();
let endpoint = connection.endpoint();
self.pending_dials.remove(&connection.connection_id());
self.pending_open.insert(connection.connection_id(), connection);
return Poll::Ready(Some(TransportEvent::ConnectionEstablished {
peer,
endpoint,
}));
}
Err((connection_id, error)) => {
if let Some(address) = self.pending_dials.remove(&connection_id) {
return Poll::Ready(Some(TransportEvent::DialFailure {
connection_id,
address,
error,
}));
} else {
tracing::debug!(target: LOG_TARGET, ?error, ?connection_id, "Pending inbound connection failed");
}
}
}
}
Poll::Pending
}
}
// src/transport/websocket/substream.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{protocol::Permit, BandwidthSink};
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_util::compat::Compat;
use std::{
io,
pin::Pin,
task::{Context, Poll},
};
/// Substream that holds the inner substream provided by the transport
/// and a permit which keeps the connection open.
#[derive(Debug)]
pub struct Substream {
/// Underlying socket.
io: Compat<crate::yamux::Stream>,
/// Bandwidth sink.
bandwidth_sink: BandwidthSink,
/// Connection permit.
_permit: Permit,
}
impl Substream {
/// Create new [`Substream`].
pub fn new(
io: Compat<crate::yamux::Stream>,
bandwidth_sink: BandwidthSink,
_permit: Permit,
) -> Self {
Self {
io,
bandwidth_sink,
_permit,
}
}
}
impl AsyncRead for Substream {
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>,
) -> Poll<io::Result<()>> {
let len = buf.filled().len();
match futures::ready!(Pin::new(&mut self.io).poll_read(cx, buf)) {
Err(error) => Poll::Ready(Err(error)),
Ok(res) => {
let inbound_size = buf.filled().len().saturating_sub(len);
self.bandwidth_sink.increase_inbound(inbound_size);
Poll::Ready(Ok(res))
}
}
}
}
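`poll_read` above meters inbound bandwidth by diffing the buffer's fill level around the inner read, because `poll_read` reports progress through the `ReadBuf` rather than a return value. A simplified synchronous analogue using `std::io::Read` (`CountingReader` is a hypothetical name for illustration, not a litep2p type):

```rust
use std::io::Read;

/// Wraps a reader and tallies the bytes that pass through it, the way the
/// substream feeds its `BandwidthSink` on every successful read.
struct CountingReader<R> {
    inner: R,
    inbound: usize,
}

impl<R: Read> Read for CountingReader<R> {
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
        // In the sync API the byte count is simply the return value; the
        // async version must compare `buf.filled().len()` before and after.
        let n = self.inner.read(buf)?;
        self.inbound += n;
        Ok(n)
    }
}
```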
impl AsyncWrite for Substream {
fn poll_write(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<Result<usize, io::Error>> {
match futures::ready!(Pin::new(&mut self.io).poll_write(cx, buf)) {
Err(error) => Poll::Ready(Err(error)),
Ok(nwritten) => {
self.bandwidth_sink.increase_outbound(nwritten);
Poll::Ready(Ok(nwritten))
}
}
}
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), io::Error>> {
Pin::new(&mut self.io).poll_flush(cx)
}
fn poll_shutdown(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<(), io::Error>> {
Pin::new(&mut self.io).poll_shutdown(cx)
}
}
// src/substream/mod.rs
// Copyright 2020 Parity Technologies (UK) Ltd.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Substream-related helper code.
use crate::{
codec::ProtocolCodec, error::SubstreamError, transport::tcp, types::SubstreamId, PeerId,
};
#[cfg(feature = "quic")]
use crate::transport::quic;
#[cfg(feature = "webrtc")]
use crate::transport::webrtc;
#[cfg(feature = "websocket")]
use crate::transport::websocket;
use bytes::{Buf, Bytes, BytesMut};
use futures::{Sink, Stream};
use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt, ReadBuf};
use unsigned_varint::{decode, encode};
use std::{
collections::{hash_map::Entry, HashMap, VecDeque},
fmt,
hash::Hash,
io::ErrorKind,
pin::Pin,
task::{Context, Poll},
};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::substream";
macro_rules! poll_flush {
($substream:expr, $cx:ident) => {{
match $substream {
SubstreamType::Tcp(substream) => Pin::new(substream).poll_flush($cx),
#[cfg(feature = "websocket")]
SubstreamType::WebSocket(substream) => Pin::new(substream).poll_flush($cx),
#[cfg(feature = "quic")]
SubstreamType::Quic(substream) => Pin::new(substream).poll_flush($cx),
#[cfg(feature = "webrtc")]
SubstreamType::WebRtc(substream) => Pin::new(substream).poll_flush($cx),
#[cfg(test)]
SubstreamType::Mock(_) => unreachable!(),
}
}};
}
macro_rules! poll_write {
($substream:expr, $cx:ident, $frame:expr) => {{
match $substream {
SubstreamType::Tcp(substream) => Pin::new(substream).poll_write($cx, $frame),
#[cfg(feature = "websocket")]
SubstreamType::WebSocket(substream) => Pin::new(substream).poll_write($cx, $frame),
#[cfg(feature = "quic")]
SubstreamType::Quic(substream) => Pin::new(substream).poll_write($cx, $frame),
#[cfg(feature = "webrtc")]
SubstreamType::WebRtc(substream) => Pin::new(substream).poll_write($cx, $frame),
#[cfg(test)]
SubstreamType::Mock(_) => unreachable!(),
}
}};
}
macro_rules! poll_read {
($substream:expr, $cx:ident, $buffer:expr) => {{
match $substream {
SubstreamType::Tcp(substream) => Pin::new(substream).poll_read($cx, $buffer),
#[cfg(feature = "websocket")]
SubstreamType::WebSocket(substream) => Pin::new(substream).poll_read($cx, $buffer),
#[cfg(feature = "quic")]
SubstreamType::Quic(substream) => Pin::new(substream).poll_read($cx, $buffer),
#[cfg(feature = "webrtc")]
SubstreamType::WebRtc(substream) => Pin::new(substream).poll_read($cx, $buffer),
#[cfg(test)]
SubstreamType::Mock(_) => unreachable!(),
}
}};
}
macro_rules! poll_shutdown {
($substream:expr, $cx:ident) => {{
match $substream {
SubstreamType::Tcp(substream) => Pin::new(substream).poll_shutdown($cx),
#[cfg(feature = "websocket")]
SubstreamType::WebSocket(substream) => Pin::new(substream).poll_shutdown($cx),
#[cfg(feature = "quic")]
SubstreamType::Quic(substream) => Pin::new(substream).poll_shutdown($cx),
#[cfg(feature = "webrtc")]
SubstreamType::WebRtc(substream) => Pin::new(substream).poll_shutdown($cx),
#[cfg(test)]
SubstreamType::Mock(substream) => {
let _ = Pin::new(substream).poll_close($cx);
todo!();
}
}
}};
}
macro_rules! delegate_poll_next {
($substream:expr, $cx:ident) => {{
#[cfg(test)]
if let SubstreamType::Mock(inner) = $substream {
return Pin::new(inner).poll_next($cx);
}
}};
}
macro_rules! delegate_poll_ready {
($substream:expr, $cx:ident) => {{
#[cfg(test)]
if let SubstreamType::Mock(inner) = $substream {
return Pin::new(inner).poll_ready($cx);
}
}};
}
macro_rules! delegate_start_send {
($substream:expr, $item:ident) => {{
#[cfg(test)]
if let SubstreamType::Mock(inner) = $substream {
return Pin::new(inner).start_send($item);
}
}};
}
macro_rules! delegate_poll_flush {
($substream:expr, $cx:ident) => {{
#[cfg(test)]
if let SubstreamType::Mock(inner) = $substream {
return Pin::new(inner).poll_flush($cx);
}
}};
}
macro_rules! check_size {
($max_size:expr, $size:expr) => {{
if let Some(max_size) = $max_size {
if $size > max_size {
return Err(SubstreamError::IoError(ErrorKind::PermissionDenied).into());
}
}
}};
}
/// Substream type.
enum SubstreamType {
Tcp(tcp::Substream),
#[cfg(feature = "websocket")]
WebSocket(websocket::Substream),
#[cfg(feature = "quic")]
Quic(quic::Substream),
#[cfg(feature = "webrtc")]
WebRtc(webrtc::Substream),
#[cfg(test)]
Mock(Box<dyn crate::mock::substream::Substream>),
}
impl fmt::Debug for SubstreamType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Self::Tcp(_) => write!(f, "Tcp"),
#[cfg(feature = "websocket")]
Self::WebSocket(_) => write!(f, "WebSocket"),
#[cfg(feature = "quic")]
Self::Quic(_) => write!(f, "Quic"),
#[cfg(feature = "webrtc")]
Self::WebRtc(_) => write!(f, "WebRtc"),
#[cfg(test)]
Self::Mock(_) => write!(f, "Mock"),
}
}
}
/// Backpressure boundary for `Sink`.
const BACKPRESSURE_BOUNDARY: usize = 65536;
/// `Litep2p` substream type.
///
/// Implements [`tokio::io::AsyncRead`]/[`tokio::io::AsyncWrite`] traits which can be wrapped
/// in a `Framed` to implement a custom codec.
///
/// In case a codec for the protocol was specified,
/// [`Sink::send()`](futures::Sink)/[`Stream::next()`](futures::Stream) are also provided which
/// implement the necessary framing to read/write codec-encoded messages from the underlying socket.
pub struct Substream {
/// Remote peer ID.
peer: PeerId,
    /// Inner substream.
substream: SubstreamType,
/// Substream ID.
substream_id: SubstreamId,
/// Protocol codec.
codec: ProtocolCodec,
    /// Frames queued for sending.
    pending_out_frames: VecDeque<Bytes>,
    /// Total number of queued outbound bytes, used for backpressure.
    pending_out_bytes: usize,
    /// Partially written outbound frame, if any.
    pending_out_frame: Option<Bytes>,
    /// Buffer for the frame currently being read.
    read_buffer: BytesMut,
    /// Read offset into `read_buffer`, or into `size_vec` while a size prefix is being read.
    offset: usize,
    /// Fully read frames waiting to be returned from `poll_next()`.
    pending_frames: VecDeque<BytesMut>,
    /// Size of the frame currently being read, if known.
    current_frame_size: Option<usize>,
    /// Buffer for reading the unsigned varint size prefix.
    size_vec: BytesMut,
}
impl fmt::Debug for Substream {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Substream")
.field("peer", &self.peer)
.field("substream_id", &self.substream_id)
.field("codec", &self.codec)
.field("protocol", &self.substream)
.finish()
}
}
impl Substream {
/// Create new [`Substream`].
fn new(
peer: PeerId,
substream_id: SubstreamId,
substream: SubstreamType,
codec: ProtocolCodec,
) -> Self {
Self {
peer,
substream,
codec,
substream_id,
read_buffer: BytesMut::zeroed(1024),
offset: 0usize,
pending_frames: VecDeque::new(),
current_frame_size: None,
pending_out_bytes: 0usize,
pending_out_frames: VecDeque::new(),
pending_out_frame: None,
size_vec: BytesMut::zeroed(10),
}
}
/// Create new [`Substream`] for TCP.
pub(crate) fn new_tcp(
peer: PeerId,
substream_id: SubstreamId,
substream: tcp::Substream,
codec: ProtocolCodec,
) -> Self {
tracing::trace!(target: LOG_TARGET, ?peer, ?codec, "create new substream for tcp");
Self::new(peer, substream_id, SubstreamType::Tcp(substream), codec)
}
/// Create new [`Substream`] for WebSocket.
#[cfg(feature = "websocket")]
pub(crate) fn new_websocket(
peer: PeerId,
substream_id: SubstreamId,
substream: websocket::Substream,
codec: ProtocolCodec,
) -> Self {
tracing::trace!(target: LOG_TARGET, ?peer, ?codec, "create new substream for websocket");
Self::new(
peer,
substream_id,
SubstreamType::WebSocket(substream),
codec,
)
}
/// Create new [`Substream`] for QUIC.
#[cfg(feature = "quic")]
pub(crate) fn new_quic(
peer: PeerId,
substream_id: SubstreamId,
substream: quic::Substream,
codec: ProtocolCodec,
) -> Self {
tracing::trace!(target: LOG_TARGET, ?peer, ?codec, "create new substream for quic");
Self::new(peer, substream_id, SubstreamType::Quic(substream), codec)
}
/// Create new [`Substream`] for WebRTC.
#[cfg(feature = "webrtc")]
pub(crate) fn new_webrtc(
peer: PeerId,
substream_id: SubstreamId,
substream: webrtc::Substream,
codec: ProtocolCodec,
) -> Self {
tracing::trace!(target: LOG_TARGET, ?peer, ?codec, "create new substream for webrtc");
Self::new(peer, substream_id, SubstreamType::WebRtc(substream), codec)
}
/// Create new [`Substream`] for mocking.
#[cfg(test)]
pub(crate) fn new_mock(
peer: PeerId,
substream_id: SubstreamId,
substream: Box<dyn crate::mock::substream::Substream>,
) -> Self {
tracing::trace!(target: LOG_TARGET, ?peer, "create new substream for mocking");
Self::new(
peer,
substream_id,
SubstreamType::Mock(substream),
ProtocolCodec::Unspecified,
)
}
/// Close the substream.
pub async fn close(self) {
let _ = match self.substream {
SubstreamType::Tcp(mut substream) => substream.shutdown().await,
#[cfg(feature = "websocket")]
SubstreamType::WebSocket(mut substream) => substream.shutdown().await,
#[cfg(feature = "quic")]
SubstreamType::Quic(mut substream) => substream.shutdown().await,
#[cfg(feature = "webrtc")]
SubstreamType::WebRtc(mut substream) => substream.shutdown().await,
#[cfg(test)]
SubstreamType::Mock(mut substream) => {
let _ = futures::SinkExt::close(&mut substream).await;
Ok(())
}
};
}
/// Send identity payload to remote peer.
async fn send_identity_payload<T: AsyncWrite + Unpin>(
io: &mut T,
payload_size: usize,
payload: Bytes,
) -> Result<(), SubstreamError> {
if payload.len() != payload_size {
return Err(SubstreamError::IoError(ErrorKind::PermissionDenied));
}
io.write_all(&payload).await.map_err(|_| SubstreamError::ConnectionClosed)?;
// Flush the stream.
io.flush().await.map_err(From::from)
}
/// Send unsigned varint payload to remote peer.
async fn send_unsigned_varint_payload<T: AsyncWrite + Unpin>(
io: &mut T,
bytes: Bytes,
max_size: Option<usize>,
) -> Result<(), SubstreamError> {
if let Some(max_size) = max_size {
if bytes.len() > max_size {
return Err(SubstreamError::IoError(ErrorKind::PermissionDenied));
}
}
// Write the length of the frame.
let mut buffer = unsigned_varint::encode::usize_buffer();
let encoded_len = unsigned_varint::encode::usize(bytes.len(), &mut buffer).len();
io.write_all(&buffer[..encoded_len]).await?;
// Write the frame.
io.write_all(bytes.as_ref()).await?;
// Flush the stream.
io.flush().await.map_err(From::from)
}
/// Send framed data to remote peer.
///
/// This function may be faster than the provided [`futures::Sink`] implementation for
/// [`Substream`] as it has direct access to the API of the underlying socket as opposed
/// to going through [`tokio::io::AsyncWrite`].
///
/// # Cancel safety
///
/// This method is not cancellation safe. If that is required, use the provided
/// [`futures::Sink`] implementation.
///
/// # Panics
///
/// Panics if no codec is provided.
pub async fn send_framed(&mut self, bytes: Bytes) -> Result<(), SubstreamError> {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
codec = ?self.codec,
frame_len = ?bytes.len(),
"send framed"
);
match &mut self.substream {
#[cfg(test)]
SubstreamType::Mock(ref mut substream) =>
futures::SinkExt::send(substream, bytes).await,
SubstreamType::Tcp(ref mut substream) => match self.codec {
ProtocolCodec::Unspecified => panic!("codec is unspecified"),
ProtocolCodec::Identity(payload_size) =>
Self::send_identity_payload(substream, payload_size, bytes).await,
ProtocolCodec::UnsignedVarint(max_size) =>
Self::send_unsigned_varint_payload(substream, bytes, max_size).await,
},
#[cfg(feature = "websocket")]
SubstreamType::WebSocket(ref mut substream) => match self.codec {
ProtocolCodec::Unspecified => panic!("codec is unspecified"),
ProtocolCodec::Identity(payload_size) =>
Self::send_identity_payload(substream, payload_size, bytes).await,
ProtocolCodec::UnsignedVarint(max_size) =>
Self::send_unsigned_varint_payload(substream, bytes, max_size).await,
},
#[cfg(feature = "quic")]
SubstreamType::Quic(ref mut substream) => match self.codec {
ProtocolCodec::Unspecified => panic!("codec is unspecified"),
ProtocolCodec::Identity(payload_size) =>
Self::send_identity_payload(substream, payload_size, bytes).await,
ProtocolCodec::UnsignedVarint(max_size) => {
check_size!(max_size, bytes.len());
let mut buffer = unsigned_varint::encode::usize_buffer();
let len = unsigned_varint::encode::usize(bytes.len(), &mut buffer);
let len = BytesMut::from(len);
substream.write_all_chunks(&mut [len.freeze(), bytes]).await
}
},
#[cfg(feature = "webrtc")]
SubstreamType::WebRtc(ref mut substream) => match self.codec {
ProtocolCodec::Unspecified => panic!("codec is unspecified"),
ProtocolCodec::Identity(payload_size) =>
Self::send_identity_payload(substream, payload_size, bytes).await,
ProtocolCodec::UnsignedVarint(max_size) =>
Self::send_unsigned_varint_payload(substream, bytes, max_size).await,
},
}
}
}
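The two codec paths in `send_framed` boil down to: identity frames must match the negotiated size exactly, while unsigned-varint frames are prefixed with their length. A std-only, synchronous sketch of both writers (`send_identity` and `send_unsigned_varint` are illustrative stand-ins; the real methods are async and return `SubstreamError`):

```rust
use std::io::{self, Write};

/// Identity codec: the payload is written as-is and must be exactly
/// `payload_size` bytes, or the write is rejected.
fn send_identity<W: Write>(io: &mut W, payload_size: usize, payload: &[u8]) -> io::Result<()> {
    if payload.len() != payload_size {
        return Err(io::Error::new(io::ErrorKind::PermissionDenied, "size mismatch"));
    }
    io.write_all(payload)?;
    io.flush()
}

/// Unsigned-varint codec: write a varint length prefix, then the payload.
fn send_unsigned_varint<W: Write>(io: &mut W, payload: &[u8]) -> io::Result<()> {
    let mut len = payload.len();
    // Encode `len` as an unsigned varint: 7 bits per byte, high bit = "more".
    loop {
        let byte = (len & 0x7f) as u8;
        len >>= 7;
        if len == 0 {
            io.write_all(&[byte])?;
            break;
        }
        io.write_all(&[byte | 0x80])?;
    }
    io.write_all(payload)?;
    io.flush()
}
```

The varint prefix is what `poll_next` later reconstructs on the read side via `read_payload_size`.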
impl tokio::io::AsyncRead for Substream {
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>,
) -> Poll<std::io::Result<()>> {
poll_read!(&mut self.substream, cx, buf)
}
}
impl tokio::io::AsyncWrite for Substream {
fn poll_write(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<Result<usize, std::io::Error>> {
poll_write!(&mut self.substream, cx, buf)
}
fn poll_flush(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<(), std::io::Error>> {
poll_flush!(&mut self.substream, cx)
}
fn poll_shutdown(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<(), std::io::Error>> {
poll_shutdown!(&mut self.substream, cx)
}
}
enum ReadError {
Overflow,
NotEnoughBytes,
DecodeError,
}
/// Returns the payload size and the number of bytes it took to encode it.
fn read_payload_size(buffer: &[u8]) -> Result<(usize, usize), ReadError> {
let max_len = encode::usize_buffer().len();
for i in 0..std::cmp::min(buffer.len(), max_len) {
if decode::is_last(buffer[i]) {
match decode::usize(&buffer[..=i]) {
Err(_) => return Err(ReadError::DecodeError),
Ok(size) => return Ok((size.0, i + 1)),
}
}
}
match buffer.len() < max_len {
true => Err(ReadError::NotEnoughBytes),
false => Err(ReadError::Overflow),
}
}
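`read_payload_size` scans for the varint's terminating byte (high bit clear) and then decodes the prefix in one pass. A std-only sketch of the same logic without the `unsigned_varint` crate (`decode_payload_size` is an illustrative stand-in, and it folds the crate's decode errors into `Overflow` for brevity):

```rust
#[derive(Debug, PartialEq)]
enum PrefixError {
    Overflow,
    NotEnoughBytes,
}

/// Decode an unsigned-varint length prefix, returning the payload size and
/// the number of prefix bytes consumed.
fn decode_payload_size(buffer: &[u8]) -> Result<(usize, usize), PrefixError> {
    const MAX_LEN: usize = 10; // enough for any 64-bit length
    let mut size: usize = 0;
    for (i, &byte) in buffer.iter().take(MAX_LEN).enumerate() {
        // Low 7 bits carry data, least-significant group first.
        size |= ((byte & 0x7f) as usize) << (7 * i);
        if byte & 0x80 == 0 {
            // High bit clear: this is the last byte of the prefix.
            return Ok((size, i + 1));
        }
    }
    if buffer.len() < MAX_LEN {
        Err(PrefixError::NotEnoughBytes)
    } else {
        Err(PrefixError::Overflow)
    }
}
```

Like the real function, the sketch only inspects the prefix, so trailing payload bytes in the buffer are ignored.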
impl Stream for Substream {
type Item = Result<BytesMut, SubstreamError>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let this = Pin::into_inner(self);
// `MockSubstream` implements `Stream` so calls to `poll_next()` must be delegated
delegate_poll_next!(&mut this.substream, cx);
loop {
match this.codec {
ProtocolCodec::Identity(payload_size) => {
let mut read_buf =
ReadBuf::new(&mut this.read_buffer[this.offset..payload_size]);
match futures::ready!(poll_read!(&mut this.substream, cx, &mut read_buf)) {
Ok(_) => {
let nread = read_buf.filled().len();
if nread == 0 {
tracing::trace!(
target: LOG_TARGET,
peer = ?this.peer,
"read zero bytes, substream closed"
);
return Poll::Ready(None);
}
if nread == payload_size {
let mut payload = std::mem::replace(
&mut this.read_buffer,
BytesMut::zeroed(payload_size),
);
payload.truncate(payload_size);
this.offset = 0usize;
return Poll::Ready(Some(Ok(payload)));
} else {
this.offset += read_buf.filled().len();
}
}
Err(error) => return Poll::Ready(Some(Err(error.into()))),
}
}
ProtocolCodec::UnsignedVarint(max_size) => {
loop {
// return all pending frames first
if let Some(frame) = this.pending_frames.pop_front() {
return Poll::Ready(Some(Ok(frame)));
}
match this.current_frame_size.take() {
Some(frame_size) => {
let mut read_buf =
ReadBuf::new(&mut this.read_buffer[this.offset..]);
this.current_frame_size = Some(frame_size);
match futures::ready!(poll_read!(
&mut this.substream,
cx,
&mut read_buf
)) {
Err(_error) => return Poll::Ready(None),
Ok(_) => {
let nread = match read_buf.filled().len() {
0 => return Poll::Ready(None),
nread => nread,
};
this.offset += nread;
if this.offset == frame_size {
let out_frame = std::mem::replace(
&mut this.read_buffer,
BytesMut::new(),
);
this.offset = 0;
this.current_frame_size = None;
return Poll::Ready(Some(Ok(out_frame)));
} else {
this.current_frame_size = Some(frame_size);
continue;
}
}
}
}
None => {
let mut read_buf =
ReadBuf::new(&mut this.size_vec[this.offset..this.offset + 1]);
match futures::ready!(poll_read!(
&mut this.substream,
cx,
&mut read_buf
)) {
Err(_error) => return Poll::Ready(None),
Ok(_) => {
if read_buf.filled().is_empty() {
return Poll::Ready(None);
}
this.offset += 1;
match read_payload_size(&this.size_vec[..this.offset]) {
Err(ReadError::NotEnoughBytes) => continue,
Err(_) =>
return Poll::Ready(Some(Err(
SubstreamError::ReadFailure(Some(
this.substream_id,
)),
))),
Ok((size, num_bytes)) => {
debug_assert_eq!(num_bytes, this.offset);
if let Some(max_size) = max_size {
if size > max_size {
return Poll::Ready(Some(Err(
SubstreamError::ReadFailure(Some(
this.substream_id,
)),
)));
}
}
this.offset = 0;
// Handle empty payloads detected as 0-length frame.
// The offset must be cleared to 0 to not interfere
// with next framing.
if size == 0 {
return Poll::Ready(Some(Ok(BytesMut::new())));
}
this.current_frame_size = Some(size);
this.read_buffer = BytesMut::zeroed(size);
}
}
}
}
}
}
}
}
ProtocolCodec::Unspecified => panic!("codec is unspecified"),
}
}
}
}
// TODO: https://github.com/paritytech/litep2p/issues/341 this code can definitely be optimized
impl Sink<Bytes> for Substream {
type Error = SubstreamError;
fn poll_ready(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
// `MockSubstream` implements `Sink` so calls to `poll_ready()` must be delegated
delegate_poll_ready!(&mut self.substream, cx);
if self.pending_out_bytes >= BACKPRESSURE_BOUNDARY {
return poll_flush!(&mut self.substream, cx).map_err(From::from);
}
Poll::Ready(Ok(()))
}
fn start_send(mut self: Pin<&mut Self>, item: Bytes) -> Result<(), Self::Error> {
// `MockSubstream` implements `Sink` so calls to `start_send()` must be delegated
delegate_start_send!(&mut self.substream, item);
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
substream_id = ?self.substream_id,
data_len = item.len(),
"Substream::start_send()",
);
match self.codec {
ProtocolCodec::Identity(payload_size) => {
if item.len() != payload_size {
return Err(SubstreamError::IoError(ErrorKind::PermissionDenied));
}
self.pending_out_bytes += item.len();
self.pending_out_frames.push_back(item);
}
ProtocolCodec::UnsignedVarint(max_size) => {
check_size!(max_size, item.len());
let len = {
let mut buffer = unsigned_varint::encode::usize_buffer();
let len = unsigned_varint::encode::usize(item.len(), &mut buffer);
BytesMut::from(len)
};
self.pending_out_bytes += len.len() + item.len();
self.pending_out_frames.push_back(len.freeze());
self.pending_out_frames.push_back(item);
}
ProtocolCodec::Unspecified => panic!("codec is unspecified"),
}
Ok(())
}
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
// `MockSubstream` implements `Sink` so calls to `poll_flush()` must be delegated
delegate_poll_flush!(&mut self.substream, cx);
loop {
let mut pending_frame = match self.pending_out_frame.take() {
Some(frame) => frame,
None => match self.pending_out_frames.pop_front() {
Some(frame) => frame,
None => break,
},
};
match poll_write!(&mut self.substream, cx, &pending_frame) {
Poll::Ready(Err(error)) => return Poll::Ready(Err(error.into())),
Poll::Pending => {
self.pending_out_frame = Some(pending_frame);
break;
}
Poll::Ready(Ok(nwritten)) => {
pending_frame.advance(nwritten);
if !pending_frame.is_empty() {
self.pending_out_frame = Some(pending_frame);
}
}
}
}
poll_flush!(&mut self.substream, cx).map_err(From::from)
}
fn poll_close(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
poll_shutdown!(&mut self.substream, cx).map_err(From::from)
}
}
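The `Sink` implementation queues encoded frames in `start_send` and drains them in `poll_flush`, applying backpressure once `BACKPRESSURE_BOUNDARY` bytes are buffered. A synchronous sketch of that queue-and-drain pattern (`FrameQueue` is a hypothetical name; the real code also has to park a partially written frame across `Poll::Pending`):

```rust
use std::collections::VecDeque;
use std::io::{self, Write};

const BACKPRESSURE_BOUNDARY: usize = 65536;

/// Queue frames until `flush`, tracking buffered bytes for backpressure.
struct FrameQueue {
    pending: VecDeque<Vec<u8>>,
    pending_bytes: usize,
}

impl FrameQueue {
    fn new() -> Self {
        Self { pending: VecDeque::new(), pending_bytes: 0 }
    }

    /// Mirrors `poll_ready`: refuse more frames past the boundary.
    fn is_ready(&self) -> bool {
        self.pending_bytes < BACKPRESSURE_BOUNDARY
    }

    /// Mirrors `start_send`: enqueue a frame and account for its size.
    fn start_send(&mut self, frame: Vec<u8>) {
        self.pending_bytes += frame.len();
        self.pending.push_back(frame);
    }

    /// Mirrors `poll_flush`: write every queued frame, then flush the sink.
    fn flush<W: Write>(&mut self, io: &mut W) -> io::Result<()> {
        while let Some(frame) = self.pending.pop_front() {
            io.write_all(&frame)?;
            self.pending_bytes -= frame.len();
        }
        io.flush()
    }
}
```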
/// Substream set key.
pub trait SubstreamSetKey: Hash + Unpin + fmt::Debug + PartialEq + Eq + Copy {}
impl<K: Hash + Unpin + fmt::Debug + PartialEq + Eq + Copy> SubstreamSetKey for K {}
/// Substream set.
// TODO: https://github.com/paritytech/litep2p/issues/342 remove this.
#[derive(Debug, Default)]
pub struct SubstreamSet<K, S>
where
K: SubstreamSetKey,
S: Stream<Item = Result<BytesMut, SubstreamError>> + Unpin,
{
substreams: HashMap<K, S>,
}
impl<K, S> SubstreamSet<K, S>
where
K: SubstreamSetKey,
S: Stream<Item = Result<BytesMut, SubstreamError>> + Unpin,
{
/// Create new [`SubstreamSet`].
pub fn new() -> Self {
Self {
substreams: HashMap::new(),
}
}
/// Add new substream to the set.
pub fn insert(&mut self, key: K, substream: S) {
match self.substreams.entry(key) {
Entry::Vacant(entry) => {
entry.insert(substream);
}
Entry::Occupied(_) => {
                tracing::error!(target: LOG_TARGET, ?key, "substream already exists");
debug_assert!(false);
}
}
}
/// Remove substream from the set.
pub fn remove(&mut self, key: &K) -> Option<S> {
self.substreams.remove(key)
}
/// Get mutable reference to stored substream.
#[cfg(test)]
pub fn get_mut(&mut self, key: &K) -> Option<&mut S> {
self.substreams.get_mut(key)
}
/// Get size of [`SubstreamSet`].
pub fn len(&self) -> usize {
self.substreams.len()
}
/// Check if [`SubstreamSet`] is empty.
pub fn is_empty(&self) -> bool {
self.substreams.is_empty()
}
}
impl<K, S> Stream for SubstreamSet<K, S>
where
K: SubstreamSetKey,
S: Stream<Item = Result<BytesMut, SubstreamError>> + Unpin,
{
type Item = (K, <S as Stream>::Item);
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let inner = Pin::into_inner(self);
for (key, mut substream) in inner.substreams.iter_mut() {
match Pin::new(&mut substream).poll_next(cx) {
Poll::Pending => continue,
Poll::Ready(Some(data)) => return Poll::Ready(Some((*key, data))),
Poll::Ready(None) =>
return Poll::Ready(Some((*key, Err(SubstreamError::ConnectionClosed)))),
}
}
Poll::Pending
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{mock::substream::MockSubstream, PeerId};
use futures::{SinkExt, StreamExt};
#[test]
fn add_substream() {
let mut set = SubstreamSet::<PeerId, MockSubstream>::new();
let peer = PeerId::random();
let substream = MockSubstream::new();
set.insert(peer, substream);
let peer = PeerId::random();
let substream = MockSubstream::new();
set.insert(peer, substream);
}
#[test]
#[should_panic]
#[cfg(debug_assertions)]
fn add_same_peer_twice() {
let mut set = SubstreamSet::<PeerId, MockSubstream>::new();
let peer = PeerId::random();
let substream1 = MockSubstream::new();
let substream2 = MockSubstream::new();
set.insert(peer, substream1);
set.insert(peer, substream2);
}
#[test]
fn remove_substream() {
let mut set = SubstreamSet::<PeerId, MockSubstream>::new();
let peer1 = PeerId::random();
let substream1 = MockSubstream::new();
set.insert(peer1, substream1);
let peer2 = PeerId::random();
let substream2 = MockSubstream::new();
set.insert(peer2, substream2);
assert!(set.remove(&peer1).is_some());
assert!(set.remove(&peer2).is_some());
assert!(set.remove(&PeerId::random()).is_none());
}
#[tokio::test]
async fn poll_data_from_substream() {
let mut set = SubstreamSet::<PeerId, MockSubstream>::new();
let peer = PeerId::random();
// src/crypto/rsa.rs
// Copyright 2025 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! RSA public key.
use crate::error::ParseError;
use ring::signature::{UnparsedPublicKey, RSA_PKCS1_2048_8192_SHA256};
use x509_parser::{prelude::FromDer, x509::SubjectPublicKeyInfo};
/// An RSA public key.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct PublicKey(Vec<u8>);
impl PublicKey {
/// Decode an RSA public key from a DER-encoded X.509 SubjectPublicKeyInfo structure.
pub fn try_decode_x509(spki: &[u8]) -> Result<Self, ParseError> {
SubjectPublicKeyInfo::from_der(spki)
.map(|(_, spki)| Self(spki.subject_public_key.as_ref().to_vec()))
.map_err(|_| ParseError::InvalidPublicKey)
}
/// Verify the RSA signature on a message using the public key.
pub fn verify(&self, msg: &[u8], sig: &[u8]) -> bool {
let key = UnparsedPublicKey::new(&RSA_PKCS1_2048_8192_SHA256, &self.0);
key.verify(msg, sig).is_ok()
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/crypto/mod.rs | src/crypto/mod.rs |
// Copyright 2023 Protocol Labs.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Crypto-related code.
use crate::{error::ParseError, peer_id::*};
pub mod ed25519;
#[cfg(feature = "rsa")]
pub mod rsa;
pub(crate) mod noise;
#[cfg(feature = "quic")]
pub(crate) mod tls;
pub(crate) mod keys_proto {
include!(concat!(env!("OUT_DIR"), "/keys_proto.rs"));
}
/// The public key of a node's identity keypair.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum PublicKey {
/// A public Ed25519 key.
Ed25519(ed25519::PublicKey),
}
impl PublicKey {
/// Encode the public key into a protobuf structure for storage or
/// exchange with other nodes.
pub fn to_protobuf_encoding(&self) -> Vec<u8> {
use prost::Message;
let public_key = keys_proto::PublicKey::from(self);
let mut buf = Vec::with_capacity(public_key.encoded_len());
public_key.encode(&mut buf).expect("Vec<u8> provides capacity as needed");
buf
}
/// Convert the `PublicKey` into the corresponding `PeerId`.
pub fn to_peer_id(&self) -> PeerId {
self.into()
}
}
impl From<&PublicKey> for keys_proto::PublicKey {
fn from(key: &PublicKey) -> Self {
match key {
PublicKey::Ed25519(key) => keys_proto::PublicKey {
r#type: keys_proto::KeyType::Ed25519 as i32,
data: key.to_bytes().to_vec(),
},
}
}
}
impl TryFrom<keys_proto::PublicKey> for PublicKey {
type Error = ParseError;
fn try_from(pubkey: keys_proto::PublicKey) -> Result<Self, Self::Error> {
let key_type = keys_proto::KeyType::try_from(pubkey.r#type)
.map_err(|_| ParseError::UnknownKeyType(pubkey.r#type))?;
if key_type == keys_proto::KeyType::Ed25519 {
Ok(ed25519::PublicKey::try_from_bytes(&pubkey.data).map(PublicKey::Ed25519)?)
} else {
Err(ParseError::UnknownKeyType(key_type as i32))
}
}
}
impl From<ed25519::PublicKey> for PublicKey {
fn from(public_key: ed25519::PublicKey) -> Self {
PublicKey::Ed25519(public_key)
}
}
/// The public key of a remote node's identity keypair. Supports RSA keys in addition to Ed25519.
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum RemotePublicKey {
/// A public Ed25519 key.
Ed25519(ed25519::PublicKey),
/// A public RSA key.
#[cfg(feature = "rsa")]
Rsa(rsa::PublicKey),
}
impl RemotePublicKey {
/// Verify a signature for a message using this public key, i.e. check
/// that the signature has been produced by the corresponding
/// private key (authenticity), and that the message has not been
/// tampered with (integrity).
#[must_use]
pub fn verify(&self, msg: &[u8], sig: &[u8]) -> bool {
use RemotePublicKey::*;
match self {
Ed25519(pk) => pk.verify(msg, sig),
#[cfg(feature = "rsa")]
Rsa(pk) => pk.verify(msg, sig),
}
}
/// Decode a public key from a protobuf structure, e.g. read from storage
/// or received from another node.
pub fn from_protobuf_encoding(bytes: &[u8]) -> Result<RemotePublicKey, ParseError> {
use prost::Message;
let pubkey = keys_proto::PublicKey::decode(bytes)?;
pubkey.try_into()
}
}
impl TryFrom<keys_proto::PublicKey> for RemotePublicKey {
type Error = ParseError;
fn try_from(pubkey: keys_proto::PublicKey) -> Result<Self, Self::Error> {
let key_type = keys_proto::KeyType::try_from(pubkey.r#type)
.map_err(|_| ParseError::UnknownKeyType(pubkey.r#type))?;
match key_type {
keys_proto::KeyType::Ed25519 =>
ed25519::PublicKey::try_from_bytes(&pubkey.data).map(RemotePublicKey::Ed25519),
#[cfg(feature = "rsa")]
keys_proto::KeyType::Rsa =>
rsa::PublicKey::try_decode_x509(&pubkey.data).map(RemotePublicKey::Rsa),
_ => Err(ParseError::UnknownKeyType(key_type as i32)),
}
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
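The `TryFrom<keys_proto::PublicKey>` impls above dispatch on an integer key type from the wire and reject anything unknown or unsupported instead of silently ignoring it. A dependency-free sketch of that dispatch, with a hypothetical `classify` helper standing in for the protobuf-backed conversion (the numeric values mirror the libp2p keys.proto enum, RSA = 0 and Ed25519 = 1):

```rust
// Sketch of the key-type dispatch used when decoding a remote public key:
// an unknown discriminant is an error, not a fallback.
#[derive(Debug, PartialEq)]
enum KeyKind {
    Ed25519,
    Rsa,
}

fn classify(key_type: i32) -> Result<KeyKind, String> {
    match key_type {
        1 => Ok(KeyKind::Ed25519),
        0 => Ok(KeyKind::Rsa),
        // Mirrors ParseError::UnknownKeyType in the real code.
        other => Err(format!("unknown key type: {other}")),
    }
}

fn main() {
    assert_eq!(classify(1), Ok(KeyKind::Ed25519));
    assert_eq!(classify(0), Ok(KeyKind::Rsa));
    assert!(classify(42).is_err());
    println!("ok");
}
```

In the real module the RSA arm is additionally gated behind `#[cfg(feature = "rsa")]`, so without that feature an RSA key also lands in the error arm.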
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/crypto/ed25519.rs | src/crypto/ed25519.rs |
// Copyright 2019 Parity Technologies (UK) Ltd.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Ed25519 keys.
use crate::{
error::{Error, ParseError},
PeerId,
};
use ed25519_dalek::{self as ed25519, Signer as _, Verifier as _};
use std::fmt;
use zeroize::Zeroize;
/// An Ed25519 keypair.
#[derive(Clone)]
pub struct Keypair(ed25519::SigningKey);
impl Keypair {
/// Generate a new random Ed25519 keypair.
pub fn generate() -> Keypair {
Keypair::from(SecretKey::generate())
}
/// Convert the keypair into a byte array by concatenating the bytes
/// of the secret scalar and the compressed public point,
/// an informal standard for encoding Ed25519 keypairs.
pub fn to_bytes(&self) -> [u8; 64] {
self.0.to_keypair_bytes()
}
/// Try to parse a keypair from the [binary format](https://datatracker.ietf.org/doc/html/rfc8032#section-5.1.5)
/// produced by [`Keypair::to_bytes`], zeroing the input on success.
///
/// Note that this binary format is the same as `ed25519_dalek`'s and `ed25519_zebra`'s.
pub fn try_from_bytes(kp: &mut [u8]) -> Result<Keypair, Error> {
let bytes = <[u8; 64]>::try_from(&*kp)
.map_err(|e| Error::Other(format!("Failed to parse ed25519 keypair: {e}")))?;
ed25519::SigningKey::from_keypair_bytes(&bytes)
.map(|k| {
kp.zeroize();
Keypair(k)
})
.map_err(|e| Error::Other(format!("Failed to parse ed25519 keypair: {e}")))
}
/// Sign a message using the private key of this keypair.
pub fn sign(&self, msg: &[u8]) -> Vec<u8> {
self.0.sign(msg).to_bytes().to_vec()
}
/// Get the public key of this keypair.
pub fn public(&self) -> PublicKey {
PublicKey(self.0.verifying_key())
}
/// Get the secret key of this keypair.
pub fn secret(&self) -> SecretKey {
SecretKey(self.0.to_bytes())
}
}
impl fmt::Debug for Keypair {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Keypair").field("public", &self.0.verifying_key()).finish()
}
}
/// Demote an Ed25519 keypair to a secret key.
impl From<Keypair> for SecretKey {
fn from(kp: Keypair) -> SecretKey {
SecretKey(kp.0.to_bytes())
}
}
/// Promote an Ed25519 secret key into a keypair.
impl From<SecretKey> for Keypair {
fn from(sk: SecretKey) -> Keypair {
let signing = ed25519::SigningKey::from_bytes(&sk.0);
Keypair(signing)
}
}
/// An Ed25519 public key.
#[derive(Eq, Clone)]
pub struct PublicKey(ed25519::VerifyingKey);
impl fmt::Debug for PublicKey {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_str("PublicKey(compressed): ")?;
for byte in self.0.as_bytes() {
            // Zero-pad each byte so the hex string is unambiguous (0x0a prints as "0a", not "a").
            write!(f, "{byte:02x}")?;
}
Ok(())
}
}
impl PartialEq for PublicKey {
fn eq(&self, other: &Self) -> bool {
self.0.as_bytes().eq(other.0.as_bytes())
}
}
impl PublicKey {
/// Verify the Ed25519 signature on a message using the public key.
pub fn verify(&self, msg: &[u8], sig: &[u8]) -> bool {
ed25519::Signature::try_from(sig).and_then(|s| self.0.verify(msg, &s)).is_ok()
}
/// Convert the public key to a byte array in compressed form, i.e.
/// where one coordinate is represented by a single bit.
pub fn to_bytes(&self) -> [u8; 32] {
self.0.to_bytes()
}
/// Get the public key as a byte slice.
pub fn as_bytes(&self) -> &[u8] {
self.0.as_bytes()
}
/// Try to parse a public key from a byte array containing the actual key as produced by
/// `to_bytes`.
pub fn try_from_bytes(k: &[u8]) -> Result<PublicKey, ParseError> {
let k = <[u8; 32]>::try_from(k).map_err(|_| ParseError::InvalidPublicKey)?;
        // The error type of the verifying key is deliberately opaque to avoid side-channel
// leakage. We can't provide a more specific error type here.
ed25519::VerifyingKey::from_bytes(&k)
.map_err(|_| ParseError::InvalidPublicKey)
.map(PublicKey)
}
/// Convert public key to `PeerId`.
pub fn to_peer_id(&self) -> PeerId {
crate::crypto::PublicKey::Ed25519(self.clone()).into()
}
}
/// An Ed25519 secret key.
#[derive(Clone)]
pub struct SecretKey(ed25519::SecretKey);
/// View the bytes of the secret key.
impl AsRef<[u8]> for SecretKey {
fn as_ref(&self) -> &[u8] {
&self.0[..]
}
}
impl fmt::Debug for SecretKey {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "SecretKey")
}
}
impl SecretKey {
/// Generate a new Ed25519 secret key.
pub fn generate() -> SecretKey {
let signing = ed25519::SigningKey::generate(&mut rand::rngs::OsRng);
SecretKey(signing.to_bytes())
}
/// Try to parse an Ed25519 secret key from a byte slice
/// containing the actual key, zeroing the input on success.
/// If the bytes do not constitute a valid Ed25519 secret key, an error is
/// returned.
pub fn try_from_bytes(mut sk_bytes: impl AsMut<[u8]>) -> crate::Result<SecretKey> {
let sk_bytes = sk_bytes.as_mut();
let secret = <[u8; 32]>::try_from(&*sk_bytes)
.map_err(|e| Error::Other(format!("Failed to parse ed25519 secret key: {e}")))?;
sk_bytes.zeroize();
Ok(SecretKey(secret))
}
/// Convert this secret key to a byte array.
pub fn to_bytes(&self) -> [u8; 32] {
self.0
}
}
#[cfg(test)]
mod tests {
use super::*;
use quickcheck::*;
fn eq_keypairs(kp1: &Keypair, kp2: &Keypair) -> bool {
kp1.public() == kp2.public() && kp1.0.to_bytes() == kp2.0.to_bytes()
}
#[test]
fn ed25519_keypair_encode_decode() {
fn prop() -> bool {
let kp1 = Keypair::generate();
let mut kp1_enc = kp1.to_bytes();
let kp2 = Keypair::try_from_bytes(&mut kp1_enc).unwrap();
eq_keypairs(&kp1, &kp2) && kp1_enc.iter().all(|b| *b == 0)
}
QuickCheck::new().tests(10).quickcheck(prop as fn() -> _);
}
#[test]
fn ed25519_keypair_from_secret() {
fn prop() -> bool {
let kp1 = Keypair::generate();
let mut sk = kp1.0.to_bytes();
let kp2 = Keypair::from(SecretKey::try_from_bytes(&mut sk).unwrap());
eq_keypairs(&kp1, &kp2) && sk == [0u8; 32]
}
QuickCheck::new().tests(10).quickcheck(prop as fn() -> _);
}
#[test]
fn ed25519_signature() {
let kp = Keypair::generate();
let pk = kp.public();
let msg = "hello world".as_bytes();
let sig = kp.sign(msg);
assert!(pk.verify(msg, &sig));
let mut invalid_sig = sig.clone();
invalid_sig[3..6].copy_from_slice(&[10, 23, 42]);
assert!(!pk.verify(msg, &invalid_sig));
let invalid_msg = "h3ll0 w0rld".as_bytes();
assert!(!pk.verify(invalid_msg, &sig));
}
#[test]
fn secret_key() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let key = Keypair::generate();
tracing::trace!("keypair: {:?}", key);
tracing::trace!("secret: {:?}", key.secret());
tracing::trace!("public: {:?}", key.public());
let new_key = Keypair::from(key.secret());
assert_eq!(new_key.secret().as_ref(), key.secret().as_ref());
assert_eq!(new_key.public(), key.public());
let new_secret = SecretKey::from(new_key.clone());
assert_eq!(new_secret.as_ref(), new_key.secret().as_ref());
let cloned_secret = new_secret.clone();
assert_eq!(cloned_secret.as_ref(), new_secret.as_ref());
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
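Both `Keypair::try_from_bytes` and `SecretKey::try_from_bytes` above follow the same pattern: copy the key material out of the caller's buffer, then zeroize that buffer on success so the secret does not linger in memory the caller still owns (the tests check `kp1_enc.iter().all(|b| *b == 0)`). A minimal, dependency-free sketch of the pattern, with a plain loop standing in for the `zeroize` crate and a hypothetical `take_secret` helper:

```rust
// Sketch of the zero-on-success parsing pattern: validate length first,
// copy the secret out, then wipe the caller's buffer.
fn take_secret(buf: &mut [u8]) -> Result<[u8; 32], String> {
    let secret = <[u8; 32]>::try_from(&*buf)
        .map_err(|e| format!("expected 32 bytes: {e}"))?;
    // In litep2p this is `buf.zeroize()`; a plain loop keeps the sketch std-only.
    buf.iter_mut().for_each(|b| *b = 0);
    Ok(secret)
}

fn main() {
    let mut input = [7u8; 32];
    let secret = take_secret(&mut input).expect("valid length");
    assert_eq!(secret, [7u8; 32]);
    // The caller's copy has been wiped.
    assert!(input.iter().all(|b| *b == 0));
    println!("ok");
}
```

Note that a plain loop is only an illustration: `zeroize` additionally guards against the compiler optimizing the wipe away, which is why the real code depends on it.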
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/crypto/tls/certificate.rs | src/crypto/tls/certificate.rs |
// Copyright 2021 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! X.509 certificate handling for libp2p
//!
//! This module handles generation, signing, and verification of certificates.
use crate::{
crypto::{ed25519::Keypair, RemotePublicKey},
PeerId,
};
use x509_parser::{prelude::*, signature_algorithm::SignatureAlgorithm};
/// The libp2p Public Key Extension is an X.509 extension
/// with the Object Identifier 1.3.6.1.4.1.53594.1.1,
/// allocated by IANA to the libp2p project at Protocol Labs.
const P2P_EXT_OID: [u64; 9] = [1, 3, 6, 1, 4, 1, 53594, 1, 1];
/// The peer signs the concatenation of the string `libp2p-tls-handshake:`
/// and the public key that it used to generate the certificate carrying
/// the libp2p Public Key Extension, using its private host key.
/// This signature provides cryptographic proof that the peer was
/// in possession of the private host key at the time the certificate was signed.
const P2P_SIGNING_PREFIX: [u8; 21] = *b"libp2p-tls-handshake:";
// Certificates MUST use the NamedCurve encoding for elliptic curve parameters.
// Similarly, hash functions with an output length less than 256 bits MUST NOT be used.
static P2P_SIGNATURE_ALGORITHM: &rcgen::SignatureAlgorithm = &rcgen::PKCS_ECDSA_P256_SHA256;
/// Generates a self-signed TLS certificate that includes a libp2p-specific
/// certificate extension containing the public key of the given keypair.
pub fn generate(
identity_keypair: &Keypair,
) -> Result<(rustls::Certificate, rustls::PrivateKey), GenError> {
// Keypair used to sign the certificate.
// SHOULD NOT be related to the host's key.
// Endpoints MAY generate a new key and certificate
// for every connection attempt, or they MAY reuse the same key
// and certificate for multiple connections.
let certificate_keypair = rcgen::KeyPair::generate_for(P2P_SIGNATURE_ALGORITHM)?;
let rustls_key = rustls::PrivateKey(certificate_keypair.serialize_der());
let certificate = {
let mut params = rcgen::CertificateParams::new(vec![])?;
params.distinguished_name = rcgen::DistinguishedName::new();
params.custom_extensions.push(make_libp2p_extension(
identity_keypair,
&certificate_keypair,
)?);
params.self_signed(&certificate_keypair)?
};
let rustls_certificate = rustls::Certificate(certificate.der().to_vec());
Ok((rustls_certificate, rustls_key))
}
/// Attempts to parse the provided bytes as a [`P2pCertificate`].
///
/// For this to succeed, the certificate must contain the specified extension and the signature must
/// match the embedded public key.
pub fn parse(certificate: &rustls::Certificate) -> Result<P2pCertificate<'_>, ParseError> {
let certificate = parse_unverified(certificate.as_ref())?;
certificate.verify()?;
Ok(certificate)
}
/// An X.509 certificate with a libp2p-specific extension
/// is used to secure libp2p connections.
pub struct P2pCertificate<'a> {
certificate: X509Certificate<'a>,
/// This is a specific libp2p Public Key Extension with two values:
/// * the public host key
/// * a signature performed using the private host key
extension: P2pExtension,
}
/// The contents of the specific libp2p extension, containing the public host key
/// and a signature performed using the private host key.
pub struct P2pExtension {
public_key: RemotePublicKey,
/// This signature provides cryptographic proof that the peer was
/// in possession of the private host key at the time the certificate was signed.
signature: Vec<u8>,
/// PeerId derived from the public key. While not being part of the extension, we store it to
/// avoid the need to serialize the public key back to protobuf.
peer_id: PeerId,
}
#[derive(Debug, thiserror::Error)]
#[error(transparent)]
pub struct GenError(#[from] rcgen::Error);
#[derive(Debug, thiserror::Error)]
#[error(transparent)]
pub struct ParseError(#[from] pub(crate) webpki::Error);
#[derive(Debug, thiserror::Error)]
#[error(transparent)]
pub struct VerificationError(#[from] pub(crate) webpki::Error);
/// Internal function that only parses but does not verify the certificate.
///
/// Useful for testing but unsuitable for production.
fn parse_unverified<'a>(der_input: &'a [u8]) -> Result<P2pCertificate<'a>, webpki::Error> {
let x509 = X509Certificate::from_der(der_input)
.map(|(_rest_input, x509)| x509)
.map_err(|_| webpki::Error::BadDer)?;
let p2p_ext_oid = der_parser::oid::Oid::from(&P2P_EXT_OID)
.expect("This is a valid OID of p2p extension; qed");
let mut libp2p_extension = None;
for ext in x509.extensions() {
let oid = &ext.oid;
if oid == &p2p_ext_oid && libp2p_extension.is_some() {
// The extension was already parsed
return Err(webpki::Error::BadDer);
}
if oid == &p2p_ext_oid {
            // The public host key and the signature are ASN.1-encoded
// into the SignedKey data structure, which is carried
// in the libp2p Public Key Extension.
// SignedKey ::= SEQUENCE {
// publicKey OCTET STRING,
// signature OCTET STRING
// }
let (public_key_protobuf, signature): (Vec<u8>, Vec<u8>) =
yasna::decode_der(ext.value).map_err(|_| webpki::Error::ExtensionValueInvalid)?;
// The publicKey field of SignedKey contains the public host key
// of the endpoint, encoded using the following protobuf:
// enum KeyType {
// RSA = 0;
// Ed25519 = 1;
// Secp256k1 = 2;
// ECDSA = 3;
// }
// message PublicKey {
// required KeyType Type = 1;
// required bytes Data = 2;
// }
let public_key = RemotePublicKey::from_protobuf_encoding(&public_key_protobuf)
.map_err(|_| webpki::Error::UnknownIssuer)?;
let peer_id = PeerId::from_public_key_protobuf(&public_key_protobuf);
let ext = P2pExtension {
public_key,
signature,
peer_id,
};
libp2p_extension = Some(ext);
continue;
}
if ext.critical {
// Endpoints MUST abort the connection attempt if the certificate
// contains critical extensions that the endpoint does not understand.
return Err(webpki::Error::UnsupportedCriticalExtension);
}
// Implementations MUST ignore non-critical extensions with unknown OIDs.
}
// The certificate MUST contain the libp2p Public Key Extension.
// If this extension is missing, endpoints MUST abort the connection attempt.
let extension = libp2p_extension.ok_or(webpki::Error::BadDer)?;
let certificate = P2pCertificate {
certificate: x509,
extension,
};
Ok(certificate)
}
fn make_libp2p_extension(
identity_keypair: &Keypair,
certificate_pubkey: &impl rcgen::PublicKeyData,
) -> Result<rcgen::CustomExtension, rcgen::Error> {
// The peer signs the concatenation of the string `libp2p-tls-handshake:`
// and the public key (in SPKI DER format) that it used to generate the certificate carrying
// the libp2p Public Key Extension, using its private host key.
let signature = {
let mut msg = vec![];
msg.extend(P2P_SIGNING_PREFIX);
msg.extend(certificate_pubkey.subject_public_key_info());
identity_keypair.sign(&msg)
};
    // The public host key and the signature are ASN.1-encoded
// into the SignedKey data structure, which is carried
// in the libp2p Public Key Extension.
// SignedKey ::= SEQUENCE {
// publicKey OCTET STRING,
// signature OCTET STRING
// }
let extension_content = {
let serialized_pubkey =
crate::crypto::PublicKey::Ed25519(identity_keypair.public()).to_protobuf_encoding();
yasna::encode_der(&(serialized_pubkey, signature))
};
// This extension MAY be marked critical.
let mut ext = rcgen::CustomExtension::from_oid_content(&P2P_EXT_OID, extension_content);
ext.set_criticality(true);
Ok(ext)
}
impl P2pCertificate<'_> {
/// The [`PeerId`] of the remote peer.
pub fn peer_id(&self) -> PeerId {
self.extension.peer_id
}
/// Verify the `signature` of the `message` signed by the private key corresponding to the
/// public key stored in the certificate.
pub fn verify_signature(
&self,
signature_scheme: rustls::SignatureScheme,
message: &[u8],
signature: &[u8],
) -> Result<(), VerificationError> {
let pk = self.public_key(signature_scheme)?;
pk.verify(message, signature)
.map_err(|_| webpki::Error::InvalidSignatureForPublicKey)?;
Ok(())
}
/// Get a [`ring::signature::UnparsedPublicKey`] for this `signature_scheme`.
/// Return `Error` if the `signature_scheme` does not match the public key signature
/// and hashing algorithm or if the `signature_scheme` is not supported.
fn public_key(
&self,
signature_scheme: rustls::SignatureScheme,
) -> Result<ring::signature::UnparsedPublicKey<&[u8]>, webpki::Error> {
use ring::signature;
use rustls::SignatureScheme::*;
let current_signature_scheme = self.signature_scheme()?;
if signature_scheme != current_signature_scheme {
// This certificate was signed with a different signature scheme
return Err(webpki::Error::UnsupportedSignatureAlgorithmForPublicKey);
}
let verification_algorithm: &dyn signature::VerificationAlgorithm = match signature_scheme {
RSA_PKCS1_SHA256 => &signature::RSA_PKCS1_2048_8192_SHA256,
RSA_PKCS1_SHA384 => &signature::RSA_PKCS1_2048_8192_SHA384,
RSA_PKCS1_SHA512 => &signature::RSA_PKCS1_2048_8192_SHA512,
ECDSA_NISTP256_SHA256 => &signature::ECDSA_P256_SHA256_ASN1,
ECDSA_NISTP384_SHA384 => &signature::ECDSA_P384_SHA384_ASN1,
ECDSA_NISTP521_SHA512 => {
// See https://github.com/briansmith/ring/issues/824
return Err(webpki::Error::UnsupportedSignatureAlgorithm);
}
RSA_PSS_SHA256 => &signature::RSA_PSS_2048_8192_SHA256,
RSA_PSS_SHA384 => &signature::RSA_PSS_2048_8192_SHA384,
RSA_PSS_SHA512 => &signature::RSA_PSS_2048_8192_SHA512,
ED25519 => &signature::ED25519,
ED448 => {
// See https://github.com/briansmith/ring/issues/463
return Err(webpki::Error::UnsupportedSignatureAlgorithm);
}
// Similarly, hash functions with an output length less than 256 bits
// MUST NOT be used, due to the possibility of collision attacks.
// In particular, MD5 and SHA1 MUST NOT be used.
RSA_PKCS1_SHA1 => return Err(webpki::Error::UnsupportedSignatureAlgorithm),
ECDSA_SHA1_Legacy => return Err(webpki::Error::UnsupportedSignatureAlgorithm),
Unknown(_) => return Err(webpki::Error::UnsupportedSignatureAlgorithm),
};
let spki = &self.certificate.tbs_certificate.subject_pki;
let key = signature::UnparsedPublicKey::new(
verification_algorithm,
spki.subject_public_key.as_ref(),
);
Ok(key)
}
/// This method validates the certificate according to libp2p TLS 1.3 specs.
/// The certificate MUST:
/// 1. be valid at the time it is received by the peer;
/// 2. use the NamedCurve encoding;
/// 3. use hash functions with an output length not less than 256 bits;
/// 4. be self signed;
/// 5. contain a valid signature in the specific libp2p extension.
fn verify(&self) -> Result<(), webpki::Error> {
use webpki::Error;
// The certificate MUST have NotBefore and NotAfter fields set
// such that the certificate is valid at the time it is received by the peer.
if !self.certificate.validity().is_valid() {
return Err(Error::InvalidCertValidity);
}
// Certificates MUST use the NamedCurve encoding for elliptic curve parameters.
// Similarly, hash functions with an output length less than 256 bits
// MUST NOT be used, due to the possibility of collision attacks.
// In particular, MD5 and SHA1 MUST NOT be used.
// Endpoints MUST abort the connection attempt if it is not used.
let signature_scheme = self.signature_scheme()?;
// Endpoints MUST abort the connection attempt if the certificate’s
// self-signature is not valid.
let raw_certificate = self.certificate.tbs_certificate.as_ref();
let signature = self.certificate.signature_value.as_ref();
// check if self signed
self.verify_signature(signature_scheme, raw_certificate, signature)
.map_err(|_| Error::SignatureAlgorithmMismatch)?;
let subject_pki = self.certificate.public_key().raw;
// The peer signs the concatenation of the string `libp2p-tls-handshake:`
// and the public key that it used to generate the certificate carrying
// the libp2p Public Key Extension, using its private host key.
let mut msg = vec![];
msg.extend(P2P_SIGNING_PREFIX);
msg.extend(subject_pki);
// This signature provides cryptographic proof that the peer was in possession
// of the private host key at the time the certificate was signed.
// Peers MUST verify the signature, and abort the connection attempt
// if signature verification fails.
let user_owns_sk = self.extension.public_key.verify(&msg, &self.extension.signature);
if !user_owns_sk {
return Err(Error::UnknownIssuer);
}
Ok(())
}
/// Return the signature scheme corresponding to [`AlgorithmIdentifier`]s
/// of `subject_pki` and `signature_algorithm`
/// according to <https://www.rfc-editor.org/rfc/rfc8446.html#section-4.2.3>.
fn signature_scheme(&self) -> Result<rustls::SignatureScheme, webpki::Error> {
// Certificates MUST use the NamedCurve encoding for elliptic curve parameters.
// Endpoints MUST abort the connection attempt if it is not used.
use oid_registry::*;
use rustls::SignatureScheme::*;
let signature_algorithm = &self.certificate.signature_algorithm;
let pki_algorithm = &self.certificate.tbs_certificate.subject_pki.algorithm;
if pki_algorithm.algorithm == OID_PKCS1_RSAENCRYPTION {
if signature_algorithm.algorithm == OID_PKCS1_SHA256WITHRSA {
return Ok(RSA_PKCS1_SHA256);
}
if signature_algorithm.algorithm == OID_PKCS1_SHA384WITHRSA {
return Ok(RSA_PKCS1_SHA384);
}
if signature_algorithm.algorithm == OID_PKCS1_SHA512WITHRSA {
return Ok(RSA_PKCS1_SHA512);
}
if signature_algorithm.algorithm == OID_PKCS1_RSASSAPSS {
// According to https://datatracker.ietf.org/doc/html/rfc4055#section-3.1:
                // Inside the params there should be a sequence of:
// - Hash Algorithm
// - Mask Algorithm
// - Salt Length
// - Trailer Field
// We are interested in Hash Algorithm only
if let Ok(SignatureAlgorithm::RSASSA_PSS(params)) =
SignatureAlgorithm::try_from(signature_algorithm)
{
let hash_oid = params.hash_algorithm_oid();
if hash_oid == &OID_NIST_HASH_SHA256 {
return Ok(RSA_PSS_SHA256);
}
if hash_oid == &OID_NIST_HASH_SHA384 {
return Ok(RSA_PSS_SHA384);
}
if hash_oid == &OID_NIST_HASH_SHA512 {
return Ok(RSA_PSS_SHA512);
}
}
// Default hash algo is SHA-1, however:
// In particular, MD5 and SHA1 MUST NOT be used.
return Err(webpki::Error::UnsupportedSignatureAlgorithm);
}
}
if pki_algorithm.algorithm == OID_KEY_TYPE_EC_PUBLIC_KEY {
let signature_param = pki_algorithm
.parameters
.as_ref()
.ok_or(webpki::Error::BadDer)?
.as_oid()
.map_err(|_| webpki::Error::BadDer)?;
if signature_param == OID_EC_P256
&& signature_algorithm.algorithm == OID_SIG_ECDSA_WITH_SHA256
{
return Ok(ECDSA_NISTP256_SHA256);
}
if signature_param == OID_NIST_EC_P384
&& signature_algorithm.algorithm == OID_SIG_ECDSA_WITH_SHA384
{
return Ok(ECDSA_NISTP384_SHA384);
}
if signature_param == OID_NIST_EC_P521
&& signature_algorithm.algorithm == OID_SIG_ECDSA_WITH_SHA512
{
return Ok(ECDSA_NISTP521_SHA512);
}
return Err(webpki::Error::UnsupportedSignatureAlgorithm);
}
if signature_algorithm.algorithm == OID_SIG_ED25519 {
return Ok(ED25519);
}
if signature_algorithm.algorithm == OID_SIG_ED448 {
return Ok(ED448);
}
Err(webpki::Error::UnsupportedSignatureAlgorithm)
}
}
#[cfg(test)]
mod tests {
use super::*;
use hex_literal::hex;
#[test]
fn sanity_check() {
let keypair = crate::crypto::ed25519::Keypair::generate();
let (cert, _) = generate(&keypair).unwrap();
let parsed_cert = parse(&cert).unwrap();
assert!(parsed_cert.verify().is_ok());
assert_eq!(
crate::crypto::RemotePublicKey::Ed25519(keypair.public()),
parsed_cert.extension.public_key
);
}
macro_rules! check_cert {
($name:ident, $path:literal, $scheme:path) => {
#[test]
fn $name() {
let cert: &[u8] = include_bytes!($path);
let cert = parse_unverified(cert).unwrap();
                // Verification fails because the p2p extension was not signed
                // with the private key of the certificate.
                assert!(cert.verify().is_err());
assert_eq!(cert.signature_scheme(), Ok($scheme));
}
};
}
check_cert! {ed448, "./test_assets/ed448.der", rustls::SignatureScheme::ED448}
check_cert! {ed25519, "./test_assets/ed25519.der", rustls::SignatureScheme::ED25519}
check_cert! {rsa_pkcs1_sha256, "./test_assets/rsa_pkcs1_sha256.der", rustls::SignatureScheme::RSA_PKCS1_SHA256}
check_cert! {rsa_pkcs1_sha384, "./test_assets/rsa_pkcs1_sha384.der", rustls::SignatureScheme::RSA_PKCS1_SHA384}
check_cert! {rsa_pkcs1_sha512, "./test_assets/rsa_pkcs1_sha512.der", rustls::SignatureScheme::RSA_PKCS1_SHA512}
check_cert! {nistp256_sha256, "./test_assets/nistp256_sha256.der", rustls::SignatureScheme::ECDSA_NISTP256_SHA256}
check_cert! {nistp384_sha384, "./test_assets/nistp384_sha384.der", rustls::SignatureScheme::ECDSA_NISTP384_SHA384}
check_cert! {nistp521_sha512, "./test_assets/nistp521_sha512.der", rustls::SignatureScheme::ECDSA_NISTP521_SHA512}
#[test]
fn rsa_pss_sha384() {
let cert = rustls::Certificate(include_bytes!("./test_assets/rsa_pss_sha384.der").to_vec());
let cert = parse(&cert).unwrap();
assert_eq!(
cert.signature_scheme(),
Ok(rustls::SignatureScheme::RSA_PSS_SHA384)
);
}
#[test]
fn nistp384_sha256() {
let cert: &[u8] = include_bytes!("./test_assets/nistp384_sha256.der");
let cert = parse_unverified(cert).unwrap();
assert!(cert.signature_scheme().is_err());
}
#[test]
fn can_parse_certificate_with_ed25519_keypair() {
let certificate = rustls::Certificate(hex!("308201773082011ea003020102020900f5bd0debaa597f52300a06082a8648ce3d04030230003020170d3735303130313030303030305a180f34303936303130313030303030305a30003059301306072a8648ce3d020106082a8648ce3d030107034200046bf9871220d71dcb3483ecdfcbfcc7c103f8509d0974b3c18ab1f1be1302d643103a08f7a7722c1b247ba3876fe2c59e26526f479d7718a85202ddbe47562358a37f307d307b060a2b0601040183a25a01010101ff046a30680424080112207fda21856709c5ae12fd6e8450623f15f11955d384212b89f56e7e136d2e17280440aaa6bffabe91b6f30c35e3aa4f94b1188fed96b0ffdd393f4c58c1c047854120e674ce64c788406d1c2c4b116581fd7411b309881c3c7f20b46e54c7e6fe7f0f300a06082a8648ce3d040302034700304402207d1a1dbd2bda235ff2ec87daf006f9b04ba076a5a5530180cd9c2e8f6399e09d0220458527178c7e77024601dbb1b256593e9b96d961b96349d1f560114f61a87595").to_vec());
let peer_id = parse(&certificate).unwrap().peer_id();
assert_eq!(
"12D3KooWJRSrypvnpHgc6ZAgyCni4KcSmbV7uGRaMw5LgMKT18fq"
.parse::<PeerId>()
.unwrap(),
peer_id
);
}
#[test]
fn fails_to_parse_bad_certificate_with_ed25519_keypair() {
let certificate = rustls::Certificate(hex!("308201773082011da003020102020830a73c5d896a1109300a06082a8648ce3d04030230003020170d3735303130313030303030305a180f34303936303130313030303030305a30003059301306072a8648ce3d020106082a8648ce3d03010703420004bbe62df9a7c1c46b7f1f21d556deec5382a36df146fb29c7f1240e60d7d5328570e3b71d99602b77a65c9b3655f62837f8d66b59f1763b8c9beba3be07778043a37f307d307b060a2b0601040183a25a01010101ff046a3068042408011220ec8094573afb9728088860864f7bcea2d4fd412fef09a8e2d24d482377c20db60440ecabae8354afa2f0af4b8d2ad871e865cb5a7c0c8d3dbdbf42de577f92461a0ebb0a28703e33581af7d2a4f2270fc37aec6261fcc95f8af08f3f4806581c730a300a06082a8648ce3d040302034800304502202dfb17a6fa0f94ee0e2e6a3b9fb6e986f311dee27392058016464bd130930a61022100ba4b937a11c8d3172b81e7cd04aedb79b978c4379c2b5b24d565dd5d67d3cb3c").to_vec());
match parse(&certificate) {
Ok(_) => panic!("parsing should have failed"),
Err(error) => {
assert_eq!(format!("{error}"), "UnknownIssuer");
}
}
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/crypto/tls/mod.rs | src/crypto/tls/mod.rs |
// Copyright 2021 Parity Technologies (UK) Ltd.
// Copyright 2022 Protocol Labs.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! TLS configuration based on libp2p TLS specs.
//!
//! See <https://github.com/libp2p/specs/blob/master/tls/tls.md>.
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
use crate::{crypto::ed25519::Keypair, PeerId};
use std::sync::Arc;
pub mod certificate;
mod verifier;
const P2P_ALPN: [u8; 6] = *b"libp2p";
/// Create a TLS server configuration for litep2p.
pub fn make_server_config(
keypair: &Keypair,
) -> Result<rustls::ServerConfig, certificate::GenError> {
let (certificate, private_key) = certificate::generate(keypair)?;
let mut crypto = rustls::ServerConfig::builder()
.with_cipher_suites(verifier::CIPHERSUITES)
.with_safe_default_kx_groups()
.with_protocol_versions(verifier::PROTOCOL_VERSIONS)
.expect("Cipher suites and kx groups are configured; qed")
.with_client_cert_verifier(Arc::new(verifier::Libp2pCertificateVerifier::new()))
.with_single_cert(vec![certificate], private_key)
.expect("Server cert key DER is valid; qed");
crypto.alpn_protocols = vec![P2P_ALPN.to_vec()];
Ok(crypto)
}
/// Create a TLS client configuration for litep2p.
pub fn make_client_config(
keypair: &Keypair,
remote_peer_id: Option<PeerId>,
) -> Result<rustls::ClientConfig, certificate::GenError> {
let (certificate, private_key) = certificate::generate(keypair)?;
let mut crypto = rustls::ClientConfig::builder()
.with_cipher_suites(verifier::CIPHERSUITES)
.with_safe_default_kx_groups()
.with_protocol_versions(verifier::PROTOCOL_VERSIONS)
.expect("Cipher suites and kx groups are configured; qed")
.with_custom_certificate_verifier(Arc::new(
verifier::Libp2pCertificateVerifier::with_remote_peer_id(remote_peer_id),
))
.with_single_cert(vec![certificate], private_key)
.expect("Client cert key DER is valid; qed");
crypto.alpn_protocols = vec![P2P_ALPN.to_vec()];
Ok(crypto)
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/crypto/tls/verifier.rs | src/crypto/tls/verifier.rs | // Copyright 2021 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! TLS 1.3 certificates and handshakes handling for libp2p
//!
//! This module handles verification of a client/server certificate chain
//! and of signatures allegedly produced by the given certificates.
use crate::{crypto::tls::certificate, PeerId};
use rustls::{
cipher_suite::{
TLS13_AES_128_GCM_SHA256, TLS13_AES_256_GCM_SHA384, TLS13_CHACHA20_POLY1305_SHA256,
},
client::{HandshakeSignatureValid, ServerCertVerified, ServerCertVerifier},
internal::msgs::handshake::DigitallySignedStruct,
server::{ClientCertVerified, ClientCertVerifier},
Certificate, DistinguishedNames, SignatureScheme, SupportedCipherSuite,
SupportedProtocolVersion,
};
/// The protocol versions supported by this verifier.
///
/// The spec says:
///
/// > The libp2p handshake uses TLS 1.3 (and higher).
/// > Endpoints MUST NOT negotiate lower TLS versions.
pub static PROTOCOL_VERSIONS: &[&SupportedProtocolVersion] = &[&rustls::version::TLS13];
/// A list of the TLS 1.3 cipher suites supported by rustls.
// By default rustls creates client/server configs with both
// TLS 1.3 __and__ 1.2 cipher suites. But we don't need 1.2.
pub static CIPHERSUITES: &[SupportedCipherSuite] = &[
// TLS1.3 suites
TLS13_CHACHA20_POLY1305_SHA256,
TLS13_AES_256_GCM_SHA384,
TLS13_AES_128_GCM_SHA256,
];
/// Implementation of the `rustls` certificate verification traits for libp2p.
///
/// Only TLS 1.3 is supported. TLS 1.2 should be disabled in the configuration of `rustls`.
pub struct Libp2pCertificateVerifier {
/// The peer ID we intend to connect to
remote_peer_id: Option<PeerId>,
}
/// libp2p requires the following of X.509 server certificate chains:
///
/// - Exactly one certificate must be presented.
/// - The certificate must be self-signed.
/// - The certificate must have a valid libp2p extension that includes a signature of its public
/// key.
impl Libp2pCertificateVerifier {
pub fn new() -> Self {
Self {
remote_peer_id: None,
}
}
pub fn with_remote_peer_id(remote_peer_id: Option<PeerId>) -> Self {
Self { remote_peer_id }
}
/// Return the list of SignatureSchemes that this verifier will handle,
/// in `verify_tls12_signature` and `verify_tls13_signature` calls.
///
/// This should be in priority order, with the most preferred first.
fn verification_schemes() -> Vec<SignatureScheme> {
vec![
// TODO SignatureScheme::ECDSA_NISTP521_SHA512 is not supported by `ring` yet
SignatureScheme::ECDSA_NISTP384_SHA384,
SignatureScheme::ECDSA_NISTP256_SHA256,
// TODO SignatureScheme::ED448 is not supported by `ring` yet
SignatureScheme::ED25519,
// In particular, RSA SHOULD NOT be used unless
// no elliptic curve algorithms are supported.
SignatureScheme::RSA_PSS_SHA512,
SignatureScheme::RSA_PSS_SHA384,
SignatureScheme::RSA_PSS_SHA256,
SignatureScheme::RSA_PKCS1_SHA512,
SignatureScheme::RSA_PKCS1_SHA384,
SignatureScheme::RSA_PKCS1_SHA256,
]
}
}
impl ServerCertVerifier for Libp2pCertificateVerifier {
fn verify_server_cert(
&self,
end_entity: &Certificate,
intermediates: &[Certificate],
_server_name: &rustls::ServerName,
_scts: &mut dyn Iterator<Item = &[u8]>,
_ocsp_response: &[u8],
_now: std::time::SystemTime,
) -> Result<ServerCertVerified, rustls::Error> {
let peer_id = verify_presented_certs(end_entity, intermediates)?;
if let Some(remote_peer_id) = self.remote_peer_id {
// The public host key allows the peer to calculate the peer ID of the peer
// it is connecting to. Clients MUST verify that the peer ID derived from
// the certificate matches the peer ID they intended to connect to,
// and MUST abort the connection if there is a mismatch.
if remote_peer_id != peer_id {
return Err(rustls::Error::PeerMisbehavedError(
"Wrong peer ID in p2p extension".to_string(),
));
}
}
Ok(ServerCertVerified::assertion())
}
fn verify_tls12_signature(
&self,
_message: &[u8],
_cert: &Certificate,
_dss: &DigitallySignedStruct,
) -> Result<HandshakeSignatureValid, rustls::Error> {
unreachable!("`PROTOCOL_VERSIONS` only allows TLS 1.3")
}
fn verify_tls13_signature(
&self,
message: &[u8],
cert: &Certificate,
dss: &DigitallySignedStruct,
) -> Result<HandshakeSignatureValid, rustls::Error> {
verify_tls13_signature(cert, dss.scheme, message, dss.signature())
}
fn supported_verify_schemes(&self) -> Vec<SignatureScheme> {
Self::verification_schemes()
}
}
/// libp2p requires the following of X.509 client certificate chains:
///
/// - Exactly one certificate must be presented. In particular, client authentication is mandatory
/// in libp2p.
/// - The certificate must be self-signed.
/// - The certificate must have a valid libp2p extension that includes a signature of its public
/// key.
impl ClientCertVerifier for Libp2pCertificateVerifier {
fn offer_client_auth(&self) -> bool {
true
}
fn client_auth_root_subjects(&self) -> Option<DistinguishedNames> {
Some(vec![])
}
fn verify_client_cert(
&self,
end_entity: &Certificate,
intermediates: &[Certificate],
_now: std::time::SystemTime,
) -> Result<ClientCertVerified, rustls::Error> {
let _: PeerId = verify_presented_certs(end_entity, intermediates)?;
Ok(ClientCertVerified::assertion())
}
fn verify_tls12_signature(
&self,
_message: &[u8],
_cert: &Certificate,
_dss: &DigitallySignedStruct,
) -> Result<HandshakeSignatureValid, rustls::Error> {
unreachable!("`PROTOCOL_VERSIONS` only allows TLS 1.3")
}
fn verify_tls13_signature(
&self,
message: &[u8],
cert: &Certificate,
dss: &DigitallySignedStruct,
) -> Result<HandshakeSignatureValid, rustls::Error> {
verify_tls13_signature(cert, dss.scheme, message, dss.signature())
}
fn supported_verify_schemes(&self) -> Vec<SignatureScheme> {
Self::verification_schemes()
}
}
/// When receiving the certificate chain, an endpoint
/// MUST check these conditions and abort the connection attempt if
/// (a) the presented certificate is not yet valid, OR
/// (b) if it is expired.
/// Endpoints MUST abort the connection attempt if more than one certificate is received,
/// or if the certificate’s self-signature is not valid.
fn verify_presented_certs(
end_entity: &Certificate,
intermediates: &[Certificate],
) -> Result<PeerId, rustls::Error> {
if !intermediates.is_empty() {
return Err(rustls::Error::General(
"libp2p-tls requires exactly one certificate".into(),
));
}
let cert = certificate::parse(end_entity)?;
Ok(cert.peer_id())
}
fn verify_tls13_signature(
cert: &Certificate,
signature_scheme: SignatureScheme,
message: &[u8],
signature: &[u8],
) -> Result<HandshakeSignatureValid, rustls::Error> {
certificate::parse(cert)?.verify_signature(signature_scheme, message, signature)?;
Ok(HandshakeSignatureValid::assertion())
}
impl From<certificate::ParseError> for rustls::Error {
fn from(certificate::ParseError(e): certificate::ParseError) -> Self {
use webpki::Error::*;
match e {
BadDer => rustls::Error::InvalidCertificateEncoding,
e => rustls::Error::InvalidCertificateData(format!("invalid peer certificate: {e}")),
}
}
}
impl From<certificate::VerificationError> for rustls::Error {
fn from(certificate::VerificationError(e): certificate::VerificationError) -> Self {
use webpki::Error::*;
match e {
InvalidSignatureForPublicKey => rustls::Error::InvalidCertificateSignature,
UnsupportedSignatureAlgorithm | UnsupportedSignatureAlgorithmForPublicKey =>
rustls::Error::InvalidCertificateSignatureType,
e => rustls::Error::InvalidCertificateData(format!("invalid peer certificate: {e}")),
}
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/crypto/tls/tests/smoke.rs | src/crypto/tls/tests/smoke.rs |
use futures::{future, StreamExt};
use libp2p_core::multiaddr::Protocol;
use libp2p_core::transport::MemoryTransport;
use libp2p_core::upgrade::Version;
use libp2p_core::Transport;
use libp2p_swarm::{keep_alive, Swarm, SwarmBuilder, SwarmEvent};
#[tokio::test]
async fn can_establish_connection() {
let mut swarm1 = make_swarm();
let mut swarm2 = make_swarm();
let listen_address = {
let expected_listener_id = swarm1.listen_on(Protocol::Memory(0).into()).unwrap();
loop {
match swarm1.next().await.unwrap() {
SwarmEvent::NewListenAddr {
address,
listener_id,
} if listener_id == expected_listener_id => break address,
_ => continue,
};
}
};
swarm2.dial(listen_address).unwrap();
let await_inbound_connection = async {
loop {
match swarm1.next().await.unwrap() {
SwarmEvent::ConnectionEstablished { peer_id, .. } => break peer_id,
SwarmEvent::IncomingConnectionError { error, .. } => {
panic!("Incoming connection failed: {error}")
}
_ => continue,
};
}
};
let await_outbound_connection = async {
loop {
match swarm2.next().await.unwrap() {
SwarmEvent::ConnectionEstablished { peer_id, .. } => break peer_id,
SwarmEvent::OutgoingConnectionError { error, .. } => {
panic!("Failed to dial: {error}")
}
_ => continue,
};
}
};
let (inbound_peer_id, outbound_peer_id) =
future::join(await_inbound_connection, await_outbound_connection).await;
assert_eq!(&inbound_peer_id, swarm2.local_peer_id());
assert_eq!(&outbound_peer_id, swarm1.local_peer_id());
}
fn make_swarm() -> Swarm<keep_alive::Behaviour> {
let identity = libp2p_identity::Keypair::generate_ed25519();
let transport = MemoryTransport::default()
.upgrade(Version::V1)
.authenticate(libp2p_tls::Config::new(&identity).unwrap())
.multiplex(libp2p_yamux::YamuxConfig::default())
.boxed();
SwarmBuilder::without_executor(
transport,
keep_alive::Behaviour,
identity.public().to_peer_id(),
)
.build()
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/crypto/noise/x25519_spec.rs | src/crypto/noise/x25519_spec.rs |
// Copyright 2019 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use rand::Rng;
use x25519_dalek::{x25519, X25519_BASEPOINT_BYTES};
use zeroize::Zeroize;
use crate::crypto::noise::protocol::{Keypair, PublicKey, SecretKey};
/// An X25519 key.
#[derive(Clone)]
pub struct X25519Spec([u8; 32]);
impl AsRef<[u8]> for X25519Spec {
fn as_ref(&self) -> &[u8] {
self.0.as_ref()
}
}
impl Zeroize for X25519Spec {
fn zeroize(&mut self) {
self.0.zeroize()
}
}
impl Keypair<X25519Spec> {
/// An "empty" keypair as a starting state for DH computations in `snow`,
/// which get manipulated through the `snow::types::Dh` interface.
pub(super) fn default() -> Self {
Keypair {
secret: SecretKey(X25519Spec([0u8; 32])),
public: PublicKey(X25519Spec([0u8; 32])),
}
}
/// Create a new X25519 keypair.
pub fn new() -> Keypair<X25519Spec> {
let mut sk_bytes = [0u8; 32];
rand::thread_rng().fill(&mut sk_bytes);
let sk = SecretKey(X25519Spec(sk_bytes)); // Copy
sk_bytes.zeroize();
Self::from(sk)
}
}
impl Default for Keypair<X25519Spec> {
fn default() -> Self {
Self::new()
}
}
/// Promote an X25519 secret key into a keypair.
impl From<SecretKey<X25519Spec>> for Keypair<X25519Spec> {
fn from(secret: SecretKey<X25519Spec>) -> Keypair<X25519Spec> {
let public = PublicKey(X25519Spec(x25519((secret.0).0, X25519_BASEPOINT_BYTES)));
Keypair { secret, public }
}
}
impl snow::types::Dh for Keypair<X25519Spec> {
fn name(&self) -> &'static str {
"25519"
}
fn pub_len(&self) -> usize {
32
}
fn priv_len(&self) -> usize {
32
}
fn pubkey(&self) -> &[u8] {
self.public.as_ref()
}
fn privkey(&self) -> &[u8] {
self.secret.as_ref()
}
fn set(&mut self, sk: &[u8]) {
let mut secret = [0u8; 32];
secret.copy_from_slice(sk);
self.secret = SecretKey(X25519Spec(secret));
self.public = PublicKey(X25519Spec(x25519(secret, X25519_BASEPOINT_BYTES)));
secret.zeroize();
}
fn generate(&mut self, rng: &mut dyn snow::types::Random) {
let mut secret = [0u8; 32];
rng.fill_bytes(&mut secret);
self.secret = SecretKey(X25519Spec(secret));
self.public = PublicKey(X25519Spec(x25519(secret, X25519_BASEPOINT_BYTES)));
secret.zeroize();
}
fn dh(&self, pk: &[u8], shared_secret: &mut [u8]) -> Result<(), snow::Error> {
let mut p = [0; 32];
p.copy_from_slice(&pk[..32]);
let ss = x25519((self.secret.0).0, p);
shared_secret[..32].copy_from_slice(&ss[..]);
Ok(())
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/crypto/noise/mod.rs | src/crypto/noise/mod.rs |
// Copyright 2019 Parity Technologies (UK) Ltd.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Noise handshake and transport implementations.
use crate::{
config::Role,
crypto::{ed25519::Keypair, PublicKey, RemotePublicKey},
error::{NegotiationError, ParseError},
PeerId,
};
use bytes::{Buf, Bytes, BytesMut};
use futures::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
use prost::Message;
use snow::{Builder, HandshakeState, TransportState};
use std::{
fmt, io,
pin::Pin,
task::{Context, Poll},
};
mod protocol;
mod x25519_spec;
mod handshake_schema {
include!(concat!(env!("OUT_DIR"), "/noise.rs"));
}
/// Noise parameters.
const NOISE_PARAMETERS: &str = "Noise_XX_25519_ChaChaPoly_SHA256";
/// Prefix of static key signatures for domain separation.
pub(crate) const STATIC_KEY_DOMAIN: &str = "noise-libp2p-static-key:";
/// Maximum Noise message size.
const MAX_NOISE_MSG_LEN: usize = 65536;
/// Space given to the encryption buffer to hold key material.
const NOISE_EXTRA_ENCRYPT_SPACE: usize = 16;
/// Max read ahead factor for the noise socket.
///
/// Specifies how many multiples of `MAX_NOISE_MESSAGE_LEN` are read from the socket
/// using one call to `poll_read()`.
pub(crate) const MAX_READ_AHEAD_FACTOR: usize = 5;
/// Maximum write buffer size.
pub(crate) const MAX_WRITE_BUFFER_SIZE: usize = 2;
/// Maximum length for Noise protocol message payloads.
pub const MAX_FRAME_LEN: usize = MAX_NOISE_MSG_LEN - NOISE_EXTRA_ENCRYPT_SPACE;
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::crypto::noise";
#[derive(Debug)]
#[allow(clippy::large_enum_variant)]
enum NoiseState {
Handshake(HandshakeState),
Transport(TransportState),
}
pub struct NoiseContext {
keypair: snow::Keypair,
noise: NoiseState,
role: Role,
pub payload: Vec<u8>,
}
impl fmt::Debug for NoiseContext {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("NoiseContext")
.field("public", &self.noise)
.field("payload", &self.payload)
.field("role", &self.role)
.finish()
}
}
impl NoiseContext {
/// Assemble Noise payload and return [`NoiseContext`].
fn assemble(
noise: snow::HandshakeState,
keypair: snow::Keypair,
id_keys: &Keypair,
role: Role,
) -> Result<Self, NegotiationError> {
let noise_payload = handshake_schema::NoiseHandshakePayload {
identity_key: Some(PublicKey::Ed25519(id_keys.public()).to_protobuf_encoding()),
identity_sig: Some(
id_keys.sign(&[STATIC_KEY_DOMAIN.as_bytes(), keypair.public.as_ref()].concat()),
),
..Default::default()
};
let mut payload = Vec::with_capacity(noise_payload.encoded_len());
noise_payload.encode(&mut payload).map_err(ParseError::from)?;
Ok(Self {
noise: NoiseState::Handshake(noise),
keypair,
payload,
role,
})
}
pub fn new(keypair: &Keypair, role: Role) -> Result<Self, NegotiationError> {
tracing::trace!(target: LOG_TARGET, ?role, "create new noise configuration");
let builder: Builder<'_> = Builder::with_resolver(
NOISE_PARAMETERS.parse().expect("qed; Valid noise pattern"),
Box::new(protocol::Resolver),
);
let dh_keypair = builder.generate_keypair()?;
let static_key = &dh_keypair.private;
let noise = match role {
Role::Dialer => builder.local_private_key(static_key).build_initiator()?,
Role::Listener => builder.local_private_key(static_key).build_responder()?,
};
Self::assemble(noise, dh_keypair, keypair, role)
}
/// Create new [`NoiseContext`] with prologue.
#[cfg(feature = "webrtc")]
pub fn with_prologue(id_keys: &Keypair, prologue: Vec<u8>) -> Result<Self, NegotiationError> {
let noise: Builder<'_> = Builder::with_resolver(
NOISE_PARAMETERS.parse().expect("qed; Valid noise pattern"),
Box::new(protocol::Resolver),
);
let keypair = noise.generate_keypair()?;
let noise = noise
.local_private_key(&keypair.private)
.prologue(&prologue)
.build_initiator()?;
Self::assemble(noise, keypair, id_keys, Role::Dialer)
}
/// Get remote peer ID from the received Noise payload.
#[cfg(feature = "webrtc")]
pub fn get_remote_peer_id(&mut self, reply: &[u8]) -> Result<PeerId, NegotiationError> {
if reply.len() < 2 {
tracing::error!(target: LOG_TARGET, "reply too short to contain length prefix");
return Err(NegotiationError::ParseError(ParseError::InvalidReplyLength));
}
let (len_slice, reply) = reply.split_at(2);
let len = u16::from_be_bytes(
len_slice
.try_into()
.map_err(|_| NegotiationError::ParseError(ParseError::InvalidPublicKey))?,
) as usize;
let mut buffer = vec![0u8; len];
let NoiseState::Handshake(ref mut noise) = self.noise else {
tracing::error!(target: LOG_TARGET, "invalid state to read the second handshake message");
debug_assert!(false);
return Err(NegotiationError::StateMismatch);
};
let res = noise.read_message(reply, &mut buffer)?;
buffer.truncate(res);
let payload = handshake_schema::NoiseHandshakePayload::decode(buffer.as_slice())
.map_err(|err| NegotiationError::ParseError(err.into()))?;
let identity = payload.identity_key.ok_or(NegotiationError::PeerIdMissing)?;
Ok(PeerId::from_public_key_protobuf(&identity))
}
/// Get first message.
///
/// The listener only sends one message (the payload).
pub fn first_message(&mut self, role: Role) -> Result<Vec<u8>, NegotiationError> {
match role {
Role::Dialer => {
tracing::trace!(target: LOG_TARGET, "get noise dialer first message");
let NoiseState::Handshake(ref mut noise) = self.noise else {
tracing::error!(target: LOG_TARGET, "invalid state to write the first handshake message");
debug_assert!(false);
return Err(NegotiationError::StateMismatch);
};
let mut buffer = vec![0u8; 256];
let nwritten = noise.write_message(&[], &mut buffer)?;
buffer.truncate(nwritten);
let size = nwritten as u16;
let mut size = size.to_be_bytes().to_vec();
size.append(&mut buffer);
Ok(size)
}
Role::Listener => self.second_message(),
}
}
/// Get second message.
///
/// Only the dialer sends the second message.
pub fn second_message(&mut self) -> Result<Vec<u8>, NegotiationError> {
tracing::trace!(target: LOG_TARGET, "get noise payload message");
let NoiseState::Handshake(ref mut noise) = self.noise else {
tracing::error!(target: LOG_TARGET, "invalid state to write the second handshake message");
debug_assert!(false);
return Err(NegotiationError::StateMismatch);
};
let mut buffer = vec![0u8; 2048];
let nwritten = noise.write_message(&self.payload, &mut buffer)?;
buffer.truncate(nwritten);
let size = nwritten as u16;
let mut size = size.to_be_bytes().to_vec();
size.append(&mut buffer);
Ok(size)
}
/// Read handshake message.
async fn read_handshake_message<T: AsyncRead + AsyncWrite + Unpin>(
&mut self,
io: &mut T,
) -> Result<Bytes, NegotiationError> {
let mut size = BytesMut::zeroed(2);
io.read_exact(&mut size).await?;
let size = size.get_u16();
let mut message = BytesMut::zeroed(size as usize);
io.read_exact(&mut message).await?;
// TODO: https://github.com/paritytech/litep2p/issues/332 use correct overhead.
let mut out = BytesMut::new();
out.resize(message.len() + 200, 0u8);
let NoiseState::Handshake(ref mut noise) = self.noise else {
tracing::error!(target: LOG_TARGET, "invalid state to read handshake message");
debug_assert!(false);
return Err(NegotiationError::StateMismatch);
};
let nread = noise.read_message(&message, &mut out)?;
out.truncate(nread);
Ok(out.freeze())
}
fn read_message(&mut self, message: &[u8], out: &mut [u8]) -> Result<usize, snow::Error> {
match self.noise {
NoiseState::Handshake(ref mut noise) => noise.read_message(message, out),
NoiseState::Transport(ref mut noise) => noise.read_message(message, out),
}
}
fn write_message(&mut self, message: &[u8], out: &mut [u8]) -> Result<usize, snow::Error> {
match self.noise {
NoiseState::Handshake(ref mut noise) => noise.write_message(message, out),
NoiseState::Transport(ref mut noise) => noise.write_message(message, out),
}
}
fn get_handshake_dh_remote_pubkey(&self) -> Result<&[u8], NegotiationError> {
let NoiseState::Handshake(ref noise) = self.noise else {
tracing::error!(target: LOG_TARGET, "invalid state to get remote public key");
return Err(NegotiationError::StateMismatch);
};
let Some(dh_remote_pubkey) = noise.get_remote_static() else {
tracing::error!(target: LOG_TARGET, "expected remote public key at the end of XX session");
return Err(NegotiationError::IoError(std::io::ErrorKind::InvalidData));
};
Ok(dh_remote_pubkey)
}
/// Convert Noise into transport mode.
fn into_transport(self) -> Result<NoiseContext, NegotiationError> {
let transport = match self.noise {
NoiseState::Handshake(noise) => noise.into_transport_mode()?,
NoiseState::Transport(_) => return Err(NegotiationError::StateMismatch),
};
Ok(NoiseContext {
keypair: self.keypair,
payload: self.payload,
role: self.role,
noise: NoiseState::Transport(transport),
})
}
}
enum ReadState {
ReadData {
max_read: usize,
},
ReadFrameLen,
ProcessNextFrame {
pending: Option<Vec<u8>>,
offset: usize,
size: usize,
frame_size: usize,
},
}
enum WriteState {
Ready {
offset: usize,
size: usize,
encrypted_size: usize,
},
WriteFrame {
offset: usize,
size: usize,
encrypted_size: usize,
},
}
pub struct NoiseSocket<S: AsyncRead + AsyncWrite + Unpin> {
io: S,
noise: NoiseContext,
current_frame_size: Option<usize>,
write_state: WriteState,
encrypt_buffer: Vec<u8>,
offset: usize,
nread: usize,
read_state: ReadState,
read_buffer: Vec<u8>,
canonical_max_read: usize,
decrypt_buffer: Option<Vec<u8>>,
peer: PeerId,
ty: HandshakeTransport,
}
impl<S: AsyncRead + AsyncWrite + Unpin> NoiseSocket<S> {
fn new(
io: S,
noise: NoiseContext,
max_read_ahead_factor: usize,
max_write_buffer_size: usize,
peer: PeerId,
ty: HandshakeTransport,
) -> Self {
Self {
io,
noise,
read_buffer: vec![
0u8;
max_read_ahead_factor * MAX_NOISE_MSG_LEN + (2 + MAX_NOISE_MSG_LEN)
],
nread: 0usize,
offset: 0usize,
current_frame_size: None,
write_state: WriteState::Ready {
offset: 0usize,
size: 0usize,
encrypted_size: 0usize,
},
encrypt_buffer: vec![0u8; max_write_buffer_size * (MAX_NOISE_MSG_LEN + 2)],
decrypt_buffer: Some(vec![0u8; MAX_FRAME_LEN]),
read_state: ReadState::ReadData {
max_read: max_read_ahead_factor * MAX_NOISE_MSG_LEN,
},
canonical_max_read: max_read_ahead_factor * MAX_NOISE_MSG_LEN,
peer,
ty,
}
}
fn reset_read_state(&mut self, remaining: usize) {
match remaining {
0 => {
self.nread = 0;
}
1 => {
self.read_buffer[0] = self.read_buffer[self.nread - 1];
self.nread = 1;
}
_ => panic!("invalid state"),
}
self.offset = 0;
self.read_state = ReadState::ReadData {
max_read: self.canonical_max_read,
};
}
}
impl<S: AsyncRead + AsyncWrite + Unpin> AsyncRead for NoiseSocket<S> {
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut [u8],
) -> Poll<io::Result<usize>> {
let this = Pin::into_inner(self);
loop {
match this.read_state {
ReadState::ReadData { max_read } => {
let nread = match Pin::new(&mut this.io)
.poll_read(cx, &mut this.read_buffer[this.nread..max_read])
{
Poll::Pending => return Poll::Pending,
Poll::Ready(Err(error)) => return Poll::Ready(Err(error)),
Poll::Ready(Ok(nread)) => match nread == 0 {
true => return Poll::Ready(Err(io::ErrorKind::UnexpectedEof.into())),
false => nread,
},
};
tracing::trace!(
target: LOG_TARGET,
?nread,
ty = ?this.ty,
peer = ?this.peer,
"read data from socket"
);
this.nread += nread;
this.read_state = ReadState::ReadFrameLen;
}
ReadState::ReadFrameLen => {
let mut remaining = match this.nread.checked_sub(this.offset) {
Some(remaining) => remaining,
None => {
tracing::error!(
target: LOG_TARGET,
ty = ?this.ty,
peer = ?this.peer,
nread = ?this.nread,
offset = ?this.offset,
"offset is larger than the number of bytes read"
);
return Poll::Ready(Err(io::ErrorKind::PermissionDenied.into()));
}
};
if remaining < 2 {
tracing::trace!(
target: LOG_TARGET,
ty = ?this.ty,
peer = ?this.peer,
"reset read buffer"
);
this.reset_read_state(remaining);
continue;
}
// get frame size, either from current or previous iteration
let frame_size = match this.current_frame_size.take() {
Some(frame_size) => frame_size,
None => {
let frame_size = (this.read_buffer[this.offset] as u16) << 8
| this.read_buffer[this.offset + 1] as u16;
this.offset += 2;
remaining -= 2;
frame_size as usize
}
};
tracing::trace!(
target: LOG_TARGET,
ty = ?this.ty,
peer = ?this.peer,
"current frame size = {frame_size}"
);
if remaining < frame_size {
// `read_buffer` can fit the full frame size.
if this.nread + frame_size < this.canonical_max_read {
tracing::trace!(
target: LOG_TARGET,
ty = ?this.ty,
peer = ?this.peer,
max_size = ?this.canonical_max_read,
next_frame_size = ?(this.nread + frame_size),
"read buffer can fit the full frame",
);
this.current_frame_size = Some(frame_size);
this.read_state = ReadState::ReadData {
max_read: this.canonical_max_read,
};
continue;
}
tracing::trace!(
target: LOG_TARGET,
ty = ?this.ty,
peer = ?this.peer,
"use auxiliary buffer extension"
);
// use the auxiliary memory at the end of the read buffer for reading the
// frame
this.current_frame_size = Some(frame_size);
this.read_state = ReadState::ReadData {
max_read: this.nread + frame_size - remaining,
};
continue;
}
if frame_size <= NOISE_EXTRA_ENCRYPT_SPACE {
tracing::error!(
target: LOG_TARGET,
ty = ?this.ty,
peer = ?this.peer,
?frame_size,
min_size = ?NOISE_EXTRA_ENCRYPT_SPACE,
"invalid frame size",
);
return Poll::Ready(Err(io::ErrorKind::InvalidData.into()));
}
this.current_frame_size = Some(frame_size);
this.read_state = ReadState::ProcessNextFrame {
pending: None,
offset: 0usize,
size: 0usize,
frame_size: 0usize,
};
}
ReadState::ProcessNextFrame {
ref mut pending,
offset,
size,
frame_size,
} => match pending.take() {
Some(pending) => match buf.len() >= pending[offset..size].len() {
true => {
let copy_size = pending[offset..size].len();
buf[..copy_size].copy_from_slice(&pending[offset..copy_size + offset]);
this.read_state = ReadState::ReadFrameLen;
this.decrypt_buffer = Some(pending);
this.offset += frame_size;
return Poll::Ready(Ok(copy_size));
}
false => {
buf.copy_from_slice(&pending[offset..buf.len() + offset]);
this.read_state = ReadState::ProcessNextFrame {
pending: Some(pending),
offset: offset + buf.len(),
size,
frame_size,
};
return Poll::Ready(Ok(buf.len()));
}
},
None => {
let frame_size =
this.current_frame_size.take().expect("`frame_size` to exist");
match buf.len() >= frame_size - NOISE_EXTRA_ENCRYPT_SPACE {
true => match this.noise.read_message(
&this.read_buffer[this.offset..this.offset + frame_size],
buf,
) {
Err(error) => {
tracing::error!(
target: LOG_TARGET,
ty = ?this.ty,
peer = ?this.peer,
buf_len = ?buf.len(),
frame_size = ?frame_size,
?error,
"failed to decrypt message"
);
return Poll::Ready(Err(io::ErrorKind::InvalidData.into()));
}
Ok(nread) => {
this.offset += frame_size;
this.read_state = ReadState::ReadFrameLen;
return Poll::Ready(Ok(nread));
}
},
false => {
let mut buffer =
this.decrypt_buffer.take().expect("buffer to exist");
match this.noise.read_message(
&this.read_buffer[this.offset..this.offset + frame_size],
&mut buffer,
) {
Err(error) => {
tracing::error!(
target: LOG_TARGET,
ty = ?this.ty,
peer = ?this.peer,
buf_len = ?buf.len(),
frame_size = ?frame_size,
?error,
"failed to decrypt message for smaller buffer"
);
return Poll::Ready(Err(io::ErrorKind::InvalidData.into()));
}
Ok(nread) => {
buf.copy_from_slice(&buffer[..buf.len()]);
this.read_state = ReadState::ProcessNextFrame {
pending: Some(buffer),
offset: buf.len(),
size: nread,
frame_size,
};
return Poll::Ready(Ok(buf.len()));
}
}
}
}
}
},
}
}
}
}
impl<S: AsyncRead + AsyncWrite + Unpin> AsyncWrite for NoiseSocket<S> {
fn poll_write(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<io::Result<usize>> {
let this = Pin::into_inner(self);
let mut chunks = buf.chunks(MAX_FRAME_LEN).peekable();
loop {
match this.write_state {
WriteState::Ready {
offset,
size,
encrypted_size,
} => {
let Some(chunk) = chunks.next() else {
break;
};
match this.noise.write_message(chunk, &mut this.encrypt_buffer[offset + 2..]) {
Err(error) => {
tracing::error!(
target: LOG_TARGET,
?error,
ty = ?this.ty,
peer = ?this.peer,
"failed to encrypt message"
);
return Poll::Ready(Err(io::ErrorKind::InvalidData.into()));
}
Ok(nwritten) => {
this.encrypt_buffer[offset] = (nwritten >> 8) as u8;
this.encrypt_buffer[offset + 1] = (nwritten & 0xff) as u8;
if let Some(next_chunk) = chunks.peek() {
if next_chunk.len() + NOISE_EXTRA_ENCRYPT_SPACE + 2
<= this.encrypt_buffer[offset + nwritten + 2..].len()
{
this.write_state = WriteState::Ready {
offset: offset + nwritten + 2,
size: size + chunk.len(),
encrypted_size: encrypted_size + nwritten + 2,
};
continue;
}
}
this.write_state = WriteState::WriteFrame {
offset: 0usize,
size: size + chunk.len(),
encrypted_size: encrypted_size + nwritten + 2,
};
}
}
}
WriteState::WriteFrame {
ref mut offset,
size,
encrypted_size,
} => loop {
match futures::ready!(Pin::new(&mut this.io)
.poll_write(cx, &this.encrypt_buffer[*offset..encrypted_size]))
{
Ok(nwritten) => {
*offset += nwritten;
if *offset == encrypted_size {
this.write_state = WriteState::Ready {
offset: 0usize,
size: 0usize,
encrypted_size: 0usize,
};
return Poll::Ready(Ok(size));
}
}
Err(error) => return Poll::Ready(Err(error)),
}
},
}
}
Poll::Ready(Ok(0))
}
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
Pin::new(&mut self.io).poll_flush(cx)
}
fn poll_close(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
Pin::new(&mut self.io).poll_close(cx)
}
}
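// The wire framing handled by `poll_write` and `ReadState::ReadFrameLen`
// above is a 2-byte big-endian length prefix followed by the encrypted
// frame. A minimal standalone sketch of just the prefix encoding (the
// helper names are hypothetical, not part of litep2p):

```rust
// Encode a frame length the way `poll_write` does
// (`nwritten >> 8`, `nwritten & 0xff`).
fn encode_frame_len(len: usize) -> [u8; 2] {
    assert!(len <= u16::MAX as usize, "frame length must fit in two bytes");
    [(len >> 8) as u8, (len & 0xff) as u8]
}

// Decode a frame length the way `ReadState::ReadFrameLen` does
// (`(b0 as u16) << 8 | b1 as u16`).
fn decode_frame_len(prefix: [u8; 2]) -> usize {
    ((prefix[0] as u16) << 8 | prefix[1] as u16) as usize
}

fn main() {
    for len in [0usize, 1, 255, 256, 65_535] {
        assert_eq!(decode_frame_len(encode_frame_len(len)), len);
    }
    // a 258-byte frame carries the prefix [1, 2] on the wire
    assert_eq!(encode_frame_len(258), [1, 2]);
}
```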
/// Parse the `PeerId` from received `NoiseHandshakePayload` and verify the payload signature.
fn parse_and_verify_peer_id(
payload: handshake_schema::NoiseHandshakePayload,
dh_remote_pubkey: &[u8],
) -> Result<PeerId, NegotiationError> {
let identity = payload.identity_key.ok_or(NegotiationError::PeerIdMissing)?;
let remote_public_key = RemotePublicKey::from_protobuf_encoding(&identity)?;
let remote_key_signature =
payload.identity_sig.ok_or(NegotiationError::BadSignature).inspect_err(|_err| {
tracing::debug!(target: LOG_TARGET, "payload without signature");
})?;
let peer_id = PeerId::from_public_key_protobuf(&identity);
if !remote_public_key.verify(
&[STATIC_KEY_DOMAIN.as_bytes(), dh_remote_pubkey].concat(),
&remote_key_signature,
) {
tracing::debug!(
target: LOG_TARGET,
?peer_id,
"failed to verify remote public key signature"
);
return Err(NegotiationError::BadSignature);
}
Ok(peer_id)
}
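// The signature checked in `parse_and_verify_peer_id` covers a
// domain-separated payload: a fixed prefix concatenated with the remote's
// static Noise DH public key. The sketch below is standalone; the constant
// value shown is the one the libp2p Noise spec uses and is an assumption
// here, since `STATIC_KEY_DOMAIN` is defined outside this excerpt.

```rust
// Assumed value, per the libp2p Noise spec; the real constant lives
// elsewhere in this crate.
const STATIC_KEY_DOMAIN: &str = "noise-libp2p-static-key:";

// Build the byte string whose signature is verified.
fn signed_payload(dh_remote_pubkey: &[u8]) -> Vec<u8> {
    [STATIC_KEY_DOMAIN.as_bytes(), dh_remote_pubkey].concat()
}

fn main() {
    let payload = signed_payload(&[1, 2, 3]);
    assert!(payload.starts_with(STATIC_KEY_DOMAIN.as_bytes()));
    assert!(payload.ends_with(&[1, 2, 3]));
}
```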
/// The type of the transport used for the crypto/noise protocol.
///
/// This is used for logging purposes.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HandshakeTransport {
Tcp,
#[cfg(feature = "websocket")]
WebSocket,
}
/// Perform Noise handshake.
pub async fn handshake<S: AsyncRead + AsyncWrite + Unpin>(
mut io: S,
keypair: &Keypair,
role: Role,
max_read_ahead_factor: usize,
max_write_buffer_size: usize,
timeout: std::time::Duration,
ty: HandshakeTransport,
) -> Result<(NoiseSocket<S>, PeerId), NegotiationError> {
let handle_handshake = async move {
tracing::debug!(target: LOG_TARGET, ?role, ?ty, "start noise handshake");
let mut noise = NoiseContext::new(keypair, role)?;
let payload = match role {
Role::Dialer => {
// write initial message
let first_message = noise.first_message(Role::Dialer)?;
let _ = io.write(&first_message).await?;
io.flush().await?;
// read back response which contains the remote peer id
let message = noise.read_handshake_message(&mut io).await?;
// Decode the remote identity message.
let payload = handshake_schema::NoiseHandshakePayload::decode(message)
.map_err(ParseError::from)
.map_err(|err| {
tracing::error!(target: LOG_TARGET, ?err, ?ty, "failed to decode remote identity message");
err
})?;
// send the final message which contains local peer id
let second_message = noise.second_message()?;
let _ = io.write(&second_message).await?;
io.flush().await?;
payload
}
Role::Listener => {
// read remote's first message
let _ = noise.read_handshake_message(&mut io).await?;
// send local peer id.
let second_message = noise.second_message()?;
let _ = io.write(&second_message).await?;
io.flush().await?;
// read remote's second message which contains their peer id
let message = noise.read_handshake_message(&mut io).await?;
// Decode the remote identity message.
handshake_schema::NoiseHandshakePayload::decode(message)
.map_err(ParseError::from)?
}
};
let dh_remote_pubkey = noise.get_handshake_dh_remote_pubkey()?;
let peer = parse_and_verify_peer_id(payload, dh_remote_pubkey)?;
Ok((
NoiseSocket::new(
io,
noise.into_transport()?,
max_read_ahead_factor,
max_write_buffer_size,
peer,
ty,
),
peer,
))
};
match tokio::time::timeout(timeout, handle_handshake).await {
Err(_) => Err(NegotiationError::Timeout),
Ok(result) => result,
}
}
// TODO: https://github.com/paritytech/litep2p/issues/125 add more tests
#[cfg(test)]
mod tests {
use super::*;
use std::net::SocketAddr;
use tokio::net::{TcpListener, TcpStream};
use tokio_util::compat::{TokioAsyncReadCompatExt, TokioAsyncWriteCompatExt};
// ========================================================================
// File: src/crypto/noise/protocol.rs
// ========================================================================
// Copyright 2019 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::crypto::noise::x25519_spec;
use rand::SeedableRng;
use zeroize::Zeroize;
/// DH keypair.
#[derive(Clone)]
pub struct Keypair<T: Zeroize> {
pub secret: SecretKey<T>,
pub public: PublicKey<T>,
}
/// DH secret key.
#[derive(Clone)]
pub struct SecretKey<T: Zeroize>(pub T);
impl<T: Zeroize> Drop for SecretKey<T> {
fn drop(&mut self) {
self.0.zeroize()
}
}
impl<T: AsRef<[u8]> + Zeroize> AsRef<[u8]> for SecretKey<T> {
fn as_ref(&self) -> &[u8] {
self.0.as_ref()
}
}
/// DH public key.
#[derive(Clone)]
pub struct PublicKey<T>(pub T);
impl<T: AsRef<[u8]>> PartialEq for PublicKey<T> {
fn eq(&self, other: &PublicKey<T>) -> bool {
self.as_ref() == other.as_ref()
}
}
impl<T: AsRef<[u8]>> Eq for PublicKey<T> {}
impl<T: AsRef<[u8]>> AsRef<[u8]> for PublicKey<T> {
fn as_ref(&self) -> &[u8] {
self.0.as_ref()
}
}
/// Custom `snow::resolvers::CryptoResolver` which delegates to the
/// `RingResolver` for hash functions and symmetric ciphers, while using
/// x25519-dalek for Curve25519 DH.
pub struct Resolver;
impl snow::resolvers::CryptoResolver for Resolver {
fn resolve_rng(&self) -> Option<Box<dyn snow::types::Random>> {
Some(Box::new(Rng(rand::rngs::StdRng::from_entropy())))
}
fn resolve_dh(&self, choice: &snow::params::DHChoice) -> Option<Box<dyn snow::types::Dh>> {
if let snow::params::DHChoice::Curve25519 = choice {
Some(Box::new(Keypair::<x25519_spec::X25519Spec>::default()))
} else {
None
}
}
fn resolve_hash(
&self,
choice: &snow::params::HashChoice,
) -> Option<Box<dyn snow::types::Hash>> {
snow::resolvers::RingResolver.resolve_hash(choice)
}
fn resolve_cipher(
&self,
choice: &snow::params::CipherChoice,
) -> Option<Box<dyn snow::types::Cipher>> {
snow::resolvers::RingResolver.resolve_cipher(choice)
}
}
/// Wrapper around a CSPRNG implementing the `snow::types::Random` trait.
struct Rng(rand::rngs::StdRng);
impl rand::RngCore for Rng {
fn next_u32(&mut self) -> u32 {
self.0.next_u32()
}
fn next_u64(&mut self) -> u64 {
self.0.next_u64()
}
fn fill_bytes(&mut self, dest: &mut [u8]) {
self.0.fill_bytes(dest)
}
fn try_fill_bytes(&mut self, dest: &mut [u8]) -> Result<(), rand::Error> {
self.0.try_fill_bytes(dest)
}
}
impl rand::CryptoRng for Rng {}
impl snow::types::Random for Rng {}
// ========================================================================
// File: src/multistream_select/listener_select.rs
// ========================================================================
// Copyright 2017 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Protocol negotiation strategies for the peer acting as the listener
//! in a multistream-select protocol negotiation.
use crate::{
codec::unsigned_varint::UnsignedVarint,
error::{self, Error},
multistream_select::{
drain_trailing_protocols,
protocol::{
webrtc_encode_multistream_message, HeaderLine, Message, MessageIO, Protocol,
ProtocolError, PROTO_MULTISTREAM_1_0,
},
Negotiated, NegotiationError,
},
types::protocol::ProtocolName,
};
use bytes::{Bytes, BytesMut};
use futures::prelude::*;
use smallvec::SmallVec;
use std::{
convert::TryFrom as _,
iter::FromIterator,
mem,
pin::Pin,
task::{Context, Poll},
};
const LOG_TARGET: &str = "litep2p::multistream-select";
/// Returns a `Future` that negotiates a protocol on the given I/O stream
/// for a peer acting as the _listener_ (or _responder_).
///
/// This function is given an I/O stream and a list of protocols and returns a
/// computation that performs the protocol negotiation with the remote. The
/// returned `Future` resolves with the name of the negotiated protocol and
/// a [`Negotiated`] I/O stream.
pub fn listener_select_proto<R, I>(inner: R, protocols: I) -> ListenerSelectFuture<R, I::Item>
where
R: AsyncRead + AsyncWrite,
I: IntoIterator,
I::Item: AsRef<[u8]>,
{
let protocols = protocols.into_iter().filter_map(|n| match Protocol::try_from(n.as_ref()) {
Ok(p) => Some((n, p)),
Err(e) => {
tracing::warn!(
target: LOG_TARGET,
"Listener: Ignoring invalid protocol: {} due to {}",
String::from_utf8_lossy(n.as_ref()),
e
);
None
}
});
ListenerSelectFuture {
protocols: SmallVec::from_iter(protocols),
state: State::RecvHeader {
io: MessageIO::new(inner),
},
last_sent_na: false,
}
}
/// The `Future` returned by [`listener_select_proto`] that performs a
/// multistream-select protocol negotiation on an underlying I/O stream.
#[pin_project::pin_project]
pub struct ListenerSelectFuture<R, N> {
protocols: SmallVec<[(N, Protocol); 8]>,
state: State<R, N>,
/// Whether the last message sent was a protocol rejection (i.e. `na\n`).
///
/// If the listener reads garbage or EOF after such a rejection,
/// the dialer is likely using `V1Lazy` and negotiation must be
/// considered failed, but not with a protocol violation or I/O
/// error.
last_sent_na: bool,
}
enum State<R, N> {
RecvHeader {
io: MessageIO<R>,
},
SendHeader {
io: MessageIO<R>,
},
RecvMessage {
io: MessageIO<R>,
},
SendMessage {
io: MessageIO<R>,
message: Message,
protocol: Option<N>,
},
Flush {
io: MessageIO<R>,
protocol: Option<N>,
},
Done,
}
impl<R, N> Future for ListenerSelectFuture<R, N>
where
// The Unpin bound here is required because we
// produce a `Negotiated<R>` as the output.
// It also makes the implementation considerably
// easier to write.
R: AsyncRead + AsyncWrite + Unpin,
N: AsRef<[u8]> + Clone,
{
type Output = Result<(N, Negotiated<R>), NegotiationError>;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let this = self.project();
loop {
match mem::replace(this.state, State::Done) {
State::RecvHeader { mut io } => {
match io.poll_next_unpin(cx) {
Poll::Ready(Some(Ok(Message::Header(h)))) => match h {
HeaderLine::V1 => *this.state = State::SendHeader { io },
},
Poll::Ready(Some(Ok(_))) =>
return Poll::Ready(Err(ProtocolError::InvalidMessage.into())),
Poll::Ready(Some(Err(err))) => return Poll::Ready(Err(From::from(err))),
// Treat EOF error as [`NegotiationError::Failed`], not as
// [`NegotiationError::ProtocolError`], allowing dropping or closing an I/O
// stream as a permissible way to "gracefully" fail a negotiation.
Poll::Ready(None) => return Poll::Ready(Err(NegotiationError::Failed)),
Poll::Pending => {
*this.state = State::RecvHeader { io };
return Poll::Pending;
}
}
}
State::SendHeader { mut io } => {
match Pin::new(&mut io).poll_ready(cx) {
Poll::Pending => {
*this.state = State::SendHeader { io };
return Poll::Pending;
}
Poll::Ready(Ok(())) => {}
Poll::Ready(Err(err)) => return Poll::Ready(Err(From::from(err))),
}
let msg = Message::Header(HeaderLine::V1);
if let Err(err) = Pin::new(&mut io).start_send(msg) {
return Poll::Ready(Err(From::from(err)));
}
*this.state = State::Flush { io, protocol: None };
}
State::RecvMessage { mut io } => {
let msg = match Pin::new(&mut io).poll_next(cx) {
Poll::Ready(Some(Ok(msg))) => msg,
// Treat EOF error as [`NegotiationError::Failed`], not as
// [`NegotiationError::ProtocolError`], allowing dropping or closing an I/O
// stream as a permissible way to "gracefully" fail a negotiation.
//
// This is e.g. important when a listener rejects a protocol with
// [`Message::NotAvailable`] and the dialer does not have alternative
// protocols to propose. Then the dialer will stop the negotiation and drop
// the corresponding stream. As a listener this EOF should be interpreted as
// a failed negotiation.
Poll::Ready(None) => return Poll::Ready(Err(NegotiationError::Failed)),
Poll::Pending => {
*this.state = State::RecvMessage { io };
return Poll::Pending;
}
Poll::Ready(Some(Err(err))) => {
if *this.last_sent_na {
// When we read garbage or EOF after having already rejected a
// protocol, the dialer is most likely using `V1Lazy` and has
// optimistically settled on this protocol, so this is really a
// failed negotiation, not a protocol violation. In this case
// the dialer also raises `NegotiationError::Failed` when finally
// reading the `N/A` response.
if let ProtocolError::InvalidMessage = &err {
tracing::trace!(
target: LOG_TARGET,
"Listener: Negotiation failed with invalid \
message after protocol rejection."
);
return Poll::Ready(Err(NegotiationError::Failed));
}
if let ProtocolError::IoError(e) = &err {
if e.kind() == std::io::ErrorKind::UnexpectedEof {
tracing::trace!(
target: LOG_TARGET,
"Listener: Negotiation failed with EOF \
after protocol rejection."
);
return Poll::Ready(Err(NegotiationError::Failed));
}
}
}
return Poll::Ready(Err(From::from(err)));
}
};
match msg {
Message::ListProtocols => {
let supported =
this.protocols.iter().map(|(_, p)| p).cloned().collect();
let message = Message::Protocols(supported);
*this.state = State::SendMessage {
io,
message,
protocol: None,
}
}
Message::Protocol(p) => {
let protocol = this.protocols.iter().find_map(|(name, proto)| {
if &p == proto {
Some(name.clone())
} else {
None
}
});
let message = if protocol.is_some() {
tracing::debug!("Listener: confirming protocol: {}", p);
Message::Protocol(p.clone())
} else {
tracing::debug!(
"Listener: rejecting protocol: {}",
String::from_utf8_lossy(p.as_ref())
);
Message::NotAvailable
};
*this.state = State::SendMessage {
io,
message,
protocol,
};
}
_ => return Poll::Ready(Err(ProtocolError::InvalidMessage.into())),
}
}
State::SendMessage {
mut io,
message,
protocol,
} => {
match Pin::new(&mut io).poll_ready(cx) {
Poll::Pending => {
*this.state = State::SendMessage {
io,
message,
protocol,
};
return Poll::Pending;
}
Poll::Ready(Ok(())) => {}
Poll::Ready(Err(err)) => return Poll::Ready(Err(From::from(err))),
}
if let Message::NotAvailable = &message {
*this.last_sent_na = true;
} else {
*this.last_sent_na = false;
}
if let Err(err) = Pin::new(&mut io).start_send(message) {
return Poll::Ready(Err(From::from(err)));
}
*this.state = State::Flush { io, protocol };
}
State::Flush { mut io, protocol } => {
match Pin::new(&mut io).poll_flush(cx) {
Poll::Pending => {
*this.state = State::Flush { io, protocol };
return Poll::Pending;
}
Poll::Ready(Ok(())) => {
// If a protocol has been selected, finish negotiation.
// Otherwise expect to receive another message.
match protocol {
Some(protocol) => {
tracing::debug!(
"Listener: sent confirmed protocol: {}",
String::from_utf8_lossy(protocol.as_ref())
);
let io = Negotiated::completed(io.into_inner());
return Poll::Ready(Ok((protocol, io)));
}
None => *this.state = State::RecvMessage { io },
}
}
Poll::Ready(Err(err)) => return Poll::Ready(Err(From::from(err))),
}
}
State::Done => panic!("State::poll called after completion"),
}
}
}
}
/// Result of [`webrtc_listener_negotiate()`].
#[derive(Debug)]
pub enum ListenerSelectResult {
/// Requested protocol is available and substream can be accepted.
Accepted {
/// Protocol that is confirmed.
protocol: ProtocolName,
/// `multistream-select` message.
message: BytesMut,
},
/// Requested protocol is not available.
Rejected {
/// `multistream-select` message.
message: BytesMut,
},
}
/// Negotiate protocols for listener.
///
/// Parse protocols offered by the remote peer and check if any of the offered protocols match
/// locally available protocols. If a match is found, return an encoded multistream-select
/// response and the negotiated protocol. If parsing fails or no match is found, return an error.
pub fn webrtc_listener_negotiate<'a>(
supported_protocols: &'a mut impl Iterator<Item = &'a ProtocolName>,
payload: Bytes,
) -> crate::Result<ListenerSelectResult> {
let protocols = drain_trailing_protocols(payload)?;
let mut protocol_iter = protocols.into_iter();
// verify the multistream-select header is present, then skip it because it's
// not one of the user protocols
if protocol_iter.next() != Some(PROTO_MULTISTREAM_1_0) {
return Err(Error::NegotiationError(
error::NegotiationError::MultistreamSelectError(NegotiationError::Failed),
));
}
for protocol in protocol_iter {
tracing::trace!(
target: LOG_TARGET,
protocol = ?std::str::from_utf8(protocol.as_ref()),
"listener: checking protocol",
);
for supported in &mut *supported_protocols {
if protocol.as_ref() == supported.as_bytes() {
return Ok(ListenerSelectResult::Accepted {
protocol: supported.clone(),
message: webrtc_encode_multistream_message(std::iter::once(
Message::Protocol(protocol),
))?,
});
}
}
}
tracing::trace!(
target: LOG_TARGET,
"listener: handshake rejected, no supported protocol found",
);
Ok(ListenerSelectResult::Rejected {
message: webrtc_encode_multistream_message(std::iter::once(Message::NotAvailable))?,
})
}
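// The matching loop in `webrtc_listener_negotiate` boils down to: take the
// first protocol offered by the dialer that the listener also supports. A
// standalone sketch over plain string slices (hypothetical helper, not the
// crate's API):

```rust
// Return the first offered protocol that is also supported, mirroring the
// nested loop in `webrtc_listener_negotiate`.
fn select<'a>(supported: &[&'a str], offered: &[&str]) -> Option<&'a str> {
    offered
        .iter()
        .find_map(|offer| supported.iter().find(|sup| *sup == offer).copied())
}

fn main() {
    let supported = ["/sup/proto/1", "/13371338/proto/1"];
    // the first *offered* match wins, in the dialer's preference order
    assert_eq!(
        select(&supported, &["/unknown/1", "/13371338/proto/1"]),
        Some("/13371338/proto/1")
    );
    assert_eq!(select(&supported, &["/unknown/1"]), None);
}
```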
#[cfg(test)]
mod tests {
use super::*;
use crate::error;
use bytes::BufMut;
#[test]
fn webrtc_listener_negotiate_works() {
let mut local_protocols = [
ProtocolName::from("/13371338/proto/1"),
ProtocolName::from("/sup/proto/1"),
ProtocolName::from("/13371338/proto/2"),
ProtocolName::from("/13371338/proto/3"),
ProtocolName::from("/13371338/proto/4"),
];
let message = webrtc_encode_multistream_message(vec![
Message::Protocol(Protocol::try_from(&b"/13371338/proto/1"[..]).unwrap()),
Message::Protocol(Protocol::try_from(&b"/sup/proto/1"[..]).unwrap()),
])
.unwrap()
.freeze();
match webrtc_listener_negotiate(&mut local_protocols.iter(), message) {
Err(error) => panic!("error received: {error:?}"),
Ok(ListenerSelectResult::Rejected { .. }) => panic!("message rejected"),
Ok(ListenerSelectResult::Accepted { protocol, .. }) => {
assert_eq!(protocol, ProtocolName::from("/13371338/proto/1"));
}
}
}
#[test]
fn invalid_message() {
let mut local_protocols = [
ProtocolName::from("/13371338/proto/1"),
ProtocolName::from("/sup/proto/1"),
ProtocolName::from("/13371338/proto/2"),
ProtocolName::from("/13371338/proto/3"),
ProtocolName::from("/13371338/proto/4"),
];
// The invalid message is really two multistream-select messages inside one `WebRtcMessage`:
// 1. the multistream-select header
// 2. an "ls response" message (that does not contain another header)
//
// This is invalid for two reasons:
// 1. It is malformed. Either the header is followed by one or more `Message::Protocol`
// instances or the header is part of the "ls response".
// 2. This sequence of messages is not spec compliant. A listener receives one of the
// following on an inbound substream:
// - a multistream-select header followed by a `Message::Protocol` instance
// - a multistream-select header followed by an "ls" message (<length prefix><ls><\n>)
//
// `webrtc_listener_negotiate()` should reject this invalid message. The error can either be
// `InvalidData` because the message is malformed or `StateMismatch` because the message is
// not expected at this point in the protocol.
let message = webrtc_encode_multistream_message(std::iter::once(Message::Protocols(vec![
Protocol::try_from(&b"/13371338/proto/1"[..]).unwrap(),
Protocol::try_from(&b"/sup/proto/1"[..]).unwrap(),
])))
.unwrap()
.freeze();
match webrtc_listener_negotiate(&mut local_protocols.iter(), message) {
Err(error) => assert!(std::matches!(
error,
// something has gone off the rails here...
Error::NegotiationError(error::NegotiationError::ParseError(
error::ParseError::InvalidData
)),
)),
_ => panic!("invalid event"),
}
}
#[test]
fn only_header_line_received() {
let mut local_protocols = [
ProtocolName::from("/13371338/proto/1"),
ProtocolName::from("/sup/proto/1"),
ProtocolName::from("/13371338/proto/2"),
ProtocolName::from("/13371338/proto/3"),
ProtocolName::from("/13371338/proto/4"),
];
// send only header line
let mut bytes = BytesMut::with_capacity(32);
let message = Message::Header(HeaderLine::V1);
message.encode(&mut bytes).map_err(|_| Error::InvalidData).unwrap();
match webrtc_listener_negotiate(&mut local_protocols.iter(), bytes.freeze()) {
Err(error) => assert!(std::matches!(
error,
Error::NegotiationError(error::NegotiationError::ParseError(
error::ParseError::InvalidData
)),
)),
event => panic!("invalid event: {event:?}"),
}
}
#[test]
fn header_line_missing() {
let mut local_protocols = [
ProtocolName::from("/13371338/proto/1"),
ProtocolName::from("/sup/proto/1"),
ProtocolName::from("/13371338/proto/2"),
ProtocolName::from("/13371338/proto/3"),
ProtocolName::from("/13371338/proto/4"),
];
// header line missing
let mut bytes = BytesMut::with_capacity(256);
vec![&b"/13371338/proto/1"[..], &b"/sup/proto/1"[..]]
.into_iter()
.for_each(|proto| {
bytes.put_u8((proto.len() + 1) as u8);
Message::Protocol(Protocol::try_from(proto).unwrap())
.encode(&mut bytes)
.unwrap();
});
match webrtc_listener_negotiate(&mut local_protocols.iter(), bytes.freeze()) {
Err(error) => assert!(std::matches!(
error,
Error::NegotiationError(error::NegotiationError::MultistreamSelectError(
NegotiationError::Failed
))
)),
event => panic!("invalid event: {event:?}"),
}
}
#[test]
fn protocol_not_supported() {
let mut local_protocols = [
ProtocolName::from("/13371338/proto/1"),
ProtocolName::from("/sup/proto/1"),
ProtocolName::from("/13371338/proto/2"),
ProtocolName::from("/13371338/proto/3"),
ProtocolName::from("/13371338/proto/4"),
];
let message = webrtc_encode_multistream_message(vec![Message::Protocol(
Protocol::try_from(&b"/13371339/proto/1"[..]).unwrap(),
)])
.unwrap()
.freeze();
match webrtc_listener_negotiate(&mut local_protocols.iter(), message) {
Err(error) => panic!("error received: {error:?}"),
Ok(ListenerSelectResult::Rejected { message }) => {
assert_eq!(
message,
webrtc_encode_multistream_message(std::iter::once(Message::NotAvailable))
.unwrap()
);
}
Ok(ListenerSelectResult::Accepted { .. }) => panic!("message accepted"),
}
}
}
// ========================================================================
// File: src/multistream_select/negotiated.rs
// ========================================================================
// Copyright 2019 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::multistream_select::protocol::{
HeaderLine, Message, MessageReader, Protocol, ProtocolError,
};
use futures::{
io::{IoSlice, IoSliceMut},
prelude::*,
ready,
};
use pin_project::pin_project;
use std::{
error::Error,
fmt, io, mem,
pin::Pin,
task::{Context, Poll},
};
const LOG_TARGET: &str = "litep2p::multistream-select";
/// An I/O stream that has settled on an (application-layer) protocol to use.
///
/// A `Negotiated` represents an I/O stream that has _settled_ on a protocol
/// to use. In particular, it is not implied that all of the protocol negotiation
/// frames have yet been sent and / or received, just that the selected protocol
/// is fully determined. This is to allow the last protocol negotiation frames
/// sent by a peer to be combined in a single write, possibly piggy-backing
/// data from the negotiated protocol on top.
///
/// Reading from a `Negotiated` I/O stream that still has pending negotiation
/// protocol data to send implicitly triggers flushing of all yet unsent data.
#[pin_project]
#[derive(Debug)]
pub struct Negotiated<TInner> {
#[pin]
state: State<TInner>,
}
/// A `Future` that waits on the completion of protocol negotiation.
#[derive(Debug)]
pub struct NegotiatedComplete<TInner> {
inner: Option<Negotiated<TInner>>,
}
impl<TInner> Future for NegotiatedComplete<TInner>
where
// `Unpin` is required not because of
// implementation details but because we produce
// the `Negotiated` as the output of the
// future.
TInner: AsyncRead + AsyncWrite + Unpin,
{
type Output = Result<Negotiated<TInner>, NegotiationError>;
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let mut io = self.inner.take().expect("NegotiatedFuture called after completion.");
match Negotiated::poll(Pin::new(&mut io), cx) {
Poll::Pending => {
self.inner = Some(io);
Poll::Pending
}
Poll::Ready(Ok(())) => Poll::Ready(Ok(io)),
Poll::Ready(Err(err)) => {
self.inner = Some(io);
Poll::Ready(Err(err))
}
}
}
}
impl<TInner> Negotiated<TInner> {
/// Creates a `Negotiated` in state [`State::Completed`].
pub(crate) fn completed(io: TInner) -> Self {
Negotiated {
state: State::Completed { io },
}
}
/// Creates a `Negotiated` in state [`State::Expecting`] that is still
/// expecting confirmation of the given `protocol`.
pub(crate) fn expecting(
io: MessageReader<TInner>,
protocol: Protocol,
header: Option<HeaderLine>,
) -> Self {
Negotiated {
state: State::Expecting {
io,
protocol,
header,
},
}
}
/// Returns the inner I/O stream, consuming `self`.
///
/// # Panics
///
/// Panics if the stream is not in [`State::Completed`].
pub fn inner(self) -> TInner {
match self.state {
State::Completed { io } => io,
_ => panic!("stream is not negotiated"),
}
}
/// Polls the `Negotiated` for completion.
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), NegotiationError>>
where
TInner: AsyncRead + AsyncWrite + Unpin,
{
// Flush any pending negotiation data.
match self.as_mut().poll_flush(cx) {
Poll::Ready(Ok(())) => {}
Poll::Pending => return Poll::Pending,
Poll::Ready(Err(e)) => {
// If the remote closed the stream, it is important to still
// continue reading the data that was sent, if any.
if e.kind() != io::ErrorKind::WriteZero {
return Poll::Ready(Err(e.into()));
}
}
}
let mut this = self.project();
if let StateProj::Completed { .. } = this.state.as_mut().project() {
return Poll::Ready(Ok(()));
}
// Read outstanding protocol negotiation messages.
loop {
match mem::replace(&mut *this.state, State::Invalid) {
State::Expecting {
mut io,
header,
protocol,
} => {
let msg = match Pin::new(&mut io).poll_next(cx)? {
Poll::Ready(Some(msg)) => msg,
Poll::Pending => {
*this.state = State::Expecting {
io,
header,
protocol,
};
return Poll::Pending;
}
Poll::Ready(None) => {
return Poll::Ready(Err(ProtocolError::IoError(
io::ErrorKind::UnexpectedEof.into(),
)
.into()));
}
};
if let Message::Header(h) = &msg {
if Some(h) == header.as_ref() {
*this.state = State::Expecting {
io,
protocol,
header: None,
};
continue;
} else {
// If we received a header message that doesn't match the expected
// one, or the header has already been received, return an error.
return Poll::Ready(Err(ProtocolError::InvalidMessage.into()));
}
}
if let Message::Protocol(p) = &msg {
if p.as_ref() == protocol.as_ref() {
tracing::debug!(
target: LOG_TARGET,
"Negotiated: Received confirmation for protocol: {}",
p
);
*this.state = State::Completed {
io: io.into_inner(),
};
return Poll::Ready(Ok(()));
}
}
return Poll::Ready(Err(NegotiationError::Failed));
}
_ => panic!("Negotiated: Invalid state"),
}
}
}
/// Returns a [`NegotiatedComplete`] future that waits for protocol
/// negotiation to complete.
pub fn complete(self) -> NegotiatedComplete<TInner> {
NegotiatedComplete { inner: Some(self) }
}
}
/// The states of a `Negotiated` I/O stream.
#[pin_project(project = StateProj)]
#[derive(Debug)]
enum State<R> {
/// In this state, a `Negotiated` is still expecting to
/// receive confirmation of the protocol it has optimistically
/// settled on.
Expecting {
/// The underlying I/O stream.
#[pin]
io: MessageReader<R>,
/// The expected negotiation header/preamble (i.e. multistream-select version),
/// if one is still expected to be received.
header: Option<HeaderLine>,
/// The expected application protocol (i.e. name and version).
protocol: Protocol,
},
/// In this state, a protocol has been agreed upon and I/O
/// on the underlying stream can commence.
Completed {
#[pin]
io: R,
},
/// Temporary state while moving the `io` resource from
/// `Expecting` to `Completed`.
Invalid,
}
impl<TInner> AsyncRead for Negotiated<TInner>
where
TInner: AsyncRead + AsyncWrite + Unpin,
{
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut [u8],
) -> Poll<Result<usize, io::Error>> {
loop {
if let StateProj::Completed { io } = self.as_mut().project().state.project() {
// If protocol negotiation is complete, commence with reading.
return io.poll_read(cx, buf);
}
// Poll the `Negotiated`, driving protocol negotiation to completion,
// including flushing of any remaining data.
match self.as_mut().poll(cx) {
Poll::Ready(Ok(())) => {}
Poll::Pending => return Poll::Pending,
Poll::Ready(Err(err)) => return Poll::Ready(Err(From::from(err))),
}
}
}
fn poll_read_vectored(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
bufs: &mut [IoSliceMut<'_>],
) -> Poll<Result<usize, io::Error>> {
loop {
if let StateProj::Completed { io } = self.as_mut().project().state.project() {
// If protocol negotiation is complete, commence with reading.
return io.poll_read_vectored(cx, bufs);
}
// Poll the `Negotiated`, driving protocol negotiation to completion,
// including flushing of any remaining data.
match self.as_mut().poll(cx) {
Poll::Ready(Ok(())) => {}
Poll::Pending => return Poll::Pending,
Poll::Ready(Err(err)) => return Poll::Ready(Err(From::from(err))),
}
}
}
}
impl<TInner> AsyncWrite for Negotiated<TInner>
where
TInner: AsyncWrite + AsyncRead + Unpin,
{
fn poll_write(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<Result<usize, io::Error>> {
match self.project().state.project() {
StateProj::Completed { io } => io.poll_write(cx, buf),
StateProj::Expecting { io, .. } => io.poll_write(cx, buf),
StateProj::Invalid => panic!("Negotiated: Invalid state"),
}
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), io::Error>> {
match self.project().state.project() {
StateProj::Completed { io } => io.poll_flush(cx),
StateProj::Expecting { io, .. } => io.poll_flush(cx),
StateProj::Invalid => panic!("Negotiated: Invalid state"),
}
}
fn poll_close(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), io::Error>> {
// Ensure all data has been flushed, including optimistic multistream-select messages.
ready!(self.as_mut().poll_flush(cx).map_err(Into::<io::Error>::into)?);
// Continue with the shutdown of the underlying I/O stream.
match self.project().state.project() {
StateProj::Completed { io, .. } => io.poll_close(cx),
StateProj::Expecting { io, .. } => {
let close_poll = io.poll_close(cx);
if let Poll::Ready(Ok(())) = close_poll {
tracing::debug!(
target: LOG_TARGET,
"Stream closed. Confirmation from remote for optimistic protocol negotiation still pending."
);
}
close_poll
}
StateProj::Invalid => panic!("Negotiated: Invalid state"),
}
}
fn poll_write_vectored(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
bufs: &[IoSlice<'_>],
) -> Poll<Result<usize, io::Error>> {
match self.project().state.project() {
StateProj::Completed { io } => io.poll_write_vectored(cx, bufs),
StateProj::Expecting { io, .. } => io.poll_write_vectored(cx, bufs),
StateProj::Invalid => panic!("Negotiated: Invalid state"),
}
}
}
/// Error that can happen when negotiating a protocol with the remote.
#[derive(Debug, thiserror::Error, PartialEq)]
pub enum NegotiationError {
/// A protocol error occurred during the negotiation.
#[error("A protocol error occurred during the negotiation: `{0:?}`")]
ProtocolError(#[from] ProtocolError),
/// Protocol negotiation failed because no protocol could be agreed upon.
#[error("Protocol negotiation failed.")]
Failed,
}
impl From<io::Error> for NegotiationError {
fn from(err: io::Error) -> NegotiationError {
ProtocolError::from(err).into()
}
}
impl From<NegotiationError> for io::Error {
fn from(err: NegotiationError) -> io::Error {
if let NegotiationError::ProtocolError(e) = err {
return e.into();
}
io::Error::other(err)
}
}
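The `poll` loop above moves `io` out of the state enum behind a mutable reference by first swapping in the placeholder `State::Invalid` variant via `mem::replace`. The following standalone sketch (with a hypothetical, simplified `State` type, not the crate's) shows that ownership trick in isolation:

```rust
// Minimal sketch of the `mem::replace` + placeholder-variant pattern used by
// `Negotiated::poll` to take `io` by value out of `&mut State`.

#[derive(Debug, PartialEq)]
enum State {
    Expecting { io: String },
    Completed { io: String },
    Invalid,
}

fn advance(state: &mut State) {
    // Take ownership of the current state, leaving `Invalid` behind.
    match std::mem::replace(state, State::Invalid) {
        State::Expecting { io } => {
            // Consume `io` by value and move it into the next state.
            *state = State::Completed { io };
        }
        // `Completed` is terminal; put it back unchanged.
        s @ State::Completed { .. } => *state = s,
        State::Invalid => panic!("invalid state"),
    }
}

fn main() {
    let mut s = State::Expecting { io: "stream".to_string() };
    advance(&mut s);
    assert_eq!(s, State::Completed { io: "stream".to_string() });
    println!("ok");
}
```

The `Invalid` variant exists only as this temporary placeholder, which is why both the crate's `poll` loop and this sketch panic if it is ever observed.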
// File: src/multistream_select/length_delimited.rs
// Copyright 2017 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use bytes::{Buf as _, BufMut as _, Bytes, BytesMut};
use futures::{io::IoSlice, prelude::*};
use std::{
convert::TryFrom as _,
io,
pin::Pin,
task::{Context, Poll},
};
const MAX_LEN_BYTES: u16 = 2;
const MAX_FRAME_SIZE: u16 = (1 << (MAX_LEN_BYTES * 8 - MAX_LEN_BYTES)) - 1;
const DEFAULT_BUFFER_SIZE: usize = 64;
const LOG_TARGET: &str = "litep2p::multistream-select";
/// A `Stream` and `Sink` for unsigned-varint length-delimited frames,
/// wrapping an underlying `AsyncRead + AsyncWrite` I/O resource.
///
/// We purposely only support frame sizes up to 16KiB (2-byte unsigned varint
/// frame length). Frames mostly consist of a short protocol name, which is
/// highly unlikely to be more than 16KiB long.
#[pin_project::pin_project]
#[derive(Debug)]
pub struct LengthDelimited<R> {
/// The inner I/O resource.
#[pin]
inner: R,
/// Read buffer for a single incoming unsigned-varint length-delimited frame.
read_buffer: BytesMut,
/// Write buffer for outgoing unsigned-varint length-delimited frames.
write_buffer: BytesMut,
/// The current read state, alternating between reading a frame
/// length and reading a frame payload.
read_state: ReadState,
}
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
enum ReadState {
/// We are currently reading the length of the next frame of data.
ReadLength {
buf: [u8; MAX_LEN_BYTES as usize],
pos: usize,
},
/// We are currently reading the frame of data itself.
ReadData { len: u16, pos: usize },
}
impl Default for ReadState {
fn default() -> Self {
ReadState::ReadLength {
buf: [0; MAX_LEN_BYTES as usize],
pos: 0,
}
}
}
impl<R> LengthDelimited<R> {
/// Creates a new I/O resource for reading and writing unsigned-varint
/// length delimited frames.
pub fn new(inner: R) -> LengthDelimited<R> {
LengthDelimited {
inner,
read_state: ReadState::default(),
read_buffer: BytesMut::with_capacity(DEFAULT_BUFFER_SIZE),
write_buffer: BytesMut::with_capacity(DEFAULT_BUFFER_SIZE + MAX_LEN_BYTES as usize),
}
}
/// Drops the [`LengthDelimited`] resource, yielding the underlying I/O stream.
///
/// # Panic
///
/// Will panic if called while there is data in the read or write buffer.
/// The read buffer is guaranteed to be empty whenever `Stream::poll` yields
/// a new `Bytes` frame. The write buffer is guaranteed to be empty after
/// flushing.
pub fn into_inner(self) -> R {
assert!(self.read_buffer.is_empty());
assert!(self.write_buffer.is_empty());
self.inner
}
/// Converts the [`LengthDelimited`] into a [`LengthDelimitedReader`], dropping the
/// uvi-framed `Sink` in favour of direct `AsyncWrite` access to the underlying
/// I/O stream.
///
/// This is typically done if further uvi-framed messages are expected to be
/// received but no more such messages are written, allowing the writing of
/// follow-up protocol data to commence.
pub fn into_reader(self) -> LengthDelimitedReader<R> {
LengthDelimitedReader { inner: self }
}
/// Writes all buffered frame data to the underlying I/O stream,
/// _without flushing it_.
///
/// After this method returns `Poll::Ready`, the write buffer of frames
/// submitted to the `Sink` is guaranteed to be empty.
pub fn poll_write_buffer(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<(), io::Error>>
where
R: AsyncWrite,
{
let mut this = self.project();
while !this.write_buffer.is_empty() {
match this.inner.as_mut().poll_write(cx, this.write_buffer) {
Poll::Pending => return Poll::Pending,
Poll::Ready(Ok(0)) =>
return Poll::Ready(Err(io::Error::new(
io::ErrorKind::WriteZero,
"Failed to write buffered frame.",
))),
Poll::Ready(Ok(n)) => this.write_buffer.advance(n),
Poll::Ready(Err(err)) => return Poll::Ready(Err(err)),
}
}
Poll::Ready(Ok(()))
}
}
impl<R> Stream for LengthDelimited<R>
where
R: AsyncRead,
{
type Item = Result<Bytes, io::Error>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let mut this = self.project();
loop {
match this.read_state {
ReadState::ReadLength { buf, pos } => {
match this.inner.as_mut().poll_read(cx, &mut buf[*pos..*pos + 1]) {
Poll::Ready(Ok(0)) =>
if *pos == 0 {
return Poll::Ready(None);
} else {
return Poll::Ready(Some(Err(io::ErrorKind::UnexpectedEof.into())));
},
Poll::Ready(Ok(n)) => {
debug_assert_eq!(n, 1);
*pos += n;
}
Poll::Ready(Err(err)) => return Poll::Ready(Some(Err(err))),
Poll::Pending => return Poll::Pending,
};
if (buf[*pos - 1] & 0x80) == 0 {
// MSB is not set, indicating the end of the length prefix.
let (len, _) = unsigned_varint::decode::u16(buf).map_err(|e| {
tracing::debug!(target: LOG_TARGET, "invalid length prefix: {}", e);
io::Error::new(io::ErrorKind::InvalidData, "invalid length prefix")
})?;
if len >= 1 {
*this.read_state = ReadState::ReadData { len, pos: 0 };
this.read_buffer.resize(len as usize, 0);
} else {
debug_assert_eq!(len, 0);
*this.read_state = ReadState::default();
return Poll::Ready(Some(Ok(Bytes::new())));
}
} else if *pos == MAX_LEN_BYTES as usize {
// MSB signals more length bytes but we have already read the maximum.
// See the module documentation about the max frame len.
return Poll::Ready(Some(Err(io::Error::new(
io::ErrorKind::InvalidData,
"Maximum frame length exceeded",
))));
}
}
ReadState::ReadData { len, pos } => {
match this.inner.as_mut().poll_read(cx, &mut this.read_buffer[*pos..]) {
Poll::Ready(Ok(0)) =>
return Poll::Ready(Some(Err(io::ErrorKind::UnexpectedEof.into()))),
Poll::Ready(Ok(n)) => *pos += n,
Poll::Pending => return Poll::Pending,
Poll::Ready(Err(err)) => return Poll::Ready(Some(Err(err))),
};
if *pos == *len as usize {
// Finished reading the frame.
let frame = this.read_buffer.split_off(0).freeze();
*this.read_state = ReadState::default();
return Poll::Ready(Some(Ok(frame)));
}
}
}
}
}
}
impl<R> Sink<Bytes> for LengthDelimited<R>
where
R: AsyncWrite,
{
type Error = io::Error;
fn poll_ready(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
// Use the maximum frame length also as a (soft) upper limit
// for the entire write buffer. The actual (hard) limit is thus
// implied to be roughly 2 * MAX_FRAME_SIZE.
if self.as_mut().project().write_buffer.len() >= MAX_FRAME_SIZE as usize {
match self.as_mut().poll_write_buffer(cx) {
Poll::Ready(Ok(())) => {}
Poll::Ready(Err(err)) => return Poll::Ready(Err(err)),
Poll::Pending => return Poll::Pending,
}
debug_assert!(self.as_mut().project().write_buffer.is_empty());
}
Poll::Ready(Ok(()))
}
fn start_send(self: Pin<&mut Self>, item: Bytes) -> Result<(), Self::Error> {
let this = self.project();
let len = match u16::try_from(item.len()) {
Ok(len) if len <= MAX_FRAME_SIZE => len,
_ =>
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"Maximum frame size exceeded.",
)),
};
let mut uvi_buf = unsigned_varint::encode::u16_buffer();
let uvi_len = unsigned_varint::encode::u16(len, &mut uvi_buf);
this.write_buffer.reserve(len as usize + uvi_len.len());
this.write_buffer.put(uvi_len);
this.write_buffer.put(item);
Ok(())
}
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
// Write all buffered frame data to the underlying I/O stream.
match LengthDelimited::poll_write_buffer(self.as_mut(), cx) {
Poll::Ready(Ok(())) => {}
Poll::Ready(Err(err)) => return Poll::Ready(Err(err)),
Poll::Pending => return Poll::Pending,
}
let this = self.project();
debug_assert!(this.write_buffer.is_empty());
// Flush the underlying I/O stream.
this.inner.poll_flush(cx)
}
fn poll_close(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
// Write all buffered frame data to the underlying I/O stream.
match LengthDelimited::poll_write_buffer(self.as_mut(), cx) {
Poll::Ready(Ok(())) => {}
Poll::Ready(Err(err)) => return Poll::Ready(Err(err)),
Poll::Pending => return Poll::Pending,
}
let this = self.project();
debug_assert!(this.write_buffer.is_empty());
// Close the underlying I/O stream.
this.inner.poll_close(cx)
}
}
/// A `LengthDelimitedReader` implements a `Stream` of uvi-length-delimited
/// frames on an underlying I/O resource combined with direct `AsyncWrite` access.
#[pin_project::pin_project]
#[derive(Debug)]
pub struct LengthDelimitedReader<R> {
#[pin]
inner: LengthDelimited<R>,
}
impl<R> LengthDelimitedReader<R> {
/// Destroys the `LengthDelimitedReader` and returns the underlying I/O stream.
///
/// This method is guaranteed not to drop any data read from or not yet
/// submitted to the underlying I/O stream.
///
/// # Panic
///
/// Will panic if called while there is data in the read or write buffer.
/// The read buffer is guaranteed to be empty whenever [`Stream::poll_next`]
/// yields a new `Message`. The write buffer is guaranteed to be empty whenever
/// [`LengthDelimited::poll_write_buffer`] yields [`Poll::Ready`] or after
/// the [`Sink`] has been completely flushed via [`Sink::poll_flush`].
pub fn into_inner(self) -> R {
self.inner.into_inner()
}
}
impl<R> Stream for LengthDelimitedReader<R>
where
R: AsyncRead,
{
type Item = Result<Bytes, io::Error>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
self.project().inner.poll_next(cx)
}
}
impl<R> AsyncWrite for LengthDelimitedReader<R>
where
R: AsyncWrite,
{
fn poll_write(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<Result<usize, io::Error>> {
// `this` here designates the `LengthDelimited`.
let mut this = self.project().inner;
// We need to flush any data previously written with the `LengthDelimited`.
match LengthDelimited::poll_write_buffer(this.as_mut(), cx) {
Poll::Ready(Ok(())) => {}
Poll::Ready(Err(err)) => return Poll::Ready(Err(err)),
Poll::Pending => return Poll::Pending,
}
debug_assert!(this.write_buffer.is_empty());
this.project().inner.poll_write(cx, buf)
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), io::Error>> {
self.project().inner.poll_flush(cx)
}
fn poll_close(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), io::Error>> {
self.project().inner.poll_close(cx)
}
fn poll_write_vectored(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
bufs: &[IoSlice<'_>],
) -> Poll<Result<usize, io::Error>> {
// `this` here designates the `LengthDelimited`.
let mut this = self.project().inner;
// We need to flush any data previously written with the `LengthDelimited`.
match LengthDelimited::poll_write_buffer(this.as_mut(), cx) {
Poll::Ready(Ok(())) => {}
Poll::Ready(Err(err)) => return Poll::Ready(Err(err)),
Poll::Pending => return Poll::Pending,
}
debug_assert!(this.write_buffer.is_empty());
this.project().inner.poll_write_vectored(cx, bufs)
}
}
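The length prefix handled by `LengthDelimited` can be reproduced in isolation. This standalone sketch (hand-rolled, not using the `unsigned_varint` crate) encodes and decodes a varint length prefix capped at 2 bytes, i.e. 14 usable bits, matching the `(1 << 14) - 1 = 16383` ceiling that `MAX_FRAME_SIZE` expresses:

```rust
// Encode a frame length as an unsigned varint: 7 bits per byte,
// MSB set on all but the final byte.
fn encode_len(len: u16) -> Vec<u8> {
    assert!(len <= (1 << 14) - 1, "maximum frame length exceeded");
    let mut out = Vec::new();
    let mut n = len;
    loop {
        let byte = (n & 0x7f) as u8;
        n >>= 7;
        if n == 0 {
            out.push(byte); // MSB clear: final length byte
            break;
        }
        out.push(byte | 0x80); // MSB set: more length bytes follow
    }
    out
}

// Decode a length prefix, returning the length and the number of
// prefix bytes consumed, or `None` on a too-long or truncated prefix.
fn decode_len(buf: &[u8]) -> Option<(u16, usize)> {
    let mut len: u16 = 0;
    for (i, &b) in buf.iter().enumerate() {
        if i == 2 {
            return None; // more than 2 length bytes: frame too large
        }
        len |= u16::from(b & 0x7f) << (7 * i);
        if b & 0x80 == 0 {
            return Some((len, i + 1));
        }
    }
    None // ran out of input while MSB still signalled continuation
}

fn main() {
    assert_eq!(encode_len(1), vec![0x01]);
    assert_eq!(encode_len(300), vec![0xac, 0x02]);
    assert_eq!(decode_len(&[0xac, 0x02]), Some((300, 2)));
    assert_eq!(decode_len(&[0xff, 0xff, 0x7f]), None);
    println!("ok");
}
```

The `None` on a third continuation byte mirrors the "Maximum frame length exceeded" error path in `ReadState::ReadLength` above.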
// File: src/multistream_select/mod.rs
// Copyright 2017 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
#![allow(unused)]
#![allow(clippy::derivable_impls)]
//! # Multistream-select Protocol Negotiation
//!
//! This crate implements the `multistream-select` protocol, which is the protocol
//! used by libp2p to negotiate which application-layer protocol to use with the
//! remote on a connection or substream.
//!
//! > **Note**: This crate is used primarily by core components of *libp2p* and it
//! > is usually not used directly on its own.
//!
//! ## Roles
//!
//! Two peers using the multistream-select negotiation protocol on an I/O stream
//! are distinguished by their role as a _dialer_ (or _initiator_) or as a _listener_
//! (or _responder_). Thereby the dialer plays the active part, driving the protocol,
//! whereas the listener reacts to the messages received.
//!
//! The dialer has two options: it can either pick a protocol from the complete list
//! of protocols that the listener supports, or it can directly suggest a protocol.
//! Either way, a selected protocol is sent to the listener who can either accept (by
//! echoing the same protocol) or reject (by responding with a message stating
//! "not available"). If a suggested protocol is not available, the dialer may
//! suggest another protocol. This process continues until a protocol is agreed upon,
//! yielding a [`Negotiated`] stream, or the dialer has run out of
//! alternatives.
//!
//! See [`dialer_select_proto`] and [`listener_select_proto`].
//!
//! ## [`Negotiated`]
//!
//! A `Negotiated` represents an I/O stream that has settled on a protocol
//! to use. By default, with [`Version::V1`], protocol negotiation is always
//! at least one dedicated round-trip message exchange, before application
//! data for the negotiated protocol can be sent by the dialer. There is
//! a variant [`Version::V1Lazy`] that permits 0-RTT negotiation if the
//! dialer only supports a single protocol. In that case, when a dialer
//! settles on a protocol to use, the [`DialerSelectFuture`] yields a
//! [`Negotiated`] I/O stream before the negotiation
//! data has been flushed. It is then expecting confirmation for that protocol
//! as the first messages read from the stream. This behaviour allows the dialer
//! to immediately send data relating to the negotiated protocol together with the
//! remaining negotiation message(s). Note, however, that a dialer that performs
//! multiple 0-RTT negotiations in sequence for different protocols layered on
//! top of each other may trigger undesirable behaviour for a listener not
//! supporting one of the intermediate protocols. See
//! [`dialer_select_proto`] and the documentation of [`Version::V1Lazy`] for further details.
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
mod dialer_select;
mod length_delimited;
mod listener_select;
mod negotiated;
mod protocol;
use crate::error::{self, ParseError};
pub use crate::multistream_select::{
dialer_select::{dialer_select_proto, DialerSelectFuture, HandshakeResult, WebRtcDialerState},
listener_select::{
listener_select_proto, webrtc_listener_negotiate, ListenerSelectFuture,
ListenerSelectResult,
},
negotiated::{Negotiated, NegotiatedComplete, NegotiationError},
protocol::{HeaderLine, Message, Protocol, ProtocolError, PROTO_MULTISTREAM_1_0},
};
use bytes::Bytes;
const LOG_TARGET: &str = "litep2p::multistream-select";
/// Supported multistream-select versions.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Version {
/// Version 1 of the multistream-select protocol. See [1] and [2].
///
/// [1]: https://github.com/libp2p/specs/blob/master/connections/README.md#protocol-negotiation
/// [2]: https://github.com/multiformats/multistream-select
V1,
/// A "lazy" variant of version 1 that is identical on the wire but whereby
/// the dialer delays flushing protocol negotiation data in order to combine
/// it with initial application data, thus performing 0-RTT negotiation.
///
/// This strategy is only applicable for the node with the role of "dialer"
/// in the negotiation and only if the dialer supports just a single
/// application protocol. In that case the dialer immediately "settles"
/// on that protocol, buffering the negotiation messages to be sent
/// with the first round of application protocol data (or an attempt
/// is made to read from the `Negotiated` I/O stream).
///
/// A listener will behave identically to `V1`. This ensures interoperability with `V1`.
/// Notably, it will immediately send the multistream header as well as the protocol
/// confirmation, resulting in multiple frames being sent on the underlying transport.
/// Nevertheless, if the listener supports the protocol that the dialer optimistically
/// settled on, it can be a 0-RTT negotiation.
///
/// > **Note**: `V1Lazy` is specific to `rust-libp2p`. The wire protocol is identical to `V1`
/// > and generally interoperable with peers only supporting `V1`. Nevertheless, there is a
/// > pitfall that is rarely encountered: When nesting multiple protocol negotiations, the
/// > listener should either be known to support all of the dialer's optimistically chosen
/// > protocols, or there must be no intermediate protocol without a payload and none of
/// > the protocol payloads must have the potential for being mistaken for a multistream-select
/// > protocol message. This avoids rare edge-cases whereby the listener may not recognize
/// > upgrade boundaries and erroneously process a request despite not supporting one of
/// > the intermediate protocols that the dialer committed to. See [1] and [2].
///
/// [1]: https://github.com/multiformats/go-multistream/issues/20
/// [2]: https://github.com/libp2p/rust-libp2p/pull/1212
V1Lazy,
// Draft: https://github.com/libp2p/specs/pull/95
// V2,
}
impl Default for Version {
fn default() -> Self {
Version::V1
}
}
/// This function is only used in the WebRTC transport. It expects one or more
/// multistream-select messages in `remaining` and returns a list of protocols
/// that were decoded from them.
fn drain_trailing_protocols(
mut remaining: Bytes,
) -> Result<Vec<Protocol>, error::NegotiationError> {
let mut protocols = vec![];
loop {
if remaining.is_empty() {
break;
}
let (len, tail) = unsigned_varint::decode::usize(&remaining).map_err(|error| {
tracing::debug!(
target: LOG_TARGET,
?error,
message = ?remaining,
"Failed to decode length-prefix in multistream message");
error::NegotiationError::ParseError(ParseError::InvalidData)
})?;
if len > tail.len() {
tracing::debug!(
target: LOG_TARGET,
message = ?tail,
length_prefix = len,
actual_length = tail.len(),
"Truncated multistream message",
);
return Err(error::NegotiationError::ParseError(ParseError::InvalidData));
}
let len_size = remaining.len() - tail.len();
let payload = remaining.slice(len_size..len_size + len);
let res = Message::decode(payload);
match res {
Ok(Message::Header(HeaderLine::V1)) => protocols.push(PROTO_MULTISTREAM_1_0),
Ok(Message::Protocol(protocol)) => protocols.push(protocol),
Ok(Message::Protocols(_)) =>
return Err(error::NegotiationError::ParseError(ParseError::InvalidData)),
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?error,
message = ?tail[..len],
"Failed to decode multistream message",
);
return Err(error::NegotiationError::ParseError(ParseError::InvalidData));
}
_ => return Err(error::NegotiationError::StateMismatch),
}
remaining = remaining.slice(len_size + len..);
}
Ok(protocols)
}
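The draining loop above can be illustrated with a simplified standalone sketch: it splits a buffer of concatenated length-prefixed messages into their payloads, assuming single-byte length prefixes (sufficient for short protocol names; the real code decodes full varints and then parses each payload as a `Message`):

```rust
// Split a buffer of concatenated length-prefixed frames into payload slices.
// Returns `None` on a multi-byte prefix (unsupported here) or a truncated frame.
fn split_frames(mut buf: &[u8]) -> Option<Vec<&[u8]>> {
    let mut frames = Vec::new();
    while !buf.is_empty() {
        let len = buf[0] as usize;
        if buf[0] & 0x80 != 0 || buf.len() < 1 + len {
            return None; // multi-byte prefix or truncated message
        }
        frames.push(&buf[1..1 + len]);
        buf = &buf[1 + len..];
    }
    Some(frames)
}

fn main() {
    // Wire format: "/multistream/1.0.0\n" followed by "/echo/1.0.0\n",
    // each preceded by its length (protocol names end with '\n' on the wire).
    let mut wire = Vec::new();
    for msg in [&b"/multistream/1.0.0\n"[..], &b"/echo/1.0.0\n"[..]] {
        wire.push(msg.len() as u8);
        wire.extend_from_slice(msg);
    }
    let frames = split_frames(&wire).unwrap();
    assert_eq!(frames.len(), 2);
    assert_eq!(frames[1], b"/echo/1.0.0\n");
    println!("ok");
}
```

As in `drain_trailing_protocols`, a length prefix that claims more bytes than remain in the buffer is treated as a hard parse error rather than a partial read, since the WebRTC path has no further data to wait for.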
// File: src/multistream_select/dialer_select.rs
// Copyright 2017 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Protocol negotiation strategies for the peer acting as the dialer.
use crate::{
codec::unsigned_varint::UnsignedVarint,
error::{self, Error, ParseError, SubstreamError},
multistream_select::{
drain_trailing_protocols,
protocol::{
webrtc_encode_multistream_message, HeaderLine, Message, MessageIO, Protocol,
ProtocolError, PROTO_MULTISTREAM_1_0,
},
Negotiated, NegotiationError, Version,
},
types::protocol::ProtocolName,
};
use bytes::{Bytes, BytesMut};
use futures::prelude::*;
use std::{
convert::TryFrom as _,
iter, mem,
pin::Pin,
task::{Context, Poll},
};
const LOG_TARGET: &str = "litep2p::multistream-select";
/// Returns a `Future` that negotiates a protocol on the given I/O stream
/// for a peer acting as the _dialer_ (or _initiator_).
///
/// This function is given an I/O stream and a list of protocols and returns a
/// computation that performs the protocol negotiation with the remote. The
/// returned `Future` resolves with the name of the negotiated protocol and
/// a [`Negotiated`] I/O stream.
///
/// Within the scope of this library, a dialer always commits to a specific
/// multistream-select [`Version`], whereas a listener always supports
/// all versions supported by this library. Frictionless multistream-select
/// protocol upgrades may thus proceed by deployments with updated listeners,
/// eventually followed by deployments of dialers choosing the newer protocol.
pub fn dialer_select_proto<R, I>(
inner: R,
protocols: I,
version: Version,
) -> DialerSelectFuture<R, I::IntoIter>
where
R: AsyncRead + AsyncWrite,
I: IntoIterator,
I::Item: AsRef<[u8]>,
{
let protocols = protocols.into_iter().peekable();
DialerSelectFuture {
version,
protocols,
state: State::SendHeader {
io: MessageIO::new(inner),
},
}
}
/// A `Future` returned by [`dialer_select_proto`] which negotiates
/// a protocol iteratively by considering one protocol after the other.
#[pin_project::pin_project]
pub struct DialerSelectFuture<R, I: Iterator> {
protocols: iter::Peekable<I>,
state: State<R, I::Item>,
version: Version,
}
enum State<R, N> {
SendHeader {
io: MessageIO<R>,
},
SendProtocol {
io: MessageIO<R>,
protocol: N,
header_received: bool,
},
FlushProtocol {
io: MessageIO<R>,
protocol: N,
header_received: bool,
},
AwaitProtocol {
io: MessageIO<R>,
protocol: N,
header_received: bool,
},
Done,
}
impl<R, I> Future for DialerSelectFuture<R, I>
where
// The Unpin bound here is required because we produce
// a `Negotiated<R>` as the output. It also makes
// the implementation considerably easier to write.
R: AsyncRead + AsyncWrite + Unpin,
I: Iterator,
I::Item: AsRef<[u8]>,
{
type Output = Result<(I::Item, Negotiated<R>), NegotiationError>;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let this = self.project();
loop {
match mem::replace(this.state, State::Done) {
State::SendHeader { mut io } => {
match Pin::new(&mut io).poll_ready(cx)? {
Poll::Ready(()) => {}
Poll::Pending => {
*this.state = State::SendHeader { io };
return Poll::Pending;
}
}
let h = HeaderLine::from(*this.version);
if let Err(err) = Pin::new(&mut io).start_send(Message::Header(h)) {
return Poll::Ready(Err(From::from(err)));
}
let protocol = this.protocols.next().ok_or(NegotiationError::Failed)?;
// The dialer always sends the header and the first protocol
// proposal in one go for efficiency.
*this.state = State::SendProtocol {
io,
protocol,
header_received: false,
};
}
State::SendProtocol {
mut io,
protocol,
header_received,
} => {
match Pin::new(&mut io).poll_ready(cx)? {
Poll::Ready(()) => {}
Poll::Pending => {
*this.state = State::SendProtocol {
io,
protocol,
header_received,
};
return Poll::Pending;
}
}
let p = Protocol::try_from(protocol.as_ref())?;
if let Err(err) = Pin::new(&mut io).start_send(Message::Protocol(p.clone())) {
return Poll::Ready(Err(From::from(err)));
}
tracing::debug!(target: LOG_TARGET, "Dialer: Proposed protocol: {}", p);
if this.protocols.peek().is_some() {
*this.state = State::FlushProtocol {
io,
protocol,
header_received,
}
} else {
match this.version {
Version::V1 =>
*this.state = State::FlushProtocol {
io,
protocol,
header_received,
},
// This is the only effect that `V1Lazy` has compared to `V1`:
// Optimistically settling on the only protocol that
// the dialer supports for this negotiation. Notably,
// the dialer expects a regular `V1` response.
Version::V1Lazy => {
tracing::debug!(
target: LOG_TARGET,
"Dialer: Expecting proposed protocol: {}",
p
);
let hl = HeaderLine::from(Version::V1Lazy);
let io = Negotiated::expecting(io.into_reader(), p, Some(hl));
return Poll::Ready(Ok((protocol, io)));
}
}
}
}
State::FlushProtocol {
mut io,
protocol,
header_received,
} => match Pin::new(&mut io).poll_flush(cx)? {
Poll::Ready(()) =>
*this.state = State::AwaitProtocol {
io,
protocol,
header_received,
},
Poll::Pending => {
*this.state = State::FlushProtocol {
io,
protocol,
header_received,
};
return Poll::Pending;
}
},
State::AwaitProtocol {
mut io,
protocol,
header_received,
} => {
let msg = match Pin::new(&mut io).poll_next(cx)? {
Poll::Ready(Some(msg)) => msg,
Poll::Pending => {
*this.state = State::AwaitProtocol {
io,
protocol,
header_received,
};
return Poll::Pending;
}
// Treat EOF error as [`NegotiationError::Failed`], not as
// [`NegotiationError::ProtocolError`], allowing dropping or closing an I/O
// stream as a permissible way to "gracefully" fail a negotiation.
Poll::Ready(None) => return Poll::Ready(Err(NegotiationError::Failed)),
};
match msg {
Message::Header(v)
if v == HeaderLine::from(*this.version) && !header_received =>
{
*this.state = State::AwaitProtocol {
io,
protocol,
header_received: true,
};
}
Message::Protocol(ref p) if p.as_ref() == protocol.as_ref() => {
tracing::debug!(
target: LOG_TARGET,
"Dialer: Received confirmation for protocol: {}",
p
);
let io = Negotiated::completed(io.into_inner());
return Poll::Ready(Ok((protocol, io)));
}
Message::NotAvailable => {
tracing::debug!(
target: LOG_TARGET,
"Dialer: Received rejection of protocol: {}",
String::from_utf8_lossy(protocol.as_ref())
);
let protocol = this.protocols.next().ok_or(NegotiationError::Failed)?;
*this.state = State::SendProtocol {
io,
protocol,
header_received,
}
}
_ => return Poll::Ready(Err(ProtocolError::InvalidMessage.into())),
}
}
State::Done => panic!("State::poll called after completion"),
}
}
}
}
/// `multistream-select` handshake result for dialer.
#[derive(Debug, PartialEq, Eq)]
pub enum HandshakeResult {
/// Handshake is not complete, data missing.
NotReady,
/// Handshake has succeeded.
///
    /// Contains the name of the negotiated protocol.
Succeeded(ProtocolName),
}
/// Handshake state.
#[derive(Debug)]
enum HandshakeState {
    /// Waiting to receive any response from the remote peer.
WaitingResponse,
    /// Waiting to receive the actual application protocol from the remote peer.
WaitingProtocol,
}
/// `multistream-select` dialer handshake state.
#[derive(Debug)]
pub struct WebRtcDialerState {
/// Proposed main protocol.
protocol: ProtocolName,
/// Fallback names of the main protocol.
fallback_names: Vec<ProtocolName>,
/// Dialer handshake state.
state: HandshakeState,
}
impl WebRtcDialerState {
/// Propose protocol to remote peer.
///
    /// Returns a [`WebRtcDialerState`], which is used to drive the negotiation
    /// forward, and an encoded `multistream-select` message that contains the
    /// protocol proposal for the substream.
pub fn propose(
protocol: ProtocolName,
fallback_names: Vec<ProtocolName>,
) -> crate::Result<(Self, Vec<u8>)> {
let message = webrtc_encode_multistream_message(
std::iter::once(protocol.clone())
.chain(fallback_names.clone())
.filter_map(|protocol| Protocol::try_from(protocol.as_ref()).ok())
.map(Message::Protocol),
)?
.freeze()
.to_vec();
Ok((
Self {
protocol,
fallback_names,
state: HandshakeState::WaitingResponse,
},
message,
))
}
/// Register response to [`WebRtcDialerState`].
pub fn register_response(
&mut self,
payload: Vec<u8>,
) -> Result<HandshakeResult, crate::error::NegotiationError> {
// All multistream-select messages are length-prefixed. Since this code path is not using
// multistream_select::protocol::MessageIO, we need to decode and remove the length here.
let remaining: &[u8] = &payload;
let (len, tail) = unsigned_varint::decode::usize(remaining).map_err(|error| {
tracing::debug!(
target: LOG_TARGET,
?error,
message = ?payload,
"Failed to decode length-prefix in multistream message");
error::NegotiationError::ParseError(ParseError::InvalidData)
})?;
let len_size = remaining.len() - tail.len();
let bytes = Bytes::from(payload);
let payload = bytes.slice(len_size..len_size + len);
let remaining = bytes.slice(len_size + len..);
let message = Message::decode(payload);
tracing::trace!(
target: LOG_TARGET,
?message,
"Decoded message while registering response",
);
let mut protocols = match message {
Ok(Message::Header(HeaderLine::V1)) => {
vec![PROTO_MULTISTREAM_1_0]
}
Ok(Message::Protocol(protocol)) => vec![protocol],
Ok(Message::Protocols(protocols)) => protocols,
Ok(Message::NotAvailable) =>
return match &self.state {
HandshakeState::WaitingProtocol => Err(
error::NegotiationError::MultistreamSelectError(NegotiationError::Failed),
),
_ => Err(error::NegotiationError::StateMismatch),
},
Ok(Message::ListProtocols) => return Err(error::NegotiationError::StateMismatch),
Err(_) => return Err(error::NegotiationError::ParseError(ParseError::InvalidData)),
};
match drain_trailing_protocols(remaining) {
Ok(protos) => protocols.extend(protos),
Err(error) => return Err(error),
}
let mut protocol_iter = protocols.into_iter();
loop {
match (&self.state, protocol_iter.next()) {
(HandshakeState::WaitingResponse, None) =>
return Err(crate::error::NegotiationError::StateMismatch),
(HandshakeState::WaitingResponse, Some(protocol)) => {
if protocol == PROTO_MULTISTREAM_1_0 {
self.state = HandshakeState::WaitingProtocol;
} else {
return Err(crate::error::NegotiationError::MultistreamSelectError(
NegotiationError::Failed,
));
}
}
(HandshakeState::WaitingProtocol, Some(protocol)) => {
if protocol == PROTO_MULTISTREAM_1_0 {
return Err(crate::error::NegotiationError::StateMismatch);
}
if self.protocol.as_bytes() == protocol.as_ref() {
return Ok(HandshakeResult::Succeeded(self.protocol.clone()));
}
for fallback in &self.fallback_names {
if fallback.as_bytes() == protocol.as_ref() {
return Ok(HandshakeResult::Succeeded(fallback.clone()));
}
}
return Err(crate::error::NegotiationError::MultistreamSelectError(
NegotiationError::Failed,
));
}
(HandshakeState::WaitingProtocol, None) => {
return Ok(HandshakeResult::NotReady);
}
}
}
}
}
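The two-step handshake that `register_response` drives can be sketched as a small standalone state machine (the `State`, `step`, and protocol names below are illustrative, not this crate's API): the first line from the remote must be the `/multistream/1.0.0` header, the second must echo one of the proposed protocols, and a repeated header is an error.

```rust
// Minimal reconstruction of the dialer-side handshake state machine
// (illustrative names; fallback protocols are omitted for brevity).
#[derive(Debug, PartialEq)]
enum State {
    WaitingResponse,
    WaitingProtocol,
    Done(String),
}

fn step(state: State, line: &str, proposed: &str) -> Result<State, &'static str> {
    match (state, line) {
        // The first line from the remote must be the multistream header.
        (State::WaitingResponse, "/multistream/1.0.0") => Ok(State::WaitingProtocol),
        (State::WaitingResponse, _) => Err("expected header"),
        // A repeated header at this point is a protocol violation.
        (State::WaitingProtocol, "/multistream/1.0.0") => Err("duplicate header"),
        // The echoed protocol must match what the dialer proposed.
        (State::WaitingProtocol, l) if l == proposed => Ok(State::Done(l.to_string())),
        (State::WaitingProtocol, _) => Err("unsupported protocol"),
        (done @ State::Done(_), _) => Ok(done),
    }
}

fn main() {
    let s = step(State::WaitingResponse, "/multistream/1.0.0", "/proto1").unwrap();
    let s = step(s, "/proto1", "/proto1").unwrap();
    assert_eq!(s, State::Done("/proto1".into()));
}
```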
#[cfg(test)]
mod tests {
use super::*;
use crate::multistream_select::{listener_select_proto, protocol::MSG_MULTISTREAM_1_0};
use bytes::BufMut;
use std::time::Duration;
#[tokio::test]
async fn select_proto_basic() {
async fn run(version: Version) {
let (client_connection, server_connection) = futures_ringbuf::Endpoint::pair(100, 100);
let server: tokio::task::JoinHandle<Result<(), ()>> = tokio::spawn(async move {
let protos = vec!["/proto1", "/proto2"];
let (proto, mut io) =
listener_select_proto(server_connection, protos).await.unwrap();
assert_eq!(proto, "/proto2");
let mut out = vec![0; 32];
let n = io.read(&mut out).await.unwrap();
out.truncate(n);
assert_eq!(out, b"ping");
io.write_all(b"pong").await.unwrap();
io.flush().await.unwrap();
Ok(())
});
let client: tokio::task::JoinHandle<Result<(), ()>> = tokio::spawn(async move {
let protos = vec!["/proto3", "/proto2"];
let (proto, mut io) =
dialer_select_proto(client_connection, protos, version).await.unwrap();
assert_eq!(proto, "/proto2");
io.write_all(b"ping").await.unwrap();
io.flush().await.unwrap();
let mut out = vec![0; 32];
let n = io.read(&mut out).await.unwrap();
out.truncate(n);
assert_eq!(out, b"pong");
Ok(())
});
server.await.unwrap();
client.await.unwrap();
}
run(Version::V1).await;
run(Version::V1Lazy).await;
}
/// Tests the expected behaviour of failed negotiations.
#[tokio::test]
async fn negotiation_failed() {
async fn run(
version: Version,
dial_protos: Vec<&'static str>,
dial_payload: Vec<u8>,
listen_protos: Vec<&'static str>,
) {
let (client_connection, server_connection) = futures_ringbuf::Endpoint::pair(100, 100);
let server: tokio::task::JoinHandle<Result<(), ()>> = tokio::spawn(async move {
let io = match tokio::time::timeout(
Duration::from_secs(2),
listener_select_proto(server_connection, listen_protos),
)
.await
.unwrap()
{
Ok((_, io)) => io,
Err(NegotiationError::Failed) => return Ok(()),
Err(NegotiationError::ProtocolError(e)) => {
panic!("Unexpected protocol error {e}")
}
};
match io.complete().await {
Err(NegotiationError::Failed) => {}
_ => panic!(),
}
Ok(())
});
let client: tokio::task::JoinHandle<Result<(), ()>> = tokio::spawn(async move {
let mut io = match tokio::time::timeout(
Duration::from_secs(2),
dialer_select_proto(client_connection, dial_protos, version),
)
.await
.unwrap()
{
Err(NegotiationError::Failed) => return Ok(()),
Ok((_, io)) => io,
Err(_) => panic!(),
};
                // When `V1Lazy` is used, the dialer may write a payload that
                // is sent even before it has received confirmation of the
                // last proposed protocol.
io.write_all(&dial_payload).await.unwrap();
match io.complete().await {
Err(NegotiationError::Failed) => {}
_ => panic!(),
}
Ok(())
});
server.await.unwrap();
client.await.unwrap();
}
// Incompatible protocols.
run(Version::V1, vec!["/proto1"], vec![1], vec!["/proto2"]).await;
run(Version::V1Lazy, vec!["/proto1"], vec![1], vec!["/proto2"]).await;
}
#[tokio::test]
async fn v1_lazy_do_not_wait_for_negotiation_on_poll_close() {
let (client_connection, _server_connection) =
futures_ringbuf::Endpoint::pair(1024 * 1024, 1);
let client = tokio::spawn(async move {
// Single protocol to allow for lazy (or optimistic) protocol negotiation.
let protos = vec!["/proto1"];
let (proto, mut io) =
dialer_select_proto(client_connection, protos, Version::V1Lazy).await.unwrap();
assert_eq!(proto, "/proto1");
            // In libp2p, the lazy negotiation of protocols can be closed at
            // any time, even if the negotiation is not yet done. In litep2p,
            // however, the negotiation must conclude before closing the
            // lazily negotiated protocol. We'll wait for the close until the
            // server has produced a message, which in this test means forever.
io.close().await.unwrap();
});
assert!(tokio::time::timeout(Duration::from_secs(10), client).await.is_ok());
}
#[tokio::test]
async fn low_level_negotiate() {
async fn run(version: Version) {
let (client_connection, mut server_connection) =
futures_ringbuf::Endpoint::pair(100, 100);
let server = tokio::spawn(async move {
let protos = ["/proto2"];
let multistream = b"/multistream/1.0.0\n";
let len = multistream.len();
let proto = b"/proto2\n";
let proto_len = proto.len();
                // Check that our implementation optimally writes the
                // multistream header and the protocol proposal in a single message.
let mut expected_message = Vec::new();
expected_message.push(len as u8);
expected_message.extend_from_slice(multistream);
expected_message.push(proto_len as u8);
expected_message.extend_from_slice(proto);
if version == Version::V1Lazy {
expected_message.extend_from_slice(b"ping");
}
let mut out = vec![0; 64];
let n = server_connection.read(&mut out).await.unwrap();
out.truncate(n);
assert_eq!(out, expected_message);
                // We must send back the multistream header.
let mut send_message = Vec::new();
send_message.push(len as u8);
send_message.extend_from_slice(multistream);
server_connection.write_all(&mut send_message).await.unwrap();
let mut send_message = Vec::new();
send_message.push(proto_len as u8);
send_message.extend_from_slice(proto);
server_connection.write_all(&mut send_message).await.unwrap();
// Handle handshake.
match version {
Version::V1 => {
let mut out = vec![0; 64];
let n = server_connection.read(&mut out).await.unwrap();
out.truncate(n);
assert_eq!(out, b"ping");
server_connection.write_all(b"pong").await.unwrap();
}
Version::V1Lazy => {
// Ping (handshake) payload expected in the initial message.
server_connection.write_all(b"pong").await.unwrap();
}
}
});
let client = tokio::spawn(async move {
let protos = vec!["/proto2"];
let (proto, mut io) =
dialer_select_proto(client_connection, protos, version).await.unwrap();
assert_eq!(proto, "/proto2");
io.write_all(b"ping").await.unwrap();
io.flush().await.unwrap();
let mut out = vec![0; 32];
let n = io.read(&mut out).await.unwrap();
out.truncate(n);
assert_eq!(out, b"pong");
});
server.await.unwrap();
client.await.unwrap();
}
run(Version::V1).await;
run(Version::V1Lazy).await;
}
#[tokio::test]
async fn v1_low_level_negotiate_multiple_headers() {
let (client_connection, mut server_connection) = futures_ringbuf::Endpoint::pair(100, 100);
let server = tokio::spawn(async move {
let protos = ["/proto2"];
let multistream = b"/multistream/1.0.0\n";
let len = multistream.len();
let proto = b"/proto2\n";
let proto_len = proto.len();
            // Check that our implementation optimally writes the
            // multistream header and the protocol proposal in a single message.
let mut expected_message = Vec::new();
expected_message.push(len as u8);
expected_message.extend_from_slice(multistream);
expected_message.push(proto_len as u8);
expected_message.extend_from_slice(proto);
let mut out = vec![0; 64];
let n = server_connection.read(&mut out).await.unwrap();
out.truncate(n);
assert_eq!(out, expected_message);
            // We must send back the multistream header.
let mut send_message = Vec::new();
send_message.push(len as u8);
send_message.extend_from_slice(multistream);
server_connection.write_all(&mut send_message).await.unwrap();
            // Send the multistream header a second time; the dialer must
            // treat the repeated header as a protocol error.
let mut send_message = Vec::new();
send_message.push(len as u8);
send_message.extend_from_slice(multistream);
server_connection.write_all(&mut send_message).await.unwrap();
});
let client = tokio::spawn(async move {
let protos = vec!["/proto2"];
            // Negotiation fails because the dialer receives the
            // `/multistream/1.0.0` header multiple times.
let result =
dialer_select_proto(client_connection, protos, Version::V1).await.unwrap_err();
match result {
NegotiationError::ProtocolError(ProtocolError::InvalidMessage) => {}
_ => panic!("unexpected error: {:?}", result),
};
});
server.await.unwrap();
client.await.unwrap();
}
#[tokio::test]
async fn v1_lazy_low_level_negotiate_multiple_headers() {
let (client_connection, mut server_connection) = futures_ringbuf::Endpoint::pair(100, 100);
let server = tokio::spawn(async move {
let protos = ["/proto2"];
let multistream = b"/multistream/1.0.0\n";
let len = multistream.len();
let proto = b"/proto2\n";
let proto_len = proto.len();
            // Check that our implementation optimally writes the
            // multistream header and the protocol proposal in a single message.
let mut expected_message = Vec::new();
expected_message.push(len as u8);
expected_message.extend_from_slice(multistream);
expected_message.push(proto_len as u8);
expected_message.extend_from_slice(proto);
let mut out = vec![0; 64];
let n = server_connection.read(&mut out).await.unwrap();
out.truncate(n);
assert_eq!(out, expected_message);
            // We must send back the multistream header.
let mut send_message = Vec::new();
send_message.push(len as u8);
send_message.extend_from_slice(multistream);
server_connection.write_all(&mut send_message).await.unwrap();
            // Send the multistream header a second time; the dialer must
            // treat the repeated header as a protocol error.
let mut send_message = Vec::new();
send_message.push(len as u8);
send_message.extend_from_slice(multistream);
server_connection.write_all(&mut send_message).await.unwrap();
});
let client = tokio::spawn(async move {
let protos = vec!["/proto2"];
            // Completing the negotiation fails because the dialer receives
            // the `/multistream/1.0.0` header multiple times.
            let (proto, to_negotiate) =
                dialer_select_proto(client_connection, protos, Version::V1Lazy).await.unwrap();
            assert_eq!(proto, "/proto2");
            let result = to_negotiate.complete().await.unwrap_err();
match result {
NegotiationError::ProtocolError(ProtocolError::InvalidMessage) => {}
_ => panic!("unexpected error: {:?}", result),
};
});
server.await.unwrap();
client.await.unwrap();
}
#[test]
fn propose() {
let (mut dialer_state, message) =
WebRtcDialerState::propose(ProtocolName::from("/13371338/proto/1"), vec![]).unwrap();
let mut bytes = BytesMut::with_capacity(32);
bytes.put_u8(MSG_MULTISTREAM_1_0.len() as u8);
let _ = Message::Header(HeaderLine::V1).encode(&mut bytes).unwrap();
let proto = Protocol::try_from(&b"/13371338/proto/1"[..]).expect("valid protocol name");
bytes.put_u8((proto.as_ref().len() + 1) as u8); // + 1 for \n
let _ = Message::Protocol(proto).encode(&mut bytes).unwrap();
let expected_message = bytes.freeze().to_vec();
assert_eq!(message, expected_message);
}
#[test]
fn propose_with_fallback() {
let (mut dialer_state, message) = WebRtcDialerState::propose(
ProtocolName::from("/13371338/proto/1"),
vec![ProtocolName::from("/sup/proto/1")],
)
.unwrap();
let mut bytes = BytesMut::with_capacity(32);
bytes.put_u8(MSG_MULTISTREAM_1_0.len() as u8);
let _ = Message::Header(HeaderLine::V1).encode(&mut bytes).unwrap();
let proto1 = Protocol::try_from(&b"/13371338/proto/1"[..]).expect("valid protocol name");
bytes.put_u8((proto1.as_ref().len() + 1) as u8); // + 1 for \n
let _ = Message::Protocol(proto1).encode(&mut bytes).unwrap();
let proto2 = Protocol::try_from(&b"/sup/proto/1"[..]).expect("valid protocol name");
bytes.put_u8((proto2.as_ref().len() + 1) as u8); // + 1 for \n
let _ = Message::Protocol(proto2).encode(&mut bytes).unwrap();
let expected_message = bytes.freeze().to_vec();
assert_eq!(message, expected_message);
}
#[test]
fn register_response_header_only() {
let mut bytes = BytesMut::with_capacity(32);
bytes.put_u8(MSG_MULTISTREAM_1_0.len() as u8);
let message = Message::Header(HeaderLine::V1);
message.encode(&mut bytes).map_err(|_| Error::InvalidData).unwrap();
let (mut dialer_state, _message) =
WebRtcDialerState::propose(ProtocolName::from("/13371338/proto/1"), vec![]).unwrap();
match dialer_state.register_response(bytes.freeze().to_vec()) {
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | true |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/multistream_select/protocol.rs | src/multistream_select/protocol.rs | // Copyright 2017 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Multistream-select protocol messages and I/O operations for
//! constructing protocol negotiation flows.
//!
//! A protocol negotiation flow is constructed by using the
//! `Stream` and `Sink` implementations of `MessageIO` and
//! `MessageReader`.
use crate::{
codec::unsigned_varint::UnsignedVarint,
error::Error as Litep2pError,
multistream_select::{
length_delimited::{LengthDelimited, LengthDelimitedReader},
Version,
},
};
use bytes::{BufMut, Bytes, BytesMut};
use futures::{io::IoSlice, prelude::*, ready};
use std::{
convert::TryFrom,
error::Error,
fmt, io,
pin::Pin,
task::{Context, Poll},
};
use unsigned_varint as uvi;
/// The maximum number of supported protocols that can be processed.
const MAX_PROTOCOLS: usize = 1000;
/// The encoded form of a multistream-select 1.0.0 header message.
pub const MSG_MULTISTREAM_1_0: &[u8] = b"/multistream/1.0.0\n";
/// The encoded form of a multistream-select 'na' message.
const MSG_PROTOCOL_NA: &[u8] = b"na\n";
/// The encoded form of a multistream-select 'ls' message.
const MSG_LS: &[u8] = b"ls\n";
/// A Protocol instance for the `/multistream/1.0.0` header line.
pub const PROTO_MULTISTREAM_1_0: Protocol = Protocol(Bytes::from_static(b"/multistream/1.0.0"));
/// Logging target.
const LOG_TARGET: &str = "litep2p::multistream-select";
/// The multistream-select header lines preceding negotiation.
///
/// Every [`Version`] has a corresponding header line.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum HeaderLine {
/// The `/multistream/1.0.0` header line.
V1,
}
impl From<Version> for HeaderLine {
fn from(v: Version) -> HeaderLine {
match v {
Version::V1 | Version::V1Lazy => HeaderLine::V1,
}
}
}
/// A protocol (name) exchanged during protocol negotiation.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Protocol(Bytes);
impl AsRef<[u8]> for Protocol {
fn as_ref(&self) -> &[u8] {
self.0.as_ref()
}
}
impl TryFrom<Bytes> for Protocol {
type Error = ProtocolError;
fn try_from(value: Bytes) -> Result<Self, Self::Error> {
if !value.as_ref().starts_with(b"/") {
return Err(ProtocolError::InvalidProtocol);
}
Ok(Protocol(value))
}
}
impl TryFrom<&[u8]> for Protocol {
type Error = ProtocolError;
fn try_from(value: &[u8]) -> Result<Self, Self::Error> {
Self::try_from(Bytes::copy_from_slice(value))
}
}
impl fmt::Display for Protocol {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", String::from_utf8_lossy(&self.0))
}
}
/// A multistream-select protocol message.
///
/// Multistream-select protocol messages are exchanged with the goal
/// of agreeing on an application-layer protocol to use on an I/O stream.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Message {
/// A header message identifies the multistream-select protocol
/// that the sender wishes to speak.
Header(HeaderLine),
/// A protocol message identifies a protocol request or acknowledgement.
Protocol(Protocol),
/// A message through which a peer requests the complete list of
/// supported protocols from the remote.
ListProtocols,
/// A message listing all supported protocols of a peer.
Protocols(Vec<Protocol>),
/// A message signaling that a requested protocol is not available.
NotAvailable,
}
impl Message {
/// Encodes a `Message` into its byte representation.
pub fn encode(&self, dest: &mut BytesMut) -> Result<(), ProtocolError> {
match self {
Message::Header(HeaderLine::V1) => {
dest.reserve(MSG_MULTISTREAM_1_0.len());
dest.put(MSG_MULTISTREAM_1_0);
Ok(())
}
Message::Protocol(p) => {
let len = p.0.as_ref().len() + 1; // + 1 for \n
dest.reserve(len);
dest.put(p.0.as_ref());
dest.put_u8(b'\n');
Ok(())
}
Message::ListProtocols => {
dest.reserve(MSG_LS.len());
dest.put(MSG_LS);
Ok(())
}
Message::Protocols(ps) => {
let mut buf = uvi::encode::usize_buffer();
let mut encoded = Vec::with_capacity(ps.len());
for p in ps {
encoded.extend(uvi::encode::usize(p.0.as_ref().len() + 1, &mut buf)); // +1 for '\n'
encoded.extend_from_slice(p.0.as_ref());
encoded.push(b'\n')
}
encoded.push(b'\n');
dest.reserve(encoded.len());
dest.put(encoded.as_ref());
Ok(())
}
Message::NotAvailable => {
dest.reserve(MSG_PROTOCOL_NA.len());
dest.put(MSG_PROTOCOL_NA);
Ok(())
}
}
}
/// Decodes a `Message` from its byte representation.
pub fn decode(mut msg: Bytes) -> Result<Message, ProtocolError> {
if msg == MSG_MULTISTREAM_1_0 {
return Ok(Message::Header(HeaderLine::V1));
}
if msg == MSG_PROTOCOL_NA {
return Ok(Message::NotAvailable);
}
if msg == MSG_LS {
return Ok(Message::ListProtocols);
}
        // If it starts with a `/` and ends with a line feed, without any
        // other line feeds in-between, it must be a protocol name.
if msg.first() == Some(&b'/')
&& msg.last() == Some(&b'\n')
&& !msg[..msg.len() - 1].contains(&b'\n')
{
let p = Protocol::try_from(msg.split_to(msg.len() - 1))?;
return Ok(Message::Protocol(p));
}
// At this point, it must be an `ls` response, i.e. one or more
// length-prefixed, newline-delimited protocol names.
let mut protocols = Vec::new();
let mut remaining: &[u8] = &msg;
loop {
// A well-formed message must be terminated with a newline.
if remaining == [b'\n'] {
break;
} else if protocols.len() == MAX_PROTOCOLS {
return Err(ProtocolError::TooManyProtocols);
}
// Decode the length of the next protocol name and check that
// it ends with a line feed.
let (len, tail) = uvi::decode::usize(remaining)?;
if len == 0 || len > tail.len() || tail[len - 1] != b'\n' {
return Err(ProtocolError::InvalidMessage);
}
// Parse the protocol name.
let p = Protocol::try_from(Bytes::copy_from_slice(&tail[..len - 1]))?;
protocols.push(p);
// Skip ahead to the next protocol.
remaining = &tail[len..];
}
Ok(Message::Protocols(protocols))
}
}
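The `ls`-response branch of `Message::decode` can be illustrated with a standalone parser (the `parse_ls` helper is a hypothetical sketch, not this crate's API, and assumes single-byte varint lengths): each protocol name is length-prefixed and newline-terminated, and a final bare `\n` terminates the whole message.

```rust
// Standalone sketch of parsing an `ls` response: one or more
// varint-length-prefixed, newline-terminated protocol names, with a final
// bare '\n' terminating the whole message. Assumes single-byte lengths.
fn parse_ls(mut remaining: &[u8]) -> Option<Vec<String>> {
    let mut protocols = Vec::new();
    loop {
        // A well-formed message ends with a single bare newline.
        if remaining == [b'\n'] {
            return Some(protocols);
        }
        // Single-byte length prefix; the entry must end with '\n'.
        let (&len, tail) = remaining.split_first()?;
        let len = len as usize;
        if len == 0 || len > tail.len() || tail[len - 1] != b'\n' {
            return None;
        }
        protocols.push(String::from_utf8(tail[..len - 1].to_vec()).ok()?);
        remaining = &tail[len..];
    }
}

fn main() {
    let msg = b"\x08/proto1\n\x08/proto2\n\n";
    assert_eq!(
        parse_ls(msg),
        Some(vec!["/proto1".into(), "/proto2".into()])
    );
}
```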
/// Create `multistream-select` message from an iterator of `Message`s.
///
/// # Note
///
/// This implementation may not be compliant with the multistream-select protocol spec.
/// Its only purpose is to get the `multistream-select` protocol working with smoldot.
pub fn webrtc_encode_multistream_message(
messages: impl IntoIterator<Item = Message>,
) -> crate::Result<BytesMut> {
    // encode the `/multistream/1.0.0` header
let mut bytes = BytesMut::with_capacity(32);
let message = Message::Header(HeaderLine::V1);
message.encode(&mut bytes).map_err(|_| Litep2pError::InvalidData)?;
let mut header = UnsignedVarint::encode(bytes)?;
// encode each message
for message in messages {
let mut proto_bytes = BytesMut::with_capacity(256);
message.encode(&mut proto_bytes).map_err(|_| Litep2pError::InvalidData)?;
let mut proto_bytes = UnsignedVarint::encode(proto_bytes)?;
header.append(&mut proto_bytes);
}
Ok(BytesMut::from(&header[..]))
}
/// A `MessageIO` implements a [`Stream`] and [`Sink`] of [`Message`]s.
#[pin_project::pin_project]
pub struct MessageIO<R> {
#[pin]
inner: LengthDelimited<R>,
}
impl<R> MessageIO<R> {
/// Constructs a new `MessageIO` resource wrapping the given I/O stream.
pub fn new(inner: R) -> MessageIO<R>
where
R: AsyncRead + AsyncWrite,
{
Self {
inner: LengthDelimited::new(inner),
}
}
/// Converts the [`MessageIO`] into a [`MessageReader`], dropping the
/// [`Message`]-oriented `Sink` in favour of direct `AsyncWrite` access
/// to the underlying I/O stream.
///
/// This is typically done if further negotiation messages are expected to be
/// received but no more messages are written, allowing the writing of
/// follow-up protocol data to commence.
pub fn into_reader(self) -> MessageReader<R> {
MessageReader {
inner: self.inner.into_reader(),
}
}
/// Drops the [`MessageIO`] resource, yielding the underlying I/O stream.
///
/// # Panics
///
/// Panics if the read buffer or write buffer is not empty, meaning that an incoming
/// protocol negotiation frame has been partially read or an outgoing frame
/// has not yet been flushed. The read buffer is guaranteed to be empty whenever
/// `MessageIO::poll` returned a message. The write buffer is guaranteed to be empty
/// when the sink has been flushed.
pub fn into_inner(self) -> R {
self.inner.into_inner()
}
}
impl<R> Sink<Message> for MessageIO<R>
where
R: AsyncWrite,
{
type Error = ProtocolError;
fn poll_ready(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self.project().inner.poll_ready(cx).map_err(From::from)
}
fn start_send(self: Pin<&mut Self>, item: Message) -> Result<(), Self::Error> {
let mut buf = BytesMut::new();
item.encode(&mut buf)?;
self.project().inner.start_send(buf.freeze()).map_err(From::from)
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self.project().inner.poll_flush(cx).map_err(From::from)
}
fn poll_close(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self.project().inner.poll_close(cx).map_err(From::from)
}
}
impl<R> Stream for MessageIO<R>
where
R: AsyncRead,
{
type Item = Result<Message, ProtocolError>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
match poll_stream(self.project().inner, cx) {
Poll::Pending => Poll::Pending,
Poll::Ready(None) => Poll::Ready(None),
Poll::Ready(Some(Ok(m))) => Poll::Ready(Some(Ok(m))),
Poll::Ready(Some(Err(err))) => Poll::Ready(Some(Err(err))),
}
}
}
/// A `MessageReader` implements a `Stream` of `Message`s on an underlying
/// I/O resource combined with direct `AsyncWrite` access.
#[pin_project::pin_project]
#[derive(Debug)]
pub struct MessageReader<R> {
#[pin]
inner: LengthDelimitedReader<R>,
}
impl<R> MessageReader<R> {
    /// Drops the `MessageReader` resource, yielding the underlying I/O stream.
///
/// # Panics
///
/// Panics if the read buffer or write buffer is not empty, meaning that either
/// an incoming protocol negotiation frame has been partially read, or an
/// outgoing frame has not yet been flushed. The read buffer is guaranteed to
/// be empty whenever `MessageReader::poll` returned a message. The write
/// buffer is guaranteed to be empty whenever the sink has been flushed.
pub fn into_inner(self) -> R {
self.inner.into_inner()
}
}
impl<R> Stream for MessageReader<R>
where
R: AsyncRead,
{
type Item = Result<Message, ProtocolError>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
poll_stream(self.project().inner, cx)
}
}
impl<TInner> AsyncWrite for MessageReader<TInner>
where
TInner: AsyncWrite,
{
fn poll_write(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<Result<usize, io::Error>> {
self.project().inner.poll_write(cx, buf)
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), io::Error>> {
self.project().inner.poll_flush(cx)
}
fn poll_close(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), io::Error>> {
self.project().inner.poll_close(cx)
}
fn poll_write_vectored(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
bufs: &[IoSlice<'_>],
) -> Poll<Result<usize, io::Error>> {
self.project().inner.poll_write_vectored(cx, bufs)
}
}
fn poll_stream<S>(
stream: Pin<&mut S>,
cx: &mut Context<'_>,
) -> Poll<Option<Result<Message, ProtocolError>>>
where
S: Stream<Item = Result<Bytes, io::Error>>,
{
let msg = if let Some(msg) = ready!(stream.poll_next(cx)?) {
match Message::decode(msg) {
Ok(m) => m,
Err(err) => return Poll::Ready(Some(Err(err))),
}
} else {
return Poll::Ready(None);
};
tracing::trace!(target: LOG_TARGET, "Received message: {:?}", msg);
Poll::Ready(Some(Ok(msg)))
}
/// A protocol error.
#[derive(Debug, thiserror::Error)]
pub enum ProtocolError {
/// I/O error.
#[error("I/O error: `{0}`")]
IoError(#[from] io::Error),
/// Received an invalid message from the remote.
#[error("Received an invalid message from the remote.")]
InvalidMessage,
/// A protocol (name) is invalid.
#[error("A protocol (name) is invalid.")]
InvalidProtocol,
/// Too many protocols have been returned by the remote.
#[error("Too many protocols have been returned by the remote.")]
TooManyProtocols,
/// The protocol is not supported.
#[error("The protocol is not supported.")]
ProtocolNotSupported,
}
impl PartialEq for ProtocolError {
fn eq(&self, other: &Self) -> bool {
match (self, other) {
(ProtocolError::IoError(lhs), ProtocolError::IoError(rhs)) => lhs.kind() == rhs.kind(),
_ => std::mem::discriminant(self) == std::mem::discriminant(other),
}
}
}
impl From<ProtocolError> for io::Error {
fn from(err: ProtocolError) -> Self {
if let ProtocolError::IoError(e) = err {
return e;
}
io::ErrorKind::InvalidData.into()
}
}
impl From<uvi::decode::Error> for ProtocolError {
fn from(err: uvi::decode::Error) -> ProtocolError {
Self::from(io::Error::new(io::ErrorKind::InvalidData, err.to_string()))
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_decode_main_messages() {
// Decode main messages.
let bytes = Bytes::from_static(MSG_MULTISTREAM_1_0);
assert_eq!(
Message::decode(bytes).unwrap(),
Message::Header(HeaderLine::V1)
);
let bytes = Bytes::from_static(MSG_PROTOCOL_NA);
assert_eq!(Message::decode(bytes).unwrap(), Message::NotAvailable);
let bytes = Bytes::from_static(MSG_LS);
assert_eq!(Message::decode(bytes).unwrap(), Message::ListProtocols);
}
#[test]
fn test_decode_empty_message() {
// An empty message should decode to an `IoError`, not `Message::Protocols`.
let bytes = Bytes::from_static(b"");
match Message::decode(bytes).unwrap_err() {
ProtocolError::IoError(io) => assert_eq!(io.kind(), io::ErrorKind::InvalidData),
err => panic!("Unexpected error: {:?}", err),
};
}
#[test]
fn test_decode_protocols() {
// Single protocol.
let bytes = Bytes::from_static(b"/protocol-v1\n");
assert_eq!(
Message::decode(bytes).unwrap(),
Message::Protocol(Protocol::try_from(Bytes::from_static(b"/protocol-v1")).unwrap())
);
// Multiple protocols.
let expected = Message::Protocols(vec![
Protocol::try_from(Bytes::from_static(b"/protocol-v1")).unwrap(),
Protocol::try_from(Bytes::from_static(b"/protocol-v2")).unwrap(),
]);
let mut encoded = BytesMut::new();
expected.encode(&mut encoded).unwrap();
// `\r` (0x0d = 13) is the varint-encoded length of each protocol name plus its trailing newline.
let bytes = Bytes::from_static(b"\r/protocol-v1\n\r/protocol-v2\n\n");
assert_eq!(encoded, bytes);
assert_eq!(
Message::decode(bytes).unwrap(),
Message::Protocols(vec![
Protocol::try_from(Bytes::from_static(b"/protocol-v1")).unwrap(),
Protocol::try_from(Bytes::from_static(b"/protocol-v2")).unwrap(),
])
);
// Check invalid length.
let bytes = Bytes::from_static(b"\r/v1\n\n");
assert_eq!(
Message::decode(bytes).unwrap_err(),
ProtocolError::InvalidMessage
);
}
}
// File: src/multistream_select/tests/dialer_select.rs

// Copyright 2017 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Integration tests for protocol negotiation.
use async_std::net::{TcpListener, TcpStream};
use futures::prelude::*;
use multistream_select::{dialer_select_proto, listener_select_proto, NegotiationError, Version};
#[test]
fn select_proto_basic() {
async fn run(version: Version) {
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let listener_addr = listener.local_addr().unwrap();
let server = async_std::task::spawn(async move {
let connec = listener.accept().await.unwrap().0;
let protos = vec![b"/proto1", b"/proto2"];
let (proto, mut io) = listener_select_proto(connec, protos).await.unwrap();
assert_eq!(proto, b"/proto2");
let mut out = vec![0; 32];
let n = io.read(&mut out).await.unwrap();
out.truncate(n);
assert_eq!(out, b"ping");
io.write_all(b"pong").await.unwrap();
io.flush().await.unwrap();
});
let client = async_std::task::spawn(async move {
let connec = TcpStream::connect(&listener_addr).await.unwrap();
let protos = vec![b"/proto3", b"/proto2"];
let (proto, mut io) = dialer_select_proto(connec, protos.into_iter(), version)
.await
.unwrap();
assert_eq!(proto, b"/proto2");
io.write_all(b"ping").await.unwrap();
io.flush().await.unwrap();
let mut out = vec![0; 32];
let n = io.read(&mut out).await.unwrap();
out.truncate(n);
assert_eq!(out, b"pong");
});
server.await;
client.await;
}
async_std::task::block_on(run(Version::V1));
async_std::task::block_on(run(Version::V1Lazy));
}
/// Tests the expected behaviour of failed negotiations.
#[test]
fn negotiation_failed() {
let _ = env_logger::try_init();
async fn run(
Test {
version,
listen_protos,
dial_protos,
dial_payload,
}: Test,
) {
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let listener_addr = listener.local_addr().unwrap();
let server = async_std::task::spawn(async move {
let connec = listener.accept().await.unwrap().0;
let io = match listener_select_proto(connec, listen_protos).await {
Ok((_, io)) => io,
Err(NegotiationError::Failed) => return,
Err(NegotiationError::ProtocolError(e)) => {
panic!("Unexpected protocol error {e}")
}
};
match io.complete().await {
Err(NegotiationError::Failed) => {}
_ => panic!(),
}
});
let client = async_std::task::spawn(async move {
let connec = TcpStream::connect(&listener_addr).await.unwrap();
let mut io = match dialer_select_proto(connec, dial_protos.into_iter(), version).await {
Err(NegotiationError::Failed) => return,
Ok((_, io)) => io,
Err(_) => panic!(),
};
// When `V1Lazy` is used, the dialer may write a payload that is
// sent even before it has received confirmation of the last
// proposed protocol.
io.write_all(&dial_payload).await.unwrap();
match io.complete().await {
Err(NegotiationError::Failed) => {}
_ => panic!(),
}
});
server.await;
client.await;
}
/// Parameters for a single test run.
#[derive(Clone)]
struct Test {
version: Version,
listen_protos: Vec<&'static str>,
dial_protos: Vec<&'static str>,
dial_payload: Vec<u8>,
}
// Disjunct combinations of listen and dial protocols to test.
//
// The choices here cover the main distinction between a single
// and multiple protocols.
let protos = vec![
(vec!["/proto1"], vec!["/proto2"]),
(vec!["/proto1", "/proto2"], vec!["/proto3", "/proto4"]),
];
// The payloads that the dialer sends after "successful" negotiation,
// which may be sent even before the dialer got protocol confirmation
// when `V1Lazy` is used.
//
// The choices here cover the specific situations that can arise with
// `V1Lazy` and which must nevertheless behave identically to `V1` w.r.t.
// the outcome of the negotiation.
let payloads = vec![
// No payload, in which case all versions should behave identically
// in any case, i.e. the baseline test.
vec![],
// With this payload and `V1Lazy`, the listener interprets the first
// `1` as a message length and encounters an invalid message (the
// second `1`). The listener is nevertheless expected to fail
// negotiation normally, just like with `V1`.
vec![1, 1],
// With this payload and `V1Lazy`, the listener interprets the first
// `42` as a message length and encounters unexpected EOF trying to
// read a message of that length. The listener is nevertheless expected
// to fail negotiation normally, just like with `V1`.
vec![42, 1],
];
for (listen_protos, dial_protos) in protos {
for dial_payload in payloads.clone() {
for &version in &[Version::V1, Version::V1Lazy] {
async_std::task::block_on(run(Test {
version,
listen_protos: listen_protos.clone(),
dial_protos: dial_protos.clone(),
dial_payload: dial_payload.clone(),
}))
}
}
}
}
// File: src/multistream_select/tests/transport.rs

// Copyright 2020 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use futures::{channel::oneshot, prelude::*, ready};
use libp2p_core::{
multiaddr::Protocol,
muxing::StreamMuxerBox,
transport::{self, MemoryTransport},
upgrade, Multiaddr, Transport,
};
use libp2p_identity as identity;
use libp2p_identity::PeerId;
use libp2p_mplex::MplexConfig;
use libp2p_plaintext::PlainText2Config;
use libp2p_swarm::{dummy, SwarmBuilder, SwarmEvent};
use rand::random;
use std::task::Poll;
type TestTransport = transport::Boxed<(PeerId, StreamMuxerBox)>;
fn mk_transport(up: upgrade::Version) -> (PeerId, TestTransport) {
let keys = identity::Keypair::generate_ed25519();
let id = keys.public().to_peer_id();
(
id,
MemoryTransport::default()
.upgrade(up)
.authenticate(PlainText2Config {
local_public_key: keys.public(),
})
.multiplex(MplexConfig::default())
.boxed(),
)
}
/// Tests the transport upgrade process with all supported
/// upgrade protocol versions.
#[test]
fn transport_upgrade() {
let _ = env_logger::try_init();
fn run(up: upgrade::Version) {
let (dialer_id, dialer_transport) = mk_transport(up);
let (listener_id, listener_transport) = mk_transport(up);
let listen_addr = Multiaddr::from(Protocol::Memory(random::<u64>()));
let mut dialer =
SwarmBuilder::with_async_std_executor(dialer_transport, dummy::Behaviour, dialer_id)
.build();
let mut listener = SwarmBuilder::with_async_std_executor(
listener_transport,
dummy::Behaviour,
listener_id,
)
.build();
listener.listen_on(listen_addr).unwrap();
let (addr_sender, addr_receiver) = oneshot::channel();
let client = async move {
let addr = addr_receiver.await.unwrap();
dialer.dial(addr).unwrap();
futures::future::poll_fn(move |cx| loop {
if let SwarmEvent::ConnectionEstablished { .. } =
ready!(dialer.poll_next_unpin(cx)).unwrap()
{
return Poll::Ready(());
}
})
.await
};
let mut addr_sender = Some(addr_sender);
let server = futures::future::poll_fn(move |cx| loop {
match ready!(listener.poll_next_unpin(cx)).unwrap() {
SwarmEvent::NewListenAddr { address, .. } => {
addr_sender.take().unwrap().send(address).unwrap();
}
SwarmEvent::IncomingConnection { .. } => {}
SwarmEvent::ConnectionEstablished { .. } => return Poll::Ready(()),
_ => {}
}
});
async_std::task::block_on(future::select(Box::pin(server), Box::pin(client)));
}
run(upgrade::Version::V1);
run(upgrade::Version::V1Lazy);
}
// File: src/protocol/protocol_set.rs

// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
codec::ProtocolCodec,
error::{Error, NegotiationError, SubstreamError},
multistream_select::{
NegotiationError as MultiStreamNegotiationError, ProtocolError as MultiStreamProtocolError,
},
protocol::{
connection::{ConnectionHandle, Permit},
Direction, TransportEvent,
},
substream::Substream,
transport::{
manager::{ProtocolContext, TransportManagerEvent},
Endpoint,
},
types::{protocol::ProtocolName, ConnectionId, SubstreamId},
PeerId,
};
use futures::{stream::FuturesUnordered, Stream, StreamExt};
use multiaddr::Multiaddr;
use tokio::sync::mpsc::{channel, Receiver, Sender};
#[cfg(any(feature = "quic", feature = "webrtc", feature = "websocket"))]
use std::sync::atomic::Ordering;
use std::{
collections::HashMap,
fmt::Debug,
pin::Pin,
sync::{atomic::AtomicUsize, Arc},
task::{Context, Poll},
};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::protocol-set";
/// Events emitted by the underlying transport protocols.
#[derive(Debug)]
pub enum InnerTransportEvent {
/// Connection established to `peer`.
ConnectionEstablished {
/// Peer ID.
peer: PeerId,
/// Connection ID.
connection: ConnectionId,
/// Endpoint.
endpoint: Endpoint,
/// Handle for communicating with the connection.
sender: ConnectionHandle,
},
/// Connection closed.
ConnectionClosed {
/// Peer ID.
peer: PeerId,
/// Connection ID.
connection: ConnectionId,
},
/// Failed to dial peer.
///
/// This is reported to the protocol that initiated the connection.
DialFailure {
/// Peer ID.
peer: PeerId,
/// Dialed addresses.
addresses: Vec<Multiaddr>,
},
/// Substream opened for `peer`.
SubstreamOpened {
/// Peer ID.
peer: PeerId,
/// Protocol name.
///
/// One protocol handler may handle multiple sub-protocols (such as `/ipfs/identify/1.0.0`
/// and `/ipfs/identify/push/1.0.0`) or it may have aliases which should be handled by
/// the same protocol handler. When the substream is sent from transport to the protocol
/// handler, the protocol name that was used to negotiate the substream is also sent so
/// the protocol can handle the substream appropriately.
protocol: ProtocolName,
/// Fallback name.
///
/// If the substream was negotiated using a fallback name of the main protocol,
/// `fallback` is `Some`.
fallback: Option<ProtocolName>,
/// Substream direction.
///
/// Informs the protocol whether the substream is inbound (opened by the remote node)
/// or outbound (opened by the local node). This allows the protocol to distinguish
/// between the two types of substreams and execute correct code for the substream.
///
/// Outbound substreams also contain the substream ID which allows the protocol to
/// distinguish between different outbound substreams.
direction: Direction,
/// Connection ID.
connection_id: ConnectionId,
/// Substream.
substream: Substream,
},
/// Failed to open substream.
///
/// Substream open failures are reported only for outbound substreams.
SubstreamOpenFailure {
/// Substream ID.
substream: SubstreamId,
/// Error that occurred when the substream was being opened.
error: SubstreamError,
},
}
impl From<InnerTransportEvent> for TransportEvent {
fn from(event: InnerTransportEvent) -> Self {
match event {
InnerTransportEvent::DialFailure { peer, addresses } =>
TransportEvent::DialFailure { peer, addresses },
InnerTransportEvent::SubstreamOpened {
peer,
protocol,
fallback,
direction,
substream,
..
} => TransportEvent::SubstreamOpened {
peer,
protocol,
fallback,
direction,
substream,
},
InnerTransportEvent::SubstreamOpenFailure { substream, error } =>
TransportEvent::SubstreamOpenFailure { substream, error },
event => panic!("cannot convert {event:?}"),
}
}
}
/// Events emitted by the installed protocols to transport.
#[derive(Debug, Clone)]
pub enum ProtocolCommand {
/// Open substream.
OpenSubstream {
/// Protocol name.
protocol: ProtocolName,
/// Fallback names.
///
/// If the protocol has changed its name but wishes to support the old name(s), it must
/// provide the old protocol names in `fallback_names`. These are fed into
/// `multistream-select`, which then attempts to negotiate a protocol for the substream
/// using one of the provided names. If the substream is negotiated successfully, it will
/// report back the actual protocol name that was negotiated, in case the protocol
/// needs to deal with the old version of the protocol in a different way compared to
/// the new version.
fallback_names: Vec<ProtocolName>,
/// Substream ID.
///
/// Protocol allocates an ephemeral ID for outbound substreams which allows it to track
/// the state of its pending substream. The ID is given back to protocol in
/// [`TransportEvent::SubstreamOpened`]/[`TransportEvent::SubstreamOpenFailure`].
///
/// This allows the protocol to distinguish inbound substreams from outbound substreams
/// and associate incoming substreams with whatever logic it has.
substream_id: SubstreamId,
/// Connection ID.
connection_id: ConnectionId,
/// Connection permit.
///
/// `Permit` allows the connection to be kept open while the permit is held, and it is given
/// to the substream to hold once it has been opened. When the substream is dropped, the
/// permit is dropped and the connection may be closed if no other permit is being
/// held.
permit: Permit,
},
/// Forcibly close the connection, even if other protocols have substreams open over it.
ForceClose,
}
/// Supported protocol information.
///
/// Each connection gets a copy of [`ProtocolSet`] which allows it to interact
/// directly with installed protocols.
pub struct ProtocolSet {
/// Installed protocols.
pub(crate) protocols: HashMap<ProtocolName, ProtocolContext>,
mgr_tx: Sender<TransportManagerEvent>,
connection: ConnectionHandle,
rx: Receiver<ProtocolCommand>,
#[allow(unused)]
next_substream_id: Arc<AtomicUsize>,
fallback_names: HashMap<ProtocolName, ProtocolName>,
}
impl ProtocolSet {
pub fn new(
connection_id: ConnectionId,
mgr_tx: Sender<TransportManagerEvent>,
next_substream_id: Arc<AtomicUsize>,
protocols: HashMap<ProtocolName, ProtocolContext>,
) -> Self {
let (tx, rx) = channel(256);
let fallback_names = protocols
.iter()
.flat_map(|(protocol, context)| {
context
.fallback_names
.iter()
.map(|fallback| (fallback.clone(), protocol.clone()))
.collect::<HashMap<_, _>>()
})
.collect();
ProtocolSet {
rx,
mgr_tx,
protocols,
next_substream_id,
fallback_names,
connection: ConnectionHandle::new(connection_id, tx),
}
}
/// Try to acquire permit to keep the connection open.
pub fn try_get_permit(&mut self) -> Option<Permit> {
self.connection.try_get_permit()
}
/// Get next substream ID.
#[cfg(any(feature = "quic", feature = "webrtc", feature = "websocket"))]
pub fn next_substream_id(&self) -> SubstreamId {
SubstreamId::from(self.next_substream_id.fetch_add(1usize, Ordering::Relaxed))
}
/// Get the list of all supported protocols.
pub fn protocols(&self) -> Vec<ProtocolName> {
self.protocols
.keys()
.cloned()
.chain(self.fallback_names.keys().cloned())
.collect()
}
/// Report to `protocol` that substream was opened for `peer`.
pub async fn report_substream_open(
&mut self,
peer: PeerId,
protocol: ProtocolName,
direction: Direction,
substream: Substream,
) -> Result<(), SubstreamError> {
tracing::debug!(target: LOG_TARGET, %protocol, ?peer, ?direction, "substream opened");
let (protocol, fallback) = match self.fallback_names.get(&protocol) {
Some(main_protocol) => (main_protocol.clone(), Some(protocol)),
None => (protocol, None),
};
let Some(protocol_context) = self.protocols.get(&protocol) else {
return Err(NegotiationError::MultistreamSelectError(
MultiStreamNegotiationError::ProtocolError(
MultiStreamProtocolError::ProtocolNotSupported,
),
)
.into());
};
let event = InnerTransportEvent::SubstreamOpened {
peer,
protocol: protocol.clone(),
fallback,
direction,
substream,
connection_id: *self.connection.connection_id(),
};
protocol_context
.tx
.send(event)
.await
.map_err(|_| SubstreamError::ConnectionClosed)
}
/// Get codec used by the protocol.
pub fn protocol_codec(&self, protocol: &ProtocolName) -> ProtocolCodec {
// NOTE: `protocol` must exist in `self.protocols` as it was negotiated
// using the protocols from this set.
self.protocols
.get(self.fallback_names.get(protocol).unwrap_or(protocol))
.expect("protocol to exist")
.codec
}
/// Report to `protocol` that connection failed to open substream for `peer`.
pub async fn report_substream_open_failure(
&mut self,
protocol: ProtocolName,
substream: SubstreamId,
error: SubstreamError,
) -> crate::Result<()> {
tracing::debug!(
target: LOG_TARGET,
%protocol,
?substream,
?error,
"failed to open substream",
);
self.protocols
.get_mut(&protocol)
.ok_or(Error::ProtocolNotSupported(protocol.to_string()))?
.tx
.send(InnerTransportEvent::SubstreamOpenFailure { substream, error })
.await
.map_err(From::from)
}
/// Report to protocols that a connection was established.
pub(crate) async fn report_connection_established(
&mut self,
peer: PeerId,
endpoint: Endpoint,
) -> crate::Result<()> {
let connection_handle = self.connection.downgrade();
let mut futures = self
.protocols
.values()
.map(|sender| {
let endpoint = endpoint.clone();
let connection_handle = connection_handle.clone();
async move {
sender
.tx
.send(InnerTransportEvent::ConnectionEstablished {
peer,
connection: endpoint.connection_id(),
endpoint,
sender: connection_handle,
})
.await
}
})
.collect::<FuturesUnordered<_>>();
while !futures.is_empty() {
if let Some(Err(error)) = futures.next().await {
return Err(error.into());
}
}
Ok(())
}
/// Report to protocols that a connection was closed.
pub(crate) async fn report_connection_closed(
&mut self,
peer: PeerId,
connection_id: ConnectionId,
) -> crate::Result<()> {
let mut futures = self
.protocols
.values()
.map(|sender| async move {
sender
.tx
.send(InnerTransportEvent::ConnectionClosed {
peer,
connection: connection_id,
})
.await
})
.collect::<FuturesUnordered<_>>();
while !futures.is_empty() {
if let Some(Err(error)) = futures.next().await {
return Err(error.into());
}
}
self.mgr_tx
.send(TransportManagerEvent::ConnectionClosed {
peer,
connection: connection_id,
})
.await
.map_err(From::from)
}
}
impl Stream for ProtocolSet {
type Item = ProtocolCommand;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
self.rx.poll_recv(cx)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::mock::substream::MockSubstream;
use std::collections::HashSet;
#[tokio::test]
async fn fallback_is_provided() {
let (tx, _rx) = channel(64);
let (tx1, _rx1) = channel(64);
let mut protocol_set = ProtocolSet::new(
ConnectionId::from(0usize),
tx,
Default::default(),
HashMap::from_iter([(
ProtocolName::from("/notif/1"),
ProtocolContext {
tx: tx1,
codec: ProtocolCodec::Identity(32),
fallback_names: vec![
ProtocolName::from("/notif/1/fallback/1"),
ProtocolName::from("/notif/1/fallback/2"),
],
},
)]),
);
let expected_protocols = HashSet::from([
ProtocolName::from("/notif/1"),
ProtocolName::from("/notif/1/fallback/1"),
ProtocolName::from("/notif/1/fallback/2"),
]);
for protocol in protocol_set.protocols().iter() {
assert!(expected_protocols.contains(protocol));
}
protocol_set
.report_substream_open(
PeerId::random(),
ProtocolName::from("/notif/1/fallback/2"),
Direction::Inbound,
Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(MockSubstream::new()),
),
)
.await
.unwrap();
}
#[tokio::test]
async fn main_protocol_reported_if_main_protocol_negotiated() {
let (tx, _rx) = channel(64);
let (tx1, mut rx1) = channel(64);
let mut protocol_set = ProtocolSet::new(
ConnectionId::from(0usize),
tx,
Default::default(),
HashMap::from_iter([(
ProtocolName::from("/notif/1"),
ProtocolContext {
tx: tx1,
codec: ProtocolCodec::Identity(32),
fallback_names: vec![
ProtocolName::from("/notif/1/fallback/1"),
ProtocolName::from("/notif/1/fallback/2"),
],
},
)]),
);
protocol_set
.report_substream_open(
PeerId::random(),
ProtocolName::from("/notif/1"),
Direction::Inbound,
Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(MockSubstream::new()),
),
)
.await
.unwrap();
match rx1.recv().await.unwrap() {
InnerTransportEvent::SubstreamOpened {
protocol, fallback, ..
} => {
assert!(fallback.is_none());
assert_eq!(protocol, ProtocolName::from("/notif/1"));
}
_ => panic!("invalid event received"),
}
}
#[tokio::test]
async fn fallback_is_reported_to_protocol() {
let (tx, _rx) = channel(64);
let (tx1, mut rx1) = channel(64);
let mut protocol_set = ProtocolSet::new(
ConnectionId::from(0usize),
tx,
Default::default(),
HashMap::from_iter([(
ProtocolName::from("/notif/1"),
ProtocolContext {
tx: tx1,
codec: ProtocolCodec::Identity(32),
fallback_names: vec![
ProtocolName::from("/notif/1/fallback/1"),
ProtocolName::from("/notif/1/fallback/2"),
],
},
)]),
);
protocol_set
.report_substream_open(
PeerId::random(),
ProtocolName::from("/notif/1/fallback/2"),
Direction::Inbound,
Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(MockSubstream::new()),
),
)
.await
.unwrap();
match rx1.recv().await.unwrap() {
InnerTransportEvent::SubstreamOpened {
protocol, fallback, ..
} => {
assert_eq!(fallback, Some(ProtocolName::from("/notif/1/fallback/2")));
assert_eq!(protocol, ProtocolName::from("/notif/1"));
}
_ => panic!("invalid event received"),
}
}
}
// File: src/protocol/connection.rs

// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Connection-related helper code.
use crate::{
error::{Error, SubstreamError},
protocol::protocol_set::ProtocolCommand,
types::{protocol::ProtocolName, ConnectionId, SubstreamId},
};
use tokio::sync::mpsc::{error::TrySendError, Sender, WeakSender};
/// Connection type, from the point of view of the protocol.
#[derive(Debug, Clone)]
enum ConnectionType {
/// Connection is actively kept open.
Active(Sender<ProtocolCommand>),
/// Connection is considered inactive as far as the protocol is concerned;
/// if no substreams are being opened and no protocol is interested in
/// keeping the connection open, it will be closed.
Inactive(WeakSender<ProtocolCommand>),
}
/// Type representing a handle to connection which allows protocols to communicate with the
/// connection.
#[derive(Debug, Clone)]
pub struct ConnectionHandle {
/// Connection type.
connection: ConnectionType,
/// Connection ID.
connection_id: ConnectionId,
}
impl ConnectionHandle {
/// Create new [`ConnectionHandle`].
///
/// By default the connection is set as `Active` to give protocols time to open a substream if
/// they wish.
pub fn new(connection_id: ConnectionId, connection: Sender<ProtocolCommand>) -> Self {
Self {
connection_id,
connection: ConnectionType::Active(connection),
}
}
/// Get active sender from the [`ConnectionHandle`] and then downgrade it to an inactive
/// connection.
///
/// This function is called only once, when the connection to the remote peer is established,
/// and at that point the connection type must be `Active`, unless there is a logic bug in `litep2p`.
pub fn downgrade(&mut self) -> Self {
match &self.connection {
ConnectionType::Active(connection) => {
let handle = Self::new(self.connection_id, connection.clone());
self.connection = ConnectionType::Inactive(connection.downgrade());
handle
}
ConnectionType::Inactive(_) => {
panic!("state mismatch: tried to downgrade an inactive connection")
}
}
}
/// Get reference to connection ID.
pub fn connection_id(&self) -> &ConnectionId {
&self.connection_id
}
/// Mark connection as closed.
pub fn close(&mut self) {
if let ConnectionType::Active(connection) = &self.connection {
self.connection = ConnectionType::Inactive(connection.downgrade());
}
}
/// Try to upgrade the connection to active state.
pub fn try_upgrade(&mut self) {
if let ConnectionType::Inactive(inactive) = &self.connection {
if let Some(active) = inactive.upgrade() {
self.connection = ConnectionType::Active(active);
}
}
}
/// Attempt to acquire a permit which will keep the connection open for an indefinite time.
pub fn try_get_permit(&self) -> Option<Permit> {
match &self.connection {
ConnectionType::Active(active) => Some(Permit::new(active.clone())),
ConnectionType::Inactive(inactive) => Some(Permit::new(inactive.upgrade()?)),
}
}
/// Open substream to remote peer over `protocol` and send the acquired permit to the
/// transport so it can be given to the opened substream.
pub fn open_substream(
&mut self,
protocol: ProtocolName,
fallback_names: Vec<ProtocolName>,
substream_id: SubstreamId,
permit: Permit,
) -> Result<(), SubstreamError> {
match &self.connection {
ConnectionType::Active(active) => active.clone(),
ConnectionType::Inactive(inactive) =>
inactive.upgrade().ok_or(SubstreamError::ConnectionClosed)?,
}
.try_send(ProtocolCommand::OpenSubstream {
protocol: protocol.clone(),
fallback_names,
substream_id,
connection_id: self.connection_id,
permit,
})
.map_err(|error| match error {
TrySendError::Full(_) => SubstreamError::ChannelClogged,
TrySendError::Closed(_) => SubstreamError::ConnectionClosed,
})
}
/// Force close connection.
pub fn force_close(&mut self) -> crate::Result<()> {
match &self.connection {
ConnectionType::Active(active) => active.clone(),
ConnectionType::Inactive(inactive) =>
inactive.upgrade().ok_or(Error::ConnectionClosed)?,
}
.try_send(ProtocolCommand::ForceClose)
.map_err(|error| match error {
TrySendError::Full(_) => Error::ChannelClogged,
TrySendError::Closed(_) => Error::ConnectionClosed,
})
}
/// Check if the connection is active.
pub fn is_active(&self) -> bool {
matches!(self.connection, ConnectionType::Active(_))
}
}
/// Type which allows the connection to be kept open.
#[derive(Debug, Clone)]
pub struct Permit {
/// Active connection.
_connection: Sender<ProtocolCommand>,
}
impl Permit {
/// Create new [`Permit`] which allows the connection to be kept open.
pub fn new(_connection: Sender<ProtocolCommand>) -> Self {
Self { _connection }
}
}
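// The `Permit` above keeps the connection alive through channel-sender liveness rather
// than an explicit reference counter. A minimal self-contained sketch of that idea,
// using `std::sync::mpsc` and a hypothetical `KeepOpenPermit` standing in for litep2p's
// `Permit` and `ProtocolCommand` types (names below are illustrative, not from the crate):

```rust
use std::sync::mpsc::{channel, Sender, TryRecvError};

// Stand-in for the real `Sender<ProtocolCommand>`.
type CommandSender = Sender<()>;

// A permit only holds a clone of the connection's command-channel sender:
// while at least one permit (sender) is alive, the receiving side sees the
// channel as open and keeps the connection running.
struct KeepOpenPermit {
    _connection: CommandSender,
}

fn main() {
    let (tx, rx) = channel::<()>();
    let permit = KeepOpenPermit { _connection: tx };

    // While the permit is alive, the receiver is still connected (just empty).
    assert!(matches!(rx.try_recv(), Err(TryRecvError::Empty)));

    // Dropping the last permit drops the last sender, so the connection task
    // observes a disconnected channel and can shut the connection down.
    drop(permit);
    assert!(rx.recv().is_err());
}
```

This is why `Permit` needs no `Drop` implementation of its own: disconnection detection comes for free from the channel.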
#[cfg(test)]
mod tests {
use super::*;
use tokio::sync::mpsc::channel;
#[test]
#[should_panic]
fn downgrade_inactive_connection() {
let (tx, _rx) = channel(1);
let mut handle = ConnectionHandle::new(ConnectionId::new(), tx);
let mut new_handle = handle.downgrade();
assert!(std::matches!(
new_handle.connection,
ConnectionType::Inactive(_)
));
// try to downgrade an already-downgraded connection
let _handle = new_handle.downgrade();
}
#[tokio::test]
async fn open_substream_open_downgraded_connection() {
let (tx, mut rx) = channel(1);
let mut handle = ConnectionHandle::new(ConnectionId::new(), tx);
let mut handle = handle.downgrade();
let permit = handle.try_get_permit().unwrap();
let result = handle.open_substream(
ProtocolName::from("/protocol/1"),
Vec::new(),
SubstreamId::new(),
permit,
);
assert!(result.is_ok());
assert!(rx.recv().await.is_some());
}
#[tokio::test]
async fn open_substream_closed_downgraded_connection() {
let (tx, _rx) = channel(1);
let mut handle = ConnectionHandle::new(ConnectionId::new(), tx);
let mut handle = handle.downgrade();
let permit = handle.try_get_permit().unwrap();
drop(_rx);
let result = handle.open_substream(
ProtocolName::from("/protocol/1"),
Vec::new(),
SubstreamId::new(),
permit,
);
assert!(result.is_err());
}
#[tokio::test]
async fn open_substream_channel_clogged() {
let (tx, _rx) = channel(1);
let mut handle = ConnectionHandle::new(ConnectionId::new(), tx);
let mut handle = handle.downgrade();
let permit = handle.try_get_permit().unwrap();
let result = handle.open_substream(
ProtocolName::from("/protocol/1"),
Vec::new(),
SubstreamId::new(),
permit,
);
assert!(result.is_ok());
let permit = handle.try_get_permit().unwrap();
match handle.open_substream(
ProtocolName::from("/protocol/1"),
Vec::new(),
SubstreamId::new(),
permit,
) {
Err(SubstreamError::ChannelClogged) => {}
error => panic!("invalid error: {error:?}"),
}
}
}
// ---------------------------------------------------------------------------
// paritytech/litep2p: src/protocol/transport_service.rs
// ---------------------------------------------------------------------------
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
addresses::PublicAddresses,
error::{Error, ImmediateDialError, SubstreamError},
protocol::{connection::ConnectionHandle, InnerTransportEvent, TransportEvent},
transport::{manager::TransportManagerHandle, Endpoint},
types::{protocol::ProtocolName, ConnectionId, SubstreamId},
PeerId, DEFAULT_CHANNEL_SIZE,
};
use futures::{future::BoxFuture, stream::FuturesUnordered, Stream, StreamExt};
use multiaddr::{Multiaddr, Protocol};
use multihash::Multihash;
use tokio::sync::mpsc::{channel, Receiver, Sender};
use std::{
collections::{HashMap, HashSet},
fmt::Debug,
pin::Pin,
sync::{
atomic::{AtomicUsize, Ordering},
Arc,
},
task::{Context, Poll, Waker},
time::{Duration, Instant},
};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::transport-service";
/// Connection context for the peer.
///
/// Each peer is allowed to have at most two connections open. The first open connection is the
/// primary connection, which the local node uses to open substreams to the remote peer. A
/// secondary connection may be opened if the local and remote nodes open connections to each
/// other at the same time.
///
/// A secondary connection may be promoted to the primary connection if the primary connection
/// closes while the secondary connection remains open.
#[derive(Debug)]
struct ConnectionContext {
/// Primary connection.
primary: ConnectionHandle,
/// Secondary connection, if it exists.
secondary: Option<ConnectionHandle>,
}
impl ConnectionContext {
/// Create new [`ConnectionContext`].
fn new(primary: ConnectionHandle) -> Self {
Self {
primary,
secondary: None,
}
}
/// Downgrade the connection to inactive, which means it will be closed
/// if there are no substreams open over it.
fn downgrade(&mut self, connection_id: &ConnectionId) {
if self.primary.connection_id() == connection_id {
self.primary.close();
return;
}
if let Some(handle) = &mut self.secondary {
if handle.connection_id() == connection_id {
handle.close();
return;
}
}
tracing::debug!(
target: LOG_TARGET,
primary = ?self.primary.connection_id(),
secondary = ?self.secondary.as_ref().map(|handle| handle.connection_id()),
?connection_id,
"connection doesn't exist, cannot downgrade",
);
}
/// Try to upgrade the connection to active state.
fn try_upgrade(&mut self, connection_id: &ConnectionId) {
if self.primary.connection_id() == connection_id {
self.primary.try_upgrade();
return;
}
if let Some(handle) = &mut self.secondary {
if handle.connection_id() == connection_id {
handle.try_upgrade();
return;
}
}
tracing::debug!(
target: LOG_TARGET,
primary = ?self.primary.connection_id(),
secondary = ?self.secondary.as_ref().map(|handle| handle.connection_id()),
?connection_id,
"connection doesn't exist, cannot upgrade",
);
}
}
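// The primary/secondary promotion rule described in the doc comment above (and applied in
// `on_connection_closed` further below) can be boiled down to a few lines. A self-contained
// sketch with hypothetical `Conn`/`Context` types standing in for `ConnectionHandle` and
// `ConnectionContext` (ids are plain `u64`s here for illustration only):

```rust
// Stand-in for `ConnectionHandle`, keyed by its connection id.
#[derive(Debug, PartialEq)]
struct Conn(u64);

struct Context {
    primary: Conn,
    secondary: Option<Conn>,
}

impl Context {
    /// Handle a closed connection; returns `true` when the peer has no
    /// connections left and its context should be removed.
    fn on_closed(&mut self, id: u64) -> bool {
        if self.primary.0 == id {
            match self.secondary.take() {
                // Promote the surviving secondary connection to primary.
                Some(handle) => {
                    self.primary = handle;
                    false
                }
                // No secondary: the peer is now fully disconnected.
                None => true,
            }
        } else {
            if matches!(&self.secondary, Some(c) if c.0 == id) {
                self.secondary = None;
            }
            false
        }
    }
}

fn main() {
    let mut ctx = Context { primary: Conn(0), secondary: Some(Conn(1)) };
    // Primary closes: secondary is promoted, peer stays connected.
    assert!(!ctx.on_closed(0));
    assert_eq!(ctx.primary, Conn(1));
    // Last connection closes: peer context can be removed.
    assert!(ctx.on_closed(1));
}
```

Note how promotion emits no event in the real code: from the protocol's point of view the peer never disconnected.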
/// Tracks connection keep-alive timeouts.
///
/// A connection keep-alive timeout is started when a connection is established.
/// If no substreams are opened over the connection within the timeout,
/// the connection is downgraded. However, if a substream is opened over the connection,
/// the timeout is reset.
#[derive(Debug)]
struct KeepAliveTracker {
/// Close the connection if no substreams are open within this time frame.
keep_alive_timeout: Duration,
/// Track substream last activity.
last_activity: HashMap<(PeerId, ConnectionId), Instant>,
/// Pending keep-alive timeouts.
pending_keep_alive_timeouts: FuturesUnordered<BoxFuture<'static, (PeerId, ConnectionId)>>,
/// Saved waker.
waker: Option<Waker>,
}
impl KeepAliveTracker {
/// Create new [`KeepAliveTracker`].
pub fn new(keep_alive_timeout: Duration) -> Self {
Self {
keep_alive_timeout,
last_activity: HashMap::new(),
pending_keep_alive_timeouts: FuturesUnordered::new(),
waker: None,
}
}
/// Called on connection established event to add a new keep-alive timeout.
pub fn on_connection_established(&mut self, peer: PeerId, connection_id: ConnectionId) {
self.substream_activity(peer, connection_id);
}
/// Called on connection closed event.
pub fn on_connection_closed(&mut self, peer: PeerId, connection_id: ConnectionId) {
self.last_activity.remove(&(peer, connection_id));
}
/// Called on substream opened event to track the last activity.
pub fn substream_activity(&mut self, peer: PeerId, connection_id: ConnectionId) {
// Keep track of the connection ID and the time the substream was opened.
if self.last_activity.insert((peer, connection_id), Instant::now()).is_none() {
// Refill futures if there is no pending keep-alive timeout.
let timeout = self.keep_alive_timeout;
self.pending_keep_alive_timeouts.push(Box::pin(async move {
tokio::time::sleep(timeout).await;
(peer, connection_id)
}));
}
tracing::trace!(
target: LOG_TARGET,
?peer,
?connection_id,
?self.keep_alive_timeout,
last_activity = ?self.last_activity.len(),
pending_keep_alive_timeouts = ?self.pending_keep_alive_timeouts.len(),
"substream activity",
);
// Wake any pending poll.
if let Some(waker) = self.waker.take() {
waker.wake()
}
}
}
impl Stream for KeepAliveTracker {
type Item = (PeerId, ConnectionId);
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
if self.pending_keep_alive_timeouts.is_empty() {
// No pending keep-alive timeouts.
self.waker = Some(cx.waker().clone());
return Poll::Pending;
}
match self.pending_keep_alive_timeouts.poll_next_unpin(cx) {
Poll::Ready(Some(key)) => {
// Check last-activity time.
let Some(last_activity) = self.last_activity.get(&key) else {
tracing::debug!(
target: LOG_TARGET,
peer = ?key.0,
connection_id = ?key.1,
"Last activity no longer tracks the connection (closed event triggered)",
);
// We have effectively ignored this `Poll::Ready` event. To prevent the
// future from getting stuck, we need to tell the executor to poll again
// for more events.
cx.waker().wake_by_ref();
return Poll::Pending;
};
// Keep-alive timeout not reached yet.
let inactive_for = last_activity.elapsed();
if inactive_for < self.keep_alive_timeout {
let timeout = self.keep_alive_timeout.saturating_sub(inactive_for);
tracing::trace!(
target: LOG_TARGET,
peer = ?key.0,
connection_id = ?key.1,
?timeout,
"keep-alive timeout not yet reached",
);
// Refill the keep alive timeouts.
self.pending_keep_alive_timeouts.push(Box::pin(async move {
tokio::time::sleep(timeout).await;
key
}));
// This is similar to the `last_activity` check above, we need to inform
// the executor that this object may produce more events.
cx.waker().wake_by_ref();
return Poll::Pending;
}
// Keep-alive timeout reached.
tracing::debug!(
target: LOG_TARGET,
peer = ?key.0,
connection_id = ?key.1,
"keep-alive timeout triggered",
);
self.last_activity.remove(&key);
Poll::Ready(Some(key))
}
Poll::Ready(None) | Poll::Pending => Poll::Pending,
}
}
}
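// The timer re-arm decision made inside `poll_next` above can be isolated into a pure
// function. The sketch below uses a hypothetical `remaining_keep_alive` helper (not part of
// litep2p) to show the computation: when a timer fires, either re-arm it for the remainder
// of the window or report that the connection should be downgraded.

```rust
use std::time::{Duration, Instant};

// Given the last-activity instant and the configured keep-alive timeout,
// return how much longer to sleep before re-checking, or `None` once the
// timeout has expired with no new activity.
fn remaining_keep_alive(last_activity: Instant, keep_alive: Duration) -> Option<Duration> {
    let inactive_for = last_activity.elapsed();
    if inactive_for < keep_alive {
        // Activity happened recently: re-arm the timer for the remainder.
        Some(keep_alive.saturating_sub(inactive_for))
    } else {
        // No activity within the window: downgrade the connection.
        None
    }
}

fn main() {
    let keep_alive = Duration::from_millis(50);
    let last_activity = Instant::now();

    // Right after activity the timer is re-armed for (close to) the full window.
    assert!(remaining_keep_alive(last_activity, keep_alive).is_some());

    std::thread::sleep(Duration::from_millis(60));

    // Once the window elapses without new activity, the caller downgrades.
    assert!(remaining_keep_alive(last_activity, keep_alive).is_none());
}
```

This lazy re-arming is why a single sleep future per connection suffices even though every substream resets the activity clock.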
/// Provides an interface for [`Litep2p`](crate::Litep2p) protocols to interact
/// with the underlying transport protocols.
#[derive(Debug)]
pub struct TransportService {
/// Local peer ID.
local_peer_id: PeerId,
/// Protocol.
protocol: ProtocolName,
/// Fallback names for the protocol.
fallback_names: Vec<ProtocolName>,
/// Open connections.
connections: HashMap<PeerId, ConnectionContext>,
/// Transport handle.
transport_handle: TransportManagerHandle,
/// RX channel for receiving events from transports and connections.
rx: Receiver<InnerTransportEvent>,
/// Next substream ID.
next_substream_id: Arc<AtomicUsize>,
/// Close the connection if no substreams are open within this time frame.
keep_alive_tracker: KeepAliveTracker,
}
impl TransportService {
/// Create new [`TransportService`].
pub(crate) fn new(
local_peer_id: PeerId,
protocol: ProtocolName,
fallback_names: Vec<ProtocolName>,
next_substream_id: Arc<AtomicUsize>,
transport_handle: TransportManagerHandle,
keep_alive_timeout: Duration,
) -> (Self, Sender<InnerTransportEvent>) {
let (tx, rx) = channel(DEFAULT_CHANNEL_SIZE);
let keep_alive_tracker = KeepAliveTracker::new(keep_alive_timeout);
(
Self {
rx,
protocol,
local_peer_id,
fallback_names,
transport_handle,
next_substream_id,
connections: HashMap::new(),
keep_alive_tracker,
},
tx,
)
}
/// Get the list of public addresses of the node.
pub fn public_addresses(&self) -> PublicAddresses {
self.transport_handle.public_addresses()
}
/// Get the list of listen addresses of the node.
pub fn listen_addresses(&self) -> HashSet<Multiaddr> {
self.transport_handle.listen_addresses()
}
/// Handle connection established event.
fn on_connection_established(
&mut self,
peer: PeerId,
endpoint: Endpoint,
connection_id: ConnectionId,
handle: ConnectionHandle,
) -> Option<TransportEvent> {
tracing::debug!(
target: LOG_TARGET,
?peer,
?endpoint,
?connection_id,
protocol = %self.protocol,
current_state = ?self.connections.get(&peer),
"on connection established",
);
match self.connections.get_mut(&peer) {
Some(context) => match context.secondary {
Some(_) => {
tracing::debug!(
target: LOG_TARGET,
?peer,
?connection_id,
?endpoint,
protocol = %self.protocol,
"ignoring third connection",
);
None
}
None => {
self.keep_alive_tracker.on_connection_established(peer, connection_id);
tracing::trace!(
target: LOG_TARGET,
?peer,
?endpoint,
?connection_id,
protocol = %self.protocol,
"secondary connection established",
);
context.secondary = Some(handle);
None
}
},
None => {
tracing::trace!(
target: LOG_TARGET,
?peer,
?endpoint,
?connection_id,
protocol = %self.protocol,
"primary connection established",
);
self.connections.insert(peer, ConnectionContext::new(handle));
self.keep_alive_tracker.on_connection_established(peer, connection_id);
Some(TransportEvent::ConnectionEstablished { peer, endpoint })
}
}
}
/// Handle connection closed event.
fn on_connection_closed(
&mut self,
peer: PeerId,
connection_id: ConnectionId,
) -> Option<TransportEvent> {
tracing::debug!(
target: LOG_TARGET,
?peer,
?connection_id,
protocol = %self.protocol,
current_state = ?self.connections.get(&peer),
"on connection closed",
);
self.keep_alive_tracker.on_connection_closed(peer, connection_id);
let Some(context) = self.connections.get_mut(&peer) else {
tracing::warn!(
target: LOG_TARGET,
?peer,
?connection_id,
protocol = %self.protocol,
"connection closed to a non-existent peer",
);
debug_assert!(false);
return None;
};
// if the primary connection was closed, check if there exists a secondary connection
// and, if one does, promote it to the primary connection
if context.primary.connection_id() == &connection_id {
tracing::trace!(
target: LOG_TARGET,
?peer,
?connection_id,
protocol = %self.protocol,
"primary connection closed"
);
match context.secondary.take() {
None => {
self.connections.remove(&peer);
return Some(TransportEvent::ConnectionClosed { peer });
}
Some(handle) => {
tracing::debug!(
target: LOG_TARGET,
?peer,
?connection_id,
protocol = %self.protocol,
"switch to secondary connection",
);
context.primary = handle;
return None;
}
}
}
match context.secondary.take() {
Some(handle) if handle.connection_id() == &connection_id => {
tracing::trace!(
target: LOG_TARGET,
?peer,
?connection_id,
protocol = %self.protocol,
"secondary connection closed",
);
None
}
connection_state => {
tracing::debug!(
target: LOG_TARGET,
?peer,
?connection_id,
?connection_state,
protocol = %self.protocol,
"connection closed but it doesn't exist",
);
None
}
}
}
/// Dial `peer` using `PeerId`.
///
/// Call fails if `Litep2p` doesn't have a known address for the peer.
pub fn dial(&mut self, peer: &PeerId) -> Result<(), ImmediateDialError> {
tracing::trace!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
"Dial peer requested",
);
self.transport_handle.dial(peer)
}
/// Dial peer using a `Multiaddr`.
///
/// Call fails if the address is not in the correct format or it contains an unsupported/disabled
/// transport.
///
/// Calling this function is only necessary for addresses that are discovered out-of-band,
/// since `Litep2p` internally keeps track of all peer addresses it has learned through the
/// user calling this function, Kademlia peer discoveries and `Identify` responses.
pub fn dial_address(&mut self, address: Multiaddr) -> Result<(), ImmediateDialError> {
tracing::trace!(
target: LOG_TARGET,
?address,
protocol = %self.protocol,
"Dial address requested",
);
self.transport_handle.dial_address(address)
}
/// Add one or more addresses for `peer`.
///
/// The list is filtered for duplicates and unsupported transports.
pub fn add_known_address(&mut self, peer: &PeerId, addresses: impl Iterator<Item = Multiaddr>) {
let addresses: HashSet<Multiaddr> = addresses
.filter_map(|address| {
if !std::matches!(address.iter().last(), Some(Protocol::P2p(_))) {
Some(address.with(Protocol::P2p(Multihash::from_bytes(&peer.to_bytes()).ok()?)))
} else {
Some(address)
}
})
.collect();
self.transport_handle.add_known_address(peer, addresses.into_iter());
}
/// Open substream to `peer`.
///
/// Call fails if there is no connection open to `peer` or the channel towards
/// the connection is clogged.
pub fn open_substream(&mut self, peer: PeerId) -> Result<SubstreamId, SubstreamError> {
// always prefer the primary connection
let connection = &mut self
.connections
.get_mut(&peer)
.ok_or(SubstreamError::PeerDoesNotExist(peer))?
.primary;
let connection_id = *connection.connection_id();
let permit = connection.try_get_permit().ok_or(SubstreamError::ConnectionClosed)?;
let substream_id =
SubstreamId::from(self.next_substream_id.fetch_add(1usize, Ordering::Relaxed));
tracing::trace!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?substream_id,
?connection_id,
"open substream",
);
self.keep_alive_tracker.substream_activity(peer, connection_id);
connection.try_upgrade();
connection
.open_substream(
self.protocol.clone(),
self.fallback_names.clone(),
substream_id,
permit,
)
.map(|_| substream_id)
}
/// Forcibly close the connection, even if other protocols have substreams open over it.
pub fn force_close(&mut self, peer: PeerId) -> crate::Result<()> {
let connection =
&mut self.connections.get_mut(&peer).ok_or(Error::PeerDoesntExist(peer))?;
tracing::trace!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
secondary = ?connection.secondary,
"forcibly closing the connection",
);
if let Some(ref mut connection) = connection.secondary {
let _ = connection.force_close();
}
connection.primary.force_close()
}
/// Get local peer ID.
pub fn local_peer_id(&self) -> PeerId {
self.local_peer_id
}
/// Dynamically unregister a protocol.
///
/// This must be called when a protocol is no longer needed (e.g. user dropped the protocol
/// handle).
pub fn unregister_protocol(&self) {
self.transport_handle.unregister_protocol(self.protocol.clone());
}
}
impl Stream for TransportService {
type Item = TransportEvent;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let protocol_name = self.protocol.clone();
let duration = self.keep_alive_tracker.keep_alive_timeout;
while let Poll::Ready(event) = self.rx.poll_recv(cx) {
match event {
None => {
tracing::warn!(
target: LOG_TARGET,
protocol = ?protocol_name,
"transport service closed"
);
return Poll::Ready(None);
}
Some(InnerTransportEvent::ConnectionEstablished {
peer,
endpoint,
sender,
connection,
}) => {
if let Some(event) =
self.on_connection_established(peer, endpoint, connection, sender)
{
return Poll::Ready(Some(event));
}
}
Some(InnerTransportEvent::ConnectionClosed { peer, connection }) => {
if let Some(event) = self.on_connection_closed(peer, connection) {
return Poll::Ready(Some(event));
}
}
Some(InnerTransportEvent::SubstreamOpened {
peer,
protocol,
fallback,
direction,
substream,
connection_id,
}) => {
if protocol == self.protocol {
self.keep_alive_tracker.substream_activity(peer, connection_id);
if let Some(context) = self.connections.get_mut(&peer) {
context.try_upgrade(&connection_id);
}
}
return Poll::Ready(Some(TransportEvent::SubstreamOpened {
peer,
protocol,
fallback,
direction,
substream,
}));
}
Some(event) => return Poll::Ready(Some(event.into())),
}
}
while let Poll::Ready(Some((peer, connection_id))) =
self.keep_alive_tracker.poll_next_unpin(cx)
{
if let Some(context) = self.connections.get_mut(&peer) {
tracing::debug!(
target: LOG_TARGET,
?peer,
?connection_id,
protocol = ?protocol_name,
?duration,
"keep-alive timeout over, downgrade connection",
);
context.downgrade(&connection_id);
}
}
Poll::Pending
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{
protocol::{ProtocolCommand, TransportService},
transport::{
manager::{handle::InnerTransportManagerCommand, TransportManagerHandle},
KEEP_ALIVE_TIMEOUT,
},
};
use futures::StreamExt;
use parking_lot::RwLock;
use std::collections::HashSet;
/// Create new `TransportService`
fn transport_service() -> (
TransportService,
Sender<InnerTransportEvent>,
Receiver<InnerTransportManagerCommand>,
) {
let (cmd_tx, cmd_rx) = channel(64);
let peer = PeerId::random();
let handle = TransportManagerHandle::new(
peer,
Arc::new(RwLock::new(HashMap::new())),
cmd_tx,
HashSet::new(),
Default::default(),
PublicAddresses::new(peer),
);
let (service, sender) = TransportService::new(
peer,
ProtocolName::from("/notif/1"),
Vec::new(),
Arc::new(AtomicUsize::new(0usize)),
handle,
KEEP_ALIVE_TIMEOUT,
);
(service, sender, cmd_rx)
}
#[tokio::test]
async fn secondary_connection_stored() {
let (mut service, sender, _) = transport_service();
let peer = PeerId::random();
// register first connection
let (cmd_tx1, _cmd_rx1) = channel(64);
sender
.send(InnerTransportEvent::ConnectionEstablished {
peer,
connection: ConnectionId::from(0usize),
endpoint: Endpoint::listener(Multiaddr::empty(), ConnectionId::from(0usize)),
sender: ConnectionHandle::new(ConnectionId::from(0usize), cmd_tx1),
})
.await
.unwrap();
if let Some(TransportEvent::ConnectionEstablished {
peer: connected_peer,
endpoint,
}) = service.next().await
{
assert_eq!(connected_peer, peer);
assert_eq!(endpoint.address(), &Multiaddr::empty());
} else {
panic!("expected event from `TransportService`");
};
// register secondary connection
let (cmd_tx2, _cmd_rx2) = channel(64);
sender
.send(InnerTransportEvent::ConnectionEstablished {
peer,
connection: ConnectionId::from(1usize),
endpoint: Endpoint::listener(Multiaddr::empty(), ConnectionId::from(1usize)),
sender: ConnectionHandle::new(ConnectionId::from(1usize), cmd_tx2),
})
.await
.unwrap();
futures::future::poll_fn(|cx| match service.poll_next_unpin(cx) {
std::task::Poll::Ready(_) => panic!("didn't expect event from `TransportService`"),
std::task::Poll::Pending => std::task::Poll::Ready(()),
})
.await;
let context = service.connections.get(&peer).unwrap();
assert_eq!(context.primary.connection_id(), &ConnectionId::from(0usize));
assert_eq!(
context.secondary.as_ref().unwrap().connection_id(),
&ConnectionId::from(1usize)
);
}
#[tokio::test]
async fn tertiary_connection_ignored() {
let (mut service, sender, _) = transport_service();
let peer = PeerId::random();
// register first connection
let (cmd_tx1, _cmd_rx1) = channel(64);
sender
.send(InnerTransportEvent::ConnectionEstablished {
peer,
connection: ConnectionId::from(0usize),
endpoint: Endpoint::dialer(Multiaddr::empty(), ConnectionId::from(0usize)),
sender: ConnectionHandle::new(ConnectionId::from(0usize), cmd_tx1),
})
.await
.unwrap();
if let Some(TransportEvent::ConnectionEstablished {
peer: connected_peer,
endpoint,
}) = service.next().await
{
assert_eq!(connected_peer, peer);
assert_eq!(endpoint.address(), &Multiaddr::empty());
} else {
panic!("expected event from `TransportService`");
};
// register secondary connection
let (cmd_tx2, _cmd_rx2) = channel(64);
sender
.send(InnerTransportEvent::ConnectionEstablished {
peer,
connection: ConnectionId::from(1usize),
endpoint: Endpoint::dialer(Multiaddr::empty(), ConnectionId::from(1usize)),
sender: ConnectionHandle::new(ConnectionId::from(1usize), cmd_tx2),
})
.await
.unwrap();
futures::future::poll_fn(|cx| match service.poll_next_unpin(cx) {
std::task::Poll::Ready(_) => panic!("didn't expect event from `TransportService`"),
std::task::Poll::Pending => std::task::Poll::Ready(()),
})
.await;
let context = service.connections.get(&peer).unwrap();
assert_eq!(context.primary.connection_id(), &ConnectionId::from(0usize));
assert_eq!(
context.secondary.as_ref().unwrap().connection_id(),
&ConnectionId::from(1usize)
);
// try to register tertiary connection and verify it's ignored
let (cmd_tx3, mut cmd_rx3) = channel(64);
sender
.send(InnerTransportEvent::ConnectionEstablished {
peer,
connection: ConnectionId::from(2usize),
endpoint: Endpoint::listener(Multiaddr::empty(), ConnectionId::from(2usize)),
sender: ConnectionHandle::new(ConnectionId::from(2usize), cmd_tx3),
})
.await
.unwrap();
futures::future::poll_fn(|cx| match service.poll_next_unpin(cx) {
std::task::Poll::Ready(_) => panic!("didn't expect event from `TransportService`"),
std::task::Poll::Pending => std::task::Poll::Ready(()),
})
.await;
let context = service.connections.get(&peer).unwrap();
assert_eq!(context.primary.connection_id(), &ConnectionId::from(0usize));
assert_eq!(
context.secondary.as_ref().unwrap().connection_id(),
&ConnectionId::from(1usize)
);
assert!(cmd_rx3.try_recv().is_err());
}
#[tokio::test]
async fn secondary_closing_does_not_emit_event() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut service, sender, _) = transport_service();
let peer = PeerId::random();
// register first connection
let (cmd_tx1, _cmd_rx1) = channel(64);
sender
.send(InnerTransportEvent::ConnectionEstablished {
peer,
connection: ConnectionId::from(0usize),
endpoint: Endpoint::dialer(Multiaddr::empty(), ConnectionId::from(0usize)),
sender: ConnectionHandle::new(ConnectionId::from(0usize), cmd_tx1),
})
.await
.unwrap();
if let Some(TransportEvent::ConnectionEstablished {
peer: connected_peer,
endpoint,
}) = service.next().await
{
assert_eq!(connected_peer, peer);
assert_eq!(endpoint.address(), &Multiaddr::empty());
} else {
panic!("expected event from `TransportService`");
};
// register secondary connection
let (cmd_tx2, _cmd_rx2) = channel(64);
sender
.send(InnerTransportEvent::ConnectionEstablished {
peer,
connection: ConnectionId::from(1usize),
endpoint: Endpoint::dialer(Multiaddr::empty(), ConnectionId::from(1usize)),
sender: ConnectionHandle::new(ConnectionId::from(1usize), cmd_tx2),
})
.await
.unwrap();
futures::future::poll_fn(|cx| match service.poll_next_unpin(cx) {
std::task::Poll::Ready(_) => panic!("didn't expect event from `TransportService`"),
std::task::Poll::Pending => std::task::Poll::Ready(()),
})
.await;
let context = service.connections.get(&peer).unwrap();
assert_eq!(context.primary.connection_id(), &ConnectionId::from(0usize));
assert_eq!(
context.secondary.as_ref().unwrap().connection_id(),
&ConnectionId::from(1usize)
);
// close the secondary connection
sender
.send(InnerTransportEvent::ConnectionClosed {
peer,
connection: ConnectionId::from(1usize),
})
.await
.unwrap();
// verify that the protocol is not notified
// [previous file truncated]
// ---------------------------------------------------------------------------
// paritytech/litep2p: src/protocol/mod.rs
// ---------------------------------------------------------------------------
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Protocol-related defines.
use crate::{
codec::ProtocolCodec,
error::SubstreamError,
substream::Substream,
transport::Endpoint,
types::{protocol::ProtocolName, SubstreamId},
PeerId,
};
use multiaddr::Multiaddr;
use std::fmt::Debug;
pub(crate) use connection::Permit;
pub(crate) use protocol_set::{InnerTransportEvent, ProtocolCommand, ProtocolSet};
pub use transport_service::TransportService;
pub mod libp2p;
pub mod mdns;
pub mod notification;
pub mod request_response;
mod connection;
mod protocol_set;
mod transport_service;
/// Substream direction.
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum Direction {
/// Substream was opened by the remote peer.
Inbound,
/// Substream was opened by the local peer.
Outbound(SubstreamId),
}
/// Events emitted by one of the installed transports to protocol(s).
#[derive(Debug)]
pub enum TransportEvent {
/// Connection established to `peer`.
ConnectionEstablished {
/// Peer ID.
peer: PeerId,
/// Endpoint.
endpoint: Endpoint,
},
/// Connection closed to peer.
ConnectionClosed {
/// Peer ID.
peer: PeerId,
},
/// Failed to dial peer.
///
/// This is reported to the protocol which initiated the connection.
DialFailure {
/// Peer ID.
peer: PeerId,
/// Dialed addresses.
addresses: Vec<Multiaddr>,
},
/// Substream opened for `peer`.
SubstreamOpened {
/// Peer ID.
peer: PeerId,
/// Protocol name.
///
/// One protocol handler may handle multiple sub-protocols (such as `/ipfs/identify/1.0.0`
/// and `/ipfs/identify/push/1.0.0`) or it may have aliases which should be handled by
/// the same protocol handler. When the substream is sent from transport to the protocol
/// handler, the protocol name that was used to negotiate the substream is also sent so
/// the protocol can handle the substream appropriately.
protocol: ProtocolName,
/// Fallback protocol.
fallback: Option<ProtocolName>,
/// Substream direction.
///
/// Informs the protocol whether the substream is inbound (opened by the remote node)
/// or outbound (opened by the local node). This allows the protocol to distinguish
/// between the two types of substreams and execute correct code for the substream.
///
/// Outbound substreams also contain the substream ID which allows the protocol to
/// distinguish between different outbound substreams.
direction: Direction,
/// Substream.
substream: Substream,
},
/// Failed to open substream.
///
/// Substream open failures are reported only for outbound substreams.
SubstreamOpenFailure {
/// Substream ID.
substream: SubstreamId,
/// Error that occurred when the substream was being opened.
error: SubstreamError,
},
}
/// Trait defining the interface for a user protocol.
#[async_trait::async_trait]
pub trait UserProtocol: Send {
/// Get user protocol name.
fn protocol(&self) -> ProtocolName;
/// Get user protocol codec.
fn codec(&self) -> ProtocolCodec;
/// Start the user protocol event loop.
async fn run(self: Box<Self>, service: TransportService) -> crate::Result<()>;
}
// ---------------------------------------------------------------------------
// paritytech/litep2p: src/protocol/mdns.rs
// ---------------------------------------------------------------------------
// Copyright 2018 Parity Technologies (UK) Ltd.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! [Multicast DNS](https://en.wikipedia.org/wiki/Multicast_DNS) implementation.
use crate::{transport::manager::TransportManagerHandle, DEFAULT_CHANNEL_SIZE};
use futures::Stream;
use multiaddr::Multiaddr;
use rand::{distributions::Alphanumeric, Rng};
use simple_dns::{
rdata::{RData, PTR, TXT},
Name, Packet, PacketFlag, Question, ResourceRecord, CLASS, QCLASS, QTYPE, TYPE,
};
use socket2::{Domain, Protocol, Socket, Type};
use tokio::{
net::UdpSocket,
sync::mpsc::{channel, Sender},
};
use tokio_stream::wrappers::ReceiverStream;
use std::{
collections::HashSet,
net,
net::{IpAddr, Ipv4Addr, SocketAddr},
sync::Arc,
time::Duration,
};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::mdns";
/// IPv4 multicast address.
const IPV4_MULTICAST_ADDRESS: Ipv4Addr = Ipv4Addr::new(224, 0, 0, 251);
/// IPV4 multicast port.
const IPV4_MULTICAST_PORT: u16 = 5353;
/// Service name.
const SERVICE_NAME: &str = "_p2p._udp.local";
/// Events emitted by mDNS.
// #[derive(Debug, Clone)]
pub enum MdnsEvent {
/// One or more addresses discovered.
Discovered(Vec<Multiaddr>),
}
/// mDNS configuration.
// #[derive(Debug)]
pub struct Config {
/// How often the network should be queried for new peers.
query_interval: Duration,
/// TX channel for sending mDNS events to user.
tx: Sender<MdnsEvent>,
}
impl Config {
/// Create new [`Config`].
///
/// Return the configuration and an event stream for receiving [`MdnsEvent`]s.
pub fn new(
query_interval: Duration,
) -> (Self, Box<dyn Stream<Item = MdnsEvent> + Send + Unpin>) {
let (tx, rx) = channel(DEFAULT_CHANNEL_SIZE);
(
Self { query_interval, tx },
Box::new(ReceiverStream::new(rx)),
)
}
}
/// Main mDNS object.
pub(crate) struct Mdns {
/// Query interval.
query_interval: tokio::time::Interval,
/// TX channel for sending events to user.
event_tx: Sender<MdnsEvent>,
/// Handle to `TransportManager`.
_transport_handle: TransportManagerHandle,
/// Username.
username: String,
/// Next query ID.
next_query_id: u16,
/// Buffer for incoming messages.
receive_buffer: Vec<u8>,
/// Listen addresses.
listen_addresses: Vec<Arc<str>>,
/// Discovered addresses.
discovered: HashSet<Multiaddr>,
}
impl Mdns {
/// Create new [`Mdns`].
pub(crate) fn new(
_transport_handle: TransportManagerHandle,
config: Config,
listen_addresses: Vec<Multiaddr>,
) -> Self {
let mut query_interval = tokio::time::interval(config.query_interval);
query_interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay);
Self {
_transport_handle,
event_tx: config.tx,
next_query_id: 1337u16,
discovered: HashSet::new(),
query_interval,
receive_buffer: vec![0u8; 4096],
username: rand::thread_rng()
.sample_iter(&Alphanumeric)
.take(32)
.map(char::from)
.collect(),
listen_addresses: listen_addresses
.into_iter()
.map(|address| format!("dnsaddr={address}").into())
.collect(),
}
}
/// Get next query ID.
fn next_query_id(&mut self) -> u16 {
let query_id = self.next_query_id;
// wrap instead of overflowing: a debug build would otherwise panic once the counter passes u16::MAX
self.next_query_id = self.next_query_id.wrapping_add(1);
query_id
}
/// Send mDNS query on the network.
async fn on_outbound_request(&mut self, socket: &UdpSocket) -> crate::Result<()> {
tracing::debug!(target: LOG_TARGET, "send outbound query");
let mut packet = Packet::new_query(self.next_query_id());
packet.questions.push(Question {
qname: Name::new_unchecked(SERVICE_NAME),
qtype: QTYPE::TYPE(TYPE::PTR),
qclass: QCLASS::CLASS(CLASS::IN),
unicast_response: false,
});
socket
.send_to(
&packet.build_bytes_vec().expect("valid packet"),
(IPV4_MULTICAST_ADDRESS, IPV4_MULTICAST_PORT),
)
.await
.map(|_| ())
.map_err(From::from)
}
/// Handle inbound query.
fn on_inbound_request(&self, packet: Packet) -> Option<Vec<u8>> {
tracing::debug!(target: LOG_TARGET, ?packet, "handle inbound request");
let mut packet = Packet::new_reply(packet.id());
let srv_name = Name::new_unchecked(SERVICE_NAME);
packet.answers.push(ResourceRecord::new(
srv_name.clone(),
CLASS::IN,
360,
RData::PTR(PTR(Name::new_unchecked(&self.username))),
));
for address in &self.listen_addresses {
let mut record = TXT::new();
record.add_string(address).expect("valid string");
packet.additional_records.push(ResourceRecord {
name: Name::new_unchecked(&self.username),
class: CLASS::IN,
ttl: 360,
rdata: RData::TXT(record),
cache_flush: false,
});
}
Some(packet.build_bytes_vec().expect("valid packet"))
}
/// Handle inbound response.
fn on_inbound_response(&self, packet: Packet) -> Vec<Multiaddr> {
tracing::debug!(target: LOG_TARGET, "handle inbound response");
let names = packet
.answers
.iter()
.filter_map(|answer| {
if answer.name != Name::new_unchecked(SERVICE_NAME) {
return None;
}
match answer.rdata {
RData::PTR(PTR(ref name)) if name != &Name::new_unchecked(&self.username) =>
Some(name),
_ => None,
}
})
.collect::<Vec<&Name>>();
let name = match names.len() {
0 => return Vec::new(),
_ => {
tracing::debug!(
target: LOG_TARGET,
?names,
"response name"
);
names[0]
}
};
packet
.additional_records
.iter()
.flat_map(|record| {
if &record.name != name {
return vec![];
}
// TODO: https://github.com/paritytech/litep2p/issues/333
// `filter_map` is not necessary as there's at most one entry
match &record.rdata {
RData::TXT(text) => text
.attributes()
.iter()
.filter_map(|(_, address)| {
address.as_ref().and_then(|inner| inner.parse().ok())
})
.collect(),
_ => vec![],
}
})
.collect()
}
/// Setup the socket.
fn setup_socket() -> crate::Result<UdpSocket> {
let socket = Socket::new(Domain::IPV4, Type::DGRAM, Some(Protocol::UDP))?;
socket.set_reuse_address(true)?;
#[cfg(unix)]
socket.set_reuse_port(true)?;
socket.bind(
&SocketAddr::new(IpAddr::V4(Ipv4Addr::UNSPECIFIED), IPV4_MULTICAST_PORT).into(),
)?;
socket.set_multicast_loop_v4(true)?;
socket.set_multicast_ttl_v4(255)?;
socket.join_multicast_v4(&IPV4_MULTICAST_ADDRESS, &Ipv4Addr::UNSPECIFIED)?;
socket.set_nonblocking(true)?;
UdpSocket::from_std(net::UdpSocket::from(socket)).map_err(Into::into)
}
/// Event loop for [`Mdns`].
pub(crate) async fn start(mut self) {
tracing::debug!(target: LOG_TARGET, "starting mdns event loop");
let mut socket_opt = None;
loop {
let socket = match socket_opt.take() {
Some(s) => s,
None => {
let _ = self.query_interval.tick().await;
match Self::setup_socket() {
Ok(s) => s,
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?error,
"failed to setup mDNS socket, will try again"
);
continue;
}
}
}
};
tokio::select! {
_ = self.query_interval.tick() => {
tracing::trace!(target: LOG_TARGET, "query interval ticked");
if let Err(error) = self.on_outbound_request(&socket).await {
tracing::debug!(target: LOG_TARGET, ?error, "failed to send mdns query");
// Let's recreate the socket
continue;
}
},
result = socket.recv_from(&mut self.receive_buffer) => match result {
Ok((nread, address)) => match Packet::parse(&self.receive_buffer[..nread]) {
Ok(packet) => match packet.has_flags(PacketFlag::RESPONSE) {
true => {
let to_forward = self.on_inbound_response(packet).into_iter().filter_map(|address| {
self.discovered.insert(address.clone()).then_some(address)
})
.collect::<Vec<_>>();
if !to_forward.is_empty() {
let _ = self.event_tx.send(MdnsEvent::Discovered(to_forward)).await;
}
}
false => if let Some(response) = self.on_inbound_request(packet) {
if let Err(error) = socket
.send_to(&response, (IPV4_MULTICAST_ADDRESS, IPV4_MULTICAST_PORT))
.await {
tracing::debug!(target: LOG_TARGET, ?error, "failed to send mdns response");
// Let's recreate the socket
continue;
}
}
}
Err(error) => tracing::debug!(
target: LOG_TARGET,
?address,
?error,
?nread,
"failed to parse mdns packet"
),
}
Err(error) => {
tracing::debug!(target: LOG_TARGET, ?error, "failed to read from socket");
// Let's recreate the socket
continue;
}
},
};
socket_opt = Some(socket);
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::transport::manager::TransportManagerBuilder;
use futures::StreamExt;
use multiaddr::Protocol;
#[tokio::test]
async fn mdns_works() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (config1, mut stream1) = Config::new(Duration::from_secs(5));
let manager1 = TransportManagerBuilder::new().build();
let mdns1 = Mdns::new(
manager1.transport_manager_handle(),
config1,
vec![
"/ip6/::1/tcp/8888/p2p/12D3KooWNP463TyS3vUpmekjjZ2dg7xy1WHNMM7MqfsMevMTaaaa"
.parse()
.unwrap(),
"/ip4/127.0.0.1/tcp/8888/p2p/12D3KooWNP463TyS3vUpmekjjZ2dg7xy1WHNMM7MqfsMevMTaaaa"
.parse()
.unwrap(),
],
);
let (config2, mut stream2) = Config::new(Duration::from_secs(5));
let manager2 = TransportManagerBuilder::new().build();
let mdns2 = Mdns::new(
manager2.transport_manager_handle(),
config2,
vec![
"/ip6/::1/tcp/9999/p2p/12D3KooWNP463TyS3vUpmekjjZ2dg7xy1WHNMM7MqfsMevMTbbbb"
.parse()
.unwrap(),
"/ip4/127.0.0.1/tcp/9999/p2p/12D3KooWNP463TyS3vUpmekjjZ2dg7xy1WHNMM7MqfsMevMTbbbb"
.parse()
.unwrap(),
],
);
tokio::spawn(mdns1.start());
tokio::spawn(mdns2.start());
let mut peer1_discovered = false;
let mut peer2_discovered = false;
while !peer1_discovered || !peer2_discovered {
tokio::select! {
event = stream1.next() => match event.unwrap() {
MdnsEvent::Discovered(addrs) => {
if addrs.len() == 2 {
let mut iter = addrs[0].iter();
if !std::matches!(iter.next(), Some(Protocol::Ip4(_) | Protocol::Ip6(_))) {
continue
}
match iter.next() {
Some(Protocol::Tcp(port)) => {
if port != 9999 {
continue
}
}
_ => continue,
}
peer1_discovered = true;
}
}
},
event = stream2.next() => match event.unwrap() {
MdnsEvent::Discovered(addrs) => {
if addrs.len() == 2 {
let mut iter = addrs[0].iter();
if !std::matches!(iter.next(), Some(Protocol::Ip4(_) | Protocol::Ip6(_))) {
continue
}
match iter.next() {
Some(Protocol::Tcp(port)) => {
if port != 8888 {
continue
}
}
_ => continue,
}
peer2_discovered = true;
}
}
}
}
}
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/protocol/request_response/config.rs | src/protocol/request_response/config.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
codec::ProtocolCodec,
protocol::request_response::{
handle::{InnerRequestResponseEvent, RequestResponseCommand, RequestResponseHandle},
REQUEST_TIMEOUT,
},
types::protocol::ProtocolName,
DEFAULT_CHANNEL_SIZE,
};
use tokio::sync::mpsc::{channel, Receiver, Sender};
use std::{
sync::{atomic::AtomicUsize, Arc},
time::Duration,
};
/// Request-response protocol configuration.
pub struct Config {
/// Protocol name.
pub(crate) protocol_name: ProtocolName,
/// Fallback names for the main protocol name.
pub(crate) fallback_names: Vec<ProtocolName>,
/// Timeout for outbound requests.
pub(crate) timeout: Duration,
/// Codec used by the protocol.
pub(crate) codec: ProtocolCodec,
/// TX channel for sending events to the user protocol.
pub(super) event_tx: Sender<InnerRequestResponseEvent>,
/// RX channel for receiving commands from the user protocol.
pub(crate) command_rx: Receiver<RequestResponseCommand>,
/// Next ephemeral request ID.
pub(crate) next_request_id: Arc<AtomicUsize>,
/// Maximum number of concurrent inbound requests.
pub(crate) max_concurrent_inbound_request: Option<usize>,
}
impl Config {
/// Create new [`Config`].
pub fn new(
protocol_name: ProtocolName,
fallback_names: Vec<ProtocolName>,
max_message_size: usize,
timeout: Duration,
max_concurrent_inbound_request: Option<usize>,
) -> (Self, RequestResponseHandle) {
let (event_tx, event_rx) = channel(DEFAULT_CHANNEL_SIZE);
let (command_tx, command_rx) = channel(DEFAULT_CHANNEL_SIZE);
let next_request_id = Default::default();
let handle = RequestResponseHandle::new(event_rx, command_tx, Arc::clone(&next_request_id));
(
Self {
event_tx,
command_rx,
protocol_name,
fallback_names,
next_request_id,
timeout,
max_concurrent_inbound_request,
codec: ProtocolCodec::UnsignedVarint(Some(max_message_size)),
},
handle,
)
}
/// Get protocol name.
pub(crate) fn protocol_name(&self) -> &ProtocolName {
&self.protocol_name
}
}
/// Builder for [`Config`].
pub struct ConfigBuilder {
/// Protocol name.
pub(crate) protocol_name: ProtocolName,
/// Fallback names for the main protocol name.
pub(crate) fallback_names: Vec<ProtocolName>,
/// Maximum message size.
max_message_size: Option<usize>,
/// Timeout for outbound requests.
timeout: Option<Duration>,
/// Maximum number of concurrent inbound requests.
max_concurrent_inbound_request: Option<usize>,
}
impl ConfigBuilder {
/// Create new [`ConfigBuilder`].
pub fn new(protocol_name: ProtocolName) -> Self {
Self {
protocol_name,
fallback_names: Vec::new(),
max_message_size: None,
timeout: Some(REQUEST_TIMEOUT),
max_concurrent_inbound_request: None,
}
}
/// Set maximum message size.
pub fn with_max_size(mut self, max_message_size: usize) -> Self {
self.max_message_size = Some(max_message_size);
self
}
/// Set fallback names.
pub fn with_fallback_names(mut self, fallback_names: Vec<ProtocolName>) -> Self {
self.fallback_names = fallback_names;
self
}
/// Set timeout for outbound requests.
pub fn with_timeout(mut self, timeout: Duration) -> Self {
self.timeout = Some(timeout);
self
}
/// Specify the maximum number of concurrent inbound requests. By default the number of inbound
/// requests is not limited.
///
/// If a new request is received while the number of inbound requests is already at a maximum,
/// the request is dropped.
pub fn with_max_concurrent_inbound_requests(
mut self,
max_concurrent_inbound_requests: usize,
) -> Self {
self.max_concurrent_inbound_request = Some(max_concurrent_inbound_requests);
self
}
/// Build [`Config`].
pub fn build(mut self) -> (Config, RequestResponseHandle) {
Config::new(
self.protocol_name,
self.fallback_names,
self.max_message_size.take().expect("maximum message size to be set"),
self.timeout.take().expect("timeout to exist"),
self.max_concurrent_inbound_request,
)
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/protocol/request_response/tests.rs | src/protocol/request_response/tests.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
mock::substream::{DummySubstream, MockSubstream},
protocol::{
request_response::{
ConfigBuilder, DialOptions, RequestResponseError, RequestResponseEvent,
RequestResponseHandle, RequestResponseProtocol,
},
InnerTransportEvent, SubstreamError, TransportService,
},
substream::Substream,
transport::{
manager::{TransportManager, TransportManagerBuilder},
KEEP_ALIVE_TIMEOUT,
},
types::{RequestId, SubstreamId},
Error, PeerId, ProtocolName,
};
use futures::StreamExt;
use tokio::sync::mpsc::Sender;
use std::task::Poll;
// create new protocol for testing
fn protocol() -> (
RequestResponseProtocol,
RequestResponseHandle,
TransportManager,
Sender<InnerTransportEvent>,
) {
let manager = TransportManagerBuilder::new().build();
let peer = PeerId::random();
let (transport_service, tx) = TransportService::new(
peer,
ProtocolName::from("/notif/1"),
Vec::new(),
std::sync::Arc::new(Default::default()),
manager.transport_manager_handle(),
KEEP_ALIVE_TIMEOUT,
);
let (config, handle) =
ConfigBuilder::new(ProtocolName::from("/req/1")).with_max_size(1024).build();
(
RequestResponseProtocol::new(transport_service, config),
handle,
manager,
tx,
)
}
#[tokio::test]
#[cfg(debug_assertions)]
#[should_panic]
async fn connection_established_twice() {
let (mut protocol, _handle, _manager, _tx) = protocol();
let peer = PeerId::random();
protocol.on_connection_established(peer).await.unwrap();
assert!(protocol.peers.contains_key(&peer));
protocol.on_connection_established(peer).await.unwrap();
}
#[tokio::test]
#[cfg(debug_assertions)]
async fn connection_closed_twice() {
let (mut protocol, _handle, _manager, _tx) = protocol();
let peer = PeerId::random();
protocol.on_connection_established(peer).await.unwrap();
assert!(protocol.peers.contains_key(&peer));
protocol.on_connection_closed(peer).await;
assert!(!protocol.peers.contains_key(&peer));
protocol.on_connection_closed(peer).await;
}
#[tokio::test]
#[cfg(debug_assertions)]
#[should_panic]
async fn unknown_outbound_substream_opened() {
let (mut protocol, _handle, _manager, _tx) = protocol();
let peer = PeerId::random();
match protocol
.on_outbound_substream(
peer,
SubstreamId::from(1337usize),
Substream::new_mock(
peer,
SubstreamId::from(0usize),
Box::new(MockSubstream::new()),
),
None,
)
.await
{
Err(Error::InvalidState) => {}
_ => panic!("invalid return value"),
}
}
#[tokio::test]
#[cfg(debug_assertions)]
#[should_panic]
async fn unknown_substream_open_failure() {
let (mut protocol, _handle, _manager, _tx) = protocol();
match protocol
.on_substream_open_failure(
SubstreamId::from(1338usize),
SubstreamError::ConnectionClosed,
)
.await
{
Err(Error::InvalidState) => {}
_ => panic!("invalid return value"),
}
}
#[tokio::test]
async fn cancel_unknown_request() {
let (mut protocol, _handle, _manager, _tx) = protocol();
let request_id = RequestId::from(1337usize);
assert!(!protocol.pending_outbound_cancels.contains_key(&request_id));
assert!(protocol.on_cancel_request(request_id).is_ok());
}
#[tokio::test]
async fn substream_event_for_unknown_peer() {
let (mut protocol, _handle, _manager, _tx) = protocol();
// register peer
let peer = PeerId::random();
protocol.on_connection_established(peer).await.unwrap();
assert!(protocol.peers.contains_key(&peer));
match protocol
.on_substream_event(peer, RequestId::from(1337usize), None, Ok(vec![13, 37]))
.await
{
Err(Error::InvalidState) => {}
_ => panic!("invalid return value"),
}
}
#[tokio::test]
async fn inbound_substream_error() {
let (mut protocol, _handle, _manager, _tx) = protocol();
// register peer
let peer = PeerId::random();
protocol.on_connection_established(peer).await.unwrap();
assert!(protocol.peers.contains_key(&peer));
let mut substream = MockSubstream::new();
substream
.expect_poll_next()
.times(1)
.return_once(|_| Poll::Ready(Some(Err(SubstreamError::ConnectionClosed))));
// register inbound substream from peer
protocol
.on_inbound_substream(
peer,
None,
Substream::new_mock(peer, SubstreamId::from(0usize), Box::new(substream)),
)
.await
.unwrap();
// poll the substream and get the failure event
assert_eq!(protocol.pending_inbound_requests.len(), 1);
let (peer, request_id, event, substream) =
protocol.pending_inbound_requests.next().await.unwrap();
match protocol.on_inbound_request(peer, request_id, event, substream).await {
Err(Error::InvalidData) => {}
_ => panic!("invalid return value"),
}
}
// when a peer who had an active inbound substream disconnects, verify that the substream is removed
// from `pending_inbound_requests` so it doesn't generate new wake-up notifications
#[tokio::test]
async fn disconnect_peer_has_active_inbound_substream() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut protocol, mut handle, _manager, _tx) = protocol();
// register new peer
let peer = PeerId::random();
protocol.on_connection_established(peer).await.unwrap();
// register inbound substream from peer
protocol
.on_inbound_substream(
peer,
None,
Substream::new_mock(
peer,
SubstreamId::from(0usize),
Box::new(DummySubstream::new()),
),
)
.await
.unwrap();
assert_eq!(protocol.pending_inbound_requests.len(), 1);
// disconnect the peer and verify that no events are read from the handle
// since no outbound request was initiated
protocol.on_connection_closed(peer).await;
futures::future::poll_fn(|cx| match handle.poll_next_unpin(cx) {
Poll::Pending => Poll::Ready(()),
event => panic!("read an unexpected event from handle: {event:?}"),
})
.await;
}
// when the user initiates an outbound request and `RequestResponseProtocol` fails to open an
// outbound substream to the peer, the failure should be reported to the user. When the remote
// peer later disconnects, the failure must not be reported again.
#[tokio::test]
async fn request_failure_reported_once() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut protocol, mut handle, _manager, _tx) = protocol();
// register new peer
let peer = PeerId::random();
protocol.on_connection_established(peer).await.unwrap();
// initiate outbound request
//
// since the peer wasn't properly registered, opening substream to them will fail
let request_id = RequestId::from(1337usize);
let error = protocol
.on_send_request(
peer,
request_id,
vec![1, 2, 3, 4],
DialOptions::Reject,
None,
)
.unwrap_err();
protocol.report_request_failure(peer, request_id, error).await.unwrap();
match handle.next().await {
Some(RequestResponseEvent::RequestFailed {
peer: request_peer,
request_id,
error,
}) => {
assert_eq!(request_peer, peer);
assert_eq!(request_id, RequestId::from(1337usize));
assert!(matches!(error, RequestResponseError::Rejected(_)));
}
event => panic!("unexpected event: {event:?}"),
}
// disconnect the peer and verify that no events are read from the handle
// since the outbound request failure was already reported
protocol.on_connection_closed(peer).await;
futures::future::poll_fn(|cx| match handle.poll_next_unpin(cx) {
Poll::Pending => Poll::Ready(()),
event => panic!("read an unexpected event from handle: {event:?}"),
})
.await;
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/protocol/request_response/mod.rs | src/protocol/request_response/mod.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Request-response protocol implementation.
use crate::{
error::{Error, NegotiationError, SubstreamError},
multistream_select::NegotiationError::Failed as MultistreamFailed,
protocol::{
request_response::handle::InnerRequestResponseEvent, Direction, TransportEvent,
TransportService,
},
substream::Substream,
types::{protocol::ProtocolName, RequestId, SubstreamId},
utils::futures_stream::FuturesStream,
PeerId,
};
use bytes::BytesMut;
use futures::{channel, future::BoxFuture, stream::FuturesUnordered, StreamExt};
use tokio::{
sync::{
mpsc::{Receiver, Sender},
oneshot,
},
time::sleep,
};
use std::{
collections::{hash_map::Entry, HashMap, HashSet},
io::ErrorKind,
sync::{
atomic::{AtomicUsize, Ordering},
Arc,
},
time::Duration,
};
pub use config::{Config, ConfigBuilder};
pub use handle::{
DialOptions, RejectReason, RequestResponseCommand, RequestResponseError, RequestResponseEvent,
RequestResponseHandle,
};
mod config;
mod handle;
#[cfg(test)]
mod tests;
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::request-response::protocol";
/// Default request timeout.
const REQUEST_TIMEOUT: Duration = Duration::from_secs(5);
/// Pending request.
type PendingRequest = (
PeerId,
RequestId,
Option<ProtocolName>,
Result<Vec<u8>, RequestResponseError>,
);
/// Request context.
struct RequestContext {
/// Peer ID.
peer: PeerId,
/// Request ID.
request_id: RequestId,
/// Request.
request: Vec<u8>,
/// Fallback request.
fallback: Option<(ProtocolName, Vec<u8>)>,
}
impl RequestContext {
/// Create new [`RequestContext`].
fn new(
peer: PeerId,
request_id: RequestId,
request: Vec<u8>,
fallback: Option<(ProtocolName, Vec<u8>)>,
) -> Self {
Self {
peer,
request_id,
request,
fallback,
}
}
}
/// Peer context.
struct PeerContext {
/// Active requests.
active: HashSet<RequestId>,
/// Active inbound requests and their fallback names.
active_inbound: HashMap<RequestId, Option<ProtocolName>>,
}
impl PeerContext {
/// Create new [`PeerContext`].
fn new() -> Self {
Self {
active: HashSet::new(),
active_inbound: HashMap::new(),
}
}
}
/// Request-response protocol.
pub(crate) struct RequestResponseProtocol {
/// Transport service.
service: TransportService,
/// Protocol.
protocol: ProtocolName,
/// Connected peers.
peers: HashMap<PeerId, PeerContext>,
/// Pending outbound substreams, mapped from `SubstreamId` to the originating `RequestContext`.
pending_outbound: HashMap<SubstreamId, RequestContext>,
/// Pending outbound responses.
///
/// The future listens to a `oneshot::Sender` which is given to `RequestResponseHandle`.
/// If the request is accepted by the local node, the response is sent over the channel to
/// the future, which sends it to the remote peer and closes the substream.
///
/// If the substream is rejected by the local node, the `oneshot::Sender` is dropped which
/// notifies the future that the request should be rejected by closing the substream.
pending_outbound_responses: FuturesUnordered<BoxFuture<'static, ()>>,
/// Pending outbound cancellation handles.
pending_outbound_cancels: HashMap<RequestId, oneshot::Sender<()>>,
/// Pending inbound responses.
pending_inbound: FuturesUnordered<BoxFuture<'static, PendingRequest>>,
/// Pending inbound requests.
pending_inbound_requests: FuturesStream<
BoxFuture<
'static,
(
PeerId,
RequestId,
Result<BytesMut, SubstreamError>,
Substream,
),
>,
>,
/// Pending dials for outbound requests.
pending_dials: HashMap<PeerId, RequestContext>,
/// TX channel for sending events to the user protocol.
event_tx: Sender<InnerRequestResponseEvent>,
/// RX channel for receiving commands from the `RequestResponseHandle`.
command_rx: Receiver<RequestResponseCommand>,
/// Next request ID.
next_request_id: Arc<AtomicUsize>,
/// Timeout for outbound requests.
timeout: Duration,
/// Maximum concurrent inbound requests, if specified.
max_concurrent_inbound_requests: Option<usize>,
}
impl RequestResponseProtocol {
/// Create new [`RequestResponseProtocol`].
pub(crate) fn new(service: TransportService, config: Config) -> Self {
Self {
service,
peers: HashMap::new(),
timeout: config.timeout,
next_request_id: config.next_request_id,
event_tx: config.event_tx,
command_rx: config.command_rx,
protocol: config.protocol_name,
pending_dials: HashMap::new(),
pending_outbound: HashMap::new(),
pending_inbound: FuturesUnordered::new(),
pending_outbound_cancels: HashMap::new(),
pending_inbound_requests: FuturesStream::new(),
pending_outbound_responses: FuturesUnordered::new(),
max_concurrent_inbound_requests: config.max_concurrent_inbound_request,
}
}
/// Get next ephemeral request ID.
fn next_request_id(&mut self) -> RequestId {
RequestId::from(self.next_request_id.fetch_add(1usize, Ordering::Relaxed))
}
/// Connection established to remote peer.
async fn on_connection_established(&mut self, peer: PeerId) -> crate::Result<()> {
tracing::debug!(target: LOG_TARGET, ?peer, protocol = %self.protocol, "connection established");
let Entry::Vacant(entry) = self.peers.entry(peer) else {
tracing::error!(
target: LOG_TARGET,
?peer,
"state mismatch: peer already exists",
);
debug_assert!(false);
return Err(Error::PeerAlreadyExists(peer));
};
match self.pending_dials.remove(&peer) {
None => {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
"peer connected without pending dial",
);
entry.insert(PeerContext::new());
}
Some(context) => match self.service.open_substream(peer) {
Ok(substream_id) => {
tracing::trace!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
request_id = ?context.request_id,
?substream_id,
"dial succeeded, open substream",
);
entry.insert(PeerContext {
active: HashSet::from_iter([context.request_id]),
active_inbound: HashMap::new(),
});
self.pending_outbound.insert(
substream_id,
RequestContext::new(
peer,
context.request_id,
context.request,
context.fallback,
),
);
}
// the only reason the substream could fail to open here is that the connection was
// reported to the protocol with enough delay that the keep-alive timeout had already
// expired and no other protocol had opened a substream to it, causing the connection
// to be closed
Err(error) => {
tracing::warn!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
request_id = ?context.request_id,
?error,
"failed to open substream",
);
return self
.report_request_failure(
peer,
context.request_id,
RequestResponseError::Rejected(error.into()),
)
.await;
}
},
}
Ok(())
}
/// Connection closed to remote peer.
async fn on_connection_closed(&mut self, peer: PeerId) {
tracing::debug!(target: LOG_TARGET, ?peer, protocol = %self.protocol, "connection closed");
// Remove any pending outbound substreams for this peer.
self.pending_outbound.retain(|_, context| context.peer != peer);
let Some(context) = self.peers.remove(&peer) else {
tracing::error!(
target: LOG_TARGET,
?peer,
"Peer does not exist or substream open failed during connection establishment",
);
return;
};
        // send failure events for all pending outbound requests
for request_id in context.active {
let _ = self
.event_tx
.send(InnerRequestResponseEvent::RequestFailed {
peer,
request_id,
error: RequestResponseError::Rejected(RejectReason::ConnectionClosed),
})
.await;
}
}
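The disconnect handler above sweeps two pieces of state: outbound substreams still being opened (`retain` on `pending_outbound`) and requests already in flight (the peer's `active` set), each of which must be reported as failed. A std-only sketch of that sweep, with illustrative names that are not litep2p API:

```rust
use std::collections::{HashMap, HashSet};

struct State {
    // substream id -> peer that owns the pending substream open
    pending_outbound: HashMap<u64, &'static str>,
    // peer -> request ids currently in flight
    active: HashMap<&'static str, HashSet<u64>>,
}

/// Drop everything associated with `peer` and return the request IDs
/// that must be reported to the user as failed.
fn on_disconnect(state: &mut State, peer: &str) -> Vec<u64> {
    // pending substream opens for other peers are kept untouched
    state.pending_outbound.retain(|_, p| *p != peer);
    state
        .active
        .remove(peer)
        .map(|ids| ids.into_iter().collect())
        .unwrap_or_default()
}

fn main() {
    let mut state = State {
        pending_outbound: HashMap::from([(7u64, "alice"), (8u64, "bob")]),
        active: HashMap::from([("alice", HashSet::from([1u64, 2u64]))]),
    };
    let mut failed = on_disconnect(&mut state, "alice");
    failed.sort();
    assert_eq!(failed, vec![1, 2]); // alice's in-flight requests fail
    assert_eq!(state.pending_outbound.len(), 1); // bob's open survives
}
```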
/// Local node opened a substream to remote node.
async fn on_outbound_substream(
&mut self,
peer: PeerId,
substream_id: SubstreamId,
mut substream: Substream,
fallback_protocol: Option<ProtocolName>,
) -> crate::Result<()> {
let Some(RequestContext {
request_id,
request,
fallback,
..
}) = self.pending_outbound.remove(&substream_id)
else {
tracing::error!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?substream_id,
"pending outbound request does not exist",
);
debug_assert!(false);
return Err(Error::InvalidState);
};
tracing::trace!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?substream_id,
?request_id,
"substream opened, send request",
);
let request = match (&fallback_protocol, fallback) {
(Some(protocol), Some((fallback_protocol, fallback_request)))
if protocol == &fallback_protocol =>
fallback_request,
_ => request,
};
let request_timeout = self.timeout;
let protocol = self.protocol.clone();
let (tx, rx) = oneshot::channel();
self.pending_outbound_cancels.insert(request_id, tx);
self.pending_inbound.push(Box::pin(async move {
match tokio::time::timeout(request_timeout, substream.send_framed(request.into())).await
{
Err(_) => (
peer,
request_id,
fallback_protocol,
Err(RequestResponseError::Timeout),
),
Ok(Err(SubstreamError::IoError(ErrorKind::PermissionDenied))) => {
tracing::warn!(
target: LOG_TARGET,
?peer,
%protocol,
"tried to send too large request",
);
(
peer,
request_id,
fallback_protocol,
Err(RequestResponseError::TooLargePayload),
)
}
Ok(Err(error)) => (
peer,
request_id,
fallback_protocol,
Err(RequestResponseError::Rejected(error.into())),
),
Ok(Ok(_)) => {
tokio::select! {
_ = rx => {
tracing::debug!(
target: LOG_TARGET,
?peer,
%protocol,
?request_id,
"request canceled",
);
let _ = substream.close().await;
(
peer,
request_id,
fallback_protocol,
Err(RequestResponseError::Canceled))
}
_ = sleep(request_timeout) => {
tracing::debug!(
target: LOG_TARGET,
?peer,
%protocol,
?request_id,
"request timed out",
);
let _ = substream.close().await;
(peer, request_id, fallback_protocol, Err(RequestResponseError::Timeout))
}
event = substream.next() => match event {
Some(Ok(response)) => {
(peer, request_id, fallback_protocol, Ok(response.freeze().into()))
},
Some(Err(error)) => {
(peer, request_id, fallback_protocol, Err(RequestResponseError::Rejected(error.into())))
},
None => {
tracing::debug!(
target: LOG_TARGET,
?peer,
%protocol,
?request_id,
"substream closed",
);
(peer, request_id, fallback_protocol, Err(RequestResponseError::Rejected(RejectReason::SubstreamClosed)))
}
}
}
}
}
}));
Ok(())
}
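The `(fallback_protocol, fallback)` match inside `on_outbound_substream` selects the fallback payload only when the substream was negotiated with exactly the protocol the caller attached that payload to; otherwise the main request is sent. The same decision as a pure-function sketch (illustrative names, not litep2p API):

```rust
/// Pick the payload to send over the negotiated substream. The fallback
/// payload is used only if the substream was negotiated with the protocol
/// it was attached to; in every other case the main payload wins.
fn select_payload(
    negotiated_fallback: Option<&str>,
    main: Vec<u8>,
    fallback: Option<(&str, Vec<u8>)>,
) -> Vec<u8> {
    match (negotiated_fallback, fallback) {
        (Some(protocol), Some((fallback_protocol, payload)))
            if protocol == fallback_protocol =>
            payload,
        _ => main,
    }
}

fn main() {
    // the fallback protocol was negotiated -> fallback payload is sent
    assert_eq!(
        select_payload(Some("/proto/1"), b"v2".to_vec(), Some(("/proto/1", b"v1".to_vec()))),
        b"v1".to_vec()
    );
    // main protocol negotiated -> main payload is sent
    assert_eq!(
        select_payload(None, b"v2".to_vec(), Some(("/proto/1", b"v1".to_vec()))),
        b"v2".to_vec()
    );
}
```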
/// Handle pending inbound response.
async fn on_inbound_request(
&mut self,
peer: PeerId,
request_id: RequestId,
request: Result<BytesMut, SubstreamError>,
mut substream: Substream,
) -> crate::Result<()> {
// The peer will no longer exist if the connection was closed before processing the request.
let peer_context = self.peers.get_mut(&peer).ok_or(Error::PeerDoesntExist(peer))?;
let fallback = peer_context.active_inbound.remove(&request_id).ok_or_else(|| {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?request_id,
"no active inbound request",
);
Error::InvalidState
})?;
let protocol = self.protocol.clone();
tracing::trace!(
target: LOG_TARGET,
?peer,
%protocol,
?request_id,
"inbound request",
);
let Ok(request) = request else {
tracing::debug!(
target: LOG_TARGET,
?peer,
%protocol,
?request_id,
?request,
"failed to read request from substream",
);
return Err(Error::InvalidData);
};
// once the request has been read from the substream, start a future which waits
// for an input from the user.
//
        // the input is either a response (success) or rejection (failure) which is communicated
// by sending the response over the `oneshot::Sender` or closing it, respectively.
let timeout = self.timeout;
let (response_tx, rx): (
oneshot::Sender<(Vec<u8>, Option<channel::oneshot::Sender<()>>)>,
_,
) = oneshot::channel();
self.pending_outbound_responses.push(Box::pin(async move {
match rx.await {
Err(_) => {
tracing::debug!(
target: LOG_TARGET,
?peer,
%protocol,
?request_id,
"request rejected",
);
let _ = substream.close().await;
}
Ok((response, mut feedback)) => {
tracing::trace!(
target: LOG_TARGET,
?peer,
%protocol,
?request_id,
"send response",
);
match tokio::time::timeout(timeout, substream.send_framed(response.into()))
.await
{
Err(_) => tracing::debug!(
target: LOG_TARGET,
?peer,
%protocol,
?request_id,
"timed out while sending response",
),
Ok(Ok(_)) => feedback.take().map_or((), |feedback| {
let _ = feedback.send(());
}),
Ok(Err(error)) => tracing::trace!(
target: LOG_TARGET,
?peer,
%protocol,
?request_id,
?error,
"failed to send request to peer",
),
}
}
}
}));
self.event_tx
.send(InnerRequestResponseEvent::RequestReceived {
peer,
fallback,
request_id,
request: request.freeze().into(),
response_tx,
})
.await
.map_err(From::from)
}
/// Remote opened a substream to local node.
async fn on_inbound_substream(
&mut self,
peer: PeerId,
fallback: Option<ProtocolName>,
mut substream: Substream,
) -> crate::Result<()> {
tracing::trace!(target: LOG_TARGET, ?peer, protocol = %self.protocol, "handle inbound substream");
if let Some(max_requests) = self.max_concurrent_inbound_requests {
let num_inbound_requests =
self.pending_inbound_requests.len() + self.pending_outbound_responses.len();
if max_requests <= num_inbound_requests {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?fallback,
?max_requests,
"rejecting request as already at maximum",
);
let _ = substream.close().await;
return Ok(());
}
}
// allocate ephemeral id for the inbound request and return it to the user protocol
//
// when user responds to the request, this is used to associate the response with the
// correct substream.
let request_id = self.next_request_id();
self.peers
.get_mut(&peer)
.ok_or(Error::PeerDoesntExist(peer))?
.active_inbound
.insert(request_id, fallback);
self.pending_inbound_requests.push(Box::pin(async move {
let request = match substream.next().await {
Some(Ok(request)) => Ok(request),
Some(Err(error)) => Err(error),
None => Err(SubstreamError::ConnectionClosed),
};
(peer, request_id, request, substream)
}));
Ok(())
}
async fn on_dial_failure(&mut self, peer: PeerId) {
if let Some(context) = self.pending_dials.remove(&peer) {
tracing::debug!(target: LOG_TARGET, ?peer, protocol = %self.protocol, "failed to dial peer");
let _ = self
.peers
.get_mut(&peer)
.map(|peer_context| peer_context.active.remove(&context.request_id));
let _ = self
.report_request_failure(
peer,
context.request_id,
RequestResponseError::Rejected(RejectReason::DialFailed(None)),
)
.await;
}
}
/// Failed to open substream to remote peer.
async fn on_substream_open_failure(
&mut self,
substream: SubstreamId,
error: SubstreamError,
) -> crate::Result<()> {
let Some(RequestContext {
request_id, peer, ..
}) = self.pending_outbound.remove(&substream)
else {
tracing::error!(
target: LOG_TARGET,
protocol = %self.protocol,
?substream,
"pending outbound request does not exist",
);
debug_assert!(false);
return Err(Error::InvalidState);
};
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?request_id,
?substream,
?error,
"failed to open substream",
);
let _ = self
.peers
.get_mut(&peer)
.map(|peer_context| peer_context.active.remove(&request_id));
self.event_tx
.send(InnerRequestResponseEvent::RequestFailed {
peer,
request_id,
error: match error {
SubstreamError::NegotiationError(NegotiationError::MultistreamSelectError(
MultistreamFailed,
)) => RequestResponseError::UnsupportedProtocol,
_ => RequestResponseError::Rejected(error.into()),
},
})
.await
.map_err(From::from)
}
/// Report request send failure to user.
async fn report_request_failure(
&mut self,
peer: PeerId,
request_id: RequestId,
error: RequestResponseError,
) -> crate::Result<()> {
self.event_tx
.send(InnerRequestResponseEvent::RequestFailed {
peer,
request_id,
error,
})
.await
.map_err(From::from)
}
/// Send request to remote peer.
fn on_send_request(
&mut self,
peer: PeerId,
request_id: RequestId,
request: Vec<u8>,
dial_options: DialOptions,
fallback: Option<(ProtocolName, Vec<u8>)>,
) -> Result<(), RequestResponseError> {
tracing::trace!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?request_id,
?dial_options,
"send request to remote peer",
);
let Some(context) = self.peers.get_mut(&peer) else {
match dial_options {
DialOptions::Reject => {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?request_id,
?dial_options,
"peer not connected and should not dial",
);
return Err(RequestResponseError::NotConnected);
}
DialOptions::Dial => match self.service.dial(&peer) {
Ok(_) => {
tracing::trace!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?request_id,
"started dialing peer",
);
self.pending_dials.insert(
peer,
RequestContext::new(peer, request_id, request, fallback),
);
return Ok(());
}
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?error,
"failed to dial peer"
);
return Err(RequestResponseError::Rejected(RejectReason::DialFailed(
Some(error),
)));
}
},
}
};
        // open substream and push it to the pending outbound substreams
        // once the substream is opened, send the request.
match self.service.open_substream(peer) {
Ok(substream_id) => {
let unique_request_id = context.active.insert(request_id);
debug_assert!(unique_request_id);
self.pending_outbound.insert(
substream_id,
RequestContext::new(peer, request_id, request, fallback),
);
Ok(())
}
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?request_id,
?error,
"failed to open substream",
);
Err(RequestResponseError::Rejected(error.into()))
}
}
}
/// Handle substream event.
async fn on_substream_event(
&mut self,
peer: PeerId,
request_id: RequestId,
fallback: Option<ProtocolName>,
message: Result<Vec<u8>, RequestResponseError>,
) -> crate::Result<()> {
if !self
.peers
.get_mut(&peer)
.ok_or(Error::PeerDoesntExist(peer))?
.active
.remove(&request_id)
{
tracing::warn!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?request_id,
"invalid state: received substream event but no active substream",
);
return Err(Error::InvalidState);
}
let event = match message {
Ok(response) => InnerRequestResponseEvent::ResponseReceived {
peer,
request_id,
response,
fallback,
},
Err(error) => match error {
RequestResponseError::Canceled => {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?request_id,
"request canceled by local node",
);
return Ok(());
}
error => InnerRequestResponseEvent::RequestFailed {
peer,
request_id,
error,
},
},
};
self.event_tx.send(event).await.map_err(From::from)
}
/// Cancel outbound request.
fn on_cancel_request(&mut self, request_id: RequestId) -> crate::Result<()> {
tracing::trace!(target: LOG_TARGET, protocol = %self.protocol, ?request_id, "cancel outbound request");
match self.pending_outbound_cancels.remove(&request_id) {
Some(tx) => tx.send(()).map_err(|_| Error::SubstreamDoesntExist),
None => {
tracing::debug!(
target: LOG_TARGET,
protocol = %self.protocol,
?request_id,
"tried to cancel request which doesn't exist",
);
Ok(())
}
}
}
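Cancellation works by keeping one sender per in-flight request in `pending_outbound_cancels`; firing it resolves the `rx` branch of the request future's `select!`, and removing the entry makes a second cancel a no-op. A synchronous std-only analogue of that bookkeeping (std `mpsc` standing in for the tokio oneshot; names are illustrative):

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Sender, TryRecvError};

struct Cancels {
    pending: HashMap<u64, Sender<()>>,
}

impl Cancels {
    /// Returns `true` if a cancel signal reached the in-flight request.
    fn cancel(&mut self, request_id: u64) -> bool {
        match self.pending.remove(&request_id) {
            // the entry is gone from the map either way, so repeated
            // cancels for the same request id are harmless no-ops
            Some(tx) => tx.send(()).is_ok(),
            None => false,
        }
    }
}

fn main() {
    let (tx, rx) = channel();
    let mut cancels = Cancels {
        pending: HashMap::from([(1u64, tx)]),
    };

    assert!(cancels.cancel(1)); // first cancel reaches the request
    assert_eq!(rx.try_recv(), Ok(())); // the "request" observes it
    assert!(!cancels.cancel(1)); // second cancel is a no-op
    assert_eq!(rx.try_recv(), Err(TryRecvError::Disconnected));
}
```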
/// Handles the service event.
async fn handle_service_event(&mut self, event: TransportEvent) {
match event {
TransportEvent::ConnectionEstablished { peer, .. } => {
if let Err(error) = self.on_connection_established(peer).await {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?error,
"failed to handle connection established",
);
}
}
TransportEvent::ConnectionClosed { peer } => {
self.on_connection_closed(peer).await;
}
TransportEvent::SubstreamOpened {
peer,
substream,
direction,
fallback,
..
} => match direction {
Direction::Inbound => {
if let Err(error) = self.on_inbound_substream(peer, fallback, substream).await {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?error,
"failed to handle inbound substream",
);
}
}
Direction::Outbound(substream_id) => {
let _ =
self.on_outbound_substream(peer, substream_id, substream, fallback).await;
}
},
TransportEvent::SubstreamOpenFailure { substream, error } => {
if let Err(error) = self.on_substream_open_failure(substream, error).await {
tracing::warn!(
target: LOG_TARGET,
protocol = %self.protocol,
?error,
"failed to handle substream open failure",
);
}
}
TransportEvent::DialFailure { peer, .. } => self.on_dial_failure(peer).await,
}
}
/// Handles the user command.
async fn handle_user_command(&mut self, command: RequestResponseCommand) {
match command {
RequestResponseCommand::SendRequest {
peer,
request_id,
request,
dial_options,
} => {
if let Err(error) =
self.on_send_request(peer, request_id, request, dial_options, None)
{
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?request_id,
?error,
"failed to send request",
);
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | true |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/protocol/request_response/handle.rs | src/protocol/request_response/handle.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
error::{ImmediateDialError, SubstreamError},
multistream_select::ProtocolError,
types::{protocol::ProtocolName, RequestId},
Error, PeerId,
};
use futures::channel;
use tokio::sync::{
mpsc::{Receiver, Sender},
oneshot,
};
use std::{
collections::HashMap,
io::ErrorKind,
pin::Pin,
sync::{
atomic::{AtomicUsize, Ordering},
Arc,
},
task::{Context, Poll},
};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::request-response::handle";
/// Request-response error.
#[derive(Debug, PartialEq)]
pub enum RequestResponseError {
/// Request was rejected.
Rejected(RejectReason),
/// Request was canceled by the local node.
Canceled,
/// Request timed out.
Timeout,
/// The peer is not connected and the dialing option was [`DialOptions::Reject`].
NotConnected,
/// Too large payload.
TooLargePayload,
/// Protocol not supported.
UnsupportedProtocol,
}
/// The reason why a request was rejected.
#[derive(Debug, PartialEq)]
pub enum RejectReason {
/// Substream error.
SubstreamOpenError(SubstreamError),
/// The peer disconnected before the request was processed.
ConnectionClosed,
/// The substream was closed before the request was processed.
SubstreamClosed,
/// The dial failed.
///
/// If the dial failure is immediate, the error is included.
///
/// If the dialing process is happening in parallel on multiple
/// addresses (potentially with multiple protocols), the dialing
/// process is not considered immediate and the given errors are not
/// propagated for simplicity.
DialFailed(Option<ImmediateDialError>),
}
impl From<SubstreamError> for RejectReason {
fn from(error: SubstreamError) -> Self {
// Convert `ErrorKind::NotConnected` to `RejectReason::ConnectionClosed`.
match error {
SubstreamError::IoError(ErrorKind::NotConnected) => RejectReason::ConnectionClosed,
SubstreamError::YamuxError(crate::yamux::ConnectionError::Io(error), _)
if error.kind() == ErrorKind::NotConnected =>
RejectReason::ConnectionClosed,
SubstreamError::NegotiationError(crate::error::NegotiationError::IoError(
ErrorKind::NotConnected,
)) => RejectReason::ConnectionClosed,
SubstreamError::NegotiationError(
crate::error::NegotiationError::MultistreamSelectError(
crate::multistream_select::NegotiationError::ProtocolError(
ProtocolError::IoError(error),
),
),
) if error.kind() == ErrorKind::NotConnected => RejectReason::ConnectionClosed,
error => RejectReason::SubstreamOpenError(error),
}
}
}
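The conversion above collapses the various transport-level `NotConnected` wrappers into a single `ConnectionClosed` reason so users see one canonical error. The same normalization idea, reduced to plain `std::io` types (a sketch, not the litep2p error types):

```rust
use std::io::{Error, ErrorKind};

#[derive(Debug, PartialEq)]
enum Reason {
    ConnectionClosed,
    Other(ErrorKind),
}

/// Map any I/O error that ultimately means "the peer went away" to one
/// canonical reason; keep every other error kind as-is.
fn normalize(error: &Error) -> Reason {
    match error.kind() {
        ErrorKind::NotConnected => Reason::ConnectionClosed,
        kind => Reason::Other(kind),
    }
}

fn main() {
    let gone = Error::from(ErrorKind::NotConnected);
    assert_eq!(normalize(&gone), Reason::ConnectionClosed);

    let denied = Error::from(ErrorKind::PermissionDenied);
    assert_eq!(normalize(&denied), Reason::Other(ErrorKind::PermissionDenied));
}
```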
/// Request-response events.
#[derive(Debug)]
pub(super) enum InnerRequestResponseEvent {
/// Request received from remote
RequestReceived {
/// Peer Id.
peer: PeerId,
/// Fallback protocol, if the substream was negotiated using a fallback.
fallback: Option<ProtocolName>,
/// Request ID.
request_id: RequestId,
/// Received request.
request: Vec<u8>,
/// `oneshot::Sender` for response.
response_tx: oneshot::Sender<(Vec<u8>, Option<channel::oneshot::Sender<()>>)>,
},
/// Response received.
ResponseReceived {
/// Peer Id.
peer: PeerId,
/// Fallback protocol, if the substream was negotiated using a fallback.
fallback: Option<ProtocolName>,
/// Request ID.
request_id: RequestId,
        /// Received response.
response: Vec<u8>,
},
/// Request failed.
RequestFailed {
/// Peer Id.
peer: PeerId,
/// Request ID.
request_id: RequestId,
/// Request-response error.
error: RequestResponseError,
},
}
impl From<InnerRequestResponseEvent> for RequestResponseEvent {
fn from(event: InnerRequestResponseEvent) -> Self {
match event {
InnerRequestResponseEvent::ResponseReceived {
peer,
request_id,
response,
fallback,
} => RequestResponseEvent::ResponseReceived {
peer,
request_id,
response,
fallback,
},
InnerRequestResponseEvent::RequestFailed {
peer,
request_id,
error,
} => RequestResponseEvent::RequestFailed {
peer,
request_id,
error,
},
_ => panic!("unhandled event"),
}
}
}
/// Request-response events.
#[derive(Debug, PartialEq)]
pub enum RequestResponseEvent {
    /// Request received from remote.
RequestReceived {
/// Peer Id.
peer: PeerId,
/// Fallback protocol, if the substream was negotiated using a fallback.
fallback: Option<ProtocolName>,
/// Request ID.
///
        /// While `request_id` is guaranteed to be unique for this protocol, request IDs are
        /// not unique across different request-response protocols, meaning two different
        /// request-response protocols can both assign `RequestId(123)` for any given request.
request_id: RequestId,
/// Received request.
request: Vec<u8>,
},
/// Response received.
ResponseReceived {
/// Peer Id.
peer: PeerId,
/// Request ID.
request_id: RequestId,
/// Fallback protocol, if the substream was negotiated using a fallback.
fallback: Option<ProtocolName>,
        /// Received response.
response: Vec<u8>,
},
/// Request failed.
RequestFailed {
/// Peer Id.
peer: PeerId,
/// Request ID.
request_id: RequestId,
/// Request-response error.
error: RequestResponseError,
},
}
/// Dial behavior when sending requests.
#[derive(Debug)]
#[cfg_attr(feature = "fuzz", derive(serde::Serialize, serde::Deserialize))]
pub enum DialOptions {
/// If the peer is not currently connected, attempt to dial them before sending a request.
///
/// If the dial succeeds, the request is sent to the peer once the peer has been registered
/// to the protocol.
///
/// If the dial fails, [`RequestResponseError::Rejected`] is returned.
Dial,
/// If the peer is not connected, immediately reject the request and return
/// [`RequestResponseError::NotConnected`].
Reject,
}
/// Request-response commands.
#[derive(Debug)]
#[cfg_attr(feature = "fuzz", derive(serde::Serialize, serde::Deserialize))]
pub enum RequestResponseCommand {
/// Send request to remote peer.
SendRequest {
/// Peer ID.
peer: PeerId,
/// Request ID.
///
/// When a response is received or the request fails, the event contains this ID that
/// the user protocol can associate with the correct request.
///
/// If the user protocol only has one active request per peer, this ID can be safely
/// discarded.
request_id: RequestId,
/// Request.
request: Vec<u8>,
/// Dial options, see [`DialOptions`] for more details.
dial_options: DialOptions,
},
SendRequestWithFallback {
/// Peer ID.
peer: PeerId,
/// Request ID.
request_id: RequestId,
/// Request that is sent over the main protocol, if negotiated.
request: Vec<u8>,
/// Request that is sent over the fallback protocol, if negotiated.
fallback: (ProtocolName, Vec<u8>),
/// Dial options, see [`DialOptions`] for more details.
dial_options: DialOptions,
},
/// Cancel outbound request.
CancelRequest {
/// Request ID.
request_id: RequestId,
},
}
/// Handle given to the user protocol which allows it to interact with the request-response
/// protocol.
pub struct RequestResponseHandle {
    /// RX channel for receiving events from the request-response protocol.
    event_rx: Receiver<InnerRequestResponseEvent>,
    /// TX channel for sending commands to the request-response protocol.
    command_tx: Sender<RequestResponseCommand>,
/// Pending responses.
pending_responses:
HashMap<RequestId, oneshot::Sender<(Vec<u8>, Option<channel::oneshot::Sender<()>>)>>,
/// Next ephemeral request ID.
next_request_id: Arc<AtomicUsize>,
}
impl RequestResponseHandle {
/// Create new [`RequestResponseHandle`].
pub(super) fn new(
event_rx: Receiver<InnerRequestResponseEvent>,
command_tx: Sender<RequestResponseCommand>,
next_request_id: Arc<AtomicUsize>,
) -> Self {
Self {
event_rx,
command_tx,
next_request_id,
pending_responses: HashMap::new(),
}
}
#[cfg(feature = "fuzz")]
/// Expose functionality for fuzzing
pub async fn fuzz_send_message(
&mut self,
command: RequestResponseCommand,
) -> crate::Result<RequestId> {
let request_id = self.next_request_id();
self.command_tx.send(command).await.map(|_| request_id).map_err(From::from)
}
/// Reject an inbound request.
///
    /// Reject a request received from a remote peer. The substream is dropped, which signals
    /// to the remote peer that the request was rejected.
pub fn reject_request(&mut self, request_id: RequestId) {
match self.pending_responses.remove(&request_id) {
None => {
tracing::debug!(target: LOG_TARGET, ?request_id, "rejected request doesn't exist")
}
Some(sender) => {
tracing::debug!(target: LOG_TARGET, ?request_id, "reject request");
drop(sender);
}
}
}
/// Cancel an outbound request.
///
    /// Allows canceling an in-flight request if the local node is no longer interested
    /// in the answer. If the request was canceled, no event is reported to the user as the
    /// cancelation always succeeds and it's assumed that the user does the necessary state
    /// cleanup on their end after calling [`RequestResponseHandle::cancel_request()`].
pub async fn cancel_request(&mut self, request_id: RequestId) {
tracing::trace!(target: LOG_TARGET, ?request_id, "cancel request");
let _ = self.command_tx.send(RequestResponseCommand::CancelRequest { request_id }).await;
}
/// Get next request ID.
fn next_request_id(&self) -> RequestId {
let request_id = self.next_request_id.fetch_add(1usize, Ordering::Relaxed);
RequestId::from(request_id)
}
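Both the handle and the protocol allocate request IDs from the same `Arc<AtomicUsize>`, so IDs drawn on either side never collide, and `Ordering::Relaxed` suffices because only uniqueness matters, not ordering with other memory. A minimal std-only sketch of the pattern (illustrative names, not litep2p API):

```rust
use std::sync::{
    atomic::{AtomicUsize, Ordering},
    Arc,
};

/// Illustrative ID allocator shared between two "sides" of a protocol.
#[derive(Clone)]
struct IdAllocator {
    next: Arc<AtomicUsize>,
}

impl IdAllocator {
    fn new() -> Self {
        Self {
            next: Arc::new(AtomicUsize::new(0)),
        }
    }

    /// `fetch_add` with `Ordering::Relaxed` is enough here: each caller
    /// needs a unique value, nothing more.
    fn next_id(&self) -> usize {
        self.next.fetch_add(1, Ordering::Relaxed)
    }
}

fn main() {
    let handle_side = IdAllocator::new();
    let protocol_side = handle_side.clone();

    let a = handle_side.next_id();
    let b = protocol_side.next_id();
    assert_ne!(a, b); // IDs drawn from the shared counter never collide
}
```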
/// Send request to remote peer.
///
/// While the returned `RequestId` is guaranteed to be unique for this request-response
/// protocol, it's not unique across all installed request-response protocols. That is,
/// multiple request-response protocols can return the same `RequestId` and this must be
/// handled by the calling code correctly if the `RequestId`s are stored somewhere.
pub async fn send_request(
&mut self,
peer: PeerId,
request: Vec<u8>,
dial_options: DialOptions,
) -> crate::Result<RequestId> {
tracing::trace!(target: LOG_TARGET, ?peer, "send request to peer");
let request_id = self.next_request_id();
self.command_tx
.send(RequestResponseCommand::SendRequest {
peer,
request_id,
request,
dial_options,
})
.await
.map(|_| request_id)
.map_err(From::from)
}
/// Attempt to send request to peer and if the channel is clogged, return
/// `Error::ChannelClogged`.
///
/// While the returned `RequestId` is guaranteed to be unique for this request-response
/// protocol, it's not unique across all installed request-response protocols. That is,
/// multiple request-response protocols can return the same `RequestId` and this must be
/// handled by the calling code correctly if the `RequestId`s are stored somewhere.
pub fn try_send_request(
&mut self,
peer: PeerId,
request: Vec<u8>,
dial_options: DialOptions,
) -> crate::Result<RequestId> {
tracing::trace!(target: LOG_TARGET, ?peer, "send request to peer");
let request_id = self.next_request_id();
self.command_tx
.try_send(RequestResponseCommand::SendRequest {
peer,
request_id,
request,
dial_options,
})
.map(|_| request_id)
.map_err(|_| Error::ChannelClogged)
}
/// Send request to remote peer with fallback.
pub async fn send_request_with_fallback(
&mut self,
peer: PeerId,
request: Vec<u8>,
fallback: (ProtocolName, Vec<u8>),
dial_options: DialOptions,
) -> crate::Result<RequestId> {
tracing::trace!(
target: LOG_TARGET,
?peer,
fallback = %fallback.0,
?dial_options,
"send request with fallback to peer",
);
let request_id = self.next_request_id();
self.command_tx
.send(RequestResponseCommand::SendRequestWithFallback {
peer,
request_id,
fallback,
request,
dial_options,
})
.await
.map(|_| request_id)
.map_err(From::from)
}
/// Attempt to send request to peer with fallback and if the channel is clogged,
/// return `Error::ChannelClogged`.
pub fn try_send_request_with_fallback(
&mut self,
peer: PeerId,
request: Vec<u8>,
fallback: (ProtocolName, Vec<u8>),
dial_options: DialOptions,
) -> crate::Result<RequestId> {
tracing::trace!(
target: LOG_TARGET,
?peer,
fallback = %fallback.0,
?dial_options,
"send request with fallback to peer",
);
let request_id = self.next_request_id();
self.command_tx
.try_send(RequestResponseCommand::SendRequestWithFallback {
peer,
request_id,
fallback,
request,
dial_options,
})
.map(|_| request_id)
.map_err(|_| Error::ChannelClogged)
}
/// Send response to remote peer.
pub fn send_response(&mut self, request_id: RequestId, response: Vec<u8>) {
match self.pending_responses.remove(&request_id) {
None => {
tracing::debug!(target: LOG_TARGET, ?request_id, "pending response doens't exist");
}
Some(response_tx) => {
tracing::trace!(target: LOG_TARGET, ?request_id, "send response to peer");
                if response_tx.send((response, None)).is_err() {
                    tracing::debug!(target: LOG_TARGET, ?request_id, "substream closed");
                }
}
}
}
/// Send response to remote peer with feedback.
///
/// The feedback system is inherited from Polkadot SDK's `sc-network` and it's used to notify
/// the sender of the response whether it was sent successfully or not. Once the response has
/// been sent over the substream successfully, `()` will be sent over the feedback channel
/// to the sender to notify them about it. If the substream has been closed or the substream
/// failed while sending the response, the feedback channel will be dropped, notifying the
/// sender that sending the response failed.
pub fn send_response_with_feedback(
&mut self,
request_id: RequestId,
response: Vec<u8>,
feedback: channel::oneshot::Sender<()>,
) {
match self.pending_responses.remove(&request_id) {
None => {
tracing::debug!(target: LOG_TARGET, ?request_id, "pending response doens't exist");
}
Some(response_tx) => {
tracing::trace!(target: LOG_TARGET, ?request_id, "send response to peer");
                if response_tx.send((response, Some(feedback))).is_err() {
                    tracing::debug!(target: LOG_TARGET, ?request_id, "substream closed");
                }
}
}
}
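The feedback channel described above is just a one-shot sender that is either fired (the response was written to the substream) or dropped (sending failed or timed out), and the waiter distinguishes the two by whether `recv` succeeds. With std's `mpsc` standing in for the async oneshot (a simplified sketch, not litep2p API):

```rust
use std::sync::mpsc::{channel, Receiver, RecvError, Sender};

/// Simulate the protocol side: `delivered` decides whether the response
/// "made it onto the wire". Success fires the feedback channel.
fn send_response(feedback: Sender<()>, delivered: bool) {
    if delivered {
        let _ = feedback.send(());
    }
    // dropping `feedback` without sending signals failure to the waiter
}

/// The response producer waits on the feedback channel: `Ok(())` means the
/// response was sent, `Err(_)` means the sender was dropped (send failed).
fn wait_feedback(rx: Receiver<()>) -> Result<(), RecvError> {
    rx.recv()
}

fn main() {
    let (tx, rx) = channel();
    send_response(tx, true);
    assert_eq!(wait_feedback(rx), Ok(())); // delivered

    let (tx, rx) = channel();
    send_response(tx, false);
    assert!(wait_feedback(rx).is_err()); // sender dropped -> failure
}
```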
}
impl futures::Stream for RequestResponseHandle {
type Item = RequestResponseEvent;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
match futures::ready!(self.event_rx.poll_recv(cx)) {
None => Poll::Ready(None),
Some(event) => match event {
InnerRequestResponseEvent::RequestReceived {
peer,
fallback,
request_id,
request,
response_tx,
} => {
self.pending_responses.insert(request_id, response_tx);
Poll::Ready(Some(RequestResponseEvent::RequestReceived {
peer,
fallback,
request_id,
request,
}))
}
event => Poll::Ready(Some(event.into())),
},
}
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/protocol/notification/config.rs | src/protocol/notification/config.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
codec::ProtocolCodec,
protocol::notification::{
handle::NotificationHandle,
types::{
InnerNotificationEvent, NotificationCommand, ASYNC_CHANNEL_SIZE, SYNC_CHANNEL_SIZE,
},
},
types::protocol::ProtocolName,
PeerId, DEFAULT_CHANNEL_SIZE,
};
use bytes::BytesMut;
use parking_lot::RwLock;
use tokio::sync::mpsc::{channel, Receiver, Sender};
use std::sync::Arc;
/// Notification configuration.
#[derive(Debug)]
pub struct Config {
/// Protocol name.
pub(crate) protocol_name: ProtocolName,
/// Protocol codec.
pub(crate) codec: ProtocolCodec,
/// Maximum notification size.
_max_notification_size: usize,
/// Handshake bytes.
pub(crate) handshake: Arc<RwLock<Vec<u8>>>,
/// Auto accept inbound substream.
pub(super) auto_accept: bool,
/// Protocol aliases.
pub(crate) fallback_names: Vec<ProtocolName>,
/// TX channel passed to the protocol used for sending events.
pub(crate) event_tx: Sender<InnerNotificationEvent>,
/// TX channel for sending notifications from the connection handlers.
pub(crate) notif_tx: Sender<(PeerId, BytesMut)>,
/// RX channel passed to the protocol used for receiving commands.
pub(crate) command_rx: Receiver<NotificationCommand>,
/// Synchronous channel size.
pub(crate) sync_channel_size: usize,
/// Asynchronous channel size.
pub(crate) async_channel_size: usize,
/// Should `NotificationProtocol` dial the peer if there is no connection to them
/// when an outbound substream is requested.
pub(crate) should_dial: bool,
}
impl Config {
/// Create new [`Config`].
pub fn new(
protocol_name: ProtocolName,
max_notification_size: usize,
handshake: Vec<u8>,
fallback_names: Vec<ProtocolName>,
auto_accept: bool,
sync_channel_size: usize,
async_channel_size: usize,
should_dial: bool,
) -> (Self, NotificationHandle) {
let (event_tx, event_rx) = channel(DEFAULT_CHANNEL_SIZE);
let (notif_tx, notif_rx) = channel(DEFAULT_CHANNEL_SIZE);
let (command_tx, command_rx) = channel(DEFAULT_CHANNEL_SIZE);
let handshake = Arc::new(RwLock::new(handshake));
let handle =
NotificationHandle::new(event_rx, notif_rx, command_tx, Arc::clone(&handshake));
(
Self {
protocol_name,
codec: ProtocolCodec::UnsignedVarint(Some(max_notification_size)),
_max_notification_size: max_notification_size,
auto_accept,
handshake,
fallback_names,
event_tx,
notif_tx,
command_rx,
should_dial,
sync_channel_size,
async_channel_size,
},
handle,
)
}
/// Get protocol name.
pub(crate) fn protocol_name(&self) -> &ProtocolName {
&self.protocol_name
}
/// Set handshake for the protocol.
///
/// This function is used to work around an issue in Polkadot SDK and users
/// should not depend on its continued existence.
pub fn set_handshake(&mut self, handshake: Vec<u8>) {
let mut inner = self.handshake.write();
*inner = handshake;
}
}
/// Notification configuration builder.
pub struct ConfigBuilder {
/// Protocol name.
protocol_name: ProtocolName,
/// Maximum notification size.
max_notification_size: Option<usize>,
/// Handshake bytes.
handshake: Option<Vec<u8>>,
/// Should `NotificationProtocol` dial the peer if an outbound substream is requested but there
/// is no connection to the peer.
should_dial: bool,
/// Fallback names.
fallback_names: Vec<ProtocolName>,
/// Auto accept inbound substream.
auto_accept_inbound_for_initiated: bool,
/// Synchronous channel size.
sync_channel_size: usize,
/// Asynchronous channel size.
async_channel_size: usize,
}
impl ConfigBuilder {
/// Create new [`ConfigBuilder`].
pub fn new(protocol_name: ProtocolName) -> Self {
Self {
protocol_name,
max_notification_size: None,
handshake: None,
fallback_names: Vec::new(),
auto_accept_inbound_for_initiated: false,
sync_channel_size: SYNC_CHANNEL_SIZE,
async_channel_size: ASYNC_CHANNEL_SIZE,
should_dial: true,
}
}
/// Set maximum notification size.
pub fn with_max_size(mut self, max_notification_size: usize) -> Self {
self.max_notification_size = Some(max_notification_size);
self
}
/// Set handshake.
pub fn with_handshake(mut self, handshake: Vec<u8>) -> Self {
self.handshake = Some(handshake);
self
}
/// Set fallback names.
pub fn with_fallback_names(mut self, fallback_names: Vec<ProtocolName>) -> Self {
self.fallback_names = fallback_names;
self
}
/// Auto-accept inbound substreams for those connections which were initiated by the local
/// node.
///
/// Connection in this context means a bidirectional substream pair between two peers over a
/// given protocol.
///
/// By default, when a node starts a connection with a remote node and opens an outbound
/// substream to them, that substream is validated and, if accepted, the remote node sends
/// its handshake over that substream and opens another substream to the local node. The
/// substream that was opened by the local node is used for sending data and the one opened
/// by the remote node is used for receiving data.
///
/// By default, even if the local node was the one that opened the first substream, the inbound
/// substream coming from the remote node must be validated, as the handshake of the remote node
/// may reveal that it's not someone the local node is willing to accept.
///
/// To disable this behavior, auto-accepting of the inbound substream can be enabled. If the
/// local node opened the connection and the remote node accepted it, the local node is only
/// notified via
/// [`NotificationStreamOpened`](super::types::NotificationEvent::NotificationStreamOpened).
pub fn with_auto_accept_inbound(mut self, auto_accept: bool) -> Self {
self.auto_accept_inbound_for_initiated = auto_accept;
self
}
/// Configure size of the channel for sending synchronous notifications.
///
/// Default value is `2048`.
pub fn with_sync_channel_size(mut self, size: usize) -> Self {
self.sync_channel_size = size;
self
}
/// Configure size of the channel for sending asynchronous notifications.
///
/// Default value is `8`.
pub fn with_async_channel_size(mut self, size: usize) -> Self {
self.async_channel_size = size;
self
}
/// Should `NotificationProtocol` attempt to dial the peer if an outbound substream is requested
/// but no connection to the peer exists.
///
/// Dialing is enabled by default.
pub fn with_dialing_enabled(mut self, should_dial: bool) -> Self {
self.should_dial = should_dial;
self
}
/// Build notification configuration.
pub fn build(mut self) -> (Config, NotificationHandle) {
Config::new(
self.protocol_name,
self.max_notification_size.take().expect("notification size to be specified"),
self.handshake.take().expect("handshake to be specified"),
self.fallback_names,
self.auto_accept_inbound_for_initiated,
self.sync_channel_size,
self.async_channel_size,
self.should_dial,
)
}
}
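The `ConfigBuilder` above is a consuming builder: each setter takes `self` by value and returns it, and `build()` panics if the mandatory fields (`max_notification_size`, `handshake`) were never set. A minimal, self-contained sketch of that same pattern, with the litep2p-specific types replaced by plain placeholders (all names here are illustrative, not the real API):

```rust
#[derive(Debug, PartialEq)]
struct Config {
    protocol_name: String,
    max_notification_size: usize,
    handshake: Vec<u8>,
    should_dial: bool,
}

struct ConfigBuilder {
    protocol_name: String,
    max_notification_size: Option<usize>,
    handshake: Option<Vec<u8>>,
    should_dial: bool,
}

impl ConfigBuilder {
    fn new(protocol_name: &str) -> Self {
        Self {
            protocol_name: protocol_name.to_owned(),
            max_notification_size: None,
            handshake: None,
            // dialing enabled by default, mirroring the builder above
            should_dial: true,
        }
    }

    // Consuming setters returning `Self` allow call chaining.
    fn with_max_size(mut self, size: usize) -> Self {
        self.max_notification_size = Some(size);
        self
    }

    fn with_handshake(mut self, handshake: Vec<u8>) -> Self {
        self.handshake = Some(handshake);
        self
    }

    // Like the real `build()`, mandatory fields panic if unset.
    fn build(self) -> Config {
        Config {
            protocol_name: self.protocol_name,
            max_notification_size: self
                .max_notification_size
                .expect("notification size to be specified"),
            handshake: self.handshake.expect("handshake to be specified"),
            should_dial: self.should_dial,
        }
    }
}

fn main() {
    let config = ConfigBuilder::new("/notif/1")
        .with_max_size(1024)
        .with_handshake(vec![1, 3, 3, 7])
        .build();
    assert_eq!(config.max_notification_size, 1024);
    assert!(config.should_dial);
}
```

The consuming style makes a half-configured builder unusable after `build()`, so forgotten mandatory fields surface immediately at startup rather than later at runtime.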
// File: src/protocol/notification/connection.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
protocol::notification::handle::NotificationEventHandle, substream::Substream, PeerId,
};
use bytes::BytesMut;
use futures::{FutureExt, SinkExt, Stream, StreamExt};
use tokio::sync::{
mpsc::{Receiver, Sender},
oneshot,
};
use tokio_util::sync::PollSender;
use std::{
pin::Pin,
task::{Context, Poll},
};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::notification::connection";
/// Bidirectional substream pair representing a connection to a remote peer.
pub(crate) struct Connection {
/// Remote peer ID.
peer: PeerId,
/// Inbound substream for receiving notifications.
inbound: Substream,
/// Outbound substream for sending notifications.
outbound: Substream,
/// Handle for sending notification events to user.
event_handle: NotificationEventHandle,
/// TX channel used to notify [`NotificationProtocol`](super::NotificationProtocol)
/// that the connection has been closed.
conn_closed_tx: Sender<PeerId>,
/// TX channel for sending notifications.
notif_tx: PollSender<(PeerId, BytesMut)>,
/// Receiver for asynchronously sent notifications.
async_rx: Receiver<Vec<u8>>,
/// Receiver for synchronously sent notifications.
sync_rx: Receiver<Vec<u8>>,
/// Oneshot receiver used by [`NotificationProtocol`](super::NotificationProtocol)
/// to signal that the local node wishes to close the connection.
rx: oneshot::Receiver<()>,
/// Next notification to send, if any.
next_notification: Option<Vec<u8>>,
}
/// Whether to notify [`NotificationProtocol`](super::NotificationProtocol) that the connection was closed.
#[derive(Debug)]
pub enum NotifyProtocol {
/// Notify the protocol handler.
Yes,
/// Do not notify protocol handler.
No,
}
impl Connection {
/// Create new [`Connection`].
pub(crate) fn new(
peer: PeerId,
inbound: Substream,
outbound: Substream,
event_handle: NotificationEventHandle,
conn_closed_tx: Sender<PeerId>,
notif_tx: Sender<(PeerId, BytesMut)>,
async_rx: Receiver<Vec<u8>>,
sync_rx: Receiver<Vec<u8>>,
) -> (Self, oneshot::Sender<()>) {
let (tx, rx) = oneshot::channel();
(
Self {
rx,
peer,
sync_rx,
async_rx,
inbound,
outbound,
event_handle,
conn_closed_tx,
next_notification: None,
notif_tx: PollSender::new(notif_tx),
},
tx,
)
}
/// Connection closed, clean up state.
///
/// If [`NotificationProtocol`](super::NotificationProtocol) was the one that initiated
/// the shutdown, it's not notified of the connection getting closed.
async fn close_connection(self, notify_protocol: NotifyProtocol) {
tracing::trace!(
target: LOG_TARGET,
peer = ?self.peer,
?notify_protocol,
"close notification protocol",
);
let _ = self.inbound.close().await;
let _ = self.outbound.close().await;
if std::matches!(notify_protocol, NotifyProtocol::Yes) {
let _ = self.conn_closed_tx.send(self.peer).await;
}
self.event_handle.report_notification_stream_closed(self.peer).await;
}
pub async fn start(mut self) {
tracing::debug!(
target: LOG_TARGET,
peer = ?self.peer,
"start connection event loop",
);
loop {
match self.next().await {
None
| Some(ConnectionEvent::CloseConnection {
notify: NotifyProtocol::Yes,
}) => return self.close_connection(NotifyProtocol::Yes).await,
Some(ConnectionEvent::CloseConnection {
notify: NotifyProtocol::No,
}) => return self.close_connection(NotifyProtocol::No).await,
Some(ConnectionEvent::NotificationReceived { notification }) => {
if let Err(_) = self.notif_tx.send_item((self.peer, notification)) {
return self.close_connection(NotifyProtocol::Yes).await;
}
}
}
}
}
}
/// Connection events.
pub enum ConnectionEvent {
/// Close connection.
///
/// If `NotificationProtocol` requested [`Connection`] to be closed, it doesn't need to be
/// notified. If, on the other hand, connection closes because it encountered an error or one
/// of the substreams was closed, `NotificationProtocol` must be informed so it can inform the
/// user.
CloseConnection {
/// Whether to notify `NotificationProtocol` or not.
notify: NotifyProtocol,
},
/// Notification read from the inbound substream.
///
/// NOTE: [`Connection`] uses `PollSender::send_item()` to send the notification to user.
/// `PollSender::poll_reserve()` must be called before calling `PollSender::send_item()` or it
/// will panic. `PollSender::poll_reserve()` is called in the `Stream` implementation below
/// before polling the inbound substream to ensure the channel has capacity to receive a
/// notification.
NotificationReceived {
/// Notification.
notification: BytesMut,
},
}
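The NOTE above describes a reserve-before-send contract: the inbound substream is only polled once the notification channel is known to have capacity, so a received notification can always be forwarded without blocking or panicking. A rough std-only sketch of the same backpressure idea using a bounded `sync_channel` (tokio-util's `PollSender` is the real mechanism; this only illustrates the capacity check):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Bounded channel with capacity 1, standing in for the notification
    // channel between the connection handler and the user.
    let (tx, rx) = sync_channel::<u32>(1);

    // First send fits within the available capacity.
    assert!(tx.try_send(1).is_ok());

    // Capacity exhausted: the send fails instead of blocking, which is the
    // situation `poll_reserve()` guards against before `send_item()`.
    assert!(matches!(tx.try_send(2), Err(TrySendError::Full(2))));

    // Once the consumer drains the channel, capacity is available again.
    assert_eq!(rx.recv().unwrap(), 1);
    assert!(tx.try_send(2).is_ok());
}
```

Checking capacity before reading from the substream, rather than after, means a notification is never pulled off the wire only to be dropped because the user-side channel is full.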
impl Stream for Connection {
type Item = ConnectionEvent;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let this = Pin::into_inner(self);
if let Poll::Ready(_) = this.rx.poll_unpin(cx) {
return Poll::Ready(Some(ConnectionEvent::CloseConnection {
notify: NotifyProtocol::No,
}));
}
loop {
let notification = match this.next_notification.take() {
Some(notification) => Some(notification),
None => {
let future = async {
tokio::select! {
notification = this.async_rx.recv() => notification,
notification = this.sync_rx.recv() => notification,
}
};
futures::pin_mut!(future);
match future.poll_unpin(cx) {
Poll::Pending => None,
Poll::Ready(None) =>
return Poll::Ready(Some(ConnectionEvent::CloseConnection {
notify: NotifyProtocol::Yes,
})),
Poll::Ready(Some(notification)) => Some(notification),
}
}
};
let Some(notification) = notification else {
break;
};
match this.outbound.poll_ready_unpin(cx) {
Poll::Ready(Ok(())) => {}
Poll::Pending => {
this.next_notification = Some(notification);
break;
}
Poll::Ready(Err(_)) =>
return Poll::Ready(Some(ConnectionEvent::CloseConnection {
notify: NotifyProtocol::Yes,
})),
}
if let Err(_) = this.outbound.start_send_unpin(notification.into()) {
return Poll::Ready(Some(ConnectionEvent::CloseConnection {
notify: NotifyProtocol::Yes,
}));
}
}
match this.outbound.poll_flush_unpin(cx) {
Poll::Ready(Err(_)) =>
return Poll::Ready(Some(ConnectionEvent::CloseConnection {
notify: NotifyProtocol::Yes,
})),
Poll::Ready(Ok(())) | Poll::Pending => {}
}
if let Err(_) = futures::ready!(this.notif_tx.poll_reserve(cx)) {
return Poll::Ready(Some(ConnectionEvent::CloseConnection {
notify: NotifyProtocol::Yes,
}));
}
match futures::ready!(this.inbound.poll_next_unpin(cx)) {
None | Some(Err(_)) => Poll::Ready(Some(ConnectionEvent::CloseConnection {
notify: NotifyProtocol::Yes,
})),
Some(Ok(notification)) =>
Poll::Ready(Some(ConnectionEvent::NotificationReceived { notification })),
}
}
}
// File: src/protocol/notification/types.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
protocol::notification::handle::NotificationSink, types::protocol::ProtocolName, PeerId,
};
use bytes::BytesMut;
use tokio::sync::oneshot;
use std::collections::HashSet;
/// Default channel size for synchronous notifications.
pub(super) const SYNC_CHANNEL_SIZE: usize = 2048;
/// Default channel size for asynchronous notifications.
pub(super) const ASYNC_CHANNEL_SIZE: usize = 8;
/// Direction of the connection.
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub enum Direction {
/// Connection is considered inbound, i.e., it was initiated by the remote node.
Inbound,
/// Connection is considered outbound, i.e., it was initiated by the local node.
Outbound,
}
/// Validation result.
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub enum ValidationResult {
/// Accept the inbound substream.
Accept,
/// Reject the inbound substream.
Reject,
}
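The `ValidateSubstream` event defined below carries a `oneshot::Sender<ValidationResult>` that the user protocol replies on, while the notification protocol parks the substream until the verdict arrives. A rough std-only sketch of that request/verdict round trip, with a plain `mpsc` channel and a thread standing in for the tokio oneshot and the user task (the handshake check here is an invented placeholder):

```rust
use std::sync::mpsc;
use std::thread;

#[derive(Debug, PartialEq)]
enum ValidationResult {
    Accept,
    Reject,
}

fn main() {
    // Stand-in for the `oneshot::Sender<ValidationResult>` carried by the
    // validation event: the protocol hands the user a sender and waits.
    let (tx, rx) = mpsc::channel::<ValidationResult>();

    // "User protocol" inspects the handshake and replies with a verdict.
    let handshake = vec![0xaa];
    thread::spawn(move || {
        let verdict = if handshake == vec![0xaa] {
            ValidationResult::Accept
        } else {
            ValidationResult::Reject
        };
        tx.send(verdict).unwrap();
    });

    // "NotificationProtocol" side awaits the validation result before
    // deciding whether to open the substream.
    assert_eq!(rx.recv().unwrap(), ValidationResult::Accept);
}
```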
/// Notification error.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum NotificationError {
/// Remote rejected the substream.
Rejected,
/// Connection to peer doesn't exist.
NoConnection,
/// Synchronous notification channel is clogged.
ChannelClogged,
/// Validation for a previous substream still pending.
ValidationPending,
/// Failed to dial peer.
DialFailure,
/// Notification protocol has been closed.
EssentialTaskClosed,
}
/// Notification events.
pub(crate) enum InnerNotificationEvent {
/// Validate substream.
ValidateSubstream {
/// Protocol name.
protocol: ProtocolName,
/// Fallback, if the substream was negotiated using a fallback protocol.
fallback: Option<ProtocolName>,
/// Peer ID.
peer: PeerId,
/// Handshake.
handshake: Vec<u8>,
/// `oneshot::Sender` for sending the validation result back to the protocol.
tx: oneshot::Sender<ValidationResult>,
},
/// Notification stream opened.
NotificationStreamOpened {
/// Protocol name.
protocol: ProtocolName,
/// Fallback, if the substream was negotiated using a fallback protocol.
fallback: Option<ProtocolName>,
/// Direction of the substream.
direction: Direction,
/// Peer ID.
peer: PeerId,
/// Handshake.
handshake: Vec<u8>,
/// Notification sink.
sink: NotificationSink,
},
/// Notification stream closed.
NotificationStreamClosed {
/// Peer ID.
peer: PeerId,
},
/// Failed to open notification stream.
NotificationStreamOpenFailure {
/// Peer ID.
peer: PeerId,
/// Error.
error: NotificationError,
},
}
/// Notification events.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum NotificationEvent {
/// Validate substream.
ValidateSubstream {
/// Protocol name.
protocol: ProtocolName,
/// Fallback, if the substream was negotiated using a fallback protocol.
fallback: Option<ProtocolName>,
/// Peer ID.
peer: PeerId,
/// Handshake.
handshake: Vec<u8>,
},
/// Notification stream opened.
NotificationStreamOpened {
/// Protocol name.
protocol: ProtocolName,
/// Fallback, if the substream was negotiated using a fallback protocol.
fallback: Option<ProtocolName>,
/// Direction of the substream.
///
/// [`Direction::Inbound`](crate::protocol::Direction::Inbound) indicates that the
/// substream was opened by the remote peer and
/// [`Direction::Outbound`](crate::protocol::Direction::Outbound) that it was
/// opened by the local node.
direction: Direction,
/// Peer ID.
peer: PeerId,
/// Handshake.
handshake: Vec<u8>,
},
/// Notification stream closed.
NotificationStreamClosed {
/// Peer ID.
peer: PeerId,
},
/// Failed to open notification stream.
NotificationStreamOpenFailure {
/// Peer ID.
peer: PeerId,
/// Error.
error: NotificationError,
},
/// Notification received.
NotificationReceived {
/// Peer ID.
peer: PeerId,
/// Notification.
notification: BytesMut,
},
}
/// Notification commands sent to the protocol.
#[derive(Debug)]
#[cfg_attr(feature = "fuzz", derive(serde::Serialize, serde::Deserialize))]
pub enum NotificationCommand {
/// Open substreams to one or more peers.
OpenSubstream {
/// Peer IDs.
peers: HashSet<PeerId>,
},
/// Close substreams to one or more peers.
CloseSubstream {
/// Peer IDs.
peers: HashSet<PeerId>,
},
/// Force close the connection because notification channel is clogged.
ForceClose {
/// Peer to disconnect.
peer: PeerId,
},
#[cfg(feature = "fuzz")]
SendNotification { notif: Vec<u8>, peer_id: PeerId },
}
// File: src/protocol/notification/mod.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Notification protocol implementation.
use crate::{
error::{Error, SubstreamError},
executor::Executor,
protocol::{
self,
notification::{
connection::Connection,
handle::NotificationEventHandle,
negotiation::{HandshakeEvent, HandshakeService},
},
TransportEvent, TransportService,
},
substream::Substream,
types::{protocol::ProtocolName, SubstreamId},
PeerId, DEFAULT_CHANNEL_SIZE,
};
use bytes::BytesMut;
use futures::{future::BoxFuture, stream::FuturesUnordered, StreamExt};
use multiaddr::Multiaddr;
use tokio::sync::{
mpsc::{channel, Receiver, Sender},
oneshot,
};
use std::{collections::HashMap, sync::Arc, time::Duration};
pub use config::{Config, ConfigBuilder};
pub use handle::{NotificationHandle, NotificationSink};
pub use types::{
Direction, NotificationCommand, NotificationError, NotificationEvent, ValidationResult,
};
mod config;
mod connection;
mod handle;
mod negotiation;
mod types;
#[cfg(test)]
mod tests;
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::notification";
/// Connection state.
///
/// Used to track transport-level connectivity state when there is a pending validation.
/// See [`PeerState::ValidationPending`] for more details.
#[derive(Debug, PartialEq, Eq)]
enum ConnectionState {
/// There is an active, transport-level connection open to the peer.
Open,
/// There is no transport-level connection open to the peer.
Closed,
}
/// Inbound substream state.
#[derive(Debug)]
enum InboundState {
/// Substream is closed.
Closed,
/// Handshake is being read from the remote node.
ReadingHandshake,
/// Substream and its handshake are being validated by the user protocol.
Validating {
/// Inbound substream.
inbound: Substream,
},
/// Handshake is being sent to the remote node.
SendingHandshake,
/// Substream is open.
Open {
/// Inbound substream.
inbound: Substream,
},
}
/// Outbound substream state.
#[derive(Debug)]
enum OutboundState {
/// Substream is closed.
Closed,
/// Outbound substream initiated.
OutboundInitiated {
/// Substream ID.
substream: SubstreamId,
},
/// Substream is in the state of being negotiated.
///
/// This process entails sending the local node's handshake and reading back the remote node's
/// handshake if they've accepted the substream, or detecting that the substream was closed
/// in case it was rejected.
Negotiating,
/// Substream is open.
Open {
/// Received handshake.
handshake: Vec<u8>,
/// Outbound substream.
outbound: Substream,
},
}
impl OutboundState {
/// Get the pending outbound substream ID, if it exists.
fn pending_open(&self) -> Option<SubstreamId> {
match &self {
OutboundState::OutboundInitiated { substream } => Some(*substream),
_ => None,
}
}
}
#[derive(Debug)]
enum PeerState {
/// Peer state is poisoned due to invalid state transition.
Poisoned,
/// Validation for an inbound substream is still pending.
///
/// In order to enforce valid state transitions, `NotificationProtocol` keeps track of pending
/// validations across connectivity events (open/closed) and enforces that no activity happens
/// for any peer that is still awaiting validation for their inbound substream.
///
/// If the connection closes while the substream is being validated, instead of removing the
/// peer from `peers`, the peer state is set to `ValidationPending`, which indicates to the
/// state machine that a response for an inbound substream is still pending validation. The
/// substream itself will be dead by the time the validation is received, since it was part of
/// a previous, now-closed connection, but this state allows `NotificationProtocol` to enforce
/// correct state transitions by, e.g., rejecting a new inbound substream while a previous one
/// is still being validated, or rejecting outbound substreams on new connections while that
/// condition holds.
ValidationPending {
/// What is current connectivity state of the peer.
///
/// If `state` is `ConnectionState::Closed` when the validation is finally received, the peer
/// is removed from `peers`; if `state` is `ConnectionState::Open`, the peer is moved to
/// `PeerState::Closed` and the user is allowed to retry opening an outbound substream.
state: ConnectionState,
},
/// Connection to peer is closed.
Closed {
/// Connection might have been closed while there was an outbound substream still pending.
///
/// To handle this state transition correctly in case the substream opens after the
/// connection is considered closed, store the `SubstreamId` so that it can be verified in
/// case the substream ever opens.
pending_open: Option<SubstreamId>,
},
/// Peer is being dialed in order to open an outbound substream to them.
Dialing,
/// Outbound substream initiated.
OutboundInitiated {
/// Substream ID.
substream: SubstreamId,
},
/// Substream is being validated.
Validating {
/// Protocol.
protocol: ProtocolName,
/// Fallback protocol, if the substream was negotiated using a fallback name.
fallback: Option<ProtocolName>,
/// Outbound protocol state.
outbound: OutboundState,
/// Inbound protocol state.
inbound: InboundState,
/// Direction.
direction: Direction,
},
/// Notification stream has been opened.
Open {
/// `Oneshot::Sender` for shutting down the connection.
shutdown: oneshot::Sender<()>,
},
}
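`NotificationProtocol` drives this state machine with `std::mem::replace(&mut context.state, PeerState::Poisoned)`: the old state is moved out so its fields (substreams, channels) can be consumed by value, and `Poisoned` is what remains if a transition forgets to write a new state back. A toy, self-contained sketch of that take-and-replace idiom (the enum and transitions here are simplified placeholders, not the real ones):

```rust
use std::mem;

// Simplified stand-in for the peer state machine above.
#[derive(Debug, PartialEq)]
enum State {
    Poisoned,
    Closed,
    Dialing,
    Open,
}

// Swapping in `Poisoned` takes ownership of the current state; if a code
// path forgets to store a new state, the leftover `Poisoned` makes the
// bug explicit on the next transition.
fn on_connection_established(state: &mut State) -> Result<(), String> {
    match mem::replace(state, State::Poisoned) {
        State::Dialing => {
            *state = State::Open;
            Ok(())
        }
        State::Closed => {
            *state = State::Dialing;
            Ok(())
        }
        other => Err(format!("invalid transition from {:?}", other)),
    }
}

fn main() {
    let mut state = State::Dialing;
    assert!(on_connection_established(&mut state).is_ok());
    assert_eq!(state, State::Open);

    let mut bad = State::Open;
    assert!(on_connection_established(&mut bad).is_err());
    // The failed transition leaves the state poisoned.
    assert_eq!(bad, State::Poisoned);
}
```

This is why an unexpected `PeerState::Poisoned` in the real code signals a bug in the transition logic rather than a normal runtime condition.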
/// Peer context.
#[derive(Debug)]
struct PeerContext {
/// Peer state.
state: PeerState,
}
impl PeerContext {
/// Create new [`PeerContext`].
fn new() -> Self {
Self {
state: PeerState::Closed { pending_open: None },
}
}
}
pub(crate) struct NotificationProtocol {
/// Transport service.
service: TransportService,
/// Protocol.
protocol: ProtocolName,
/// Auto accept inbound substream if the outbound substream was initiated by the local node.
auto_accept: bool,
/// TX channel passed to the protocol used for sending events.
event_handle: NotificationEventHandle,
/// TX channel for sending shut down notifications from connection handlers to
/// [`NotificationProtocol`].
shutdown_tx: Sender<PeerId>,
/// RX channel for receiving shutdown notifications from the connection handlers.
shutdown_rx: Receiver<PeerId>,
/// RX channel passed to the protocol used for receiving commands.
command_rx: Receiver<NotificationCommand>,
/// TX channel given to connection handlers for sending notifications.
notif_tx: Sender<(PeerId, BytesMut)>,
/// Connected peers.
peers: HashMap<PeerId, PeerContext>,
/// Pending outbound substreams.
pending_outbound: HashMap<SubstreamId, PeerId>,
/// Handshaking service which reads and writes the handshakes to inbound
/// and outbound substreams asynchronously.
negotiation: HandshakeService,
/// Synchronous channel size.
sync_channel_size: usize,
/// Asynchronous channel size.
async_channel_size: usize,
/// Executor for connection handlers.
executor: Arc<dyn Executor>,
/// Pending substream validations.
pending_validations: FuturesUnordered<BoxFuture<'static, (PeerId, ValidationResult)>>,
/// Timers for pending outbound substreams.
timers: FuturesUnordered<BoxFuture<'static, PeerId>>,
/// Should `NotificationProtocol` attempt to dial the peer.
should_dial: bool,
}
impl NotificationProtocol {
pub(crate) fn new(
service: TransportService,
config: Config,
executor: Arc<dyn Executor>,
) -> Self {
let (shutdown_tx, shutdown_rx) = channel(DEFAULT_CHANNEL_SIZE);
Self {
service,
shutdown_tx,
shutdown_rx,
executor,
peers: HashMap::new(),
protocol: config.protocol_name,
auto_accept: config.auto_accept,
pending_validations: FuturesUnordered::new(),
timers: FuturesUnordered::new(),
event_handle: NotificationEventHandle::new(config.event_tx),
notif_tx: config.notif_tx,
command_rx: config.command_rx,
pending_outbound: HashMap::new(),
negotiation: HandshakeService::new(config.handshake),
sync_channel_size: config.sync_channel_size,
async_channel_size: config.async_channel_size,
should_dial: config.should_dial,
}
}
/// Connection established to remote node.
///
/// If the peer already exists, the only valid states for it are `Dialing`, indicating that the
/// user tried to open a substream to a peer who was not connected to the local node, and
/// `ValidationPending`, indicating that an inbound substream from a previous connection is
/// still being validated.
///
/// Any other state indicates that there's an error in the state transition logic.
async fn on_connection_established(&mut self, peer: PeerId) -> crate::Result<()> {
tracing::trace!(target: LOG_TARGET, ?peer, protocol = %self.protocol, "connection established");
let Some(context) = self.peers.get_mut(&peer) else {
self.peers.insert(peer, PeerContext::new());
return Ok(());
};
match std::mem::replace(&mut context.state, PeerState::Poisoned) {
PeerState::Dialing => {
tracing::trace!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
"dial succeeded, open substream to peer",
);
context.state = PeerState::Closed { pending_open: None };
self.on_open_substream(peer).await
}
// connection established but validation is still pending
//
// update the connection state so that `NotificationProtocol` can proceed
// to the correct state after the validation result has been received
PeerState::ValidationPending { state } => {
debug_assert_eq!(state, ConnectionState::Closed);
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
"new connection established while validation still pending",
);
context.state = PeerState::ValidationPending {
state: ConnectionState::Open,
};
Ok(())
}
state => {
tracing::error!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?state,
"state mismatch: peer already exists",
);
debug_assert!(false);
Err(Error::PeerAlreadyExists(peer))
}
}
}
/// Connection closed to remote node.
///
/// If the connection was considered open (both substreams were open), user is notified that
/// the notification stream was closed.
///
/// If the connection was still in progress (either substream was not fully open), the user is
/// informed of it only if they had opened an outbound substream (the outbound substream is
/// either fully open, had been initiated, or was under negotiation).
async fn on_connection_closed(&mut self, peer: PeerId) -> crate::Result<()> {
tracing::trace!(target: LOG_TARGET, ?peer, protocol = %self.protocol, "connection closed");
self.pending_outbound.retain(|_, p| p != &peer);
let Some(context) = self.peers.remove(&peer) else {
tracing::error!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
"state mismatch: peer doesn't exist",
);
debug_assert!(false);
return Err(Error::PeerDoesntExist(peer));
};
// clean up all pending state for the peer
self.negotiation.remove_outbound(&peer);
self.negotiation.remove_inbound(&peer);
match context.state {
// outbound initiated, report open failure to peer
PeerState::OutboundInitiated { .. } => {
self.event_handle
.report_notification_stream_open_failure(peer, NotificationError::Rejected)
.await;
}
// substream fully open, report that the notification stream is closed
PeerState::Open { shutdown } => {
let _ = shutdown.send(());
}
// if the substream was being validated, user must be notified that the substream is
// now considered rejected if they had been made aware of the existence of the pending
// connection
PeerState::Validating {
outbound, inbound, ..
} => {
match (outbound, inbound) {
// substream was being validated by the protocol when the connection was closed
(OutboundState::Closed, InboundState::Validating { .. }) => {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
"connection closed while validation pending",
);
self.peers.insert(
peer,
PeerContext {
state: PeerState::ValidationPending {
state: ConnectionState::Closed,
},
},
);
}
// user either initiated an outbound substream or an outbound substream was
// opened/being opened as a result of an accepted inbound substream but was not
// yet fully open
//
// to have consistent state tracking in the user protocol, substream rejection
// must be reported to the user
(
OutboundState::OutboundInitiated { .. }
| OutboundState::Negotiating
| OutboundState::Open { .. },
_,
) => {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
"connection closed outbound substream under negotiation",
);
self.event_handle
.report_notification_stream_open_failure(
peer,
NotificationError::Rejected,
)
.await;
}
(_, _) => {}
}
}
// pending validations must be tracked across connection open/close events
PeerState::ValidationPending { .. } => {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
"validation pending while connection closed",
);
self.peers.insert(
peer,
PeerContext {
state: PeerState::ValidationPending {
state: ConnectionState::Closed,
},
},
);
}
_ => {}
}
Ok(())
}
/// Local node opened a substream to remote node.
///
/// The connection can be in three different states:
/// - this is the first substream that was opened and thus the connection was initiated by the
/// local node
/// - this is a response to a previously received inbound substream which the local node
/// accepted and as a result, opened its own substream
/// - local and remote nodes opened substreams at the same time
///
/// In the first case, the local node's handshake is sent to remote node and the substream is
/// polled in the background until they either send their handshake or close the substream.
///
/// For the second case, the connection was initiated by the remote node and the substream was
/// accepted by the local node which initiated an outbound substream to the remote node.
/// The only valid states for this case are [`InboundState::Open`]
/// and [`InboundState::SendingHandshake`], as they imply
/// that the inbound substream has been accepted by the local node and this opened outbound
/// substream is the result of a valid state transition.
///
/// For the third case, if the nodes have opened substreams at the same time, the outbound state
/// must be [`OutboundState::OutboundInitiated`] to ascertain that an outbound substream was
/// actually opened. Any other state would be a state mismatch and would mean that the
/// connection is opening substreams without the permission of the protocol handler.
async fn on_outbound_substream(
&mut self,
protocol: ProtocolName,
fallback: Option<ProtocolName>,
peer: PeerId,
substream_id: SubstreamId,
outbound: Substream,
) -> crate::Result<()> {
tracing::debug!(
target: LOG_TARGET,
?peer,
?protocol,
?substream_id,
"handle outbound substream",
);
// peer must exist since an outbound substream was received from them
let Some(context) = self.peers.get_mut(&peer) else {
tracing::error!(target: LOG_TARGET, ?peer, "peer doesn't exist for outbound substream");
debug_assert!(false);
return Err(Error::PeerDoesntExist(peer));
};
let pending_peer = self.pending_outbound.remove(&substream_id);
match std::mem::replace(&mut context.state, PeerState::Poisoned) {
// the connection was initiated by the local node, send handshake to remote and wait to
// receive their handshake back
PeerState::OutboundInitiated { substream } => {
debug_assert!(substream == substream_id);
debug_assert!(pending_peer == Some(peer));
tracing::trace!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
?fallback,
?substream_id,
"negotiate outbound protocol",
);
self.negotiation.negotiate_outbound(peer, outbound);
context.state = PeerState::Validating {
protocol,
fallback,
inbound: InboundState::Closed,
outbound: OutboundState::Negotiating,
direction: Direction::Outbound,
};
}
PeerState::Validating {
protocol,
fallback,
inbound,
direction,
outbound: outbound_state,
} => {
// the inbound substream has been accepted by the local node since the handshake has
// been read and the local handshake has either already been sent or
// it's in the process of being sent.
match inbound {
InboundState::SendingHandshake | InboundState::Open { .. } => {
context.state = PeerState::Validating {
protocol,
fallback,
inbound,
direction,
outbound: OutboundState::Negotiating,
};
self.negotiation.negotiate_outbound(peer, outbound);
}
// nodes have opened substreams at the same time
inbound_state => match outbound_state {
OutboundState::OutboundInitiated { substream } => {
debug_assert!(substream == substream_id);
context.state = PeerState::Validating {
protocol,
fallback,
direction,
inbound: inbound_state,
outbound: OutboundState::Negotiating,
};
self.negotiation.negotiate_outbound(peer, outbound);
}
// invalid state: more than one outbound substream has been opened
inner_state => {
tracing::error!(
target: LOG_TARGET,
?peer,
%protocol,
?substream_id,
?inbound_state,
?inner_state,
"invalid state, expected `OutboundInitiated`",
);
let _ = outbound.close().await;
debug_assert!(false);
}
},
}
}
// the connection may have been closed while an outbound substream was pending
// if the outbound substream was initiated successfully, close it and reset
// `pending_open`
PeerState::Closed { pending_open } if pending_open == Some(substream_id) => {
let _ = outbound.close().await;
context.state = PeerState::Closed { pending_open: None };
}
state => {
tracing::error!(
target: LOG_TARGET,
?peer,
%protocol,
?substream_id,
?state,
"invalid state: more than one outbound substream opened",
);
let _ = outbound.close().await;
debug_assert!(false);
}
}
Ok(())
}
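The handlers above take ownership of the peer state with `std::mem::replace`, leaving a `Poisoned` placeholder behind so the old state can be matched by value and a fresh state written back. A minimal sketch of that pattern (the `State`/`Context` names here are illustrative, not the crate's types):

```rust
use std::mem;

#[derive(Debug, PartialEq)]
enum State {
    Closed,
    Negotiating,
    // Placeholder held only while a transition is in progress.
    Poisoned,
}

struct Context {
    state: State,
}

fn on_event(ctx: &mut Context) {
    // Take the state by value; `Poisoned` marks the in-between moment.
    match mem::replace(&mut ctx.state, State::Poisoned) {
        State::Closed => ctx.state = State::Negotiating,
        // Unexpected transition: restore the state unchanged.
        other => ctx.state = other,
    }
}

fn main() {
    let mut ctx = Context { state: State::Closed };
    on_event(&mut ctx);
    assert_eq!(ctx.state, State::Negotiating);
}
```

The placeholder makes any code path that forgets to write a state back immediately visible: the peer would be left in `Poisoned`, which the catch-all match arms treat as an invalid state.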
/// Remote opened a substream to local node.
///
/// The peer can be in four different states for the inbound substream to be considered valid:
/// - the connection is closed
/// - connection is open but substream validation from a previous connection is still pending
/// - outbound substream has been opened but not yet acknowledged by the remote peer
/// - outbound substream has been opened and acknowledged by the remote peer and it's being
/// negotiated
///
/// If remote opened more than one substream, the new substream is simply discarded.
async fn on_inbound_substream(
&mut self,
protocol: ProtocolName,
fallback: Option<ProtocolName>,
peer: PeerId,
substream: Substream,
) -> crate::Result<()> {
// peer must exist since an inbound substream was received from them
let Some(context) = self.peers.get_mut(&peer) else {
tracing::error!(target: LOG_TARGET, ?peer, "peer doesn't exist for inbound substream");
debug_assert!(false);
return Err(Error::PeerDoesntExist(peer));
};
tracing::debug!(
target: LOG_TARGET,
?peer,
%protocol,
?fallback,
state = ?context.state,
"handle inbound substream",
);
match std::mem::replace(&mut context.state, PeerState::Poisoned) {
// inbound substream of a previous connection is still pending validation,
// reject any new inbound substreams until an answer is heard from the user
state @ PeerState::ValidationPending { .. } => {
tracing::debug!(
target: LOG_TARGET,
?peer,
%protocol,
?fallback,
?state,
"validation for previous substream still pending",
);
let _ = substream.close().await;
context.state = state;
}
// outbound substream for previous connection still pending, reject inbound substream
// and wait for the outbound substream state to conclude as either succeeded or failed
// before accepting any inbound substreams.
PeerState::Closed {
pending_open: Some(substream_id),
} => {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
"received inbound substream while outbound substream opening, rejecting",
);
let _ = substream.close().await;
context.state = PeerState::Closed {
pending_open: Some(substream_id),
};
}
// the peer state is closed so this is a fresh inbound substream.
PeerState::Closed { pending_open: None } => {
self.negotiation.read_handshake(peer, substream);
context.state = PeerState::Validating {
protocol,
fallback,
direction: Direction::Inbound,
inbound: InboundState::ReadingHandshake,
outbound: OutboundState::Closed,
};
}
// if the connection is under validation (i.e., an outbound substream has been opened and
// it's still pending or under negotiation), the only valid state for the inbound substream
// is closed, as it indicates that there isn't an inbound substream from the remote node
// yet; duplicate substreams are prohibited.
PeerState::Validating {
protocol,
fallback,
outbound,
direction,
inbound: InboundState::Closed,
} => {
self.negotiation.read_handshake(peer, substream);
context.state = PeerState::Validating {
protocol,
fallback,
outbound,
direction,
inbound: InboundState::ReadingHandshake,
};
}
// outbound substream may have been initiated by the local node while a remote node also
// opened a substream roughly at the same time
PeerState::OutboundInitiated {
substream: outbound,
} => {
self.negotiation.read_handshake(peer, substream);
context.state = PeerState::Validating {
protocol,
fallback,
direction: Direction::Outbound,
outbound: OutboundState::OutboundInitiated {
substream: outbound,
},
inbound: InboundState::ReadingHandshake,
};
}
// new inbound substream opened while validation for the previous substream was still
// pending
//
// the old substream can be considered dead because remote wouldn't open a new substream
// to us unless they had discarded the previous substream.
//
// set peer state to `ValidationPending` to indicate that the peer is "blocked" until a
// validation verdict for the substream is heard, blocking any further activity for
// the connection; once the validation is received, even if the substream is accepted,
// it will be reported as an open failure to the peer because the states have gone
// out of sync.
PeerState::Validating {
outbound: OutboundState::Closed,
inbound:
InboundState::Validating {
inbound: pending_substream,
},
..
} => {
tracing::debug!(
target: LOG_TARGET,
?peer,
protocol = %self.protocol,
"remote opened substream while previous was still pending, connection failed",
);
let _ = substream.close().await;
let _ = pending_substream.close().await;
context.state = PeerState::ValidationPending {
state: ConnectionState::Open,
};
}
// remote opened another inbound substream, close it and otherwise ignore the event
// as this is a non-serious protocol violation.
state => {
tracing::debug!(
target: LOG_TARGET,
?peer,
%protocol,
?fallback,
?state,
"remote opened more than one inbound substream, discarding",
);
let _ = substream.close().await;
context.state = state;
}
}
Ok(())
}
/// Failed to open substream to remote node.
///
/// If the substream was initiated by the local node, it must be reported that the substream
/// failed to open. Otherwise the peer state can silently be converted to `Closed`.
async fn on_substream_open_failure(
&mut self,
substream_id: SubstreamId,
error: SubstreamError,
) {
tracing::debug!(
target: LOG_TARGET,
protocol = %self.protocol,
?substream_id,
?error,
"failed to open substream"
);
let Some(peer) = self.pending_outbound.remove(&substream_id) else {
tracing::warn!(
target: LOG_TARGET,
protocol = %self.protocol,
?substream_id,
"pending outbound substream doesn't exist",
);
debug_assert!(false);
return;
};
// peer must exist since an outbound substream failure was received from them
let Some(context) = self.peers.get_mut(&peer) else {
tracing::warn!(target: LOG_TARGET, ?peer, "peer doesn't exist");
debug_assert!(false);
return;
};
match &mut context.state {
PeerState::OutboundInitiated { .. } => {
context.state = PeerState::Closed { pending_open: None };
self.event_handle
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | true |
// src/protocol/notification/negotiation.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Implementation of the notification handshaking.
use crate::{substream::Substream, PeerId};
use futures::{FutureExt, Sink, Stream};
use futures_timer::Delay;
use parking_lot::RwLock;
use std::{
collections::{HashMap, VecDeque},
pin::Pin,
sync::Arc,
task::{Context, Poll},
time::Duration,
};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::notification::negotiation";
/// Maximum time to wait for a handshake before the operation is considered failed.
const NEGOTIATION_TIMEOUT: Duration = Duration::from_secs(10);
/// Substream direction.
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
pub enum Direction {
/// Outbound substream, opened by local node.
Outbound,
/// Inbound substream, opened by remote node.
Inbound,
}
/// Events emitted by [`HandshakeService`].
#[derive(Debug)]
pub enum HandshakeEvent {
/// Substream has been negotiated.
Negotiated {
/// Peer ID.
peer: PeerId,
/// Handshake.
handshake: Vec<u8>,
/// Substream.
substream: Substream,
/// Direction.
direction: Direction,
},
/// Substream negotiation failed.
NegotiationError {
/// Peer ID.
peer: PeerId,
/// Direction.
direction: Direction,
},
}
/// Outbound substream's handshake state
enum HandshakeState {
/// Send handshake to remote peer.
SendHandshake,
/// Sink is ready for the handshake to be sent.
SinkReady,
/// Handshake has been sent.
HandshakeSent,
/// Read handshake from remote peer.
ReadHandshake,
}
/// Handshake service.
pub(crate) struct HandshakeService {
/// Handshake.
handshake: Arc<RwLock<Vec<u8>>>,
/// Substreams under negotiation, keyed by peer and direction.
substreams: HashMap<(PeerId, Direction), (Substream, Delay, HandshakeState)>,
/// Ready substreams.
ready: VecDeque<(PeerId, Direction, Vec<u8>)>,
}
impl HandshakeService {
/// Create new [`HandshakeService`].
pub fn new(handshake: Arc<RwLock<Vec<u8>>>) -> Self {
Self {
handshake,
ready: VecDeque::new(),
substreams: HashMap::new(),
}
}
/// Remove outbound substream from [`HandshakeService`].
pub fn remove_outbound(&mut self, peer: &PeerId) -> Option<Substream> {
self.substreams
.remove(&(*peer, Direction::Outbound))
.map(|(substream, _, _)| substream)
}
/// Remove inbound substream from [`HandshakeService`].
pub fn remove_inbound(&mut self, peer: &PeerId) -> Option<Substream> {
self.substreams
.remove(&(*peer, Direction::Inbound))
.map(|(substream, _, _)| substream)
}
/// Negotiate outbound handshake.
pub fn negotiate_outbound(&mut self, peer: PeerId, substream: Substream) {
tracing::trace!(target: LOG_TARGET, ?peer, "negotiate outbound");
self.substreams.insert(
(peer, Direction::Outbound),
(
substream,
Delay::new(NEGOTIATION_TIMEOUT),
HandshakeState::SendHandshake,
),
);
}
/// Read handshake from remote peer.
pub fn read_handshake(&mut self, peer: PeerId, substream: Substream) {
tracing::trace!(target: LOG_TARGET, ?peer, "read handshake");
self.substreams.insert(
(peer, Direction::Inbound),
(
substream,
Delay::new(NEGOTIATION_TIMEOUT),
HandshakeState::ReadHandshake,
),
);
}
/// Write handshake to remote peer.
pub fn send_handshake(&mut self, peer: PeerId, substream: Substream) {
tracing::trace!(target: LOG_TARGET, ?peer, "send handshake");
self.substreams.insert(
(peer, Direction::Inbound),
(
substream,
Delay::new(NEGOTIATION_TIMEOUT),
HandshakeState::SendHandshake,
),
);
}
/// Returns `true` if [`HandshakeService`] contains no elements.
pub fn is_empty(&self) -> bool {
self.substreams.is_empty()
}
/// Pop event from the event queue.
///
/// The substream may not exist in the queue anymore as it may have been removed
/// by `NotificationProtocol` if one of the substreams failed to negotiate.
fn pop_event(&mut self) -> Option<(PeerId, HandshakeEvent)> {
while let Some((peer, direction, handshake)) = self.ready.pop_front() {
if let Some((substream, _, _)) = self.substreams.remove(&(peer, direction)) {
return Some((
peer,
HandshakeEvent::Negotiated {
peer,
handshake,
substream,
direction,
},
));
}
}
None
}
}
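The happy path through the poll loop above can be summarized as a small transition function: both directions send the handshake, but only the outbound side then waits to read the remote handshake, while the inbound side (which already read it) is done after flushing. This is a sketch with an illustrative `Done` terminal state; the real code models completion by pushing onto the `ready` queue instead:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Direction {
    Outbound,
    Inbound,
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum HandshakeState {
    SendHandshake,
    SinkReady,
    HandshakeSent,
    ReadHandshake,
    Done,
}

// Advance one step, mirroring the poll loop's happy path.
fn step(state: HandshakeState, direction: Direction) -> HandshakeState {
    match state {
        HandshakeState::SendHandshake => HandshakeState::SinkReady,
        HandshakeState::SinkReady => HandshakeState::HandshakeSent,
        HandshakeState::HandshakeSent => match direction {
            // Outbound: still need to read the remote peer's handshake.
            Direction::Outbound => HandshakeState::ReadHandshake,
            // Inbound: handshake was already read before sending ours.
            Direction::Inbound => HandshakeState::Done,
        },
        HandshakeState::ReadHandshake | HandshakeState::Done => HandshakeState::Done,
    }
}

fn main() {
    let mut outbound = HandshakeState::SendHandshake;
    for _ in 0..4 {
        outbound = step(outbound, Direction::Outbound);
    }
    assert_eq!(outbound, HandshakeState::Done);

    let mut inbound = HandshakeState::SendHandshake;
    for _ in 0..3 {
        inbound = step(inbound, Direction::Inbound);
    }
    assert_eq!(inbound, HandshakeState::Done);
}
```

Any error or the `NEGOTIATION_TIMEOUT` firing short-circuits this progression and yields `HandshakeEvent::NegotiationError` instead.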
impl Stream for HandshakeService {
type Item = (PeerId, HandshakeEvent);
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let inner = Pin::into_inner(self);
if let Some(event) = inner.pop_event() {
return Poll::Ready(Some(event));
}
if inner.substreams.is_empty() {
return Poll::Pending;
}
'outer: for ((peer, direction), (ref mut substream, ref mut timer, state)) in
inner.substreams.iter_mut()
{
if let Poll::Ready(()) = timer.poll_unpin(cx) {
return Poll::Ready(Some((
*peer,
HandshakeEvent::NegotiationError {
peer: *peer,
direction: *direction,
},
)));
}
loop {
let pinned = Pin::new(&mut *substream);
match state {
HandshakeState::SendHandshake => match pinned.poll_ready(cx) {
Poll::Ready(Ok(())) => {
*state = HandshakeState::SinkReady;
continue;
}
Poll::Ready(Err(_)) =>
return Poll::Ready(Some((
*peer,
HandshakeEvent::NegotiationError {
peer: *peer,
direction: *direction,
},
))),
Poll::Pending => continue 'outer,
},
HandshakeState::SinkReady => {
match pinned.start_send((*inner.handshake.read()).clone().into()) {
Ok(()) => {
*state = HandshakeState::HandshakeSent;
continue;
}
Err(_) =>
return Poll::Ready(Some((
*peer,
HandshakeEvent::NegotiationError {
peer: *peer,
direction: *direction,
},
))),
}
}
HandshakeState::HandshakeSent => match pinned.poll_flush(cx) {
Poll::Ready(Ok(())) => match direction {
Direction::Outbound => {
*state = HandshakeState::ReadHandshake;
continue;
}
Direction::Inbound => {
inner.ready.push_back((*peer, *direction, vec![]));
continue 'outer;
}
},
Poll::Ready(Err(_)) =>
return Poll::Ready(Some((
*peer,
HandshakeEvent::NegotiationError {
peer: *peer,
direction: *direction,
},
))),
Poll::Pending => continue 'outer,
},
HandshakeState::ReadHandshake => match pinned.poll_next(cx) {
Poll::Ready(Some(Ok(handshake))) => {
inner.ready.push_back((*peer, *direction, handshake.freeze().into()));
continue 'outer;
}
Poll::Ready(Some(Err(_))) | Poll::Ready(None) => {
return Poll::Ready(Some((
*peer,
HandshakeEvent::NegotiationError {
peer: *peer,
direction: *direction,
},
)));
}
Poll::Pending => continue 'outer,
},
}
}
}
if let Some((peer, direction, handshake)) = inner.ready.pop_front() {
let (substream, _, _) =
inner.substreams.remove(&(peer, direction)).expect("peer to exist");
return Poll::Ready(Some((
peer,
HandshakeEvent::Negotiated {
peer,
handshake,
substream,
direction,
},
)));
}
Poll::Pending
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{
mock::substream::{DummySubstream, MockSubstream},
types::SubstreamId,
};
use futures::StreamExt;
#[tokio::test]
async fn substream_error_when_sending_handshake() {
let mut service = HandshakeService::new(Arc::new(RwLock::new(vec![1, 2, 3, 4])));
futures::future::poll_fn(|cx| match service.poll_next_unpin(cx) {
Poll::Pending => Poll::Ready(()),
_ => panic!("invalid event received"),
})
.await;
let mut substream = MockSubstream::new();
substream.expect_poll_ready().times(1).return_once(|_| Poll::Ready(Ok(())));
substream
.expect_start_send()
.times(1)
.return_once(|_| Err(crate::error::SubstreamError::ConnectionClosed));
let peer = PeerId::random();
let substream = Substream::new_mock(peer, SubstreamId::from(0usize), Box::new(substream));
service.send_handshake(peer, substream);
match service.next().await {
Some((
failed_peer,
HandshakeEvent::NegotiationError {
peer: event_peer,
direction,
},
)) => {
assert_eq!(failed_peer, peer);
assert_eq!(event_peer, peer);
assert_eq!(direction, Direction::Inbound);
}
_ => panic!("invalid event received"),
}
}
#[tokio::test]
async fn substream_error_when_flushing_substream() {
let mut service = HandshakeService::new(Arc::new(RwLock::new(vec![1, 2, 3, 4])));
futures::future::poll_fn(|cx| match service.poll_next_unpin(cx) {
Poll::Pending => Poll::Ready(()),
_ => panic!("invalid event received"),
})
.await;
let mut substream = MockSubstream::new();
substream.expect_poll_ready().times(1).return_once(|_| Poll::Ready(Ok(())));
substream.expect_start_send().times(1).return_once(|_| Ok(()));
substream
.expect_poll_flush()
.times(1)
.return_once(|_| Poll::Ready(Err(crate::error::SubstreamError::ConnectionClosed)));
let peer = PeerId::random();
let substream = Substream::new_mock(peer, SubstreamId::from(0usize), Box::new(substream));
service.send_handshake(peer, substream);
match service.next().await {
Some((
failed_peer,
HandshakeEvent::NegotiationError {
peer: event_peer,
direction,
},
)) => {
assert_eq!(failed_peer, peer);
assert_eq!(event_peer, peer);
assert_eq!(direction, Direction::Inbound);
}
_ => panic!("invalid event received"),
}
}
// inbound substream is negotiated and pushed into the ready queue but the outbound
// substream fails to negotiate
#[tokio::test]
async fn pop_event_but_substream_doesnt_exist() {
let mut service = HandshakeService::new(Arc::new(RwLock::new(vec![1, 2, 3, 4])));
let peer = PeerId::random();
// inbound substream has finished
service.ready.push_front((peer, Direction::Inbound, vec![]));
service.substreams.insert(
(peer, Direction::Inbound),
(
Substream::new_mock(
peer,
SubstreamId::from(1337usize),
Box::new(DummySubstream::new()),
),
Delay::new(NEGOTIATION_TIMEOUT),
HandshakeState::HandshakeSent,
),
);
service.substreams.insert(
(peer, Direction::Outbound),
(
Substream::new_mock(
peer,
SubstreamId::from(1337usize),
Box::new(DummySubstream::new()),
),
Delay::new(NEGOTIATION_TIMEOUT),
HandshakeState::SendHandshake,
),
);
// outbound substream failed and `NotificationProtocol` removes
// both substreams from `HandshakeService`
assert!(service.remove_outbound(&peer).is_some());
assert!(service.remove_inbound(&peer).is_some());
futures::future::poll_fn(|cx| match service.poll_next_unpin(cx) {
Poll::Pending => Poll::Ready(()),
_ => panic!("invalid event received"),
})
.await
}
}
// src/protocol/notification/handle.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
error::Error,
protocol::notification::types::{
Direction, InnerNotificationEvent, NotificationCommand, NotificationError,
NotificationEvent, ValidationResult,
},
types::protocol::ProtocolName,
PeerId,
};
use bytes::BytesMut;
use futures::Stream;
use parking_lot::RwLock;
use tokio::sync::{
mpsc::{error::TrySendError, Receiver, Sender},
oneshot,
};
use std::{
collections::{HashMap, HashSet},
pin::Pin,
sync::Arc,
task::{Context, Poll},
};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::notification::handle";
#[derive(Debug, Clone)]
pub(crate) struct NotificationEventHandle {
tx: Sender<InnerNotificationEvent>,
}
impl NotificationEventHandle {
/// Create new [`NotificationEventHandle`].
pub(crate) fn new(tx: Sender<InnerNotificationEvent>) -> Self {
Self { tx }
}
/// Validate inbound substream.
pub(crate) async fn report_inbound_substream(
&self,
protocol: ProtocolName,
fallback: Option<ProtocolName>,
peer: PeerId,
handshake: Vec<u8>,
tx: oneshot::Sender<ValidationResult>,
) {
let _ = self
.tx
.send(InnerNotificationEvent::ValidateSubstream {
protocol,
fallback,
peer,
handshake,
tx,
})
.await;
}
/// Notification stream opened.
pub(crate) async fn report_notification_stream_opened(
&self,
protocol: ProtocolName,
fallback: Option<ProtocolName>,
direction: Direction,
peer: PeerId,
handshake: Vec<u8>,
sink: NotificationSink,
) {
let _ = self
.tx
.send(InnerNotificationEvent::NotificationStreamOpened {
protocol,
fallback,
direction,
peer,
handshake,
sink,
})
.await;
}
/// Notification stream closed.
pub(crate) async fn report_notification_stream_closed(&self, peer: PeerId) {
let _ = self.tx.send(InnerNotificationEvent::NotificationStreamClosed { peer }).await;
}
/// Failed to open notification stream.
pub(crate) async fn report_notification_stream_open_failure(
&self,
peer: PeerId,
error: NotificationError,
) {
let _ = self
.tx
.send(InnerNotificationEvent::NotificationStreamOpenFailure { peer, error })
.await;
}
}
/// Notification sink.
///
/// Allows the user to send notifications both synchronously and asynchronously.
#[derive(Debug, Clone)]
pub struct NotificationSink {
/// Peer ID.
peer: PeerId,
/// TX channel for sending notifications synchronously.
sync_tx: Sender<Vec<u8>>,
/// TX channel for sending notifications asynchronously.
async_tx: Sender<Vec<u8>>,
}
impl NotificationSink {
/// Create new [`NotificationSink`].
pub(crate) fn new(peer: PeerId, sync_tx: Sender<Vec<u8>>, async_tx: Sender<Vec<u8>>) -> Self {
Self {
peer,
async_tx,
sync_tx,
}
}
/// Send notification to `peer` synchronously.
///
/// If the channel is clogged, [`NotificationError::ChannelClogged`] is returned.
pub fn send_sync_notification(&self, notification: Vec<u8>) -> Result<(), NotificationError> {
self.sync_tx.try_send(notification).map_err(|error| match error {
TrySendError::Closed(_) => NotificationError::NoConnection,
TrySendError::Full(_) => NotificationError::ChannelClogged,
})
}
/// Send notification to `peer` asynchronously, waiting for the channel to have capacity
/// if it's clogged.
///
/// Returns [`Error::PeerDoesntExist(PeerId)`](crate::error::Error::PeerDoesntExist)
/// if the connection has been closed.
pub async fn send_async_notification(&self, notification: Vec<u8>) -> crate::Result<()> {
self.async_tx
.send(notification)
.await
.map_err(|_| Error::PeerDoesntExist(self.peer))
}
}
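The sync path's semantics hinge on mapping `try_send` failures onto `NotificationError` variants: a full channel means the peer is clogged, a closed channel means there is no connection. The same mapping can be demonstrated with a std bounded channel standing in for the tokio sender (a sketch: the crate uses `tokio::sync::mpsc`, and the error enum is redefined locally):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

#[derive(Debug, PartialEq)]
enum NotificationError {
    NoConnection,
    ChannelClogged,
}

fn main() {
    // Bounded channel of capacity 1 stands in for the sink's sync channel.
    let (tx, rx) = sync_channel::<Vec<u8>>(1);

    // First try_send succeeds; the second finds the channel full.
    assert!(tx.try_send(vec![1]).is_ok());
    let err = tx.try_send(vec![2]).map_err(|error| match error {
        TrySendError::Full(_) => NotificationError::ChannelClogged,
        TrySendError::Disconnected(_) => NotificationError::NoConnection,
    });
    assert_eq!(err, Err(NotificationError::ChannelClogged));

    // Draining the receiver frees capacity, so sending works again.
    assert_eq!(rx.recv().unwrap(), vec![1]);
    assert!(tx.try_send(vec![2]).is_ok());
}
```

The async path avoids the clogged case entirely by awaiting capacity, which is why it only surfaces the closed-connection error.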
/// Handle allowing the user protocol to interact with the notification protocol.
#[derive(Debug)]
pub struct NotificationHandle {
/// RX channel for receiving events from the notification protocol.
event_rx: Receiver<InnerNotificationEvent>,
/// RX channel for receiving notifications from connection handlers.
notif_rx: Receiver<(PeerId, BytesMut)>,
/// TX channel for sending commands to the notification protocol.
command_tx: Sender<NotificationCommand>,
/// Peers.
peers: HashMap<PeerId, NotificationSink>,
/// Clogged peers.
clogged: HashSet<PeerId>,
/// Pending validations.
pending_validations: HashMap<PeerId, oneshot::Sender<ValidationResult>>,
/// Handshake.
handshake: Arc<RwLock<Vec<u8>>>,
}
impl NotificationHandle {
/// Create new [`NotificationHandle`].
pub(crate) fn new(
event_rx: Receiver<InnerNotificationEvent>,
notif_rx: Receiver<(PeerId, BytesMut)>,
command_tx: Sender<NotificationCommand>,
handshake: Arc<RwLock<Vec<u8>>>,
) -> Self {
Self {
event_rx,
notif_rx,
command_tx,
handshake,
peers: HashMap::new(),
clogged: HashSet::new(),
pending_validations: HashMap::new(),
}
}
/// Open substream to `peer`.
///
/// Returns [`Error::PeerAlreadyExists(PeerId)`](crate::error::Error::PeerAlreadyExists) if
/// substream is already open to `peer`.
///
/// If connection to peer is closed, `NotificationProtocol` tries to dial the peer and if the
/// dial succeeds, tries to open a substream. This behavior can be disabled with
/// [`ConfigBuilder::with_dialing_enabled(false)`](super::config::ConfigBuilder::with_dialing_enabled()).
pub async fn open_substream(&self, peer: PeerId) -> crate::Result<()> {
tracing::trace!(target: LOG_TARGET, ?peer, "open substream");
if self.peers.contains_key(&peer) {
return Err(Error::PeerAlreadyExists(peer));
}
self.command_tx
.send(NotificationCommand::OpenSubstream {
peers: HashSet::from_iter([peer]),
})
.await
.map_or(Ok(()), |_| Ok(()))
}
/// Open substreams to multiple peers.
///
/// Similar to [`NotificationHandle::open_substream()`] but multiple substreams are initiated
/// using a single call to `NotificationProtocol`.
///
/// Peers who are already connected are ignored and returned as `Err(HashSet<PeerId>)`.
pub async fn open_substream_batch(
&self,
peers: impl Iterator<Item = PeerId>,
) -> Result<(), HashSet<PeerId>> {
let (to_add, to_ignore): (Vec<_>, Vec<_>) = peers
.map(|peer| match self.peers.contains_key(&peer) {
true => (None, Some(peer)),
false => (Some(peer), None),
})
.unzip();
let to_add = to_add.into_iter().flatten().collect::<HashSet<_>>();
let to_ignore = to_ignore.into_iter().flatten().collect::<HashSet<_>>();
tracing::trace!(
target: LOG_TARGET,
peers_to_add = ?to_add.len(),
peers_to_ignore = ?to_ignore.len(),
"open substream",
);
let _ = self.command_tx.send(NotificationCommand::OpenSubstream { peers: to_add }).await;
match to_ignore.is_empty() {
true => Ok(()),
false => Err(to_ignore),
}
}
/// Try to open substreams to multiple peers.
///
/// Similar to [`NotificationHandle::open_substream()`] but multiple substreams are initiated
/// using a single call to `NotificationProtocol`.
///
/// If the channel is clogged, peers for whom a connection is not yet open are returned as
/// `Err(HashSet<PeerId>)`.
pub fn try_open_substream_batch(
&self,
peers: impl Iterator<Item = PeerId>,
) -> Result<(), HashSet<PeerId>> {
let (to_add, to_ignore): (Vec<_>, Vec<_>) = peers
.map(|peer| match self.peers.contains_key(&peer) {
true => (None, Some(peer)),
false => (Some(peer), None),
})
.unzip();
let to_add = to_add.into_iter().flatten().collect::<HashSet<_>>();
let to_ignore = to_ignore.into_iter().flatten().collect::<HashSet<_>>();
tracing::trace!(
target: LOG_TARGET,
peers_to_add = ?to_add.len(),
peers_to_ignore = ?to_ignore.len(),
"open substream",
);
self.command_tx
.try_send(NotificationCommand::OpenSubstream {
peers: to_add.clone(),
})
.map_err(|_| to_add)
}
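Both batch methods use the same `Option`-pair plus `unzip` idiom to partition peers into "not yet connected" and "already connected" sets in a single pass. Isolated below (with `u32` standing in for `PeerId`; a sketch, not the crate's code):

```rust
use std::collections::HashSet;

// Split peers into those to dial (not connected) and those to ignore
// (already connected), mirroring the unzip-based partition above.
fn partition(
    peers: impl Iterator<Item = u32>,
    connected: &HashSet<u32>,
) -> (HashSet<u32>, HashSet<u32>) {
    let (to_add, to_ignore): (Vec<_>, Vec<_>) = peers
        .map(|peer| match connected.contains(&peer) {
            true => (None, Some(peer)),
            false => (Some(peer), None),
        })
        .unzip();
    (
        to_add.into_iter().flatten().collect(),
        to_ignore.into_iter().flatten().collect(),
    )
}

fn main() {
    let connected = HashSet::from([1u32, 2]);
    let (add, ignore) = partition([1u32, 2, 3, 4].into_iter(), &connected);
    assert_eq!(add, HashSet::from([3, 4]));
    assert_eq!(ignore, HashSet::from([1, 2]));
}
```

`Iterator::partition` would also work here; the `unzip` form keeps the membership check to one pass while producing both sets.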
/// Close substream to `peer`.
pub async fn close_substream(&self, peer: PeerId) {
tracing::trace!(target: LOG_TARGET, ?peer, "close substream");
if !self.peers.contains_key(&peer) {
return;
}
let _ = self
.command_tx
.send(NotificationCommand::CloseSubstream {
peers: HashSet::from_iter([peer]),
})
.await;
}
/// Close substream to multiple peers.
///
/// Similar to [`NotificationHandle::close_substream()`] but multiple substreams are closed
/// using a single call to `NotificationProtocol`.
pub async fn close_substream_batch(&self, peers: impl Iterator<Item = PeerId>) {
let peers = peers.filter(|peer| self.peers.contains_key(peer)).collect::<HashSet<_>>();
if peers.is_empty() {
return;
}
tracing::trace!(
target: LOG_TARGET,
?peers,
"close substreams",
);
let _ = self.command_tx.send(NotificationCommand::CloseSubstream { peers }).await;
}
/// Try close substream to multiple peers.
///
/// Similar to [`NotificationHandle::close_substream()`] but multiple substreams are closed
/// using a single call to `NotificationProtocol`.
///
/// If the channel is clogged, `peers` is returned as `Err(HashSet<PeerId>)`.
///
/// If `peers` is empty after filtering out peers without an open substream,
/// `Err(HashSet::new())` is returned.
pub fn try_close_substream_batch(
&self,
peers: impl Iterator<Item = PeerId>,
) -> Result<(), HashSet<PeerId>> {
let peers = peers.filter(|peer| self.peers.contains_key(peer)).collect::<HashSet<_>>();
if peers.is_empty() {
return Err(HashSet::new());
}
tracing::trace!(
target: LOG_TARGET,
?peers,
"close substreams",
);
self.command_tx
.try_send(NotificationCommand::CloseSubstream {
peers: peers.clone(),
})
.map_err(|_| peers)
}
/// Set new handshake.
pub fn set_handshake(&mut self, handshake: Vec<u8>) {
tracing::trace!(target: LOG_TARGET, ?handshake, "set handshake");
*self.handshake.write() = handshake;
}
/// Send validation result to the notification protocol for an inbound substream received from
/// `peer`.
pub fn send_validation_result(&mut self, peer: PeerId, result: ValidationResult) {
tracing::trace!(target: LOG_TARGET, ?peer, ?result, "send validation result");
        if let Some(tx) = self.pending_validations.remove(&peer) {
            let _ = tx.send(result);
        }
}
/// Send notification to `peer` synchronously.
///
/// If the channel is clogged, [`NotificationError::ChannelClogged`] is returned.
pub fn send_sync_notification(
&mut self,
peer: PeerId,
notification: Vec<u8>,
) -> Result<(), NotificationError> {
match self.peers.get_mut(&peer) {
Some(sink) => match sink.send_sync_notification(notification) {
Ok(()) => Ok(()),
Err(error) => match error {
NotificationError::NoConnection => Err(NotificationError::NoConnection),
NotificationError::ChannelClogged => {
let _ = self.clogged.insert(peer).then(|| {
self.command_tx.try_send(NotificationCommand::ForceClose { peer })
});
Err(NotificationError::ChannelClogged)
}
// sink doesn't emit any other `NotificationError`s
_ => unreachable!(),
},
},
None => Ok(()),
}
}
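The `ChannelClogged` branch above uses `HashSet::insert` as a once-only guard: `insert` returns `true` only the first time a peer is added, so `ForceClose` is issued at most once per clogged peer. A self-contained sketch of that guard (plain integer peer ids here for illustration):

```rust
use std::collections::HashSet;

// Returns `true` only on the first clog for `peer`; subsequent clogs are
// suppressed until the peer is removed from the set (e.g. on stream close).
fn should_force_close(clogged: &mut HashSet<u64>, peer: u64) -> bool {
    clogged.insert(peer)
}

fn main() {
    let mut clogged = HashSet::new();
    assert!(should_force_close(&mut clogged, 7));  // first clog: issue ForceClose
    assert!(!should_force_close(&mut clogged, 7)); // repeated clog: suppressed
    clogged.remove(&7); // stream closed, guard reset
    assert!(should_force_close(&mut clogged, 7));
}
```

This matches how the handle clears `self.clogged` when `NotificationStreamClosed` is received, re-arming the guard for the next substream.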
/// Send notification to `peer` asynchronously, waiting for the channel to have capacity
/// if it's clogged.
///
/// Returns [`Error::PeerDoesntExist(PeerId)`](crate::error::Error::PeerDoesntExist) if the
/// connection has been closed.
pub async fn send_async_notification(
&mut self,
peer: PeerId,
notification: Vec<u8>,
) -> crate::Result<()> {
match self.peers.get_mut(&peer) {
Some(sink) => sink.send_async_notification(notification).await,
None => Err(Error::PeerDoesntExist(peer)),
}
}
/// Get a copy of the underlying notification sink for the peer.
///
/// `None` is returned if `peer` doesn't exist.
pub fn notification_sink(&self, peer: PeerId) -> Option<NotificationSink> {
self.peers.get(&peer).cloned()
}
#[cfg(feature = "fuzz")]
/// Expose functionality for fuzzing
pub async fn fuzz_send_message(&mut self, command: NotificationCommand) -> crate::Result<()> {
if let NotificationCommand::SendNotification { peer_id, notif } = command {
self.send_async_notification(peer_id, notif).await?;
} else {
let _ = self.command_tx.send(command).await;
}
Ok(())
}
}
impl Stream for NotificationHandle {
type Item = NotificationEvent;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
loop {
match self.event_rx.poll_recv(cx) {
Poll::Pending => {}
Poll::Ready(None) => return Poll::Ready(None),
Poll::Ready(Some(event)) => match event {
InnerNotificationEvent::NotificationStreamOpened {
protocol,
fallback,
direction,
peer,
handshake,
sink,
} => {
self.peers.insert(peer, sink);
return Poll::Ready(Some(NotificationEvent::NotificationStreamOpened {
protocol,
fallback,
direction,
peer,
handshake,
}));
}
InnerNotificationEvent::NotificationStreamClosed { peer } => {
self.peers.remove(&peer);
self.clogged.remove(&peer);
return Poll::Ready(Some(NotificationEvent::NotificationStreamClosed {
peer,
}));
}
InnerNotificationEvent::ValidateSubstream {
protocol,
fallback,
peer,
handshake,
tx,
} => {
self.pending_validations.insert(peer, tx);
return Poll::Ready(Some(NotificationEvent::ValidateSubstream {
protocol,
fallback,
peer,
handshake,
}));
}
InnerNotificationEvent::NotificationStreamOpenFailure { peer, error } =>
return Poll::Ready(Some(
NotificationEvent::NotificationStreamOpenFailure { peer, error },
)),
},
}
match futures::ready!(self.notif_rx.poll_recv(cx)) {
None => return Poll::Ready(None),
Some((peer, notification)) =>
if self.peers.contains_key(&peer) {
return Poll::Ready(Some(NotificationEvent::NotificationReceived {
peer,
notification,
}));
},
}
}
}
}
// ---- src/protocol/notification/tests/notification.rs ----
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
mock::substream::{DummySubstream, MockSubstream},
protocol::{
self,
connection::ConnectionHandle,
notification::{
negotiation::HandshakeEvent,
tests::make_notification_protocol,
types::{Direction, NotificationError, NotificationEvent},
ConnectionState, InboundState, NotificationProtocol, OutboundState, PeerContext,
PeerState, ValidationResult,
},
InnerTransportEvent, ProtocolCommand, SubstreamError,
},
substream::Substream,
transport::Endpoint,
types::{protocol::ProtocolName, ConnectionId, SubstreamId},
PeerId,
};
use futures::StreamExt;
use multiaddr::Multiaddr;
use tokio::sync::{
mpsc::{channel, Receiver, Sender},
oneshot,
};
use std::{task::Poll, time::Duration};
fn next_inbound_state(state: usize) -> InboundState {
match state {
0 => InboundState::Closed,
1 => InboundState::ReadingHandshake,
2 => InboundState::Validating {
inbound: Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(MockSubstream::new()),
),
},
3 => InboundState::SendingHandshake,
4 => InboundState::Open {
inbound: Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(MockSubstream::new()),
),
},
_ => panic!(),
}
}
fn next_outbound_state(state: usize) -> OutboundState {
match state {
0 => OutboundState::Closed,
1 => OutboundState::OutboundInitiated {
substream: SubstreamId::new(),
},
2 => OutboundState::Negotiating,
3 => OutboundState::Open {
handshake: vec![1, 3, 3, 7],
outbound: Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(MockSubstream::new()),
),
},
_ => panic!(),
}
}
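The two helpers above map a plain index to a state variant so the tests can loop over `0..N` and know the coverage is exhaustive: any out-of-range index panics, which keeps the loop bound honest if a variant is ever added or removed. A self-contained sketch of the pattern with an illustrative enum:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Phase {
    Closed,
    Negotiating,
    Open,
}

// Map a loop index to a state; unknown indices panic, so a test looping
// `0..3` fails loudly if the enum and the bound ever drift apart.
fn next_phase(i: usize) -> Phase {
    match i {
        0 => Phase::Closed,
        1 => Phase::Negotiating,
        2 => Phase::Open,
        _ => panic!("unknown phase index {i}"),
    }
}

fn main() {
    let phases: Vec<Phase> = (0..3).map(next_phase).collect();
    assert_eq!(phases, vec![Phase::Closed, Phase::Negotiating, Phase::Open]);
}
```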
#[tokio::test]
async fn connection_closed_for_outbound_open_substream() {
let peer = PeerId::random();
for i in 0..5 {
connection_closed(
peer,
PeerState::Validating {
direction: Direction::Inbound,
protocol: ProtocolName::from("/notif/1"),
fallback: None,
outbound: OutboundState::Open {
handshake: vec![1, 2, 3, 4],
outbound: Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(MockSubstream::new()),
),
},
inbound: next_inbound_state(i),
},
Some(NotificationEvent::NotificationStreamOpenFailure {
peer,
error: NotificationError::Rejected,
}),
)
.await;
}
}
#[tokio::test]
async fn connection_closed_for_outbound_initiated_substream() {
let peer = PeerId::random();
for i in 0..5 {
connection_closed(
peer,
PeerState::Validating {
direction: Direction::Inbound,
protocol: ProtocolName::from("/notif/1"),
fallback: None,
outbound: OutboundState::OutboundInitiated {
substream: SubstreamId::from(0usize),
},
inbound: next_inbound_state(i),
},
Some(NotificationEvent::NotificationStreamOpenFailure {
peer,
error: NotificationError::Rejected,
}),
)
.await;
}
}
#[tokio::test]
async fn connection_closed_for_outbound_negotiated_substream() {
let peer = PeerId::random();
for i in 0..5 {
connection_closed(
peer,
PeerState::Validating {
direction: Direction::Inbound,
protocol: ProtocolName::from("/notif/1"),
fallback: None,
outbound: OutboundState::Negotiating,
inbound: next_inbound_state(i),
},
Some(NotificationEvent::NotificationStreamOpenFailure {
peer,
error: NotificationError::Rejected,
}),
)
.await;
}
}
#[tokio::test]
async fn connection_closed_for_initiated_substream() {
let peer = PeerId::random();
connection_closed(
peer,
PeerState::OutboundInitiated {
substream: SubstreamId::new(),
},
Some(NotificationEvent::NotificationStreamOpenFailure {
peer,
error: NotificationError::Rejected,
}),
)
.await;
}
#[tokio::test]
#[cfg(debug_assertions)]
#[should_panic]
async fn connection_established_twice() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, _handle, _sender, _tx) = make_notification_protocol();
let peer = PeerId::random();
assert!(notif.on_connection_established(peer).await.is_ok());
assert!(notif.on_connection_established(peer).await.is_err());
}
#[tokio::test]
#[cfg(debug_assertions)]
#[should_panic]
async fn connection_closed_twice() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, _handle, _sender, _tx) = make_notification_protocol();
let peer = PeerId::random();
assert!(notif.on_connection_closed(peer).await.is_ok());
assert!(notif.on_connection_closed(peer).await.is_err());
}
#[tokio::test]
#[cfg(debug_assertions)]
#[should_panic]
async fn substream_open_failure_for_unknown_substream() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, _handle, _sender, _tx) = make_notification_protocol();
notif
.on_substream_open_failure(SubstreamId::new(), SubstreamError::ConnectionClosed)
.await;
}
#[tokio::test]
async fn close_substream_to_unknown_peer() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, _handle, _sender, _tx) = make_notification_protocol();
let peer = PeerId::random();
assert!(!notif.peers.contains_key(&peer));
notif.on_close_substream(peer).await;
assert!(!notif.peers.contains_key(&peer));
}
#[tokio::test]
#[cfg(debug_assertions)]
#[should_panic]
async fn handshake_event_unknown_peer() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, _handle, _sender, _tx) = make_notification_protocol();
let peer = PeerId::random();
assert!(!notif.peers.contains_key(&peer));
notif
.on_handshake_event(
peer,
HandshakeEvent::Negotiated {
peer,
handshake: vec![1, 3, 3, 7],
substream: Substream::new_mock(
peer,
SubstreamId::from(0usize),
Box::new(DummySubstream::new()),
),
direction: protocol::notification::negotiation::Direction::Inbound,
},
)
.await;
assert!(!notif.peers.contains_key(&peer));
}
#[tokio::test]
#[cfg(debug_assertions)]
#[should_panic]
async fn handshake_event_invalid_state_for_outbound_substream() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, _handle, _sender, mut tx) = make_notification_protocol();
let (peer, _receiver) = register_peer(&mut notif, &mut tx).await;
notif
.on_handshake_event(
peer,
HandshakeEvent::Negotiated {
peer,
handshake: vec![1, 3, 3, 7],
substream: Substream::new_mock(
peer,
SubstreamId::from(0usize),
Box::new(DummySubstream::new()),
),
direction: protocol::notification::negotiation::Direction::Outbound,
},
)
.await;
}
#[tokio::test]
#[cfg(debug_assertions)]
#[should_panic]
async fn substream_open_failure_for_unknown_peer() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, _handle, _sender, _tx) = make_notification_protocol();
let peer = PeerId::random();
let substream_id = SubstreamId::from(1337usize);
notif.pending_outbound.insert(substream_id, peer);
notif
.on_substream_open_failure(substream_id, SubstreamError::ConnectionClosed)
.await;
}
#[tokio::test]
async fn dial_failure_for_non_dialing_peer() {
let (mut notif, mut handle, _sender, mut tx) = make_notification_protocol();
let (peer, _receiver) = register_peer(&mut notif, &mut tx).await;
// dial failure for the peer even though it's not dialing
notif.on_dial_failure(peer, vec![]).await;
assert!(std::matches!(
notif.peers.get(&peer),
Some(PeerContext {
state: PeerState::Closed { .. }
})
));
futures::future::poll_fn(|cx| match handle.poll_next_unpin(cx) {
Poll::Pending => Poll::Ready(()),
_ => panic!("invalid event"),
})
.await;
}
// inbound state is ignored
async fn connection_closed(peer: PeerId, state: PeerState, event: Option<NotificationEvent>) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, mut handle, _sender, _tx) = make_notification_protocol();
notif.peers.insert(peer, PeerContext { state });
notif.on_connection_closed(peer).await.unwrap();
if let Some(expected) = event {
assert_eq!(handle.next().await.unwrap(), expected);
}
assert!(!notif.peers.contains_key(&peer))
}
// register new connection to `NotificationProtocol`
async fn register_peer(
notif: &mut NotificationProtocol,
sender: &mut Sender<InnerTransportEvent>,
) -> (PeerId, Receiver<ProtocolCommand>) {
let peer = PeerId::random();
let (conn_tx, conn_rx) = channel(64);
sender
.send(InnerTransportEvent::ConnectionEstablished {
peer,
connection: ConnectionId::new(),
endpoint: Endpoint::dialer(Multiaddr::empty(), ConnectionId::from(0usize)),
sender: ConnectionHandle::new(ConnectionId::from(0usize), conn_tx),
})
.await
.unwrap();
// poll the protocol to register the peer
notif.next_event().await;
assert!(std::matches!(
notif.peers.get(&peer),
Some(PeerContext {
state: PeerState::Closed { .. }
})
));
(peer, conn_rx)
}
#[tokio::test]
async fn open_substream_connection_closed() {
open_substream(PeerState::Closed { pending_open: None }, true).await;
}
#[tokio::test]
async fn open_substream_already_initiated() {
open_substream(
PeerState::OutboundInitiated {
substream: SubstreamId::new(),
},
false,
)
.await;
}
#[tokio::test]
async fn open_substream_already_open() {
let (shutdown, _rx) = oneshot::channel();
open_substream(PeerState::Open { shutdown }, false).await;
}
#[tokio::test]
async fn open_substream_under_validation() {
for i in 0..5 {
for k in 0..4 {
open_substream(
PeerState::Validating {
direction: Direction::Inbound,
protocol: ProtocolName::from("/notif/1"),
fallback: None,
outbound: next_outbound_state(k),
inbound: next_inbound_state(i),
},
false,
)
.await;
}
}
}
async fn open_substream(state: PeerState, succeeds: bool) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, _handle, _sender, mut tx) = make_notification_protocol();
let (peer, mut receiver) = register_peer(&mut notif, &mut tx).await;
let context = notif.peers.get_mut(&peer).unwrap();
context.state = state;
notif.on_open_substream(peer).await.unwrap();
    assert_eq!(receiver.try_recv().is_ok(), succeeds);
}
#[tokio::test]
async fn open_substream_no_connection() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, _handle, _sender, _tx) = make_notification_protocol();
assert!(notif.on_open_substream(PeerId::random()).await.is_err());
}
#[tokio::test]
async fn remote_opens_multiple_inbound_substreams() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let protocol = ProtocolName::from("/notif/1");
let (mut notif, _handle, _sender, mut tx) = make_notification_protocol();
let (peer, _receiver) = register_peer(&mut notif, &mut tx).await;
// open substream, poll the result and verify that the peer is in correct state
tx.send(InnerTransportEvent::SubstreamOpened {
peer,
protocol: protocol.clone(),
fallback: None,
direction: protocol::Direction::Inbound,
substream: Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(DummySubstream::new()),
),
connection_id: ConnectionId::from(0usize),
})
.await
.unwrap();
notif.next_event().await;
match notif.peers.get(&peer) {
Some(PeerContext {
state:
PeerState::Validating {
direction: Direction::Inbound,
protocol,
fallback: None,
outbound: OutboundState::Closed,
inbound: InboundState::ReadingHandshake,
},
}) => {
assert_eq!(protocol, &ProtocolName::from("/notif/1"));
}
state => panic!("invalid state: {state:?}"),
}
// try to open another substream and verify it's discarded and the state is otherwise
// preserved
let mut substream = MockSubstream::new();
substream.expect_poll_close().times(1).return_once(|_| Poll::Ready(Ok(())));
tx.send(InnerTransportEvent::SubstreamOpened {
peer,
protocol: protocol.clone(),
fallback: None,
direction: protocol::Direction::Inbound,
substream: Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(substream),
),
connection_id: ConnectionId::from(0usize),
})
.await
.unwrap();
notif.next_event().await;
match notif.peers.get(&peer) {
Some(PeerContext {
state:
PeerState::Validating {
direction: Direction::Inbound,
protocol,
fallback: None,
outbound: OutboundState::Closed,
inbound: InboundState::ReadingHandshake,
},
}) => {
assert_eq!(protocol, &ProtocolName::from("/notif/1"));
}
state => panic!("invalid state: {state:?}"),
}
}
#[tokio::test]
async fn pending_outbound_tracked_correctly() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let protocol = ProtocolName::from("/notif/1");
let (mut notif, _handle, _sender, mut tx) = make_notification_protocol();
let (peer, _receiver) = register_peer(&mut notif, &mut tx).await;
// open outbound substream
notif.on_open_substream(peer).await.unwrap();
match notif.peers.get(&peer) {
Some(PeerContext {
state: PeerState::OutboundInitiated { substream },
}) => {
assert_eq!(substream, &SubstreamId::new());
}
state => panic!("invalid state: {state:?}"),
}
// then register inbound substream and verify that the state is changed to `Validating`
notif
.on_inbound_substream(
protocol.clone(),
None,
peer,
Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(DummySubstream::new()),
),
)
.await
.unwrap();
match notif.peers.get(&peer) {
Some(PeerContext {
state:
PeerState::Validating {
direction: Direction::Outbound,
outbound: OutboundState::OutboundInitiated { .. },
inbound: InboundState::ReadingHandshake,
..
},
}) => {}
state => panic!("invalid state: {state:?}"),
}
// then negotiation event for the inbound handshake
notif
.on_handshake_event(
peer,
HandshakeEvent::Negotiated {
peer,
handshake: vec![1, 3, 3, 7],
substream: Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(DummySubstream::new()),
),
direction: protocol::notification::negotiation::Direction::Inbound,
},
)
.await;
match notif.peers.get(&peer) {
Some(PeerContext {
state:
PeerState::Validating {
direction: Direction::Outbound,
outbound: OutboundState::OutboundInitiated { .. },
inbound: InboundState::Validating { .. },
..
},
}) => {}
state => panic!("invalid state: {state:?}"),
}
// then reject the inbound peer even though an outbound substream was already established
notif.on_validation_result(peer, ValidationResult::Reject).await.unwrap();
match notif.peers.get(&peer) {
Some(PeerContext {
state: PeerState::Closed { pending_open },
}) => {
assert_eq!(pending_open, &Some(SubstreamId::new()));
}
state => panic!("invalid state: {state:?}"),
}
// finally the outbound substream registers, verify that `pending_open` is set to `None`
notif
.on_outbound_substream(
protocol,
None,
peer,
SubstreamId::new(),
Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(DummySubstream::new()),
),
)
.await
.unwrap();
match notif.peers.get(&peer) {
Some(PeerContext {
state: PeerState::Closed { pending_open },
}) => {
assert!(pending_open.is_none());
}
state => panic!("invalid state: {state:?}"),
}
}
#[tokio::test]
async fn inbound_accepted_outbound_fails_to_open() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let protocol = ProtocolName::from("/notif/1");
let (mut notif, mut handle, sender, mut tx) = make_notification_protocol();
let (peer, receiver) = register_peer(&mut notif, &mut tx).await;
// register inbound substream and verify that the state is `Validating`
notif
.on_inbound_substream(
protocol.clone(),
None,
peer,
Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(DummySubstream::new()),
),
)
.await
.unwrap();
match notif.peers.get(&peer) {
Some(PeerContext {
state:
PeerState::Validating {
direction: Direction::Inbound,
outbound: OutboundState::Closed,
inbound: InboundState::ReadingHandshake,
..
},
}) => {}
state => panic!("invalid state: {state:?}"),
}
// then negotiation event for the inbound handshake
notif
.on_handshake_event(
peer,
HandshakeEvent::Negotiated {
peer,
handshake: vec![1, 3, 3, 7],
substream: Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(DummySubstream::new()),
),
direction: protocol::notification::negotiation::Direction::Inbound,
},
)
.await;
match notif.peers.get(&peer) {
Some(PeerContext {
state:
PeerState::Validating {
direction: Direction::Inbound,
outbound: OutboundState::Closed,
inbound: InboundState::Validating { .. },
..
},
}) => {}
state => panic!("invalid state: {state:?}"),
}
// discard the validation event
assert!(tokio::time::timeout(Duration::from_secs(5), handle.next()).await.is_ok());
// before the validation event is registered, close the connection
drop(sender);
drop(receiver);
drop(tx);
    // then try to accept the inbound substream even though the connection is already closed
assert!(notif.on_validation_result(peer, ValidationResult::Accept).await.is_err());
match notif.peers.get(&peer) {
Some(PeerContext {
state: PeerState::Closed { pending_open },
}) => {
assert!(pending_open.is_none());
}
state => panic!("invalid state: {state:?}"),
}
    // verify that the user is notified of the open failure
match tokio::time::timeout(Duration::from_secs(1), handle.next()).await {
Err(_) => panic!("unexpected timeout"),
Ok(Some(NotificationEvent::NotificationStreamOpenFailure {
peer: event_peer,
error,
})) => {
assert_eq!(peer, event_peer);
assert_eq!(error, NotificationError::Rejected)
}
_ => panic!("invalid event"),
}
}
#[tokio::test]
async fn open_substream_on_closed_connection() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, mut handle, sender, mut tx) = make_notification_protocol();
let (peer, receiver) = register_peer(&mut notif, &mut tx).await;
// before processing the open substream event, close the connection
drop(sender);
drop(receiver);
drop(tx);
// open outbound substream
notif.on_open_substream(peer).await.unwrap();
match notif.peers.get(&peer) {
Some(PeerContext {
state: PeerState::Closed { pending_open: None },
}) => {}
state => panic!("invalid state: {state:?}"),
}
match tokio::time::timeout(Duration::from_secs(5), handle.next())
.await
.expect("operation to succeed")
{
Some(NotificationEvent::NotificationStreamOpenFailure { error, .. }) => {
assert_eq!(error, NotificationError::NoConnection);
}
event => panic!("invalid event received: {event:?}"),
}
}
// `NotificationHandle` may have an inconsistent view of the peer state and connection to peer may
// already been closed by the time `close_substream()` is called but this event hasn't yet been
// registered to `NotificationHandle` which causes it to send a stale disconnection request to
// `NotificationProtocol`.
//
// verify that `NotificationProtocol` ignores stale disconnection requests
#[tokio::test]
async fn close_already_closed_connection() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, mut handle, _, mut tx) = make_notification_protocol();
let (peer, _) = register_peer(&mut notif, &mut tx).await;
notif.peers.insert(
peer,
PeerContext {
state: PeerState::Validating {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Inbound,
outbound: OutboundState::Open {
handshake: vec![1, 2, 3, 4],
outbound: Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(MockSubstream::new()),
),
},
inbound: InboundState::SendingHandshake,
},
},
);
notif
.on_handshake_event(
peer,
HandshakeEvent::Negotiated {
peer,
handshake: vec![1],
substream: Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(MockSubstream::new()),
),
direction: protocol::notification::negotiation::Direction::Inbound,
},
)
.await;
match handle.next().await {
Some(NotificationEvent::NotificationStreamOpened { .. }) => {}
_ => panic!("invalid event received"),
}
// close the substream but don't poll the `NotificationHandle`
notif.shutdown_tx.send(peer).await.unwrap();
// close the connection using the handle
handle.close_substream(peer).await;
// process the events
notif.next_event().await;
notif.next_event().await;
match notif.peers.get(&peer) {
Some(PeerContext {
state: PeerState::Closed { pending_open: None },
}) => {}
state => panic!("invalid state: {state:?}"),
}
}
/// Notification state was not reset correctly if the outbound substream failed to open after
/// inbound substream had been negotiated, causing `NotificationProtocol` to report open failure
/// twice, once when the failure occurred and again when the connection was closed.
#[tokio::test]
async fn open_failure_reported_once() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, mut handle, _, mut tx) = make_notification_protocol();
let (peer, _) = register_peer(&mut notif, &mut tx).await;
// move `peer` to state where the inbound substream has been negotiated
// and the local node has initiated an outbound substream
notif.peers.insert(
peer,
PeerContext {
state: PeerState::Validating {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Inbound,
outbound: OutboundState::OutboundInitiated {
substream: SubstreamId::from(1337usize),
},
inbound: InboundState::Open {
inbound: Substream::new_mock(
peer,
SubstreamId::from(0usize),
Box::new(DummySubstream::new()),
),
},
},
},
);
notif.pending_outbound.insert(SubstreamId::from(1337usize), peer);
notif
.on_substream_open_failure(
SubstreamId::from(1337usize),
SubstreamError::ConnectionClosed,
)
.await;
match handle.next().await {
Some(NotificationEvent::NotificationStreamOpenFailure {
peer: failed_peer,
error,
}) => {
assert_eq!(failed_peer, peer);
assert_eq!(error, NotificationError::Rejected);
}
_ => panic!("invalid event received"),
}
match notif.peers.get(&peer) {
Some(PeerContext {
state: PeerState::Closed { pending_open },
}) => {
assert_eq!(pending_open, &Some(SubstreamId::from(1337usize)));
}
state => panic!("invalid state for peer: {state:?}"),
}
// connection to `peer` is closed
notif.on_connection_closed(peer).await.unwrap();
futures::future::poll_fn(|cx| match handle.poll_next_unpin(cx) {
Poll::Pending => Poll::Ready(()),
result => panic!("didn't expect event from channel, got {result:?}"),
})
.await;
}
// inbound substream was received and it was sent to the user for validation
//
// the validation took so long that the remote opened another substream while validation for the
// previous inbound substream was still pending
//
// verify that the new substream is rejected and that the peer state is set to `ValidationPending`
#[tokio::test]
async fn second_inbound_substream_rejected() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, mut handle, _, mut tx) = make_notification_protocol();
let (peer, _) = register_peer(&mut notif, &mut tx).await;
// move peer state to `Validating`
let mut substream1 = MockSubstream::new();
substream1.expect_poll_close().times(1).return_once(|_| Poll::Ready(Ok(())));
notif.peers.insert(
peer,
PeerContext {
state: PeerState::Validating {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Inbound,
outbound: OutboundState::Closed,
inbound: InboundState::Validating {
inbound: Substream::new_mock(
peer,
SubstreamId::from(0usize),
Box::new(substream1),
),
},
},
},
);
// open a new inbound substream because validation took so long that `peer` decided
// to open a new substream
let mut substream2 = MockSubstream::new();
substream2.expect_poll_close().times(1).return_once(|_| Poll::Ready(Ok(())));
notif
.on_inbound_substream(
ProtocolName::from("/notif/1"),
None,
peer,
Substream::new_mock(peer, SubstreamId::from(0usize), Box::new(substream2)),
)
.await
.unwrap();
// verify that peer is moved to `ValidationPending`
match notif.peers.get(&peer) {
Some(PeerContext {
state:
PeerState::ValidationPending {
state: ConnectionState::Open,
},
}) => {}
state => panic!("invalid state for peer: {state:?}"),
}
// user decide to reject the substream, verify that nothing is received over the event handle
notif.on_validation_result(peer, ValidationResult::Reject).await.unwrap();
notif.on_connection_closed(peer).await.unwrap();
futures::future::poll_fn(|cx| match handle.poll_next_unpin(cx) {
Poll::Pending => Poll::Ready(()),
// ---- src/protocol/notification/tests/mod.rs ----
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
executor::DefaultExecutor,
protocol::{
notification::{
handle::NotificationHandle, Config as NotificationConfig, NotificationProtocol,
},
InnerTransportEvent, ProtocolCommand, TransportService,
},
transport::{
manager::{TransportManager, TransportManagerBuilder},
KEEP_ALIVE_TIMEOUT,
},
types::protocol::ProtocolName,
PeerId,
};
use tokio::sync::mpsc::{channel, Receiver, Sender};
#[cfg(test)]
mod notification;
#[cfg(test)]
mod substream_validation;
/// create new `NotificationProtocol`
fn make_notification_protocol() -> (
NotificationProtocol,
NotificationHandle,
TransportManager,
Sender<InnerTransportEvent>,
) {
let manager = TransportManagerBuilder::new().build();
let peer = PeerId::random();
let (transport_service, tx) = TransportService::new(
peer,
ProtocolName::from("/notif/1"),
Vec::new(),
std::sync::Arc::new(Default::default()),
manager.transport_manager_handle(),
KEEP_ALIVE_TIMEOUT,
);
let (config, handle) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
(
NotificationProtocol::new(
transport_service,
config,
std::sync::Arc::new(DefaultExecutor {}),
),
handle,
manager,
tx,
)
}
/// add new peer to `NotificationProtocol`
fn add_peer() -> (PeerId, (), Receiver<ProtocolCommand>) {
let (_tx, rx) = channel(64);
(PeerId::random(), (), rx)
}
// ---- src/protocol/notification/tests/substream_validation.rs ----
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
error::{Error, SubstreamError},
mock::substream::MockSubstream,
protocol::{
connection::ConnectionHandle,
notification::{
negotiation::HandshakeEvent,
tests::{add_peer, make_notification_protocol},
types::{Direction, NotificationEvent, ValidationResult},
InboundState, OutboundState, PeerContext, PeerState,
},
InnerTransportEvent, ProtocolCommand,
},
substream::Substream,
transport::Endpoint,
types::{protocol::ProtocolName, ConnectionId, SubstreamId},
PeerId,
};
use bytes::BytesMut;
use futures::StreamExt;
use multiaddr::Multiaddr;
use tokio::sync::{mpsc::channel, oneshot};
use std::task::Poll;
#[tokio::test]
async fn non_existent_peer() {
let (mut notif, _handle, _sender, _) = make_notification_protocol();
// validating a peer that was never registered must fail with `PeerDoesntExist`
let result = notif.on_validation_result(PeerId::random(), ValidationResult::Accept).await;
assert!(std::matches!(result, Err(Error::PeerDoesntExist(_))));
}
#[tokio::test]
async fn substream_accepted() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, mut handle, _sender, tx) = make_notification_protocol();
let (peer, _service, _receiver) = add_peer();
let handshake = BytesMut::from(&b"hello"[..]);
let mut substream = MockSubstream::new();
substream
.expect_poll_next()
.times(1)
.return_once(|_| Poll::Ready(Some(Ok(BytesMut::from(&b"hello"[..])))));
substream.expect_poll_ready().times(1).return_once(|_| Poll::Ready(Ok(())));
substream.expect_start_send().times(1).return_once(|_| Ok(()));
substream.expect_poll_flush().times(1).return_once(|_| Poll::Ready(Ok(())));
let (proto_tx, mut proto_rx) = channel(256);
tx.send(InnerTransportEvent::ConnectionEstablished {
peer,
endpoint: Endpoint::dialer(Multiaddr::empty(), ConnectionId::from(0usize)),
sender: ConnectionHandle::new(ConnectionId::from(0usize), proto_tx.clone()),
connection: ConnectionId::from(0usize),
})
.await
.unwrap();
// connect peer and verify it's in closed state
notif.next_event().await;
match ¬if.peers.get(&peer).unwrap().state {
PeerState::Closed { .. } => {}
state => panic!("invalid state for peer: {state:?}"),
}
// open inbound substream and verify that peer state has changed to `Validating`
notif
.on_inbound_substream(
ProtocolName::from("/notif/1"),
None,
peer,
Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(substream),
),
)
.await
.unwrap();
match ¬if.peers.get(&peer).unwrap().state {
PeerState::Validating {
direction: Direction::Inbound,
protocol: _,
fallback: None,
inbound: InboundState::ReadingHandshake,
outbound: OutboundState::Closed,
} => {}
state => panic!("invalid state for peer: {state:?}"),
}
// get negotiation event
let (peer, event) = notif.negotiation.next().await.unwrap();
notif.on_handshake_event(peer, event).await;
// user protocol receives the substream for validation and accepts it
assert_eq!(
handle.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer,
handshake: handshake.into()
},
);
notif.on_validation_result(peer, ValidationResult::Accept).await.unwrap();
// poll negotiation to finish the handshake
let (peer, event) = notif.negotiation.next().await.unwrap();
notif.on_handshake_event(peer, event).await;
// protocol asks for outbound substream to be opened and its state is changed accordingly
let ProtocolCommand::OpenSubstream {
protocol,
substream_id,
..
} = proto_rx.recv().await.unwrap()
else {
panic!("invalid command received");
};
assert_eq!(protocol, ProtocolName::from("/notif/1"));
assert_eq!(substream_id, SubstreamId::from(0usize));
let expected = SubstreamId::from(0usize);
match ¬if.peers.get(&peer).unwrap().state {
PeerState::Validating {
direction: Direction::Inbound,
protocol: _,
fallback: None,
inbound: InboundState::Open { .. },
outbound: OutboundState::OutboundInitiated { substream },
} => {
assert_eq!(substream, &expected);
}
state => panic!("invalid state for peer: {state:?}"),
}
}
#[tokio::test]
async fn substream_rejected() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, mut handle, _sender, _tx) = make_notification_protocol();
let (peer, _service, mut receiver) = add_peer();
let handshake = BytesMut::from(&b"hello"[..]);
let mut substream = MockSubstream::new();
substream
.expect_poll_next()
.times(1)
.return_once(|_| Poll::Ready(Some(Ok(BytesMut::from(&b"hello"[..])))));
substream.expect_poll_close().times(1).return_once(|_| Poll::Ready(Ok(())));
// connect peer and verify it's in closed state
notif.on_connection_established(peer).await.unwrap();
match ¬if.peers.get(&peer).unwrap().state {
PeerState::Closed { .. } => {}
state => panic!("invalid state for peer: {state:?}"),
}
// open inbound substream and verify that peer state has changed to `Validating`
notif
.on_inbound_substream(
ProtocolName::from("/notif/1"),
None,
peer,
Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(substream),
),
)
.await
.unwrap();
match ¬if.peers.get(&peer).unwrap().state {
PeerState::Validating {
direction: Direction::Inbound,
protocol: _,
fallback: None,
inbound: InboundState::ReadingHandshake,
outbound: OutboundState::Closed,
} => {}
state => panic!("invalid state for peer: {state:?}"),
}
// get negotiation event
let (peer, event) = notif.negotiation.next().await.unwrap();
notif.on_handshake_event(peer, event).await;
// user protocol receives the substream for validation and rejects it
assert_eq!(
handle.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer,
handshake: handshake.into()
},
);
notif.on_validation_result(peer, ValidationResult::Reject).await.unwrap();
// substream is rejected so no outbound substream is opened and the peer
// transitions back to the closed state
match ¬if.peers.get(&peer).unwrap().state {
PeerState::Closed { .. } => {}
state => panic!("invalid state for peer: {state:?}"),
}
assert!(receiver.try_recv().is_err());
}
#[tokio::test]
async fn accept_fails_due_to_closed_substream() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, mut handle, _sender, tx) = make_notification_protocol();
let (peer, _service, _receiver) = add_peer();
let handshake = BytesMut::from(&b"hello"[..]);
let mut substream = MockSubstream::new();
substream
.expect_poll_next()
.times(1)
.return_once(|_| Poll::Ready(Some(Ok(BytesMut::from(&b"hello"[..])))));
substream
.expect_poll_ready()
.times(1)
.return_once(|_| Poll::Ready(Err(SubstreamError::ConnectionClosed)));
let (proto_tx, _proto_rx) = channel(256);
tx.send(InnerTransportEvent::ConnectionEstablished {
peer,
endpoint: Endpoint::dialer(Multiaddr::empty(), ConnectionId::from(0usize)),
sender: ConnectionHandle::new(ConnectionId::from(0usize), proto_tx),
connection: ConnectionId::from(0usize),
})
.await
.unwrap();
// connect peer and verify it's in closed state
notif.next_event().await;
match ¬if.peers.get(&peer).unwrap().state {
PeerState::Closed { .. } => {}
state => panic!("invalid state for peer: {state:?}"),
}
// open inbound substream and verify that peer state has changed to `InboundOpen`
notif
.on_inbound_substream(
ProtocolName::from("/notif/1"),
None,
peer,
Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(substream),
),
)
.await
.unwrap();
match ¬if.peers.get(&peer).unwrap().state {
PeerState::Validating {
direction: Direction::Inbound,
protocol: _,
fallback: None,
inbound: InboundState::ReadingHandshake,
outbound: OutboundState::Closed,
} => {}
state => panic!("invalid state for peer: {state:?}"),
}
// get negotiation event
let (peer, event) = notif.negotiation.next().await.unwrap();
notif.on_handshake_event(peer, event).await;
// user protocol receives the substream for validation and accepts it
assert_eq!(
handle.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer,
handshake: handshake.into()
},
);
notif.on_validation_result(peer, ValidationResult::Accept).await.unwrap();
// get negotiation event
let (event_peer, event) = notif.negotiation.next().await.unwrap();
match &event {
HandshakeEvent::NegotiationError { peer, .. } => {
assert_eq!(*peer, event_peer);
}
event => panic!("invalid event for peer: {event:?}"),
}
notif.on_handshake_event(peer, event).await;
match ¬if.peers.get(&peer).unwrap().state {
PeerState::Closed { .. } => {}
state => panic!("invalid state for peer: {state:?}"),
}
}
#[tokio::test]
async fn accept_fails_due_to_closed_connection() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut notif, mut handle, _sender, tx) = make_notification_protocol();
let (peer, _service, _receiver) = add_peer();
let handshake = BytesMut::from(&b"hello"[..]);
let mut substream = MockSubstream::new();
substream
.expect_poll_next()
.times(1)
.return_once(|_| Poll::Ready(Some(Ok(BytesMut::from(&b"hello"[..])))));
substream.expect_poll_close().times(1).return_once(|_| Poll::Ready(Ok(())));
let (proto_tx, proto_rx) = channel(256);
tx.send(InnerTransportEvent::ConnectionEstablished {
peer,
endpoint: Endpoint::dialer(Multiaddr::empty(), ConnectionId::from(0usize)),
sender: ConnectionHandle::new(ConnectionId::from(0usize), proto_tx),
connection: ConnectionId::from(0usize),
})
.await
.unwrap();
// connect peer and verify it's in closed state
notif.next_event().await;
match notif.peers.get(&peer).unwrap().state {
PeerState::Closed { .. } => {}
_ => panic!("invalid state for peer"),
}
// open inbound substream and verify that peer state has changed to `InboundOpen`
notif
.on_inbound_substream(
ProtocolName::from("/notif/1"),
None,
peer,
Substream::new_mock(
PeerId::random(),
SubstreamId::from(0usize),
Box::new(substream),
),
)
.await
.unwrap();
match ¬if.peers.get(&peer).unwrap().state {
PeerState::Validating {
direction: Direction::Inbound,
protocol: _,
fallback: None,
inbound: InboundState::ReadingHandshake,
outbound: OutboundState::Closed,
} => {}
state => panic!("invalid state for peer: {state:?}"),
}
// get negotiation event
let (peer, event) = notif.negotiation.next().await.unwrap();
notif.on_handshake_event(peer, event).await;
// user protocol receives the substream for validation
assert_eq!(
handle.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer,
handshake: handshake.into()
},
);
// drop the connection and verify that the protocol doesn't make any outbound substream
// requests and instead marks the connection as closed
drop(proto_rx);
assert!(notif.on_validation_result(peer, ValidationResult::Accept).await.is_err());
match ¬if.peers.get(&peer).unwrap().state {
PeerState::Closed { .. } => {}
state => panic!("invalid state for peer: {state:?}"),
}
}
#[tokio::test]
#[should_panic]
#[cfg(debug_assertions)]
async fn open_substream_accepted() {
use tokio::sync::oneshot;
let (mut notif, _handle, _sender, _tx) = make_notification_protocol();
let (peer, _service, _receiver) = add_peer();
let (shutdown, _rx) = oneshot::channel();
notif.peers.insert(
peer,
PeerContext {
state: PeerState::Open { shutdown },
},
);
// try to accept a closed substream
notif.on_close_substream(peer).await;
assert!(notif.on_validation_result(peer, ValidationResult::Accept).await.is_err());
}
#[tokio::test]
#[should_panic]
#[cfg(debug_assertions)]
async fn open_substream_rejected() {
let (mut notif, _handle, _sender, _tx) = make_notification_protocol();
let (peer, _service, _receiver) = add_peer();
let (shutdown, _rx) = oneshot::channel();
notif.peers.insert(
peer,
PeerContext {
state: PeerState::Open { shutdown },
},
);
// try to reject a closed substream
notif.on_close_substream(peer).await;
assert!(notif.on_validation_result(peer, ValidationResult::Reject).await.is_err());
}
// File: src/protocol/libp2p/identify.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! [`/ipfs/identify/1.0.0`](https://github.com/libp2p/specs/blob/master/identify/README.md) implementation.
use crate::{
codec::ProtocolCodec,
crypto::PublicKey,
error::{Error, SubstreamError},
protocol::{Direction, TransportEvent, TransportService},
substream::Substream,
transport::Endpoint,
types::{protocol::ProtocolName, SubstreamId},
utils::futures_stream::FuturesStream,
PeerId, DEFAULT_CHANNEL_SIZE,
};
use futures::{future::BoxFuture, Stream, StreamExt};
use multiaddr::Multiaddr;
use prost::Message;
use tokio::sync::mpsc::{channel, Sender};
use tokio_stream::wrappers::ReceiverStream;
use std::{
collections::{HashMap, HashSet},
time::Duration,
};
/// Log target for the file.
const LOG_TARGET: &str = "litep2p::ipfs::identify";
/// IPFS Identify protocol name.
const PROTOCOL_NAME: &str = "/ipfs/id/1.0.0";
/// IPFS Identify push protocol name.
const _PUSH_PROTOCOL_NAME: &str = "/ipfs/id/push/1.0.0";
/// Default agent version.
const DEFAULT_AGENT: &str = "litep2p/1.0.0";
/// Maximum size for `/ipfs/id/1.0.0` payloads.
// TODO: https://github.com/paritytech/litep2p/issues/334 what is the max size?
const IDENTIFY_PAYLOAD_SIZE: usize = 4096;
mod identify_schema {
include!(concat!(env!("OUT_DIR"), "/identify.rs"));
}
/// Identify configuration.
pub struct Config {
/// Protocol name.
pub(crate) protocol: ProtocolName,
/// Codec used by the protocol.
pub(crate) codec: ProtocolCodec,
/// TX channel for sending events to the user protocol.
tx_event: Sender<IdentifyEvent>,
/// Public key of the local node, filled by `Litep2p`.
pub(crate) public: Option<PublicKey>,
/// Protocols supported by the local node, filled by `Litep2p`.
pub(crate) protocols: Vec<ProtocolName>,
/// Protocol version.
pub(crate) protocol_version: String,
/// User agent.
pub(crate) user_agent: Option<String>,
}
impl Config {
/// Create new [`Config`].
///
/// Returns a config that is given to `Litep2pConfig` and an event stream for
/// [`IdentifyEvent`]s.
pub fn new(
protocol_version: String,
user_agent: Option<String>,
) -> (Self, Box<dyn Stream<Item = IdentifyEvent> + Send + Unpin>) {
let (tx_event, rx_event) = channel(DEFAULT_CHANNEL_SIZE);
(
Self {
tx_event,
public: None,
protocol_version,
user_agent,
codec: ProtocolCodec::UnsignedVarint(Some(IDENTIFY_PAYLOAD_SIZE)),
protocols: Vec::new(),
protocol: ProtocolName::from(PROTOCOL_NAME),
},
Box::new(ReceiverStream::new(rx_event)),
)
}
}
/// Events emitted by Identify protocol.
#[derive(Debug)]
pub enum IdentifyEvent {
/// Peer identified.
PeerIdentified {
/// Peer ID.
peer: PeerId,
/// Protocol version.
protocol_version: Option<String>,
/// User agent.
user_agent: Option<String>,
/// Supported protocols.
supported_protocols: HashSet<ProtocolName>,
/// Observed address.
observed_address: Multiaddr,
/// Listen addresses.
listen_addresses: Vec<Multiaddr>,
},
}
/// Identify response received from remote.
struct IdentifyResponse {
/// Remote peer ID.
peer: PeerId,
/// Protocol version.
protocol_version: Option<String>,
/// User agent.
user_agent: Option<String>,
/// Protocols supported by remote.
supported_protocols: HashSet<String>,
/// Remote's listen addresses.
listen_addresses: Vec<Multiaddr>,
/// Observed address.
observed_address: Option<Multiaddr>,
}
pub(crate) struct Identify {
/// Connection service.
service: TransportService,
/// TX channel for sending events to the user protocol.
tx: Sender<IdentifyEvent>,
/// Connected peers and their observed addresses.
peers: HashMap<PeerId, Endpoint>,
/// Public key of the local node, filled by `Litep2p`.
public: PublicKey,
/// Local peer ID.
local_peer_id: PeerId,
/// Protocol version.
protocol_version: String,
/// User agent.
user_agent: String,
/// Protocols supported by the local node, filled by `Litep2p`.
protocols: Vec<String>,
/// Pending outbound substreams.
pending_outbound: FuturesStream<BoxFuture<'static, crate::Result<IdentifyResponse>>>,
/// Pending inbound substreams.
pending_inbound: FuturesStream<BoxFuture<'static, ()>>,
}
impl Identify {
/// Create new [`Identify`] protocol.
pub(crate) fn new(service: TransportService, config: Config) -> Self {
// The public key is always supplied by litep2p and is the one
// used to identify the local peer. This is a similar story to the
// supported protocols.
let public = config.public.expect("public key to always be supplied by litep2p; qed");
let local_peer_id = public.to_peer_id();
Self {
service,
tx: config.tx_event,
peers: HashMap::new(),
public,
local_peer_id,
protocol_version: config.protocol_version,
user_agent: config.user_agent.unwrap_or(DEFAULT_AGENT.to_string()),
pending_inbound: FuturesStream::new(),
pending_outbound: FuturesStream::new(),
protocols: config.protocols.iter().map(|protocol| protocol.to_string()).collect(),
}
}
/// Connection established to remote peer.
fn on_connection_established(&mut self, peer: PeerId, endpoint: Endpoint) -> crate::Result<()> {
tracing::trace!(target: LOG_TARGET, ?peer, ?endpoint, "connection established");
self.service.open_substream(peer)?;
self.peers.insert(peer, endpoint);
Ok(())
}
/// Connection closed to remote peer.
fn on_connection_closed(&mut self, peer: PeerId) {
tracing::trace!(target: LOG_TARGET, ?peer, "connection closed");
self.peers.remove(&peer);
}
/// Inbound substream opened.
fn on_inbound_substream(
&mut self,
peer: PeerId,
protocol: ProtocolName,
mut substream: Substream,
) {
tracing::trace!(
target: LOG_TARGET,
?peer,
?protocol,
"inbound substream opened"
);
let observed_addr = match self.peers.get(&peer) {
Some(endpoint) => Some(endpoint.address().to_vec()),
None => {
tracing::warn!(
target: LOG_TARGET,
?peer,
%protocol,
"inbound identify substream opened for peer who doesn't exist",
);
None
}
};
let mut listen_addr: HashSet<_> =
self.service.listen_addresses().into_iter().map(|addr| addr.to_vec()).collect();
listen_addr
.extend(self.service.public_addresses().inner.read().iter().map(|addr| addr.to_vec()));
let identify = identify_schema::Identify {
protocol_version: Some(self.protocol_version.clone()),
agent_version: Some(self.user_agent.clone()),
public_key: Some(self.public.to_protobuf_encoding()),
listen_addrs: listen_addr.into_iter().collect(),
observed_addr,
protocols: self.protocols.clone(),
};
tracing::trace!(
target: LOG_TARGET,
?peer,
?identify,
"sending identify response",
);
let mut msg = Vec::with_capacity(identify.encoded_len());
identify.encode(&mut msg).expect("`msg` to have enough capacity");
self.pending_inbound.push(Box::pin(async move {
match tokio::time::timeout(Duration::from_secs(10), substream.send_framed(msg.into()))
.await
{
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?peer,
?error,
"timed out while sending ipfs identify response",
);
}
Ok(Err(error)) => {
tracing::debug!(
target: LOG_TARGET,
?peer,
?error,
"failed to send ipfs identify response",
);
}
Ok(_) => {
substream.close().await;
}
}
}))
}
/// Outbound substream opened.
fn on_outbound_substream(
&mut self,
peer: PeerId,
protocol: ProtocolName,
substream_id: SubstreamId,
mut substream: Substream,
) {
tracing::trace!(
target: LOG_TARGET,
?peer,
?protocol,
?substream_id,
"outbound substream opened"
);
let local_peer_id = self.local_peer_id;
self.pending_outbound.push(Box::pin(async move {
let payload =
match tokio::time::timeout(Duration::from_secs(10), substream.next()).await {
Err(_) => return Err(Error::Timeout),
Ok(None) =>
return Err(Error::SubstreamError(SubstreamError::ReadFailure(Some(
substream_id,
)))),
Ok(Some(Err(error))) => return Err(error.into()),
Ok(Some(Ok(payload))) => payload,
};
let info = identify_schema::Identify::decode(payload.to_vec().as_slice()).map_err(
|err| {
tracing::debug!(target: LOG_TARGET, ?peer, ?err, "peer provided an undecodable identify response");
err
})?;
tracing::trace!(target: LOG_TARGET, ?peer, ?info, "peer identified");
let listen_addresses = info
.listen_addrs
.iter()
.filter_map(|address| {
let address = Multiaddr::try_from(address.clone()).ok()?;
// Ensure the address ends with the provided peer ID and is not empty.
if address.is_empty() {
tracing::debug!(target: LOG_TARGET, ?peer, ?address, "peer provided an empty listen address");
return None;
}
if let Some(multiaddr::Protocol::P2p(peer_id)) = address.iter().last() {
if peer_id != peer.into() {
tracing::debug!(target: LOG_TARGET, ?peer, ?address, "peer provided a listen address with a mismatched peer ID; discarding the address");
return None;
}
}
Some(address)
})
.collect();
let observed_address =
info.observed_addr.and_then(|address| {
let address = Multiaddr::try_from(address).ok()?;
if address.is_empty() {
tracing::debug!(target: LOG_TARGET, ?peer, ?address, "peer provided an empty observed address");
return None;
}
if let Some(multiaddr::Protocol::P2p(peer_id)) = address.iter().last() {
if peer_id != local_peer_id.into() {
tracing::debug!(target: LOG_TARGET, ?peer, ?address, "peer provided an observed address whose peer ID does not match our peer ID; discarding the address");
return None;
}
}
Some(address)
});
let protocol_version = info.protocol_version;
let user_agent = info.agent_version;
Ok(IdentifyResponse {
peer,
protocol_version,
user_agent,
supported_protocols: HashSet::from_iter(info.protocols),
observed_address,
listen_addresses,
})
}));
}
/// Start [`Identify`] event loop.
pub async fn run(mut self) {
tracing::debug!(target: LOG_TARGET, "starting identify event loop");
loop {
tokio::select! {
event = self.service.next() => match event {
None => {
tracing::warn!(target: LOG_TARGET, "transport service stream ended, terminating identify event loop");
return
},
Some(TransportEvent::ConnectionEstablished { peer, endpoint }) => {
let _ = self.on_connection_established(peer, endpoint);
}
Some(TransportEvent::ConnectionClosed { peer }) => {
self.on_connection_closed(peer);
}
Some(TransportEvent::SubstreamOpened {
peer,
protocol,
direction,
substream,
..
}) => match direction {
Direction::Inbound => self.on_inbound_substream(peer, protocol, substream),
Direction::Outbound(substream_id) => self.on_outbound_substream(peer, protocol, substream_id, substream),
},
_ => {}
},
_ = self.pending_inbound.next(), if !self.pending_inbound.is_empty() => {}
event = self.pending_outbound.next(), if !self.pending_outbound.is_empty() => match event {
Some(Ok(response)) => {
let _ = self.tx
.send(IdentifyEvent::PeerIdentified {
peer: response.peer,
protocol_version: response.protocol_version,
user_agent: response.user_agent,
supported_protocols: response.supported_protocols.into_iter().map(From::from).collect(),
observed_address: response.observed_address.map_or(Multiaddr::empty(), |address| address),
listen_addresses: response.listen_addresses,
})
.await;
}
Some(Err(error)) => tracing::debug!(target: LOG_TARGET, ?error, "failed to read ipfs identify response"),
None => {}
}
}
}
}
}
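The listen-address and observed-address handling above boils down to one rule: discard empty addresses, and discard addresses whose trailing `/p2p/<peer-id>` component does not match the expected peer. A simplified, self-contained sketch of that rule, using plain strings in place of `Multiaddr` (the function name and signature are illustrative, not part of the identify API):

```rust
// Keep an address only if it is non-empty and any trailing `/p2p/<peer-id>`
// component matches the peer that reported it. Plain strings stand in for
// `Multiaddr`; this is a sketch of the filtering rule, not the real code.
fn filter_listen_addresses(peer_id: &str, addresses: &[&str]) -> Vec<String> {
    addresses
        .iter()
        .filter_map(|address| {
            if address.is_empty() {
                return None; // discard empty addresses
            }
            // If the address carries a `/p2p/...` suffix, it must match the
            // reporting peer; otherwise the address is discarded.
            if let Some(idx) = address.rfind("/p2p/") {
                let suffix = &address[idx + "/p2p/".len()..];
                if suffix != peer_id {
                    return None;
                }
            }
            Some(address.to_string())
        })
        .collect()
}
```

Addresses without a `/p2p/` suffix pass through unchanged, matching the behavior of the `if let Some(multiaddr::Protocol::P2p(..))` checks above, which only reject on an explicit mismatch.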
#[cfg(test)]
mod tests {
use super::*;
use crate::{config::ConfigBuilder, transport::tcp::config::Config as TcpConfig, Litep2p};
use multiaddr::{Multiaddr, Protocol};
fn create_litep2p() -> (
Litep2p,
Box<dyn Stream<Item = IdentifyEvent> + Send + Unpin>,
PeerId,
) {
let (identify_config, identify) =
Config::new("1.0.0".to_string(), Some("litep2p/1.0.0".to_string()));
let keypair = crate::crypto::ed25519::Keypair::generate();
let peer = PeerId::from_public_key(&crate::crypto::PublicKey::Ed25519(keypair.public()));
let config = ConfigBuilder::new()
.with_keypair(keypair)
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_identify(identify_config)
.build();
(Litep2p::new(config).unwrap(), identify, peer)
}
#[tokio::test]
async fn update_identify_addresses() {
// Create two instances of litep2p
let (mut litep2p1, mut event_stream1, peer1) = create_litep2p();
let (mut litep2p2, mut event_stream2, _peer2) = create_litep2p();
let litep2p1_address = litep2p1.listen_addresses().next().unwrap();
let multiaddr: Multiaddr = "/ip6/::9/tcp/111".parse().unwrap();
// Litep2p1 is now reporting the new address.
assert!(litep2p1.public_addresses().add_address(multiaddr.clone()).unwrap());
// Dial `litep2p1`
litep2p2.dial_address(litep2p1_address.clone()).await.unwrap();
let expected_multiaddr = multiaddr.with(Protocol::P2p(peer1.into()));
tokio::spawn(async move {
loop {
tokio::select! {
_ = litep2p1.next_event() => {}
_event = event_stream1.next() => {}
}
}
});
loop {
tokio::select! {
_ = litep2p2.next_event() => {}
event = event_stream2.next() => match event {
Some(IdentifyEvent::PeerIdentified {
listen_addresses,
..
}) => {
assert!(listen_addresses.iter().any(|address| address == &expected_multiaddr));
break;
}
_ => {}
}
}
}
}
}
// File: src/protocol/libp2p/mod.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Supported [`libp2p`](https://libp2p.io/) protocols.
pub mod bitswap;
pub mod identify;
pub mod kademlia;
pub mod ping;
// File: src/protocol/libp2p/ping/config.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
codec::ProtocolCodec, protocol::libp2p::ping::PingEvent, types::protocol::ProtocolName,
DEFAULT_CHANNEL_SIZE,
};
use futures::Stream;
use tokio::sync::mpsc::{channel, Sender};
use tokio_stream::wrappers::ReceiverStream;
/// IPFS Ping protocol name as a string.
pub const PROTOCOL_NAME: &str = "/ipfs/ping/1.0.0";
/// Size for `/ipfs/ping/1.0.0` payloads.
const PING_PAYLOAD_SIZE: usize = 32;
/// Maximum PING failures.
const MAX_FAILURES: usize = 3;
/// Ping configuration.
pub struct Config {
/// Protocol name.
pub(crate) protocol: ProtocolName,
/// Codec used by the protocol.
pub(crate) codec: ProtocolCodec,
/// Maximum failures before the peer is considered unreachable.
pub(crate) max_failures: usize,
/// TX channel for sending events to the user protocol.
pub(crate) tx_event: Sender<PingEvent>,
}
impl Config {
/// Create new [`Config`] with default values.
///
/// Returns a config that is given to `Litep2pConfig` and an event stream for [`PingEvent`]s.
pub fn default() -> (Self, Box<dyn Stream<Item = PingEvent> + Send + Unpin>) {
let (tx_event, rx_event) = channel(DEFAULT_CHANNEL_SIZE);
(
Self {
tx_event,
max_failures: MAX_FAILURES,
protocol: ProtocolName::from(PROTOCOL_NAME),
codec: ProtocolCodec::Identity(PING_PAYLOAD_SIZE),
},
Box::new(ReceiverStream::new(rx_event)),
)
}
}
/// Ping configuration builder.
pub struct ConfigBuilder {
/// Protocol name.
protocol: ProtocolName,
/// Codec used by the protocol.
codec: ProtocolCodec,
/// Maximum failures before the peer is considered unreachable.
max_failures: usize,
}
impl Default for ConfigBuilder {
fn default() -> Self {
Self::new()
}
}
impl ConfigBuilder {
/// Create new default [`Config`] which can be modified by the user.
pub fn new() -> Self {
Self {
max_failures: MAX_FAILURES,
protocol: ProtocolName::from(PROTOCOL_NAME),
codec: ProtocolCodec::Identity(PING_PAYLOAD_SIZE),
}
}
/// Set the maximum number of failures tolerated by the protocol before a peer is considered unreachable.
pub fn with_max_failure(mut self, max_failures: usize) -> Self {
self.max_failures = max_failures;
self
}
/// Build [`Config`].
pub fn build(self) -> (Config, Box<dyn Stream<Item = PingEvent> + Send + Unpin>) {
let (tx_event, rx_event) = channel(DEFAULT_CHANNEL_SIZE);
(
Config {
tx_event,
max_failures: self.max_failures,
protocol: self.protocol,
codec: self.codec,
},
Box::new(ReceiverStream::new(rx_event)),
)
}
}
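Both `Config::default()` and `ConfigBuilder::build()` above hand back a `Config` (kept by the protocol, holding the sending half of a channel) together with the receiving half returned to the user as an event stream. A minimal std-only sketch of that split; the names `Config`, `Event`, `build`, and `emit` here are illustrative stand-ins, not litep2p's API:

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

/// Illustrative event type, standing in for `PingEvent`.
#[derive(Debug, PartialEq)]
pub enum Event {
    Ping(u64),
}

/// Illustrative config holding the sending half of the event channel.
pub struct Config {
    pub max_failures: usize,
    tx_event: Sender<Event>,
}

impl Config {
    /// Build a config plus the event receiver handed back to the user,
    /// mirroring the `(Self, Box<dyn Stream<...>>)` return shape above.
    pub fn build(max_failures: usize) -> (Self, Receiver<Event>) {
        let (tx_event, rx_event) = channel();
        (
            Config {
                max_failures,
                tx_event,
            },
            rx_event,
        )
    }

    /// The protocol side keeps the config and emits events through it.
    pub fn emit(&self, event: Event) {
        let _ = self.tx_event.send(event);
    }
}

fn main() {
    let (config, events) = Config::build(3);
    config.emit(Event::Ping(42));
    assert_eq!(events.recv().unwrap(), Event::Ping(42));
    assert_eq!(config.max_failures, 3);
}
```

The real code uses a bounded `tokio::sync::mpsc` channel of size `DEFAULT_CHANNEL_SIZE` and wraps the receiver in a `ReceiverStream`; the unbounded std channel here keeps the sketch dependency-free.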
// File: src/protocol/libp2p/ping/mod.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! [`/ipfs/ping/1.0.0`](https://github.com/libp2p/specs/blob/master/ping/ping.md) implementation.
use crate::{
error::{Error, SubstreamError},
protocol::{Direction, TransportEvent, TransportService},
substream::Substream,
types::SubstreamId,
PeerId,
};
use futures::{future::BoxFuture, stream::FuturesUnordered, StreamExt};
use tokio::sync::mpsc::Sender;
use std::{
collections::HashSet,
time::{Duration, Instant},
};
pub use config::{Config, ConfigBuilder};
mod config;
// TODO: https://github.com/paritytech/litep2p/issues/132 let the user handle max failures
/// Log target for the file.
const LOG_TARGET: &str = "litep2p::ipfs::ping";
/// Events emitted by the ping protocol.
#[derive(Debug)]
pub enum PingEvent {
/// Ping time with remote peer.
Ping {
/// Peer ID.
peer: PeerId,
/// Measured ping time with the peer.
ping: Duration,
},
}
/// Ping protocol.
pub(crate) struct Ping {
/// Maximum failures before the peer is considered unreachable.
_max_failures: usize,
    /// Connection service.
service: TransportService,
/// TX channel for sending events to the user protocol.
tx: Sender<PingEvent>,
/// Connected peers.
peers: HashSet<PeerId>,
/// Pending outbound substreams.
pending_outbound: FuturesUnordered<BoxFuture<'static, crate::Result<(PeerId, Duration)>>>,
/// Pending inbound substreams.
pending_inbound: FuturesUnordered<BoxFuture<'static, crate::Result<()>>>,
}
impl Ping {
/// Create new [`Ping`] protocol.
pub fn new(service: TransportService, config: Config) -> Self {
Self {
service,
tx: config.tx_event,
peers: HashSet::new(),
pending_outbound: FuturesUnordered::new(),
pending_inbound: FuturesUnordered::new(),
_max_failures: config.max_failures,
}
}
/// Connection established to remote peer.
fn on_connection_established(&mut self, peer: PeerId) -> crate::Result<()> {
tracing::trace!(target: LOG_TARGET, ?peer, "connection established");
self.service.open_substream(peer)?;
self.peers.insert(peer);
Ok(())
}
/// Connection closed to remote peer.
fn on_connection_closed(&mut self, peer: PeerId) {
tracing::trace!(target: LOG_TARGET, ?peer, "connection closed");
self.peers.remove(&peer);
}
/// Handle outbound substream.
fn on_outbound_substream(
&mut self,
peer: PeerId,
substream_id: SubstreamId,
mut substream: Substream,
) {
tracing::trace!(target: LOG_TARGET, ?peer, "handle outbound substream");
self.pending_outbound.push(Box::pin(async move {
let future = async move {
// TODO: https://github.com/paritytech/litep2p/issues/134 generate random payload and verify it
substream.send_framed(vec![0u8; 32].into()).await?;
let now = Instant::now();
let _ = substream.next().await.ok_or(Error::SubstreamError(
SubstreamError::ReadFailure(Some(substream_id)),
))?;
let _ = substream.close().await;
Ok(now.elapsed())
};
match tokio::time::timeout(Duration::from_secs(10), future).await {
Err(_) => Err(Error::Timeout),
Ok(Err(error)) => Err(error),
Ok(Ok(elapsed)) => Ok((peer, elapsed)),
}
}));
}
/// Substream opened to remote peer.
fn on_inbound_substream(&mut self, peer: PeerId, mut substream: Substream) {
tracing::trace!(target: LOG_TARGET, ?peer, "handle inbound substream");
self.pending_inbound.push(Box::pin(async move {
let future = async move {
let payload = substream
.next()
.await
.ok_or(Error::SubstreamError(SubstreamError::ReadFailure(None)))??;
substream.send_framed(payload.freeze()).await?;
let _ = substream.next().await.map(|_| ());
Ok(())
};
match tokio::time::timeout(Duration::from_secs(10), future).await {
Err(_) => Err(Error::Timeout),
Ok(Err(error)) => Err(error),
Ok(Ok(())) => Ok(()),
}
}));
}
/// Start [`Ping`] event loop.
pub async fn run(mut self) {
tracing::debug!(target: LOG_TARGET, "starting ping event loop");
loop {
tokio::select! {
event = self.service.next() => match event {
Some(TransportEvent::ConnectionEstablished { peer, .. }) => {
let _ = self.on_connection_established(peer);
}
Some(TransportEvent::ConnectionClosed { peer }) => {
self.on_connection_closed(peer);
}
Some(TransportEvent::SubstreamOpened {
peer,
substream,
direction,
..
}) => match direction {
Direction::Inbound => {
self.on_inbound_substream(peer, substream);
}
Direction::Outbound(substream_id) => {
self.on_outbound_substream(peer, substream_id, substream);
}
},
Some(_) => {}
None => return,
},
_event = self.pending_inbound.next(), if !self.pending_inbound.is_empty() => {}
event = self.pending_outbound.next(), if !self.pending_outbound.is_empty() => {
match event {
Some(Ok((peer, elapsed))) => {
let _ = self
.tx
.send(PingEvent::Ping {
peer,
ping: elapsed,
})
.await;
}
event => tracing::debug!(target: LOG_TARGET, "failed to handle ping for an outbound peer: {event:?}"),
}
}
}
}
}
}
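The outbound handler above writes a 32-byte payload, waits for the echo, and reports the elapsed time; the inbound handler echoes whatever it reads. The same send/echo/RTT shape, sketched outside litep2p over a blocking loopback TCP pair (std only, no tokio; the real protocol's 10-second timeout and the pending random-payload verification from the TODO above are not reproduced):

```rust
use std::io::{Read, Write};
use std::net::{SocketAddr, TcpListener, TcpStream};
use std::thread;
use std::time::{Duration, Instant};

const PING_PAYLOAD_SIZE: usize = 32;

/// Inbound side: read one ping payload and echo it back, as
/// `on_inbound_substream` does with `send_framed(payload.freeze())`.
fn serve_one_echo(listener: TcpListener) {
    let (mut stream, _) = listener.accept().unwrap();
    let mut payload = [0u8; PING_PAYLOAD_SIZE];
    stream.read_exact(&mut payload).unwrap();
    stream.write_all(&payload).unwrap();
}

/// Outbound side: send the payload, await the echo, measure the RTT.
fn ping_once(addr: SocketAddr) -> (Duration, [u8; PING_PAYLOAD_SIZE]) {
    let mut stream = TcpStream::connect(addr).unwrap();
    let payload = [0u8; PING_PAYLOAD_SIZE]; // all zeroes, like `vec![0u8; 32]` above
    stream.write_all(&payload).unwrap();
    let now = Instant::now(); // timer starts after the send, as in the real code
    let mut echoed = [0u8; PING_PAYLOAD_SIZE];
    stream.read_exact(&mut echoed).unwrap();
    (now.elapsed(), echoed)
}

fn main() {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let server = thread::spawn(move || serve_one_echo(listener));
    let (rtt, echoed) = ping_once(addr);
    server.join().unwrap();
    assert_eq!(echoed, [0u8; PING_PAYLOAD_SIZE]);
    assert!(rtt < Duration::from_secs(10)); // the real protocol times out at 10s
}
```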
// File: src/protocol/libp2p/bitswap/config.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
codec::ProtocolCodec,
protocol::libp2p::bitswap::{BitswapCommand, BitswapEvent, BitswapHandle},
types::protocol::ProtocolName,
DEFAULT_CHANNEL_SIZE,
};
use tokio::sync::mpsc::{channel, Receiver, Sender};
/// IPFS Bitswap protocol name as a string.
pub const PROTOCOL_NAME: &str = "/ipfs/bitswap/1.2.0";
/// Maximum size for `/ipfs/bitswap/1.2.0` payloads.
const MAX_PAYLOAD_SIZE: usize = 2_097_152;
/// Bitswap configuration.
#[derive(Debug)]
pub struct Config {
/// Protocol name.
pub(crate) protocol: ProtocolName,
/// Protocol codec.
pub(crate) codec: ProtocolCodec,
/// TX channel for sending events to the user protocol.
pub(super) event_tx: Sender<BitswapEvent>,
/// RX channel for receiving commands from the user.
pub(super) cmd_rx: Receiver<BitswapCommand>,
}
impl Config {
/// Create new [`Config`].
pub fn new() -> (Self, BitswapHandle) {
let (event_tx, event_rx) = channel(DEFAULT_CHANNEL_SIZE);
let (cmd_tx, cmd_rx) = channel(DEFAULT_CHANNEL_SIZE);
(
Self {
cmd_rx,
event_tx,
protocol: ProtocolName::from(PROTOCOL_NAME),
codec: ProtocolCodec::UnsignedVarint(Some(MAX_PAYLOAD_SIZE)),
},
BitswapHandle::new(event_rx, cmd_tx),
)
}
}
// File: src/protocol/libp2p/bitswap/mod.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! [`/ipfs/bitswap/1.2.0`](https://github.com/ipfs/specs/blob/main/BITSWAP.md) implementation.
use crate::{
error::{Error, ImmediateDialError},
protocol::{Direction, TransportEvent, TransportService},
substream::Substream,
types::{
multihash::{Code, MultihashDigest},
SubstreamId,
},
PeerId,
};
use cid::{Cid, Version};
use prost::Message;
use tokio::sync::mpsc::{Receiver, Sender};
use tokio_stream::{StreamExt, StreamMap};
pub use config::Config;
pub use handle::{BitswapCommand, BitswapEvent, BitswapHandle, ResponseType};
pub use schema::bitswap::{wantlist::WantType, BlockPresenceType};
use std::{
collections::{hash_map::Entry, HashMap, HashSet},
time::Duration,
};
mod config;
mod handle;
mod schema {
pub(super) mod bitswap {
include!(concat!(env!("OUT_DIR"), "/bitswap.rs"));
}
}
/// Log target for the file.
const LOG_TARGET: &str = "litep2p::ipfs::bitswap";
/// Write timeout for outbound messages.
const WRITE_TIMEOUT: Duration = Duration::from_secs(15);
/// Bitswap metadata.
#[derive(Debug)]
struct Prefix {
/// CID version.
version: Version,
/// CID codec.
codec: u64,
/// CID multihash type.
multihash_type: u64,
/// CID multihash length.
multihash_len: u8,
}
impl Prefix {
/// Convert the prefix to encoded bytes.
pub fn to_bytes(&self) -> Vec<u8> {
let mut res = Vec::with_capacity(4 * 10);
let mut buf = unsigned_varint::encode::u64_buffer();
let version = unsigned_varint::encode::u64(self.version.into(), &mut buf);
res.extend_from_slice(version);
let mut buf = unsigned_varint::encode::u64_buffer();
let codec = unsigned_varint::encode::u64(self.codec, &mut buf);
res.extend_from_slice(codec);
let mut buf = unsigned_varint::encode::u64_buffer();
let multihash_type = unsigned_varint::encode::u64(self.multihash_type, &mut buf);
res.extend_from_slice(multihash_type);
let mut buf = unsigned_varint::encode::u64_buffer();
let multihash_len = unsigned_varint::encode::u64(self.multihash_len as u64, &mut buf);
res.extend_from_slice(multihash_len);
res
}
/// Parse byte representation of prefix.
pub fn from_bytes(prefix_bytes: &[u8]) -> Option<Prefix> {
let (version, rest) = unsigned_varint::decode::u64(prefix_bytes).ok()?;
let (codec, rest) = unsigned_varint::decode::u64(rest).ok()?;
let (multihash_type, rest) = unsigned_varint::decode::u64(rest).ok()?;
let (multihash_len, rest) = unsigned_varint::decode::u64(rest).ok()?;
if !rest.is_empty() {
return None;
}
let version = Version::try_from(version).ok()?;
let multihash_len = u8::try_from(multihash_len).ok()?;
Some(Prefix {
version,
codec,
multihash_type,
multihash_len,
})
}
}
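`Prefix::to_bytes`/`from_bytes` above delegate to the `unsigned_varint` crate; the wire format itself is the standard unsigned varint (LEB128-style: 7 payload bits per byte, little-endian groups, MSB set on every byte except the last). A self-contained sketch of that encoding with hand-rolled helpers instead of the crate:

```rust
/// Encode `value` as an unsigned varint, appending to `out`.
fn encode_u64(mut value: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (value & 0x7f) as u8;
        value >>= 7;
        if value == 0 {
            out.push(byte); // final byte: continuation bit clear
            break;
        }
        out.push(byte | 0x80); // continuation bit set
    }
}

/// Decode one unsigned varint from `input`, returning (value, remaining bytes).
fn decode_u64(input: &[u8]) -> Option<(u64, &[u8])> {
    let mut value = 0u64;
    for (i, &byte) in input.iter().enumerate() {
        if i >= 10 {
            return None; // a u64 never needs more than 10 varint bytes
        }
        value |= u64::from(byte & 0x7f) << (7 * i);
        if byte & 0x80 == 0 {
            return Some((value, &input[i + 1..]));
        }
    }
    None // truncated input
}

fn main() {
    // Round-trip the four prefix fields: version, codec, multihash type, length
    // (values here are illustrative: CIDv1, dag-pb, sha2-256, 32 bytes).
    let fields = [1u64, 0x70, 0x12, 32];
    let mut bytes = Vec::new();
    for &field in &fields {
        encode_u64(field, &mut bytes);
    }
    let mut rest: &[u8] = &bytes;
    for &expected in &fields {
        let (value, tail) = decode_u64(rest).unwrap();
        assert_eq!(value, expected);
        rest = tail;
    }
    // Like `Prefix::from_bytes`, a valid prefix consumes the whole input.
    assert!(rest.is_empty());
}
```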
/// Action to perform when substream is opened.
#[derive(Debug)]
enum SubstreamAction {
/// Send a request.
SendRequest(Vec<(Cid, WantType)>),
/// Send a response.
SendResponse(Vec<ResponseType>),
}
/// Bitswap protocol.
pub(crate) struct Bitswap {
    /// Connection service.
service: TransportService,
/// TX channel for sending events to the user protocol.
event_tx: Sender<BitswapEvent>,
/// RX channel for receiving commands from `BitswapHandle`.
cmd_rx: Receiver<BitswapCommand>,
/// Pending outbound actions.
pending_outbound: HashMap<PeerId, Vec<SubstreamAction>>,
/// Inbound substreams.
inbound: StreamMap<PeerId, Substream>,
/// Outbound substreams.
outbound: HashMap<PeerId, Substream>,
/// Peers waiting for dial.
pending_dials: HashSet<PeerId>,
}
impl Bitswap {
/// Create new [`Bitswap`] protocol.
pub(crate) fn new(service: TransportService, config: Config) -> Self {
Self {
service,
cmd_rx: config.cmd_rx,
event_tx: config.event_tx,
pending_outbound: HashMap::new(),
inbound: StreamMap::new(),
outbound: HashMap::new(),
pending_dials: HashSet::new(),
}
}
/// Substream opened to remote peer.
fn on_inbound_substream(&mut self, peer: PeerId, substream: Substream) {
tracing::debug!(target: LOG_TARGET, ?peer, "handle inbound substream");
if self.inbound.insert(peer, substream).is_some() {
// Only one inbound substream per peer is allowed in order to constrain resources.
tracing::debug!(
target: LOG_TARGET,
?peer,
"dropping inbound substream as remote opened a new one",
);
}
}
/// Message received from remote peer.
async fn on_message_received(
&mut self,
peer: PeerId,
message: bytes::BytesMut,
) -> Result<(), Error> {
tracing::trace!(target: LOG_TARGET, ?peer, "handle inbound message");
let message = schema::bitswap::Message::decode(message)?;
// Check if this is a request (has wantlist with entries).
if let Some(wantlist) = &message.wantlist {
if !wantlist.entries.is_empty() {
let cids = wantlist
.entries
.iter()
.filter_map(|entry| {
let cid = Cid::read_bytes(entry.block.as_slice()).ok()?;
let want_type = match entry.want_type {
0 => WantType::Block,
1 => WantType::Have,
_ => return None,
};
Some((cid, want_type))
})
.collect::<Vec<_>>();
if !cids.is_empty() {
let _ = self.event_tx.send(BitswapEvent::Request { peer, cids }).await;
}
}
}
// Check if this is a response (has payload or block presences).
if !message.payload.is_empty() || !message.block_presences.is_empty() {
let mut responses = Vec::new();
// Process payload (blocks).
for block in message.payload {
let Some(Prefix {
version,
codec,
multihash_type,
multihash_len: _,
}) = Prefix::from_bytes(&block.prefix)
else {
tracing::trace!(target: LOG_TARGET, ?peer, "invalid CID prefix received");
continue;
};
// Create multihash from the block data.
let Ok(code) = Code::try_from(multihash_type) else {
tracing::trace!(
target: LOG_TARGET,
?peer,
multihash_type,
                    "unsupported multihash type",
);
continue;
};
let multihash = code.digest(&block.data);
            // Convert the multihash to the version supported by the `cid` crate.
let Ok(multihash) =
cid::multihash::Multihash::wrap(multihash.code(), multihash.digest())
else {
tracing::trace!(
target: LOG_TARGET,
?peer,
multihash_type,
"multihash size > 64 unsupported",
);
continue;
};
match Cid::new(version, codec, multihash) {
Ok(cid) => responses.push(ResponseType::Block {
cid,
block: block.data,
}),
Err(error) => tracing::trace!(
target: LOG_TARGET,
?peer,
?error,
"invalid CID received",
),
}
}
// Process block presences.
for presence in message.block_presences {
if let Ok(cid) = Cid::read_bytes(&presence.cid[..]) {
let presence_type = match presence.r#type {
0 => BlockPresenceType::Have,
1 => BlockPresenceType::DontHave,
_ => continue,
};
responses.push(ResponseType::Presence {
cid,
presence: presence_type,
});
}
}
if !responses.is_empty() {
let _ = self.event_tx.send(BitswapEvent::Response { peer, responses }).await;
}
}
Ok(())
}
/// Handle opened outbound substream.
async fn on_outbound_substream(
&mut self,
peer: PeerId,
substream_id: SubstreamId,
mut substream: Substream,
) {
let Some(actions) = self.pending_outbound.remove(&peer) else {
tracing::warn!(target: LOG_TARGET, ?peer, ?substream_id, "pending outbound entry doesn't exist");
return;
};
tracing::trace!(target: LOG_TARGET, ?peer, "handle outbound substream");
for action in actions {
match action {
SubstreamAction::SendRequest(cids) => {
if send_request(&mut substream, cids).await.is_err() {
// Drop the substream and all actions in case of sending error.
tracing::debug!(target: LOG_TARGET, ?peer, "bitswap request failed");
return;
}
}
SubstreamAction::SendResponse(entries) => {
if send_response(&mut substream, entries).await.is_err() {
// Drop the substream and all actions in case of sending error.
tracing::debug!(target: LOG_TARGET, ?peer, "bitswap response failed");
return;
}
}
}
}
self.outbound.insert(peer, substream);
}
/// Handle connection established event.
fn on_connection_established(&mut self, peer: PeerId) {
// If we have pending actions for this peer, open a substream.
if self.pending_dials.remove(&peer) {
tracing::trace!(
target: LOG_TARGET,
?peer,
"open substream after connection established",
);
if let Err(error) = self.service.open_substream(peer) {
tracing::debug!(
target: LOG_TARGET,
?peer,
?error,
"failed to open substream after connection established",
);
// Drop all pending actions; they are not going to be handled anyway, and we need
// the entry to be empty to properly open subsequent substreams.
self.pending_outbound.remove(&peer);
}
}
}
/// Open substream or dial a peer.
fn open_substream_or_dial(&mut self, peer: PeerId) {
tracing::trace!(target: LOG_TARGET, ?peer, "open substream");
if let Err(error) = self.service.open_substream(peer) {
tracing::trace!(
target: LOG_TARGET,
?peer,
?error,
"failed to open substream, dialing peer",
);
// Failed to open substream, try to dial the peer.
match self.service.dial(&peer) {
Ok(()) => {
// Store the peer to open a substream once it is connected.
self.pending_dials.insert(peer);
}
Err(ImmediateDialError::AlreadyConnected) => {
// By the time we tried to dial peer, it got connected.
if let Err(error) = self.service.open_substream(peer) {
tracing::trace!(
target: LOG_TARGET,
?peer,
?error,
"failed to open substream for a second time",
);
}
}
Err(error) => {
tracing::debug!(target: LOG_TARGET, ?peer, ?error, "failed to dial peer");
}
}
}
}
/// Handle bitswap request.
async fn on_bitswap_request(&mut self, peer: PeerId, cids: Vec<(Cid, WantType)>) {
// Try to send request over existing substream first.
if let Entry::Occupied(mut entry) = self.outbound.entry(peer) {
if send_request(entry.get_mut(), cids.clone()).await.is_ok() {
return;
} else {
tracing::debug!(
target: LOG_TARGET,
?peer,
"failed to send request over existing substream",
);
entry.remove();
}
}
// Store pending actions for once the substream is opened.
let pending_actions = self.pending_outbound.entry(peer).or_default();
// If we inserted the default empty entry above, this means no pending substream
// was requested by previous calls to `on_bitswap_request`. We will request a substream
// in this case below.
let no_substream_pending = pending_actions.is_empty();
pending_actions.push(SubstreamAction::SendRequest(cids));
if no_substream_pending {
self.open_substream_or_dial(peer);
}
}
/// Handle bitswap response.
async fn on_bitswap_response(&mut self, peer: PeerId, responses: Vec<ResponseType>) {
// Try to send response over existing substream first.
if let Entry::Occupied(mut entry) = self.outbound.entry(peer) {
if send_response(entry.get_mut(), responses.clone()).await.is_ok() {
return;
} else {
tracing::debug!(
target: LOG_TARGET,
?peer,
"failed to send response over existing substream",
);
entry.remove();
}
}
// Store pending actions for later and open substream if not requested already.
let pending_actions = self.pending_outbound.entry(peer).or_default();
let no_pending_substream = pending_actions.is_empty();
pending_actions.push(SubstreamAction::SendResponse(responses));
if no_pending_substream {
self.open_substream_or_dial(peer);
}
}
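`on_bitswap_request` and `on_bitswap_response` share one rule: try the cached outbound substream first, otherwise queue the action under the peer and request a new substream (or dial) only when this is the *first* queued action, so at most one open is in flight per peer. A std-only sketch of that queueing rule, with illustrative `Peer`/`Action` types in place of `PeerId` and `SubstreamAction`:

```rust
use std::collections::HashMap;

type Peer = u32;
type Action = &'static str;

#[derive(Default)]
struct Outbound {
    /// Actions queued per peer until a substream opens,
    /// standing in for `pending_outbound` above.
    pending: HashMap<Peer, Vec<Action>>,
    /// Substream-open requests issued (at most one per peer while pending).
    opened: Vec<Peer>,
}

impl Outbound {
    /// Queue `action`; request a substream only for the first queued action,
    /// mirroring the `no_substream_pending` check above.
    fn queue(&mut self, peer: Peer, action: Action) {
        let pending = self.pending.entry(peer).or_default();
        let first = pending.is_empty();
        pending.push(action);
        if first {
            self.opened.push(peer); // stand-in for open_substream_or_dial()
        }
    }

    /// Substream opened: drain and return every queued action for `peer`,
    /// as `on_outbound_substream` does before flushing them.
    fn on_substream_opened(&mut self, peer: Peer) -> Vec<Action> {
        self.pending.remove(&peer).unwrap_or_default()
    }
}

fn main() {
    let mut outbound = Outbound::default();
    outbound.queue(1, "request");
    outbound.queue(1, "response"); // second action: no extra open request
    assert_eq!(outbound.opened, vec![1]);
    assert_eq!(outbound.on_substream_opened(1), vec!["request", "response"]);
}
```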
/// Start [`Bitswap`] event loop.
pub async fn run(mut self) {
tracing::debug!(target: LOG_TARGET, "starting bitswap event loop");
loop {
tokio::select! {
event = self.service.next() => match event {
Some(TransportEvent::ConnectionEstablished { peer, .. }) => {
self.on_connection_established(peer);
}
Some(TransportEvent::SubstreamOpened {
peer,
substream,
direction,
..
}) => match direction {
Direction::Inbound => self.on_inbound_substream(peer, substream),
Direction::Outbound(substream_id) =>
self.on_outbound_substream(peer, substream_id, substream).await,
},
None => return,
event => tracing::trace!(target: LOG_TARGET, ?event, "unhandled event"),
},
command = self.cmd_rx.recv() => match command {
Some(BitswapCommand::SendRequest { peer, cids }) => {
self.on_bitswap_request(peer, cids).await;
}
Some(BitswapCommand::SendResponse { peer, responses }) => {
self.on_bitswap_response(peer, responses).await;
}
None => return,
},
Some((peer, message)) = self.inbound.next(), if !self.inbound.is_empty() => {
match message {
Ok(message) => if let Err(e) = self.on_message_received(peer, message).await {
tracing::trace!(
target: LOG_TARGET,
?peer,
?e,
"error handling inbound message, dropping substream",
);
self.inbound.remove(&peer);
},
Err(e) => {
tracing::trace!(
target: LOG_TARGET,
?peer,
?e,
"inbound substream closed",
);
self.inbound.remove(&peer);
},
}
}
}
}
}
}
async fn send_request(substream: &mut Substream, cids: Vec<(Cid, WantType)>) -> Result<(), ()> {
let request = schema::bitswap::Message {
wantlist: Some(schema::bitswap::Wantlist {
entries: cids
.into_iter()
.map(|(cid, want_type)| schema::bitswap::wantlist::Entry {
block: cid.to_bytes(),
priority: 1,
cancel: false,
want_type: want_type as i32,
send_dont_have: false,
})
.collect(),
full: false,
}),
..Default::default()
};
let message = request.encode_to_vec().into();
if let Ok(Ok(())) = tokio::time::timeout(WRITE_TIMEOUT, substream.send_framed(message)).await {
Ok(())
} else {
Err(())
}
}
async fn send_response(substream: &mut Substream, entries: Vec<ResponseType>) -> Result<(), ()> {
let mut response = schema::bitswap::Message {
// `wantlist` field must always be present. This is what the official Kubo
// IPFS implementation does.
wantlist: Some(Default::default()),
..Default::default()
};
for entry in entries {
match entry {
ResponseType::Block { cid, block } => {
let prefix = Prefix {
version: cid.version(),
codec: cid.codec(),
multihash_type: cid.hash().code(),
multihash_len: cid.hash().size(),
}
.to_bytes();
response.payload.push(schema::bitswap::Block {
prefix,
data: block,
});
}
ResponseType::Presence { cid, presence } => {
response.block_presences.push(schema::bitswap::BlockPresence {
cid: cid.to_bytes(),
r#type: presence as i32,
});
}
}
}
let message = response.encode_to_vec().into();
if let Ok(Ok(())) = tokio::time::timeout(WRITE_TIMEOUT, substream.send_framed(message)).await {
Ok(())
} else {
Err(())
}
}
// File: src/protocol/libp2p/bitswap/handle.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Bitswap handle for communicating with the bitswap protocol implementation.
use crate::{
protocol::libp2p::bitswap::{BlockPresenceType, WantType},
PeerId,
};
use cid::Cid;
use tokio::sync::mpsc::{Receiver, Sender};
use std::{
pin::Pin,
task::{Context, Poll},
};
/// Events emitted by the bitswap protocol.
#[derive(Debug)]
pub enum BitswapEvent {
/// Bitswap request.
Request {
/// Peer ID.
peer: PeerId,
/// Requested CIDs.
cids: Vec<(Cid, WantType)>,
},
/// Bitswap response.
Response {
/// Peer ID.
peer: PeerId,
/// Response entries: vector of CIDs with either block data or block presence.
responses: Vec<ResponseType>,
},
}
/// Response type for received bitswap request.
#[derive(Debug, Clone)]
#[cfg_attr(feature = "fuzz", derive(serde::Serialize, serde::Deserialize))]
pub enum ResponseType {
/// Block.
Block {
/// CID.
cid: Cid,
/// Found block.
block: Vec<u8>,
},
    /// Presence.
Presence {
/// CID.
cid: Cid,
/// Whether the requested block exists or not.
presence: BlockPresenceType,
},
}
/// Commands sent from the user to `Bitswap`.
#[derive(Debug)]
#[cfg_attr(feature = "fuzz", derive(serde::Serialize, serde::Deserialize))]
pub enum BitswapCommand {
/// Send bitswap request.
SendRequest {
/// Peer ID.
peer: PeerId,
/// Requested CIDs.
cids: Vec<(Cid, WantType)>,
},
/// Send bitswap response.
SendResponse {
/// Peer ID.
peer: PeerId,
/// CIDs.
responses: Vec<ResponseType>,
},
}
/// Handle for communicating with the bitswap protocol.
pub struct BitswapHandle {
/// RX channel for receiving bitswap events.
event_rx: Receiver<BitswapEvent>,
    /// TX channel for sending commands to `Bitswap`.
cmd_tx: Sender<BitswapCommand>,
}
impl BitswapHandle {
/// Create new [`BitswapHandle`].
pub(super) fn new(event_rx: Receiver<BitswapEvent>, cmd_tx: Sender<BitswapCommand>) -> Self {
Self { event_rx, cmd_tx }
}
/// Send `request` to `peer`.
pub async fn send_request(&self, peer: PeerId, cids: Vec<(Cid, WantType)>) {
let _ = self.cmd_tx.send(BitswapCommand::SendRequest { peer, cids }).await;
}
/// Send `response` to `peer`.
pub async fn send_response(&self, peer: PeerId, responses: Vec<ResponseType>) {
let _ = self.cmd_tx.send(BitswapCommand::SendResponse { peer, responses }).await;
}
#[cfg(feature = "fuzz")]
    /// Expose functionality for fuzzing.
pub async fn fuzz_send_message(&mut self, command: BitswapCommand) -> crate::Result<()> {
let _ = self.cmd_tx.try_send(command);
Ok(())
}
}
impl futures::Stream for BitswapHandle {
type Item = BitswapEvent;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
Pin::new(&mut self.event_rx).poll_recv(cx)
}
}
// File: src/protocol/libp2p/kademlia/config.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
codec::ProtocolCodec,
protocol::libp2p::kademlia::{
handle::{
IncomingRecordValidationMode, KademliaCommand, KademliaEvent, KademliaHandle,
RoutingTableUpdateMode,
},
store::MemoryStoreConfig,
},
types::protocol::ProtocolName,
PeerId, DEFAULT_CHANNEL_SIZE,
};
use multiaddr::Multiaddr;
use tokio::sync::mpsc::{channel, Receiver, Sender};
use std::{
collections::HashMap,
sync::{atomic::AtomicUsize, Arc},
time::Duration,
};
/// Default TTL for the records.
const DEFAULT_TTL: Duration = Duration::from_secs(36 * 60 * 60);
/// Default max number of records.
pub(super) const DEFAULT_MAX_RECORDS: usize = 1024;
/// Default max record size.
pub(super) const DEFAULT_MAX_RECORD_SIZE_BYTES: usize = 65 * 1024;
/// Default max provider keys.
pub(super) const DEFAULT_MAX_PROVIDER_KEYS: usize = 1024;
/// Default max provider addresses.
pub(super) const DEFAULT_MAX_PROVIDER_ADDRESSES: usize = 30;
/// Default max providers per key.
pub(super) const DEFAULT_MAX_PROVIDERS_PER_KEY: usize = 20;
/// Default provider republish interval.
pub(super) const DEFAULT_PROVIDER_REFRESH_INTERVAL: Duration = Duration::from_secs(22 * 60 * 60);
/// Default provider record TTL.
pub(super) const DEFAULT_PROVIDER_TTL: Duration = Duration::from_secs(48 * 60 * 60);
/// Protocol name.
const PROTOCOL_NAME: &str = "/ipfs/kad/1.0.0";
/// Kademlia replication factor.
const REPLICATION_FACTOR: usize = 20usize;
/// Kademlia maximum message size. Should fit 64 KiB value + 4 KiB key.
const DEFAULT_MAX_MESSAGE_SIZE: usize = 70 * 1024;
/// Kademlia configuration.
#[derive(Debug)]
pub struct Config {
// Protocol name.
// pub(crate) protocol: ProtocolName,
/// Protocol names.
pub(crate) protocol_names: Vec<ProtocolName>,
/// Protocol codec.
pub(crate) codec: ProtocolCodec,
/// Replication factor.
pub(super) replication_factor: usize,
/// Known peers.
pub(super) known_peers: HashMap<PeerId, Vec<Multiaddr>>,
/// Routing table update mode.
pub(super) update_mode: RoutingTableUpdateMode,
/// Incoming records validation mode.
pub(super) validation_mode: IncomingRecordValidationMode,
/// Default record TTL.
pub(super) record_ttl: Duration,
/// Provider record TTL.
pub(super) memory_store_config: MemoryStoreConfig,
/// TX channel for sending events to `KademliaHandle`.
pub(super) event_tx: Sender<KademliaEvent>,
/// RX channel for receiving commands from `KademliaHandle`.
pub(super) cmd_rx: Receiver<KademliaCommand>,
/// Next query ID counter shared with the handle.
pub(super) next_query_id: Arc<AtomicUsize>,
}
impl Config {
fn new(
replication_factor: usize,
known_peers: HashMap<PeerId, Vec<Multiaddr>>,
mut protocol_names: Vec<ProtocolName>,
update_mode: RoutingTableUpdateMode,
validation_mode: IncomingRecordValidationMode,
record_ttl: Duration,
memory_store_config: MemoryStoreConfig,
max_message_size: usize,
) -> (Self, KademliaHandle) {
let (cmd_tx, cmd_rx) = channel(DEFAULT_CHANNEL_SIZE);
let (event_tx, event_rx) = channel(DEFAULT_CHANNEL_SIZE);
let next_query_id = Arc::new(AtomicUsize::new(0usize));
// if no protocol names were provided, use the default protocol
if protocol_names.is_empty() {
protocol_names.push(ProtocolName::from(PROTOCOL_NAME));
}
(
Config {
protocol_names,
update_mode,
validation_mode,
record_ttl,
memory_store_config,
codec: ProtocolCodec::UnsignedVarint(Some(max_message_size)),
replication_factor,
known_peers,
cmd_rx,
event_tx,
next_query_id: next_query_id.clone(),
},
KademliaHandle::new(cmd_tx, event_rx, next_query_id),
)
}
/// Build default Kademlia configuration.
pub fn default() -> (Self, KademliaHandle) {
Self::new(
REPLICATION_FACTOR,
HashMap::new(),
Vec::new(),
RoutingTableUpdateMode::Automatic,
IncomingRecordValidationMode::Automatic,
DEFAULT_TTL,
Default::default(),
DEFAULT_MAX_MESSAGE_SIZE,
)
}
}
/// Configuration builder for Kademlia.
#[derive(Debug)]
pub struct ConfigBuilder {
/// Replication factor.
pub(super) replication_factor: usize,
/// Routing table update mode.
pub(super) update_mode: RoutingTableUpdateMode,
/// Incoming records validation mode.
pub(super) validation_mode: IncomingRecordValidationMode,
/// Known peers.
pub(super) known_peers: HashMap<PeerId, Vec<Multiaddr>>,
/// Protocol names.
pub(super) protocol_names: Vec<ProtocolName>,
/// Default TTL for the records.
pub(super) record_ttl: Duration,
/// Memory store configuration.
pub(super) memory_store_config: MemoryStoreConfig,
/// Maximum message size.
pub(crate) max_message_size: usize,
}
impl Default for ConfigBuilder {
fn default() -> Self {
Self::new()
}
}
impl ConfigBuilder {
/// Create new [`ConfigBuilder`].
pub fn new() -> Self {
Self {
replication_factor: REPLICATION_FACTOR,
known_peers: HashMap::new(),
protocol_names: Vec::new(),
update_mode: RoutingTableUpdateMode::Automatic,
validation_mode: IncomingRecordValidationMode::Automatic,
record_ttl: DEFAULT_TTL,
memory_store_config: Default::default(),
max_message_size: DEFAULT_MAX_MESSAGE_SIZE,
}
}
/// Set replication factor.
pub fn with_replication_factor(mut self, replication_factor: usize) -> Self {
self.replication_factor = replication_factor;
self
}
/// Seed Kademlia with one or more known peers.
pub fn with_known_peers(mut self, peers: HashMap<PeerId, Vec<Multiaddr>>) -> Self {
self.known_peers = peers;
self
}
/// Set routing table update mode.
pub fn with_routing_table_update_mode(mut self, mode: RoutingTableUpdateMode) -> Self {
self.update_mode = mode;
self
}
/// Set incoming records validation mode.
pub fn with_incoming_records_validation_mode(
mut self,
mode: IncomingRecordValidationMode,
) -> Self {
self.validation_mode = mode;
self
}
/// Set Kademlia protocol names, overriding the default protocol name.
///
/// The order of the protocol names signifies preference so if, for example, there are two
/// protocols:
/// * `/kad/2.0.0`
/// * `/kad/1.0.0`
///
/// Where `/kad/2.0.0` is the preferred version, then that should be in `protocol_names` before
/// `/kad/1.0.0`.
pub fn with_protocol_names(mut self, protocol_names: Vec<ProtocolName>) -> Self {
self.protocol_names = protocol_names;
self
}
/// Set default TTL for the records.
///
/// If unspecified, the default TTL is 36 hours.
pub fn with_record_ttl(mut self, record_ttl: Duration) -> Self {
self.record_ttl = record_ttl;
self
}
/// Set maximum number of records in the memory store.
///
/// If unspecified, the default maximum number of records is 1024.
pub fn with_max_records(mut self, max_records: usize) -> Self {
self.memory_store_config.max_records = max_records;
self
}
/// Set maximum record size in bytes.
///
/// If unspecified, the default maximum record size is 65 KiB.
pub fn with_max_record_size(mut self, max_record_size_bytes: usize) -> Self {
self.memory_store_config.max_record_size_bytes = max_record_size_bytes;
self
}
/// Set maximum number of provider keys in the memory store.
///
/// If unspecified, the default maximum number of provider keys is 1024.
pub fn with_max_provider_keys(mut self, max_provider_keys: usize) -> Self {
self.memory_store_config.max_provider_keys = max_provider_keys;
self
}
/// Set maximum number of provider addresses per provider in the memory store.
///
/// If unspecified, the default maximum number of provider addresses is 30.
pub fn with_max_provider_addresses(mut self, max_provider_addresses: usize) -> Self {
self.memory_store_config.max_provider_addresses = max_provider_addresses;
self
}
/// Set maximum number of providers per key in the memory store.
///
/// If unspecified, the default maximum number of providers per key is 20.
pub fn with_max_providers_per_key(mut self, max_providers_per_key: usize) -> Self {
self.memory_store_config.max_providers_per_key = max_providers_per_key;
self
}
/// Set TTL for the provider records. Recommended value is 2 * (refresh interval) + 10%.
///
/// If unspecified, the default TTL is 48 hours.
pub fn with_provider_record_ttl(mut self, provider_record_ttl: Duration) -> Self {
self.memory_store_config.provider_ttl = provider_record_ttl;
self
}
/// Set the refresh (republish) interval for provider records.
///
/// If unspecified, the default interval is 22 hours.
pub fn with_provider_refresh_interval(mut self, provider_refresh_interval: Duration) -> Self {
self.memory_store_config.provider_refresh_interval = provider_refresh_interval;
self
}
/// Set the maximum Kademlia message size.
///
/// Should fit `MemoryStore` max record size. If unspecified, the default maximum message size
/// is 70 KiB.
pub fn with_max_message_size(mut self, max_message_size: usize) -> Self {
self.max_message_size = max_message_size;
self
}
/// Build Kademlia [`Config`].
pub fn build(self) -> (Config, KademliaHandle) {
Config::new(
self.replication_factor,
self.known_peers,
self.protocol_names,
self.update_mode,
self.validation_mode,
self.record_ttl,
self.memory_store_config,
self.max_message_size,
)
}
}
// Copyright 2018 Parity Technologies (UK) Ltd.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Kademlia routing table implementation.
use crate::{
protocol::libp2p::kademlia::{
bucket::{KBucket, KBucketEntry},
types::{ConnectionType, Distance, KademliaPeer, Key, U256},
},
transport::{
manager::address::{scores, AddressRecord},
Endpoint,
},
PeerId,
};
use multiaddr::{Multiaddr, Protocol};
use multihash::Multihash;
/// Number of k-buckets.
const NUM_BUCKETS: usize = 256;
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::ipfs::kademlia::routing_table";
pub struct RoutingTable {
/// Local key.
local_key: Key<PeerId>,
/// K-buckets.
buckets: Vec<KBucket>,
}
/// A (type-safe) index into a `KBucketsTable`, i.e. a non-negative integer in the
/// interval `[0, NUM_BUCKETS)`.
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
struct BucketIndex(usize);
impl BucketIndex {
/// Creates a new `BucketIndex` for a `Distance`.
///
/// The given distance is interpreted as the distance from a `local_key` of
/// a `KBucketsTable`. If the distance is zero, `None` is returned, in
/// recognition of the fact that the only key with distance `0` to a
/// `local_key` is the `local_key` itself, which does not belong in any
/// bucket.
fn new(d: &Distance) -> Option<BucketIndex> {
d.ilog2().map(|i| BucketIndex(i as usize))
}
/// Gets the index value as an unsigned integer.
fn get(&self) -> usize {
self.0
}
/// Returns the minimum inclusive and maximum inclusive [`Distance`]
/// included in the bucket for this index.
fn _range(&self) -> (Distance, Distance) {
let min = Distance(U256::pow(U256::from(2), U256::from(self.0)));
if self.0 == usize::from(u8::MAX) {
(min, Distance(U256::MAX))
} else {
let max = Distance(U256::pow(U256::from(2), U256::from(self.0 + 1)) - 1);
(min, max)
}
}
/// Generates a random distance that falls into the bucket for this index.
#[cfg(test)]
fn rand_distance(&self, rng: &mut impl rand::Rng) -> Distance {
let mut bytes = [0u8; 32];
let quot = self.0 / 8;
for i in 0..quot {
bytes[31 - i] = rng.gen();
}
let rem = (self.0 % 8) as u32;
let lower = usize::pow(2, rem);
let upper = usize::pow(2, rem + 1);
bytes[31 - quot] = rng.gen_range(lower..upper) as u8;
Distance(U256::from_big_endian(&bytes))
}
}
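The distance-to-bucket mapping can be illustrated with a native integer standing in for the 256-bit `Distance`. This is a sketch, not crate API, but `u128::checked_ilog2` behaves like the `ilog2` used by `BucketIndex::new`: the bucket index is the position of the most significant set bit, and the zero distance (the local key itself) maps to no bucket at all.

```rust
// Sketch: bucket index of a distance, with `u128` standing in for `U256`.
fn bucket_index(distance: u128) -> Option<u32> {
    // `None` for distance zero, mirroring `BucketIndex::new`.
    distance.checked_ilog2()
}

fn main() {
    // Distances 4..=7 share the same most significant bit and therefore
    // land in the same bucket (index 2).
    for d in 4u128..=7 {
        assert_eq!(bucket_index(d), Some(2));
    }
    println!("distance 1 -> bucket {:?}", bucket_index(1));
}
```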
impl RoutingTable {
/// Create new [`RoutingTable`].
pub fn new(local_key: Key<PeerId>) -> Self {
RoutingTable {
local_key,
buckets: (0..NUM_BUCKETS).map(|_| KBucket::new()).collect(),
}
}
/// Returns the local key.
pub fn _local_key(&self) -> &Key<PeerId> {
&self.local_key
}
/// Get an entry for `peer` into a k-bucket.
pub fn entry(&mut self, key: Key<PeerId>) -> KBucketEntry<'_> {
let Some(index) = BucketIndex::new(&self.local_key.distance(&key)) else {
return KBucketEntry::LocalNode;
};
self.buckets[index.get()].entry(key)
}
/// Update the addresses of the peer on dial failures.
///
/// The addresses are updated with a negative score making them subject to removal.
pub fn on_dial_failure(&mut self, key: Key<PeerId>, addresses: &[Multiaddr]) {
tracing::trace!(
target: LOG_TARGET,
?key,
?addresses,
"on dial failure"
);
if let KBucketEntry::Occupied(entry) = self.entry(key) {
for address in addresses {
entry.address_store.insert(AddressRecord::from_raw_multiaddr_with_score(
address.clone(),
scores::CONNECTION_FAILURE,
));
}
}
}
/// Update the status of the peer on connection established.
///
/// If the peer exists in the routing table, the connection is set to `Connected`.
/// If the endpoint represents an address we have dialed, the address score
/// is updated in the store of the peer, making it more likely to be used in the future.
pub fn on_connection_established(&mut self, key: Key<PeerId>, endpoint: Endpoint) {
tracing::trace!(target: LOG_TARGET, ?key, ?endpoint, "on connection established");
if let KBucketEntry::Occupied(entry) = self.entry(key) {
entry.connection = ConnectionType::Connected;
if let Endpoint::Dialer { address, .. } = endpoint {
entry.address_store.insert(AddressRecord::from_raw_multiaddr_with_score(
address,
scores::CONNECTION_ESTABLISHED,
));
}
}
}
/// Add known peer to [`RoutingTable`].
///
/// In order to bootstrap the lookup process, the routing table must be aware of
/// at least one node and of its addresses.
///
/// The operation is ignored when:
/// - the provided addresses are empty
/// - the local node is being added
/// - the routing table is full
pub fn add_known_peer(
&mut self,
peer: PeerId,
addresses: Vec<Multiaddr>,
connection: ConnectionType,
) {
tracing::trace!(
target: LOG_TARGET,
?peer,
?addresses,
?connection,
"add known peer"
);
// TODO: https://github.com/paritytech/litep2p/issues/337 this has to be moved elsewhere at some point
let addresses: Vec<Multiaddr> = addresses
.into_iter()
.filter_map(|address| {
let last = address.iter().last();
if std::matches!(last, Some(Protocol::P2p(_))) {
Some(address)
} else {
Some(address.with(Protocol::P2p(Multihash::from_bytes(&peer.to_bytes()).ok()?)))
}
})
.collect();
if addresses.is_empty() {
tracing::debug!(
target: LOG_TARGET,
?peer,
"tried to add zero addresses to the routing table"
);
return;
}
match self.entry(Key::from(peer)) {
KBucketEntry::Occupied(entry) => {
entry.push_addresses(addresses);
entry.connection = connection;
}
mut entry @ KBucketEntry::Vacant(_) => {
entry.insert(KademliaPeer::new(peer, addresses, connection));
}
KBucketEntry::LocalNode => tracing::warn!(
target: LOG_TARGET,
?peer,
"tried to add local node to routing table",
),
KBucketEntry::NoSlot => tracing::trace!(
target: LOG_TARGET,
?peer,
"routing table full, cannot add new entry",
),
}
}
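The address normalization step above can be sketched on textual multiaddrs. The helper below is hypothetical and string-based; the real code operates on `Multiaddr` values and additionally drops addresses whose peer-ID multihash cannot be constructed.

```rust
// Sketch: append the `/p2p/<peer-id>` component unless the address already
// ends with a peer identity, mirroring the filter in `add_known_peer`.
fn ensure_p2p_suffix(address: &str, peer_id: &str) -> String {
    match address.rsplit_once('/') {
        // Last component already names a peer: keep the address unchanged.
        Some((prefix, _)) if prefix.ends_with("/p2p") => address.to_string(),
        // Otherwise append this peer's identity.
        _ => format!("{address}/p2p/{peer_id}"),
    }
}

fn main() {
    let addr = ensure_p2p_suffix("/ip4/127.0.0.1/tcp/30333", "12D3KooWExample");
    println!("{addr}");
}
```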
/// Get `limit` closest peers to `target` from the k-buckets.
pub fn closest<K: Clone>(&mut self, target: &Key<K>, limit: usize) -> Vec<KademliaPeer> {
ClosestBucketsIter::new(self.local_key.distance(&target))
.flat_map(|index| self.buckets[index.get()].closest_iter(target))
.take(limit)
.cloned()
.collect()
}
}
/// An iterator over the bucket indices, in the order determined by the `Distance` of a target from
/// the `local_key`, such that the entries in the buckets are incrementally further away from the
/// target, starting with the bucket covering the target.
/// The original implementation is taken from `rust-libp2p`, see [issue#1117][1] for the explanation
/// of the algorithm used.
///
/// [1]: https://github.com/libp2p/rust-libp2p/pull/1117#issuecomment-494694635
struct ClosestBucketsIter {
/// The distance to the `local_key`.
distance: Distance,
/// The current state of the iterator.
state: ClosestBucketsIterState,
}
/// Operating states of a `ClosestBucketsIter`.
enum ClosestBucketsIterState {
/// The starting state of the iterator yields the first bucket index and
/// then transitions to `ZoomIn`.
Start(BucketIndex),
/// The iterator "zooms in" to yield the next bucket containing nodes that
/// are incrementally closer to the local node but further from the `target`.
/// These buckets are identified by a `1` in the corresponding bit position
/// of the distance bit string. When bucket `0` is reached, the iterator
/// transitions to `ZoomOut`.
ZoomIn(BucketIndex),
/// Once bucket `0` has been reached, the iterator starts "zooming out"
/// to buckets containing nodes that are incrementally further away from
/// both the local key and the target. These are identified by a `0` in
/// the corresponding bit position of the distance bit string. When bucket
/// `255` is reached, the iterator transitions to state `Done`.
ZoomOut(BucketIndex),
/// The iterator is in this state once it has visited all buckets.
Done,
}
impl ClosestBucketsIter {
fn new(distance: Distance) -> Self {
let state = match BucketIndex::new(&distance) {
Some(i) => ClosestBucketsIterState::Start(i),
None => ClosestBucketsIterState::Start(BucketIndex(0)),
};
Self { distance, state }
}
fn next_in(&self, i: BucketIndex) -> Option<BucketIndex> {
(0..i.get())
.rev()
.find_map(|i| self.distance.0.bit(i).then_some(BucketIndex(i)))
}
fn next_out(&self, i: BucketIndex) -> Option<BucketIndex> {
(i.get() + 1..NUM_BUCKETS).find_map(|i| (!self.distance.0.bit(i)).then_some(BucketIndex(i)))
}
}
impl Iterator for ClosestBucketsIter {
type Item = BucketIndex;
fn next(&mut self) -> Option<Self::Item> {
match self.state {
ClosestBucketsIterState::Start(i) => {
self.state = ClosestBucketsIterState::ZoomIn(i);
Some(i)
}
ClosestBucketsIterState::ZoomIn(i) =>
if let Some(i) = self.next_in(i) {
self.state = ClosestBucketsIterState::ZoomIn(i);
Some(i)
} else {
let i = BucketIndex(0);
self.state = ClosestBucketsIterState::ZoomOut(i);
Some(i)
},
ClosestBucketsIterState::ZoomOut(i) =>
if let Some(i) = self.next_out(i) {
self.state = ClosestBucketsIterState::ZoomOut(i);
Some(i)
} else {
self.state = ClosestBucketsIterState::Done;
None
},
ClosestBucketsIterState::Done => None,
}
}
}
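The zoom-in/zoom-out traversal can be reproduced with plain integers. The helper below is a sketch assuming an 8-bucket table and a `u32` distance; it is not crate API, but it visits bucket indices in the same order as `ClosestBucketsIter`, including the double visit of bucket 0 when the least significant distance bit is set.

```rust
// Sketch of the `ClosestBucketsIter` visit order over a small table.
fn closest_bucket_order(distance: u32, num_buckets: u32) -> Vec<u32> {
    let mut order = Vec::new();
    // Start at the bucket covering the target: the most significant set
    // bit of the distance, or bucket 0 for the zero distance.
    let mut i = distance.checked_ilog2().unwrap_or(0);
    order.push(i);
    // Zoom in: remaining `1` bits below the current index, high to low.
    while let Some(j) = (0..i).rev().find(|&j| (distance >> j) & 1 == 1) {
        order.push(j);
        i = j;
    }
    // Bucket 0 always terminates the zoom-in phase, so it may be visited
    // twice when the least significant bit of the distance is set.
    order.push(0);
    // Zoom out: `0` bits above the current index, low to high.
    i = 0;
    while let Some(j) = (i + 1..num_buckets).find(|&j| (distance >> j) & 1 == 0) {
        order.push(j);
        i = j;
    }
    order
}

fn main() {
    println!("{:?}", closest_bucket_order(0b1001_1011, 8));
}
```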
#[cfg(test)]
mod tests {
use super::*;
use crate::protocol::libp2p::kademlia::types::ConnectionType;
#[test]
fn closest_peers() {
let own_peer_id = PeerId::random();
let own_key = Key::from(own_peer_id);
let mut table = RoutingTable::new(own_key.clone());
for _ in 0..60 {
let peer = PeerId::random();
let key = Key::from(peer);
let mut entry = table.entry(key.clone());
entry.insert(KademliaPeer::new(peer, vec![], ConnectionType::Connected));
}
let target = Key::from(PeerId::random());
let closest = table.closest(&target, 60usize);
let mut prev = None;
for peer in &closest {
if let Some(value) = prev {
assert!(value < target.distance(&peer.key));
}
prev = Some(target.distance(&peer.key));
}
}
// Generate a random peer that falls into the specified k-bucket.
//
// NOTE: the preimage of the generated `Key` doesn't match the `Key` itself
fn random_peer(
rng: &mut impl rand::Rng,
own_key: Key<PeerId>,
bucket_index: usize,
) -> (Key<PeerId>, PeerId) {
let peer = PeerId::random();
let distance = BucketIndex(bucket_index).rand_distance(rng);
let key_bytes = own_key.for_distance(distance);
(Key::from_bytes(key_bytes, peer), peer)
}
#[test]
fn add_peer_to_empty_table() {
let own_peer_id = PeerId::random();
let own_key = Key::from(own_peer_id);
let mut table = RoutingTable::new(own_key.clone());
// verify that local peer id resolves to special entry
match table.entry(own_key.clone()) {
KBucketEntry::LocalNode => {}
state => panic!("invalid state for `KBucketEntry`: {state:?}"),
};
let peer = PeerId::random();
let key = Key::from(peer);
let mut test = table.entry(key.clone());
let addresses = vec![];
assert!(std::matches!(test, KBucketEntry::Vacant(_)));
test.insert(KademliaPeer::new(
peer,
addresses.clone(),
ConnectionType::Connected,
));
match table.entry(key.clone()) {
KBucketEntry::Occupied(entry) => {
assert_eq!(entry.key, key);
assert_eq!(entry.peer, peer);
assert_eq!(entry.addresses(), addresses);
assert_eq!(entry.connection, ConnectionType::Connected);
}
state => panic!("invalid state for `KBucketEntry`: {state:?}"),
};
// Set the connection state
match table.entry(key.clone()) {
KBucketEntry::Occupied(entry) => {
entry.connection = ConnectionType::NotConnected;
}
state => panic!("invalid state for `KBucketEntry`: {state:?}"),
}
match table.entry(key.clone()) {
KBucketEntry::Occupied(entry) => {
assert_eq!(entry.key, key);
assert_eq!(entry.peer, peer);
assert_eq!(entry.addresses(), addresses);
assert_eq!(entry.connection, ConnectionType::NotConnected);
}
state => panic!("invalid state for `KBucketEntry`: {state:?}"),
};
}
#[test]
fn full_k_bucket() {
let mut rng = rand::thread_rng();
let own_peer_id = PeerId::random();
let own_key = Key::from(own_peer_id);
let mut table = RoutingTable::new(own_key.clone());
// add 20 nodes to the same k-bucket
for _ in 0..20 {
let (key, peer) = random_peer(&mut rng, own_key.clone(), 254);
let mut entry = table.entry(key.clone());
assert!(std::matches!(entry, KBucketEntry::Vacant(_)));
entry.insert(KademliaPeer::new(peer, vec![], ConnectionType::Connected));
}
// try to add another peer and verify the peer is rejected
// because the k-bucket is full of connected nodes
let peer = PeerId::random();
let distance = BucketIndex(254).rand_distance(&mut rng);
let key_bytes = own_key.for_distance(distance);
let key = Key::from_bytes(key_bytes, peer);
let entry = table.entry(key.clone());
assert!(std::matches!(entry, KBucketEntry::NoSlot));
}
#[test]
#[ignore]
fn peer_disconnects_and_is_evicted() {
let mut rng = rand::thread_rng();
let own_peer_id = PeerId::random();
let own_key = Key::from(own_peer_id);
let mut table = RoutingTable::new(own_key.clone());
// add 20 nodes to the same k-bucket
let peers = (0..20)
.map(|_| {
let (key, peer) = random_peer(&mut rng, own_key.clone(), 253);
let mut entry = table.entry(key.clone());
assert!(std::matches!(entry, KBucketEntry::Vacant(_)));
entry.insert(KademliaPeer::new(peer, vec![], ConnectionType::Connected));
(peer, key)
})
.collect::<Vec<_>>();
// try to add another peer and verify the peer is rejected
// because the k-bucket is full of connected nodes
let peer = PeerId::random();
let distance = BucketIndex(253).rand_distance(&mut rng);
let key_bytes = own_key.for_distance(distance);
let key = Key::from_bytes(key_bytes, peer);
let entry = table.entry(key.clone());
assert!(std::matches!(entry, KBucketEntry::NoSlot));
// disconnect random peer
match table.entry(peers[3].1.clone()) {
KBucketEntry::Occupied(entry) => {
entry.connection = ConnectionType::NotConnected;
}
_ => panic!("invalid state for node"),
}
// try to add the previously rejected peer again and verify it's added
let mut entry = table.entry(key.clone());
assert!(std::matches!(entry, KBucketEntry::Vacant(_)));
entry.insert(KademliaPeer::new(
peer,
vec!["/ip6/::1/tcp/8888".parse().unwrap()],
ConnectionType::CanConnect,
));
// verify the node is still there
let entry = table.entry(key.clone());
let addresses = vec!["/ip6/::1/tcp/8888".parse().unwrap()];
match entry {
KBucketEntry::Occupied(entry) => {
assert_eq!(entry.key, key);
assert_eq!(entry.peer, peer);
assert_eq!(entry.addresses(), addresses);
assert_eq!(entry.connection, ConnectionType::CanConnect);
}
state => panic!("invalid state for `KBucketEntry`: {state:?}"),
}
}
#[test]
fn disconnected_peers_are_not_evicted_if_there_is_capacity() {
let mut rng = rand::thread_rng();
let own_peer_id = PeerId::random();
let own_key = Key::from(own_peer_id);
let mut table = RoutingTable::new(own_key.clone());
// add 19 disconnected nodes to the same k-bucket
let _peers = (0..19)
.map(|_| {
let (key, peer) = random_peer(&mut rng, own_key.clone(), 252);
let mut entry = table.entry(key.clone());
assert!(std::matches!(entry, KBucketEntry::Vacant(_)));
entry.insert(KademliaPeer::new(
peer,
vec![],
ConnectionType::NotConnected,
));
(peer, key)
})
.collect::<Vec<_>>();
// try to add another peer and verify it's accepted as there is
// still room in the k-bucket for the node
let peer = PeerId::random();
let distance = BucketIndex(252).rand_distance(&mut rng);
let key_bytes = own_key.for_distance(distance);
let key = Key::from_bytes(key_bytes, peer);
let mut entry = table.entry(key.clone());
assert!(std::matches!(entry, KBucketEntry::Vacant(_)));
entry.insert(KademliaPeer::new(
peer,
vec!["/ip6/::1/tcp/8888".parse().unwrap()],
ConnectionType::CanConnect,
));
}
#[test]
fn closest_buckets_iterator_set_lsb() {
// Test zooming-in & zooming-out of the iterator using a toy example with set LSB.
let d = Distance(U256::from(0b10011011));
let mut iter = ClosestBucketsIter::new(d);
// Note that bucket 0 is visited twice. This is, technically, a bug, but to not complicate
// the implementation and keep it consistent with `libp2p` it's kept as is. There are
// virtually no practical consequences of this, because to have bucket 0 populated we have
// to encounter two sha256 hash values differing only in one least significant bit.
let expected_buckets =
vec![7, 4, 3, 1, 0, 0, 2, 5, 6].into_iter().chain(8..=255).map(BucketIndex);
for expected in expected_buckets {
let got = iter.next().unwrap();
assert_eq!(got, expected);
}
assert!(iter.next().is_none());
}
#[test]
fn closest_buckets_iterator_unset_lsb() {
// Test zooming-in & zooming-out of the iterator using a toy example with unset LSB.
let d = Distance(U256::from(0b01011010));
let mut iter = ClosestBucketsIter::new(d);
let expected_buckets =
vec![6, 4, 3, 1, 0, 2, 5, 7].into_iter().chain(8..=255).map(BucketIndex);
for expected in expected_buckets {
let got = iter.next().unwrap();
assert_eq!(got, expected);
}
assert!(iter.next().is_none());
}
}
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Memory store implementation for Kademlia.
use crate::{
protocol::libp2p::kademlia::{
config::{
DEFAULT_MAX_PROVIDERS_PER_KEY, DEFAULT_MAX_PROVIDER_ADDRESSES,
DEFAULT_MAX_PROVIDER_KEYS, DEFAULT_MAX_RECORDS, DEFAULT_MAX_RECORD_SIZE_BYTES,
DEFAULT_PROVIDER_REFRESH_INTERVAL, DEFAULT_PROVIDER_TTL,
},
record::{ContentProvider, Key, ProviderRecord, Record},
types::Key as KademliaKey,
Quorum,
},
utils::futures_stream::FuturesStream,
PeerId,
};
use futures::{future::BoxFuture, StreamExt};
use std::{
collections::{hash_map::Entry, HashMap},
time::Duration,
};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::ipfs::kademlia::store";
/// Memory store events.
#[derive(Debug, PartialEq, Eq)]
pub enum MemoryStoreAction {
RefreshProvider {
provided_key: Key,
provider: ContentProvider,
quorum: Quorum,
},
}
/// Memory store.
pub struct MemoryStore {
/// Local peer ID. Used to track local providers.
local_peer_id: PeerId,
/// Configuration.
config: MemoryStoreConfig,
/// Records.
records: HashMap<Key, Record>,
/// Provider records.
provider_keys: HashMap<Key, Vec<ProviderRecord>>,
/// Local providers.
local_providers: HashMap<Key, (ContentProvider, Quorum)>,
/// Futures to signal it's time to republish a local provider.
pending_provider_refresh: FuturesStream<BoxFuture<'static, Key>>,
}
impl MemoryStore {
/// Create new [`MemoryStore`].
#[cfg(test)]
pub fn new(local_peer_id: PeerId) -> Self {
Self {
local_peer_id,
config: MemoryStoreConfig::default(),
records: HashMap::new(),
provider_keys: HashMap::new(),
local_providers: HashMap::new(),
pending_provider_refresh: FuturesStream::new(),
}
}
/// Create new [`MemoryStore`] with the provided configuration.
pub fn with_config(local_peer_id: PeerId, config: MemoryStoreConfig) -> Self {
Self {
local_peer_id,
config,
records: HashMap::new(),
provider_keys: HashMap::new(),
local_providers: HashMap::new(),
pending_provider_refresh: FuturesStream::new(),
}
}
/// Try to get record from local store for `key`.
pub fn get(&mut self, key: &Key) -> Option<&Record> {
let is_expired = self
.records
.get(key)
.is_some_and(|record| record.is_expired(std::time::Instant::now()));
if is_expired {
self.records.remove(key);
None
} else {
self.records.get(key)
}
}
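The same lazy-expiry pattern, reduced to a self-contained sketch over a plain `HashMap` (a hypothetical helper; the real store keys `Record` values by Kademlia `Key`): expired entries are evicted on access instead of by a background sweeper.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Sketch: drop an expired entry at lookup time, mirroring `MemoryStore::get`.
fn get_unexpired<'a>(
    records: &'a mut HashMap<String, (String, Instant)>,
    key: &str,
    now: Instant,
) -> Option<&'a String> {
    // Check expiry first, then either evict or return the stored value.
    let expired = records.get(key).is_some_and(|(_, expires)| *expires <= now);
    if expired {
        records.remove(key);
        return None;
    }
    records.get(key).map(|(value, _)| value)
}

fn main() {
    let mut records = HashMap::new();
    records.insert(
        "key".to_string(),
        ("value".to_string(), Instant::now() + Duration::from_secs(60)),
    );
    println!("{:?}", get_unexpired(&mut records, "key", Instant::now()));
}
```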
/// Store record.
pub fn put(&mut self, record: Record) {
if record.value.len() >= self.config.max_record_size_bytes {
tracing::warn!(
target: LOG_TARGET,
key = ?record.key,
publisher = ?record.publisher,
size = record.value.len(),
max_size = self.config.max_record_size_bytes,
"discarding a DHT record that exceeds the configured size limit",
);
return;
}
let len = self.records.len();
match self.records.entry(record.key.clone()) {
Entry::Occupied(mut entry) => {
// Lean towards the new record.
if let (Some(stored_record_ttl), Some(new_record_ttl)) =
(entry.get().expires, record.expires)
{
if stored_record_ttl > new_record_ttl {
return;
}
}
entry.insert(record);
}
Entry::Vacant(entry) => {
if len >= self.config.max_records {
tracing::warn!(
target: LOG_TARGET,
max_records = self.config.max_records,
"discarding a DHT record, because maximum memory store size reached",
);
return;
}
entry.insert(record);
}
}
}
/// Try to get providers from local store for `key`.
///
/// Returns a non-empty list of providers, if any.
pub fn get_providers(&mut self, key: &Key) -> Vec<ContentProvider> {
let drop_key = self.provider_keys.get_mut(key).is_some_and(|providers| {
let now = std::time::Instant::now();
providers.retain(|p| !p.is_expired(now));
providers.is_empty()
});
if drop_key {
self.provider_keys.remove(key);
Vec::default()
} else {
self.provider_keys
.get(key)
.cloned()
.unwrap_or_else(Vec::default)
.into_iter()
.map(|p| ContentProvider {
peer: p.provider,
addresses: p.addresses,
})
.collect()
}
}
/// Try to add a provider for `key`. If there are already `max_providers_per_key` for
/// this `key`, the new provider is only inserted if it's closer to `key` than
/// the furthest already inserted provider. The furthest provider is then discarded.
///
/// Returns `true` if the provider was added, `false` otherwise.
///
/// `quorum` is only relevant for local providers.
pub fn put_provider(&mut self, key: Key, provider: ContentProvider) -> bool {
// Make sure we have no more than `max_provider_addresses`.
let provider_record = {
let mut record = ProviderRecord {
key,
provider: provider.peer,
addresses: provider.addresses,
expires: std::time::Instant::now() + self.config.provider_ttl,
};
record.addresses.truncate(self.config.max_provider_addresses);
record
};
let can_insert_new_key = self.provider_keys.len() < self.config.max_provider_keys;
match self.provider_keys.entry(provider_record.key.clone()) {
Entry::Vacant(entry) =>
if can_insert_new_key {
entry.insert(vec![provider_record]);
true
} else {
tracing::warn!(
target: LOG_TARGET,
max_provider_keys = self.config.max_provider_keys,
"discarding a provider record, because the provider key limit reached",
);
false
},
Entry::Occupied(mut entry) => {
let providers = entry.get_mut();
// Providers under every key are sorted by distance from the provided key, with
// equal distances meaning peer IDs (more strictly, their hashes)
// are equal.
let provider_position =
providers.binary_search_by(|p| p.distance().cmp(&provider_record.distance()));
match provider_position {
Ok(i) => {
// Update the provider in place.
providers[i] = provider_record.clone();
true
}
Err(i) => {
// `Err(i)` contains the insertion point.
if i == self.config.max_providers_per_key {
tracing::trace!(
target: LOG_TARGET,
key = ?provider_record.key,
provider = ?provider_record.provider,
max_providers_per_key = self.config.max_providers_per_key,
"discarding a provider record, because it's further than \
existing `max_providers_per_key`",
);
false
} else {
if providers.len() == self.config.max_providers_per_key {
providers.pop();
}
providers.insert(i, provider_record.clone());
true
}
}
}
}
}
}
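The bounded, distance-sorted insertion above can be condensed into a sketch where plain integers stand in for XOR distances (a hypothetical helper, not crate API): keep at most `max` entries sorted by distance, and let a new entry displace the furthest one only when its insertion point falls inside the capacity.

```rust
// Sketch of `put_provider`'s eviction policy over a sorted Vec of distances.
fn insert_bounded(providers: &mut Vec<u32>, distance: u32, max: usize) -> bool {
    match providers.binary_search(&distance) {
        // Same distance means the same provider: refreshed in place.
        Ok(_) => true,
        // Insertion point at capacity: further than everything we keep.
        Err(i) if i == max => false,
        Err(i) => {
            if providers.len() == max {
                // Evict the furthest provider to make room.
                providers.pop();
            }
            providers.insert(i, distance);
            true
        }
    }
}

fn main() {
    let mut providers = vec![1, 3, 5];
    // A closer provider displaces the furthest one.
    insert_bounded(&mut providers, 2, 3);
    println!("{providers:?}");
}
```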
/// Try to add ourself as a provider for `key`.
///
/// Returns `true` if the provider was added, `false` otherwise.
pub fn put_local_provider(&mut self, key: Key, quorum: Quorum) -> bool {
let provider = ContentProvider {
peer: self.local_peer_id,
// For local providers addresses are populated when replying to `GET_PROVIDERS`
// requests.
addresses: vec![],
};
if self.put_provider(key.clone(), provider.clone()) {
let refresh_interval = self.config.provider_refresh_interval;
self.local_providers.insert(key.clone(), (provider, quorum));
self.pending_provider_refresh.push(Box::pin(async move {
tokio::time::sleep(refresh_interval).await;
key
}));
true
} else {
false
}
}
/// Remove local provider for `key`.
pub fn remove_local_provider(&mut self, key: Key) {
if self.local_providers.remove(&key).is_none() {
tracing::warn!(target: LOG_TARGET, ?key, "trying to remove nonexistent local provider");
return;
};
match self.provider_keys.entry(key.clone()) {
Entry::Vacant(_) => {
tracing::error!(target: LOG_TARGET, ?key, "local provider key not found during removal");
debug_assert!(false);
}
Entry::Occupied(mut entry) => {
let providers = entry.get_mut();
// Providers are sorted by distance.
let local_provider_distance =
KademliaKey::from(self.local_peer_id).distance(&KademliaKey::new(key.clone()));
let provider_position =
providers.binary_search_by(|p| p.distance().cmp(&local_provider_distance));
match provider_position {
Ok(i) => {
providers.remove(i);
}
Err(_) => {
                        tracing::error!(?key, "local provider not found during removal");
debug_assert!(false);
return;
}
}
if providers.is_empty() {
entry.remove();
}
}
};
}
/// Poll next action from the store.
pub async fn next_action(&mut self) -> Option<MemoryStoreAction> {
// [`FuturesStream`] never terminates, so `and_then()` below is always triggered.
self.pending_provider_refresh.next().await.and_then(|key| {
if let Some((provider, quorum)) = self.local_providers.get(&key).cloned() {
tracing::trace!(
target: LOG_TARGET,
?key,
"refresh provider"
);
Some(MemoryStoreAction::RefreshProvider {
provided_key: key,
provider,
quorum,
})
} else {
tracing::trace!(
target: LOG_TARGET,
?key,
"it's time to refresh a provider, but we do not provide this key anymore",
);
None
}
})
}
}
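
/// Configuration for the memory store.
///
/// ```ignore
/// // Illustrative: override only the limits you need and keep the defaults
/// // for the rest (`with_config` is the constructor used by the tests below).
/// let store = MemoryStore::with_config(
///     PeerId::random(),
///     MemoryStoreConfig { max_records: 2048, ..Default::default() },
/// );
/// ```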
#[derive(Debug)]
pub struct MemoryStoreConfig {
/// Maximum number of records to store.
pub max_records: usize,
/// Maximum size of a record in bytes.
pub max_record_size_bytes: usize,
/// Maximum number of provider keys this node stores.
pub max_provider_keys: usize,
/// Maximum number of cached addresses per provider.
pub max_provider_addresses: usize,
/// Maximum number of providers per key. Only providers with peer IDs closest to the key are
/// kept.
pub max_providers_per_key: usize,
/// Local providers republish interval.
pub provider_refresh_interval: Duration,
/// Provider record TTL.
pub provider_ttl: Duration,
}
impl Default for MemoryStoreConfig {
fn default() -> Self {
Self {
max_records: DEFAULT_MAX_RECORDS,
max_record_size_bytes: DEFAULT_MAX_RECORD_SIZE_BYTES,
max_provider_keys: DEFAULT_MAX_PROVIDER_KEYS,
max_provider_addresses: DEFAULT_MAX_PROVIDER_ADDRESSES,
max_providers_per_key: DEFAULT_MAX_PROVIDERS_PER_KEY,
provider_refresh_interval: DEFAULT_PROVIDER_REFRESH_INTERVAL,
provider_ttl: DEFAULT_PROVIDER_TTL,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{protocol::libp2p::kademlia::types::Key as KademliaKey, PeerId};
use multiaddr::multiaddr;
#[test]
fn put_get_record() {
let mut store = MemoryStore::new(PeerId::random());
let key = Key::from(vec![1, 2, 3]);
let record = Record::new(key.clone(), vec![4, 5, 6]);
store.put(record.clone());
assert_eq!(store.get(&key), Some(&record));
}
#[test]
fn max_records() {
let mut store = MemoryStore::with_config(
PeerId::random(),
MemoryStoreConfig {
max_records: 1,
max_record_size_bytes: 1024,
..Default::default()
},
);
let key1 = Key::from(vec![1, 2, 3]);
let key2 = Key::from(vec![4, 5, 6]);
let record1 = Record::new(key1.clone(), vec![4, 5, 6]);
let record2 = Record::new(key2.clone(), vec![7, 8, 9]);
store.put(record1.clone());
store.put(record2.clone());
assert_eq!(store.get(&key1), Some(&record1));
assert_eq!(store.get(&key2), None);
}
#[test]
fn expired_record_removed() {
let mut store = MemoryStore::new(PeerId::random());
let key = Key::from(vec![1, 2, 3]);
let record = Record {
key: key.clone(),
value: vec![4, 5, 6],
publisher: None,
expires: Some(std::time::Instant::now() - std::time::Duration::from_secs(5)),
};
// Record is already expired.
assert!(record.is_expired(std::time::Instant::now()));
store.put(record.clone());
assert_eq!(store.get(&key), None);
}
#[test]
fn new_record_overwrites() {
let mut store = MemoryStore::new(PeerId::random());
let key = Key::from(vec![1, 2, 3]);
let record1 = Record {
key: key.clone(),
value: vec![4, 5, 6],
publisher: None,
expires: Some(std::time::Instant::now() + std::time::Duration::from_secs(100)),
};
let record2 = Record {
key: key.clone(),
value: vec![4, 5, 6],
publisher: None,
expires: Some(std::time::Instant::now() + std::time::Duration::from_secs(1000)),
};
store.put(record1.clone());
assert_eq!(store.get(&key), Some(&record1));
store.put(record2.clone());
assert_eq!(store.get(&key), Some(&record2));
}
#[test]
fn max_record_size() {
let mut store = MemoryStore::with_config(
PeerId::random(),
MemoryStoreConfig {
max_records: 1024,
max_record_size_bytes: 2,
..Default::default()
},
);
let key = Key::from(vec![1, 2, 3]);
let record = Record::new(key.clone(), vec![4, 5]);
store.put(record.clone());
assert_eq!(store.get(&key), None);
let record = Record::new(key.clone(), vec![4]);
store.put(record.clone());
assert_eq!(store.get(&key), Some(&record));
}
#[test]
fn put_get_provider() {
let mut store = MemoryStore::new(PeerId::random());
let key = Key::from(vec![1, 2, 3]);
let provider = ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16))],
};
store.put_provider(key.clone(), provider.clone());
assert_eq!(store.get_providers(&key), vec![provider]);
}
#[test]
fn multiple_providers_per_key() {
let mut store = MemoryStore::new(PeerId::random());
let key = Key::from(vec![1, 2, 3]);
let provider1 = ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16))],
};
let provider2 = ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16))],
};
store.put_provider(key.clone(), provider1.clone());
store.put_provider(key.clone(), provider2.clone());
let got_providers = store.get_providers(&key);
assert_eq!(got_providers.len(), 2);
assert!(got_providers.contains(&provider1));
assert!(got_providers.contains(&provider2));
}
#[test]
fn providers_sorted_by_distance() {
let mut store = MemoryStore::new(PeerId::random());
let key = Key::from(vec![1, 2, 3]);
let providers = (0..10)
.map(|_| ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16))],
})
.collect::<Vec<_>>();
providers.iter().for_each(|p| {
store.put_provider(key.clone(), p.clone());
});
let sorted_providers = {
let target = KademliaKey::new(key.clone());
let mut providers = providers;
providers.sort_by(|p1, p2| {
KademliaKey::from(p1.peer)
.distance(&target)
.cmp(&KademliaKey::from(p2.peer).distance(&target))
});
providers
};
assert_eq!(store.get_providers(&key), sorted_providers);
}
#[test]
fn max_providers_per_key() {
let mut store = MemoryStore::with_config(
PeerId::random(),
MemoryStoreConfig {
max_providers_per_key: 10,
..Default::default()
},
);
let key = Key::from(vec![1, 2, 3]);
let providers = (0..20)
.map(|_| ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16))],
})
.collect::<Vec<_>>();
providers.iter().for_each(|p| {
store.put_provider(key.clone(), p.clone());
});
assert_eq!(store.get_providers(&key).len(), 10);
}
#[test]
fn closest_providers_kept() {
let mut store = MemoryStore::with_config(
PeerId::random(),
MemoryStoreConfig {
max_providers_per_key: 10,
..Default::default()
},
);
let key = Key::from(vec![1, 2, 3]);
let providers = (0..20)
.map(|_| ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16))],
})
.collect::<Vec<_>>();
providers.iter().for_each(|p| {
store.put_provider(key.clone(), p.clone());
});
let closest_providers = {
let target = KademliaKey::new(key.clone());
let mut providers = providers;
providers.sort_by(|p1, p2| {
KademliaKey::from(p1.peer)
.distance(&target)
.cmp(&KademliaKey::from(p2.peer).distance(&target))
});
providers.truncate(10);
providers
};
assert_eq!(store.get_providers(&key), closest_providers);
}
#[test]
fn furthest_provider_discarded() {
let mut store = MemoryStore::with_config(
PeerId::random(),
MemoryStoreConfig {
max_providers_per_key: 10,
..Default::default()
},
);
let key = Key::from(vec![1, 2, 3]);
let providers = (0..11)
.map(|_| ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16))],
})
.collect::<Vec<_>>();
let sorted_providers = {
let target = KademliaKey::new(key.clone());
let mut providers = providers;
providers.sort_by(|p1, p2| {
KademliaKey::from(p1.peer)
.distance(&target)
.cmp(&KademliaKey::from(p2.peer).distance(&target))
});
providers
};
// First 10 providers are inserted.
for i in 0..10 {
assert!(store.put_provider(key.clone(), sorted_providers[i].clone()));
}
assert_eq!(store.get_providers(&key), sorted_providers[..10]);
        // The furthest provider doesn't fit.
assert!(!store.put_provider(key.clone(), sorted_providers[10].clone()));
assert_eq!(store.get_providers(&key), sorted_providers[..10]);
}
#[test]
fn update_provider_in_place() {
let mut store = MemoryStore::with_config(
PeerId::random(),
MemoryStoreConfig {
max_providers_per_key: 10,
..Default::default()
},
);
let key = Key::from(vec![1, 2, 3]);
let peer_ids = (0..10).map(|_| PeerId::random()).collect::<Vec<_>>();
let peer_id0 = peer_ids[0];
let providers = peer_ids
.iter()
.map(|peer_id| ContentProvider {
peer: *peer_id,
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16))],
})
.collect::<Vec<_>>();
providers.iter().for_each(|p| {
store.put_provider(key.clone(), p.clone());
});
let sorted_providers = {
let target = KademliaKey::new(key.clone());
let mut providers = providers;
providers.sort_by(|p1, p2| {
KademliaKey::from(p1.peer)
.distance(&target)
.cmp(&KademliaKey::from(p2.peer).distance(&target))
});
providers
};
assert_eq!(store.get_providers(&key), sorted_providers);
let provider0_new = ContentProvider {
peer: peer_id0,
addresses: vec![multiaddr!(Ip4([192, 168, 0, 1]), Tcp(20000u16))],
};
// Provider is updated in place.
assert!(store.put_provider(key.clone(), provider0_new.clone()));
let providers_new = sorted_providers
.into_iter()
.map(|p| {
if p.peer == peer_id0 {
provider0_new.clone()
} else {
p
}
})
.collect::<Vec<_>>();
assert_eq!(store.get_providers(&key), providers_new);
}
#[tokio::test]
async fn provider_record_expires() {
let mut store = MemoryStore::with_config(
PeerId::random(),
MemoryStoreConfig {
provider_ttl: std::time::Duration::from_secs(1),
..Default::default()
},
);
let key = Key::from(vec![1, 2, 3]);
let provider = ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16))],
};
store.put_provider(key.clone(), provider.clone());
// Provider does not instantly expire.
assert_eq!(store.get_providers(&key), vec![provider]);
        // Provider has expired after the 1-second TTL.
tokio::time::sleep(Duration::from_secs(2)).await;
assert_eq!(store.get_providers(&key), vec![]);
}
#[tokio::test]
async fn individual_provider_record_expires() {
let mut store = MemoryStore::with_config(
PeerId::random(),
MemoryStoreConfig {
provider_ttl: std::time::Duration::from_secs(8),
..Default::default()
},
);
let key = Key::from(vec![1, 2, 3]);
let provider1 = ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16))],
};
let provider2 = ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16))],
};
store.put_provider(key.clone(), provider1.clone());
tokio::time::sleep(Duration::from_secs(4)).await;
store.put_provider(key.clone(), provider2.clone());
// Providers do not instantly expire.
let got_providers = store.get_providers(&key);
assert_eq!(got_providers.len(), 2);
assert!(got_providers.contains(&provider1));
assert!(got_providers.contains(&provider2));
// First provider expires.
tokio::time::sleep(Duration::from_secs(6)).await;
assert_eq!(store.get_providers(&key), vec![provider2]);
// Second provider expires.
tokio::time::sleep(Duration::from_secs(4)).await;
assert_eq!(store.get_providers(&key), vec![]);
}
#[test]
fn max_addresses_per_provider() {
let mut store = MemoryStore::with_config(
PeerId::random(),
MemoryStoreConfig {
max_provider_addresses: 2,
..Default::default()
},
);
let key = Key::from(vec![1, 2, 3]);
let provider = ContentProvider {
peer: PeerId::random(),
addresses: vec![
multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16)),
multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10001u16)),
multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10002u16)),
multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10003u16)),
multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10004u16)),
],
};
store.put_provider(key.clone(), provider);
let got_providers = store.get_providers(&key);
assert_eq!(got_providers.len(), 1);
assert_eq!(got_providers.first().unwrap().addresses.len(), 2);
}
#[test]
fn max_provider_keys() {
let mut store = MemoryStore::with_config(
PeerId::random(),
MemoryStoreConfig {
max_provider_keys: 2,
..Default::default()
},
);
let key1 = Key::from(vec![1, 1, 1]);
let provider1 = ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10001u16))],
};
let key2 = Key::from(vec![2, 2, 2]);
let provider2 = ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10002u16))],
};
let key3 = Key::from(vec![3, 3, 3]);
let provider3 = ContentProvider {
peer: PeerId::random(),
addresses: vec![multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10003u16))],
};
assert!(store.put_provider(key1.clone(), provider1.clone()));
assert!(store.put_provider(key2.clone(), provider2.clone()));
assert!(!store.put_provider(key3.clone(), provider3.clone()));
assert_eq!(store.get_providers(&key1), vec![provider1]);
assert_eq!(store.get_providers(&key2), vec![provider2]);
assert_eq!(store.get_providers(&key3), vec![]);
}
#[test]
fn local_provider_registered() {
let local_peer_id = PeerId::random();
let mut store = MemoryStore::new(local_peer_id);
let key = Key::from(vec![1, 2, 3]);
let local_provider = ContentProvider {
peer: local_peer_id,
addresses: vec![],
};
let quorum = Quorum::All;
assert!(store.local_providers.is_empty());
assert_eq!(store.pending_provider_refresh.len(), 0);
assert!(store.put_local_provider(key.clone(), quorum));
assert_eq!(
store.local_providers.get(&key),
Some(&(local_provider, quorum)),
);
assert_eq!(store.pending_provider_refresh.len(), 1);
}
#[test]
fn local_provider_registered_after_remote_provider() {
let local_peer_id = PeerId::random();
let mut store = MemoryStore::new(local_peer_id);
let key = Key::from(vec![1, 2, 3]);
let remote_peer_id = PeerId::random();
let remote_provider = ContentProvider {
peer: remote_peer_id,
addresses: vec![multiaddr!(Ip4([192, 168, 0, 1]), Tcp(10000u16))],
};
let local_provider = ContentProvider {
peer: local_peer_id,
addresses: vec![],
};
let quorum = Quorum::N(5.try_into().unwrap());
assert!(store.local_providers.is_empty());
assert_eq!(store.pending_provider_refresh.len(), 0);
assert!(store.put_provider(key.clone(), remote_provider.clone()));
assert!(store.put_local_provider(key.clone(), quorum));
let got_providers = store.get_providers(&key);
assert_eq!(got_providers.len(), 2);
assert!(got_providers.contains(&remote_provider));
assert!(got_providers.contains(&local_provider));
assert_eq!(
store.local_providers.get(&key),
Some(&(local_provider, quorum))
);
assert_eq!(store.pending_provider_refresh.len(), 1);
}
#[test]
fn local_provider_removed() {
let local_peer_id = PeerId::random();
let mut store = MemoryStore::new(local_peer_id);
let key = Key::from(vec![1, 2, 3]);
let local_provider = ContentProvider {
peer: local_peer_id,
addresses: vec![],
};
let quorum = Quorum::One;
assert!(store.local_providers.is_empty());
assert!(store.put_local_provider(key.clone(), quorum));
assert_eq!(
store.local_providers.get(&key),
Some(&(local_provider, quorum))
);
store.remove_local_provider(key.clone());
assert!(store.get_providers(&key).is_empty());
assert!(store.local_providers.is_empty());
}
#[test]
fn local_provider_removed_when_remote_providers_present() {
let local_peer_id = PeerId::random();
let mut store = MemoryStore::new(local_peer_id);
let key = Key::from(vec![1, 2, 3]);
let remote_peer_id = PeerId::random();
let remote_provider = ContentProvider {
peer: remote_peer_id,
addresses: vec![multiaddr!(Ip4([192, 168, 0, 1]), Tcp(10000u16))],
};
let local_provider = ContentProvider {
peer: local_peer_id,
addresses: vec![],
};
let quorum = Quorum::One;
assert!(store.put_provider(key.clone(), remote_provider.clone()));
assert!(store.put_local_provider(key.clone(), quorum));
let got_providers = store.get_providers(&key);
assert_eq!(got_providers.len(), 2);
        assert!(got_providers.contains(&remote_provider));
        assert!(got_providers.contains(&local_provider));
        store.remove_local_provider(key.clone());
        // Only the remote provider remains after the local one is removed.
        assert_eq!(store.get_providers(&key), vec![remote_provider]);
        assert!(store.local_providers.is_empty());
    }
}

// src/protocol/libp2p/kademlia/bucket.rs

// Copyright 2018-2019 Parity Technologies (UK) Ltd.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Kademlia k-bucket implementation.
use crate::{
protocol::libp2p::kademlia::types::{ConnectionType, KademliaPeer, Key},
PeerId,
};
/// K-bucket entry.
#[derive(Debug)]
pub enum KBucketEntry<'a> {
/// Entry points to local node.
LocalNode,
/// Occupied entry to a connected node.
Occupied(&'a mut KademliaPeer),
/// Vacant entry.
Vacant(&'a mut KademliaPeer),
    /// Entry not found and no present entry can be replaced.
NoSlot,
}
impl<'a> KBucketEntry<'a> {
    /// Insert a new entry into the slot if it is vacant.
pub fn insert(&'a mut self, new: KademliaPeer) {
if let KBucketEntry::Vacant(old) = self {
old.peer = new.peer;
old.key = Key::from(new.peer);
old.address_store = new.address_store;
old.connection = new.connection;
}
}
}
/// Kademlia k-bucket.
pub struct KBucket {
// TODO: https://github.com/paritytech/litep2p/issues/335
// store peers in a btreemap with increasing distance from local key?
nodes: Vec<KademliaPeer>,
}
impl KBucket {
/// Create new [`KBucket`].
pub fn new() -> Self {
Self {
nodes: Vec::with_capacity(20),
}
}
    /// Get an entry for `key` in the bucket.
// TODO: https://github.com/paritytech/litep2p/pull/184 should optimize this
pub fn entry<K: Clone>(&mut self, key: Key<K>) -> KBucketEntry<'_> {
for i in 0..self.nodes.len() {
if self.nodes[i].key == key {
return KBucketEntry::Occupied(&mut self.nodes[i]);
}
}
        if self.nodes.len() < 20 {
            // Push a placeholder peer to create a vacant slot; the caller
            // overwrites it via `KBucketEntry::insert()`.
            self.nodes.push(KademliaPeer::new(
PeerId::random(),
vec![],
ConnectionType::NotConnected,
));
let len = self.nodes.len() - 1;
return KBucketEntry::Vacant(&mut self.nodes[len]);
}
        // The bucket is full: reuse the slot of a disconnected peer, if any.
        for i in 0..self.nodes.len() {
match self.nodes[i].connection {
ConnectionType::NotConnected | ConnectionType::CannotConnect => {
return KBucketEntry::Vacant(&mut self.nodes[i]);
}
_ => continue,
}
}
KBucketEntry::NoSlot
}
    /// Get an iterator over the k-bucket, with entries sorted in increasing order
    /// of distance from `target`. Peers with no known addresses are skipped.
pub fn closest_iter<K: Clone>(&self, target: &Key<K>) -> impl Iterator<Item = &KademliaPeer> {
let mut nodes: Vec<_> = self.nodes.iter().collect();
nodes.sort_by(|a, b| target.distance(&a.key).cmp(&target.distance(&b.key)));
nodes.into_iter().filter(|peer| !peer.address_store.is_empty())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn closest_iter() {
let mut bucket = KBucket::new();
// add some random nodes to the bucket
let _ = (0..10)
.map(|_| {
let peer = PeerId::random();
bucket.nodes.push(KademliaPeer::new(peer, vec![], ConnectionType::Connected));
peer
})
.collect::<Vec<_>>();
let target = Key::from(PeerId::random());
let iter = bucket.closest_iter(&target);
let mut prev = None;
for node in iter {
if let Some(distance) = prev {
assert!(distance < target.distance(&node.key));
}
prev = Some(target.distance(&node.key));
}
}
#[test]
fn ignore_peers_with_no_addresses() {
let mut bucket = KBucket::new();
// add peers with no addresses to the bucket
let _ = (0..10)
.map(|_| {
let peer = PeerId::random();
bucket.nodes.push(KademliaPeer::new(
peer,
vec![],
ConnectionType::NotConnected,
));
peer
})
.collect::<Vec<_>>();
// add three peers with an address
let _ = (0..3)
.map(|_| {
let peer = PeerId::random();
bucket.nodes.push(KademliaPeer::new(
peer,
vec!["/ip6/::/tcp/0".parse().unwrap()],
ConnectionType::Connected,
));
peer
})
.collect::<Vec<_>>();
let target = Key::from(PeerId::random());
let iter = bucket.closest_iter(&target);
let mut prev = None;
let mut num_peers = 0usize;
for node in iter {
if let Some(distance) = prev {
assert!(distance < target.distance(&node.key));
}
num_peers += 1;
prev = Some(target.distance(&node.key));
}
assert_eq!(num_peers, 3usize);
}
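
    // An illustrative sketch of `entry()` semantics (assumes the
    // crate-internal `KBucketEntry` variants and `KademliaPeer` fields stay
    // visible to this test module): a peer already in the bucket yields
    // `Occupied`, while an unknown key yields `Vacant` as long as the bucket
    // has free slots.
    #[test]
    fn entry_occupied_then_vacant() {
        let mut bucket = KBucket::new();
        let peer = PeerId::random();
        bucket.nodes.push(KademliaPeer::new(peer, vec![], ConnectionType::Connected));

        // A known peer maps to an occupied entry.
        match bucket.entry(Key::from(peer)) {
            KBucketEntry::Occupied(entry) => assert_eq!(entry.peer, peer),
            state => panic!("expected `Occupied`, got {state:?}"),
        }

        // An unknown key gets a vacant slot while the bucket is not full.
        match bucket.entry(Key::from(PeerId::random())) {
            KBucketEntry::Vacant(_) => {}
            state => panic!("expected `Vacant`, got {state:?}"),
        }
    }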
}
// src/protocol/libp2p/kademlia/types.rs

// Copyright 2018-2019 Parity Technologies (UK) Ltd.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
// Note: the lint below is triggered by the `construct_uint!` macro from the external `uint` crate.
#![allow(clippy::manual_div_ceil)]
//! Kademlia types.
use crate::{
protocol::libp2p::kademlia::schema,
transport::manager::address::{AddressRecord, AddressStore},
PeerId,
};
use multiaddr::Multiaddr;
#[allow(deprecated)]
// TODO: remove `#[allow(deprecated)]` once sha2-0.11 is released.
// See https://github.com/paritytech/litep2p/issues/449.
use sha2::digest::generic_array::GenericArray;
use sha2::{digest::generic_array::typenum::U32, Digest, Sha256};
use uint::*;
use std::{
borrow::Borrow,
hash::{Hash, Hasher},
};
/// Maximum number of addresses to store for a peer.
const MAX_ADDRESSES: usize = 32;
construct_uint! {
/// 256-bit unsigned integer.
pub(super) struct U256(4);
}
/// A `Key` in the DHT keyspace with preserved preimage.
///
/// Keys in the DHT keyspace identify both the participating nodes, as well as
/// the records stored in the DHT.
///
/// `Key`s have an XOR metric as defined in the Kademlia paper, i.e. the bitwise XOR of
/// the hash digests, interpreted as an integer. See [`Key::distance`].
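///
/// ```ignore
/// // Illustrative: the XOR metric is symmetric, and the distance from a key
/// // to itself is zero.
/// let a = Key::from(PeerId::random());
/// let b = Key::from(PeerId::random());
/// assert_eq!(a.distance(&b), b.distance(&a));
/// assert_eq!(a.distance(&a), Distance::default());
/// ```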
#[derive(Clone, Debug)]
pub struct Key<T: Clone> {
preimage: T,
bytes: KeyBytes,
}
impl<T: Clone> Key<T> {
/// Constructs a new `Key` by running the given value through a random
/// oracle.
///
/// The preimage of type `T` is preserved.
/// See [`Key::into_preimage`] for more details.
pub fn new(preimage: T) -> Key<T>
where
T: Borrow<[u8]>,
{
let bytes = KeyBytes::new(preimage.borrow());
Key { preimage, bytes }
}
/// Convert [`Key`] into its preimage.
pub fn into_preimage(self) -> T {
self.preimage
}
/// Computes the distance of the keys according to the XOR metric.
pub fn distance<U>(&self, other: &U) -> Distance
where
U: AsRef<KeyBytes>,
{
self.bytes.distance(other)
}
/// Returns the uniquely determined key with the given distance to `self`.
///
/// This implements the following equivalence:
///
/// `self xor other = distance <==> other = self xor distance`
#[cfg(test)]
pub fn for_distance(&self, d: Distance) -> KeyBytes {
self.bytes.for_distance(d)
}
    /// Construct a [`Key`] from `KeyBytes` and the given preimage.
    ///
    /// Only used for testing.
#[cfg(test)]
pub fn from_bytes(bytes: KeyBytes, preimage: T) -> Key<T> {
Self { bytes, preimage }
}
}
impl<T: Clone> From<Key<T>> for KeyBytes {
fn from(key: Key<T>) -> KeyBytes {
key.bytes
}
}
impl From<PeerId> for Key<PeerId> {
fn from(p: PeerId) -> Self {
let bytes = KeyBytes(Sha256::digest(p.to_bytes()));
Key { preimage: p, bytes }
}
}
impl From<Vec<u8>> for Key<Vec<u8>> {
fn from(b: Vec<u8>) -> Self {
Key::new(b)
}
}
impl<T: Clone> AsRef<KeyBytes> for Key<T> {
fn as_ref(&self) -> &KeyBytes {
&self.bytes
}
}
impl<T: Clone, U: Clone> PartialEq<Key<U>> for Key<T> {
fn eq(&self, other: &Key<U>) -> bool {
self.bytes == other.bytes
}
}
impl<T: Clone> Eq for Key<T> {}
impl<T: Clone> Hash for Key<T> {
fn hash<H: Hasher>(&self, state: &mut H) {
self.bytes.0.hash(state);
}
}
/// The raw bytes of a key in the DHT keyspace.
#[derive(PartialEq, Eq, Clone, Debug)]
#[allow(deprecated)]
// TODO: remove `#[allow(deprecated)]` once sha2-0.11 is released.
// See https://github.com/paritytech/litep2p/issues/449.
pub struct KeyBytes(GenericArray<u8, U32>);
impl KeyBytes {
/// Creates a new key in the DHT keyspace by running the given
/// value through a random oracle.
pub fn new<T>(value: T) -> Self
where
T: Borrow<[u8]>,
{
KeyBytes(Sha256::digest(value.borrow()))
}
/// Computes the distance of the keys according to the XOR metric.
#[allow(deprecated)]
    // TODO: remove `#[allow(deprecated)]` once sha2-0.11 is released.
// See https://github.com/paritytech/litep2p/issues/449.
pub fn distance<U>(&self, other: &U) -> Distance
where
U: AsRef<KeyBytes>,
{
let a = U256::from_big_endian(self.0.as_slice());
let b = U256::from_big_endian(other.as_ref().0.as_slice());
Distance(a ^ b)
}
/// Returns the uniquely determined key with the given distance to `self`.
///
/// This implements the following equivalence:
///
/// `self xor other = distance <==> other = self xor distance`
#[cfg(test)]
#[allow(deprecated)]
    // TODO: remove `#[allow(deprecated)]` once sha2-0.11 is released.
// See https://github.com/paritytech/litep2p/issues/449.
pub fn for_distance(&self, d: Distance) -> KeyBytes {
let key_int = U256::from_big_endian(self.0.as_slice()) ^ d.0;
KeyBytes(GenericArray::from(key_int.to_big_endian()))
}
}
impl AsRef<KeyBytes> for KeyBytes {
fn as_ref(&self) -> &KeyBytes {
self
}
}
/// A distance between two keys in the DHT keyspace.
#[derive(Copy, Clone, PartialEq, Eq, Default, PartialOrd, Ord, Debug)]
pub struct Distance(pub(super) U256);
impl Distance {
/// Returns the integer part of the base 2 logarithm of the [`Distance`].
///
/// Returns `None` if the distance is zero.
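    ///
    /// ```ignore
    /// // Illustrative: `ilog2` is the bit index of the highest set bit.
    /// assert_eq!(Distance(U256::from(1)).ilog2(), Some(0));
    /// assert_eq!(Distance(U256::from(4)).ilog2(), Some(2));
    /// assert_eq!(Distance(U256::zero()).ilog2(), None);
    /// ```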
pub fn ilog2(&self) -> Option<u32> {
(256 - self.0.leading_zeros()).checked_sub(1)
}
}
/// Connection type to peer.
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub enum ConnectionType {
/// Sender does not have a connection to peer.
NotConnected,
/// Sender is connected to the peer.
Connected,
/// Sender has recently been connected to the peer.
CanConnect,
/// Sender is unable to connect to the peer.
CannotConnect,
}
impl TryFrom<i32> for ConnectionType {
type Error = ();
fn try_from(value: i32) -> Result<Self, Self::Error> {
match value {
0 => Ok(ConnectionType::NotConnected),
1 => Ok(ConnectionType::Connected),
2 => Ok(ConnectionType::CanConnect),
3 => Ok(ConnectionType::CannotConnect),
_ => Err(()),
}
}
}
impl From<ConnectionType> for i32 {
fn from(connection: ConnectionType) -> Self {
match connection {
ConnectionType::NotConnected => 0,
ConnectionType::Connected => 1,
ConnectionType::CanConnect => 2,
ConnectionType::CannotConnect => 3,
}
}
}
/// Kademlia peer.
#[derive(Debug, Clone)]
pub struct KademliaPeer {
/// Peer key.
pub(super) key: Key<PeerId>,
/// Peer ID.
pub(super) peer: PeerId,
/// Known addresses of peer.
pub(super) address_store: AddressStore,
/// Connection type.
pub(super) connection: ConnectionType,
}
impl KademliaPeer {
/// Create new [`KademliaPeer`].
pub fn new(peer: PeerId, addresses: Vec<Multiaddr>, connection: ConnectionType) -> Self {
let mut address_store = AddressStore::new();
for address in addresses.into_iter() {
address_store.insert(AddressRecord::from_raw_multiaddr(address));
}
Self {
peer,
address_store,
connection,
key: Key::from(peer),
}
}
    /// Add the given addresses to the Kademlia peer if there is enough space.
pub fn push_addresses(&mut self, addresses: impl IntoIterator<Item = Multiaddr>) {
for address in addresses {
self.address_store.insert(AddressRecord::from_raw_multiaddr(address));
}
}
/// Returns the addresses of the peer.
pub fn addresses(&self) -> Vec<Multiaddr> {
self.address_store.addresses(MAX_ADDRESSES)
}
}
impl TryFrom<&schema::kademlia::Peer> for KademliaPeer {
type Error = ();
fn try_from(record: &schema::kademlia::Peer) -> Result<Self, Self::Error> {
let peer = PeerId::from_bytes(&record.id).map_err(|_| ())?;
let mut address_store = AddressStore::new();
for address in record.addrs.iter() {
let Ok(address) = Multiaddr::try_from(address.clone()) else {
continue;
};
address_store.insert(AddressRecord::from_raw_multiaddr(address));
}
Ok(KademliaPeer {
key: Key::from(peer),
peer,
address_store,
connection: ConnectionType::try_from(record.connection)?,
})
}
}
impl From<&KademliaPeer> for schema::kademlia::Peer {
fn from(peer: &KademliaPeer) -> Self {
schema::kademlia::Peer {
id: peer.peer.to_bytes(),
addrs: peer
.address_store
.addresses(MAX_ADDRESSES)
.iter()
.map(|address| address.to_vec())
.collect(),
connection: peer.connection.into(),
}
}
}
// src/protocol/libp2p/kademlia/record.rs

// Copyright 2019 Parity Technologies (UK) Ltd.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
protocol::libp2p::kademlia::types::{
ConnectionType, Distance, KademliaPeer, Key as KademliaKey,
},
transport::manager::address::{AddressRecord, AddressStore},
Multiaddr, PeerId,
};
use bytes::Bytes;
use multihash::Multihash;
use std::{borrow::Borrow, time::Instant};
/// The (opaque) key of a record.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
#[cfg_attr(feature = "fuzz", derive(serde::Serialize, serde::Deserialize))]
pub struct Key(Bytes);
impl Key {
/// Creates a new key from the bytes of the input.
pub fn new<K: AsRef<[u8]>>(key: &K) -> Self {
Key(Bytes::copy_from_slice(key.as_ref()))
}
/// Copies the bytes of the key into a new vector.
pub fn to_vec(&self) -> Vec<u8> {
Vec::from(&self.0[..])
}
}
impl From<Key> for Vec<u8> {
fn from(k: Key) -> Vec<u8> {
Vec::from(&k.0[..])
}
}
impl Borrow<[u8]> for Key {
fn borrow(&self) -> &[u8] {
&self.0[..]
}
}
impl AsRef<[u8]> for Key {
fn as_ref(&self) -> &[u8] {
&self.0[..]
}
}
impl From<Vec<u8>> for Key {
fn from(v: Vec<u8>) -> Key {
Key(Bytes::from(v))
}
}
impl From<Multihash> for Key {
fn from(m: Multihash) -> Key {
Key::from(m.to_bytes())
}
}
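The conversions above make `Key` an opaque byte container that round-trips losslessly between `Vec<u8>`, byte slices, and itself, with `Borrow<[u8]>` allowing map lookups by raw bytes. A minimal std-only sketch of the same semantics (using `Vec<u8>` in place of `bytes::Bytes`, a simplification, not litep2p's actual type):

```rust
use std::borrow::Borrow;

/// Simplified stand-in for the record `Key`: opaque bytes with lossless conversions.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct Key(Vec<u8>);

impl Key {
    fn new<K: AsRef<[u8]>>(key: &K) -> Self {
        Key(key.as_ref().to_vec())
    }

    fn to_vec(&self) -> Vec<u8> {
        self.0.clone()
    }
}

impl Borrow<[u8]> for Key {
    fn borrow(&self) -> &[u8] {
        &self.0
    }
}

fn main() {
    let key = Key::new(&b"record-key");
    // Construction and extraction are inverses: the same bytes come back out.
    assert_eq!(key.to_vec(), b"record-key".to_vec());
    // `Borrow<[u8]>` lets a `HashMap<Key, _>` be queried with a plain byte slice.
    let bytes: &[u8] = key.borrow();
    assert_eq!(bytes, b"record-key");
    println!("ok");
}
```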
/// A record stored in the DHT.
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
#[cfg_attr(feature = "fuzz", derive(serde::Serialize, serde::Deserialize))]
pub struct Record {
/// Key of the record.
pub key: Key,
/// Value of the record.
pub value: Vec<u8>,
/// The (original) publisher of the record.
pub publisher: Option<PeerId>,
/// The expiration time as measured by a local, monotonic clock.
#[cfg_attr(feature = "fuzz", serde(with = "serde_millis"))]
pub expires: Option<Instant>,
}
impl Record {
/// Creates a new record for insertion into the DHT.
pub fn new<K>(key: K, value: Vec<u8>) -> Self
where
K: Into<Key>,
{
Record {
key: key.into(),
value,
publisher: None,
expires: None,
}
}
/// Checks whether the record is expired w.r.t. the given `Instant`.
pub fn is_expired(&self, now: Instant) -> bool {
self.expires.is_some_and(|t| now >= t)
}
}
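`is_expired` treats `expires: None` as a record that never expires; otherwise the record is expired once `now` reaches the deadline (the comparison is inclusive). A small std-only illustration of that `Option<Instant>` rule (mirroring the logic, not litep2p's `Record` itself):

```rust
use std::time::{Duration, Instant};

// Mirrors `Record::is_expired`: `None` means the record never expires.
fn is_expired(expires: Option<Instant>, now: Instant) -> bool {
    expires.map_or(false, |t| now >= t)
}

fn main() {
    let now = Instant::now();
    let ttl = Duration::from_secs(36 * 60 * 60);

    assert!(!is_expired(None, now)); // no deadline -> never expired
    assert!(!is_expired(Some(now + ttl), now)); // before the deadline
    assert!(is_expired(Some(now + ttl), now + ttl)); // `now >= t` is inclusive
    println!("ok");
}
```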
/// A record received by the given peer.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct PeerRecord {
    /// The peer from whom the record was received.
pub peer: PeerId,
/// The provided record.
pub record: Record,
}
/// A record keeping information about a content provider.
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct ProviderRecord {
/// Key of the record.
pub key: Key,
/// Key of the provider, based on its peer ID.
pub provider: PeerId,
/// Cached addresses of the provider.
pub addresses: Vec<Multiaddr>,
    /// The expiration time of the record. Unlike regular records, provider records must always
    /// have an expiration time.
    pub expires: Instant,
}
impl ProviderRecord {
/// The distance from the provider's peer ID to the provided key.
pub fn distance(&self) -> Distance {
// Note that the record key is raw (opaque bytes). In order to calculate the distance from
// the provider's peer ID to this key we must first hash both.
KademliaKey::from(self.provider).distance(&KademliaKey::new(self.key.clone()))
}
/// Checks whether the record is expired w.r.t. the given `Instant`.
pub fn is_expired(&self, now: Instant) -> bool {
now >= self.expires
}
}
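As the comment in `distance` notes, the record key is opaque bytes, so both the provider's peer ID and the key are first hashed onto the Kademlia keyspace before the XOR metric is applied. A std-only sketch of the XOR distance itself over already-hashed fixed-size keys (the hashing step is elided):

```rust
// XOR metric over two equal-length (already hashed) keys, as in Kademlia.
fn distance(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

fn main() {
    let a = [0xaa; 32];
    let b = [0x55; 32];

    // d(x, x) = 0: a provider is at distance zero from its own hashed key.
    assert_eq!(distance(&a, &a), [0u8; 32]);
    // The metric is symmetric: d(a, b) == d(b, a).
    assert_eq!(distance(&a, &b), distance(&b, &a));
    // 0xaa ^ 0x55 == 0xff in every byte.
    assert_eq!(distance(&a, &b), [0xff; 32]);
    println!("ok");
}
```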
/// A user-facing provider type.
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct ContentProvider {
    /// Peer ID of the provider.
    pub peer: PeerId,
    /// Cached addresses of the provider.
    pub addresses: Vec<Multiaddr>,
}
impl From<ContentProvider> for KademliaPeer {
fn from(provider: ContentProvider) -> Self {
let mut address_store = AddressStore::new();
for address in provider.addresses.iter() {
address_store.insert(AddressRecord::from_raw_multiaddr(address.clone()));
}
Self {
key: KademliaKey::from(provider.peer),
peer: provider.peer,
address_store,
connection: ConnectionType::NotConnected,
}
}
}
// src/protocol/libp2p/kademlia/executor.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
protocol::libp2p::kademlia::query::QueryId, substream::Substream,
utils::futures_stream::FuturesStream, PeerId,
};
use bytes::{Bytes, BytesMut};
use futures::{future::BoxFuture, Stream, StreamExt};
use std::{
pin::Pin,
task::{Context, Poll},
time::Duration,
};
/// Read timeout for inbound messages.
const READ_TIMEOUT: Duration = Duration::from_secs(15);
/// Write timeout for outbound messages.
const WRITE_TIMEOUT: Duration = Duration::from_secs(15);
/// Failure reason.
#[derive(Debug)]
pub enum FailureReason {
/// Substream was closed while reading/writing message to remote peer.
SubstreamClosed,
/// Timeout while reading/writing to substream.
Timeout,
}
/// Query result.
#[derive(Debug)]
pub enum QueryResult {
/// Message was sent to remote peer successfully.
/// This result is only reported for send-only queries. Queries that include reading a
/// response won't report it and will only yield a [`QueryResult::ReadSuccess`].
SendSuccess {
/// Substream.
substream: Substream,
},
/// Failed to send message to remote peer.
SendFailure {
/// Failure reason.
reason: FailureReason,
},
/// Message was read from the remote peer successfully.
ReadSuccess {
/// Substream.
substream: Substream,
/// Read message.
message: BytesMut,
},
/// Failed to read message from remote peer.
ReadFailure {
/// Failure reason.
reason: FailureReason,
},
/// Result that must be treated as send success. This is needed as a workaround to support
/// older litep2p nodes not sending `PUT_VALUE` ACK messages and not reading them.
// TODO: remove this as part of https://github.com/paritytech/litep2p/issues/429.
AssumeSendSuccess,
}
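Every timed substream operation below distinguishes three outcomes: the timer fired, the I/O itself failed, or the I/O succeeded. `tokio::time::timeout` encodes this as a nested `Result`, and the match arms map it onto `FailureReason`/`QueryResult`. A std-only sketch of that mapping (plain string errors standing in for the tokio `Elapsed` and substream errors):

```rust
#[derive(Debug, PartialEq)]
enum Outcome {
    Timeout,
    SubstreamClosed,
    Success,
}

// `timeout(..).await` yields `Err(..)` when the timer fires and `Ok(inner)`
// otherwise; `inner` is the substream operation's own `Result`.
fn classify(result: Result<Result<(), &'static str>, &'static str>) -> Outcome {
    match result {
        // Outer error: the timer fired before the operation finished.
        Err(_) => Outcome::Timeout,
        // Inner error: the write/read itself failed.
        Ok(Err(_)) => Outcome::SubstreamClosed,
        Ok(Ok(())) => Outcome::Success,
    }
}

fn main() {
    assert_eq!(classify(Err("elapsed")), Outcome::Timeout);
    assert_eq!(classify(Ok(Err("closed"))), Outcome::SubstreamClosed);
    assert_eq!(classify(Ok(Ok(()))), Outcome::Success);
    println!("ok");
}
```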
/// Query context.
#[derive(Debug)]
pub struct QueryContext {
/// Peer ID.
pub peer: PeerId,
/// Query ID.
pub query_id: Option<QueryId>,
/// Query result.
pub result: QueryResult,
}
/// Query executor.
pub struct QueryExecutor {
/// Pending futures.
futures: FuturesStream<BoxFuture<'static, QueryContext>>,
}
impl QueryExecutor {
/// Create new [`QueryExecutor`]
pub fn new() -> Self {
Self {
futures: FuturesStream::new(),
}
}
/// Send message to remote peer.
pub fn send_message(
&mut self,
peer: PeerId,
query_id: Option<QueryId>,
message: Bytes,
mut substream: Substream,
) {
self.futures.push(Box::pin(async move {
match tokio::time::timeout(WRITE_TIMEOUT, substream.send_framed(message)).await {
// Timeout error.
Err(_) => QueryContext {
peer,
query_id,
result: QueryResult::SendFailure {
reason: FailureReason::Timeout,
},
},
// Writing message to substream failed.
Ok(Err(_)) => QueryContext {
peer,
query_id,
result: QueryResult::SendFailure {
reason: FailureReason::SubstreamClosed,
},
},
Ok(Ok(())) => QueryContext {
peer,
query_id,
result: QueryResult::SendSuccess { substream },
},
}
}));
}
/// Send message and ignore sending errors.
///
    /// This is a hackish way of dealing with older litep2p nodes that do not expect to receive
    /// `PUT_VALUE` ACK messages. This should eventually be removed.
// TODO: remove this as part of https://github.com/paritytech/litep2p/issues/429.
pub fn send_message_eat_failure(
&mut self,
peer: PeerId,
query_id: Option<QueryId>,
message: Bytes,
mut substream: Substream,
) {
self.futures.push(Box::pin(async move {
match tokio::time::timeout(WRITE_TIMEOUT, substream.send_framed(message)).await {
// Timeout error.
Err(_) => QueryContext {
peer,
query_id,
result: QueryResult::AssumeSendSuccess,
},
// Writing message to substream failed.
Ok(Err(_)) => QueryContext {
peer,
query_id,
result: QueryResult::AssumeSendSuccess,
},
Ok(Ok(())) => QueryContext {
peer,
query_id,
result: QueryResult::SendSuccess { substream },
},
}
}));
}
/// Read message from remote peer with timeout.
pub fn read_message(
&mut self,
peer: PeerId,
query_id: Option<QueryId>,
mut substream: Substream,
) {
self.futures.push(Box::pin(async move {
match tokio::time::timeout(READ_TIMEOUT, substream.next()).await {
Err(_) => QueryContext {
peer,
query_id,
result: QueryResult::ReadFailure {
reason: FailureReason::Timeout,
},
},
Ok(Some(Ok(message))) => QueryContext {
peer,
query_id,
result: QueryResult::ReadSuccess { substream, message },
},
Ok(None) | Ok(Some(Err(_))) => QueryContext {
peer,
query_id,
result: QueryResult::ReadFailure {
reason: FailureReason::SubstreamClosed,
},
},
}
}));
}
/// Send request to remote peer and read response.
pub fn send_request_read_response(
&mut self,
peer: PeerId,
query_id: Option<QueryId>,
message: Bytes,
mut substream: Substream,
) {
self.futures.push(Box::pin(async move {
match tokio::time::timeout(WRITE_TIMEOUT, substream.send_framed(message)).await {
// Timeout error.
Err(_) =>
return QueryContext {
peer,
query_id,
result: QueryResult::SendFailure {
reason: FailureReason::Timeout,
},
},
// Writing message to substream failed.
Ok(Err(_)) => {
let _ = substream.close().await;
return QueryContext {
peer,
query_id,
result: QueryResult::SendFailure {
reason: FailureReason::SubstreamClosed,
},
};
}
                // Sending succeeded; fall through to read the response below.
Ok(Ok(())) => (),
};
match tokio::time::timeout(READ_TIMEOUT, substream.next()).await {
Err(_) => QueryContext {
peer,
query_id,
result: QueryResult::ReadFailure {
reason: FailureReason::Timeout,
},
},
Ok(Some(Ok(message))) => QueryContext {
peer,
query_id,
result: QueryResult::ReadSuccess { substream, message },
},
Ok(None) | Ok(Some(Err(_))) => QueryContext {
peer,
query_id,
result: QueryResult::ReadFailure {
reason: FailureReason::SubstreamClosed,
},
},
}
}));
}
/// Send request to remote peer and read the response, ignoring it and any read errors.
///
/// This is a hackish way of dealing with older litep2p nodes not sending `PUT_VALUE` ACK
/// messages. This should eventually be removed.
// TODO: remove this as part of https://github.com/paritytech/litep2p/issues/429.
pub fn send_request_eat_response_failure(
&mut self,
peer: PeerId,
query_id: Option<QueryId>,
message: Bytes,
mut substream: Substream,
) {
self.futures.push(Box::pin(async move {
match tokio::time::timeout(WRITE_TIMEOUT, substream.send_framed(message)).await {
// Timeout error.
Err(_) =>
return QueryContext {
peer,
query_id,
result: QueryResult::SendFailure {
reason: FailureReason::Timeout,
},
},
// Writing message to substream failed.
Ok(Err(_)) => {
let _ = substream.close().await;
return QueryContext {
peer,
query_id,
result: QueryResult::SendFailure {
reason: FailureReason::SubstreamClosed,
},
};
}
                // Sending succeeded; fall through to read the response below.
Ok(Ok(())) => (),
};
// Ignore the read result (including errors).
if let Ok(Some(Ok(message))) =
tokio::time::timeout(READ_TIMEOUT, substream.next()).await
{
QueryContext {
peer,
query_id,
result: QueryResult::ReadSuccess { substream, message },
}
} else {
QueryContext {
peer,
query_id,
result: QueryResult::AssumeSendSuccess,
}
}
}));
}
}
impl Stream for QueryExecutor {
type Item = QueryContext;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
self.futures.poll_next_unpin(cx)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{mock::substream::MockSubstream, types::SubstreamId};
#[tokio::test]
async fn substream_read_timeout() {
let mut executor = QueryExecutor::new();
let peer = PeerId::random();
let mut substream = MockSubstream::new();
substream.expect_poll_next().returning(|_| Poll::Pending);
let substream = Substream::new_mock(peer, SubstreamId::from(0usize), Box::new(substream));
executor.read_message(peer, None, substream);
match tokio::time::timeout(Duration::from_secs(20), executor.next()).await {
Ok(Some(QueryContext {
peer: queried_peer,
query_id,
result,
})) => {
assert_eq!(peer, queried_peer);
assert!(query_id.is_none());
assert!(std::matches!(
result,
QueryResult::ReadFailure {
reason: FailureReason::Timeout
}
));
}
result => panic!("invalid result received: {result:?}"),
}
}
#[tokio::test]
async fn substream_read_substream_closed() {
let mut executor = QueryExecutor::new();
let peer = PeerId::random();
let mut substream = MockSubstream::new();
substream.expect_poll_next().times(1).return_once(|_| {
Poll::Ready(Some(Err(crate::error::SubstreamError::ConnectionClosed)))
});
executor.read_message(
peer,
Some(QueryId(1338)),
Substream::new_mock(peer, SubstreamId::from(0usize), Box::new(substream)),
);
match tokio::time::timeout(Duration::from_secs(20), executor.next()).await {
Ok(Some(QueryContext {
peer: queried_peer,
query_id,
result,
})) => {
assert_eq!(peer, queried_peer);
assert_eq!(query_id, Some(QueryId(1338)));
assert!(std::matches!(
result,
QueryResult::ReadFailure {
reason: FailureReason::SubstreamClosed
}
));
}
result => panic!("invalid result received: {result:?}"),
}
}
#[tokio::test]
async fn send_succeeds_no_message_read() {
let mut executor = QueryExecutor::new();
let peer = PeerId::random();
// prepare substream which succeeds in sending the message but closes right after
let mut substream = MockSubstream::new();
substream.expect_poll_ready().times(1).return_once(|_| Poll::Ready(Ok(())));
substream.expect_start_send().times(1).return_once(|_| Ok(()));
substream.expect_poll_flush().times(1).return_once(|_| Poll::Ready(Ok(())));
substream.expect_poll_next().times(1).return_once(|_| {
Poll::Ready(Some(Err(crate::error::SubstreamError::ConnectionClosed)))
});
executor.send_request_read_response(
peer,
Some(QueryId(1337)),
Bytes::from_static(b"hello, world"),
Substream::new_mock(peer, SubstreamId::from(0usize), Box::new(substream)),
);
match tokio::time::timeout(Duration::from_secs(20), executor.next()).await {
Ok(Some(QueryContext {
peer: queried_peer,
query_id,
result,
})) => {
assert_eq!(peer, queried_peer);
assert_eq!(query_id, Some(QueryId(1337)));
assert!(std::matches!(
result,
QueryResult::ReadFailure {
reason: FailureReason::SubstreamClosed
}
));
}
result => panic!("invalid result received: {result:?}"),
}
}
#[tokio::test]
async fn send_fails_no_message_read() {
let mut executor = QueryExecutor::new();
let peer = PeerId::random();
        // prepare substream which fails immediately when sending the message
let mut substream = MockSubstream::new();
substream
.expect_poll_ready()
.times(1)
.return_once(|_| Poll::Ready(Err(crate::error::SubstreamError::ConnectionClosed)));
substream.expect_poll_close().times(1).return_once(|_| Poll::Ready(Ok(())));
executor.send_request_read_response(
peer,
Some(QueryId(1337)),
Bytes::from_static(b"hello, world"),
Substream::new_mock(peer, SubstreamId::from(0usize), Box::new(substream)),
);
match tokio::time::timeout(Duration::from_secs(20), executor.next()).await {
Ok(Some(QueryContext {
peer: queried_peer,
query_id,
result,
})) => {
assert_eq!(peer, queried_peer);
assert_eq!(query_id, Some(QueryId(1337)));
assert!(std::matches!(
result,
QueryResult::SendFailure {
reason: FailureReason::SubstreamClosed
}
));
}
result => panic!("invalid result received: {result:?}"),
}
}
#[tokio::test]
async fn read_message_timeout() {
let mut executor = QueryExecutor::new();
let peer = PeerId::random();
        // prepare substream which never yields a message, forcing a read timeout
let mut substream = MockSubstream::new();
substream.expect_poll_next().returning(|_| Poll::Pending);
executor.read_message(
peer,
Some(QueryId(1336)),
Substream::new_mock(peer, SubstreamId::from(0usize), Box::new(substream)),
);
match tokio::time::timeout(Duration::from_secs(20), executor.next()).await {
Ok(Some(QueryContext {
peer: queried_peer,
query_id,
result,
})) => {
assert_eq!(peer, queried_peer);
assert_eq!(query_id, Some(QueryId(1336)));
assert!(std::matches!(
result,
QueryResult::ReadFailure {
reason: FailureReason::Timeout
}
));
}
result => panic!("invalid result received: {result:?}"),
}
}
#[tokio::test]
async fn read_message_substream_closed() {
let mut executor = QueryExecutor::new();
let peer = PeerId::random();
        // prepare substream which returns an error on the first read
let mut substream = MockSubstream::new();
substream
.expect_poll_next()
.times(1)
.return_once(|_| Poll::Ready(Some(Err(crate::error::SubstreamError::ChannelClogged))));
executor.read_message(
peer,
Some(QueryId(1335)),
Substream::new_mock(peer, SubstreamId::from(0usize), Box::new(substream)),
);
match tokio::time::timeout(Duration::from_secs(20), executor.next()).await {
Ok(Some(QueryContext {
peer: queried_peer,
query_id,
result,
})) => {
assert_eq!(peer, queried_peer);
assert_eq!(query_id, Some(QueryId(1335)));
assert!(std::matches!(
result,
QueryResult::ReadFailure {
reason: FailureReason::SubstreamClosed
}
));
}
result => panic!("invalid result received: {result:?}"),
}
}
}
// src/protocol/libp2p/kademlia/mod.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! [`/ipfs/kad/1.0.0`](https://github.com/libp2p/specs/blob/master/kad-dht/README.md) implementation.
use crate::{
error::{Error, ImmediateDialError, SubstreamError},
protocol::{
libp2p::kademlia::{
bucket::KBucketEntry,
executor::{QueryContext, QueryExecutor, QueryResult},
message::KademliaMessage,
query::{QueryAction, QueryEngine},
routing_table::RoutingTable,
store::{MemoryStore, MemoryStoreAction},
types::{ConnectionType, KademliaPeer, Key},
},
Direction, TransportEvent, TransportService,
},
substream::Substream,
transport::Endpoint,
types::SubstreamId,
PeerId,
};
use bytes::{Bytes, BytesMut};
use futures::StreamExt;
use multiaddr::Multiaddr;
use tokio::sync::mpsc::{Receiver, Sender};
use std::{
collections::{hash_map::Entry, HashMap},
sync::{
atomic::{AtomicUsize, Ordering},
Arc,
},
time::{Duration, Instant},
};
pub use config::{Config, ConfigBuilder};
pub use handle::{
IncomingRecordValidationMode, KademliaCommand, KademliaEvent, KademliaHandle, Quorum,
RoutingTableUpdateMode,
};
pub use query::QueryId;
pub use record::{ContentProvider, Key as RecordKey, PeerRecord, Record};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::ipfs::kademlia";
/// Parallelism factor, `α`.
const PARALLELISM_FACTOR: usize = 3;
mod bucket;
mod config;
mod executor;
mod handle;
mod message;
mod query;
mod record;
mod routing_table;
mod store;
mod types;
mod schema {
pub(super) mod kademlia {
include!(concat!(env!("OUT_DIR"), "/kademlia.rs"));
}
}
/// Peer action.
#[derive(Debug, Clone)]
#[allow(clippy::enum_variant_names)]
enum PeerAction {
/// Find nodes (and values/providers) as part of `FIND_NODE`/`GET_VALUE`/`GET_PROVIDERS` query.
    // TODO: maybe a better name would be `SendFindRequest`?
SendFindNode(QueryId),
/// Send `PUT_VALUE` message to peer.
SendPutValue(QueryId, Bytes),
/// Send `ADD_PROVIDER` message to peer.
SendAddProvider(QueryId, Bytes),
}
impl PeerAction {
fn query_id(&self) -> QueryId {
match self {
PeerAction::SendFindNode(query_id) => *query_id,
PeerAction::SendPutValue(query_id, _) => *query_id,
PeerAction::SendAddProvider(query_id, _) => *query_id,
}
}
}
/// Peer context.
#[derive(Default)]
struct PeerContext {
/// Pending action, if any.
pending_actions: HashMap<SubstreamId, PeerAction>,
}
impl PeerContext {
/// Create new [`PeerContext`].
pub fn new() -> Self {
Self {
pending_actions: HashMap::new(),
}
}
/// Add pending action for peer.
pub fn add_pending_action(&mut self, substream_id: SubstreamId, action: PeerAction) {
self.pending_actions.insert(substream_id, action);
}
}
/// Main Kademlia object.
pub(crate) struct Kademlia {
/// Transport service.
service: TransportService,
/// Local Kademlia key.
local_key: Key<PeerId>,
    /// Connected peers.
peers: HashMap<PeerId, PeerContext>,
/// TX channel for sending events to `KademliaHandle`.
event_tx: Sender<KademliaEvent>,
/// RX channel for receiving commands from `KademliaHandle`.
cmd_rx: Receiver<KademliaCommand>,
/// Next query ID.
next_query_id: Arc<AtomicUsize>,
/// Routing table.
routing_table: RoutingTable,
/// Replication factor.
replication_factor: usize,
/// Record store.
store: MemoryStore,
/// Pending outbound substreams.
pending_substreams: HashMap<SubstreamId, PeerId>,
/// Pending dials.
pending_dials: HashMap<PeerId, Vec<PeerAction>>,
/// Routing table update mode.
update_mode: RoutingTableUpdateMode,
/// Incoming records validation mode.
validation_mode: IncomingRecordValidationMode,
/// Default record TTL.
record_ttl: Duration,
/// Query engine.
engine: QueryEngine,
/// Query executor.
executor: QueryExecutor,
}
impl Kademlia {
/// Create new [`Kademlia`].
pub(crate) fn new(mut service: TransportService, config: Config) -> Self {
let local_peer_id = service.local_peer_id();
let local_key = Key::from(service.local_peer_id());
let mut routing_table = RoutingTable::new(local_key.clone());
for (peer, addresses) in config.known_peers {
tracing::trace!(target: LOG_TARGET, ?peer, ?addresses, "add bootstrap peer");
routing_table.add_known_peer(peer, addresses.clone(), ConnectionType::NotConnected);
service.add_known_address(&peer, addresses.into_iter());
}
let store = MemoryStore::with_config(local_peer_id, config.memory_store_config);
Self {
service,
routing_table,
peers: HashMap::new(),
cmd_rx: config.cmd_rx,
next_query_id: config.next_query_id,
store,
event_tx: config.event_tx,
local_key,
pending_dials: HashMap::new(),
executor: QueryExecutor::new(),
pending_substreams: HashMap::new(),
update_mode: config.update_mode,
validation_mode: config.validation_mode,
record_ttl: config.record_ttl,
replication_factor: config.replication_factor,
engine: QueryEngine::new(local_peer_id, config.replication_factor, PARALLELISM_FACTOR),
}
}
/// Allocate next query ID.
fn next_query_id(&mut self) -> QueryId {
let query_id = self.next_query_id.fetch_add(1, Ordering::Relaxed);
QueryId(query_id)
}
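`next_query_id` draws from an `Arc<AtomicUsize>` that is shared with the handle side (see `next_query_id` in the config), so IDs allocated from either side never collide. A std-only sketch of that shared-counter allocation scheme (hypothetical names, not litep2p's types):

```rust
use std::sync::{
    atomic::{AtomicUsize, Ordering},
    Arc,
};

// Two holders of the same counter; `fetch_add` hands out unique, increasing IDs.
fn next_id(counter: &Arc<AtomicUsize>) -> usize {
    counter.fetch_add(1, Ordering::Relaxed)
}

fn main() {
    let counter = Arc::new(AtomicUsize::new(0));
    let handle_side = Arc::clone(&counter);

    let a = next_id(&counter);
    let b = next_id(&handle_side);
    let c = next_id(&counter);

    // Every call observes the previous increments, regardless of which clone it used.
    assert_eq!((a, b, c), (0, 1, 2));
    println!("ok");
}
```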
/// Connection established to remote peer.
fn on_connection_established(&mut self, peer: PeerId, endpoint: Endpoint) -> crate::Result<()> {
tracing::trace!(target: LOG_TARGET, ?peer, "connection established");
match self.peers.entry(peer) {
Entry::Vacant(entry) => {
                // Set the connection type to connected and potentially save the address in the
                // table.
//
// Note: this happens regardless of the state of the kademlia managed peers, because
// an already occupied entry in the `self.peers` map does not mean that we are
// no longer interested in the address / connection type of the peer.
self.routing_table.on_connection_established(Key::from(peer), endpoint);
let Some(actions) = self.pending_dials.remove(&peer) else {
// Note that we do not add peer entry if we don't have any pending actions.
// This is done to not populate `self.peers` with peers that don't support
// our Kademlia protocol.
return Ok(());
};
// go over all pending actions, open substreams and save the state to `PeerContext`
// from which it will be later queried when the substream opens
let mut context = PeerContext::new();
for action in actions {
match self.service.open_substream(peer) {
Ok(substream_id) => {
context.add_pending_action(substream_id, action);
}
Err(error) => {
tracing::debug!(
target: LOG_TARGET,
?peer,
?action,
?error,
"connection established to peer but failed to open substream",
);
if let PeerAction::SendFindNode(query_id) = action {
self.engine.register_send_failure(query_id, peer);
self.engine.register_response_failure(query_id, peer);
}
}
}
}
entry.insert(context);
Ok(())
}
Entry::Occupied(_) => {
tracing::warn!(
target: LOG_TARGET,
?peer,
?endpoint,
"connection already exists, discarding opening substreams, this is unexpected"
);
// Update the connection in the routing table, similar as above. The function call
// happens in two places to avoid unnecessary cloning of the endpoint for logging
// purposes.
self.routing_table.on_connection_established(Key::from(peer), endpoint);
Err(Error::PeerAlreadyExists(peer))
}
}
}
/// Disconnect peer from `Kademlia`.
///
/// Peer is disconnected either because the substream was detected closed
/// or because the connection was closed.
///
/// The peer is kept in the routing table but its connection state is set
/// as `NotConnected`, meaning it can be evicted from a k-bucket if another
/// peer that shares the bucket connects.
async fn disconnect_peer(&mut self, peer: PeerId, query: Option<QueryId>) {
tracing::trace!(target: LOG_TARGET, ?peer, ?query, "disconnect peer");
if let Some(query) = query {
self.engine.register_peer_failure(query, peer);
}
// Apart from the failing query, we need to fail all other pending queries for the peer
// being disconnected.
if let Some(PeerContext { pending_actions }) = self.peers.remove(&peer) {
pending_actions.into_iter().for_each(|(_, action)| {
// Don't report failure twice for the same `query_id` if it was already reported
// above. (We can still have other pending queries for the peer that
// need to be reported.)
let query_id = action.query_id();
if Some(query_id) != query {
self.engine.register_peer_failure(query_id, peer);
}
});
}
if let KBucketEntry::Occupied(entry) = self.routing_table.entry(Key::from(peer)) {
entry.connection = ConnectionType::NotConnected;
}
}
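On disconnect, every pending action for the peer is failed, but the query already reported via the `query` argument must not be reported a second time. A std-only sketch of that dedup rule over a pending-action map (simplified integer IDs in place of `SubstreamId`/`QueryId`):

```rust
use std::collections::HashMap;

// Returns the query IDs to fail, skipping the one already reported (if any).
fn queries_to_fail(
    pending: HashMap<u64 /* substream id */, u64 /* query id */>,
    already_reported: Option<u64>,
) -> Vec<u64> {
    pending
        .into_values()
        .filter(|query_id| Some(*query_id) != already_reported)
        .collect()
}

fn main() {
    let pending = HashMap::from([(1, 10), (2, 11)]);
    let mut failed = queries_to_fail(pending, Some(10));
    failed.sort();
    // Query 10 was already reported by the caller, so only 11 remains.
    assert_eq!(failed, vec![11]);
    println!("ok");
}
```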
/// Local node opened a substream to remote node.
async fn on_outbound_substream(
&mut self,
peer: PeerId,
substream_id: SubstreamId,
substream: Substream,
) -> crate::Result<()> {
tracing::trace!(
target: LOG_TARGET,
?peer,
?substream_id,
"outbound substream opened",
);
let _ = self.pending_substreams.remove(&substream_id);
let pending_action = &mut self
.peers
.get_mut(&peer)
// If we opened an outbound substream, we must have pending actions for the peer.
.ok_or(Error::PeerDoesntExist(peer))?
.pending_actions
.remove(&substream_id);
match pending_action.take() {
None => {
tracing::trace!(
target: LOG_TARGET,
?peer,
?substream_id,
"pending action doesn't exist for peer, closing substream",
);
let _ = substream.close().await;
return Ok(());
}
Some(PeerAction::SendFindNode(query)) => {
match self.engine.next_peer_action(&query, &peer) {
Some(QueryAction::SendMessage {
query,
peer,
message,
}) => {
tracing::trace!(target: LOG_TARGET, ?peer, ?query, "start sending message to peer");
self.executor.send_request_read_response(
peer,
Some(query),
message,
substream,
);
}
// query finished while the substream was being opened
None => {
let _ = substream.close().await;
}
action => {
tracing::warn!(target: LOG_TARGET, ?query, ?peer, ?action, "unexpected action for `FIND_NODE`");
let _ = substream.close().await;
debug_assert!(false);
}
}
}
Some(PeerAction::SendPutValue(query, message)) => {
tracing::trace!(target: LOG_TARGET, ?peer, "send `PUT_VALUE` message");
self.executor.send_request_eat_response_failure(
peer,
Some(query),
message,
substream,
);
// TODO: replace this with `send_request_read_response` as part of
// https://github.com/paritytech/litep2p/issues/429.
}
Some(PeerAction::SendAddProvider(query, message)) => {
tracing::trace!(target: LOG_TARGET, ?peer, "send `ADD_PROVIDER` message");
self.executor.send_message(peer, Some(query), message, substream);
}
}
Ok(())
}
/// Remote opened a substream to local node.
async fn on_inbound_substream(&mut self, peer: PeerId, substream: Substream) {
tracing::trace!(target: LOG_TARGET, ?peer, "inbound substream opened");
        // Ensure peer entry exists to treat peer as [`ConnectionType::Connected`]
        // when inserting into the routing table.
self.peers.entry(peer).or_default();
self.executor.read_message(peer, None, substream);
}
/// Update routing table if the routing table update mode was set to automatic.
///
    /// Inform the user about the potential routing table update, allowing them to apply it
    /// manually if the mode was set to manual.
async fn update_routing_table(&mut self, peers: &[KademliaPeer]) {
let peers: Vec<_> =
peers.iter().filter(|peer| peer.peer != self.service.local_peer_id()).collect();
// inform user about the routing table update, regardless of what the routing table update
// mode is
let _ = self
.event_tx
.send(KademliaEvent::RoutingTableUpdate {
peers: peers.iter().map(|peer| peer.peer).collect::<Vec<PeerId>>(),
})
.await;
for info in peers {
let addresses = info.addresses();
self.service.add_known_address(&info.peer, addresses.clone().into_iter());
if std::matches!(self.update_mode, RoutingTableUpdateMode::Automatic) {
self.routing_table.add_known_peer(
info.peer,
addresses,
self.peers
.get(&info.peer)
.map_or(ConnectionType::NotConnected, |_| ConnectionType::Connected),
);
}
}
}
/// Handle received message.
async fn on_message_received(
&mut self,
peer: PeerId,
query_id: Option<QueryId>,
message: BytesMut,
substream: Substream,
) -> crate::Result<()> {
tracing::trace!(target: LOG_TARGET, ?peer, query = ?query_id, "handle message from peer");
match KademliaMessage::from_bytes(message, self.replication_factor)
.ok_or(Error::InvalidData)?
{
KademliaMessage::FindNode { target, peers } => {
match query_id {
Some(query_id) => {
tracing::trace!(
target: LOG_TARGET,
?peer,
?target,
query = ?query_id,
"handle `FIND_NODE` response",
);
// update routing table and inform user about the update
self.update_routing_table(&peers).await;
self.engine.register_response(
query_id,
peer,
KademliaMessage::FindNode { target, peers },
);
substream.close().await;
}
None => {
tracing::trace!(
target: LOG_TARGET,
?peer,
?target,
"handle `FIND_NODE` request",
);
let message = KademliaMessage::find_node_response(
&target,
self.routing_table
.closest(&Key::new(target.as_ref()), self.replication_factor),
);
self.executor.send_message(peer, None, message.into(), substream);
}
}
}
KademliaMessage::PutValue { record } => match query_id {
Some(query_id) => {
tracing::trace!(
target: LOG_TARGET,
?peer,
query = ?query_id,
record_key = ?record.key,
"handle `PUT_VALUE` response",
);
self.engine.register_response(
query_id,
peer,
KademliaMessage::PutValue { record },
);
substream.close().await;
}
None => {
tracing::trace!(
target: LOG_TARGET,
?peer,
record_key = ?record.key,
"handle `PUT_VALUE` request",
);
if let IncomingRecordValidationMode::Automatic = self.validation_mode {
self.store.put(record.clone());
}
// Send ACK even if the record was/will be filtered out to not reveal any
// internal state.
let message = KademliaMessage::put_value_response(
record.key.clone(),
record.value.clone(),
);
self.executor.send_message_eat_failure(peer, None, message, substream);
// TODO: replace this with `send_message` as part of
// https://github.com/paritytech/litep2p/issues/429.
let _ = self.event_tx.send(KademliaEvent::IncomingRecord { record }).await;
}
},
KademliaMessage::GetRecord { key, record, peers } => {
match (query_id, key) {
(Some(query_id), key) => {
tracing::trace!(
target: LOG_TARGET,
?peer,
query = ?query_id,
?peers,
?record,
"handle `GET_VALUE` response",
);
// update routing table and inform user about the update
self.update_routing_table(&peers).await;
self.engine.register_response(
query_id,
peer,
KademliaMessage::GetRecord { key, record, peers },
);
substream.close().await;
}
(None, Some(key)) => {
tracing::trace!(
target: LOG_TARGET,
?peer,
?key,
"handle `GET_VALUE` request",
);
let value = self.store.get(&key).cloned();
let closest_peers = self
.routing_table
.closest(&Key::new(key.as_ref()), self.replication_factor);
let message =
KademliaMessage::get_value_response(key, closest_peers, value);
self.executor.send_message(peer, None, message.into(), substream);
}
                    (None, None) => tracing::debug!(
                        target: LOG_TARGET,
                        ?peer,
                        ?record,
                        ?peers,
                        "unable to handle `GET_VALUE` request with empty key",
                    ),
}
}
KademliaMessage::AddProvider { key, mut providers } => {
tracing::trace!(
target: LOG_TARGET,
?peer,
?key,
?providers,
"handle `ADD_PROVIDER` message",
);
match (providers.len(), providers.pop()) {
(1, Some(provider)) => {
let addresses = provider.addresses();
if provider.peer == peer {
self.store.put_provider(
key.clone(),
ContentProvider {
peer,
addresses: addresses.clone(),
},
);
let _ = self
.event_tx
.send(KademliaEvent::IncomingProvider {
provided_key: key,
provider: ContentProvider {
peer: provider.peer,
addresses,
},
})
.await;
} else {
tracing::trace!(
target: LOG_TARGET,
publisher = ?peer,
provider = ?provider.peer,
"ignoring `ADD_PROVIDER` message with `publisher` != `provider`"
)
}
}
(n, _) => {
tracing::trace!(
target: LOG_TARGET,
publisher = ?peer,
?n,
"ignoring `ADD_PROVIDER` message with `n` != 1 providers"
)
}
}
}
KademliaMessage::GetProviders {
key,
peers,
providers,
} => {
match (query_id, key) {
(Some(query_id), key) => {
                        // Note: the key is not required in a response but may be non-empty; we
                        // just ignore it here.
tracing::trace!(
target: LOG_TARGET,
?peer,
query = ?query_id,
?key,
?peers,
?providers,
"handle `GET_PROVIDERS` response",
);
// update routing table and inform user about the update
self.update_routing_table(&peers).await;
self.engine.register_response(
query_id,
peer,
KademliaMessage::GetProviders {
key,
peers,
providers,
},
);
substream.close().await;
}
(None, Some(key)) => {
tracing::trace!(
target: LOG_TARGET,
?peer,
?key,
"handle `GET_PROVIDERS` request",
);
let mut providers = self.store.get_providers(&key);
// Make sure local provider addresses are up to date.
let local_peer_id = self.local_key.clone().into_preimage();
if let Some(p) =
providers.iter_mut().find(|p| p.peer == local_peer_id).as_mut()
{
p.addresses = self.service.public_addresses().get_addresses();
}
let closer_peers = self
.routing_table
.closest(&Key::new(key.as_ref()), self.replication_factor);
let message =
KademliaMessage::get_providers_response(providers, &closer_peers);
self.executor.send_message(peer, None, message.into(), substream);
}
(None, None) => tracing::debug!(
target: LOG_TARGET,
?peer,
?peers,
?providers,
"unable to handle `GET_PROVIDERS` request with empty key",
),
}
}
}
Ok(())
}
/// Failed to open substream to remote peer.
async fn on_substream_open_failure(
&mut self,
substream_id: SubstreamId,
error: SubstreamError,
) {
tracing::trace!(
target: LOG_TARGET,
?substream_id,
?error,
"failed to open substream"
);
let Some(peer) = self.pending_substreams.remove(&substream_id) else {
tracing::debug!(
target: LOG_TARGET,
?substream_id,
"outbound substream failed for non-existent peer"
);
return;
};
if let Some(context) = self.peers.get_mut(&peer) {
let query =
context.pending_actions.remove(&substream_id).as_ref().map(PeerAction::query_id);
self.disconnect_peer(peer, query).await;
}
}
/// Handle dial failure.
fn on_dial_failure(&mut self, peer: PeerId, addresses: Vec<Multiaddr>) {
tracing::trace!(target: LOG_TARGET, ?peer, ?addresses, "failed to dial peer");
self.routing_table.on_dial_failure(Key::from(peer), &addresses);
let Some(actions) = self.pending_dials.remove(&peer) else {
return;
};
for action in actions {
let query = action.query_id();
tracing::trace!(
target: LOG_TARGET,
?peer,
?query,
?addresses,
"report failure for pending query",
);
// Fail both sending and receiving due to dial failure.
self.engine.register_send_failure(query, peer);
self.engine.register_response_failure(query, peer);
}
}
/// Open a substream with a peer or dial the peer.
fn open_substream_or_dial(
&mut self,
peer: PeerId,
action: PeerAction,
query: Option<QueryId>,
) -> Result<(), Error> {
match self.service.open_substream(peer) {
Ok(substream_id) => {
self.pending_substreams.insert(substream_id, peer);
self.peers.entry(peer).or_default().pending_actions.insert(substream_id, action);
Ok(())
}
Err(err) => {
tracing::trace!(target: LOG_TARGET, ?query, ?peer, ?err, "Failed to open substream. Dialing peer");
match self.service.dial(&peer) {
Ok(()) => {
self.pending_dials.entry(peer).or_default().push(action);
Ok(())
}
// Already connected is a recoverable error.
Err(ImmediateDialError::AlreadyConnected) => {
// Dial returned `Error::AlreadyConnected`, retry opening the substream.
match self.service.open_substream(peer) {
Ok(substream_id) => {
self.pending_substreams.insert(substream_id, peer);
self.peers
.entry(peer)
.or_default()
.pending_actions
.insert(substream_id, action);
Ok(())
}
Err(err) => {
tracing::debug!(target: LOG_TARGET, ?query, ?peer, ?err, "Failed to open substream a second time");
Err(err.into())
}
}
}
Err(error) => {
tracing::trace!(target: LOG_TARGET, ?query, ?peer, ?error, "Failed to dial peer");
Err(error.into())
}
}
}
}
}
/// Handle next query action.
async fn on_query_action(&mut self, action: QueryAction) -> Result<(), (QueryId, PeerId)> {
match action {
QueryAction::SendMessage { query, peer, .. } => {
// This action is used for `FIND_NODE`, `GET_VALUE` and `GET_PROVIDERS` queries.
if self
.open_substream_or_dial(peer, PeerAction::SendFindNode(query), Some(query))
.is_err()
{
// Announce the error to the query engine.
self.engine.register_send_failure(query, peer);
self.engine.register_response_failure(query, peer);
}
Ok(())
}
QueryAction::FindNodeQuerySucceeded {
target,
peers,
query,
} => {
tracing::debug!(
target: LOG_TARGET,
?query,
peer = ?target,
num_peers = ?peers.len(),
"`FIND_NODE` succeeded",
);
let _ = self
.event_tx
.send(KademliaEvent::FindNodeSuccess {
target,
query_id: query,
peers: peers
.into_iter()
.map(|info| (info.peer, info.addresses()))
.collect(),
})
.await;
Ok(())
}
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
protocol::libp2p::kademlia::{
record::{ContentProvider, Key as RecordKey, Record},
schema,
types::{ConnectionType, KademliaPeer},
},
PeerId,
};
use bytes::{Bytes, BytesMut};
use enum_display::EnumDisplay;
use prost::Message;
use std::time::{Duration, Instant};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::ipfs::kademlia::message";
/// Kademlia message.
#[derive(Debug, Clone, EnumDisplay)]
pub enum KademliaMessage {
/// `FIND_NODE` message.
FindNode {
/// Query target.
target: Vec<u8>,
/// Found peers.
peers: Vec<KademliaPeer>,
},
/// Kademlia `PUT_VALUE` message.
PutValue {
/// Record.
record: Record,
},
/// `GET_VALUE` message.
GetRecord {
/// Key.
key: Option<RecordKey>,
/// Record.
record: Option<Record>,
/// Peers closer to the key.
peers: Vec<KademliaPeer>,
},
/// `ADD_PROVIDER` message.
AddProvider {
/// Key.
key: RecordKey,
/// Peers, providing the data for `key`. Must contain exactly one peer matching the sender
/// of the message.
providers: Vec<KademliaPeer>,
},
/// `GET_PROVIDERS` message.
GetProviders {
/// Key. `None` in response.
key: Option<RecordKey>,
/// Peers closer to the key.
peers: Vec<KademliaPeer>,
/// Peers, providing the data for `key`.
providers: Vec<KademliaPeer>,
},
}
impl KademliaMessage {
    /// Create `FIND_NODE` message for `key`.
pub fn find_node<T: Into<Vec<u8>>>(key: T) -> Bytes {
let message = schema::kademlia::Message {
key: key.into(),
r#type: schema::kademlia::MessageType::FindNode.into(),
cluster_level_raw: 10,
..Default::default()
};
let mut buf = BytesMut::with_capacity(message.encoded_len());
        message.encode(&mut buf).expect("BytesMut to provide needed capacity");
buf.freeze()
}
/// Create `PUT_VALUE` message for `record`.
pub fn put_value(record: Record) -> Bytes {
let message = schema::kademlia::Message {
key: record.key.clone().into(),
r#type: schema::kademlia::MessageType::PutValue.into(),
record: Some(record_to_schema(record)),
cluster_level_raw: 10,
..Default::default()
};
let mut buf = BytesMut::with_capacity(message.encoded_len());
message.encode(&mut buf).expect("BytesMut to provide needed capacity");
buf.freeze()
}
    /// Create `GET_VALUE` message for `key`.
pub fn get_record(key: RecordKey) -> Bytes {
let message = schema::kademlia::Message {
key: key.clone().into(),
r#type: schema::kademlia::MessageType::GetValue.into(),
cluster_level_raw: 10,
..Default::default()
};
let mut buf = BytesMut::with_capacity(message.encoded_len());
message.encode(&mut buf).expect("BytesMut to provide needed capacity");
buf.freeze()
}
/// Create `FIND_NODE` response.
pub fn find_node_response<K: AsRef<[u8]>>(key: K, peers: Vec<KademliaPeer>) -> Vec<u8> {
let message = schema::kademlia::Message {
key: key.as_ref().to_vec(),
cluster_level_raw: 10,
r#type: schema::kademlia::MessageType::FindNode.into(),
closer_peers: peers.iter().map(|peer| peer.into()).collect(),
..Default::default()
};
let mut buf = Vec::with_capacity(message.encoded_len());
message.encode(&mut buf).expect("Vec<u8> to provide needed capacity");
buf
}
/// Create `PUT_VALUE` response.
pub fn put_value_response(key: RecordKey, value: Vec<u8>) -> Bytes {
let message = schema::kademlia::Message {
key: key.to_vec(),
cluster_level_raw: 10,
r#type: schema::kademlia::MessageType::PutValue.into(),
record: Some(schema::kademlia::Record {
key: key.to_vec(),
value,
..Default::default()
}),
..Default::default()
};
let mut buf = BytesMut::with_capacity(message.encoded_len());
message.encode(&mut buf).expect("BytesMut to provide needed capacity");
buf.freeze()
}
/// Create `GET_VALUE` response.
pub fn get_value_response(
key: RecordKey,
peers: Vec<KademliaPeer>,
record: Option<Record>,
) -> Vec<u8> {
let message = schema::kademlia::Message {
key: key.to_vec(),
cluster_level_raw: 10,
r#type: schema::kademlia::MessageType::GetValue.into(),
closer_peers: peers.iter().map(|peer| peer.into()).collect(),
record: record.map(record_to_schema),
..Default::default()
};
let mut buf = Vec::with_capacity(message.encoded_len());
message.encode(&mut buf).expect("Vec<u8> to provide needed capacity");
buf
}
/// Create `ADD_PROVIDER` message with `provider`.
pub fn add_provider(provided_key: RecordKey, provider: ContentProvider) -> Bytes {
let peer = KademliaPeer::new(
provider.peer,
provider.addresses,
ConnectionType::CanConnect, // ignored by message recipient
);
let message = schema::kademlia::Message {
key: provided_key.clone().to_vec(),
cluster_level_raw: 10,
r#type: schema::kademlia::MessageType::AddProvider.into(),
provider_peers: std::iter::once((&peer).into()).collect(),
..Default::default()
};
let mut buf = BytesMut::with_capacity(message.encoded_len());
message.encode(&mut buf).expect("BytesMut to provide needed capacity");
buf.freeze()
}
/// Create `GET_PROVIDERS` request for `key`.
pub fn get_providers_request(key: RecordKey) -> Bytes {
let message = schema::kademlia::Message {
key: key.to_vec(),
cluster_level_raw: 10,
r#type: schema::kademlia::MessageType::GetProviders.into(),
..Default::default()
};
let mut buf = BytesMut::with_capacity(message.encoded_len());
message.encode(&mut buf).expect("BytesMut to provide needed capacity");
buf.freeze()
}
/// Create `GET_PROVIDERS` response.
pub fn get_providers_response(
providers: Vec<ContentProvider>,
closer_peers: &[KademliaPeer],
) -> Vec<u8> {
let provider_peers = providers
.into_iter()
.map(|p| {
KademliaPeer::new(
p.peer,
p.addresses,
// `ConnectionType` is ignored by a recipient
ConnectionType::NotConnected,
)
})
.map(|p| (&p).into())
.collect();
let message = schema::kademlia::Message {
cluster_level_raw: 10,
r#type: schema::kademlia::MessageType::GetProviders.into(),
closer_peers: closer_peers.iter().map(Into::into).collect(),
provider_peers,
..Default::default()
};
let mut buf = Vec::with_capacity(message.encoded_len());
message.encode(&mut buf).expect("Vec<u8> to provide needed capacity");
buf
}
/// Get [`KademliaMessage`] from bytes.
pub fn from_bytes(bytes: BytesMut, replication_factor: usize) -> Option<Self> {
match schema::kademlia::Message::decode(bytes) {
Ok(message) => match message.r#type {
// FIND_NODE
4 => {
let peers = message
.closer_peers
.iter()
.filter_map(|peer| KademliaPeer::try_from(peer).ok())
.take(replication_factor)
.collect();
Some(Self::FindNode {
target: message.key,
peers,
})
}
// PUT_VALUE
0 => {
let record = message.record?;
Some(Self::PutValue {
record: record_from_schema(record)?,
})
}
// GET_VALUE
1 => {
let key = match message.key.is_empty() {
true => message.record.as_ref().and_then(|record| {
(!record.key.is_empty()).then_some(RecordKey::from(record.key.clone()))
}),
false => Some(RecordKey::from(message.key.clone())),
};
let record = if let Some(record) = message.record {
Some(record_from_schema(record)?)
} else {
None
};
Some(Self::GetRecord {
key,
record,
peers: message
.closer_peers
.iter()
.filter_map(|peer| KademliaPeer::try_from(peer).ok())
.take(replication_factor)
.collect(),
})
}
// ADD_PROVIDER
2 => {
let key = (!message.key.is_empty()).then_some(message.key.into())?;
let providers = message
.provider_peers
.iter()
.filter_map(|peer| KademliaPeer::try_from(peer).ok())
.take(replication_factor)
.collect();
Some(Self::AddProvider { key, providers })
}
// GET_PROVIDERS
3 => {
let key = (!message.key.is_empty()).then_some(message.key.into());
let peers = message
.closer_peers
.iter()
.filter_map(|peer| KademliaPeer::try_from(peer).ok())
.take(replication_factor)
.collect();
let providers = message
.provider_peers
.iter()
.filter_map(|peer| KademliaPeer::try_from(peer).ok())
.take(replication_factor)
.collect();
Some(Self::GetProviders {
key,
peers,
providers,
})
}
message_type => {
tracing::warn!(target: LOG_TARGET, ?message_type, "unhandled message");
None
}
},
Err(error) => {
tracing::debug!(target: LOG_TARGET, ?error, "failed to decode message");
None
}
}
}
}
fn record_to_schema(record: Record) -> schema::kademlia::Record {
schema::kademlia::Record {
key: record.key.into(),
value: record.value,
time_received: String::new(),
publisher: record.publisher.map(|peer_id| peer_id.to_bytes()).unwrap_or_default(),
ttl: record
.expires
.map(|expires| {
let now = Instant::now();
if expires > now {
u32::try_from((expires - now).as_secs()).unwrap_or(u32::MAX)
} else {
1 // because 0 means "does not expire"
}
})
.unwrap_or(0),
}
}
fn record_from_schema(record: schema::kademlia::Record) -> Option<Record> {
Some(Record {
key: record.key.into(),
value: record.value,
publisher: if !record.publisher.is_empty() {
Some(PeerId::from_bytes(&record.publisher).ok()?)
} else {
None
},
expires: if record.ttl > 0 {
Some(Instant::now() + Duration::from_secs(record.ttl as u64))
} else {
None
},
})
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn non_empty_publisher_and_ttl_are_preserved() {
let expires = Instant::now() + Duration::from_secs(3600);
let record = Record {
key: vec![1, 2, 3].into(),
value: vec![17],
publisher: Some(PeerId::random()),
expires: Some(expires),
};
let got_record = record_from_schema(record_to_schema(record.clone())).unwrap();
assert_eq!(got_record.key, record.key);
assert_eq!(got_record.value, record.value);
assert_eq!(got_record.publisher, record.publisher);
// Check that the expiration time is sane.
let got_expires = got_record.expires.unwrap();
assert!(got_expires - expires >= Duration::ZERO);
assert!(got_expires - expires < Duration::from_secs(10));
}
#[test]
fn empty_publisher_and_ttl_are_preserved() {
let record = Record {
key: vec![1, 2, 3].into(),
value: vec![17],
publisher: None,
expires: None,
};
let got_record = record_from_schema(record_to_schema(record.clone())).unwrap();
assert_eq!(got_record, record);
}
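
    // A sketch exercising the TTL clamping in `record_to_schema`: a record whose
    // expiration is not in the future must be encoded with `ttl == 1`, never `0`,
    // because `0` means "does not expire" on the wire.
    #[test]
    fn expired_record_gets_nonzero_ttl() {
        let record = Record {
            key: vec![1, 2, 3].into(),
            value: vec![17],
            publisher: None,
            // Captured before encoding, so it is already in the past when encoded.
            expires: Some(Instant::now()),
        };
        assert_eq!(record_to_schema(record).ttl, 1);
    }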
}
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
protocol::libp2p::kademlia::{ContentProvider, PeerRecord, QueryId, Record, RecordKey},
PeerId,
};
use futures::Stream;
use multiaddr::Multiaddr;
use tokio::sync::mpsc::{Receiver, Sender};
use std::{
num::NonZeroUsize,
pin::Pin,
sync::{
atomic::{AtomicUsize, Ordering},
Arc,
},
task::{Context, Poll},
};
/// Quorum.
///
/// Quorum defines how many peers must be successfully contacted
/// in order for the query to be considered successful.
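///
/// For example (a usage sketch):
/// ```ignore
/// use std::num::NonZeroUsize;
///
/// // Require at least three peers to be contacted successfully.
/// let quorum = Quorum::N(NonZeroUsize::new(3).expect("3 is non-zero"));
/// ```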
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "fuzz", derive(serde::Serialize, serde::Deserialize))]
pub enum Quorum {
/// All peers must be successfully contacted.
All,
/// One peer must be successfully contacted.
One,
/// `N` peers must be successfully contacted.
N(NonZeroUsize),
}
/// Routing table update mode.
#[derive(Debug, Copy, Clone)]
pub enum RoutingTableUpdateMode {
    /// Don't automatically insert discovered peers into the routing table, but
    /// allow the user to do that by calling [`KademliaHandle::add_known_peer()`].
Manual,
/// Automatically add all discovered peers to routing tables.
Automatic,
}
/// Incoming record validation mode.
#[derive(Debug, Copy, Clone)]
pub enum IncomingRecordValidationMode {
    /// Don't automatically insert incoming records into the local DHT store
    /// and let the user do that by calling [`KademliaHandle::store_record()`].
Manual,
/// Automatically accept all incoming records.
Automatic,
}
/// Kademlia commands.
#[derive(Debug)]
#[cfg_attr(feature = "fuzz", derive(serde::Serialize, serde::Deserialize))]
pub enum KademliaCommand {
/// Add known peer.
AddKnownPeer {
/// Peer ID.
peer: PeerId,
/// Addresses of peer.
addresses: Vec<Multiaddr>,
},
/// Send `FIND_NODE` message.
FindNode {
/// Peer ID.
peer: PeerId,
/// Query ID for the query.
query_id: QueryId,
},
/// Store record to DHT.
PutRecord {
/// Record.
record: Record,
/// [`Quorum`] for the query.
quorum: Quorum,
/// Query ID for the query.
query_id: QueryId,
},
/// Store record to DHT to the given peers.
///
    /// Similar to [`KademliaCommand::PutRecord`] but allows the user to specify the peers.
PutRecordToPeers {
/// Record.
record: Record,
/// [`Quorum`] for the query.
quorum: Quorum,
/// Query ID for the query.
query_id: QueryId,
/// Use the following peers for the put request.
peers: Vec<PeerId>,
/// Update local store.
update_local_store: bool,
},
/// Get record from DHT.
GetRecord {
/// Record key.
key: RecordKey,
/// [`Quorum`] for the query.
quorum: Quorum,
/// Query ID for the query.
query_id: QueryId,
},
/// Get providers from DHT.
GetProviders {
/// Provided key.
key: RecordKey,
/// Query ID for the query.
query_id: QueryId,
},
/// Register as a content provider for `key`.
StartProviding {
/// Provided key.
key: RecordKey,
/// [`Quorum`] for the query.
quorum: Quorum,
/// Query ID for the query.
query_id: QueryId,
},
    /// Stop providing the key locally and stop refreshing the provider record.
StopProviding {
/// Provided key.
key: RecordKey,
},
/// Store record locally.
StoreRecord {
        /// Record.
record: Record,
},
}
/// Kademlia events.
#[derive(Debug, Clone)]
pub enum KademliaEvent {
/// Result for the issued `FIND_NODE` query.
FindNodeSuccess {
/// Query ID.
query_id: QueryId,
/// Target of the query
target: PeerId,
/// Found nodes and their addresses.
peers: Vec<(PeerId, Vec<Multiaddr>)>,
},
/// Routing table update.
///
    /// Kademlia has discovered one or more peers that should be added to the routing table.
    /// If [`RoutingTableUpdateMode`] is `Automatic`, the user can ignore this event unless
    /// some upper-level protocol has use for this information.
    ///
    /// If the mode is set to `Manual`, the user should call
    /// [`KademliaHandle::add_known_peer()`] in order to add the peers to the routing table.
RoutingTableUpdate {
/// Discovered peers.
peers: Vec<PeerId>,
},
/// `GET_VALUE` query succeeded.
GetRecordSuccess {
/// Query ID.
query_id: QueryId,
},
/// `GET_VALUE` inflight query produced a result.
///
/// This event is emitted when a peer responds to the query with a record.
GetRecordPartialResult {
/// Query ID.
query_id: QueryId,
/// Found record.
record: PeerRecord,
},
/// `GET_PROVIDERS` query succeeded.
GetProvidersSuccess {
/// Query ID.
query_id: QueryId,
/// Provided key.
provided_key: RecordKey,
        /// Found providers with cached addresses. Returned providers are sorted by
        /// distance to the provided key.
providers: Vec<ContentProvider>,
},
/// `PUT_VALUE` query succeeded.
PutRecordSuccess {
/// Query ID.
query_id: QueryId,
/// Record key.
key: RecordKey,
},
/// `ADD_PROVIDER` query succeeded.
AddProviderSuccess {
/// Query ID.
query_id: QueryId,
/// Provided key.
provided_key: RecordKey,
},
/// Query failed.
QueryFailed {
/// Query ID.
query_id: QueryId,
},
/// Incoming `PUT_VALUE` request received.
///
/// In case of using [`IncomingRecordValidationMode::Manual`] and successful validation
/// the record must be manually inserted into the local DHT store with
/// [`KademliaHandle::store_record()`].
IncomingRecord {
/// Record.
record: Record,
},
/// Incoming `ADD_PROVIDER` request received.
IncomingProvider {
/// Provided key.
provided_key: RecordKey,
/// Provider.
provider: ContentProvider,
},
}
/// Handle for communicating with the Kademlia protocol.
pub struct KademliaHandle {
/// TX channel for sending commands to `Kademlia`.
cmd_tx: Sender<KademliaCommand>,
/// RX channel for receiving events from `Kademlia`.
event_rx: Receiver<KademliaEvent>,
/// Next query ID.
next_query_id: Arc<AtomicUsize>,
}
impl KademliaHandle {
/// Create new [`KademliaHandle`].
pub(super) fn new(
cmd_tx: Sender<KademliaCommand>,
event_rx: Receiver<KademliaEvent>,
next_query_id: Arc<AtomicUsize>,
) -> Self {
Self {
cmd_tx,
event_rx,
next_query_id,
}
}
/// Allocate next query ID.
fn next_query_id(&mut self) -> QueryId {
let query_id = self.next_query_id.fetch_add(1, Ordering::Relaxed);
QueryId(query_id)
}
/// Add known peer.
pub async fn add_known_peer(&self, peer: PeerId, addresses: Vec<Multiaddr>) {
let _ = self.cmd_tx.send(KademliaCommand::AddKnownPeer { peer, addresses }).await;
}
/// Send `FIND_NODE` query to known peers.
pub async fn find_node(&mut self, peer: PeerId) -> QueryId {
let query_id = self.next_query_id();
let _ = self.cmd_tx.send(KademliaCommand::FindNode { peer, query_id }).await;
query_id
}
/// Store record to DHT.
pub async fn put_record(&mut self, record: Record, quorum: Quorum) -> QueryId {
let query_id = self.next_query_id();
let _ = self
.cmd_tx
.send(KademliaCommand::PutRecord {
record,
quorum,
query_id,
})
.await;
query_id
}
/// Store record to DHT to the given peers.
///
    /// The command is silently dropped if `Kademlia` has terminated.
pub async fn put_record_to_peers(
&mut self,
record: Record,
peers: Vec<PeerId>,
update_local_store: bool,
quorum: Quorum,
) -> QueryId {
let query_id = self.next_query_id();
let _ = self
.cmd_tx
.send(KademliaCommand::PutRecordToPeers {
record,
query_id,
peers,
update_local_store,
quorum,
})
.await;
query_id
}
/// Get record from DHT.
///
    /// The command is silently dropped if `Kademlia` has terminated.
pub async fn get_record(&mut self, key: RecordKey, quorum: Quorum) -> QueryId {
let query_id = self.next_query_id();
let _ = self
.cmd_tx
.send(KademliaCommand::GetRecord {
key,
quorum,
query_id,
})
.await;
query_id
}
/// Register as a content provider on the DHT.
///
/// Register the local peer ID & its `public_addresses` as a provider for a given `key`.
    /// The command is silently dropped if `Kademlia` has terminated.
pub async fn start_providing(&mut self, key: RecordKey, quorum: Quorum) -> QueryId {
let query_id = self.next_query_id();
let _ = self
.cmd_tx
.send(KademliaCommand::StartProviding {
key,
quorum,
query_id,
})
.await;
query_id
}
/// Stop providing the key on the DHT.
///
    /// This will stop republishing the provider, but won't remove it from remote nodes
    /// immediately. The provider record is removed from them once the provider TTL
    /// expires, which is set to 48 hours by default.
pub async fn stop_providing(&mut self, key: RecordKey) {
let _ = self.cmd_tx.send(KademliaCommand::StopProviding { key }).await;
}
/// Get providers from DHT.
///
    /// The command is silently dropped if `Kademlia` has terminated.
pub async fn get_providers(&mut self, key: RecordKey) -> QueryId {
let query_id = self.next_query_id();
let _ = self.cmd_tx.send(KademliaCommand::GetProviders { key, query_id }).await;
query_id
}
/// Store the record in the local store. Used in combination with
/// [`IncomingRecordValidationMode::Manual`].
pub async fn store_record(&mut self, record: Record) {
let _ = self.cmd_tx.send(KademliaCommand::StoreRecord { record }).await;
}
/// Try to add known peer and if the channel is clogged, return an error.
pub fn try_add_known_peer(&self, peer: PeerId, addresses: Vec<Multiaddr>) -> Result<(), ()> {
self.cmd_tx
.try_send(KademliaCommand::AddKnownPeer { peer, addresses })
.map_err(|_| ())
}
/// Try to initiate `FIND_NODE` query and if the channel is clogged, return an error.
pub fn try_find_node(&mut self, peer: PeerId) -> Result<QueryId, ()> {
let query_id = self.next_query_id();
self.cmd_tx
.try_send(KademliaCommand::FindNode { peer, query_id })
.map(|_| query_id)
.map_err(|_| ())
}
/// Try to initiate `PUT_VALUE` query and if the channel is clogged, return an error.
pub fn try_put_record(&mut self, record: Record, quorum: Quorum) -> Result<QueryId, ()> {
let query_id = self.next_query_id();
self.cmd_tx
.try_send(KademliaCommand::PutRecord {
record,
query_id,
quorum,
})
.map(|_| query_id)
.map_err(|_| ())
}
/// Try to initiate `PUT_VALUE` query to the given peers and if the channel is clogged,
/// return an error.
pub fn try_put_record_to_peers(
&mut self,
record: Record,
peers: Vec<PeerId>,
update_local_store: bool,
quorum: Quorum,
) -> Result<QueryId, ()> {
let query_id = self.next_query_id();
self.cmd_tx
.try_send(KademliaCommand::PutRecordToPeers {
record,
query_id,
peers,
update_local_store,
quorum,
})
.map(|_| query_id)
.map_err(|_| ())
}
/// Try to initiate `GET_VALUE` query and if the channel is clogged, return an error.
pub fn try_get_record(&mut self, key: RecordKey, quorum: Quorum) -> Result<QueryId, ()> {
let query_id = self.next_query_id();
self.cmd_tx
.try_send(KademliaCommand::GetRecord {
key,
quorum,
query_id,
})
.map(|_| query_id)
.map_err(|_| ())
}
/// Try to store the record in the local store, and if the channel is clogged, return an error.
/// Used in combination with [`IncomingRecordValidationMode::Manual`].
pub fn try_store_record(&mut self, record: Record) -> Result<(), ()> {
self.cmd_tx.try_send(KademliaCommand::StoreRecord { record }).map_err(|_| ())
}
#[cfg(feature = "fuzz")]
/// Expose functionality for fuzzing
pub async fn fuzz_send_message(&mut self, command: KademliaCommand) -> crate::Result<()> {
let _ = self.cmd_tx.send(command).await;
Ok(())
}
}
impl Stream for KademliaHandle {
type Item = KademliaEvent;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
self.event_rx.poll_recv(cx)
}
}
// Copyright 2025 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
protocol::libp2p::kademlia::{handle::Quorum, query::QueryAction, QueryId, RecordKey},
PeerId,
};
use std::{cmp, collections::HashSet};
/// Logging target for this file.
const LOG_TARGET: &str = "litep2p::ipfs::kademlia::query::target_peers";
/// Context for tracking `PUT_VALUE`/`ADD_PROVIDER` requests to peers.
#[derive(Debug)]
pub struct PutToTargetPeersContext {
/// Query ID.
pub query: QueryId,
/// Record/provider key.
pub key: RecordKey,
    /// Number of peers that must succeed for the query's quorum to be reached.
peers_to_succeed: usize,
/// Peers we're waiting for responses from.
pending_peers: HashSet<PeerId>,
/// Number of successfully responded peers.
n_succeeded: usize,
}
impl PutToTargetPeersContext {
/// Create new [`PutToTargetPeersContext`].
pub fn new(query: QueryId, key: RecordKey, peers: Vec<PeerId>, quorum: Quorum) -> Self {
Self {
query,
key,
peers_to_succeed: match quorum {
Quorum::One => 1,
// Clamp by the number of discovered peers. This should only be relevant on
// small networks with fewer peers than the replication factor. Without such
// clamping the query would always fail in small testnets.
Quorum::N(n) => cmp::min(n.get(), cmp::max(peers.len(), 1)),
Quorum::All => cmp::max(peers.len(), 1),
},
pending_peers: peers.into_iter().collect(),
n_succeeded: 0,
}
}
/// Register a success of sending a message to `peer`.
pub fn register_send_success(&mut self, peer: PeerId) {
if self.pending_peers.remove(&peer) {
self.n_succeeded += 1;
tracing::trace!(
target: LOG_TARGET,
query = ?self.query,
?peer,
"successful `PUT_VALUE`/`ADD_PROVIDER` to peer",
);
} else {
tracing::debug!(
target: LOG_TARGET,
query = ?self.query,
?peer,
"`PutToTargetPeersContext::register_send_success`: pending peer does not exist",
);
}
}
/// Register a failure of sending a message to `peer`.
pub fn register_send_failure(&mut self, peer: PeerId) {
if self.pending_peers.remove(&peer) {
tracing::trace!(
target: LOG_TARGET,
query = ?self.query,
?peer,
"failed `PUT_VALUE`/`ADD_PROVIDER` to peer",
);
} else {
tracing::debug!(
target: LOG_TARGET,
query = ?self.query,
?peer,
"`PutToTargetPeersContext::register_send_failure`: pending peer does not exist",
);
}
}
/// Register successful response from peer.
pub fn register_response(&mut self, _peer: PeerId) {
// Currently we only track whether we successfully sent the message to the peer, both for
// `PUT_VALUE` and `ADD_PROVIDER`. While `PUT_VALUE` has a response message, older litep2p
// versions did not send it, so tracking responses would frequently report spurious query
// failures. `ADD_PROVIDER` does not have a response message at all.
// TODO: once most of the network is on a litep2p version that sends `PUT_VALUE` responses,
// we should track them.
}
/// Register failed response from peer.
pub fn register_response_failure(&mut self, _peer: PeerId) {
// See the comment in `register_response`.
// Also note that due to the implementation of [`QueryEngine::register_peer_failure`], only
// one of `register_response_failure` and `register_send_failure` needs to be implemented.
}
/// Check if all responses have been received.
pub fn is_finished(&self) -> bool {
self.pending_peers.is_empty()
}
/// Check if enough requests succeeded to reach the quorum.
pub fn is_succeded(&self) -> bool {
self.n_succeeded >= self.peers_to_succeed
}
/// Get next action if the context is finished.
pub fn next_action(&self) -> Option<QueryAction> {
if self.is_finished() {
if self.is_succeded() {
Some(QueryAction::QuerySucceeded { query: self.query })
} else {
Some(QueryAction::QueryFailed { query: self.query })
}
} else {
None
}
}
}
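The quorum clamping in `new` above can be exercised in isolation. Below is a hypothetical, standalone mirror of that arithmetic (`QuorumSketch` and `required_successes` are illustrative names, not part of litep2p, and the real `Quorum::N` holds a `NonZeroUsize`); it shows how `Quorum::N` is capped by the number of discovered peers and how an empty peer set still requires one success.

```rust
use std::cmp;

// Illustrative stand-in for `Quorum` (hypothetical type, for the sketch only).
enum QuorumSketch {
    One,
    N(usize),
    All,
}

// Mirrors the `peers_to_succeed` computation in `PutToTargetPeersContext::new`.
fn required_successes(quorum: QuorumSketch, n_peers: usize) -> usize {
    match quorum {
        QuorumSketch::One => 1,
        // Clamped by the number of discovered peers, but never below 1.
        QuorumSketch::N(n) => cmp::min(n, cmp::max(n_peers, 1)),
        QuorumSketch::All => cmp::max(n_peers, 1),
    }
}
```

On a small network with only 3 discovered peers, `Quorum::N(20)` degrades to requiring 3 successes instead of failing unconditionally.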
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/protocol/libp2p/kademlia/query/find_node.rs | src/protocol/libp2p/kademlia/query/find_node.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use bytes::Bytes;
use crate::{
protocol::libp2p::kademlia::{
message::KademliaMessage,
query::{QueryAction, QueryId},
types::{Distance, KademliaPeer, Key},
},
PeerId,
};
use std::collections::{BTreeMap, HashMap, HashSet, VecDeque};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::ipfs::kademlia::query::find_node";
/// Default timeout for a peer to respond to a query.
const DEFAULT_PEER_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(10);
/// The configuration needed to instantiate a new [`FindNodeContext`].
#[derive(Debug, Clone)]
pub struct FindNodeConfig<T: Clone + Into<Vec<u8>>> {
/// Local peer ID.
pub local_peer_id: PeerId,
/// Replication factor.
pub replication_factor: usize,
/// Parallelism factor.
pub parallelism_factor: usize,
/// Query ID.
pub query: QueryId,
/// Target key.
pub target: Key<T>,
}
/// Context for `FIND_NODE` queries.
#[derive(Debug)]
pub struct FindNodeContext<T: Clone + Into<Vec<u8>>> {
/// Query immutable config.
pub config: FindNodeConfig<T>,
/// Cached Kademlia message to send.
kad_message: Bytes,
/// Peers from whom the `QueryEngine` is waiting to hear a response.
pub pending: HashMap<PeerId, (KademliaPeer, std::time::Instant)>,
/// Queried candidates.
///
/// These are the peers to whom the query has already been sent
/// and who have either returned their closest peers or failed to answer.
pub queried: HashSet<PeerId>,
/// Candidates.
pub candidates: BTreeMap<Distance, KademliaPeer>,
/// Responses.
pub responses: BTreeMap<Distance, KademliaPeer>,
/// The timeout after which the pending request is no longer
/// counting towards the parallelism factor.
///
/// This is used to prevent the query from getting stuck when a peer
/// is slow or fails to respond in due time.
peer_timeout: std::time::Duration,
/// The number of pending responses that count towards the parallelism factor.
///
/// These represent the number of peers added to `Self::pending` minus the number of peers
/// that have failed to respond within the `Self::peer_timeout`.
pending_responses: usize,
}
impl<T: Clone + Into<Vec<u8>>> FindNodeContext<T> {
/// Create new [`FindNodeContext`].
pub fn new(config: FindNodeConfig<T>, in_peers: VecDeque<KademliaPeer>) -> Self {
let mut candidates = BTreeMap::new();
for candidate in &in_peers {
let distance = config.target.distance(&candidate.key);
candidates.insert(distance, candidate.clone());
}
let kad_message = KademliaMessage::find_node(config.target.clone().into_preimage());
Self {
config,
kad_message,
candidates,
pending: HashMap::new(),
queried: HashSet::new(),
responses: BTreeMap::new(),
peer_timeout: DEFAULT_PEER_TIMEOUT,
pending_responses: 0,
}
}
/// Register response failure for `peer`.
pub fn register_response_failure(&mut self, peer: PeerId) {
let Some((peer, instant)) = self.pending.remove(&peer) else {
tracing::debug!(target: LOG_TARGET, query = ?self.config.query, ?peer, "pending peer doesn't exist during response failure");
return;
};
self.pending_responses = self.pending_responses.saturating_sub(1);
tracing::trace!(target: LOG_TARGET, query = ?self.config.query, ?peer, elapsed = ?instant.elapsed(), "peer failed to respond");
self.queried.insert(peer.peer);
}
/// Register `FIND_NODE` response from `peer`.
pub fn register_response(&mut self, peer: PeerId, peers: Vec<KademliaPeer>) {
let Some((peer, instant)) = self.pending.remove(&peer) else {
tracing::debug!(target: LOG_TARGET, query = ?self.config.query, ?peer, "received response from peer but didn't expect it");
return;
};
self.pending_responses = self.pending_responses.saturating_sub(1);
tracing::trace!(target: LOG_TARGET, query = ?self.config.query, ?peer, elapsed = ?instant.elapsed(), "received response from peer");
// calculate distance for `peer` from target and insert it if
// a) the map doesn't have `replication_factor` responses yet
// b) it can replace some other peer that has a higher distance
let distance = self.config.target.distance(&peer.key);
// always mark the peer as queried to prevent it getting queried again
self.queried.insert(peer.peer);
if self.responses.len() < self.config.replication_factor {
self.responses.insert(distance, peer);
} else {
// Update the furthest peer if this response is closer.
// Find the furthest distance.
let furthest_distance =
self.responses.last_entry().map(|entry| *entry.key()).unwrap_or(distance);
// The response received from the peer is closer than the furthest response.
if distance < furthest_distance {
self.responses.insert(distance, peer);
// Remove the furthest entry.
if self.responses.len() > self.config.replication_factor {
self.responses.pop_last();
}
}
}
let to_query_candidate = peers.into_iter().filter_map(|peer| {
// Peer already produced a response.
if self.queried.contains(&peer.peer) {
return None;
}
// Peer was queried, awaiting response.
if self.pending.contains_key(&peer.peer) {
return None;
}
// Local node.
if self.config.local_peer_id == peer.peer {
return None;
}
Some(peer)
});
for candidate in to_query_candidate {
let distance = self.config.target.distance(&candidate.key);
self.candidates.insert(distance, candidate);
}
}
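The replacement logic above maintains a bounded window of the `replication_factor` closest responses. A minimal standalone sketch of that bounded insert, with hypothetical names and distances reduced to plain `u64` keys:

```rust
use std::collections::BTreeMap;

// Insert `peer` at `distance` into a map bounded to the `k` closest entries,
// mirroring the response window kept in `FindNodeContext::register_response`.
fn insert_bounded(best: &mut BTreeMap<u64, &'static str>, k: usize, distance: u64, peer: &'static str) {
    if best.len() < k {
        best.insert(distance, peer);
    } else if best.keys().next_back().map_or(false, |furthest| distance < *furthest) {
        // Closer than the furthest kept response: insert, then evict the furthest.
        best.insert(distance, peer);
        if best.len() > k {
            best.pop_last();
        }
    }
}
```

Responses further away than the current worst entry are simply dropped, so the map never grows past `k`.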
/// Register a failure of sending `FIND_NODE` request to `peer`.
pub fn register_send_failure(&mut self, _peer: PeerId) {
// In case of a send failure, `register_response_failure` is called as well.
// Failure is handled there.
}
/// Register a success of sending `FIND_NODE` request to `peer`.
pub fn register_send_success(&mut self, _peer: PeerId) {
// `FIND_NODE` requests are compound request-response pairs of messages,
// so we handle final success/failure in `register_response`/`register_response_failure`.
}
/// Get next action for `peer`.
pub fn next_peer_action(&mut self, peer: &PeerId) -> Option<QueryAction> {
self.pending.contains_key(peer).then_some(QueryAction::SendMessage {
query: self.config.query,
peer: *peer,
message: self.kad_message.clone(),
})
}
/// Schedule next peer for outbound `FIND_NODE` query.
fn schedule_next_peer(&mut self) -> Option<QueryAction> {
tracing::trace!(target: LOG_TARGET, query = ?self.config.query, "get next peer");
let (_, candidate) = self.candidates.pop_first()?;
let peer = candidate.peer;
tracing::trace!(target: LOG_TARGET, query = ?self.config.query, ?peer, "current candidate");
self.pending.insert(candidate.peer, (candidate, std::time::Instant::now()));
self.pending_responses = self.pending_responses.saturating_add(1);
Some(QueryAction::SendMessage {
query: self.config.query,
peer,
message: self.kad_message.clone(),
})
}
/// Check if the query cannot make any progress.
///
/// Returns true when there are no pending responses and no candidates to query.
fn is_done(&self) -> bool {
self.pending.is_empty() && self.candidates.is_empty()
}
/// Get next action for a `FIND_NODE` query.
pub fn next_action(&mut self) -> Option<QueryAction> {
// If we cannot make progress, return the final result.
// A query fails when we are unable to identify even a single peer.
if self.is_done() {
tracing::trace!(
target: LOG_TARGET,
query = ?self.config.query,
pending = self.pending.len(),
candidates = self.candidates.len(),
"query finished"
);
return if self.responses.is_empty() {
Some(QueryAction::QueryFailed {
query: self.config.query,
})
} else {
Some(QueryAction::QuerySucceeded {
query: self.config.query,
})
};
}
for (peer, instant) in self.pending.values() {
if instant.elapsed() > self.peer_timeout {
tracing::trace!(
target: LOG_TARGET,
query = ?self.config.query,
?peer,
elapsed = ?instant.elapsed(),
"peer no longer counting towards parallelism factor"
);
self.pending_responses = self.pending_responses.saturating_sub(1);
}
}
// At this point, we either have pending responses or candidates to query; and we need more
// results. Ensure we do not exceed the parallelism factor.
if self.pending_responses == self.config.parallelism_factor {
return None;
}
// Schedule the next peer to fill up the responses.
if self.responses.len() < self.config.replication_factor {
return self.schedule_next_peer();
}
// We can finish the query here, but check if there is a better candidate for the query.
match (
self.candidates.first_key_value(),
self.responses.last_key_value(),
) {
(Some((_, candidate_peer)), Some((worst_response_distance, _))) => {
let first_candidate_distance = self.config.target.distance(&candidate_peer.key);
if first_candidate_distance < *worst_response_distance {
return self.schedule_next_peer();
}
}
_ => (),
}
// We have found enough responses and there are no better candidates to query.
Some(QueryAction::QuerySucceeded {
query: self.config.query,
})
}
}
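The timeout handling in `next_action` means a pending request only counts toward the parallelism factor while it is younger than `peer_timeout`. A standalone sketch of that accounting (hypothetical helper, operating on already-measured elapsed durations rather than `Instant`s):

```rust
use std::time::Duration;

// Number of in-flight requests that still count toward the parallelism
// factor: requests older than `peer_timeout` are ignored, mirroring the
// timeout loop in `FindNodeContext::next_action`.
fn effective_in_flight(elapsed: &[Duration], peer_timeout: Duration) -> usize {
    elapsed.iter().filter(|e| **e <= peer_timeout).count()
}
```

With a 10-second timeout and one request already 20 seconds old, only the fresh requests block scheduling of new peers, so the query keeps making progress past slow peers.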
#[cfg(test)]
mod tests {
use super::*;
use crate::protocol::libp2p::kademlia::types::ConnectionType;
fn default_config() -> FindNodeConfig<Vec<u8>> {
FindNodeConfig {
local_peer_id: PeerId::random(),
replication_factor: 20,
parallelism_factor: 10,
query: QueryId(0),
target: Key::new(vec![1, 2, 3]),
}
}
fn peer_to_kad(peer: PeerId) -> KademliaPeer {
KademliaPeer {
peer,
key: Key::from(peer),
address_store: Default::default(),
connection: ConnectionType::Connected,
}
}
fn setup_closest_responses() -> (PeerId, PeerId, FindNodeConfig<PeerId>) {
let peer_a = PeerId::random();
let peer_b = PeerId::random();
let target = PeerId::random();
let distance_a = Key::from(peer_a).distance(&Key::from(target));
let distance_b = Key::from(peer_b).distance(&Key::from(target));
let (closest, furthest) = if distance_a < distance_b {
(peer_a, peer_b)
} else {
(peer_b, peer_a)
};
let config = FindNodeConfig {
parallelism_factor: 1,
replication_factor: 1,
target: Key::from(target),
local_peer_id: PeerId::random(),
query: QueryId(0),
};
(closest, furthest, config)
}
#[test]
fn completes_when_no_candidates() {
let config = default_config();
let mut context = FindNodeContext::new(config, VecDeque::new());
assert!(context.is_done());
let event = context.next_action().unwrap();
match event {
QueryAction::QueryFailed { query, .. } => {
assert_eq!(query, QueryId(0));
}
_ => panic!("Unexpected event"),
};
}
#[test]
fn fulfill_parallelism() {
let config = FindNodeConfig {
parallelism_factor: 3,
..default_config()
};
let in_peers_set = (0..3).map(|_| PeerId::random()).collect::<HashSet<_>>();
let in_peers = in_peers_set.iter().map(|peer| peer_to_kad(*peer)).collect();
let mut context = FindNodeContext::new(config, in_peers);
for num in 0..3 {
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), num + 1);
assert!(context.pending.contains_key(&peer));
// Check the peer is the one provided.
assert!(in_peers_set.contains(&peer));
}
_ => panic!("Unexpected event"),
}
}
// Fulfilled parallelism.
assert!(context.next_action().is_none());
}
#[test]
fn fulfill_parallelism_with_timeout_optimization() {
let config = FindNodeConfig {
parallelism_factor: 3,
..default_config()
};
let in_peers_set = (0..4).map(|_| PeerId::random()).collect::<HashSet<_>>();
let in_peers = in_peers_set.iter().map(|peer| peer_to_kad(*peer)).collect();
let mut context = FindNodeContext::new(config, in_peers);
// Test overwrite.
context.peer_timeout = std::time::Duration::from_secs(1);
for num in 0..3 {
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), num + 1);
assert!(context.pending.contains_key(&peer));
// Check the peer is the one provided.
assert!(in_peers_set.contains(&peer));
}
_ => panic!("Unexpected event"),
}
}
// Fulfilled parallelism.
assert!(context.next_action().is_none());
// Sleep more than 1 second.
std::thread::sleep(std::time::Duration::from_secs(2));
// The pending responses are reset only on the next query action.
assert_eq!(context.pending_responses, 3);
assert_eq!(context.pending.len(), 3);
// This allows other peers to be queried.
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), 4);
assert!(context.pending.contains_key(&peer));
// Check the peer is the one provided.
assert!(in_peers_set.contains(&peer));
}
_ => panic!("Unexpected event"),
}
assert_eq!(context.pending_responses, 1);
assert_eq!(context.pending.len(), 4);
}
#[test]
fn completes_when_responses() {
let config = FindNodeConfig {
parallelism_factor: 3,
replication_factor: 3,
..default_config()
};
let peer_a = PeerId::random();
let peer_b = PeerId::random();
let peer_c = PeerId::random();
let in_peers_set: HashSet<_> = [peer_a, peer_b, peer_c].into_iter().collect();
assert_eq!(in_peers_set.len(), 3);
let in_peers = [peer_a, peer_b, peer_c].iter().map(|peer| peer_to_kad(*peer)).collect();
let mut context = FindNodeContext::new(config, in_peers);
// Schedule peer queries.
for num in 0..3 {
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), num + 1);
assert!(context.pending.contains_key(&peer));
// Check the peer is the one provided.
assert!(in_peers_set.contains(&peer));
}
_ => panic!("Unexpected event"),
}
}
// Checks a failed query that was not initiated.
let peer_d = PeerId::random();
context.register_response_failure(peer_d);
assert_eq!(context.pending.len(), 3);
assert!(context.queried.is_empty());
// Provide responses back.
context.register_response(peer_a, vec![]);
assert_eq!(context.pending.len(), 2);
assert_eq!(context.queried.len(), 1);
assert_eq!(context.responses.len(), 1);
// Provide different response from peer b with peer d as candidate.
context.register_response(peer_b, vec![peer_to_kad(peer_d)]);
assert_eq!(context.pending.len(), 1);
assert_eq!(context.queried.len(), 2);
assert_eq!(context.responses.len(), 2);
assert_eq!(context.candidates.len(), 1);
// Peer C fails.
context.register_response_failure(peer_c);
assert!(context.pending.is_empty());
assert_eq!(context.queried.len(), 3);
assert_eq!(context.responses.len(), 2);
// Drain the last candidate.
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), 1);
assert_eq!(peer, peer_d);
}
_ => panic!("Unexpected event"),
}
// Peer D responds.
context.register_response(peer_d, vec![]);
// Produces the result.
let event = context.next_action().unwrap();
match event {
QueryAction::QuerySucceeded { query, .. } => {
assert_eq!(query, QueryId(0));
}
_ => panic!("Unexpected event"),
};
}
#[test]
fn offers_closest_responses() {
let (closest, furthest, config) = setup_closest_responses();
// Scenario where we should return with the number of responses.
let in_peers = vec![peer_to_kad(furthest), peer_to_kad(closest)];
let mut context = FindNodeContext::new(config.clone(), in_peers.into_iter().collect());
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), 1);
assert!(context.pending.contains_key(&peer));
// The closest should be queried first regardless of the input order.
assert_eq!(closest, peer);
}
_ => panic!("Unexpected event"),
}
context.register_response(closest, vec![]);
let event = context.next_action().unwrap();
match event {
QueryAction::QuerySucceeded { query } => {
assert_eq!(query, QueryId(0));
}
_ => panic!("Unexpected event"),
};
}
#[test]
fn offers_closest_responses_with_better_candidates() {
let (closest, furthest, config) = setup_closest_responses();
// Scenario where the query is fulfilled however it continues because
// there is a closer peer to query.
let in_peers = vec![peer_to_kad(furthest)];
let mut context = FindNodeContext::new(config, in_peers.into_iter().collect());
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), 1);
assert!(context.pending.contains_key(&peer));
// Furthest is the only peer available.
assert_eq!(furthest, peer);
}
_ => panic!("Unexpected event"),
}
// Furthest node produces a response with the closest node.
// Even if we reach a total of 1 (replication factor) replies, we should continue.
context.register_response(furthest, vec![peer_to_kad(closest)]);
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), 1);
assert!(context.pending.contains_key(&peer));
// Furthest provided another peer that is closer.
assert_eq!(closest, peer);
}
_ => panic!("Unexpected event"),
}
// Even if we have the total number of responses, we have at least one
// inflight query which might be closer to the target.
assert!(context.next_action().is_none());
// Query finishes when receiving the response back.
context.register_response(closest, vec![]);
let event = context.next_action().unwrap();
match event {
QueryAction::QuerySucceeded { query, .. } => {
assert_eq!(query, QueryId(0));
}
_ => panic!("Unexpected event"),
};
}
#[test]
fn keep_k_best_results() {
let mut peers = (0..6).map(|_| PeerId::random()).collect::<Vec<_>>();
let target = Key::from(PeerId::random());
// Sort the peers by their distance to the target in descending order.
peers.sort_by_key(|peer| std::cmp::Reverse(target.distance(&Key::from(*peer))));
let config = FindNodeConfig {
parallelism_factor: 3,
replication_factor: 3,
target,
local_peer_id: PeerId::random(),
query: QueryId(0),
};
let in_peers = vec![peers[0], peers[1], peers[2]]
.iter()
.map(|peer| peer_to_kad(*peer))
.collect();
let mut context = FindNodeContext::new(config, in_peers);
// Schedule peer queries.
for num in 0..3 {
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), num + 1);
assert!(context.pending.contains_key(&peer));
}
_ => panic!("Unexpected event"),
}
}
// Each peer responds with a better (closer) peer.
context.register_response(peers[0], vec![peer_to_kad(peers[3])]);
context.register_response(peers[1], vec![peer_to_kad(peers[4])]);
context.register_response(peers[2], vec![peer_to_kad(peers[5])]);
// Must schedule better peers.
for num in 0..3 {
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), num + 1);
assert!(context.pending.contains_key(&peer));
}
_ => panic!("Unexpected event"),
}
}
context.register_response(peers[3], vec![]);
context.register_response(peers[4], vec![]);
context.register_response(peers[5], vec![]);
// Produces the result.
let event = context.next_action().unwrap();
match event {
QueryAction::QuerySucceeded { query } => {
assert_eq!(query, QueryId(0));
}
_ => panic!("Unexpected event"),
};
// Because the FindNode query keeps a window of the best K (3 in this case) peers,
// we expect to produce the best K peers. As opposed to having only the last entry
// updated, which would have produced [peer[0], peer[1], peer[5]].
// Check the responses.
let responses = context.responses.values().map(|peer| peer.peer).collect::<Vec<_>>();
// Note: peers are returned in order closest to the target, our `peers` input is sorted in
// decreasing order.
assert_eq!(responses, [peers[5], peers[4], peers[3]]);
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/protocol/libp2p/kademlia/query/put_record.rs | src/protocol/libp2p/kademlia/query/put_record.rs | // Copyright 2025 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
protocol::libp2p::kademlia::{handle::Quorum, query::QueryAction, QueryId, RecordKey},
PeerId,
};
use std::{cmp, collections::HashSet};
/// Logging target for this file.
const LOG_TARGET: &str = "litep2p::ipfs::kademlia::query::put_record";
/// Context for tracking `PUT_VALUE` responses from peers.
#[derive(Debug)]
pub struct PutRecordToFoundNodesContext {
/// Query ID.
pub query: QueryId,
/// Record key.
pub key: RecordKey,
/// Quorum that needs to be reached for the query to succeed.
peers_to_succeed: usize,
/// Peers we're waiting for responses from.
pending_peers: HashSet<PeerId>,
/// Number of successfully responded peers.
n_succeeded: usize,
}
impl PutRecordToFoundNodesContext {
/// Create new [`PutRecordToFoundNodesContext`].
pub fn new(query: QueryId, key: RecordKey, peers: Vec<PeerId>, quorum: Quorum) -> Self {
Self {
query,
key,
peers_to_succeed: match quorum {
Quorum::One => 1,
// Clamp by the number of discovered peers. This should only be relevant on
// small networks with fewer peers than the replication factor. Without such
// clamping the query would always fail in small testnets.
Quorum::N(n) => cmp::min(n.get(), cmp::max(peers.len(), 1)),
Quorum::All => cmp::max(peers.len(), 1),
},
pending_peers: peers.into_iter().collect(),
n_succeeded: 0,
}
}
/// Register successful response from peer.
pub fn register_response(&mut self, peer: PeerId) {
if self.pending_peers.remove(&peer) {
self.n_succeeded += 1;
tracing::trace!(
target: LOG_TARGET,
query = ?self.query,
?peer,
"successful `PUT_VALUE` to peer",
);
} else {
tracing::debug!(
target: LOG_TARGET,
query = ?self.query,
?peer,
"`PutRecordToFoundNodesContext::register_response`: pending peer does not exist",
);
}
}
/// Register failed response from peer.
pub fn register_response_failure(&mut self, peer: PeerId) {
if self.pending_peers.remove(&peer) {
tracing::trace!(
target: LOG_TARGET,
query = ?self.query,
?peer,
"failed `PUT_VALUE` to peer",
);
} else {
tracing::debug!(
target: LOG_TARGET,
query = ?self.query,
?peer,
"`PutRecordToFoundNodesContext::register_response_failure`: pending peer does not exist",
);
}
}
/// Check if all responses have been received.
pub fn is_finished(&self) -> bool {
self.pending_peers.is_empty()
}
/// Check if enough requests succeeded to reach the quorum.
pub fn is_succeded(&self) -> bool {
self.n_succeeded >= self.peers_to_succeed
}
/// Get next action if the context is finished.
pub fn next_action(&self) -> Option<QueryAction> {
if self.is_finished() {
if self.is_succeded() {
Some(QueryAction::QuerySucceeded { query: self.query })
} else {
Some(QueryAction::QueryFailed { query: self.query })
}
} else {
None
}
}
}
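The terminal logic in `is_finished`, `is_succeded`, and `next_action` reduces to a single decision: no result while peers are still pending, then success iff the clamped quorum was reached. A hypothetical standalone sketch of that decision:

```rust
// `None` while responses are still pending; otherwise `Some(success)`,
// mirroring `PutRecordToFoundNodesContext::next_action`.
fn query_outcome(pending: usize, n_succeeded: usize, peers_to_succeed: usize) -> Option<bool> {
    (pending == 0).then(|| n_succeeded >= peers_to_succeed)
}
```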
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/protocol/libp2p/kademlia/query/get_providers.rs | src/protocol/libp2p/kademlia/query/get_providers.rs | // Copyright 2024 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use bytes::Bytes;
use crate::{
protocol::libp2p::kademlia::{
message::KademliaMessage,
query::{QueryAction, QueryId},
record::{ContentProvider, Key as RecordKey},
types::{Distance, KademliaPeer, Key},
},
types::multiaddr::Multiaddr,
PeerId,
};
use std::collections::{BTreeMap, HashMap, HashSet, VecDeque};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::ipfs::kademlia::query::get_providers";
/// The configuration needed to instantiate a new [`GetProvidersContext`].
#[derive(Debug)]
pub struct GetProvidersConfig {
/// Local peer ID.
pub local_peer_id: PeerId,
/// Parallelism factor.
pub parallelism_factor: usize,
/// Query ID.
pub query: QueryId,
/// Target key.
pub target: Key<RecordKey>,
/// Known providers from the local store.
pub known_providers: Vec<KademliaPeer>,
}
#[derive(Debug)]
pub struct GetProvidersContext {
/// Query immutable config.
pub config: GetProvidersConfig,
/// Cached Kademlia message to send.
kad_message: Bytes,
/// Peers from whom the `QueryEngine` is waiting to hear a response.
pub pending: HashMap<PeerId, KademliaPeer>,
/// Queried candidates.
///
/// These are the peers to whom the query has already been sent
/// and who have either returned their closest peers or failed to answer.
pub queried: HashSet<PeerId>,
/// Candidates.
pub candidates: BTreeMap<Distance, KademliaPeer>,
/// Found providers.
pub found_providers: Vec<KademliaPeer>,
}
impl GetProvidersContext {
/// Create new [`GetProvidersContext`].
pub fn new(config: GetProvidersConfig, candidate_peers: VecDeque<KademliaPeer>) -> Self {
let mut candidates = BTreeMap::new();
for peer in &candidate_peers {
let distance = config.target.distance(&peer.key);
candidates.insert(distance, peer.clone());
}
let kad_message =
KademliaMessage::get_providers_request(config.target.clone().into_preimage());
Self {
config,
kad_message,
candidates,
pending: HashMap::new(),
queried: HashSet::new(),
found_providers: Vec::new(),
}
}
/// Get the found providers.
pub fn found_providers(self) -> Vec<ContentProvider> {
Self::merge_and_sort_providers(
self.config.known_providers.into_iter().chain(self.found_providers),
self.config.target,
)
}
fn merge_and_sort_providers(
found_providers: impl IntoIterator<Item = KademliaPeer>,
target: Key<RecordKey>,
) -> Vec<ContentProvider> {
// Merge addresses of different provider records of the same peer.
let mut providers = HashMap::<PeerId, HashSet<Multiaddr>>::new();
found_providers.into_iter().for_each(|provider| {
providers.entry(provider.peer).or_default().extend(provider.addresses())
});
// Convert into `Vec<ContentProvider>`.
let mut providers = providers
.into_iter()
.map(|(peer, addresses)| ContentProvider {
peer,
addresses: addresses.into_iter().collect(),
})
.collect::<Vec<_>>();
// Sort by the provider distance to the target key.
providers.sort_unstable_by(|p1, p2| {
Key::from(p1.peer).distance(&target).cmp(&Key::from(p2.peer).distance(&target))
});
providers
}
/// Register response failure for `peer`.
pub fn register_response_failure(&mut self, peer: PeerId) {
let Some(peer) = self.pending.remove(&peer) else {
tracing::debug!(
target: LOG_TARGET,
query = ?self.config.query,
?peer,
"`GetProvidersContext`: pending peer doesn't exist",
);
return;
};
self.queried.insert(peer.peer);
}
/// Register `GET_PROVIDERS` response from `peer`.
pub fn register_response(
&mut self,
peer: PeerId,
providers: impl IntoIterator<Item = KademliaPeer>,
closer_peers: impl IntoIterator<Item = KademliaPeer>,
) {
tracing::trace!(
target: LOG_TARGET,
query = ?self.config.query,
?peer,
"`GetProvidersContext`: received response from peer",
);
let Some(peer) = self.pending.remove(&peer) else {
tracing::debug!(
target: LOG_TARGET,
query = ?self.config.query,
?peer,
"`GetProvidersContext`: received response from peer but didn't expect it",
);
return;
};
self.found_providers.extend(providers);
// Add the queried peer to `queried` and all new peers which haven't been
// queried to `candidates`
self.queried.insert(peer.peer);
let to_query_candidate = closer_peers.into_iter().filter_map(|peer| {
// Peer already produced a response.
if self.queried.contains(&peer.peer) {
return None;
}
// Peer was queried, awaiting response.
if self.pending.contains_key(&peer.peer) {
return None;
}
// Local node.
if self.config.local_peer_id == peer.peer {
return None;
}
Some(peer)
});
for candidate in to_query_candidate {
let distance = self.config.target.distance(&candidate.key);
self.candidates.insert(distance, candidate);
}
}
/// Register a failure of sending a `GET_PROVIDERS` request to `peer`.
pub fn register_send_failure(&mut self, _peer: PeerId) {
// In case of a send failure, `register_response_failure` is called as well.
// Failure is handled there.
}
/// Register a success of sending a `GET_PROVIDERS` request to `peer`.
pub fn register_send_success(&mut self, _peer: PeerId) {
// `GET_PROVIDERS` requests are compound request-response pairs of messages,
// so we handle final success/failure in `register_response`/`register_response_failure`.
}
/// Get next action for `peer`.
// TODO: https://github.com/paritytech/litep2p/issues/40 remove this and store the next action to `PeerAction`
pub fn next_peer_action(&mut self, peer: &PeerId) -> Option<QueryAction> {
self.pending.contains_key(peer).then_some(QueryAction::SendMessage {
query: self.config.query,
peer: *peer,
message: self.kad_message.clone(),
})
}
    /// Schedule the next peer for an outbound `GET_PROVIDERS` query.
fn schedule_next_peer(&mut self) -> Option<QueryAction> {
tracing::trace!(
target: LOG_TARGET,
query = ?self.config.query,
"`GetProvidersContext`: get next peer",
);
let (_, candidate) = self.candidates.pop_first()?;
let peer = candidate.peer;
tracing::trace!(
target: LOG_TARGET,
query = ?self.config.query,
?peer,
"`GetProvidersContext`: current candidate",
);
self.pending.insert(candidate.peer, candidate);
Some(QueryAction::SendMessage {
query: self.config.query,
peer,
message: self.kad_message.clone(),
})
}
/// Check if the query cannot make any progress.
///
/// Returns true when there are no pending responses and no candidates to query.
fn is_done(&self) -> bool {
self.pending.is_empty() && self.candidates.is_empty()
}
/// Get next action for a `GET_PROVIDERS` query.
pub fn next_action(&mut self) -> Option<QueryAction> {
if self.is_done() {
// If we cannot make progress, return the final result.
            // The query has failed if we were not able to find any providers.
if self.found_providers.is_empty() {
Some(QueryAction::QueryFailed {
query: self.config.query,
})
} else {
Some(QueryAction::QuerySucceeded {
query: self.config.query,
})
}
} else if self.pending.len() == self.config.parallelism_factor {
            // At this point, we either have pending responses or candidates to query, and we
            // need more providers. Ensure we do not exceed the parallelism factor.
None
} else {
self.schedule_next_peer()
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::protocol::libp2p::kademlia::types::ConnectionType;
use multiaddr::multiaddr;
fn default_config() -> GetProvidersConfig {
GetProvidersConfig {
local_peer_id: PeerId::random(),
parallelism_factor: 3,
query: QueryId(0),
target: Key::new(vec![1, 2, 3].into()),
known_providers: vec![],
}
}
fn peer_to_kad(peer: PeerId) -> KademliaPeer {
KademliaPeer {
peer,
key: Key::from(peer),
address_store: Default::default(),
connection: ConnectionType::NotConnected,
}
}
fn peer_to_kad_with_addresses(peer: PeerId, addresses: Vec<Multiaddr>) -> KademliaPeer {
KademliaPeer::new(peer, addresses, ConnectionType::NotConnected)
}
#[test]
fn completes_when_no_candidates() {
let config = default_config();
let mut context = GetProvidersContext::new(config, VecDeque::new());
assert!(context.is_done());
let event = context.next_action().unwrap();
match event {
QueryAction::QueryFailed { query, .. } => {
assert_eq!(query, QueryId(0));
}
_ => panic!("Unexpected event"),
}
}
#[test]
fn fulfill_parallelism() {
let config = GetProvidersConfig {
parallelism_factor: 3,
..default_config()
};
let candidate_peer_set: HashSet<_> =
[PeerId::random(), PeerId::random(), PeerId::random()].into_iter().collect();
assert_eq!(candidate_peer_set.len(), 3);
let candidate_peers = candidate_peer_set.iter().map(|peer| peer_to_kad(*peer)).collect();
let mut context = GetProvidersContext::new(config, candidate_peers);
for num in 0..3 {
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), num + 1);
assert!(context.pending.contains_key(&peer));
// Check the peer is the one provided.
assert!(candidate_peer_set.contains(&peer));
}
_ => panic!("Unexpected event"),
}
}
// Fulfilled parallelism.
assert!(context.next_action().is_none());
}
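    // A sketch test (reusing the helpers above) showing that `next_peer_action`
    // produces a retry message only for peers that are currently pending.
    #[test]
    fn next_peer_action_only_for_pending_peers() {
        let peer = PeerId::random();
        let mut context =
            GetProvidersContext::new(default_config(), vec![peer_to_kad(peer)].into());

        // Not yet scheduled: the peer is not pending, so no action is produced.
        assert!(context.next_peer_action(&peer).is_none());

        // Scheduling the candidate makes it pending, so an action is produced.
        let _ = context.next_action();
        assert!(matches!(
            context.next_peer_action(&peer),
            Some(QueryAction::SendMessage { .. })
        ));
    }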
#[test]
fn completes_when_responses() {
let config = GetProvidersConfig {
parallelism_factor: 3,
..default_config()
};
let peer_a = PeerId::random();
let peer_b = PeerId::random();
let peer_c = PeerId::random();
let candidate_peer_set: HashSet<_> = [peer_a, peer_b, peer_c].into_iter().collect();
assert_eq!(candidate_peer_set.len(), 3);
let candidate_peers =
[peer_a, peer_b, peer_c].iter().map(|peer| peer_to_kad(*peer)).collect();
let mut context = GetProvidersContext::new(config, candidate_peers);
let [provider1, provider2, provider3, provider4] = (0..4)
.map(|_| ContentProvider {
peer: PeerId::random(),
addresses: vec![],
})
.collect::<Vec<_>>()
.try_into()
.unwrap();
// Schedule peer queries.
for num in 0..3 {
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), num + 1);
assert!(context.pending.contains_key(&peer));
// Check the peer is the one provided.
assert!(candidate_peer_set.contains(&peer));
}
_ => panic!("Unexpected event"),
}
}
// Checks a failed query that was not initiated.
let peer_d = PeerId::random();
context.register_response_failure(peer_d);
assert_eq!(context.pending.len(), 3);
assert!(context.queried.is_empty());
// Provide responses back.
let providers = vec![provider1.clone().into(), provider2.clone().into()];
context.register_response(peer_a, providers, vec![]);
assert_eq!(context.pending.len(), 2);
assert_eq!(context.queried.len(), 1);
assert_eq!(context.found_providers.len(), 2);
// Provide different response from peer b with peer d as candidate.
let providers = vec![provider2.clone().into(), provider3.clone().into()];
let candidates = vec![peer_to_kad(peer_d)];
context.register_response(peer_b, providers, candidates);
assert_eq!(context.pending.len(), 1);
assert_eq!(context.queried.len(), 2);
assert_eq!(context.found_providers.len(), 4);
assert_eq!(context.candidates.len(), 1);
// Peer C fails.
context.register_response_failure(peer_c);
assert!(context.pending.is_empty());
assert_eq!(context.queried.len(), 3);
assert_eq!(context.found_providers.len(), 4);
// Drain the last candidate.
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), 1);
assert_eq!(peer, peer_d);
}
_ => panic!("Unexpected event"),
}
// Peer D responds.
let providers = vec![provider4.clone().into()];
context.register_response(peer_d, providers, vec![]);
// Produces the result.
let event = context.next_action().unwrap();
match event {
QueryAction::QuerySucceeded { query, .. } => {
assert_eq!(query, QueryId(0));
}
_ => panic!("Unexpected event"),
}
// Check results.
let found_providers = context.found_providers();
assert_eq!(found_providers.len(), 4);
assert!(found_providers.contains(&provider1));
assert!(found_providers.contains(&provider2));
assert!(found_providers.contains(&provider3));
assert!(found_providers.contains(&provider4));
}
#[test]
fn providers_sorted_by_distance() {
let target = Key::new(vec![1, 2, 3].into());
let mut peers = (0..10).map(|_| PeerId::random()).collect::<Vec<_>>();
let providers = peers.iter().map(|peer| peer_to_kad(*peer)).collect::<Vec<_>>();
let found_providers =
GetProvidersContext::merge_and_sort_providers(providers, target.clone());
peers.sort_by(|p1, p2| {
Key::from(*p1).distance(&target).cmp(&Key::from(*p2).distance(&target))
});
assert!(
std::iter::zip(found_providers.into_iter(), peers.into_iter())
.all(|(provider, peer)| provider.peer == peer)
);
}
#[test]
fn provider_addresses_merged() {
let peer = PeerId::random();
let address1 = multiaddr!(Ip4([127, 0, 0, 1]), Tcp(10000u16));
let address2 = multiaddr!(Ip4([192, 168, 0, 1]), Tcp(10000u16));
let address3 = multiaddr!(Ip4([10, 0, 0, 1]), Tcp(10000u16));
let address4 = multiaddr!(Ip4([1, 1, 1, 1]), Tcp(10000u16));
let address5 = multiaddr!(Ip4([8, 8, 8, 8]), Tcp(10000u16));
let provider1 = peer_to_kad_with_addresses(peer, vec![address1.clone()]);
let provider2 = peer_to_kad_with_addresses(
peer,
vec![address2.clone(), address3.clone(), address4.clone()],
);
let provider3 = peer_to_kad_with_addresses(peer, vec![address4.clone(), address5.clone()]);
let providers = vec![provider1, provider2, provider3];
let found_providers = GetProvidersContext::merge_and_sort_providers(
providers,
Key::new(vec![1, 2, 3].into()),
);
assert_eq!(found_providers.len(), 1);
let addresses = &found_providers.first().unwrap().addresses;
assert_eq!(addresses.len(), 5);
assert!(addresses.contains(&address1));
assert!(addresses.contains(&address2));
assert!(addresses.contains(&address3));
assert!(addresses.contains(&address4));
assert!(addresses.contains(&address5));
}
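    // A sketch test (reusing the helpers above) showing that a response failure
    // moves a pending peer into the `queried` set.
    #[test]
    fn response_failure_marks_peer_queried() {
        let peer = PeerId::random();
        let mut context =
            GetProvidersContext::new(default_config(), vec![peer_to_kad(peer)].into());

        // Schedule the only candidate; it becomes pending.
        assert!(matches!(
            context.next_action(),
            Some(QueryAction::SendMessage { .. })
        ));
        assert!(context.pending.contains_key(&peer));

        // A failed response moves the peer from `pending` to `queried`.
        context.register_response_failure(peer);
        assert!(context.pending.is_empty());
        assert!(context.queried.contains(&peer));
    }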
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/protocol/libp2p/kademlia/query/mod.rs | src/protocol/libp2p/kademlia/query/mod.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
protocol::libp2p::kademlia::{
message::KademliaMessage,
query::{
find_node::{FindNodeConfig, FindNodeContext},
get_providers::{GetProvidersConfig, GetProvidersContext},
get_record::{GetRecordConfig, GetRecordContext},
},
record::{ContentProvider, Key as RecordKey, Record},
types::{KademliaPeer, Key},
PeerRecord, Quorum,
},
PeerId,
};
use bytes::Bytes;
use std::collections::{HashMap, VecDeque};
use self::{find_many_nodes::FindManyNodesContext, target_peers::PutToTargetPeersContext};
mod find_many_nodes;
mod find_node;
mod get_providers;
mod get_record;
mod target_peers;
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::ipfs::kademlia::query";
/// Type representing a query ID.
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
#[cfg_attr(feature = "fuzz", derive(serde::Serialize, serde::Deserialize))]
pub struct QueryId(pub usize);
/// Query type.
#[derive(Debug)]
enum QueryType {
/// `FIND_NODE` query.
FindNode {
/// Context for the `FIND_NODE` query.
context: FindNodeContext<PeerId>,
},
/// `PUT_VALUE` query.
PutRecord {
/// Record that needs to be stored.
record: Record,
/// [`Quorum`] that needs to be reached for the query to succeed.
quorum: Quorum,
/// Context for the `FIND_NODE` query.
context: FindNodeContext<RecordKey>,
},
/// `PUT_VALUE` query to specified peers.
PutRecordToPeers {
/// Record that needs to be stored.
record: Record,
/// [`Quorum`] that needs to be reached for the query to succeed.
quorum: Quorum,
/// Context for finding peers.
context: FindManyNodesContext,
},
/// `PUT_VALUE` message sending phase.
PutRecordToFoundNodes {
/// Context for tracking `PUT_VALUE` responses.
context: PutToTargetPeersContext,
},
/// `GET_VALUE` query.
GetRecord {
/// Context for the `GET_VALUE` query.
context: GetRecordContext,
},
/// `ADD_PROVIDER` query.
AddProvider {
/// Provided key.
provided_key: RecordKey,
        /// Provider record that needs to be stored.
provider: ContentProvider,
/// [`Quorum`] that needs to be reached for the query to succeed.
quorum: Quorum,
/// Context for the `FIND_NODE` query.
context: FindNodeContext<RecordKey>,
},
/// `ADD_PROVIDER` message sending phase.
AddProviderToFoundNodes {
/// Context for tracking `ADD_PROVIDER` requests.
context: PutToTargetPeersContext,
},
/// `GET_PROVIDERS` query.
GetProviders {
/// Context for the `GET_PROVIDERS` query.
context: GetProvidersContext,
},
}
/// Query action.
#[derive(Debug)]
pub enum QueryAction {
/// Send message to peer.
SendMessage {
/// Query ID.
query: QueryId,
/// Peer.
peer: PeerId,
/// Message.
message: Bytes,
},
/// `FIND_NODE` query succeeded.
FindNodeQuerySucceeded {
/// ID of the query that succeeded.
query: QueryId,
/// Target peer.
target: PeerId,
/// Peers that were found.
peers: Vec<KademliaPeer>,
},
/// Store the record to nodes closest to target key.
PutRecordToFoundNodes {
/// Query ID of the original PUT_RECORD request.
query: QueryId,
/// Record to store.
record: Record,
        /// Peers to whom the `PUT_VALUE` must be sent.
peers: Vec<KademliaPeer>,
/// [`Quorum`] that needs to be reached for the query to succeed.
quorum: Quorum,
},
/// `PUT_VALUE` query succeeded.
PutRecordQuerySucceeded {
/// ID of the query that succeeded.
query: QueryId,
/// Record key of the stored record.
key: RecordKey,
},
/// Add the provider record to nodes closest to the target key.
AddProviderToFoundNodes {
/// Query ID of the original ADD_PROVIDER request.
query: QueryId,
/// Provided key.
provided_key: RecordKey,
/// Provider record.
provider: ContentProvider,
        /// Peers to whom the `ADD_PROVIDER` must be sent.
peers: Vec<KademliaPeer>,
/// [`Quorum`] that needs to be reached for the query to succeed.
quorum: Quorum,
},
/// `ADD_PROVIDER` query succeeded.
AddProviderQuerySucceeded {
/// ID of the query that succeeded.
query: QueryId,
/// Provided key.
provided_key: RecordKey,
},
/// `GET_VALUE` query succeeded.
GetRecordQueryDone {
/// Query ID.
query_id: QueryId,
},
/// `GET_VALUE` inflight query produced a result.
///
/// This event is emitted when a peer responds to the query with a record.
GetRecordPartialResult {
/// Query ID.
query_id: QueryId,
/// Found record.
record: PeerRecord,
},
/// `GET_PROVIDERS` query succeeded.
GetProvidersQueryDone {
/// Query ID.
query_id: QueryId,
/// Provided key.
provided_key: RecordKey,
/// Found providers.
providers: Vec<ContentProvider>,
},
/// Query succeeded.
QuerySucceeded {
/// ID of the query that succeeded.
query: QueryId,
},
/// Query failed.
QueryFailed {
/// ID of the query that failed.
query: QueryId,
},
}
/// Kademlia query engine.
pub struct QueryEngine {
/// Local peer ID.
local_peer_id: PeerId,
/// Replication factor.
replication_factor: usize,
/// Parallelism factor.
parallelism_factor: usize,
/// Active queries.
queries: HashMap<QueryId, QueryType>,
}
impl QueryEngine {
/// Create new [`QueryEngine`].
pub fn new(
local_peer_id: PeerId,
replication_factor: usize,
parallelism_factor: usize,
) -> Self {
Self {
local_peer_id,
replication_factor,
parallelism_factor,
queries: HashMap::new(),
}
}
/// Start `FIND_NODE` query.
pub fn start_find_node(
&mut self,
query_id: QueryId,
target: PeerId,
candidates: VecDeque<KademliaPeer>,
) -> QueryId {
tracing::debug!(
target: LOG_TARGET,
?query_id,
?target,
num_peers = ?candidates.len(),
"start `FIND_NODE` query"
);
let target = Key::from(target);
let config = FindNodeConfig {
local_peer_id: self.local_peer_id,
replication_factor: self.replication_factor,
parallelism_factor: self.parallelism_factor,
query: query_id,
target,
};
self.queries.insert(
query_id,
QueryType::FindNode {
context: FindNodeContext::new(config, candidates),
},
);
query_id
}
/// Start `PUT_VALUE` query.
pub fn start_put_record(
&mut self,
query_id: QueryId,
record: Record,
candidates: VecDeque<KademliaPeer>,
quorum: Quorum,
) -> QueryId {
tracing::debug!(
target: LOG_TARGET,
?query_id,
target = ?record.key,
num_peers = ?candidates.len(),
"start `PUT_VALUE` query"
);
let target = Key::new(record.key.clone());
let config = FindNodeConfig {
local_peer_id: self.local_peer_id,
replication_factor: self.replication_factor,
parallelism_factor: self.parallelism_factor,
query: query_id,
target,
};
self.queries.insert(
query_id,
QueryType::PutRecord {
record,
quorum,
context: FindNodeContext::new(config, candidates),
},
);
query_id
}
/// Start `PUT_VALUE` query to specified peers.
pub fn start_put_record_to_peers(
&mut self,
query_id: QueryId,
record: Record,
peers_to_report: Vec<KademliaPeer>,
quorum: Quorum,
) -> QueryId {
tracing::debug!(
target: LOG_TARGET,
?query_id,
target = ?record.key,
num_peers = ?peers_to_report.len(),
"start `PUT_VALUE` query to peers"
);
self.queries.insert(
query_id,
QueryType::PutRecordToPeers {
record,
quorum,
context: FindManyNodesContext::new(query_id, peers_to_report),
},
);
query_id
}
/// Start `GET_VALUE` query.
pub fn start_get_record(
&mut self,
query_id: QueryId,
target: RecordKey,
candidates: VecDeque<KademliaPeer>,
quorum: Quorum,
local_record: bool,
) -> QueryId {
tracing::debug!(
target: LOG_TARGET,
?query_id,
?target,
num_peers = ?candidates.len(),
"start `GET_VALUE` query"
);
let target = Key::new(target);
let config = GetRecordConfig {
local_peer_id: self.local_peer_id,
known_records: if local_record { 1 } else { 0 },
quorum,
replication_factor: self.replication_factor,
parallelism_factor: self.parallelism_factor,
query: query_id,
target,
};
self.queries.insert(
query_id,
QueryType::GetRecord {
context: GetRecordContext::new(config, candidates, local_record),
},
);
query_id
}
/// Start `ADD_PROVIDER` query.
pub fn start_add_provider(
&mut self,
query_id: QueryId,
provided_key: RecordKey,
provider: ContentProvider,
candidates: VecDeque<KademliaPeer>,
quorum: Quorum,
) -> QueryId {
tracing::debug!(
target: LOG_TARGET,
?query_id,
?provider,
num_peers = ?candidates.len(),
"start `ADD_PROVIDER` query",
);
let config = FindNodeConfig {
local_peer_id: self.local_peer_id,
replication_factor: self.replication_factor,
parallelism_factor: self.parallelism_factor,
query: query_id,
target: Key::new(provided_key.clone()),
};
self.queries.insert(
query_id,
QueryType::AddProvider {
provided_key,
provider,
quorum,
context: FindNodeContext::new(config, candidates),
},
);
query_id
}
/// Start `GET_PROVIDERS` query.
pub fn start_get_providers(
&mut self,
query_id: QueryId,
key: RecordKey,
candidates: VecDeque<KademliaPeer>,
known_providers: Vec<ContentProvider>,
) -> QueryId {
tracing::debug!(
target: LOG_TARGET,
?query_id,
?key,
num_peers = ?candidates.len(),
"start `GET_PROVIDERS` query",
);
let target = Key::new(key);
let config = GetProvidersConfig {
local_peer_id: self.local_peer_id,
parallelism_factor: self.parallelism_factor,
query: query_id,
target,
known_providers: known_providers.into_iter().map(Into::into).collect(),
};
self.queries.insert(
query_id,
QueryType::GetProviders {
context: GetProvidersContext::new(config, candidates),
},
);
query_id
}
/// Start `PUT_VALUE` requests tracking.
pub fn start_put_record_to_found_nodes_requests_tracking(
&mut self,
query_id: QueryId,
key: RecordKey,
peers: Vec<PeerId>,
quorum: Quorum,
) {
tracing::debug!(
target: LOG_TARGET,
?query_id,
num_peers = ?peers.len(),
"start `PUT_VALUE` responses tracking"
);
self.queries.insert(
query_id,
QueryType::PutRecordToFoundNodes {
context: PutToTargetPeersContext::new(query_id, key, peers, quorum),
},
);
}
/// Start `ADD_PROVIDER` requests tracking.
pub fn start_add_provider_to_found_nodes_requests_tracking(
&mut self,
query_id: QueryId,
provided_key: RecordKey,
peers: Vec<PeerId>,
quorum: Quorum,
) {
tracing::debug!(
target: LOG_TARGET,
?query_id,
num_peers = ?peers.len(),
"start `ADD_PROVIDER` progress tracking"
);
self.queries.insert(
query_id,
QueryType::AddProviderToFoundNodes {
context: PutToTargetPeersContext::new(query_id, provided_key, peers, quorum),
},
);
}
/// Register response failure from a queried peer.
pub fn register_response_failure(&mut self, query: QueryId, peer: PeerId) {
tracing::trace!(target: LOG_TARGET, ?query, ?peer, "register response failure");
match self.queries.get_mut(&query) {
None => {
tracing::trace!(target: LOG_TARGET, ?query, ?peer, "response failure for a stale query");
}
Some(QueryType::FindNode { context }) => {
context.register_response_failure(peer);
}
Some(QueryType::PutRecord { context, .. }) => {
context.register_response_failure(peer);
}
Some(QueryType::PutRecordToPeers { context, .. }) => {
context.register_response_failure(peer);
}
Some(QueryType::PutRecordToFoundNodes { context }) => {
context.register_response_failure(peer);
}
Some(QueryType::GetRecord { context }) => {
context.register_response_failure(peer);
}
Some(QueryType::AddProvider { context, .. }) => {
context.register_response_failure(peer);
}
Some(QueryType::AddProviderToFoundNodes { context }) => {
context.register_response_failure(peer);
}
Some(QueryType::GetProviders { context }) => {
context.register_response_failure(peer);
}
}
}
    /// Register a response `message` received from `peer`.
pub fn register_response(&mut self, query: QueryId, peer: PeerId, message: KademliaMessage) {
tracing::trace!(target: LOG_TARGET, ?query, ?peer, "register response");
match self.queries.get_mut(&query) {
None => {
tracing::trace!(target: LOG_TARGET, ?query, ?peer, "response for a stale query");
}
Some(QueryType::FindNode { context }) => match message {
KademliaMessage::FindNode { peers, .. } => {
context.register_response(peer, peers);
}
message => {
tracing::debug!(
target: LOG_TARGET,
?query,
?peer,
"unexpected response to `FIND_NODE`: {message}",
);
context.register_response_failure(peer);
}
},
Some(QueryType::PutRecord { context, .. }) => match message {
KademliaMessage::FindNode { peers, .. } => {
context.register_response(peer, peers);
}
message => {
tracing::debug!(
target: LOG_TARGET,
?query,
?peer,
"unexpected response to `FIND_NODE` during `PUT_VALUE` query: {message}",
);
context.register_response_failure(peer);
}
},
Some(QueryType::PutRecordToPeers { context, .. }) => match message {
KademliaMessage::FindNode { peers, .. } => {
context.register_response(peer, peers);
}
message => {
tracing::debug!(
target: LOG_TARGET,
?query,
?peer,
"unexpected response to `FIND_NODE` during `PUT_VALUE` (to peers): {message}",
);
context.register_response_failure(peer);
}
},
Some(QueryType::PutRecordToFoundNodes { context }) => match message {
KademliaMessage::PutValue { .. } => {
context.register_response(peer);
}
message => {
tracing::debug!(
target: LOG_TARGET,
?query,
?peer,
"unexpected response to `PUT_VALUE`: {message}",
);
context.register_response_failure(peer);
}
},
Some(QueryType::GetRecord { context }) => match message {
KademliaMessage::GetRecord { record, peers, .. } =>
context.register_response(peer, record, peers),
message => {
tracing::debug!(
target: LOG_TARGET,
?query,
?peer,
"unexpected response to `GET_VALUE`: {message}",
);
context.register_response_failure(peer);
}
},
Some(QueryType::AddProvider { context, .. }) => match message {
KademliaMessage::FindNode { peers, .. } => {
context.register_response(peer, peers);
}
message => {
tracing::debug!(
target: LOG_TARGET,
?query,
?peer,
"unexpected response to `FIND_NODE` during `ADD_PROVIDER` query: {message}",
);
context.register_response_failure(peer);
}
},
Some(QueryType::AddProviderToFoundNodes { context, .. }) => match message {
KademliaMessage::AddProvider { .. } => {
context.register_response(peer);
}
message => {
tracing::debug!(
target: LOG_TARGET,
?query,
?peer,
"unexpected response to `ADD_PROVIDER`: {message}",
);
context.register_response_failure(peer);
}
},
Some(QueryType::GetProviders { context }) => match message {
KademliaMessage::GetProviders {
key: _,
providers,
peers,
} => {
context.register_response(peer, providers, peers);
}
message => {
tracing::debug!(
target: LOG_TARGET,
?query,
?peer,
"unexpected response to `GET_PROVIDERS`: {message}",
);
context.register_response_failure(peer);
}
},
}
}
    /// Register a failure to send a request to `peer`.
    pub fn register_send_failure(&mut self, query: QueryId, peer: PeerId) {
tracing::trace!(target: LOG_TARGET, ?query, ?peer, "register send failure");
match self.queries.get_mut(&query) {
None => {
tracing::trace!(target: LOG_TARGET, ?query, ?peer, "send failure for a stale query");
}
Some(QueryType::FindNode { context }) => {
context.register_send_failure(peer);
}
Some(QueryType::PutRecord { context, .. }) => {
context.register_send_failure(peer);
}
Some(QueryType::PutRecordToPeers { context, .. }) => {
context.register_send_failure(peer);
}
Some(QueryType::PutRecordToFoundNodes { context }) => {
context.register_send_failure(peer);
}
Some(QueryType::GetRecord { context }) => {
context.register_send_failure(peer);
}
Some(QueryType::AddProvider { context, .. }) => {
context.register_send_failure(peer);
}
Some(QueryType::AddProviderToFoundNodes { context }) => {
context.register_send_failure(peer);
}
Some(QueryType::GetProviders { context }) => {
context.register_send_failure(peer);
}
}
}
    /// Register a successful request send to `peer`.
    pub fn register_send_success(&mut self, query: QueryId, peer: PeerId) {
tracing::trace!(target: LOG_TARGET, ?query, ?peer, "register send success");
match self.queries.get_mut(&query) {
None => {
tracing::trace!(target: LOG_TARGET, ?query, ?peer, "send success for a stale query");
}
Some(QueryType::FindNode { context }) => {
context.register_send_success(peer);
}
Some(QueryType::PutRecord { context, .. }) => {
context.register_send_success(peer);
}
Some(QueryType::PutRecordToPeers { context, .. }) => {
context.register_send_success(peer);
}
Some(QueryType::PutRecordToFoundNodes { context, .. }) => {
context.register_send_success(peer);
}
Some(QueryType::GetRecord { context }) => {
context.register_send_success(peer);
}
Some(QueryType::AddProvider { context, .. }) => {
context.register_send_success(peer);
}
Some(QueryType::AddProviderToFoundNodes { context, .. }) => {
context.register_send_success(peer);
}
Some(QueryType::GetProviders { context }) => {
context.register_send_success(peer);
}
}
}
    /// Register peer failure when it is not known whether sending or receiving failed.
/// This is called from [`super::Kademlia::disconnect_peer`].
pub fn register_peer_failure(&mut self, query: QueryId, peer: PeerId) {
tracing::trace!(target: LOG_TARGET, ?query, ?peer, "register peer failure");
// Because currently queries track either send success/failure (`PUT_VALUE`, `ADD_PROVIDER`)
// or response success/failure (`FIND_NODE`, `GET_VALUE`, `GET_PROVIDERS`),
// but not both, we can just call both here and not propagate this different type of
// failure to specific queries knowing this will result in the correct behaviour.
self.register_send_failure(query, peer);
self.register_response_failure(query, peer);
}
/// Get next action for `peer` from the [`QueryEngine`].
pub fn next_peer_action(&mut self, query: &QueryId, peer: &PeerId) -> Option<QueryAction> {
tracing::trace!(target: LOG_TARGET, ?query, ?peer, "get next peer action");
match self.queries.get_mut(query) {
None => {
                tracing::trace!(target: LOG_TARGET, ?query, ?peer, "next peer action for a stale query");
None
}
Some(QueryType::FindNode { context }) => context.next_peer_action(peer),
Some(QueryType::PutRecord { context, .. }) => context.next_peer_action(peer),
Some(QueryType::PutRecordToPeers { context, .. }) => context.next_peer_action(peer),
Some(QueryType::GetRecord { context }) => context.next_peer_action(peer),
Some(QueryType::AddProvider { context, .. }) => context.next_peer_action(peer),
Some(QueryType::GetProviders { context }) => context.next_peer_action(peer),
Some(QueryType::PutRecordToFoundNodes { .. }) => {
// All `PUT_VALUE` requests were sent when initiating this query type.
None
}
Some(QueryType::AddProviderToFoundNodes { .. }) => {
// All `ADD_PROVIDER` requests were sent when initiating this query type.
None
}
}
}
/// Handle query success by returning the queried value(s)
/// and removing the query from [`QueryEngine`].
fn on_query_succeeded(&mut self, query: QueryId) -> QueryAction {
match self.queries.remove(&query).expect("query to exist") {
QueryType::FindNode { context } => QueryAction::FindNodeQuerySucceeded {
query,
target: context.config.target.into_preimage(),
peers: context.responses.into_values().collect::<Vec<_>>(),
},
QueryType::PutRecord {
record,
quorum,
context,
} => QueryAction::PutRecordToFoundNodes {
query: context.config.query,
record,
peers: context.responses.into_values().collect::<Vec<_>>(),
quorum,
},
QueryType::PutRecordToPeers {
record,
quorum,
context,
} => QueryAction::PutRecordToFoundNodes {
query: context.query,
record,
peers: context.peers_to_report,
quorum,
},
QueryType::PutRecordToFoundNodes { context } => QueryAction::PutRecordQuerySucceeded {
query: context.query,
key: context.key,
},
QueryType::GetRecord { context } => QueryAction::GetRecordQueryDone {
query_id: context.config.query,
},
QueryType::AddProvider {
provided_key,
provider,
quorum,
context,
} => QueryAction::AddProviderToFoundNodes {
query: context.config.query,
provided_key,
provider,
peers: context.responses.into_values().collect::<Vec<_>>(),
quorum,
},
QueryType::AddProviderToFoundNodes { context } =>
QueryAction::AddProviderQuerySucceeded {
query: context.query,
provided_key: context.key,
},
QueryType::GetProviders { context } => QueryAction::GetProvidersQueryDone {
query_id: context.config.query,
provided_key: context.config.target.clone().into_preimage(),
providers: context.found_providers(),
},
}
}
/// Handle query failure by removing the query from [`QueryEngine`] and
    /// returning the appropriate [`QueryAction`] to the user.
fn on_query_failed(&mut self, query: QueryId) -> QueryAction {
let _ = self.queries.remove(&query).expect("query to exist");
QueryAction::QueryFailed { query }
}
/// Get next action from the [`QueryEngine`].
pub fn next_action(&mut self) -> Option<QueryAction> {
for (_, state) in self.queries.iter_mut() {
let action = match state {
QueryType::FindNode { context } => context.next_action(),
QueryType::PutRecord { context, .. } => context.next_action(),
QueryType::PutRecordToPeers { context, .. } => context.next_action(),
QueryType::GetRecord { context } => context.next_action(),
QueryType::AddProvider { context, .. } => context.next_action(),
QueryType::GetProviders { context } => context.next_action(),
QueryType::PutRecordToFoundNodes { context, .. } => context.next_action(),
QueryType::AddProviderToFoundNodes { context, .. } => context.next_action(),
};
match action {
Some(QueryAction::QuerySucceeded { query }) => {
return Some(self.on_query_succeeded(query));
}
Some(QueryAction::QueryFailed { query }) =>
return Some(self.on_query_failed(query)),
Some(_) => return action,
_ => continue,
}
}
None
}
}
#[cfg(test)]
mod tests {
use multihash::{Code, Multihash};
use super::*;
use crate::protocol::libp2p::kademlia::types::ConnectionType;
// Make a deterministic peer ID with a fixed two-byte prefix.
fn make_peer_id(first: u8, second: u8) -> PeerId {
let mut peer_id = vec![0u8; 32];
peer_id[0] = first;
peer_id[1] = second;
PeerId::from_bytes(
&Multihash::wrap(Code::Identity.into(), &peer_id)
.expect("The digest size is never too large")
.to_bytes(),
)
.unwrap()
}
#[test]
fn find_node_query_fails() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let mut engine = QueryEngine::new(PeerId::random(), 20usize, 3usize);
let target_peer = PeerId::random();
let _target_key = Key::from(target_peer);
let query = engine.start_find_node(
QueryId(1337),
target_peer,
vec![
KademliaPeer::new(PeerId::random(), vec![], ConnectionType::NotConnected),
KademliaPeer::new(PeerId::random(), vec![], ConnectionType::NotConnected),
KademliaPeer::new(PeerId::random(), vec![], ConnectionType::NotConnected),
KademliaPeer::new(PeerId::random(), vec![], ConnectionType::NotConnected),
]
.into(),
);
for _ in 0..4 {
if let Some(QueryAction::SendMessage { query, peer, .. }) = engine.next_action() {
engine.register_response_failure(query, peer);
}
}
if let Some(QueryAction::QueryFailed { query: failed }) = engine.next_action() {
assert_eq!(failed, query);
}
assert!(engine.next_action().is_none());
}
#[test]
fn find_node_lookup_paused() {
let mut engine = QueryEngine::new(PeerId::random(), 20usize, 3usize);
let target_peer = PeerId::random();
let _target_key = Key::from(target_peer);
let _ = engine.start_find_node(
QueryId(1338),
target_peer,
vec![
KademliaPeer::new(PeerId::random(), vec![], ConnectionType::NotConnected),
KademliaPeer::new(PeerId::random(), vec![], ConnectionType::NotConnected),
KademliaPeer::new(PeerId::random(), vec![], ConnectionType::NotConnected),
KademliaPeer::new(PeerId::random(), vec![], ConnectionType::NotConnected),
]
.into(),
);
for _ in 0..3 {
let _ = engine.next_action();
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | true |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/protocol/libp2p/kademlia/query/get_record.rs | src/protocol/libp2p/kademlia/query/get_record.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use bytes::Bytes;
use crate::{
protocol::libp2p::kademlia::{
message::KademliaMessage,
query::{QueryAction, QueryId},
record::{Key as RecordKey, PeerRecord, Record},
types::{Distance, KademliaPeer, Key},
Quorum,
},
PeerId,
};
use std::collections::{BTreeMap, HashMap, HashSet, VecDeque};
/// Logging target for the file.
const LOG_TARGET: &str = "litep2p::ipfs::kademlia::query::get_record";
/// The configuration needed to instantiate a new [`GetRecordContext`].
#[derive(Debug)]
pub struct GetRecordConfig {
/// Local peer ID.
pub local_peer_id: PeerId,
/// How many records we already know about (i.e., extracted from storage).
///
/// This is either 0 or 1, depending on whether the record was found in local storage.
pub known_records: usize,
/// Quorum for the query.
pub quorum: Quorum,
/// Replication factor.
pub replication_factor: usize,
/// Parallelism factor.
pub parallelism_factor: usize,
/// Query ID.
pub query: QueryId,
/// Target key.
pub target: Key<RecordKey>,
}
impl GetRecordConfig {
/// Checks if the found number of records meets the specified quorum.
///
/// Used to determine if the query found enough records to stop.
fn sufficient_records(&self, records: usize) -> bool {
// The total number of known records is the sum of the records we knew about before starting
// the query and the records we found along the way.
let total_known = self.known_records + records;
match self.quorum {
Quorum::All => total_known >= self.replication_factor,
Quorum::One => total_known >= 1,
Quorum::N(needed_responses) => total_known >= needed_responses.get(),
}
}
}
#[derive(Debug)]
pub struct GetRecordContext {
/// Query immutable config.
pub config: GetRecordConfig,
/// Cached Kademlia message to send.
kad_message: Bytes,
/// Peers from whom the `QueryEngine` is waiting to hear a response.
pub pending: HashMap<PeerId, KademliaPeer>,
/// Queried candidates.
///
/// These are the peers to whom the query has already been sent
/// and who have either returned their closest peers or failed to answer.
pub queried: HashSet<PeerId>,
/// Candidates.
pub candidates: BTreeMap<Distance, KademliaPeer>,
/// Number of found records.
pub found_records: usize,
/// Records to propagate as next query action.
pub records: VecDeque<PeerRecord>,
}
impl GetRecordContext {
/// Create new [`GetRecordContext`].
pub fn new(
config: GetRecordConfig,
in_peers: VecDeque<KademliaPeer>,
local_record: bool,
) -> Self {
let mut candidates = BTreeMap::new();
for candidate in &in_peers {
let distance = config.target.distance(&candidate.key);
candidates.insert(distance, candidate.clone());
}
let kad_message = KademliaMessage::get_record(config.target.clone().into_preimage());
Self {
config,
kad_message,
candidates,
pending: HashMap::new(),
queried: HashSet::new(),
found_records: if local_record { 1 } else { 0 },
records: VecDeque::new(),
}
}
/// Register response failure for `peer`.
pub fn register_response_failure(&mut self, peer: PeerId) {
let Some(peer) = self.pending.remove(&peer) else {
tracing::debug!(
target: LOG_TARGET,
query = ?self.config.query,
?peer,
"`GetRecordContext`: pending peer doesn't exist",
);
return;
};
self.queried.insert(peer.peer);
}
/// Register a `GET_VALUE` response from `peer`.
///
/// Any non-expired record in the response is queued for propagation to the
/// user, and newly discovered peers are added to the candidate set.
pub fn register_response(
&mut self,
peer: PeerId,
record: Option<Record>,
peers: Vec<KademliaPeer>,
) {
tracing::trace!(
target: LOG_TARGET,
query = ?self.config.query,
?peer,
"`GetRecordContext`: received response from peer",
);
let Some(peer) = self.pending.remove(&peer) else {
tracing::debug!(
target: LOG_TARGET,
query = ?self.config.query,
?peer,
"`GetRecordContext`: received response from peer but didn't expect it",
);
return;
};
if let Some(record) = record {
if !record.is_expired(std::time::Instant::now()) {
self.records.push_back(PeerRecord {
peer: peer.peer,
record,
});
self.found_records += 1;
}
}
// Add the queried peer to `queried`, and add all returned peers that
// haven't been queried yet to `candidates`.
self.queried.insert(peer.peer);
let to_query_candidate = peers.into_iter().filter_map(|peer| {
// Peer already produced a response.
if self.queried.contains(&peer.peer) {
return None;
}
// Peer was queried, awaiting response.
if self.pending.contains_key(&peer.peer) {
return None;
}
// Local node.
if self.config.local_peer_id == peer.peer {
return None;
}
Some(peer)
});
for candidate in to_query_candidate {
let distance = self.config.target.distance(&candidate.key);
self.candidates.insert(distance, candidate);
}
}
/// Register a failure of sending a `GET_VALUE` request to `peer`.
pub fn register_send_failure(&mut self, _peer: PeerId) {
// In case of a send failure, `register_response_failure` is called as well.
// Failure is handled there.
}
/// Register a success of sending a `GET_VALUE` request to `peer`.
pub fn register_send_success(&mut self, _peer: PeerId) {
// `GET_VALUE` requests are compound request-response pairs of messages,
// so we handle final success/failure in `register_response`/`register_response_failure`.
}
/// Get next action for `peer`.
// TODO: https://github.com/paritytech/litep2p/issues/40 remove this and store the next action to `PeerAction`
pub fn next_peer_action(&mut self, peer: &PeerId) -> Option<QueryAction> {
self.pending.contains_key(peer).then_some(QueryAction::SendMessage {
query: self.config.query,
peer: *peer,
message: self.kad_message.clone(),
})
}
/// Schedule next peer for outbound `GET_VALUE` query.
fn schedule_next_peer(&mut self) -> Option<QueryAction> {
tracing::trace!(
target: LOG_TARGET,
query = ?self.config.query,
"`GetRecordContext`: get next peer",
);
let (_, candidate) = self.candidates.pop_first()?;
let peer = candidate.peer;
tracing::trace!(
target: LOG_TARGET,
query = ?self.config.query,
?peer,
"`GetRecordContext`: current candidate",
);
self.pending.insert(candidate.peer, candidate);
Some(QueryAction::SendMessage {
query: self.config.query,
peer,
message: self.kad_message.clone(),
})
}
/// Check if the query cannot make any progress.
///
/// Returns true when there are no pending responses and no candidates to query.
fn is_done(&self) -> bool {
self.pending.is_empty() && self.candidates.is_empty()
}
/// Get next action for a `GET_VALUE` query.
pub fn next_action(&mut self) -> Option<QueryAction> {
// Drain the records first.
if let Some(record) = self.records.pop_front() {
return Some(QueryAction::GetRecordPartialResult {
query_id: self.config.query,
record,
});
}
// These are the records we knew about before starting the query and
// the records we found along the way.
let known_records = self.config.known_records + self.found_records;
// If we cannot make progress, return the final result.
// The query failed if not even a single record was found.
if self.is_done() {
return if known_records == 0 {
Some(QueryAction::QueryFailed {
query: self.config.query,
})
} else {
Some(QueryAction::QuerySucceeded {
query: self.config.query,
})
};
}
// Check if enough records have been found
let sufficient_records = self.config.sufficient_records(self.found_records);
if sufficient_records {
return Some(QueryAction::QuerySucceeded {
query: self.config.query,
});
}
// At this point, we either have pending responses or candidates to query; and we need more
// records. Ensure we do not exceed the parallelism factor.
if self.pending.len() == self.config.parallelism_factor {
return None;
}
self.schedule_next_peer()
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::protocol::libp2p::kademlia::types::ConnectionType;
fn default_config() -> GetRecordConfig {
GetRecordConfig {
local_peer_id: PeerId::random(),
quorum: Quorum::All,
known_records: 0,
replication_factor: 20,
parallelism_factor: 10,
query: QueryId(0),
target: Key::new(vec![1, 2, 3].into()),
}
}
fn peer_to_kad(peer: PeerId) -> KademliaPeer {
KademliaPeer {
peer,
key: Key::from(peer),
address_store: Default::default(),
connection: ConnectionType::Connected,
}
}
#[test]
fn config_check() {
// Quorum::All with no known records.
let config = GetRecordConfig {
quorum: Quorum::All,
known_records: 0,
replication_factor: 20,
..default_config()
};
assert!(config.sufficient_records(20));
assert!(!config.sufficient_records(19));
// Quorum::All with 1 known record.
let config = GetRecordConfig {
quorum: Quorum::All,
known_records: 1,
replication_factor: 20,
..default_config()
};
assert!(config.sufficient_records(19));
assert!(!config.sufficient_records(18));
// Quorum::One with no known records.
let config = GetRecordConfig {
quorum: Quorum::One,
known_records: 0,
..default_config()
};
assert!(config.sufficient_records(1));
assert!(!config.sufficient_records(0));
// Quorum::One with known records.
let config = GetRecordConfig {
quorum: Quorum::One,
known_records: 1,
..default_config()
};
assert!(config.sufficient_records(1));
assert!(config.sufficient_records(0));
// Quorum::N with no known records.
let config = GetRecordConfig {
quorum: Quorum::N(std::num::NonZeroUsize::new(10).expect("valid; qed")),
known_records: 0,
..default_config()
};
assert!(config.sufficient_records(10));
assert!(!config.sufficient_records(9));
// Quorum::N with known records.
let config = GetRecordConfig {
quorum: Quorum::N(std::num::NonZeroUsize::new(10).expect("valid; qed")),
known_records: 1,
..default_config()
};
assert!(config.sufficient_records(9));
assert!(!config.sufficient_records(8));
}
#[test]
fn completes_when_no_candidates() {
let config = default_config();
let mut context = GetRecordContext::new(config, VecDeque::new(), false);
assert!(context.is_done());
let event = context.next_action().unwrap();
match event {
QueryAction::QueryFailed { query } => {
assert_eq!(query, QueryId(0));
}
_ => panic!("Unexpected event"),
}
let config = GetRecordConfig {
known_records: 1,
..default_config()
};
let mut context = GetRecordContext::new(config, VecDeque::new(), false);
assert!(context.is_done());
let event = context.next_action().unwrap();
match event {
QueryAction::QuerySucceeded { query } => {
assert_eq!(query, QueryId(0));
}
_ => panic!("Unexpected event"),
}
}
#[test]
fn fulfill_parallelism() {
let config = GetRecordConfig {
parallelism_factor: 3,
..default_config()
};
let in_peers_set: HashSet<_> =
[PeerId::random(), PeerId::random(), PeerId::random()].into_iter().collect();
assert_eq!(in_peers_set.len(), 3);
let in_peers = in_peers_set.iter().map(|peer| peer_to_kad(*peer)).collect();
let mut context = GetRecordContext::new(config, in_peers, false);
for num in 0..3 {
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), num + 1);
assert!(context.pending.contains_key(&peer));
// Check the peer is the one provided.
assert!(in_peers_set.contains(&peer));
}
_ => panic!("Unexpected event"),
}
}
// Fulfilled parallelism.
assert!(context.next_action().is_none());
}
#[test]
fn completes_when_responses() {
let key = vec![1, 2, 3];
let config = GetRecordConfig {
parallelism_factor: 3,
replication_factor: 3,
..default_config()
};
let peer_a = PeerId::random();
let peer_b = PeerId::random();
let peer_c = PeerId::random();
let in_peers_set: HashSet<_> = [peer_a, peer_b, peer_c].into_iter().collect();
assert_eq!(in_peers_set.len(), 3);
let in_peers = [peer_a, peer_b, peer_c].iter().map(|peer| peer_to_kad(*peer)).collect();
let mut context = GetRecordContext::new(config, in_peers, false);
// Schedule peer queries.
for num in 0..3 {
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), num + 1);
assert!(context.pending.contains_key(&peer));
// Check the peer is the one provided.
assert!(in_peers_set.contains(&peer));
}
_ => panic!("Unexpected event"),
}
}
// A failure reported for a peer that was never queried must be ignored.
let peer_d = PeerId::random();
context.register_response_failure(peer_d);
assert_eq!(context.pending.len(), 3);
assert!(context.queried.is_empty());
let mut found_records = Vec::new();
// Provide responses back.
let record = Record::new(key.clone(), vec![1, 2, 3]);
context.register_response(peer_a, Some(record), vec![]);
// Check propagated action.
let record = context.next_action().unwrap();
match record {
QueryAction::GetRecordPartialResult { query_id, record } => {
assert_eq!(query_id, QueryId(0));
assert_eq!(record.peer, peer_a);
assert_eq!(record.record, Record::new(key.clone(), vec![1, 2, 3]));
found_records.push(record);
}
_ => panic!("Unexpected event"),
}
assert_eq!(context.pending.len(), 2);
assert_eq!(context.queried.len(), 1);
assert_eq!(context.found_records, 1);
// Provide different response from peer b with peer d as candidate.
let record = Record::new(key.clone(), vec![4, 5, 6]);
context.register_response(peer_b, Some(record), vec![peer_to_kad(peer_d)]);
// Check propagated action.
let record = context.next_action().unwrap();
match record {
QueryAction::GetRecordPartialResult { query_id, record } => {
assert_eq!(query_id, QueryId(0));
assert_eq!(record.peer, peer_b);
assert_eq!(record.record, Record::new(key.clone(), vec![4, 5, 6]));
found_records.push(record);
}
_ => panic!("Unexpected event"),
}
assert_eq!(context.pending.len(), 1);
assert_eq!(context.queried.len(), 2);
assert_eq!(context.found_records, 2);
assert_eq!(context.candidates.len(), 1);
// Peer C fails.
context.register_response_failure(peer_c);
assert!(context.pending.is_empty());
assert_eq!(context.queried.len(), 3);
assert_eq!(context.found_records, 2);
// Drain the last candidate.
let event = context.next_action().unwrap();
match event {
QueryAction::SendMessage { query, peer, .. } => {
assert_eq!(query, QueryId(0));
// Added as pending.
assert_eq!(context.pending.len(), 1);
assert_eq!(peer, peer_d);
}
_ => panic!("Unexpected event"),
}
// Peer D responds.
let record = Record::new(key.clone(), vec![4, 5, 6]);
context.register_response(peer_d, Some(record), vec![]);
// Check propagated action.
let record = context.next_action().unwrap();
match record {
QueryAction::GetRecordPartialResult { query_id, record } => {
assert_eq!(query_id, QueryId(0));
assert_eq!(record.peer, peer_d);
assert_eq!(record.record, Record::new(key.clone(), vec![4, 5, 6]));
found_records.push(record);
}
_ => panic!("Unexpected event"),
}
// Produces the result.
let event = context.next_action().unwrap();
match event {
QueryAction::QuerySucceeded { query } => {
assert_eq!(query, QueryId(0));
}
_ => panic!("Unexpected event"),
}
// Check results.
assert_eq!(
found_records,
vec![
PeerRecord {
peer: peer_a,
record: Record::new(key.clone(), vec![1, 2, 3]),
},
PeerRecord {
peer: peer_b,
record: Record::new(key.clone(), vec![4, 5, 6]),
},
PeerRecord {
peer: peer_d,
record: Record::new(key.clone(), vec![4, 5, 6]),
},
]
);
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/src/protocol/libp2p/kademlia/query/find_many_nodes.rs | src/protocol/libp2p/kademlia/query/find_many_nodes.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use crate::{
protocol::libp2p::kademlia::{
query::{QueryAction, QueryId},
types::KademliaPeer,
},
PeerId,
};
/// Context for multiple `FIND_NODE` queries.
// TODO: https://github.com/paritytech/litep2p/issues/80 implement finding nodes not present in the routing table.
#[derive(Debug)]
pub struct FindManyNodesContext {
/// Query ID.
pub query: QueryId,
/// The peers we are looking for.
pub peers_to_report: Vec<KademliaPeer>,
}
impl FindManyNodesContext {
/// Creates a new [`FindManyNodesContext`].
pub fn new(query: QueryId, peers_to_report: Vec<KademliaPeer>) -> Self {
Self {
query,
peers_to_report,
}
}
/// Register response failure for `peer`.
pub fn register_response_failure(&mut self, _peer: PeerId) {}
/// Register `FIND_NODE` response from `peer`.
pub fn register_response(&mut self, _peer: PeerId, _peers: Vec<KademliaPeer>) {}
/// Register a failure of sending a request to `peer`.
pub fn register_send_failure(&mut self, _peer: PeerId) {}
/// Register a success of sending a request to `peer`.
pub fn register_send_success(&mut self, _peer: PeerId) {}
/// Get next action for `peer`.
pub fn next_peer_action(&mut self, _peer: &PeerId) -> Option<QueryAction> {
None
}
/// Get next action for a `FIND_NODE` query.
pub fn next_action(&mut self) -> Option<QueryAction> {
Some(QueryAction::QuerySucceeded { query: self.query })
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/tests/user_protocol_2.rs | tests/user_protocol_2.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use litep2p::{
codec::ProtocolCodec,
config::ConfigBuilder,
crypto::ed25519::Keypair,
protocol::{TransportEvent, TransportService, UserProtocol},
transport::tcp::config::Config as TcpConfig,
types::protocol::ProtocolName,
Litep2p, Litep2pEvent, PeerId,
};
use futures::StreamExt;
use multiaddr::Multiaddr;
use tokio::sync::mpsc::{channel, Receiver, Sender};
use std::collections::HashSet;
struct CustomProtocol {
protocol: ProtocolName,
codec: ProtocolCodec,
peers: HashSet<PeerId>,
rx: Receiver<Multiaddr>,
}
impl CustomProtocol {
pub fn new() -> (Self, Sender<Multiaddr>) {
let (tx, rx) = channel(64);
(
Self {
rx,
peers: HashSet::new(),
protocol: ProtocolName::from("/custom-protocol/1"),
codec: ProtocolCodec::UnsignedVarint(None),
},
tx,
)
}
}
#[async_trait::async_trait]
impl UserProtocol for CustomProtocol {
fn protocol(&self) -> ProtocolName {
self.protocol.clone()
}
fn codec(&self) -> ProtocolCodec {
self.codec
}
async fn run(mut self: Box<Self>, mut service: TransportService) -> litep2p::Result<()> {
loop {
tokio::select! {
event = service.next() => match event.unwrap() {
TransportEvent::ConnectionEstablished { peer, .. } => {
self.peers.insert(peer);
}
TransportEvent::ConnectionClosed { peer: _ } => {}
TransportEvent::SubstreamOpened {
peer: _,
protocol: _,
direction: _,
substream: _,
fallback: _,
} => {}
TransportEvent::SubstreamOpenFailure {
substream: _,
error: _,
} => {}
TransportEvent::DialFailure { .. } => {}
},
address = self.rx.recv() => {
service.dial_address(address.unwrap()).unwrap();
}
}
}
}
}
#[tokio::test]
async fn user_protocol_2() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (custom_protocol1, sender1) = CustomProtocol::new();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
..Default::default()
})
.with_user_protocol(Box::new(custom_protocol1))
.build();
let (custom_protocol2, _sender2) = CustomProtocol::new();
let config2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
..Default::default()
})
.with_user_protocol(Box::new(custom_protocol2))
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address = litep2p2.listen_addresses().next().unwrap().clone();
sender1.send(address).await.unwrap();
let mut litep2p1_ready = false;
let mut litep2p2_ready = false;
while !litep2p1_ready || !litep2p2_ready {
tokio::select! {
event = litep2p1.next_event() => if let Litep2pEvent::ConnectionEstablished { .. } = event.unwrap() {
litep2p1_ready = true;
},
event = litep2p2.next_event() => if let Litep2pEvent::ConnectionEstablished { .. } = event.unwrap() {
litep2p2_ready = true;
}
}
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/tests/webrtc.rs | tests/webrtc.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
#![cfg(feature = "webrtc")]
use futures::StreamExt;
use litep2p::{
config::ConfigBuilder as Litep2pConfigBuilder,
crypto::ed25519::Keypair,
protocol::{libp2p::ping, notification::ConfigBuilder},
transport::webrtc::config::Config,
types::protocol::ProtocolName,
Litep2p,
};
#[tokio::test]
#[ignore]
async fn webrtc_test() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config, mut ping_event_stream) = ping::Config::default();
let (notif_config, mut notif_event_stream) = ConfigBuilder::new(ProtocolName::from(
// Westend block-announces protocol name.
"/e143f23803ac50e8f6f8e62695d1ce9e4e1d68aa36c1cd2cfd15340213f3423e/block-announces/1",
))
.with_max_size(5 * 1024 * 1024)
.with_handshake(vec![1, 2, 3, 4])
.with_auto_accept_inbound(true)
.build();
let config = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_webrtc(Config {
listen_addresses: vec!["/ip4/192.168.1.170/udp/8888/webrtc-direct".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config)
.with_notification_protocol(notif_config)
.build();
let mut litep2p = Litep2p::new(config).unwrap();
let address = litep2p.listen_addresses().next().unwrap().clone();
tracing::info!("listen address: {address:?}");
loop {
tokio::select! {
event = litep2p.next_event() => {
tracing::debug!("litep2p event received: {event:?}");
}
event = ping_event_stream.next() => {
if event.is_none() {
tracing::error!("ping event stream terminated");
break
}
tracing::error!("ping event received: {event:?}");
}
_event = notif_event_stream.next() => {
// tracing::error!("notification event received: {event:?}");
}
}
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/tests/custom_executor.rs | tests/custom_executor.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use litep2p::{
config::ConfigBuilder,
crypto::ed25519::Keypair,
executor::Executor,
protocol::{
notification::{
Config as NotificationConfig, Direction, NotificationEvent, ValidationResult,
},
request_response::{
ConfigBuilder as RequestResponseConfigBuilder, DialOptions, RequestResponseEvent,
},
},
transport::tcp::config::Config as TcpConfig,
types::protocol::ProtocolName,
Litep2p, Litep2pEvent,
};
use bytes::BytesMut;
use futures::{future::BoxFuture, stream::FuturesUnordered, StreamExt};
use tokio::sync::mpsc::{channel, Receiver, Sender};
use std::{future::Future, pin::Pin, sync::Arc};
struct TaskExecutor {
rx: Receiver<Pin<Box<dyn Future<Output = ()> + Send>>>,
futures: FuturesUnordered<BoxFuture<'static, ()>>,
}
impl TaskExecutor {
pub fn new() -> (Self, Sender<Pin<Box<dyn Future<Output = ()> + Send>>>) {
let (tx, rx) = channel(64);
(
Self {
rx,
futures: FuturesUnordered::new(),
},
tx,
)
}
async fn next(&mut self) {
tokio::select! {
future = self.rx.recv() => {
self.futures.push(future.unwrap());
}
_ = self.futures.next(), if !self.futures.is_empty() => {}
}
}
}
struct TaskExecutorHandle {
tx: Sender<Pin<Box<dyn Future<Output = ()> + Send>>>,
}
impl Executor for TaskExecutorHandle {
fn run(&self, future: Pin<Box<dyn Future<Output = ()> + Send>>) {
let _ = self.tx.try_send(future);
}
fn run_with_name(&self, _: &'static str, future: Pin<Box<dyn Future<Output = ()> + Send>>) {
let _ = self.tx.try_send(future);
}
}
#[tokio::test]
async fn custom_executor() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (mut executor, sender) = TaskExecutor::new();
tokio::spawn(async move {
loop {
executor.next().await
}
});
let (notif_config1, mut handle1) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let (req_resp_config1, mut req_resp_handle1) =
RequestResponseConfigBuilder::new(ProtocolName::from("/protocol/1"))
.with_max_size(1024)
.build();
let handle = TaskExecutorHandle { tx: sender.clone() };
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config1)
.with_request_response_protocol(req_resp_config1)
.with_executor(Arc::new(handle))
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.build();
let (notif_config2, mut handle2) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let (req_resp_config2, mut req_resp_handle2) =
RequestResponseConfigBuilder::new(ProtocolName::from("/protocol/1"))
.with_max_size(1024)
.build();
let handle = TaskExecutorHandle { tx: sender };
let config2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config2)
.with_request_response_protocol(req_resp_config2)
.with_executor(Arc::new(handle))
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let peer1 = *litep2p1.local_peer_id();
let peer2 = *litep2p2.local_peer_id();
// wait until peers have connected and spawn the litep2p objects in the background
let address = litep2p2.listen_addresses().next().unwrap().clone();
litep2p1.dial_address(address).await.unwrap();
let mut litep2p1_connected = false;
let mut litep2p2_connected = false;
loop {
tokio::select! {
event = litep2p1.next_event() => if let Litep2pEvent::ConnectionEstablished { .. } = event.unwrap() {
litep2p1_connected = true;
},
event = litep2p2.next_event() => if let Litep2pEvent::ConnectionEstablished { .. } = event.unwrap() {
litep2p2_connected = true;
}
}
if litep2p1_connected && litep2p2_connected {
tokio::time::sleep(std::time::Duration::from_millis(200)).await;
break;
}
}
tokio::spawn(async move {
loop {
tokio::select! {
_ = litep2p1.next_event() => {},
_ = litep2p2.next_event() => {},
}
}
});
// open substream for `peer2` and accept it
handle1.open_substream(peer2).await.unwrap();
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
handle2.send_validation_result(peer1, ValidationResult::Accept);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer2,
handshake: vec![1, 2, 3, 4],
}
);
handle1.send_validation_result(peer2, ValidationResult::Accept);
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
direction: Direction::Inbound,
fallback: None,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Outbound,
peer: peer2,
handshake: vec![1, 2, 3, 4],
}
);
handle1.send_sync_notification(peer2, vec![1, 3, 3, 7]).unwrap();
handle2.send_sync_notification(peer1, vec![1, 3, 3, 8]).unwrap();
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::NotificationReceived {
peer: peer1,
notification: BytesMut::from(&[1, 3, 3, 7][..]),
}
);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::NotificationReceived {
peer: peer2,
notification: BytesMut::from(&[1, 3, 3, 8][..]),
}
);
// verify that the request-response protocol works as well
req_resp_handle1
.send_request(peer2, vec![1, 2, 3, 4], DialOptions::Reject)
.await
.unwrap();
match req_resp_handle2.next().await.unwrap() {
RequestResponseEvent::RequestReceived {
peer,
request_id,
request,
..
} => {
assert_eq!(peer, peer1);
assert_eq!(request, vec![1, 2, 3, 4]);
req_resp_handle2.send_response(request_id, vec![1, 3, 3, 7]);
}
event => panic!("unexpected event: {event:?}"),
}
match req_resp_handle1.next().await.unwrap() {
RequestResponseEvent::ResponseReceived { peer, response, .. } => {
assert_eq!(peer, peer2);
assert_eq!(response, vec![1, 3, 3, 7]);
}
event => panic!("unexpected event: {event:?}"),
}
}
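The `TaskExecutor`/`TaskExecutorHandle` pair in the test above is built around one idea: the handle is cheap to hand out and merely forwards boxed work over a channel, while a single driver loop owns the receiver and runs the work. The following is a minimal, std-only sketch of that handle-over-channel design, using boxed closures and `std::sync::mpsc` in place of boxed futures and `FuturesUnordered`; all names here (`ExecutorHandle`, `run_queued_tasks`) are illustrative, not litep2p APIs.

```rust
use std::sync::mpsc::{channel, Sender};
use std::sync::{Arc, Mutex};

// Boxed unit of work, analogous to the `Pin<Box<dyn Future>>` in the test.
type Task = Box<dyn FnOnce() + Send>;

struct ExecutorHandle {
    tx: Sender<Task>,
}

impl ExecutorHandle {
    // Mirrors `Executor::run`: fire-and-forget, ignoring a closed channel.
    fn run(&self, task: Task) {
        let _ = self.tx.send(task);
    }
}

fn run_queued_tasks() -> Vec<String> {
    let (tx, rx) = channel::<Task>();
    let results = Arc::new(Mutex::new(Vec::new()));

    let handle = ExecutorHandle { tx };
    for name in ["task 1", "task 2"] {
        let results = results.clone();
        handle.run(Box::new(move || results.lock().unwrap().push(name.to_string())));
    }
    drop(handle); // close the channel so the driver loop below terminates

    // Driver loop: drain the channel and execute each queued task, the way
    // `TaskExecutor::next` drains `self.rx` into `FuturesUnordered`.
    for task in rx {
        task();
    }

    Arc::try_unwrap(results).unwrap().into_inner().unwrap()
}

fn main() {
    assert_eq!(run_queued_tasks(), vec!["task 1", "task 2"]);
}
```

The same shape is why `TaskExecutorHandle::run` can silently drop the error from `try_send`: if the driver side is gone, there is nothing left to run the task on anyway.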
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/tests/mod.rs | tests/mod.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
mod common;
mod conformance;
mod connection;
mod protocol;
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/tests/user_protocol.rs | tests/user_protocol.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use litep2p::{
codec::ProtocolCodec,
config::ConfigBuilder,
crypto::ed25519::Keypair,
protocol::{mdns::Config as MdnsConfig, TransportEvent, TransportService, UserProtocol},
transport::tcp::config::Config as TcpConfig,
types::protocol::ProtocolName,
Litep2p, PeerId,
};
use futures::StreamExt;
use std::{collections::HashSet, sync::Arc, time::Duration};
struct CustomProtocol {
protocol: ProtocolName,
codec: ProtocolCodec,
peers: HashSet<PeerId>,
}
impl CustomProtocol {
pub fn new() -> Self {
let protocol: Arc<str> = Arc::from(String::from("/custom-protocol/1"));
Self {
peers: HashSet::new(),
protocol: ProtocolName::from(protocol),
codec: ProtocolCodec::UnsignedVarint(None),
}
}
}
#[async_trait::async_trait]
impl UserProtocol for CustomProtocol {
fn protocol(&self) -> ProtocolName {
self.protocol.clone()
}
fn codec(&self) -> ProtocolCodec {
self.codec
}
async fn run(mut self: Box<Self>, mut service: TransportService) -> litep2p::Result<()> {
loop {
while let Some(event) = service.next().await {
tracing::trace!("received event: {event:?}");
match event {
TransportEvent::ConnectionEstablished { peer, .. } => {
self.peers.insert(peer);
}
TransportEvent::ConnectionClosed { peer } => {
self.peers.remove(&peer);
}
_ => {}
}
}
}
}
}
#[tokio::test]
async fn user_protocol() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let custom_protocol1 = Box::new(CustomProtocol::new());
let (mdns_config, _stream) = MdnsConfig::new(Duration::from_secs(30));
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
..Default::default()
})
.with_user_protocol(custom_protocol1)
.with_mdns(mdns_config)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let peer1 = *litep2p1.local_peer_id();
let listen_address = litep2p1.listen_addresses().next().unwrap().clone();
let custom_protocol2 = Box::new(CustomProtocol::new());
let config2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
..Default::default()
})
.with_user_protocol(custom_protocol2)
.with_known_addresses(vec![(peer1, vec![listen_address])].into_iter())
.with_max_parallel_dials(8usize)
.build();
let mut litep2p2 = Litep2p::new(config2).unwrap();
litep2p2.dial(&peer1).await.unwrap();
// wait until connection is established
let mut litep2p1_ready = false;
let mut litep2p2_ready = false;
while !litep2p1_ready && !litep2p2_ready {
tokio::select! {
event = litep2p1.next_event() => {
tracing::trace!("litep2p1 event: {event:?}");
litep2p1_ready = true;
}
event = litep2p2.next_event() => {
tracing::trace!("litep2p2 event: {event:?}");
litep2p2_ready = true;
}
}
}
// wait until connection is closed by the keep-alive timeout
let mut litep2p1_ready = false;
let mut litep2p2_ready = false;
while !litep2p1_ready && !litep2p2_ready {
tokio::select! {
event = litep2p1.next_event() => {
tracing::trace!("litep2p1 event: {event:?}");
litep2p1_ready = true;
}
event = litep2p2.next_event() => {
tracing::trace!("litep2p2 event: {event:?}");
litep2p2_ready = true;
}
}
}
let sink = litep2p2.bandwidth_sink();
    tracing::trace!("inbound {}, outbound {}", sink.inbound(), sink.outbound());
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/tests/substream.rs | tests/substream.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use litep2p::{
codec::ProtocolCodec,
config::ConfigBuilder,
error::SubstreamError,
protocol::{Direction, TransportEvent, TransportService, UserProtocol},
substream::{Substream, SubstreamSet},
transport::tcp::config::Config as TcpConfig,
types::{protocol::ProtocolName, SubstreamId},
Error, Litep2p, Litep2pEvent, PeerId,
};
#[cfg(feature = "quic")]
use litep2p::transport::quic::config::Config as QuicConfig;
#[cfg(feature = "websocket")]
use litep2p::transport::websocket::config::Config as WebSocketConfig;
use bytes::Bytes;
use futures::{Sink, SinkExt, StreamExt};
use tokio::{
io::AsyncWrite,
sync::{
mpsc::{channel, Receiver, Sender},
oneshot,
},
};
use std::{
    collections::{HashMap, HashSet},
    io::ErrorKind,
    pin::Pin,
    sync::Arc,
    task::Poll,
};
enum Transport {
Tcp(TcpConfig),
#[cfg(feature = "quic")]
Quic(QuicConfig),
#[cfg(feature = "websocket")]
WebSocket(WebSocketConfig),
}
enum Command {
SendPayloadFramed(PeerId, Vec<u8>, oneshot::Sender<litep2p::Result<()>>),
SendPayloadSink(PeerId, Vec<u8>, oneshot::Sender<litep2p::Result<()>>),
SendPayloadAsyncWrite(PeerId, Vec<u8>, oneshot::Sender<litep2p::Result<()>>),
OpenSubstream(PeerId, oneshot::Sender<()>),
}
struct CustomProtocol {
protocol: ProtocolName,
codec: ProtocolCodec,
peers: HashSet<PeerId>,
rx: Receiver<Command>,
pending_opens: HashMap<SubstreamId, (PeerId, oneshot::Sender<()>)>,
substreams: SubstreamSet<PeerId, Substream>,
}
impl CustomProtocol {
pub fn new(codec: ProtocolCodec) -> (Self, Sender<Command>) {
let protocol: Arc<str> = Arc::from(String::from("/custom-protocol/1"));
let (tx, rx) = channel(64);
(
Self {
peers: HashSet::new(),
protocol: ProtocolName::from(protocol),
codec,
rx,
pending_opens: HashMap::new(),
substreams: SubstreamSet::new(),
},
tx,
)
}
}
#[async_trait::async_trait]
impl UserProtocol for CustomProtocol {
fn protocol(&self) -> ProtocolName {
self.protocol.clone()
}
fn codec(&self) -> ProtocolCodec {
self.codec
}
async fn run(mut self: Box<Self>, mut service: TransportService) -> litep2p::Result<()> {
loop {
tokio::select! {
event = service.next() => match event.unwrap() {
TransportEvent::ConnectionEstablished { peer, .. } => {
self.peers.insert(peer);
}
TransportEvent::ConnectionClosed { peer } => {
self.peers.remove(&peer);
}
TransportEvent::SubstreamOpened {
peer,
substream,
direction,
..
} => {
self.substreams.insert(peer, substream);
if let Direction::Outbound(substream_id) = direction {
self.pending_opens.remove(&substream_id).unwrap().1.send(()).unwrap();
}
}
_ => {}
},
event = self.substreams.next() => match event {
None => panic!("`SubstreamSet` returned `None`"),
Some((peer, Err(_))) => {
if let Some(mut substream) = self.substreams.remove(&peer) {
futures::future::poll_fn(|cx| {
match futures::ready!(Sink::poll_close(Pin::new(&mut substream), cx)) {
_ => Poll::Ready(()),
}
}).await;
}
}
Some((peer, Ok(_))) => {
if let Some(mut substream) = self.substreams.remove(&peer) {
futures::future::poll_fn(|cx| {
match futures::ready!(Sink::poll_close(Pin::new(&mut substream), cx)) {
_ => Poll::Ready(()),
}
}).await;
}
},
},
command = self.rx.recv() => match command.unwrap() {
Command::SendPayloadFramed(peer, payload, tx) => {
match self.substreams.remove(&peer) {
None => {
tx.send(Err(Error::PeerDoesntExist(peer))).unwrap();
}
Some(mut substream) => {
let payload = Bytes::from(payload);
let res = substream.send_framed(payload).await.map_err(Into::into);
tx.send(res).unwrap();
let _ = substream.close().await;
}
}
}
Command::SendPayloadSink(peer, payload, tx) => {
match self.substreams.remove(&peer) {
None => {
tx.send(Err(Error::PeerDoesntExist(peer))).unwrap();
}
Some(mut substream) => {
let payload = Bytes::from(payload);
let res = substream.send(payload).await.map_err(Into::into);
tx.send(res).unwrap();
let _ = substream.close().await;
}
}
}
Command::SendPayloadAsyncWrite(peer, payload, tx) => {
match self.substreams.remove(&peer) {
None => {
tx.send(Err(Error::PeerDoesntExist(peer))).unwrap();
}
Some(mut substream) => {
let res = futures::future::poll_fn(|cx| {
if let Err(error) = futures::ready!(Pin::new(&mut substream).poll_write(cx, &payload)) {
return Poll::Ready(Err(error.into()));
}
if let Err(error) = futures::ready!(tokio::io::AsyncWrite::poll_flush(
Pin::new(&mut substream),
cx
)) {
return Poll::Ready(Err(error.into()));
}
if let Err(error) = futures::ready!(tokio::io::AsyncWrite::poll_shutdown(
Pin::new(&mut substream),
cx
)) {
return Poll::Ready(Err(error.into()));
}
Poll::Ready(Ok(()))
})
.await;
tx.send(res).unwrap();
}
}
}
Command::OpenSubstream(peer, tx) => {
let substream_id = service.open_substream(peer).unwrap();
self.pending_opens.insert(substream_id, (peer, tx));
}
}
}
}
}
}
async fn connect_peers(litep2p1: &mut Litep2p, litep2p2: &mut Litep2p) {
let listen_address = litep2p1.listen_addresses().next().unwrap().clone();
litep2p2.dial_address(listen_address).await.unwrap();
let mut litep2p1_ready = false;
let mut litep2p2_ready = false;
while !litep2p1_ready && !litep2p2_ready {
tokio::select! {
event = litep2p1.next_event() => if let Litep2pEvent::ConnectionEstablished { .. } = event.unwrap() { litep2p1_ready = true },
event = litep2p2.next_event() => if let Litep2pEvent::ConnectionEstablished { .. } = event.unwrap() { litep2p2_ready = true },
}
}
}
#[tokio::test]
async fn too_big_identity_payload_framed_tcp() {
too_big_identity_payload_framed(
Transport::Tcp(Default::default()),
Transport::Tcp(Default::default()),
)
.await;
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn too_big_identity_payload_framed_quic() {
too_big_identity_payload_framed(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn too_big_identity_payload_framed_websocket() {
too_big_identity_payload_framed(
Transport::WebSocket(Default::default()),
Transport::WebSocket(Default::default()),
)
.await;
}
// send too big payload using `Substream::send_framed()` and verify it's rejected
async fn too_big_identity_payload_framed(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (custom_protocol1, tx1) = CustomProtocol::new(ProtocolCodec::Identity(10usize));
let config1 = match transport1 {
Transport::Tcp(config) => ConfigBuilder::new().with_tcp(config),
#[cfg(feature = "quic")]
Transport::Quic(config) => ConfigBuilder::new().with_quic(config),
#[cfg(feature = "websocket")]
Transport::WebSocket(config) => ConfigBuilder::new().with_websocket(config),
}
.with_user_protocol(Box::new(custom_protocol1))
.build();
let (custom_protocol2, _tx2) = CustomProtocol::new(ProtocolCodec::Identity(10usize));
let config2 = match transport2 {
Transport::Tcp(config) => ConfigBuilder::new().with_tcp(config),
#[cfg(feature = "quic")]
Transport::Quic(config) => ConfigBuilder::new().with_quic(config),
#[cfg(feature = "websocket")]
Transport::WebSocket(config) => ConfigBuilder::new().with_websocket(config),
}
.with_user_protocol(Box::new(custom_protocol2))
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let peer2 = *litep2p2.local_peer_id();
// connect peers and start event loops for litep2ps
connect_peers(&mut litep2p1, &mut litep2p2).await;
tokio::spawn(async move {
loop {
tokio::select! {
_event = litep2p1.next_event() => {}
_event = litep2p2.next_event() => {}
}
}
});
tokio::time::sleep(std::time::Duration::from_millis(1000)).await;
// open substream to peer
let (tx, rx) = oneshot::channel();
tx1.send(Command::OpenSubstream(peer2, tx)).await.unwrap();
let Ok(()) = rx.await else {
panic!("failed to open substream");
};
    // send too large payload to peer
let (tx, rx) = oneshot::channel();
tx1.send(Command::SendPayloadFramed(peer2, vec![0u8; 16], tx)).await.unwrap();
match rx.await {
Ok(Err(Error::IoError(ErrorKind::PermissionDenied))) => {}
Ok(Err(Error::SubstreamError(SubstreamError::IoError(ErrorKind::PermissionDenied)))) => {}
event => panic!("invalid event received: {event:?}"),
}
}
#[tokio::test]
async fn too_big_identity_payload_sink_tcp() {
too_big_identity_payload_sink(
Transport::Tcp(Default::default()),
Transport::Tcp(Default::default()),
)
.await;
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn too_big_identity_payload_sink_quic() {
too_big_identity_payload_sink(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn too_big_identity_payload_sink_websocket() {
too_big_identity_payload_sink(
Transport::WebSocket(Default::default()),
Transport::WebSocket(Default::default()),
)
.await;
}
// send too big payload using `<Substream as Sink>::send()` and verify it's rejected
async fn too_big_identity_payload_sink(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (custom_protocol1, tx1) = CustomProtocol::new(ProtocolCodec::Identity(10usize));
let config1 = match transport1 {
Transport::Tcp(config) => ConfigBuilder::new().with_tcp(config),
#[cfg(feature = "quic")]
Transport::Quic(config) => ConfigBuilder::new().with_quic(config),
#[cfg(feature = "websocket")]
Transport::WebSocket(config) => ConfigBuilder::new().with_websocket(config),
}
.with_user_protocol(Box::new(custom_protocol1))
.build();
let (custom_protocol2, _tx2) = CustomProtocol::new(ProtocolCodec::Identity(10usize));
let config2 = match transport2 {
Transport::Tcp(config) => ConfigBuilder::new().with_tcp(config),
#[cfg(feature = "quic")]
Transport::Quic(config) => ConfigBuilder::new().with_quic(config),
#[cfg(feature = "websocket")]
Transport::WebSocket(config) => ConfigBuilder::new().with_websocket(config),
}
.with_user_protocol(Box::new(custom_protocol2))
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let peer2 = *litep2p2.local_peer_id();
// connect peers and start event loops for litep2ps
connect_peers(&mut litep2p1, &mut litep2p2).await;
tokio::spawn(async move {
loop {
tokio::select! {
_event = litep2p1.next_event() => {}
_event = litep2p2.next_event() => {}
}
}
});
tokio::time::sleep(std::time::Duration::from_millis(1000)).await;
{
// open substream to peer
let (tx, rx) = oneshot::channel();
tx1.send(Command::OpenSubstream(peer2, tx)).await.unwrap();
let Ok(()) = rx.await else {
panic!("failed to open substream");
};
// send too large payload to peer
let (tx, rx) = oneshot::channel();
tx1.send(Command::SendPayloadSink(peer2, vec![0u8; 16], tx)).await.unwrap();
match rx.await {
Ok(Err(Error::IoError(ErrorKind::PermissionDenied))) => {}
Ok(Err(Error::SubstreamError(SubstreamError::IoError(
ErrorKind::PermissionDenied,
)))) => {}
event => panic!("invalid event received: {event:?}"),
}
}
}
#[tokio::test]
async fn correct_payload_size_sink_tcp() {
correct_payload_size_sink(
Transport::Tcp(Default::default()),
Transport::Tcp(Default::default()),
)
.await;
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn correct_payload_size_sink_quic() {
correct_payload_size_sink(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn correct_payload_size_sink_websocket() {
correct_payload_size_sink(
Transport::WebSocket(Default::default()),
Transport::WebSocket(Default::default()),
)
.await;
}
// send correctly-sized payload using `<Substream as Sink>::send()`
async fn correct_payload_size_sink(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (custom_protocol1, tx1) = CustomProtocol::new(ProtocolCodec::Identity(10usize));
let config1 = match transport1 {
Transport::Tcp(config) => ConfigBuilder::new().with_tcp(config),
#[cfg(feature = "quic")]
Transport::Quic(config) => ConfigBuilder::new().with_quic(config),
#[cfg(feature = "websocket")]
Transport::WebSocket(config) => ConfigBuilder::new().with_websocket(config),
}
.with_user_protocol(Box::new(custom_protocol1))
.build();
let (custom_protocol2, _tx2) = CustomProtocol::new(ProtocolCodec::Identity(10usize));
let config2 = match transport2 {
Transport::Tcp(config) => ConfigBuilder::new().with_tcp(config),
#[cfg(feature = "quic")]
Transport::Quic(config) => ConfigBuilder::new().with_quic(config),
#[cfg(feature = "websocket")]
Transport::WebSocket(config) => ConfigBuilder::new().with_websocket(config),
}
.with_user_protocol(Box::new(custom_protocol2))
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let peer2 = *litep2p2.local_peer_id();
// connect peers and start event loops for litep2ps
connect_peers(&mut litep2p1, &mut litep2p2).await;
tokio::spawn(async move {
loop {
tokio::select! {
_event = litep2p1.next_event() => {}
_event = litep2p2.next_event() => {}
}
}
});
tokio::time::sleep(std::time::Duration::from_millis(1000)).await;
// open substream to peer
let (tx, rx) = oneshot::channel();
tx1.send(Command::OpenSubstream(peer2, tx)).await.unwrap();
let Ok(()) = rx.await else {
panic!("failed to open substream");
};
let (tx, rx) = oneshot::channel();
tx1.send(Command::SendPayloadSink(peer2, vec![0u8; 10], tx)).await.unwrap();
match rx.await {
Ok(_) => {}
event => panic!("invalid event received: {event:?}"),
}
}
#[tokio::test]
async fn correct_payload_size_async_write_tcp() {
correct_payload_size_async_write(
Transport::Tcp(Default::default()),
Transport::Tcp(Default::default()),
)
.await;
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn correct_payload_size_async_write_quic() {
correct_payload_size_async_write(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn correct_payload_size_async_write_websocket() {
correct_payload_size_async_write(
Transport::WebSocket(Default::default()),
Transport::WebSocket(Default::default()),
)
.await;
}
// send correctly-sized payload using `<Substream as AsyncWrite>::poll_write()`
async fn correct_payload_size_async_write(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (custom_protocol1, tx1) = CustomProtocol::new(ProtocolCodec::Identity(10usize));
let config1 = match transport1 {
Transport::Tcp(config) => ConfigBuilder::new().with_tcp(config),
#[cfg(feature = "quic")]
Transport::Quic(config) => ConfigBuilder::new().with_quic(config),
#[cfg(feature = "websocket")]
Transport::WebSocket(config) => ConfigBuilder::new().with_websocket(config),
}
.with_user_protocol(Box::new(custom_protocol1))
.build();
let (custom_protocol2, _tx2) = CustomProtocol::new(ProtocolCodec::Identity(10usize));
let config2 = match transport2 {
Transport::Tcp(config) => ConfigBuilder::new().with_tcp(config),
#[cfg(feature = "quic")]
Transport::Quic(config) => ConfigBuilder::new().with_quic(config),
#[cfg(feature = "websocket")]
Transport::WebSocket(config) => ConfigBuilder::new().with_websocket(config),
}
.with_user_protocol(Box::new(custom_protocol2))
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let peer2 = *litep2p2.local_peer_id();
// connect peers and start event loops for litep2ps
connect_peers(&mut litep2p1, &mut litep2p2).await;
tokio::spawn(async move {
loop {
tokio::select! {
_event = litep2p1.next_event() => {}
_event = litep2p2.next_event() => {}
}
}
});
tokio::time::sleep(std::time::Duration::from_millis(1000)).await;
// open substream to peer
let (tx, rx) = oneshot::channel();
tx1.send(Command::OpenSubstream(peer2, tx)).await.unwrap();
let Ok(()) = rx.await else {
panic!("failed to open substream");
};
let (tx, rx) = oneshot::channel();
tx1.send(Command::SendPayloadAsyncWrite(peer2, vec![0u8; 10], tx))
.await
.unwrap();
match rx.await {
Ok(_) => {}
event => panic!("invalid event received: {event:?}"),
}
}
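The `Command` enum above pairs every request with its own reply sender, so the protocol task can answer exactly the caller that issued the request. Below is a std-only sketch of that request/reply pattern, using a worker thread and `std::sync::mpsc` channels where the real code uses an async task with `tokio::sync::mpsc` commands and `tokio::sync::oneshot` replies; the names (`Command::Echo`, `echo_via_worker`) are illustrative only.

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Each command carries the sender half of its own reply channel.
enum Command {
    Echo(String, Sender<String>),
}

fn echo_via_worker(msg: &str) -> String {
    let (cmd_tx, cmd_rx) = channel::<Command>();

    // The "protocol task": drains commands and replies through the
    // per-command channel, like `CustomProtocol::run` handling `self.rx`.
    let worker = thread::spawn(move || {
        for cmd in cmd_rx {
            match cmd {
                Command::Echo(payload, reply_tx) => {
                    let _ = reply_tx.send(format!("echo: {payload}"));
                }
            }
        }
    });

    // The caller: create a reply channel, send the command, wait for the answer.
    let (reply_tx, reply_rx) = channel();
    cmd_tx.send(Command::Echo(msg.to_string(), reply_tx)).unwrap();
    let reply = reply_rx.recv().unwrap();

    drop(cmd_tx); // close the command channel so the worker loop exits
    worker.join().unwrap();
    reply
}

fn main() {
    assert_eq!(echo_via_worker("hello"), "echo: hello");
}
```

Because the reply channel is created per command, responses cannot be misrouted even when many callers share the same command sender.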
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/tests/connection/protocol_dial_invalid_address.rs | tests/connection/protocol_dial_invalid_address.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use litep2p::{
codec::ProtocolCodec,
config::ConfigBuilder,
crypto::ed25519::Keypair,
protocol::{TransportEvent, TransportService, UserProtocol},
transport::tcp::config::Config as TcpConfig,
types::protocol::ProtocolName,
Litep2p, PeerId,
};
use futures::StreamExt;
use multiaddr::{Multiaddr, Protocol};
use multihash::Multihash;
use tokio::sync::oneshot;
#[derive(Debug)]
struct CustomProtocol {
dial_address: Multiaddr,
protocol: ProtocolName,
codec: ProtocolCodec,
tx: oneshot::Sender<()>,
}
impl CustomProtocol {
pub fn new(dial_address: Multiaddr) -> (Self, oneshot::Receiver<()>) {
let (tx, rx) = oneshot::channel();
(
Self {
dial_address,
protocol: ProtocolName::from("/custom-protocol/1"),
codec: ProtocolCodec::UnsignedVarint(None),
tx,
},
rx,
)
}
}
#[async_trait::async_trait]
impl UserProtocol for CustomProtocol {
fn protocol(&self) -> ProtocolName {
self.protocol.clone()
}
fn codec(&self) -> ProtocolCodec {
self.codec
}
async fn run(mut self: Box<Self>, mut service: TransportService) -> litep2p::Result<()> {
if service.dial_address(self.dial_address.clone()).is_err() {
self.tx.send(()).unwrap();
return Ok(());
}
loop {
while let Some(event) = service.next().await {
if let TransportEvent::DialFailure { .. } = event {
self.tx.send(()).unwrap();
return Ok(());
}
}
}
}
}
#[tokio::test]
async fn protocol_dial_invalid_dns_address() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let address = Multiaddr::empty()
.with(Protocol::Dns(std::borrow::Cow::Owned(
"address.that.doesnt.exist.hopefully.pls".to_string(),
)))
.with(Protocol::Tcp(8888))
.with(Protocol::P2p(
Multihash::from_bytes(&PeerId::random().to_bytes()).unwrap(),
));
let (custom_protocol, rx) = CustomProtocol::new(address);
let custom_protocol = Box::new(custom_protocol);
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
..Default::default()
})
.with_user_protocol(custom_protocol)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
tokio::spawn(async move {
loop {
let _ = litep2p1.next_event().await;
}
});
rx.await.unwrap();
}
#[tokio::test]
async fn protocol_dial_peer_id_missing() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let address = Multiaddr::empty()
.with(Protocol::Dns(std::borrow::Cow::Owned(
"google.com".to_string(),
)))
.with(Protocol::Tcp(8888));
let (custom_protocol, rx) = CustomProtocol::new(address);
let custom_protocol = Box::new(custom_protocol);
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
..Default::default()
})
.with_user_protocol(custom_protocol)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
tokio::spawn(async move {
loop {
let _ = litep2p1.next_event().await;
}
});
rx.await.unwrap();
}
// File: tests/connection/mod.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use litep2p::{
config::ConfigBuilder,
crypto::ed25519::Keypair,
error::{DialError, Error, NegotiationError},
protocol::libp2p::ping::{Config as PingConfig, PingEvent},
transport::tcp::config::Config as TcpConfig,
Litep2p, Litep2pEvent, PeerId,
};
#[cfg(feature = "websocket")]
use litep2p::transport::websocket::config::Config as WebSocketConfig;
#[cfg(feature = "quic")]
use litep2p::{error::AddressError, transport::quic::config::Config as QuicConfig};
use futures::{Stream, StreamExt};
use multiaddr::{Multiaddr, Protocol};
use multihash::Multihash;
use network_interface::{NetworkInterface, NetworkInterfaceConfig};
use tokio::net::TcpListener;
#[cfg(feature = "quic")]
use tokio::net::UdpSocket;
use crate::common::{add_transport, Transport};
#[cfg(feature = "websocket")]
use std::collections::HashSet;
#[cfg(test)]
mod protocol_dial_invalid_address;
#[cfg(test)]
mod stability;
#[tokio::test]
async fn two_litep2ps_work_tcp() {
two_litep2ps_work(
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
)
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn two_litep2ps_work_quic() {
two_litep2ps_work(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn two_litep2ps_work_websocket() {
two_litep2ps_work(
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
)
.await;
}
async fn two_litep2ps_work(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config1, _ping_event_stream1) = PingConfig::default();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_libp2p_ping(ping_config1);
let config1 = add_transport(config1, transport1).build();
let (ping_config2, _ping_event_stream2) = PingConfig::default();
let config2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_libp2p_ping(ping_config2);
let config2 = add_transport(config2, transport2).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address = litep2p2.listen_addresses().next().unwrap().clone();
litep2p1.dial_address(address).await.unwrap();
let (res1, res2) = tokio::join!(litep2p1.next_event(), litep2p2.next_event());
assert!(std::matches!(
res1,
Some(Litep2pEvent::ConnectionEstablished { .. })
));
assert!(std::matches!(
res2,
Some(Litep2pEvent::ConnectionEstablished { .. })
));
}
#[tokio::test]
async fn dial_failure_tcp() {
dial_failure(
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
Multiaddr::empty()
.with(Protocol::Ip6(std::net::Ipv6Addr::new(
0, 0, 0, 0, 0, 0, 0, 1,
)))
.with(Protocol::Tcp(1)),
)
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn dial_failure_quic() {
dial_failure(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
Multiaddr::empty()
.with(Protocol::Ip6(std::net::Ipv6Addr::new(
0, 0, 0, 0, 0, 0, 0, 1,
)))
.with(Protocol::Udp(1))
.with(Protocol::QuicV1),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn dial_failure_websocket() {
dial_failure(
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
Multiaddr::empty()
.with(Protocol::Ip6(std::net::Ipv6Addr::new(
0, 0, 0, 0, 0, 0, 0, 1,
)))
.with(Protocol::Tcp(1))
.with(Protocol::Ws(std::borrow::Cow::Owned("/".to_string()))),
)
.await;
}
async fn dial_failure(transport1: Transport, transport2: Transport, dial_address: Multiaddr) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config1, _ping_event_stream1) = PingConfig::default();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_libp2p_ping(ping_config1);
let config1 = add_transport(config1, transport1).build();
let (ping_config2, _ping_event_stream2) = PingConfig::default();
let config2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_libp2p_ping(ping_config2);
let config2 = add_transport(config2, transport2).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address = dial_address.with(Protocol::P2p(
Multihash::from_bytes(&litep2p2.local_peer_id().to_bytes()).unwrap(),
));
litep2p1.dial_address(address).await.unwrap();
tokio::spawn(async move {
loop {
let _ = litep2p2.next_event().await;
}
});
assert!(std::matches!(
litep2p1.next_event().await,
Some(Litep2pEvent::DialFailure { .. })
));
}
#[tokio::test]
async fn connect_over_dns() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let keypair1 = Keypair::generate();
let (ping_config1, _ping_event_stream1) = PingConfig::default();
let config1 = ConfigBuilder::new()
.with_keypair(keypair1)
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config1)
.build();
let keypair2 = Keypair::generate();
let (ping_config2, _ping_event_stream2) = PingConfig::default();
let config2 = ConfigBuilder::new()
.with_keypair(keypair2)
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config2)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let peer2 = *litep2p2.local_peer_id();
let address = litep2p2.listen_addresses().next().unwrap().clone();
let tcp = address.iter().nth(1).unwrap();
let mut new_address = Multiaddr::empty();
new_address.push(Protocol::Dns("localhost".into()));
new_address.push(tcp);
new_address.push(Protocol::P2p(
Multihash::from_bytes(&peer2.to_bytes()).unwrap(),
));
litep2p1.dial_address(new_address).await.unwrap();
let (res1, res2) = tokio::join!(litep2p1.next_event(), litep2p2.next_event());
assert!(std::matches!(
res1,
Some(Litep2pEvent::ConnectionEstablished { .. })
));
assert!(std::matches!(
res2,
Some(Litep2pEvent::ConnectionEstablished { .. })
));
}
#[tokio::test]
async fn connection_timeout_tcp() {
// create tcp listener but don't accept any inbound connections
let listener = TcpListener::bind("[::1]:0").await.unwrap();
let address = listener.local_addr().unwrap();
let address = Multiaddr::empty()
.with(Protocol::from(address.ip()))
.with(Protocol::Tcp(address.port()))
.with(Protocol::P2p(
Multihash::from_bytes(&PeerId::random().to_bytes()).unwrap(),
));
connection_timeout(
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
address,
)
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn connection_timeout_quic() {
// create udp socket but don't respond to any inbound datagrams
let listener = UdpSocket::bind("127.0.0.1:0").await.unwrap();
let address = listener.local_addr().unwrap();
let address = Multiaddr::empty()
.with(Protocol::from(address.ip()))
.with(Protocol::Udp(address.port()))
.with(Protocol::QuicV1)
.with(Protocol::P2p(
Multihash::from_bytes(&PeerId::random().to_bytes()).unwrap(),
));
connection_timeout(Transport::Quic(Default::default()), address).await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn connection_timeout_websocket() {
// create tcp listener but don't accept any inbound connections
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let address = listener.local_addr().unwrap();
let address = Multiaddr::empty()
.with(Protocol::from(address.ip()))
.with(Protocol::Tcp(address.port()))
.with(Protocol::Ws(std::borrow::Cow::Owned("/".to_string())))
.with(Protocol::P2p(
Multihash::from_bytes(&PeerId::random().to_bytes()).unwrap(),
));
connection_timeout(
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
address,
)
.await;
}
async fn connection_timeout(transport: Transport, address: Multiaddr) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config, _ping_event_stream) = PingConfig::default();
let litep2p_config = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_libp2p_ping(ping_config);
let litep2p_config = add_transport(litep2p_config, transport).build();
let mut litep2p = Litep2p::new(litep2p_config).unwrap();
litep2p.dial_address(address.clone()).await.unwrap();
let Some(Litep2pEvent::DialFailure {
address: dial_address,
error,
}) = litep2p.next_event().await
else {
panic!("invalid event received");
};
assert_eq!(dial_address, address);
println!("{error:?}");
match error {
DialError::Timeout => {}
DialError::NegotiationError(NegotiationError::Timeout) => {}
_ => panic!("unexpected error {error:?}"),
}
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn dial_quic_peer_id_missing() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config, _ping_event_stream) = PingConfig::default();
let config = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_quic(Default::default())
.with_libp2p_ping(ping_config)
.build();
let mut litep2p = Litep2p::new(config).unwrap();
// create udp socket but don't respond to any inbound datagrams
let listener = UdpSocket::bind("127.0.0.1:0").await.unwrap();
let address = listener.local_addr().unwrap();
let address = Multiaddr::empty()
.with(Protocol::from(address.ip()))
.with(Protocol::Udp(address.port()))
.with(Protocol::QuicV1);
match litep2p.dial_address(address.clone()).await {
Err(Error::AddressError(AddressError::PeerIdMissing)) => {}
state => panic!("dial not supposed to succeed {state:?}"),
}
}
#[tokio::test]
async fn dial_self_tcp() {
dial_self(Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}))
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn dial_self_quic() {
dial_self(Transport::Quic(Default::default())).await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn dial_self_websocket() {
dial_self(Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}))
.await;
}
async fn dial_self(transport: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config, _ping_event_stream) = PingConfig::default();
let litep2p_config = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_libp2p_ping(ping_config);
let litep2p_config = add_transport(litep2p_config, transport).build();
let mut litep2p = Litep2p::new(litep2p_config).unwrap();
let address = litep2p.listen_addresses().next().unwrap().clone();
// dial without peer id attached
assert!(std::matches!(
litep2p.dial_address(address.clone()).await,
Err(Error::TriedToDialSelf)
));
}
#[tokio::test]
async fn attempt_to_dial_using_unsupported_transport_tcp() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config, _ping_event_stream) = PingConfig::default();
let config = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(Default::default())
.with_libp2p_ping(ping_config)
.build();
let mut litep2p = Litep2p::new(config).unwrap();
let address = Multiaddr::empty()
.with(Protocol::from(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Tcp(8888))
.with(Protocol::Ws(std::borrow::Cow::Borrowed("/")))
.with(Protocol::P2p(
Multihash::from_bytes(&PeerId::random().to_bytes()).unwrap(),
));
assert!(std::matches!(
litep2p.dial_address(address.clone()).await,
Err(Error::TransportNotSupported(_))
));
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn attempt_to_dial_using_unsupported_transport_quic() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config, _ping_event_stream) = PingConfig::default();
let config = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_quic(Default::default())
.with_libp2p_ping(ping_config)
.build();
let mut litep2p = Litep2p::new(config).unwrap();
let address = Multiaddr::empty()
.with(Protocol::from(std::net::Ipv4Addr::new(127, 0, 0, 1)))
.with(Protocol::Tcp(8888))
.with(Protocol::P2p(
Multihash::from_bytes(&PeerId::random().to_bytes()).unwrap(),
));
assert!(std::matches!(
litep2p.dial_address(address.clone()).await,
Err(Error::TransportNotSupported(_))
));
}
#[tokio::test]
async fn keep_alive_timeout_tcp() {
keep_alive_timeout(
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
)
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn keep_alive_timeout_quic() {
keep_alive_timeout(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn keep_alive_timeout_websocket() {
keep_alive_timeout(
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
)
.await;
}
async fn keep_alive_timeout(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config1, mut ping_event_stream1) = PingConfig::default();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_libp2p_ping(ping_config1);
let config1 = add_transport(config1, transport1).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let (ping_config2, mut ping_event_stream2) = PingConfig::default();
let config2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_libp2p_ping(ping_config2);
let config2 = add_transport(config2, transport2).build();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address1 = litep2p1.listen_addresses().next().unwrap().clone();
litep2p2.dial_address(address1).await.unwrap();
let mut litep2p1_ping = false;
let mut litep2p2_ping = false;
loop {
tokio::select! {
event = litep2p1.next_event() => match event {
Some(Litep2pEvent::ConnectionClosed { .. }) if litep2p1_ping || litep2p2_ping => {
break;
}
_ => {}
},
event = litep2p2.next_event() => match event {
Some(Litep2pEvent::ConnectionClosed { .. }) if litep2p1_ping || litep2p2_ping => {
break;
}
_ => {}
},
_event = ping_event_stream1.next() => {
tracing::warn!("ping1 received");
litep2p1_ping = true;
}
_event = ping_event_stream2.next() => {
tracing::warn!("ping2 received");
litep2p2_ping = true;
}
}
}
}
#[tokio::test]
async fn simultaneous_dial_tcp() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config1, mut ping_event_stream1) = PingConfig::default();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config1)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let (ping_config2, mut ping_event_stream2) = PingConfig::default();
let config2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config2)
.build();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address1 = litep2p1.listen_addresses().next().unwrap().clone();
let address2 = litep2p2.listen_addresses().next().unwrap().clone();
let (res1, res2) = tokio::join!(
litep2p1.dial_address(address2),
litep2p2.dial_address(address1)
);
assert!(std::matches!((res1, res2), (Ok(()), Ok(()))));
let mut ping_received1 = false;
let mut ping_received2 = false;
while !ping_received1 || !ping_received2 {
tokio::select! {
_ = litep2p1.next_event() => {}
_ = litep2p2.next_event() => {}
event = ping_event_stream1.next() => {
if event.is_some() {
ping_received1 = true;
}
}
event = ping_event_stream2.next() => {
if event.is_some() {
ping_received2 = true;
}
}
}
}
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn simultaneous_dial_quic() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config1, mut ping_event_stream1) = PingConfig::default();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_quic(Default::default())
.with_libp2p_ping(ping_config1)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let (ping_config2, mut ping_event_stream2) = PingConfig::default();
let config2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_quic(Default::default())
.with_libp2p_ping(ping_config2)
.build();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address1 = litep2p1.listen_addresses().next().unwrap().clone();
let address2 = litep2p2.listen_addresses().next().unwrap().clone();
let (res1, res2) = tokio::join!(
litep2p1.dial_address(address2),
litep2p2.dial_address(address1)
);
assert!(std::matches!((res1, res2), (Ok(()), Ok(()))));
let mut ping_received1 = false;
let mut ping_received2 = false;
while !ping_received1 || !ping_received2 {
tokio::select! {
_ = litep2p1.next_event() => {}
_ = litep2p2.next_event() => {}
event = ping_event_stream1.next() => {
if event.is_some() {
ping_received1 = true;
}
}
event = ping_event_stream2.next() => {
if event.is_some() {
ping_received2 = true;
}
}
}
}
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn simultaneous_dial_ipv6_quic() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config1, mut ping_event_stream1) = PingConfig::default();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_quic(Default::default())
.with_libp2p_ping(ping_config1)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let (ping_config2, mut ping_event_stream2) = PingConfig::default();
let config2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_quic(Default::default())
.with_libp2p_ping(ping_config2)
.build();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address1 = litep2p1.listen_addresses().next().unwrap().clone();
let address2 = litep2p2.listen_addresses().next().unwrap().clone();
let (res1, res2) = tokio::join!(
litep2p1.dial_address(address2),
litep2p2.dial_address(address1)
);
assert!(std::matches!((res1, res2), (Ok(()), Ok(()))));
let mut ping_received1 = false;
let mut ping_received2 = false;
while !ping_received1 || !ping_received2 {
tokio::select! {
_ = litep2p1.next_event() => {}
_ = litep2p2.next_event() => {}
event = ping_event_stream1.next() => {
if event.is_some() {
ping_received1 = true;
}
}
event = ping_event_stream2.next() => {
if event.is_some() {
ping_received2 = true;
}
}
}
}
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn websocket_over_ipv6() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config1, mut ping_event_stream1) = PingConfig::default();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_websocket(WebSocketConfig {
listen_addresses: vec!["/ip6/::1/tcp/0/ws".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config1)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let (ping_config2, mut ping_event_stream2) = PingConfig::default();
let config2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_websocket(WebSocketConfig {
listen_addresses: vec!["/ip6/::1/tcp/0/ws".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config2)
.build();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address2 = litep2p2.listen_addresses().next().unwrap().clone();
litep2p1.dial_address(address2).await.unwrap();
let mut ping_received1 = false;
let mut ping_received2 = false;
while !ping_received1 || !ping_received2 {
tokio::select! {
_ = litep2p1.next_event() => {}
_ = litep2p2.next_event() => {}
event = ping_event_stream1.next() => {
if event.is_some() {
ping_received1 = true;
}
}
event = ping_event_stream2.next() => {
if event.is_some() {
ping_received2 = true;
}
}
}
}
}
#[tokio::test]
async fn tcp_dns_resolution() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config1, mut ping_event_stream1) = PingConfig::default();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config1)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let (ping_config2, mut ping_event_stream2) = PingConfig::default();
let config2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config2)
.build();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address = litep2p2.listen_addresses().next().unwrap().clone();
let tcp = address.iter().nth(1).unwrap();
let peer2 = *litep2p2.local_peer_id();
let mut new_address = Multiaddr::empty();
new_address.push(Protocol::Dns("localhost".into()));
new_address.push(tcp);
new_address.push(Protocol::P2p(
Multihash::from_bytes(&peer2.to_bytes()).unwrap(),
));
litep2p1.dial_address(new_address).await.unwrap();
let mut ping_received1 = false;
let mut ping_received2 = false;
while !ping_received1 || !ping_received2 {
tokio::select! {
_ = litep2p1.next_event() => {}
_ = litep2p2.next_event() => {}
event = ping_event_stream1.next() => {
if event.is_some() {
ping_received1 = true;
}
}
event = ping_event_stream2.next() => {
if event.is_some() {
ping_received2 = true;
}
}
}
}
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn websocket_dns_resolution() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config1, mut ping_event_stream1) = PingConfig::default();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_websocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config1)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let (ping_config2, mut ping_event_stream2) = PingConfig::default();
let config2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_websocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config2)
.build();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address = litep2p2.listen_addresses().next().unwrap().clone();
let tcp = address.iter().nth(1).unwrap();
let peer2 = *litep2p2.local_peer_id();
let mut new_address = Multiaddr::empty();
new_address.push(Protocol::Dns("localhost".into()));
new_address.push(tcp);
new_address.push(Protocol::Ws(std::borrow::Cow::Owned("/".to_string())));
new_address.push(Protocol::P2p(
Multihash::from_bytes(&peer2.to_bytes()).unwrap(),
));
litep2p1.dial_address(new_address).await.unwrap();
let mut ping_received1 = false;
let mut ping_received2 = false;
while !ping_received1 || !ping_received2 {
tokio::select! {
_ = litep2p1.next_event() => {}
_ = litep2p2.next_event() => {}
event = ping_event_stream1.next() => {
if event.is_some() {
ping_received1 = true;
}
}
event = ping_event_stream2.next() => {
if event.is_some() {
ping_received2 = true;
}
}
}
}
}
#[tokio::test]
async fn multiple_listen_addresses_tcp() {
multiple_listen_addresses(
Transport::Tcp(TcpConfig {
listen_addresses: vec![
"/ip6/::1/tcp/0".parse().unwrap(),
"/ip4/127.0.0.1/tcp/0".parse().unwrap(),
],
..Default::default()
}),
Transport::Tcp(TcpConfig {
listen_addresses: vec![],
..Default::default()
}),
Transport::Tcp(TcpConfig {
listen_addresses: vec![],
..Default::default()
}),
)
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn multiple_listen_addresses_quic() {
multiple_listen_addresses(
Transport::Quic(QuicConfig {
listen_addresses: vec![
"/ip4/127.0.0.1/udp/0/quic-v1".parse().unwrap(),
"/ip6/::1/udp/0/quic-v1".parse().unwrap(),
],
..Default::default()
}),
Transport::Quic(QuicConfig {
listen_addresses: vec![],
..Default::default()
}),
Transport::Quic(QuicConfig {
listen_addresses: vec![],
..Default::default()
}),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn multiple_listen_addresses_websocket() {
multiple_listen_addresses(
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec![
"/ip4/127.0.0.1/tcp/0/ws".parse().unwrap(),
"/ip6/::1/tcp/0/ws".parse().unwrap(),
],
..Default::default()
}),
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec![],
..Default::default()
}),
        Transport::WebSocket(WebSocketConfig {
            listen_addresses: vec![],
            ..Default::default()
        }),
    )
    .await;
}
// File: tests/connection/stability.rs
// Copyright 2025 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use litep2p::{
codec::ProtocolCodec,
config::ConfigBuilder,
crypto::ed25519::Keypair,
protocol::{
libp2p::ping::Config as PingConfig, Direction, TransportEvent, TransportService,
UserProtocol,
},
substream::Substream,
transport::tcp::config::Config as TcpConfig,
types::protocol::ProtocolName,
utils::futures_stream::FuturesStream,
Litep2p, PeerId,
};
use futures::{future::BoxFuture, StreamExt};
use crate::common::{add_transport, Transport};
const PROTOCOL_NAME: &str = "/litep2p-stability/1.0.0";
const LOG_TARGET: &str = "litep2p::stability";
/// The stability protocol ensures a single transport connection
/// (either TCP or WebSocket) can sustain multiple received packets.
///
/// The scenario puts stress on the internal buffers, ensuring that
/// each layer behaves properly.
///
/// ## Protocol Details
///
/// The protocol opens 16 outbound substreams on the connection established event.
/// Therefore, it will handle 16 outbound substreams and 16 inbound substreams
/// (opened by the remote).
///
/// The outbound substreams push a configurable number of packets, each of
/// size 128 bytes, to the remote peer, while the inbound substreams read
/// the same number of packets from the remote peer.
pub struct StabilityProtocol {
/// The number of identical packets to send / receive on a substream.
total_packets: usize,
inbound: FuturesStream<BoxFuture<'static, Result<(), String>>>,
outbound: FuturesStream<BoxFuture<'static, Result<(), String>>>,
/// Peer Id for logging purposes.
peer_id: PeerId,
/// The sender to notify the test that the protocol finished.
tx: Option<tokio::sync::oneshot::Sender<()>>,
}
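As a rough sanity check of the per-connection load described in the doc comment above, here is a standalone sketch of the arithmetic; the packet count below is a hypothetical configuration value for illustration, not one taken from this test:

```rust
fn main() {
    // Mirrors the protocol description: 16 outbound substreams per
    // connection, each pushing fixed-size 128-byte frames.
    let substreams: u64 = 16;
    let frame_len: u64 = 128;
    // Hypothetical per-substream packet count; the test makes this configurable.
    let total_packets: u64 = 1_000;

    // Payload bytes pushed in one direction over the single connection.
    let bytes_per_direction = substreams * frame_len * total_packets;
    assert_eq!(bytes_per_direction, 2_048_000);
    println!("{bytes_per_direction}");
}
```

With the remote mirroring the same traffic on its 16 inbound substreams, roughly the same volume flows in each direction, which is what exercises the per-layer buffers.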
impl StabilityProtocol {
fn new(total_packets: usize, peer_id: PeerId) -> (Self, tokio::sync::oneshot::Receiver<()>) {
let (tx, rx) = tokio::sync::oneshot::channel();
(
Self {
total_packets,
inbound: FuturesStream::new(),
outbound: FuturesStream::new(),
peer_id,
tx: Some(tx),
},
rx,
)
}
fn handle_substream(&mut self, mut substream: Substream, direction: Direction) {
let mut total_packets = self.total_packets;
match direction {
Direction::Inbound => {
self.inbound.push(Box::pin(async move {
while total_packets > 0 {
let _payload = substream
.next()
.await
                        .ok_or_else(|| {
                            tracing::warn!(target: LOG_TARGET, "Substream closed before all packets were received");
                            "Substream closed before all packets were received".to_string()
                        })?
.map_err(|err| {
tracing::warn!(target: LOG_TARGET, "Failed to read from substream {:?}", err);
"Failed to read from substream".to_string()
})?;
total_packets -= 1;
}
Ok(())
}));
}
Direction::Outbound { .. } => {
self.outbound.push(Box::pin(async move {
                    // 128-byte frame filled with the byte pattern 0, 1, ..., 127.
                    let frame: Vec<u8> = (0u8..128).collect();
while total_packets > 0 {
substream.send_framed(frame.clone().into()).await.map_err(|err| {
tracing::warn!("Failed to send to substream {:?}", err);
"Failed to send to substream".to_string()
})?;
total_packets -= 1;
}
Ok(())
}));
}
}
}
}
#[async_trait::async_trait]
impl UserProtocol for StabilityProtocol {
fn protocol(&self) -> ProtocolName {
PROTOCOL_NAME.into()
}
fn codec(&self) -> ProtocolCodec {
// Similar to the identify payload size.
ProtocolCodec::UnsignedVarint(Some(4096))
}
async fn run(mut self: Box<Self>, mut service: TransportService) -> litep2p::Result<()> {
let num_substreams = 16;
let mut handled_substreams = 0;
loop {
if handled_substreams == 2 * num_substreams {
tracing::info!(
target: LOG_TARGET,
handled_substreams,
peer_id = %self.peer_id,
                    "StabilityProtocol finished handling all packets",
);
self.tx.take().expect("Send happens only once; qed").send(()).unwrap();
// If one stability protocol finishes while the other is still
// reading data from the stream, the test might race if the
// substream detects the connection as closed.
futures::future::pending::<()>().await;
}
tokio::select! {
event = service.next() => match event {
Some(TransportEvent::ConnectionEstablished { peer, .. }) => {
for i in 0..num_substreams {
match service.open_substream(peer) {
Ok(_) => {},
Err(e) => {
tracing::error!(
target: LOG_TARGET,
?e,
i,
peer_id = %self.peer_id,
"Failed to open substream"
);
// Drop the tx sender.
return Ok(());
}
}
}
}
Some(TransportEvent::ConnectionClosed { peer }) => {
tracing::error!(
target: LOG_TARGET,
peer_id = %self.peer_id,
"Connection closed unexpectedly: {}",
peer
);
panic!("connection closed");
}
Some(TransportEvent::SubstreamOpened {
substream,
direction,
..
}) => {
self.handle_substream(substream, direction);
}
_ => {},
},
inbound = self.inbound.next(), if !self.inbound.is_empty() => {
match inbound {
Some(Ok(())) => {
handled_substreams += 1;
}
Some(Err(err)) => {
tracing::error!(
target: LOG_TARGET,
peer_id = %self.peer_id,
"Inbound stream failed with error: {}",
err
);
// Drop the tx sender.
return Ok(());
}
None => {
tracing::error!(
target: LOG_TARGET,
peer_id = %self.peer_id,
"Inbound stream failed with None",
);
panic!("Inbound stream failed");
}
}
},
outbound = self.outbound.next(), if !self.outbound.is_empty() => {
match outbound {
Some(Ok(())) => {
handled_substreams += 1;
}
Some(Err(err)) => {
tracing::error!(
target: LOG_TARGET,
peer_id = %self.peer_id,
"Outbound stream failed with error: {}",
err
);
// Drop the tx sender.
return Ok(());
}
None => {
tracing::error!(
target: LOG_TARGET,
peer_id = %self.peer_id,
"Outbound stream failed with None",
);
panic!("Outbound stream failed");
}
}
},
}
}
}
}
async fn stability_litep2p_transport(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config1, _ping_event_stream1) = PingConfig::default();
let keypair = Keypair::generate();
let peer_id = keypair.public().to_peer_id();
let (stability_protocol, mut exit1) = StabilityProtocol::new(1000, peer_id);
let config1 = ConfigBuilder::new()
.with_keypair(keypair)
.with_libp2p_ping(ping_config1)
.with_user_protocol(Box::new(stability_protocol));
let config1 = add_transport(config1, transport1).build();
let (ping_config2, _ping_event_stream2) = PingConfig::default();
let keypair = Keypair::generate();
let peer_id = keypair.public().to_peer_id();
let (stability_protocol, mut exit2) = StabilityProtocol::new(1000, peer_id);
let config2 = ConfigBuilder::new()
.with_keypair(keypair)
.with_libp2p_ping(ping_config2)
.with_user_protocol(Box::new(stability_protocol));
let config2 = add_transport(config2, transport2).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address = litep2p2.listen_addresses().next().unwrap().clone();
litep2p1.dial_address(address).await.unwrap();
let mut litep2p1_exit = false;
let mut litep2p2_exit = false;
loop {
if litep2p1_exit && litep2p2_exit {
break;
}
tokio::select! {
// Wait for the stability protocols to finish, while keeping
// the peer connections alive.
event = &mut exit1, if !litep2p1_exit => {
if let Ok(()) = event {
litep2p1_exit = true;
} else {
panic!("StabilityProtocol 1 failed");
}
},
event = &mut exit2, if !litep2p2_exit => {
if let Ok(()) = event {
litep2p2_exit = true;
} else {
panic!("StabilityProtocol 2 failed");
}
},
// Drive litep2p backends.
event = litep2p1.next_event() => {
tracing::info!(target: LOG_TARGET, "litep2p1 event: {:?}", event);
}
event = litep2p2.next_event() => {
tracing::info!(target: LOG_TARGET, "litep2p2 event: {:?}", event);
}
}
}
}
#[tokio::test]
async fn stability_tcp() {
let transport1 = Transport::Tcp(TcpConfig::default());
let transport2 = Transport::Tcp(TcpConfig::default());
stability_litep2p_transport(transport1, transport2).await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn stability_websocket() {
use litep2p::transport::websocket::config::Config as WebSocketConfig;
let transport1 = Transport::WebSocket(WebSocketConfig::default());
let transport2 = Transport::WebSocket(WebSocketConfig::default());
stability_litep2p_transport(transport1, transport2).await;
}
// ===== tests/common/mod.rs =====

// Copyright 2024 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use litep2p::{config::ConfigBuilder, transport::tcp::config::Config as TcpConfig};
#[cfg(feature = "quic")]
use litep2p::transport::quic::config::Config as QuicConfig;
#[cfg(feature = "websocket")]
use litep2p::transport::websocket::config::Config as WebSocketConfig;
pub(crate) enum Transport {
Tcp(TcpConfig),
#[cfg(feature = "quic")]
Quic(QuicConfig),
#[cfg(feature = "websocket")]
WebSocket(WebSocketConfig),
}
pub(crate) fn add_transport(config: ConfigBuilder, transport: Transport) -> ConfigBuilder {
match transport {
Transport::Tcp(transport) => config.with_tcp(transport),
#[cfg(feature = "quic")]
Transport::Quic(transport) => config.with_quic(transport),
#[cfg(feature = "websocket")]
Transport::WebSocket(transport) => config.with_websocket(transport),
}
}
// ===== tests/conformance/mod.rs =====

// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
#[cfg(test)]
mod rust;
// ===== tests/conformance/rust/identify.rs =====

// Copyright 2018 Parity Technologies (UK) Ltd.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
#![allow(clippy::large_enum_variant)]
use futures::{Stream, StreamExt};
use libp2p::{
identify, identity, ping,
swarm::{NetworkBehaviour, SwarmBuilder, SwarmEvent},
PeerId, Swarm,
};
use litep2p::{
config::ConfigBuilder,
crypto::ed25519::Keypair,
protocol::libp2p::{
identify::{Config as IdentifyConfig, IdentifyEvent},
ping::{Config as PingConfig, PingEvent},
},
transport::tcp::config::Config as TcpConfig,
Litep2p,
};
// We create a custom network behaviour that combines ping and identify.
#[derive(NetworkBehaviour)]
#[behaviour(out_event = "MyBehaviourEvent")]
struct MyBehaviour {
identify: identify::Behaviour,
ping: ping::Behaviour,
}
enum MyBehaviourEvent {
Identify(identify::Event),
Ping(ping::Event),
}
impl From<identify::Event> for MyBehaviourEvent {
fn from(event: identify::Event) -> Self {
MyBehaviourEvent::Identify(event)
}
}
impl From<ping::Event> for MyBehaviourEvent {
fn from(event: ping::Event) -> Self {
MyBehaviourEvent::Ping(event)
}
}
// Initialize litep2p with ping and identify support.
fn initialize_litep2p() -> (
Litep2p,
Box<dyn Stream<Item = PingEvent> + Send + Unpin>,
Box<dyn Stream<Item = IdentifyEvent> + Send + Unpin>,
) {
let keypair = Keypair::generate();
let (ping_config, ping_event_stream) = PingConfig::default();
let (identify_config, identify_event_stream) =
IdentifyConfig::new("proto v1".to_string(), None);
let litep2p = Litep2p::new(
ConfigBuilder::new()
.with_keypair(keypair)
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config)
.with_libp2p_identify(identify_config)
.build(),
)
.unwrap();
(litep2p, ping_event_stream, identify_event_stream)
}
fn initialize_libp2p() -> Swarm<MyBehaviour> {
let local_key = identity::Keypair::generate_ed25519();
let local_peer_id = PeerId::from(local_key.public());
tracing::debug!("Local peer id: {local_peer_id:?}");
let transport = libp2p::tokio_development_transport(local_key.clone()).unwrap();
let behaviour = MyBehaviour {
identify: identify::Behaviour::new(
identify::Config::new("/ipfs/1.0.0".into(), local_key.public())
.with_agent_version("libp2p agent".to_string()),
),
ping: Default::default(),
};
let mut swarm = SwarmBuilder::with_tokio_executor(transport, behaviour, local_peer_id).build();
swarm.listen_on("/ip6/::1/tcp/0".parse().unwrap()).unwrap();
swarm
}
#[tokio::test]
async fn identify_works() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let mut libp2p = initialize_libp2p();
let (mut litep2p, _ping_event_stream, mut identify_event_stream) = initialize_litep2p();
let address = litep2p.listen_addresses().next().unwrap().clone();
libp2p.dial(address).unwrap();
tokio::spawn(async move {
loop {
let _ = litep2p.next_event().await;
}
});
let mut libp2p_done = false;
let mut litep2p_done = false;
loop {
tokio::select! {
event = libp2p.select_next_some() => {
match event {
SwarmEvent::NewListenAddr { address, .. } => {
tracing::info!("Listening on {address:?}")
}
SwarmEvent::Behaviour(MyBehaviourEvent::Ping(_event)) => {},
SwarmEvent::Behaviour(MyBehaviourEvent::Identify(event)) => if let identify::Event::Received { info, .. } = event {
libp2p_done = true;
assert_eq!(info.protocol_version, "proto v1");
assert_eq!(info.agent_version, "litep2p/1.0.0");
if libp2p_done && litep2p_done {
break
}
}
_ => {}
}
},
event = identify_event_stream.next() => match event {
Some(IdentifyEvent::PeerIdentified { protocol_version, user_agent, .. }) => {
litep2p_done = true;
assert_eq!(protocol_version, Some("/ipfs/1.0.0".to_string()));
assert_eq!(user_agent, Some("libp2p agent".to_string()));
if libp2p_done && litep2p_done {
break
}
}
None => panic!("identify exited"),
},
_ = tokio::time::sleep(std::time::Duration::from_secs(5)) => {
panic!("failed to receive identify in time");
}
}
}
}
// ===== tests/conformance/rust/quic_ping.rs =====

// Copyright 2018 Parity Technologies (UK) Ltd.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use futures::{future::Either, Stream, StreamExt};
use libp2p::{
core::{muxing::StreamMuxerBox, transport::OrTransport},
identity, ping, quic,
swarm::{keep_alive, NetworkBehaviour, SwarmBuilder, SwarmEvent},
PeerId, Swarm, Transport,
};
use litep2p::{
config::ConfigBuilder,
crypto::ed25519::Keypair,
protocol::libp2p::ping::{Config as PingConfig, PingEvent},
transport::quic::config::Config as QuicConfig,
Litep2p,
};
#[derive(NetworkBehaviour, Default)]
struct Behaviour {
keep_alive: keep_alive::Behaviour,
ping: ping::Behaviour,
}
// initialize litep2p with ping support
fn initialize_litep2p() -> (Litep2p, Box<dyn Stream<Item = PingEvent> + Send + Unpin>) {
let keypair = Keypair::generate();
let (ping_config, ping_event_stream) = PingConfig::default();
let litep2p = Litep2p::new(
ConfigBuilder::new()
.with_keypair(keypair)
.with_quic(QuicConfig {
listen_addresses: vec!["/ip4/127.0.0.1/udp/8888/quic-v1".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config)
.build(),
)
.unwrap();
(litep2p, ping_event_stream)
}
fn initialize_libp2p() -> Swarm<Behaviour> {
let local_key = identity::Keypair::generate_ed25519();
let local_peer_id = PeerId::from(local_key.public());
tracing::debug!("Local peer id: {local_peer_id:?}");
let tcp_transport = libp2p::tokio_development_transport(local_key.clone()).unwrap();
let quic_transport = quic::tokio::Transport::new(quic::Config::new(&local_key));
let transport = OrTransport::new(quic_transport, tcp_transport)
.map(|either_output, _| match either_output {
Either::Left((peer_id, muxer)) => (peer_id, StreamMuxerBox::new(muxer)),
Either::Right((peer_id, muxer)) => (peer_id, StreamMuxerBox::new(muxer)),
})
.boxed();
let mut swarm =
SwarmBuilder::with_tokio_executor(transport, Behaviour::default(), local_peer_id).build();
swarm.listen_on("/ip6/::1/tcp/0".parse().unwrap()).unwrap();
swarm.listen_on("/ip4/127.0.0.1/udp/0/quic-v1".parse().unwrap()).unwrap();
swarm
}
#[tokio::test]
async fn libp2p_dials() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let mut libp2p = initialize_libp2p();
let (mut litep2p, mut ping_event_stream) = initialize_litep2p();
let address: multiaddr::Multiaddr = format!(
"/ip4/127.0.0.1/udp/8888/quic-v1/p2p/{}",
*litep2p.local_peer_id()
)
.parse()
.unwrap();
libp2p.dial(address).unwrap();
tokio::spawn(async move {
loop {
let _ = litep2p.next_event().await;
}
});
let mut libp2p_done = false;
let mut litep2p_done = false;
loop {
tokio::select! {
event = libp2p.select_next_some() => {
match event {
SwarmEvent::NewListenAddr { address, .. } => {
tracing::info!("Listening on {address:?}")
}
SwarmEvent::Behaviour(BehaviourEvent::Ping(_)) => {
libp2p_done = true;
if libp2p_done && litep2p_done {
break
}
}
_ => {}
}
}
_event = ping_event_stream.next() => {
litep2p_done = true;
if libp2p_done && litep2p_done {
break
}
}
_ = tokio::time::sleep(std::time::Duration::from_secs(5)) => {
panic!("failed to receive ping in time");
}
}
}
}
#[tokio::test]
async fn litep2p_dials() {}
#[tokio::test]
async fn libp2p_doesnt_support_ping() {}
#[tokio::test]
async fn litep2p_doesnt_support_ping() {}
// ===== tests/conformance/rust/kademlia.rs =====

// Copyright 2018 Parity Technologies (UK) Ltd.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use futures::StreamExt;
use libp2p::{
identify, identity,
kad::{
self, store::RecordStore, AddProviderOk, GetProvidersOk, InboundRequest,
KademliaEvent as Libp2pKademliaEvent, QueryResult,
},
swarm::{keep_alive, AddressScore, NetworkBehaviour, SwarmBuilder, SwarmEvent},
PeerId, Swarm,
};
use litep2p::{
config::ConfigBuilder as Litep2pConfigBuilder,
crypto::ed25519::Keypair,
protocol::libp2p::kademlia::{
ConfigBuilder, KademliaEvent, KademliaHandle, Quorum, Record, RecordKey,
},
transport::tcp::config::Config as TcpConfig,
types::multiaddr::{Multiaddr, Protocol},
Litep2p,
};
use std::time::Duration;
#[derive(NetworkBehaviour)]
struct Behaviour {
keep_alive: keep_alive::Behaviour,
kad: kad::Kademlia<kad::store::MemoryStore>,
identify: identify::Behaviour,
}
// Initialize litep2p with Kademlia support.
fn initialize_litep2p() -> (Litep2p, KademliaHandle) {
let keypair = Keypair::generate();
let (kad_config, kad_handle) = ConfigBuilder::new().build();
let litep2p = Litep2p::new(
Litep2pConfigBuilder::new()
.with_keypair(keypair)
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config)
.build(),
)
.unwrap();
(litep2p, kad_handle)
}
fn initialize_libp2p() -> Swarm<Behaviour> {
let local_key = identity::Keypair::generate_ed25519();
let local_peer_id = PeerId::from(local_key.public());
tracing::debug!("Local peer id: {local_peer_id:?}");
let transport = libp2p::tokio_development_transport(local_key.clone()).unwrap();
let behaviour = {
let config = kad::KademliaConfig::default();
let store = kad::store::MemoryStore::new(local_peer_id);
Behaviour {
kad: kad::Kademlia::with_config(local_peer_id, store, config),
keep_alive: Default::default(),
identify: identify::Behaviour::new(identify::Config::new(
"/ipfs/1.0.0".into(),
local_key.public(),
)),
}
};
let mut swarm = SwarmBuilder::with_tokio_executor(transport, behaviour, local_peer_id).build();
swarm.listen_on("/ip6/::1/tcp/0".parse().unwrap()).unwrap();
swarm
}
#[tokio::test]
async fn find_node() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let mut addresses = vec![];
let mut peer_ids = vec![];
for _ in 0..3 {
let mut libp2p = initialize_libp2p();
loop {
if let SwarmEvent::NewListenAddr { address, .. } = libp2p.select_next_some().await {
addresses.push(address);
peer_ids.push(*libp2p.local_peer_id());
break;
}
}
tokio::spawn(async move {
loop {
let _ = libp2p.select_next_some().await;
}
});
}
let mut libp2p = initialize_libp2p();
let (mut litep2p, mut kad_handle) = initialize_litep2p();
let address = litep2p.listen_addresses().next().unwrap().clone();
for i in 0..addresses.len() {
libp2p.dial(addresses[i].clone()).unwrap();
let _ = libp2p.behaviour_mut().kad.add_address(&peer_ids[i], addresses[i].clone());
}
libp2p.dial(address).unwrap();
tokio::spawn(async move {
loop {
let _ = litep2p.next_event().await;
}
});
#[allow(unused)]
let mut listen_addr = None;
let peer_id = *libp2p.local_peer_id();
tracing::info!("local peer id: {peer_id}");
loop {
if let SwarmEvent::NewListenAddr { address, .. } = libp2p.select_next_some().await {
listen_addr = Some(address);
break;
}
}
tokio::spawn(async move {
loop {
let _ = libp2p.select_next_some().await;
}
});
tokio::time::sleep(std::time::Duration::from_secs(3)).await;
let listen_addr = listen_addr.unwrap().with(Protocol::P2p(peer_id.into()));
kad_handle
.add_known_peer(
litep2p::PeerId::from_bytes(&peer_id.to_bytes()).unwrap(),
vec![listen_addr],
)
.await;
let target = litep2p::PeerId::random();
let _ = kad_handle.find_node(target).await;
loop {
if let Some(KademliaEvent::FindNodeSuccess {
target: query_target,
peers,
..
}) = kad_handle.next().await
{
assert_eq!(target, query_target);
assert!(!peers.is_empty());
break;
}
}
}
#[tokio::test]
async fn put_record() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let mut addresses = vec![];
let mut peer_ids = vec![];
let counter = std::sync::Arc::new(std::sync::atomic::AtomicUsize::new(0usize));
for _ in 0..3 {
let mut libp2p = initialize_libp2p();
loop {
if let SwarmEvent::NewListenAddr { address, .. } = libp2p.select_next_some().await {
addresses.push(address);
peer_ids.push(*libp2p.local_peer_id());
break;
}
}
let counter_copy = std::sync::Arc::clone(&counter);
tokio::spawn(async move {
let mut record_found = false;
loop {
tokio::select! {
_ = libp2p.select_next_some() => {}
_ = tokio::time::sleep(std::time::Duration::from_secs(1)) => {
let store = libp2p.behaviour_mut().kad.store_mut();
if store.get(&libp2p::kad::record::Key::new(&vec![1, 2, 3, 4])).is_some() && !record_found {
counter_copy.fetch_add(1usize, std::sync::atomic::Ordering::SeqCst);
record_found = true;
}
}
}
}
});
}
let mut libp2p = initialize_libp2p();
let (mut litep2p, mut kad_handle) = initialize_litep2p();
let address = litep2p.listen_addresses().next().unwrap().clone();
for i in 0..addresses.len() {
libp2p.dial(addresses[i].clone()).unwrap();
let _ = libp2p.behaviour_mut().kad.add_address(&peer_ids[i], addresses[i].clone());
}
libp2p.dial(address).unwrap();
tokio::spawn(async move {
loop {
let _ = litep2p.next_event().await;
}
});
#[allow(unused)]
let mut listen_addr = None;
let peer_id = *libp2p.local_peer_id();
tracing::info!("local peer id: {peer_id}");
loop {
if let SwarmEvent::NewListenAddr { address, .. } = libp2p.select_next_some().await {
listen_addr = Some(address);
break;
}
}
let counter_copy = std::sync::Arc::clone(&counter);
tokio::spawn(async move {
let mut record_found = false;
loop {
tokio::select! {
_ = libp2p.select_next_some() => {}
_ = tokio::time::sleep(std::time::Duration::from_secs(1)) => {
let store = libp2p.behaviour_mut().kad.store_mut();
if store.get(&libp2p::kad::record::Key::new(&vec![1, 2, 3, 4])).is_some() && !record_found {
counter_copy.fetch_add(1usize, std::sync::atomic::Ordering::SeqCst);
record_found = true;
}
}
}
}
});
tokio::time::sleep(std::time::Duration::from_secs(3)).await;
let listen_addr = listen_addr.unwrap().with(Protocol::P2p(peer_id.into()));
kad_handle
.add_known_peer(
litep2p::PeerId::from_bytes(&peer_id.to_bytes()).unwrap(),
vec![listen_addr],
)
.await;
let record_key = RecordKey::new(&vec![1, 2, 3, 4]);
let record = Record::new(record_key, vec![1, 3, 3, 7, 1, 3, 3, 8]);
let _ = kad_handle.put_record(record, Quorum::All).await;
loop {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
if counter.load(std::sync::atomic::Ordering::SeqCst) == 4 {
break;
}
}
}
#[tokio::test]
async fn get_record() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let mut addresses = vec![];
let mut peer_ids = vec![];
let counter = std::sync::Arc::new(std::sync::atomic::AtomicUsize::new(0usize));
for _ in 0..3 {
let mut libp2p = initialize_libp2p();
loop {
if let SwarmEvent::NewListenAddr { address, .. } = libp2p.select_next_some().await {
addresses.push(address);
peer_ids.push(*libp2p.local_peer_id());
break;
}
}
let counter_copy = std::sync::Arc::clone(&counter);
tokio::spawn(async move {
let mut record_found = false;
loop {
tokio::select! {
_ = libp2p.select_next_some() => {}
_ = tokio::time::sleep(std::time::Duration::from_secs(1)) => {
let store = libp2p.behaviour_mut().kad.store_mut();
if store.get(&libp2p::kad::record::Key::new(&vec![1, 2, 3, 4])).is_some() && !record_found {
counter_copy.fetch_add(1usize, std::sync::atomic::Ordering::SeqCst);
record_found = true;
}
}
}
}
});
}
let mut libp2p = initialize_libp2p();
let (mut litep2p, mut kad_handle) = initialize_litep2p();
let address = litep2p.listen_addresses().next().unwrap().clone();
for i in 0..addresses.len() {
libp2p.dial(addresses[i].clone()).unwrap();
let _ = libp2p.behaviour_mut().kad.add_address(&peer_ids[i], addresses[i].clone());
}
// Publish the record on the network.
let record = libp2p::kad::Record {
key: libp2p::kad::RecordKey::new(&vec![1, 2, 3, 4]),
value: vec![13, 37, 13, 38],
publisher: None,
expires: None,
};
libp2p.behaviour_mut().kad.put_record(record, libp2p::kad::Quorum::All).unwrap();
#[allow(unused)]
let mut listen_addr = None;
loop {
tokio::select! {
event = libp2p.select_next_some() => if let SwarmEvent::NewListenAddr { address, .. } = event {
listen_addr = Some(address);
},
_ = tokio::time::sleep(std::time::Duration::from_secs(1)) => {
if counter.load(std::sync::atomic::Ordering::SeqCst) == 3 {
break;
}
}
}
}
libp2p.dial(address).unwrap();
tokio::spawn(async move {
loop {
let _ = litep2p.next_event().await;
}
});
let peer_id = *libp2p.local_peer_id();
tokio::spawn(async move {
loop {
let _ = libp2p.select_next_some().await;
}
});
tokio::time::sleep(std::time::Duration::from_secs(3)).await;
let listen_addr = listen_addr.unwrap().with(Protocol::P2p(peer_id.into()));
kad_handle
.add_known_peer(
litep2p::PeerId::from_bytes(&peer_id.to_bytes()).unwrap(),
vec![listen_addr],
)
.await;
let _ = kad_handle.get_record(RecordKey::new(&vec![1, 2, 3, 4]), Quorum::All).await;
loop {
match kad_handle.next().await.unwrap() {
KademliaEvent::GetRecordPartialResult { record, .. } => {
assert_eq!(record.record.key.as_ref(), vec![1, 2, 3, 4]);
assert_eq!(record.record.value, vec![13, 37, 13, 38]);
break;
}
KademliaEvent::GetRecordSuccess { .. } => break,
KademliaEvent::RoutingTableUpdate { .. } => {}
event => panic!("invalid event received {event:?}"),
}
}
}
#[tokio::test]
async fn litep2p_add_provider_to_libp2p() {
let (mut litep2p, mut litep2p_kad) = initialize_litep2p();
let mut libp2p = initialize_libp2p();
// Drive libp2p a little bit to get the listen address.
let get_libp2p_listen_addr = async {
loop {
if let SwarmEvent::NewListenAddr { address, .. } = libp2p.select_next_some().await {
break address;
}
}
};
let libp2p_listen_addr = tokio::time::timeout(Duration::from_secs(10), get_libp2p_listen_addr)
.await
.expect("didn't get libp2p listen address in 10 seconds");
let litep2p_public_addr: Multiaddr = "/ip6/::1/tcp/10000".parse().unwrap();
litep2p.public_addresses().add_address(litep2p_public_addr.clone()).unwrap();
// Get public address with peer ID.
let litep2p_public_addr = litep2p.public_addresses().get_addresses().pop().unwrap();
let libp2p_peer_id = litep2p::PeerId::from_bytes(&libp2p.local_peer_id().to_bytes()).unwrap();
litep2p_kad.add_known_peer(libp2p_peer_id, vec![libp2p_listen_addr]).await;
let litep2p_peer_id = PeerId::from_bytes(&litep2p.local_peer_id().to_bytes()).unwrap();
let key = vec![1u8, 2u8, 3u8];
litep2p_kad.start_providing(RecordKey::new(&key), Quorum::All).await;
loop {
tokio::select! {
_ = tokio::time::sleep(tokio::time::Duration::from_secs(10)) => {
panic!("provider was not added in 10 secs")
}
_ = litep2p.next_event() => {}
_ = litep2p_kad.next() => {}
event = libp2p.select_next_some() => {
if let SwarmEvent::Behaviour(BehaviourEvent::Kad(event)) = event {
if let Libp2pKademliaEvent::InboundRequest{request} = event {
if let InboundRequest::AddProvider{..} = request {
let store = libp2p.behaviour_mut().kad.store_mut();
let mut providers = store.providers(&key.clone().into());
assert_eq!(providers.len(), 1);
let record = providers.pop().unwrap();
assert_eq!(record.key.as_ref(), key);
assert_eq!(record.provider, litep2p_peer_id);
assert_eq!(record.addresses, vec![litep2p_public_addr.clone()]);
break
}
}
}
}
}
}
}
#[tokio::test]
async fn libp2p_add_provider_to_litep2p() {
let (mut litep2p, mut litep2p_kad) = initialize_litep2p();
let mut libp2p = initialize_libp2p();
let libp2p_peerid = litep2p::PeerId::from_bytes(&libp2p.local_peer_id().to_bytes()).unwrap();
let libp2p_public_addr: Multiaddr = "/ip4/1.1.1.1/tcp/10000".parse().unwrap();
libp2p.add_external_address(libp2p_public_addr.clone(), AddressScore::Infinite);
let litep2p_peerid = PeerId::from_bytes(&litep2p.local_peer_id().to_bytes()).unwrap();
let litep2p_address = litep2p.listen_addresses().next().unwrap().clone();
libp2p.behaviour_mut().kad.add_address(&litep2p_peerid, litep2p_address);
// Start providing
let key = vec![1u8, 2u8, 3u8];
libp2p.behaviour_mut().kad.start_providing(key.clone().into()).unwrap();
loop {
tokio::select! {
_ = tokio::time::sleep(tokio::time::Duration::from_secs(10)) => {
panic!("provider was not added in 10 secs")
}
_ = litep2p.next_event() => {}
_ = libp2p.select_next_some() => {}
event = litep2p_kad.next() => {
if let Some(KademliaEvent::IncomingProvider{ provided_key, provider }) = event {
assert_eq!(provided_key, key.clone().into());
assert_eq!(provider.peer, libp2p_peerid);
assert_eq!(provider.addresses, vec![libp2p_public_addr]);
break
}
}
}
}
}
#[tokio::test]
async fn litep2p_get_providers_from_libp2p() {
let (mut litep2p, mut litep2p_kad) = initialize_litep2p();
let mut libp2p = initialize_libp2p();
let libp2p_peerid = litep2p::PeerId::from_bytes(&libp2p.local_peer_id().to_bytes()).unwrap();
let libp2p_public_addr: Multiaddr = "/ip4/1.1.1.1/tcp/10000".parse().unwrap();
libp2p.add_external_address(libp2p_public_addr.clone(), AddressScore::Infinite);
// Start providing
let key = vec![1u8, 2u8, 3u8];
let query_id = libp2p.behaviour_mut().kad.start_providing(key.clone().into()).unwrap();
let mut libp2p_listen_addr = None;
let mut provider_stored = false;
// Drive libp2p a little bit to get the listen address and make sure the provider was stored
// locally.
tokio::time::timeout(Duration::from_secs(10), async {
loop {
match libp2p.select_next_some().await {
SwarmEvent::Behaviour(BehaviourEvent::Kad(
Libp2pKademliaEvent::OutboundQueryProgressed { id, result, .. },
)) => {
assert_eq!(id, query_id);
assert!(matches!(
result,
QueryResult::StartProviding(Ok(AddProviderOk { key: got_key }))
if got_key.as_ref() == key.as_slice()
));
provider_stored = true;
if libp2p_listen_addr.is_some() {
break;
}
}
SwarmEvent::NewListenAddr { address, .. } => {
libp2p_listen_addr = Some(address);
if provider_stored {
break;
}
}
_ => {}
}
}
})
.await
.expect("failed to store provider and get listen address in 10 seconds");
let libp2p_listen_addr = libp2p_listen_addr.unwrap();
// `GET_PROVIDERS`
litep2p_kad
.add_known_peer(libp2p_peerid, vec![libp2p_listen_addr.clone()])
.await;
let original_query_id = litep2p_kad.get_providers(key.clone().into()).await;
loop {
tokio::select! {
_ = tokio::time::sleep(tokio::time::Duration::from_secs(10)) => {
panic!("providers were not returned in 10 secs")
}
_ = litep2p.next_event() => {}
_ = libp2p.select_next_some() => {}
event = litep2p_kad.next() => {
if let Some(KademliaEvent::GetProvidersSuccess {
query_id,
provided_key,
mut providers,
}) = event {
assert_eq!(query_id, original_query_id);
assert_eq!(provided_key, key.clone().into());
assert_eq!(providers.len(), 1);
let provider = providers.pop().unwrap();
assert_eq!(provider.peer, libp2p_peerid);
assert_eq!(provider.addresses.len(), 2);
assert!(provider.addresses.contains(&libp2p_listen_addr));
assert!(provider.addresses.contains(&libp2p_public_addr));
break
}
}
}
}
}
#[tokio::test]
async fn libp2p_get_providers_from_litep2p() {
let (mut litep2p, mut litep2p_kad) = initialize_litep2p();
let mut libp2p = initialize_libp2p();
let litep2p_peerid = PeerId::from_bytes(&litep2p.local_peer_id().to_bytes()).unwrap();
let litep2p_listen_address = litep2p.listen_addresses().next().unwrap().clone();
let litep2p_public_address: Multiaddr = "/ip4/1.1.1.1/tcp/10000".parse().unwrap();
litep2p.public_addresses().add_address(litep2p_public_address).unwrap();
// Store provider locally in litep2p.
let original_key = vec![1u8, 2u8, 3u8];
litep2p_kad.start_providing(original_key.clone().into(), Quorum::All).await;
// Drive litep2p a little bit to make sure the provider record is stored and no `ADD_PROVIDER`
// requests are generated (because no peers are known yet).
tokio::time::timeout(Duration::from_secs(2), async {
litep2p.next_event().await;
})
.await
.unwrap_err();
libp2p.behaviour_mut().kad.add_address(&litep2p_peerid, litep2p_listen_address);
let query_id = libp2p.behaviour_mut().kad.get_providers(original_key.clone().into());
loop {
tokio::select! {
event = libp2p.select_next_some() => {
if let SwarmEvent::Behaviour(BehaviourEvent::Kad(
Libp2pKademliaEvent::OutboundQueryProgressed { id, result, .. })
) = event {
assert_eq!(id, query_id);
if let QueryResult::GetProviders(Ok(
GetProvidersOk::FoundProviders { key, providers }
)) = result {
assert_eq!(key, original_key.clone().into());
assert_eq!(providers.len(), 1);
assert!(providers.contains(&litep2p_peerid));
// It looks like `libp2p` discards the cached provider addresses received
// in the `GET_PROVIDERS` response, so we can't check them here.
// Nor are the addresses used to extend the `libp2p` routing table.
break
} else {
panic!("invalid query result")
}
}
}
_ = litep2p.next_event() => {}
_ = litep2p_kad.next() => {}
}
}
}
// tests/conformance/rust/ping.rs
// Copyright 2018 Parity Technologies (UK) Ltd.
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use futures::{Stream, StreamExt};
use libp2p::{
identity, ping,
swarm::{keep_alive, NetworkBehaviour, SwarmBuilder, SwarmEvent},
PeerId, Swarm,
};
use litep2p::{
config::ConfigBuilder,
crypto::ed25519::Keypair,
protocol::libp2p::ping::{Config as PingConfig, PingEvent},
transport::tcp::config::Config as TcpConfig,
Litep2p,
};
#[derive(NetworkBehaviour, Default)]
struct Behaviour {
keep_alive: keep_alive::Behaviour,
ping: ping::Behaviour,
}
// initialize litep2p with ping support
fn initialize_litep2p() -> (Litep2p, Box<dyn Stream<Item = PingEvent> + Send + Unpin>) {
let keypair = Keypair::generate();
let (ping_config, ping_event_stream) = PingConfig::default();
let litep2p = Litep2p::new(
ConfigBuilder::new()
.with_keypair(keypair)
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_ping(ping_config)
.build(),
)
.unwrap();
(litep2p, ping_event_stream)
}
fn initialize_libp2p() -> Swarm<Behaviour> {
let local_key = identity::Keypair::generate_ed25519();
let local_peer_id = PeerId::from(local_key.public());
tracing::debug!("Local peer id: {local_peer_id:?}");
let transport = libp2p::tokio_development_transport(local_key).unwrap();
let mut swarm =
SwarmBuilder::with_tokio_executor(transport, Behaviour::default(), local_peer_id).build();
swarm.listen_on("/ip6/::1/tcp/0".parse().unwrap()).unwrap();
swarm
}
#[tokio::test]
async fn libp2p_dials() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let mut libp2p = initialize_libp2p();
let (mut litep2p, mut ping_event_stream) = initialize_litep2p();
let address = litep2p.listen_addresses().next().unwrap().clone();
libp2p.dial(address).unwrap();
tokio::spawn(async move {
loop {
let _ = litep2p.next_event().await;
}
});
let mut libp2p_done = false;
let mut litep2p_done = false;
loop {
tokio::select! {
event = libp2p.select_next_some() => {
match event {
SwarmEvent::NewListenAddr { address, .. } => {
tracing::info!("Listening on {address:?}")
}
SwarmEvent::Behaviour(BehaviourEvent::Ping(_)) => {
libp2p_done = true;
if libp2p_done && litep2p_done {
break
}
}
_ => {}
}
}
_event = ping_event_stream.next() => {
litep2p_done = true;
if libp2p_done && litep2p_done {
break
}
}
_ = tokio::time::sleep(std::time::Duration::from_secs(5)) => {
panic!("failed to receive ping in time");
}
}
}
}
#[tokio::test]
async fn litep2p_dials() {}
#[tokio::test]
async fn libp2p_doesnt_support_ping() {}
#[tokio::test]
async fn litep2p_doesnt_support_ping() {}
// tests/conformance/rust/mod.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
#[cfg(test)]
mod identify;
#[cfg(test)]
mod kademlia;
#[cfg(test)]
mod ping;
#[cfg(all(test, feature = "quic"))]
mod quic_ping;
// tests/protocol/identify.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use futures::{FutureExt, StreamExt};
use litep2p::{
config::ConfigBuilder,
crypto::ed25519::Keypair,
protocol::libp2p::{
identify::{Config, IdentifyEvent},
ping::Config as PingConfig,
},
Litep2p, Litep2pEvent,
};
use crate::common::{add_transport, Transport};
#[tokio::test]
async fn identify_supported_tcp() {
identify_supported(
Transport::Tcp(Default::default()),
Transport::Tcp(Default::default()),
)
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn identify_supported_quic() {
identify_supported(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn identify_supported_websocket() {
identify_supported(
Transport::WebSocket(Default::default()),
Transport::WebSocket(Default::default()),
)
.await
}
async fn identify_supported(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (identify_config1, mut identify_event_stream1) =
Config::new("/proto/1".to_string(), Some("agent v1".to_string()));
let config_builder1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_libp2p_identify(identify_config1);
let config1 = add_transport(config_builder1, transport1).build();
let (identify_config2, mut identify_event_stream2) =
Config::new("/proto/2".to_string(), Some("agent v2".to_string()));
let config_builder2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_libp2p_identify(identify_config2);
let config2 = add_transport(config_builder2, transport2).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address1 = litep2p1.listen_addresses().next().unwrap().clone();
let address2 = litep2p2.listen_addresses().next().unwrap().clone();
tracing::info!("listen address of peer1: {address1}");
tracing::info!("listen address of peer2: {address2}");
litep2p1.dial_address(address2).await.unwrap();
let mut litep2p1_done = false;
let mut litep2p2_done = false;
loop {
tokio::select! {
_event = litep2p1.next_event() => {}
_event = litep2p2.next_event() => {}
event = identify_event_stream1.next() => {
let IdentifyEvent::PeerIdentified { observed_address, protocol_version, user_agent, .. } = event.unwrap();
tracing::info!("peer2 observed: {observed_address:?}");
assert_eq!(protocol_version, Some("/proto/2".to_string()));
assert_eq!(user_agent, Some("agent v2".to_string()));
litep2p1_done = true;
if litep2p1_done && litep2p2_done {
break
}
}
event = identify_event_stream2.next() => {
let IdentifyEvent::PeerIdentified { observed_address, protocol_version, user_agent, .. } = event.unwrap();
tracing::info!("peer1 observed: {observed_address:?}");
assert_eq!(protocol_version, Some("/proto/1".to_string()));
assert_eq!(user_agent, Some("agent v1".to_string()));
litep2p2_done = true;
if litep2p1_done && litep2p2_done {
break
}
}
}
}
let mut litep2p1_done = false;
let mut litep2p2_done = false;
while !litep2p1_done || !litep2p2_done {
tokio::select! {
event = litep2p1.next_event() => if let Litep2pEvent::ConnectionClosed { .. } = event.unwrap() {
litep2p1_done = true;
},
event = litep2p2.next_event() => if let Litep2pEvent::ConnectionClosed { .. } = event.unwrap() {
litep2p2_done = true;
}
}
}
}
#[tokio::test]
async fn identify_not_supported_tcp() {
identify_not_supported(
Transport::Tcp(Default::default()),
Transport::Tcp(Default::default()),
)
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn identify_not_supported_quic() {
identify_not_supported(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn identify_not_supported_websocket() {
identify_not_supported(
Transport::WebSocket(Default::default()),
Transport::WebSocket(Default::default()),
)
.await
}
async fn identify_not_supported(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config, _event_stream) = PingConfig::default();
let config_builder1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_libp2p_ping(ping_config);
let config1 = add_transport(config_builder1, transport1).build();
let (identify_config2, mut identify_event_stream2) = Config::new("litep2p".to_string(), None);
let config_builder2 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_libp2p_identify(identify_config2);
let config2 = add_transport(config_builder2, transport2).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address = litep2p2.listen_addresses().next().unwrap().clone();
litep2p1.dial_address(address).await.unwrap();
let mut litep2p1_done = false;
let mut litep2p2_done = false;
while !litep2p1_done || !litep2p2_done {
tokio::select! {
event = litep2p1.next_event() => if let Litep2pEvent::ConnectionEstablished { .. } = event.unwrap() {
tracing::error!("litep2p1 connection established");
litep2p1_done = true;
},
event = litep2p2.next_event() => if let Litep2pEvent::ConnectionEstablished { .. } = event.unwrap() {
tracing::error!("litep2p2 connection established");
litep2p2_done = true;
}
}
}
let mut litep2p1_done = false;
let mut litep2p2_done = false;
while !litep2p1_done || !litep2p2_done {
tokio::select! {
event = litep2p1.next_event() => if let Litep2pEvent::ConnectionClosed { .. } = event.unwrap() {
tracing::error!("litep2p1 connection closed");
litep2p1_done = true;
},
event = litep2p2.next_event() => if let Litep2pEvent::ConnectionClosed { .. } = event.unwrap() {
tracing::error!("litep2p2 connection closed");
litep2p2_done = true;
}
}
}
assert!(identify_event_stream2.next().now_or_never().is_none());
}
// tests/protocol/kademlia.rs
// Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
#![allow(unused)]
use bytes::Bytes;
use futures::StreamExt;
use litep2p::{
config::ConfigBuilder,
crypto::ed25519::Keypair,
protocol::libp2p::kademlia::{
ConfigBuilder as KademliaConfigBuilder, ContentProvider, IncomingRecordValidationMode,
KademliaEvent, PeerRecord, Quorum, Record, RecordKey,
},
transport::tcp::config::Config as TcpConfig,
types::multiaddr::{Multiaddr, Protocol},
Litep2p, PeerId,
};
fn spawn_litep2p(port: u16) {
let (kad_config1, _kad_handle1) = KademliaConfigBuilder::new().build();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
listen_addresses: vec![format!("/ip6/::1/tcp/{port}").parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config1)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
tokio::spawn(async move { while let Some(_) = litep2p1.next_event().await {} });
}
#[tokio::test]
#[ignore]
async fn kademlia_supported() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (kad_config1, _kad_handle1) = KademliaConfigBuilder::new().build();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config1)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
for port in 9000..9003 {
spawn_litep2p(port);
}
loop {
tokio::select! {
event = litep2p1.next_event() => {
tracing::info!("litep2p event received: {event:?}");
}
// event = kad_handle1.next() => {
// tracing::info!("kademlia event received: {event:?}");
// }
}
}
}
#[tokio::test]
#[ignore]
async fn put_value() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (kad_config1, mut kad_handle1) = KademliaConfigBuilder::new().build();
let config1 = ConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config1)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
for i in 0..10 {
kad_handle1
.add_known_peer(
PeerId::random(),
vec![format!("/ip6/::/tcp/{i}").parse().unwrap()],
)
.await;
}
// let key = RecordKey::new(&Bytes::from(vec![1, 3, 3, 7]));
// kad_handle1.put_value(key, vec![1, 2, 3, 4]).await;
// loop {
// tokio::select! {
// event = litep2p1.next_event() => {
// tracing::info!("litep2p event received: {event:?}");
// }
// event = kad_handle1.next() => {
// tracing::info!("kademlia event received: {event:?}");
// }
// }
// }
}
#[tokio::test]
async fn records_are_stored_automatically() {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (kad_config1, mut kad_handle1) = KademliaConfigBuilder::new().build();
let (kad_config2, mut kad_handle2) = KademliaConfigBuilder::new().build();
let config1 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config1)
.build();
let config2 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config2)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
kad_handle1
.add_known_peer(
*litep2p2.local_peer_id(),
litep2p2.listen_addresses().cloned().collect(),
)
.await;
// Publish the record.
let record = Record::new(vec![1, 2, 3], vec![0x01]);
let query_id = kad_handle1.put_record(record.clone(), Quorum::All).await;
let mut records = Vec::new();
loop {
tokio::select! {
_ = tokio::time::sleep(tokio::time::Duration::from_secs(10)) => {
panic!("record was not stored in 10 secs")
}
_ = litep2p1.next_event() => {}
_ = litep2p2.next_event() => {}
event = kad_handle1.next() => {
match event {
Some(KademliaEvent::PutRecordSuccess { query_id: got_query_id, key }) => {
assert_eq!(got_query_id, query_id);
assert_eq!(key, record.key);
// Check if the record was stored.
let _ = kad_handle2
.get_record(RecordKey::from(vec![1, 2, 3]), Quorum::One).await;
}
Some(KademliaEvent::QueryFailed { query_id: got_query_id }) => {
assert_eq!(got_query_id, query_id);
panic!("query failed")
}
_ => {}
}
}
event = kad_handle2.next() => {
match event {
Some(KademliaEvent::IncomingRecord { record: got_record }) => {
assert_eq!(got_record.key, record.key);
assert_eq!(got_record.value, record.value);
assert_eq!(got_record.publisher.unwrap(), *litep2p1.local_peer_id());
assert!(got_record.expires.is_some());
}
Some(KademliaEvent::GetRecordPartialResult { query_id: _, record }) => {
records.push(record);
}
Some(KademliaEvent::GetRecordSuccess { query_id: _ }) => {
assert_eq!(records.len(), 1);
let got_record = records.first().unwrap();
// Record retrieved from local storage.
assert_eq!(got_record.peer, *litep2p2.local_peer_id());
assert_eq!(got_record.record.key, record.key);
assert_eq!(got_record.record.value, record.value);
assert_eq!(got_record.record.publisher.unwrap(), *litep2p1.local_peer_id());
assert!(got_record.record.expires.is_some());
break
}
_ => {}
}
}
}
}
}
#[tokio::test]
async fn records_are_stored_manually() {
let (kad_config1, mut kad_handle1) = KademliaConfigBuilder::new()
.with_incoming_records_validation_mode(IncomingRecordValidationMode::Manual)
.build();
let (kad_config2, mut kad_handle2) = KademliaConfigBuilder::new()
.with_incoming_records_validation_mode(IncomingRecordValidationMode::Manual)
.build();
let config1 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config1)
.build();
let config2 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config2)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
kad_handle1
.add_known_peer(
*litep2p2.local_peer_id(),
litep2p2.listen_addresses().cloned().collect(),
)
.await;
// Publish the record.
let mut record = Record::new(vec![1, 2, 3], vec![0x01]);
let query_id = kad_handle1.put_record(record.clone(), Quorum::All).await;
let mut records = Vec::new();
let mut put_record_success = false;
let mut get_record_success = false;
loop {
tokio::select! {
_ = tokio::time::sleep(tokio::time::Duration::from_secs(10)) => {
panic!("record was not stored in 10 secs")
}
_ = litep2p1.next_event() => {}
_ = litep2p2.next_event() => {}
event = kad_handle1.next() => {
match event {
Some(KademliaEvent::PutRecordSuccess { query_id: got_query_id, key }) => {
assert_eq!(got_query_id, query_id);
assert_eq!(key, record.key);
// Due to manual validation, the record will be stored later, so we request
// it in `kad_handle2` after receiving the incoming record
put_record_success = true;
if get_record_success {
break;
}
}
_ => {}
}
}
event = kad_handle2.next() => {
match event {
Some(KademliaEvent::IncomingRecord { record: got_record }) => {
assert_eq!(got_record.key, record.key);
assert_eq!(got_record.value, record.value);
assert_eq!(got_record.publisher.unwrap(), *litep2p1.local_peer_id());
assert!(got_record.expires.is_some());
kad_handle2.store_record(got_record).await;
// Check if the record was stored.
let _ = kad_handle2
.get_record(RecordKey::from(vec![1, 2, 3]), Quorum::One).await;
}
Some(KademliaEvent::GetRecordPartialResult { query_id: _, record }) => {
records.push(record);
}
Some(KademliaEvent::GetRecordSuccess { query_id: _ }) => {
assert_eq!(records.len(), 1);
let got_record = records.first().unwrap();
// Record retrieved from local storage.
assert_eq!(got_record.peer, *litep2p2.local_peer_id());
assert_eq!(got_record.record.key, record.key);
assert_eq!(got_record.record.value, record.value);
assert_eq!(got_record.record.publisher.unwrap(), *litep2p1.local_peer_id());
assert!(got_record.record.expires.is_some());
get_record_success = true;
if put_record_success {
break;
}
}
_ => {}
}
}
}
}
}
#[tokio::test]
async fn not_validated_records_are_not_stored() {
let (kad_config1, mut kad_handle1) = KademliaConfigBuilder::new()
.with_incoming_records_validation_mode(IncomingRecordValidationMode::Manual)
.build();
let (kad_config2, mut kad_handle2) = KademliaConfigBuilder::new()
.with_incoming_records_validation_mode(IncomingRecordValidationMode::Manual)
.build();
let config1 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config1)
.build();
let config2 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config2)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
kad_handle1
.add_known_peer(
*litep2p2.local_peer_id(),
litep2p2.listen_addresses().cloned().collect(),
)
.await;
// Publish the record.
let record = Record::new(vec![1, 2, 3], vec![0x01]);
let query_id = kad_handle1.put_record(record.clone(), Quorum::All).await;
let mut records = Vec::new();
let mut get_record_query_id = None;
let mut put_record_success = false;
let mut get_record_success = false;
let mut query_failed = false;
loop {
tokio::select! {
_ = tokio::time::sleep(tokio::time::Duration::from_secs(10)) => {
panic!("query has not failed in 10 secs")
}
event = litep2p1.next_event() => {}
event = litep2p2.next_event() => {}
event = kad_handle1.next() => {
match event {
Some(KademliaEvent::PutRecordSuccess { query_id: got_query_id, key }) => {
assert_eq!(got_query_id, query_id);
assert_eq!(key, record.key);
put_record_success = true;
if get_record_success || query_failed {
break;
}
}
_ => {}
}
}
event = kad_handle2.next() => {
match event {
Some(KademliaEvent::IncomingRecord { record: got_record }) => {
assert_eq!(got_record.key, record.key);
assert_eq!(got_record.value, record.value);
assert_eq!(got_record.publisher.unwrap(), *litep2p1.local_peer_id());
assert!(got_record.expires.is_some());
// Do not call `kad_handle2.store_record(record).await`.
// Check if the record was stored.
let query_id = kad_handle2
.get_record(RecordKey::from(vec![1, 2, 3]), Quorum::One).await;
get_record_query_id = Some(query_id);
}
Some(KademliaEvent::GetRecordPartialResult { query_id, record }) => {
assert_eq!(query_id, get_record_query_id.unwrap());
records.push(record);
}
Some(KademliaEvent::GetRecordSuccess { query_id: _ }) => {
assert_eq!(records.len(), 1);
let got_record = records.first().unwrap();
// The record was not stored at litep2p2.
assert_eq!(got_record.peer, *litep2p1.local_peer_id());
get_record_success = true;
if put_record_success {
break
}
}
Some(KademliaEvent::QueryFailed { query_id }) => {
assert_eq!(query_id, get_record_query_id.unwrap());
query_failed = true;
if put_record_success {
break
}
}
_ => {}
}
}
}
}
}
#[tokio::test]
async fn get_record_retrieves_remote_records() {
let (kad_config1, mut kad_handle1) = KademliaConfigBuilder::new()
.with_incoming_records_validation_mode(IncomingRecordValidationMode::Manual)
.build();
let (kad_config2, mut kad_handle2) = KademliaConfigBuilder::new()
.with_incoming_records_validation_mode(IncomingRecordValidationMode::Manual)
.build();
let config1 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config1)
.build();
let config2 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config2)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
// Store the record on `litep2p1`.
let original_record = Record::new(vec![1, 2, 3], vec![0x01]);
let query1 = kad_handle1.put_record(original_record.clone(), Quorum::All).await;
let mut records = Vec::new();
let mut query2 = None;
loop {
tokio::select! {
_ = tokio::time::sleep(tokio::time::Duration::from_secs(10)) => {
panic!("record was not retrieved in 10 secs")
}
event = litep2p1.next_event() => {}
event = litep2p2.next_event() => {}
event = kad_handle1.next() => {
if let Some(KademliaEvent::QueryFailed { query_id }) = event {
// Query failed, but the record was stored locally.
assert_eq!(query_id, query1);
// Let peer2 know about peer1.
kad_handle2
.add_known_peer(
*litep2p1.local_peer_id(),
litep2p1.listen_addresses().cloned().collect(),
)
.await;
// Let peer2 get record from peer1.
let query_id = kad_handle2
.get_record(RecordKey::from(vec![1, 2, 3]), Quorum::One).await;
query2 = Some(query_id);
}
}
event = kad_handle2.next() => {
match event {
Some(KademliaEvent::GetRecordPartialResult { query_id: _, record }) => {
records.push(record);
}
Some(KademliaEvent::GetRecordSuccess { query_id: _ }) => {
assert_eq!(records.len(), 1);
let got_record = records.first().unwrap();
assert_eq!(got_record.peer, *litep2p1.local_peer_id());
assert_eq!(got_record.record.key, original_record.key);
assert_eq!(got_record.record.value, original_record.value);
assert_eq!(got_record.record.publisher.unwrap(), *litep2p1.local_peer_id());
assert!(got_record.record.expires.is_some());
break
}
Some(KademliaEvent::QueryFailed { query_id: _ }) => {
panic!("query failed")
}
_ => {}
}
}
}
}
}
#[tokio::test]
async fn get_record_retrieves_local_and_remote_records() {
let (kad_config1, mut kad_handle1) = KademliaConfigBuilder::new().build();
let (kad_config2, mut kad_handle2) = KademliaConfigBuilder::new().build();
let config1 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config1)
.build();
let config2 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config2)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
    // Let peers know about each other.
kad_handle1
.add_known_peer(
*litep2p2.local_peer_id(),
litep2p2.listen_addresses().cloned().collect(),
)
.await;
kad_handle2
.add_known_peer(
*litep2p1.local_peer_id(),
litep2p1.listen_addresses().cloned().collect(),
)
.await;
    // Store the record on `litep2p1`.
let original_record = Record::new(vec![1, 2, 3], vec![0x01]);
let query1 = kad_handle1.put_record(original_record.clone(), Quorum::All).await;
let (mut peer1_stored, mut peer2_stored) = (false, false);
let mut query3 = None;
let mut records = Vec::new();
let mut put_record_success = false;
loop {
tokio::select! {
_ = tokio::time::sleep(tokio::time::Duration::from_secs(10)) => {
panic!("record was not retrieved in 10 secs")
}
            _ = litep2p1.next_event() => {}
            _ = litep2p2.next_event() => {}
event = kad_handle1.next() => {
match event {
Some(KademliaEvent::PutRecordSuccess { query_id: got_query_id, key }) => {
assert_eq!(got_query_id, query1);
assert_eq!(key, original_record.key);
// Due to manual validation, the record will be stored later, so we request
// it in `kad_handle2` after receiving the incoming record
put_record_success = true;
}
_ => {}
}
}
event = kad_handle2.next() => {
match event {
Some(KademliaEvent::IncomingRecord { record: got_record }) => {
assert_eq!(got_record.key, original_record.key);
assert_eq!(got_record.value, original_record.value);
assert_eq!(got_record.publisher.unwrap(), *litep2p1.local_peer_id());
assert!(got_record.expires.is_some());
// Get record.
let query_id = kad_handle2
.get_record(RecordKey::from(vec![1, 2, 3]), Quorum::All).await;
query3 = Some(query_id);
}
Some(KademliaEvent::GetRecordPartialResult { query_id: _, record }) => {
records.push(record);
}
Some(KademliaEvent::GetRecordSuccess { query_id: _ }) => {
assert_eq!(records.len(), 2);
// Locally retrieved record goes first.
assert_eq!(records[0].peer, *litep2p2.local_peer_id());
assert_eq!(records[0].record.key, original_record.key);
assert_eq!(records[0].record.value, original_record.value);
assert_eq!(records[0].record.publisher.unwrap(), *litep2p1.local_peer_id());
assert!(records[0].record.expires.is_some());
// Remote record from peer 1.
assert_eq!(records[1].peer, *litep2p1.local_peer_id());
assert_eq!(records[1].record.key, original_record.key);
assert_eq!(records[1].record.value, original_record.value);
assert_eq!(records[1].record.publisher.unwrap(), *litep2p1.local_peer_id());
assert!(records[1].record.expires.is_some());
break
}
Some(KademliaEvent::QueryFailed { query_id: _ }) => {
panic!("peer2 query failed")
}
_ => {}
}
}
}
}
assert!(
put_record_success,
"Publisher was not notified that the record was received",
);
}
#[tokio::test]
async fn provider_retrieved_by_remote_node() {
let (kad_config1, mut kad_handle1) = KademliaConfigBuilder::new().build();
let (kad_config2, mut kad_handle2) = KademliaConfigBuilder::new().build();
let config1 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config1)
.build();
let config2 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config2)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
// Register at least one public address.
let peer1 = *litep2p1.local_peer_id();
let peer1_public_address = "/ip4/192.168.0.1/tcp/10000"
.parse::<Multiaddr>()
.unwrap()
.with(Protocol::P2p(peer1.into()));
litep2p1.public_addresses().add_address(peer1_public_address.clone());
assert_eq!(
litep2p1.public_addresses().get_addresses(),
vec![peer1_public_address.clone()],
);
// Store provider locally.
let key = RecordKey::new(&vec![1, 2, 3]);
let query0 = kad_handle1.start_providing(key.clone(), Quorum::All).await;
// This is the expected provider.
let expected_provider = ContentProvider {
peer: peer1,
addresses: vec![peer1_public_address],
};
let mut query1 = None;
let mut query2 = None;
loop {
tokio::select! {
_ = tokio::time::sleep(tokio::time::Duration::from_secs(10)) => {
panic!("provider was not retrieved in 10 secs")
}
            _ = litep2p1.next_event() => {}
            _ = litep2p2.next_event() => {}
event = kad_handle1.next() => {
if let Some(KademliaEvent::QueryFailed { query_id }) = event {
// Publishing the provider failed, because the nodes are not connected.
assert_eq!(query_id, query0);
// This request to get provider should fail because the nodes are still
// not connected.
query1 = Some(kad_handle2.get_providers(key.clone()).await);
}
}
event = kad_handle2.next() => {
match event {
Some(KademliaEvent::QueryFailed { query_id }) => {
// Query failed, because the nodes don't know about each other yet.
assert_eq!(Some(query_id), query1);
// Let the node know about `litep2p1`.
kad_handle2
.add_known_peer(
*litep2p1.local_peer_id(),
litep2p1.listen_addresses().cloned().collect(),
)
.await;
// And request providers again.
query2 = Some(kad_handle2.get_providers(key.clone()).await);
}
Some(KademliaEvent::GetProvidersSuccess {
query_id,
provided_key,
providers,
}) => {
assert_eq!(query_id, query2.unwrap());
assert_eq!(provided_key, key);
assert_eq!(providers.len(), 1);
assert_eq!(providers.first().unwrap(), &expected_provider);
break
}
_ => {}
}
}
}
}
}
#[tokio::test]
async fn provider_added_to_remote_node() {
let (kad_config1, mut kad_handle1) = KademliaConfigBuilder::new().build();
let (kad_config2, mut kad_handle2) = KademliaConfigBuilder::new().build();
let config1 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config1)
.build();
let config2 = ConfigBuilder::new()
.with_tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
})
.with_libp2p_kademlia(kad_config2)
.build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
// Register at least one public address.
let peer1 = *litep2p1.local_peer_id();
let peer1_public_address = "/ip4/192.168.0.1/tcp/10000"
.parse::<Multiaddr>()
.unwrap()
.with(Protocol::P2p(peer1.into()));
litep2p1.public_addresses().add_address(peer1_public_address.clone());
assert_eq!(
litep2p1.public_addresses().get_addresses(),
vec![peer1_public_address.clone()],
);
// Let peer1 know about peer2.
kad_handle1
.add_known_peer(
*litep2p2.local_peer_id(),
litep2p2.listen_addresses().cloned().collect(),
)
.await;
    // Start providing.
let key = RecordKey::new(&vec![1, 2, 3]);
let query = kad_handle1.start_providing(key.clone(), Quorum::All).await;
let mut add_provider_success = false;
let mut incoming_provider = false;
// This is the expected provider.
let expected_provider = ContentProvider {
peer: peer1,
addresses: vec![peer1_public_address],
};
loop {
tokio::select! {
_ = tokio::time::sleep(tokio::time::Duration::from_secs(10)) => {
panic!("provider was not retrieved in 10 secs")
}
            _ = litep2p1.next_event() => {}
            _ = litep2p2.next_event() => {}
event = kad_handle1.next() => {
if let Some(KademliaEvent::AddProviderSuccess { query_id, provided_key }) = event {
assert_eq!(query_id, query);
assert_eq!(provided_key, key);
add_provider_success = true;
if incoming_provider {
break
}
}
}
event = kad_handle2.next() => {
if let Some(KademliaEvent::IncomingProvider { provided_key, provider }) = event {
assert_eq!(provided_key, key);
assert_eq!(provider, expected_provider);
incoming_provider = true;
if add_provider_success {
break
}
}
}
}
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/tests/protocol/notification.rs | tests/protocol/notification.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use litep2p::{
config::ConfigBuilder as Litep2pConfigBuilder,
crypto::ed25519::Keypair,
error::Error,
protocol::notification::{
Config as NotificationConfig, ConfigBuilder, Direction, NotificationError,
NotificationEvent, NotificationHandle, ValidationResult,
},
transport::tcp::config::Config as TcpConfig,
types::protocol::ProtocolName,
Litep2p, Litep2pEvent, PeerId,
};
#[cfg(feature = "websocket")]
use litep2p::transport::websocket::config::Config as WebSocketConfig;
use bytes::BytesMut;
use futures::StreamExt;
use multiaddr::{Multiaddr, Protocol};
use multihash::Multihash;
#[cfg(feature = "quic")]
use std::net::Ipv4Addr;
use std::{net::Ipv6Addr, task::Poll, time::Duration};
use crate::common::{add_transport, Transport};
async fn connect_peers(litep2p1: &mut Litep2p, litep2p2: &mut Litep2p) {
let address = litep2p2.listen_addresses().next().unwrap().clone();
litep2p1.dial_address(address).await.unwrap();
let mut litep2p1_connected = false;
let mut litep2p2_connected = false;
// Disarm the first tick to avoid immediate timeouts.
let mut ticker = tokio::time::interval(std::time::Duration::from_secs(5));
ticker.tick().await;
loop {
tokio::select! {
event = litep2p1.next_event() => if let Litep2pEvent::ConnectionEstablished { .. } = event.unwrap() {
litep2p1_connected = true;
},
event = litep2p2.next_event() => if let Litep2pEvent::ConnectionEstablished { .. } = event.unwrap() {
litep2p2_connected = true;
},
_ = ticker.tick() => {
panic!("peers failed to connect within timeout");
}
}
if litep2p1_connected && litep2p2_connected {
break;
}
}
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
}
async fn make_default_litep2p(transport: Transport) -> (Litep2p, NotificationHandle) {
let (notif_config, handle) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config);
let config = add_transport(config, transport).build();
(Litep2p::new(config).unwrap(), handle)
}
#[tokio::test]
async fn open_substreams_tcp() {
open_substreams(
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
)
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn open_substreams_quic() {
open_substreams(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn open_substreams_websocket() {
open_substreams(
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
)
.await;
}
async fn open_substreams(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (notif_config1, mut handle1) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config1 = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config1);
let config1 = add_transport(config1, transport1).build();
let (notif_config2, mut handle2) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config2 = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config2);
let config2 = add_transport(config2, transport2).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let peer1 = *litep2p1.local_peer_id();
let peer2 = *litep2p2.local_peer_id();
// wait until peers have connected and spawn the litep2p objects in the background
connect_peers(&mut litep2p1, &mut litep2p2).await;
tokio::spawn(async move {
loop {
tokio::select! {
_ = litep2p1.next_event() => {},
_ = litep2p2.next_event() => {},
}
}
});
// open substream for `peer2` and accept it
handle1.open_substream(peer2).await.unwrap();
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
handle2.send_validation_result(peer1, ValidationResult::Accept);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer2,
handshake: vec![1, 2, 3, 4],
}
);
handle1.send_validation_result(peer2, ValidationResult::Accept);
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
direction: Direction::Inbound,
fallback: None,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Outbound,
peer: peer2,
handshake: vec![1, 2, 3, 4],
}
);
handle1.send_sync_notification(peer2, vec![1, 3, 3, 7]).unwrap();
handle2.send_sync_notification(peer1, vec![1, 3, 3, 8]).unwrap();
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::NotificationReceived {
peer: peer1,
notification: BytesMut::from(&[1, 3, 3, 7][..]),
}
);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::NotificationReceived {
peer: peer2,
notification: BytesMut::from(&[1, 3, 3, 8][..]),
}
);
}
#[tokio::test]
async fn reject_substream_tcp() {
reject_substream(
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
)
.await;
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn reject_substream_quic() {
reject_substream(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn reject_substream_websocket() {
reject_substream(
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
)
.await;
}
async fn reject_substream(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (notif_config1, mut handle1) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config1 = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config1);
let config1 = add_transport(config1, transport1).build();
let (notif_config2, mut handle2) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config2 = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config2);
let config2 = add_transport(config2, transport2).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let peer1 = *litep2p1.local_peer_id();
let peer2 = *litep2p2.local_peer_id();
// wait until peers have connected and spawn the litep2p objects in the background
connect_peers(&mut litep2p1, &mut litep2p2).await;
tokio::spawn(async move {
loop {
tokio::select! {
_ = litep2p1.next_event() => {},
_ = litep2p2.next_event() => {},
}
}
});
// open substream for `peer2` and accept it
handle1.open_substream(peer2).await.unwrap();
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
handle2.send_validation_result(peer1, ValidationResult::Reject);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::NotificationStreamOpenFailure {
peer: peer2,
error: NotificationError::Rejected,
}
);
}
#[tokio::test]
async fn notification_stream_closed_tcp() {
notification_stream_closed(
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
)
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn notification_stream_closed_quic() {
notification_stream_closed(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn notification_stream_closed_websocket() {
notification_stream_closed(
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
)
.await;
}
async fn notification_stream_closed(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (notif_config1, mut handle1) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config1 = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config1);
let config1 = add_transport(config1, transport1).build();
let (notif_config2, mut handle2) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config2 = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config2);
let config2 = add_transport(config2, transport2).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let peer1 = *litep2p1.local_peer_id();
let peer2 = *litep2p2.local_peer_id();
// wait until peers have connected and spawn the litep2p objects in the background
connect_peers(&mut litep2p1, &mut litep2p2).await;
tokio::spawn(async move {
loop {
tokio::select! {
_ = litep2p1.next_event() => {},
_ = litep2p2.next_event() => {},
}
}
});
// open substream for `peer2` and accept it
handle1.open_substream(peer2).await.unwrap();
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
handle2.send_validation_result(peer1, ValidationResult::Accept);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer2,
handshake: vec![1, 2, 3, 4],
}
);
handle1.send_validation_result(peer2, ValidationResult::Accept);
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Inbound,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Outbound,
peer: peer2,
handshake: vec![1, 2, 3, 4],
}
);
handle1.send_sync_notification(peer2, vec![1, 3, 3, 7]).unwrap();
handle2.send_sync_notification(peer1, vec![1, 3, 3, 8]).unwrap();
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::NotificationReceived {
peer: peer1,
notification: BytesMut::from(&[1, 3, 3, 7][..]),
}
);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::NotificationReceived {
peer: peer2,
notification: BytesMut::from(&[1, 3, 3, 8][..]),
}
);
handle1.close_substream(peer2).await;
match handle2.next().await.unwrap() {
NotificationEvent::NotificationStreamClosed { peer } => assert_eq!(peer, peer1),
_ => panic!("invalid event received"),
}
}
#[tokio::test]
async fn reconnect_after_disconnect_tcp() {
reconnect_after_disconnect(
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
)
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn reconnect_after_disconnect_quic() {
reconnect_after_disconnect(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn reconnect_after_disconnect_websocket() {
reconnect_after_disconnect(
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
)
.await;
}
async fn reconnect_after_disconnect(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (notif_config1, mut handle1) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config1 = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config1);
let config1 = add_transport(config1, transport1).build();
let (notif_config2, mut handle2) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config2 = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config2);
let config2 = add_transport(config2, transport2).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let peer1 = *litep2p1.local_peer_id();
let peer2 = *litep2p2.local_peer_id();
// wait until peers have connected and spawn the litep2p objects in the background
connect_peers(&mut litep2p1, &mut litep2p2).await;
tokio::spawn(async move {
loop {
tokio::select! {
_ = litep2p1.next_event() => {},
_ = litep2p2.next_event() => {},
}
}
});
// open substream for `peer2` and accept it
handle1.open_substream(peer2).await.unwrap();
// accept the inbound substreams
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
handle2.send_validation_result(peer1, ValidationResult::Accept);
// accept the inbound substreams
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer2,
handshake: vec![1, 2, 3, 4],
}
);
handle1.send_validation_result(peer2, ValidationResult::Accept);
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Inbound,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Outbound,
peer: peer2,
handshake: vec![1, 2, 3, 4],
}
);
// close the substream
handle2.close_substream(peer1).await;
match handle2.next().await.unwrap() {
NotificationEvent::NotificationStreamClosed { peer } => assert_eq!(peer, peer1),
_ => panic!("invalid event received"),
}
match handle1.next().await.unwrap() {
NotificationEvent::NotificationStreamClosed { peer } => assert_eq!(peer, peer2),
_ => panic!("invalid event received"),
}
// open the substream
handle2.open_substream(peer1).await.unwrap();
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer2,
handshake: vec![1, 2, 3, 4],
}
);
handle1.send_validation_result(peer2, ValidationResult::Accept);
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
handle2.send_validation_result(peer1, ValidationResult::Accept);
// verify that both peers get the open event
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Outbound,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Inbound,
peer: peer2,
handshake: vec![1, 2, 3, 4],
}
);
// send notifications to verify that the connection works again
handle1.send_sync_notification(peer2, vec![1, 3, 3, 7]).unwrap();
handle2.send_sync_notification(peer1, vec![1, 3, 3, 8]).unwrap();
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::NotificationReceived {
peer: peer1,
notification: BytesMut::from(&[1, 3, 3, 7][..]),
}
);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::NotificationReceived {
peer: peer2,
notification: BytesMut::from(&[1, 3, 3, 8][..]),
}
);
}
#[tokio::test]
async fn set_new_handshake_tcp() {
set_new_handshake(
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
)
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn set_new_handshake_quic() {
set_new_handshake(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn set_new_handshake_websocket() {
set_new_handshake(
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
)
.await;
}
async fn set_new_handshake(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (notif_config1, mut handle1) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config1 = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config1);
let config1 = match transport1 {
Transport::Tcp(config) => config1.with_tcp(config),
#[cfg(feature = "quic")]
Transport::Quic(config) => config1.with_quic(config),
#[cfg(feature = "websocket")]
Transport::WebSocket(config) => config1.with_websocket(config),
}
.build();
let (notif_config2, mut handle2) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config2 = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config2);
let config2 = add_transport(config2, transport2).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let peer1 = *litep2p1.local_peer_id();
let peer2 = *litep2p2.local_peer_id();
// wait until peers have connected and spawn the litep2p objects in the background
connect_peers(&mut litep2p1, &mut litep2p2).await;
tokio::spawn(async move {
loop {
tokio::select! {
_ = litep2p1.next_event() => {},
_ = litep2p2.next_event() => {},
}
}
});
// open substream for `peer2` and accept it
handle1.open_substream(peer2).await.unwrap();
// accept the substreams
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
handle2.send_validation_result(peer1, ValidationResult::Accept);
// accept the substreams
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer2,
handshake: vec![1, 2, 3, 4],
}
);
handle1.send_validation_result(peer2, ValidationResult::Accept);
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Inbound,
peer: peer1,
handshake: vec![1, 2, 3, 4],
}
);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Outbound,
peer: peer2,
handshake: vec![1, 2, 3, 4],
}
);
// close the substream
handle2.close_substream(peer1).await;
match handle2.next().await.unwrap() {
NotificationEvent::NotificationStreamClosed { peer } => assert_eq!(peer, peer1),
_ => panic!("invalid event received"),
}
match handle1.next().await.unwrap() {
NotificationEvent::NotificationStreamClosed { peer } => assert_eq!(peer, peer2),
_ => panic!("invalid event received"),
}
// set new handshakes and open the substream
handle1.set_handshake(vec![5, 5, 5, 5]);
handle2.set_handshake(vec![6, 6, 6, 6]);
handle2.open_substream(peer1).await.unwrap();
// accept the substreams
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer2,
handshake: vec![6, 6, 6, 6],
}
);
handle1.send_validation_result(peer2, ValidationResult::Accept);
// accept the substreams
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::ValidateSubstream {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
peer: peer1,
handshake: vec![5, 5, 5, 5],
}
);
handle2.send_validation_result(peer1, ValidationResult::Accept);
// verify that both peers get the open event
assert_eq!(
handle2.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Outbound,
peer: peer1,
handshake: vec![5, 5, 5, 5],
}
);
assert_eq!(
handle1.next().await.unwrap(),
NotificationEvent::NotificationStreamOpened {
protocol: ProtocolName::from("/notif/1"),
fallback: None,
direction: Direction::Inbound,
peer: peer2,
handshake: vec![6, 6, 6, 6],
}
);
}
#[tokio::test]
async fn both_nodes_open_substreams_tcp() {
both_nodes_open_substreams(
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
Transport::Tcp(TcpConfig {
listen_addresses: vec!["/ip6/::1/tcp/0".parse().unwrap()],
..Default::default()
}),
)
.await
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn both_nodes_open_substreams_quic() {
both_nodes_open_substreams(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn both_nodes_open_substreams_websocket() {
both_nodes_open_substreams(
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
Transport::WebSocket(WebSocketConfig {
listen_addresses: vec!["/ip4/127.0.0.1/tcp/0/ws".parse().unwrap()],
..Default::default()
}),
)
.await;
}
async fn both_nodes_open_substreams(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (notif_config1, mut handle1) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config1 = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config1);
let config1 = add_transport(config1, transport1).build();
let (notif_config2, mut handle2) = NotificationConfig::new(
ProtocolName::from("/notif/1"),
1024usize,
vec![1, 2, 3, 4],
Vec::new(),
false,
64,
64,
true,
);
let config2 = Litep2pConfigBuilder::new()
.with_keypair(Keypair::generate())
.with_notification_protocol(notif_config2);
let config2 = add_transport(config2, transport2).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let peer1 = *litep2p1.local_peer_id();
let peer2 = *litep2p2.local_peer_id();
// wait until peers have connected and spawn the litep2p objects in the background
connect_peers(&mut litep2p1, &mut litep2p2).await;
tokio::spawn(async move {
loop {
tokio::select! {
_ = litep2p1.next_event() => {},
_ = litep2p2.next_event() => {},
}
}
});
// both nodes open a substream at the same time
handle1.open_substream(peer2).await.unwrap();
handle2.open_substream(peer1).await.unwrap();
// accept the substreams
assert_eq!(
handle1.next().await.unwrap(),
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | true |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/tests/protocol/ping.rs | tests/protocol/ping.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
use futures::StreamExt;
use litep2p::{
config::ConfigBuilder, protocol::libp2p::ping::ConfigBuilder as PingConfigBuilder, Litep2p,
};
use crate::common::{add_transport, Transport};
#[tokio::test]
async fn ping_supported_tcp() {
ping_supported(
Transport::Tcp(Default::default()),
Transport::Tcp(Default::default()),
)
.await;
}
#[cfg(feature = "websocket")]
#[tokio::test]
async fn ping_supported_websocket() {
ping_supported(
Transport::WebSocket(Default::default()),
Transport::WebSocket(Default::default()),
)
.await;
}
#[cfg(feature = "quic")]
#[tokio::test]
async fn ping_supported_quic() {
ping_supported(
Transport::Quic(Default::default()),
Transport::Quic(Default::default()),
)
.await;
}
async fn ping_supported(transport1: Transport, transport2: Transport) {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.try_init();
let (ping_config1, mut ping_event_stream1) =
PingConfigBuilder::new().with_max_failure(3usize).build();
let config1 = ConfigBuilder::new().with_libp2p_ping(ping_config1);
let config1 = add_transport(config1, transport1).build();
let (ping_config2, mut ping_event_stream2) = PingConfigBuilder::new().build();
let config2 = ConfigBuilder::new().with_libp2p_ping(ping_config2);
let config2 = add_transport(config2, transport2).build();
let mut litep2p1 = Litep2p::new(config1).unwrap();
let mut litep2p2 = Litep2p::new(config2).unwrap();
let address = litep2p2.listen_addresses().next().unwrap().clone();
litep2p1.dial_address(address).await.unwrap();
let mut litep2p1_done = false;
let mut litep2p2_done = false;
loop {
tokio::select! {
_event = litep2p1.next_event() => {}
_event = litep2p2.next_event() => {}
event = ping_event_stream1.next() => {
tracing::trace!("ping event for litep2p1: {event:?}");
litep2p1_done = true;
if litep2p1_done && litep2p2_done {
break
}
}
event = ping_event_stream2.next() => {
tracing::trace!("ping event for litep2p2: {event:?}");
litep2p2_done = true;
if litep2p1_done && litep2p2_done {
break
}
}
}
}
}
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |
paritytech/litep2p | https://github.com/paritytech/litep2p/blob/991aa12f60db41543735394bf71fba09332752f8/tests/protocol/mod.rs | tests/protocol/mod.rs | // Copyright 2023 litep2p developers
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
#[cfg(test)]
mod identify;
#[cfg(test)]
mod kademlia;
#[cfg(test)]
mod notification;
#[cfg(test)]
mod ping;
#[cfg(test)]
mod request_response;
| rust | MIT | 991aa12f60db41543735394bf71fba09332752f8 | 2026-01-04T20:20:42.179941Z | false |