| source (string) | subject (string) | code (string) | critique (string) | metadata (dict) |
|---|---|---|---|---|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
Hi Niklas,
On Fri, 3 Jan 2025 at 19:55, Niklas Cassel <cassel@kernel.org> wrote:
I am unable to reproduce this issue on my end. Could you share your
config file with me?
Additionally, if we build most of the ROCKCHIP components as modules,
you will see this warning, which is the main reason for this patch:
[ 34.642365] platform fc400000.usb: deferred probe pending: dwc3:
failed to initialize core
[ 34.642529] platform a41000000.pcie: deferred probe pending:
rockchip-dw-pcie: missing PHY
[ 34.642604] platform a40800000.pcie: deferred probe pending:
rockchip-dw-pcie: missing PHY
[ 34.642674] platform fcd00000.usb: deferred probe pending: dwc3:
failed to initialize core
The RK3588 TRM specifies the requirement for a dedicated GMAC controller
to effectively manage certain critical network features. In the
absence of this specialized controller, the network interface card (NIC)
may operate solely as a standard PCIe NIC, potentially leading to
suboptimal performance and a lack of robust flow control mechanisms.
Log analysis indicates that Ethernet probing occurs before the
initialization of the PCIe PHY and PCIe hosts.
To investigate this issue, please test with the following configuration
changes (see the sketch below):
1. Set CONFIG_DWMAC_ROCKCHIP=m.
2. Enable the probe mode PROBE_PREFER_ASYNCHRONOUS for the DWMAC_ROCKCHIP driver.
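As a rough illustration of item 2, this would be the same one-line pattern as in the PCIe patch above: set .probe_type in the driver's struct platform_driver. A minimal sketch with illustrative names (not the actual dwmac-rk.c code):

#include <linux/module.h>
#include <linux/platform_device.h>

static int example_dwmac_probe(struct platform_device *pdev)
{
	/* Usual glue-driver probe work: clocks, GRF setup, stmmac registration. */
	return 0;
}

static struct platform_driver example_dwmac_driver = {
	.probe = example_dwmac_probe,
	.driver = {
		.name = "example-dwmac",
		/* Let the driver core run this probe from an async worker. */
		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
	},
};
module_platform_driver(example_dwmac_driver);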
Thanks
-Anand
|
{
"author": "Anand Moon <linux.amoon@gmail.com>",
"date": "Sun, 5 Jan 2025 23:16:05 +0530",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
Hi Andrew,
On Fri, 3 Jan 2025 at 21:34, Andrew Lunn <andrew@lunn.ch> wrote:
According to the RK3588 TRM Part 1 (section 25.6.11, Clock Architecture),
in RGMII mode the TX clock source is exclusively derived from the CRU
(Clock and Reset Unit).
To dynamically adjust the timing alignment between the TX/RX clocks and
data, delay lines are integrated into both the TX and RX clock paths.
Register SYS_GRF_SOC_CON7[5:2] enables these delay lines, while registers
SYS_GRF_SOC_CON8[15:0] and SYS_GRF_SOC_CON9[15:0] configure the delay
length for each path respectively.
Each delay line comprises 200 individual delay elements.
Therefore, it is necessary to configure both the TX and RX delay values
appropriately when phy-mode is set to rgmii.
[1] https://github.com/torvalds/linux/blob/master/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c#L1889-L1914
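The in-tree dwmac-rk.c code at [1] implements the real thing; purely as a hedged sketch of the GRF programming described above (the SOC_CON offsets below are assumed placeholders, only the bitfields follow the TRM description):

#include <linux/regmap.h>

#define SYS_GRF_SOC_CON7	0x031c	/* assumed offset, placeholder only */
#define SYS_GRF_SOC_CON8	0x0320	/* assumed offset, placeholder only */
#define SYS_GRF_SOC_CON9	0x0324	/* assumed offset, placeholder only */

/* Rockchip GRF registers take write-enable bits in the upper 16 bits. */
#define HIWORD_UPDATE(val, mask, shift) \
	(((val) << (shift)) | ((mask) << ((shift) + 16)))

static void rgmii_set_delays(struct regmap *grf, u16 tx_delay, u16 rx_delay)
{
	/* SOC_CON7[5:2]: enable the TX and RX delay lines. */
	regmap_write(grf, SYS_GRF_SOC_CON7, HIWORD_UPDATE(0xf, 0xf, 2));
	/* SOC_CON8/CON9[15:0]: delay length per path, 0..200 elements. */
	regmap_write(grf, SYS_GRF_SOC_CON8, HIWORD_UPDATE(tx_delay, 0xffff, 0));
	regmap_write(grf, SYS_GRF_SOC_CON9, HIWORD_UPDATE(rx_delay, 0xffff, 0));
}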
I have gone through a few of the archives about this topic
[2] https://lore.kernel.org/linux-rockchip/4fdcb631-16cd-d5f1-e2be-19ecedb436eb@linaro.org/T/
Thanks
-Anand
|
{
"author": "Anand Moon <linux.amoon@gmail.com>",
"date": "Sun, 5 Jan 2025 23:16:21 +0530",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
On Sun, Jan 05, 2025 at 11:16:21PM +0530, Anand Moon wrote:
OK, let me repeat what I have said a number of times over the last
couple of years.
phy-mode = "rgmii" means the PCB has extra long clock lines on the
PCB, so the 2ns delay is provided by them.
phy-mode = "rgmii-id" means the MAC/PHY pair need to arrange to add
the 2ns delay. As far as the DT binding is concerned, it does not
matter which of the two does the delay. However, there is a convention
that the PHY adds the delay, if possible.
So, does your PCB have extra long clock lines?
Vendors often just hack until it works. But "works" does not mean
"correct". I try to review as many .dts files as I can, but some do get
past me, so there are plenty of bad examples in mainline.
Andrew
|
{
"author": "Andrew Lunn <andrew@lunn.ch>",
"date": "Sun, 5 Jan 2025 18:57:23 +0100",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
Hi Andrew,
On Sun, 5 Jan 2025 at 23:27, Andrew Lunn <andrew@lunn.ch> wrote:
Thanks for this tip; I am no expert in hardware design.
Here is the schematic of the board; it looks like the RTL8125B (page 24)
sits on a PCIe 2.0 bus:
[0] https://dl.radxa.com/rock5/5b/docs/hw/radxa_rock5b_v13_sch.pdf
PERSTB ---<< PCIE_PERST_L (GPIO3_B0_u)
LANWAKER --->> PCIE20_1_2_WAKEn_M1_L (GPIO3_D0_u)
LAN_CLKREQB --->> PCIE20_1_2_CLKREQn_M1_L( GPIO3_C7_u)
IOLATEB --->> +V3P3A
PCIE2.0 DATA Impedance 85 R
PCIE_WLAN_TX_C_DP ----->PCIE20_0_TXP
PCIE_WLAN_TX_C_DN ----->PCIE20_0_TXN
PCIE2.0 CLK Impedance 100 R
PCIE3_WLAN_REFCLK0_DP --> PCIE20_0_REFCLKP
PCIE3_WLAN_REFCLK0_DN--->PCIE20_0_REFCLKN
I have no idea about the grf clk and data path delay over here.
Thanks
-Anand
|
{
"author": "Anand Moon <linux.amoon@gmail.com>",
"date": "Mon, 6 Jan 2025 13:28:27 +0530",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
On Mon, Jan 06, 2025 at 01:28:27PM +0530, Anand Moon wrote:
As both Manivannan and I said earlier in this thread,
PCIe endpoint devices should not be described in device tree
(the exception is an FPGA, and when you need to describe devices
within the FPGA).
So I think that adding a "ethernet-phy" device tree node in this case is
wrong (as the Ethernet PHY in this case is integrated in the PCIe connected
NIC, and not a discrete component on the SoC).
Kind regards,
Niklas
|
{
"author": "Niklas Cassel <cassel@kernel.org>",
"date": "Mon, 6 Jan 2025 13:02:38 +0100",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
There are other cases when PCIe devices need a DT node. One is when
you have an onboard ethernet switch connected to the Ethernet
device. The switch has to be described in DT, and it needs a phandle
to the ethernet interface. Hence you need a DT node the phandle points
to.
You are also making the assumption that the PCIe ethernet interface
has firmware driving all its subsystems. That results in every PCIe
ethernet device manufacturer re-inventing what Linux can already do for
SoC style Ethernet interfaces which do not have firmware, where Linux
drives it all. I personally would prefer Linux to drive the hardware, via
a DT node, since I then don't have to deal with firmware bugs I cannot
fix; it's just Linux all the way down.
Andrew
|
{
"author": "Andrew Lunn <andrew@lunn.ch>",
"date": "Mon, 6 Jan 2025 14:44:19 +0100",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
Hi Andrew,
On Mon, 6 Jan 2025 at 19:14, Andrew Lunn <andrew@lunn.ch> wrote:
OK, thanks for clarifying.
I was just trying to understand the call trace for the MDIO bus, which
got me confused.
[0] https://lore.kernel.org/all/Z3fKkTSFFcU9gQLg@ryzen/
Thanks
-Anand
|
{
"author": "Anand Moon <linux.amoon@gmail.com>",
"date": "Tue, 7 Jan 2025 16:43:38 +0530",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
There is nothing particularly unusual in there. We see that PCI bus
enumeration has found a device and bound a driver to it. The driver
has instantiated an MDIO bus, scanned it, and found a PHY. The phylib
core then tried to load the kernel module needed to drive the PHY.
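To make that sequence concrete, a hedged sketch (illustrative names, not the r8169 code) of a PCI NIC driver registering an MDIO bus; mdiobus_register() scans the bus, and phylib then requests the driver module for whatever PHY ID it finds:

#include <linux/pci.h>
#include <linux/phy.h>

static int example_nic_mdio_init(struct pci_dev *pdev,
				 int (*read)(struct mii_bus *bus, int addr, int reg),
				 int (*write)(struct mii_bus *bus, int addr, int reg, u16 val))
{
	struct mii_bus *bus;
	int ret;

	bus = devm_mdiobus_alloc(&pdev->dev);
	if (!bus)
		return -ENOMEM;

	bus->name = "example-nic-mdio";
	snprintf(bus->id, MII_BUS_ID_SIZE, "%s", pci_name(pdev));
	bus->read = read;		/* MMIO accessors provided by the NIC driver */
	bus->write = write;
	bus->parent = &pdev->dev;

	/* Scans the bus; phylib requests a driver module for each PHY found. */
	ret = mdiobus_register(bus);
	if (ret)
		return ret;

	/* The NIC driver would then phy_connect() the PHY it found. */
	return phy_find_first(bus) ? 0 : -ENODEV;
}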
Just because it is a PCI device does not mean firmware has to control
all the hardware. Linux has no problems controlling all this, and it
saves reinventing the wheel in firmware, avoids firmware bugs, and
allows new features to be added etc.
Andrew
|
{
"author": "Andrew Lunn <andrew@lunn.ch>",
"date": "Tue, 7 Jan 2025 14:13:34 +0100",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
Hi Andrew
On Tue, 7 Jan 2025 at 18:43, Andrew Lunn <andrew@lunn.ch> wrote:
Thanks for clarifying.
-Anand
|
{
"author": "Anand Moon <linux.amoon@gmail.com>",
"date": "Tue, 7 Jan 2025 20:27:58 +0530",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
On Tue, Jan 07, 2025 at 02:13:34PM +0100, Andrew Lunn wrote:
Most of the time, it would be hard to define the properties of the PCI device's
internal bus in devicetree. For instance, the pinctrl/clock properties which
Linux expects are to be connected to the host SoC, and not to the PCI device's
SoC (unless the whole device's SoC is defined).
Not saying that it is not possible at all, but it is very rare.
- Mani
--
மணிவண்ணன் சதாசிவம்
|
{
"author": "Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>",
"date": "Wed, 15 Jan 2025 23:19:48 +0530",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
Hello Anand,
I have tested this patch.
Hardware/Kernel information:
- radxa rock 5c lite
- rk3588s CPU, arm64
- defconfig NixOS kernel
- picked onto 6.18.7
- DT: rockchip/rk3588s-rock-5c.dtb
- tested both uboot (mainline) and edk2 (vendor)
On Fri, Aug 09, 2024 at 01:06:09PM +0530, Anand Moon wrote:
I found that without this patch, USB 3 ports as well as the PCIe connector seemingly stay uninitialized during boot on my hardware.
This manifests as a bootable USB flash drive loading the initrd from the bootloader (both uboot and edk2) perfectly, but then failing to mount the rootfs from the drive.
In effect, boot is not just slower than it should be; it does not boot all the way at all.
In that scenario, the /dev entries corresponding to the flash drive are simply missing, and the same goes for sysfs, where I would expect the USB device to be listed.
Replugging the USB flash drive during initrd does seem to fix that, but is tedious and not viable for a server.
Similarly, booting from m.2 SSD attached via PCIe fails the same way, with rootfs timing out despite the bootloader correctly reading initrd on the same drive.
Fwiw, replugging the SSD does not work like it does for USB flash drives, and is even worse of an idea.
USB 2 ports as well as SD card boots correctly, even without your patch.
Without your patch, I am seeing "deferred probe pending" in dmesg before boot gets stuck, which was the hint that made me find your patch.
I am not sure whether that is the actual cause or just a symptom of why drives are not recognized during boot, and I am not quite sure how to debug this further.
With this patch, booting from an SSD or a USB 3 port works flawlessly, and I have not seen any regressions with SD card or USB 2 boot, nor with any other hardware component.
This setup has worked for multiple boots without fail, both with a traditional ext4 and a zfs rootfs being loaded from USB 3 and PCIe.
Because I require this patch to run my rock 5c from an SSD, I am currently running a custom patched kernel, but I would highly appreciate this patch making its way to mainline eventually.
There might well be something else going on here. The proposed patch may not be the "proper" fix to the issues I am seeing, but it does at least work.
I have NOT tested boot from eMMC (either with or without this patch), though i have no reason to believe it would be impacted.
I am happy to provide more info as needed. This is my first time posting to the LKML, so I hope I am doing this right...
Tested-by: Grimmauld <grimmauld@grimmauld.de>
Regards,
Grimmauld
|
{
"author": "Grimmauld <grimmauld@grimmauld.de>",
"date": "Thu, 29 Jan 2026 15:06:59 +0100",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
Hello Grimmauld,
On Thu, Jan 29, 2026 at 03:06:59PM +0100, Grimmauld wrote:
I tested this patch again on the latest kernel, and it still results in
the "requesting loading a module with wait allowed while being called from
async context can result in a deadlock" warning from the modules code.
(With the calling code being phylib.)
See the phylib splat that I previously reported in this thread.
Note that I've built the network PHY driver that phylib wants to load
(CONFIG_REALTEK_PHY=y) as built-in. As long as the PHY driver is built
as built-in, I don't think that the problem the modules code is warning
about can happen. (But I also don't understand why it is trying to load
a module when the driver is built as built-in in the first place...)
Anyway, my networking is working perfectly fine even with the splat.
Having async probing for the Rockchip PCIe controller driver, which is
used on a LOT of Rockchip based SoCs, is a good thing, so I don't think it
is right to avoid enabling async probing just because it results in a
splat on a single Rockchip based board (rock5b).
Kind regards,
Niklas
|
{
"author": "Niklas Cassel <cassel@kernel.org>",
"date": "Fri, 30 Jan 2026 11:25:37 +0100",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
Hi Niklas and Grimmauld,
On Fri, 30 Jan 2026 at 15:55, Niklas Cassel <cassel@kernel.org> wrote:
Thanks for testing this patch.
I've attempted to reproduce the warning but was unable to trigger it locally.
But both CONFIG_PHYLIB and CONFIG_REALTEK_PHY are selected as built-in
for the R8169 module.
I have tested with the driver built in:
CONFIG_R8169=y
CONFIG_R8169_LEDS=y
as well as built as a module:
CONFIG_R8169=m
CONFIG_R8169_LEDS=y
Yes, this could help with the PCI module probe.
Earlier, I thought the GMAC would control the r8169 module, but I was wrong.
Could you please try these changes at your end? These changes are
related to MDIO:
$ git diff ./arch/arm64/boot/dts/rockchip/rk3588-rock-5b-5bp-5t.dtsi
diff --git a/arch/arm64/boot/dts/rockchip/rk3588-rock-5b-5bp-5t.dtsi
b/arch/arm64/boot/dts/rockchip/rk3588-rock-5b-5bp-5t.dtsi
index b3e76ad2d869..fb3a8ba4085a 100644
--- a/arch/arm64/boot/dts/rockchip/rk3588-rock-5b-5bp-5t.dtsi
+++ b/arch/arm64/boot/dts/rockchip/rk3588-rock-5b-5bp-5t.dtsi
@@ -477,7 +477,6 @@ &pcie2x1l0 {
&pcie2x1l2 {
pinctrl-names = "default";
pinctrl-0 = <&pcie2_2_rst>;
- reset-gpios = <&gpio3 RK_PB0 GPIO_ACTIVE_HIGH>;
vpcie3v3-supply = <&vcc3v3_pcie2x1l2>;
status = "okay";
};
@@ -535,6 +534,12 @@ pcie3_vcc3v3_en: pcie3-vcc3v3-en {
};
};
+ rtl8211f {
+ rtl8211f_0_rst: rtl8211f-0-rst {
+ rockchip,pins = <3 RK_PB0 RK_FUNC_GPIO &pcfg_pull_none>;
+ };
+ };
+
usb {
usbc0_int: usbc0-int {
rockchip,pins = <3 RK_PB4 RK_FUNC_GPIO &pcfg_pull_none>;
@@ -550,6 +555,19 @@ &pwm1 {
status = "okay";
};
+&mdio0 {
+ rgmii_phy0: ethernet-phy@1 {
+ /* RTL8211F */
+ compatible = "ethernet-phy-id001c.c916";
+ reg = <0x1>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&rtl8211f_0_rst>;
+ reset-assert-us = <20000>;
+ reset-deassert-us = <100000>;
+ reset-gpios = <&gpio3 RK_PB0 GPIO_ACTIVE_LOW>;
+ };
+};
+
&rknn_core_0 {
npu-supply = <&vdd_npu_s0>;
sram-supply = <&vdd_npu_s0>;
Thanks
-Anand
|
{
"author": "Anand Moon <linux.amoon@gmail.com>",
"date": "Sat, 31 Jan 2026 15:08:42 +0530",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
On Sat, Jan 31, 2026 at 03:08:42PM +0530, Anand Moon wrote:
I'm running with:
CONFIG_R8169=y
CONFIG_PHYLIB=y
CONFIG_REALTEK_PHY=y
CONFIG_REALTEK_PHY_HWMON=y
CONFIG_PCIE_ROCKCHIP_DW=y
CONFIG_PCIE_ROCKCHIP_DW_HOST=y
CONFIG_PHY_ROCKCHIP_NANENG_COMBO_PHY=y
(PHY for the PCIe 2x)
$ cat /proc/cmdline
root=/dev/nfs nfsroot=192.168.1.10:/srv/nfs/rootfs_rc,nfsvers=4 ip=dhcp earlycon rootwait loglevel=8
Considering that all the PHY drivers (for both Ethernet and PCIe)
and the controller drivers (for both Ethernet and PCIe) are built-in,
having nfsroot= on the kernel command line should be no issue.
But perhaps that is the reason why you cannot reproduce it?
I tried your patch above, but I still see the splat.
But as I said, I don't think the splat should be a showstopper.
Kind regards,
Niklas
|
{
"author": "Niklas Cassel <cassel@kernel.org>",
"date": "Mon, 2 Feb 2026 10:54:58 +0100",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
On Fri, Jan 30, 2026 at 11:25:37AM +0100, Niklas Cassel wrote:
FWIW, the reason why PHYLIB tries to load the module even though it is built
as built-in (i.e. is already loaded) is explained by the following comment:
https://github.com/torvalds/linux/blob/v6.19-rc8/drivers/net/phy/phy_device.c#L852-L855
Kind regards,
Niklas
|
{
"author": "Niklas Cassel <cassel@kernel.org>",
"date": "Mon, 2 Feb 2026 11:02:09 +0100",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
Hi Niklas,
On Mon, 2 Feb 2026 at 15:25, Niklas Cassel <cassel@kernel.org> wrote:
I feel CONFIG_R8169 should not be built into the kernel image.
Since the driver is registered via module_pci_driver(rtl8169_pci_driver),
it is intended to be loaded as a module. In addition, this driver
requires external firmware during initialization, which could make a
built-in configuration problematic.
Keeping it modular ensures proper firmware loading and avoids
early-boot failures.
Thanks for sharing your setup.
Thanks for clarifying this issue. I will resubmit once I have
conducted further testing.
Thanks
-Anand
|
{
"author": "Anand Moon <linux.amoon@gmail.com>",
"date": "Mon, 2 Feb 2026 23:35:48 +0530",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v2] PCI: dw-rockchip: Enable async probe by default
|
The Rockchip DWC PCIe driver currently waits for the combo PHY
(PCIe 3.0, PCIe 2.0, and SATA 3.0) to complete link training during
boot, and it also waits for the link to come up, which can consume
several milliseconds.
To optimize boot time, this commit allows asynchronous probing.
This change enables the PCIe link establishment to occur in the
background while other devices are being probed.
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
---
v2: update the commit message to describe the changes.
---
drivers/pci/controller/dwc/pcie-dw-rockchip.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/controller/dwc/pcie-dw-rockchip.c b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
index 1170e1107508..7a895b66e4e4 100644
--- a/drivers/pci/controller/dwc/pcie-dw-rockchip.c
+++ b/drivers/pci/controller/dwc/pcie-dw-rockchip.c
@@ -616,6 +616,7 @@ static struct platform_driver rockchip_pcie_driver = {
.name = "rockchip-dw-pcie",
.of_match_table = rockchip_pcie_of_match,
.suppress_bind_attrs = true,
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rockchip_pcie_probe,
};
base-commit: ee9a43b7cfe2d8a3520335fea7d8ce71b8cabd9d
--
2.44.0
|
Hi Niklas
On Mon, 2 Feb 2026 at 15:32, Niklas Cassel <cassel@kernel.org> wrote:
Yes, I have gone through the history of changes.
Thanks
-Anand
|
{
"author": "Anand Moon <linux.amoon@gmail.com>",
"date": "Mon, 2 Feb 2026 23:37:50 +0530",
"thread_id": "CANAwSgQtWifFNFe-rK7s9VCPJ68A7LSP+va2zZWr8W+vgZOjYw@mail.gmail.com.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
These parameters can be discovered from a config register. As they will
not be used any more, mark them deprecated, make them optional, and
remove them from the example.
Signed-off-by: Sean Anderson <sean.anderson@linux.dev>
---
Documentation/devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/Documentation/devicetree/bindings/sound/xlnx,i2s.yaml b/Documentation/devicetree/bindings/sound/xlnx,i2s.yaml
index 3c2b0be07c53..180f43f2b230 100644
--- a/Documentation/devicetree/bindings/sound/xlnx,i2s.yaml
+++ b/Documentation/devicetree/bindings/sound/xlnx,i2s.yaml
@@ -29,6 +29,7 @@ properties:
enum:
- 16
- 24
+ deprecated: true
description: |
Sample data width.
@@ -36,14 +37,13 @@ properties:
$ref: /schemas/types.yaml#/definitions/uint32
minimum: 1
maximum: 4
+ deprecated: true
description: |
Number of I2S streams.
required:
- compatible
- reg
- - xlnx,dwidth
- - xlnx,num-channels
additionalProperties: false
@@ -52,14 +52,10 @@ examples:
i2s@a0080000 {
compatible = "xlnx,i2s-receiver-1.0";
reg = <0xa0080000 0x10000>;
- xlnx,dwidth = <0x18>;
- xlnx,num-channels = <1>;
};
i2s@a0090000 {
compatible = "xlnx,i2s-transmitter-1.0";
reg = <0xa0090000 0x10000>;
- xlnx,dwidth = <0x18>;
- xlnx,num-channels = <1>;
};
...
--
2.35.1.1320.gc452695387.dirty
|
{
"author": "Sean Anderson <sean.anderson@linux.dev>",
"date": "Thu, 29 Jan 2026 12:23:14 -0500",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Signed-off-by: Sean Anderson <sean.anderson@linux.dev>
---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++++++---------------------
1 file changed, 11 insertions(+), 21 deletions(-)
diff --git a/sound/soc/xilinx/xlnx_i2s.c b/sound/soc/xilinx/xlnx_i2s.c
index ca915a001ad5..a2b426376676 100644
--- a/sound/soc/xilinx/xlnx_i2s.c
+++ b/sound/soc/xilinx/xlnx_i2s.c
@@ -17,6 +17,9 @@
#define DRV_NAME "xlnx_i2s"
+#define I2S_CORE_CFG 0x04
+#define I2S_CORE_CFG_DATA_24BIT BIT(16)
+#define I2S_CORE_CFG_CHANNELS GENMASK(11, 8)
#define I2S_CORE_CTRL_OFFSET 0x08
#define I2S_CORE_CTRL_32BIT_LRCLK BIT(3)
#define I2S_CORE_CTRL_ENABLE BIT(0)
@@ -172,7 +175,7 @@ static int xlnx_i2s_probe(struct platform_device *pdev)
{
struct xlnx_i2s_drv_data *drv_data;
int ret;
- u32 format;
+ u32 format, cfg;
struct device *dev = &pdev->dev;
struct device_node *node = dev->of_node;
@@ -184,27 +187,14 @@ static int xlnx_i2s_probe(struct platform_device *pdev)
if (IS_ERR(drv_data->base))
return PTR_ERR(drv_data->base);
- ret = of_property_read_u32(node, "xlnx,num-channels", &drv_data->channels);
- if (ret < 0) {
- dev_err(dev, "cannot get supported channels\n");
- return ret;
- }
- drv_data->channels *= 2;
-
- ret = of_property_read_u32(node, "xlnx,dwidth", &drv_data->data_width);
- if (ret < 0) {
- dev_err(dev, "cannot get data width\n");
- return ret;
- }
- switch (drv_data->data_width) {
- case 16:
- format = SNDRV_PCM_FMTBIT_S16_LE;
- break;
- case 24:
+ cfg = readl(drv_data->base + I2S_CORE_CFG);
+ drv_data->channels = FIELD_GET(I2S_CORE_CFG_CHANNELS, cfg);
+ if (cfg & I2S_CORE_CFG_DATA_24BIT) {
+ drv_data->data_width = 24;
format = SNDRV_PCM_FMTBIT_S24_LE;
- break;
- default:
- return -EINVAL;
+ } else {
+ drv_data->data_width = 16;
+ format = SNDRV_PCM_FMTBIT_S16_LE;
}
if (of_device_is_compatible(node, "xlnx,i2s-transmitter-1.0")) {
--
2.35.1.1320.gc452695387.dirty
|
{
"author": "Sean Anderson <sean.anderson@linux.dev>",
"date": "Thu, 29 Jan 2026 12:23:15 -0500",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
On Thu, Jan 29, 2026 at 12:23:15PM -0500, Sean Anderson wrote:
Given that the properties already exist, it seems wise to continue to
parse them if available and prefer them over what we read from the
hardware; it would not shock me to discover that hardware exists where
the registers are inaccurate or need overriding due to bugs.
|
{
"author": "Mark Brown <broonie@kernel.org>",
"date": "Thu, 29 Jan 2026 17:27:58 +0000",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
Acked-by: Conor Dooley <conor.dooley@microchip.com>
pw-bot: not-applicable
|
{
"author": "Conor Dooley <conor@kernel.org>",
"date": "Thu, 29 Jan 2026 17:37:09 +0000",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
I don't know this device at all, so I might be asking dumb
questions...
Is it possible that the device supports multiple channels, but the use
case is mono, and so xlnx,num-channels is 1 in DT? Would that break
given your change?
Could it be that the device supports 24 bits, but the use case only wants
16, and so has this property set to 16?
Andrew
|
{
"author": "Andrew Lunn <andrew@lunn.ch>",
"date": "Thu, 29 Jan 2026 18:37:30 +0100",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
On 1/29/26 12:27, Mark Brown wrote:
I would be surprised if such hardware exists. These properties are
automatically generated by Xilinx's tools based on the HDL core's
properties. This has a few consequences:
- They always exactly match the hardware unless someone has gone in and
modified them. I think this is unlikely in this case because they
directly reflect parameters that should not need to be adjusted.
- Driver authors tend to use them even when there are hardware registers
available with the same information, as Xilinx has not always been
consistent in adding such registers.
I am not aware of any errata regarding incorrect generation of
properties for this device or cases where the number of channels or bit
depth was incorrect.
--Sean
|
{
"author": "Sean Anderson <sean.anderson@linux.dev>",
"date": "Thu, 29 Jan 2026 12:46:27 -0500",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
On 1/29/26 12:37, Andrew Lunn wrote:
drv_data->channels is multiplied by 2, so there is always an even number
of channels. Pairs of channels are always muxed together and AFAICT
there's no way to disable them individually.
I don't think that's possible. There's an option to output 32-bit audio,
but none to reduce 24-bit audio to 16 bit.
For some perspective, this is a soft core and these properties reflect
the configuration chosen when the core was built. The data path is fixed
and these devicetree properties exist to tell the driver how the core
was configured. If you set xlnx,dwidth to 16 and the core was configured
for 24-bit audio, you will silently get 24-bit audio (and the clocks
will be incorrect).
--Sean
|
{
"author": "Sean Anderson <sean.anderson@linux.dev>",
"date": "Thu, 29 Jan 2026 12:51:47 -0500",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
On Thu, Jan 29, 2026 at 12:46:27PM -0500, Sean Anderson wrote:
I'm not sure I follow your second point - driver authors tend to use
what?
I'd still rather see the properties get used if present: worst case
they're redundant, best case we avoid regressing a currently working
system. The code is already there; it just needs tweaking to make parse
failures non-fatal.
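A hedged sketch of that tweak (based on the register layout in the patch above; the helper name is illustrative): parse the DT properties when present, and only fall back to the config register when they are absent:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/io.h>
#include <linux/of.h>

#define I2S_CORE_CFG			0x04
#define I2S_CORE_CFG_DATA_24BIT		BIT(16)
#define I2S_CORE_CFG_CHANNELS		GENMASK(11, 8)

static void xlnx_i2s_get_params(struct device_node *node, void __iomem *base,
				u32 *channels, u32 *data_width)
{
	u32 cfg = readl(base + I2S_CORE_CFG);

	/* Prefer the legacy DT property (counted in stereo pairs)... */
	if (!of_property_read_u32(node, "xlnx,num-channels", channels))
		*channels *= 2;
	else	/* ...and only read the hardware when it is absent. */
		*channels = FIELD_GET(I2S_CORE_CFG_CHANNELS, cfg);

	if (of_property_read_u32(node, "xlnx,dwidth", data_width))
		*data_width = (cfg & I2S_CORE_CFG_DATA_24BIT) ? 24 : 16;
}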
|
{
"author": "Mark Brown <broonie@kernel.org>",
"date": "Thu, 29 Jan 2026 18:09:28 +0000",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
On 1/29/26 13:09, Mark Brown wrote:
Authors look at the devicetree node and see something like
i2s0_tx: i2s_transmitter@80120000 {
aud_mclk = <99999001>;
clock-names = "aud_mclk", "s_axi_ctrl_aclk", "s_axis_aud_aclk";
clocks = <&zynqmp_clk 74>, <&zynqmp_clk 71>, <&zynqmp_clk 71>;
compatible = "xlnx,i2s-transmitter-1.0", "xlnx,i2s-transmitter-1.0";
interrupt-names = "irq";
interrupt-parent = <&gic>;
interrupts = <0 105 4>;
reg = <0x0 0x80120000 0x0 0x10000>;
xlnx,depth = <0x80>;
xlnx,dwidth = <0x18>;
xlnx,num-channels = <0x1>;
xlnx,snd-pcm = <&i2s0_dma>;
};
and go "Ah, there are the properties I need." On some Xilinx cores this
is the only way to discover certain properties, so people have gotten into
the habit of using them even when these properties can be read from the
device itself.
I would rather remove it for the code size reduction and simplification.
--Sean
|
{
"author": "Sean Anderson <sean.anderson@linux.dev>",
"date": "Thu, 29 Jan 2026 13:17:45 -0500",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
On Thu, Jan 29, 2026 at 01:17:45PM -0500, Sean Anderson wrote:
Oh. If the properties are there it's reasonable and sensible to use
them; them being redundant is a concern when specifying the binding, but
once things are there any discrepancy should usually be resolved in
favour of the binding.
We're talking a couple of function calls with no error handling here,
I'm not sure anyone concerned about that kind of impact is running
Linux.
|
{
"author": "Mark Brown <broonie@kernel.org>",
"date": "Thu, 29 Jan 2026 18:46:23 +0000",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
On Thu, Jan 29, 2026 at 12:46:27PM -0500, Sean Anderson wrote:
Does version 0.0 of this IP core have this register? It's not a new
addition?
Is there a synthesis option to disable this register?
Andrew
|
{
"author": "Andrew Lunn <andrew@lunn.ch>",
"date": "Thu, 29 Jan 2026 20:58:19 +0100",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
On 1/29/26 14:58, Andrew Lunn wrote:
As far as I know, this register was present in 1.0 revision 0. I
reviewed the changelog for the core as well as the product guide
changelog and found no mention of any register additions.
No.
--Sean
|
{
"author": "Sean Anderson <sean.anderson@linux.dev>",
"date": "Thu, 29 Jan 2026 15:13:07 -0500",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
Hi Sean,
kernel test robot noticed the following build errors:
[auto build test ERROR on broonie-sound/for-next]
[also build test ERROR on broonie-spi/for-next linus/master v6.19-rc7 next-20260129]
[cannot apply to xilinx-xlnx/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Sean-Anderson/dt-bindings-sound-xlnx-i2s-Make-discoverable-parameters-optional/20260130-012955
base: https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git for-next
patch link: https://lore.kernel.org/r/20260129172315.3871602-3-sean.anderson%40linux.dev
patch subject: [PATCH 2/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
config: sh-allyesconfig (https://download.01.org/0day-ci/archive/20260130/202601301436.qPUffKmd-lkp@intel.com/config)
compiler: sh4-linux-gcc (GCC) 15.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260130/202601301436.qPUffKmd-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601301436.qPUffKmd-lkp@intel.com/
All errors (new ones prefixed by >>):
sound/soc/xilinx/xlnx_i2s.c: In function 'xlnx_i2s_probe':
191 | drv_data->channels = FIELD_GET(I2S_CORE_CFG_CHANNELS, cfg);
| ^~~~~~~~~
vim +/FIELD_GET +191 sound/soc/xilinx/xlnx_i2s.c
173
174 static int xlnx_i2s_probe(struct platform_device *pdev)
175 {
176 struct xlnx_i2s_drv_data *drv_data;
177 int ret;
178 u32 format, cfg;
179 struct device *dev = &pdev->dev;
180 struct device_node *node = dev->of_node;
181
182 drv_data = devm_kzalloc(&pdev->dev, sizeof(*drv_data), GFP_KERNEL);
183 if (!drv_data)
184 return -ENOMEM;
185
186 drv_data->base = devm_platform_ioremap_resource(pdev, 0);
187 if (IS_ERR(drv_data->base))
188 return PTR_ERR(drv_data->base);
189
190 cfg = readl(drv_data->base + I2S_CORE_CFG);
> 191 drv_data->channels = FIELD_GET(I2S_CORE_CFG_CHANNELS, cfg);
192 if (cfg & I2S_CORE_CFG_DATA_24BIT) {
193 drv_data->data_width = 24;
194 format = SNDRV_PCM_FMTBIT_S24_LE;
195 } else {
196 drv_data->data_width = 16;
197 format = SNDRV_PCM_FMTBIT_S16_LE;
198 }
199
200 if (of_device_is_compatible(node, "xlnx,i2s-transmitter-1.0")) {
201 drv_data->dai_drv.name = "xlnx_i2s_playback";
202 drv_data->dai_drv.playback.stream_name = "Playback";
203 drv_data->dai_drv.playback.formats = format;
204 drv_data->dai_drv.playback.channels_min = drv_data->channels;
205 drv_data->dai_drv.playback.channels_max = drv_data->channels;
206 drv_data->dai_drv.playback.rates = SNDRV_PCM_RATE_8000_192000;
207 drv_data->dai_drv.ops = &xlnx_i2s_dai_ops;
208 } else if (of_device_is_compatible(node, "xlnx,i2s-receiver-1.0")) {
209 drv_data->dai_drv.name = "xlnx_i2s_capture";
210 drv_data->dai_drv.capture.stream_name = "Capture";
211 drv_data->dai_drv.capture.formats = format;
212 drv_data->dai_drv.capture.channels_min = drv_data->channels;
213 drv_data->dai_drv.capture.channels_max = drv_data->channels;
214 drv_data->dai_drv.capture.rates = SNDRV_PCM_RATE_8000_192000;
215 drv_data->dai_drv.ops = &xlnx_i2s_dai_ops;
216 } else {
217 return -ENODEV;
218 }
219 drv_data->is_32bit_lrclk = readl(drv_data->base + I2S_CORE_CTRL_OFFSET) &
220 I2S_CORE_CTRL_32BIT_LRCLK;
221
222 dev_set_drvdata(&pdev->dev, drv_data);
223
224 ret = devm_snd_soc_register_component(&pdev->dev, &xlnx_i2s_component,
225 &drv_data->dai_drv, 1);
226 if (ret) {
227 dev_err(&pdev->dev, "i2s component registration failed\n");
228 return ret;
229 }
230
231 dev_info(&pdev->dev, "%s DAI registered\n", drv_data->dai_drv.name);
232
233 return ret;
234 }
235
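A note on the likely cause (the compiler's error line itself appears clipped
above): the flagged statement is the FIELD_GET() call, and FIELD_GET() is
declared in <linux/bitfield.h>. Assuming the driver does not already pull
that header in indirectly, a minimal fix sketch would be:

   #include <linux/bitfield.h>	/* provides FIELD_GET()/FIELD_PREP() */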
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
|
{
"author": "kernel test robot <lkp@intel.com>",
"date": "Fri, 30 Jan 2026 14:35:30 +0800",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
+Katta, Vishal
On 1/29/26 19:46, Mark Brown wrote:
Let me add the owner of our driver for this device to answer some questions.
Katta: Can you please look at it?
Thanks,
Michal
|
{
"author": "Michal Simek <michal.simek@amd.com>",
"date": "Fri, 30 Jan 2026 09:19:26 +0100",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/2] ASoC: xilinx: xlnx_i2s: Discover parameters from registers
|
Xilinx helpfully included a read-only "config" register that contains
configuration parameters. Discover our parameters from this register
instead of reading them from the device tree.
Sean Anderson (2):
dt-bindings: sound: xlnx,i2s: Make discoverable parameters optional
ASoC: xilinx: xlnx_i2s: Discover parameters from registers
.../devicetree/bindings/sound/xlnx,i2s.yaml | 8 ++---
sound/soc/xilinx/xlnx_i2s.c | 32 +++++++------------
2 files changed, 13 insertions(+), 27 deletions(-)
--
2.35.1.1320.gc452695387.dirty
|
Hi,
>> and go "Ah, there are the properties I need." On some Xilinx cores this
>> is the only way to discover certain properties, so people have gotten into
>> the habit of using them even when these properties can be read from the
>> device itself.
> Oh. If the properties are there it's reasonable and sensible to use
> them, them being redundant is a concern when specifying the binding but
> once things are there any discrepancy should usually be resolved in
> favour of the binding.
I also think making the hardware registers take priority over the DTS
makes sense (e.g. what this patch does), as the DTS can get out of sync
with the (programmable) HW configuration if the FPGA config is changed
and a DTS update is forgotten.
>> I would rather remove it for the code size reduction and simplification.
> We're talking a couple of function calls with no error handling here,
> I'm not sure anyone concerned about that kind of impact is running
> Linux.
Agreed, but the HW registers are by definition always in sync with the
IP block configuration, whereas the DTS involves a manual update and may
not be, so the HW registers are more reliable than the DTS.
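For illustration, a rough sketch of the register-first, DT-fallback pattern
being discussed; the register offset, field mask and property name below are
placeholders, not the real xlnx,i2s definitions:

   u32 cfg = readl(base + CORE_CFG_REG);	/* hypothetical config offset */
   u32 channels = FIELD_GET(CORE_CFG_CHANNELS_MASK, cfg);

   if (!channels)	/* hypothetical fallback for a core without the register */
   	of_property_read_u32(np, "xlnx,num-channels", &channels);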
--
Bye, Peter Korsgaard
|
{
"author": "Peter Korsgaard <peter@korsgaard.com>",
"date": "Mon, 02 Feb 2026 18:52:27 +0100",
"thread_id": "87jywvvxr8.fsf@dell.be.48ers.dk.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
Modify online_memory_block() to accept the online type through its arg
parameter rather than calling mhp_get_default_online_type() internally.
This prepares for allowing callers to specify explicit online types.
Update the caller in add_memory_resource() to pass the default online
type via a local variable.
No functional change.
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
mm/memory_hotplug.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index bc805029da51..87796b617d9e 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1337,7 +1337,9 @@ static int check_hotplug_memory_range(u64 start, u64 size)
static int online_memory_block(struct memory_block *mem, void *arg)
{
- mem->online_type = mhp_get_default_online_type();
+ int *online_type = arg;
+
+ mem->online_type = *online_type;
return device_online(&mem->dev);
}
@@ -1578,8 +1580,12 @@ int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
merge_system_ram_resource(res);
/* online pages if requested */
- if (mhp_get_default_online_type() != MMOP_OFFLINE)
- walk_memory_blocks(start, size, NULL, online_memory_block);
+ if (mhp_get_default_online_type() != MMOP_OFFLINE) {
+ int online_type = mhp_get_default_online_type();
+
+ walk_memory_blocks(start, size, &online_type,
+ online_memory_block);
+ }
return ret;
error:
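As a minimal sketch of where this leads (mirroring the hunk above, not part
of this patch): a later caller that wants a non-default policy can now pass
it through the same argument, e.g.

   int online_type = MMOP_ONLINE_MOVABLE;	/* explicit, not the system default */

   walk_memory_blocks(start, size, &online_type, online_memory_block);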
--
2.52.0
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Thu, 29 Jan 2026 16:04:34 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
Enable the dax kmem driver to select how to online the memory rather than
implicitly depending on the system default. This will allow users of
dax to plumb through a preferred auto-online policy for their region.
Refactor and new interface:
Add __add_memory_driver_managed() which accepts an explicit online_type
and export mhp_get_default_online_type() so callers can pass it when
they want the default behavior.
Refactor:
Extract __add_memory_resource() to take an explicit online_type parameter,
and update add_memory_resource() to pass the system default.
No functional change for existing users.
Cc: David Hildenbrand <david@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
include/linux/memory_hotplug.h | 3 ++
mm/memory_hotplug.c | 91 ++++++++++++++++++++++++----------
2 files changed, 67 insertions(+), 27 deletions(-)
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index f2f16cdd73ee..1eb63d1a247d 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -293,6 +293,9 @@ extern int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags);
extern int add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags);
extern int add_memory_resource(int nid, struct resource *resource,
mhp_t mhp_flags);
+int __add_memory_driver_managed(int nid, u64 start, u64 size,
+ const char *resource_name, mhp_t mhp_flags,
+ int online_type);
extern int add_memory_driver_managed(int nid, u64 start, u64 size,
const char *resource_name,
mhp_t mhp_flags);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 87796b617d9e..d3ca95b872bd 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -239,6 +239,7 @@ int mhp_get_default_online_type(void)
return mhp_default_online_type;
}
+EXPORT_SYMBOL_GPL(mhp_get_default_online_type);
void mhp_set_default_online_type(int online_type)
{
@@ -1490,7 +1491,8 @@ static int create_altmaps_and_memory_blocks(int nid, struct memory_group *group,
*
* we are OK calling __meminit stuff here - we have CONFIG_MEMORY_HOTPLUG
*/
-int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
+static int __add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags,
+ int online_type)
{
struct mhp_params params = { .pgprot = pgprot_mhp(PAGE_KERNEL) };
enum memblock_flags memblock_flags = MEMBLOCK_NONE;
@@ -1580,12 +1582,9 @@ int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
merge_system_ram_resource(res);
/* online pages if requested */
- if (mhp_get_default_online_type() != MMOP_OFFLINE) {
- int online_type = mhp_get_default_online_type();
-
+ if (online_type != MMOP_OFFLINE)
walk_memory_blocks(start, size, &online_type,
online_memory_block);
- }
return ret;
error:
@@ -1601,7 +1600,13 @@ int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
return ret;
}
-/* requires device_hotplug_lock, see add_memory_resource() */
+int add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
+{
+ return __add_memory_resource(nid, res, mhp_flags,
+ mhp_get_default_online_type());
+}
+
+/* requires device_hotplug_lock, see __add_memory_resource() */
int __add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags)
{
struct resource *res;
@@ -1629,29 +1634,24 @@ int add_memory(int nid, u64 start, u64 size, mhp_t mhp_flags)
}
EXPORT_SYMBOL_GPL(add_memory);
-/*
- * Add special, driver-managed memory to the system as system RAM. Such
- * memory is not exposed via the raw firmware-provided memmap as system
- * RAM, instead, it is detected and added by a driver - during cold boot,
- * after a reboot, and after kexec.
- *
- * Reasons why this memory should not be used for the initial memmap of a
- * kexec kernel or for placing kexec images:
- * - The booting kernel is in charge of determining how this memory will be
- * used (e.g., use persistent memory as system RAM)
- * - Coordination with a hypervisor is required before this memory
- * can be used (e.g., inaccessible parts).
+/**
+ * __add_memory_driver_managed - add driver-managed memory with explicit online_type
+ * @nid: NUMA node ID where the memory will be added
+ * @start: Start physical address of the memory range
+ * @size: Size of the memory range in bytes
+ * @resource_name: Resource name in format "System RAM ($DRIVER)"
+ * @mhp_flags: Memory hotplug flags
+ * @online_type: Online behavior (MMOP_ONLINE, MMOP_ONLINE_KERNEL,
+ * MMOP_ONLINE_MOVABLE, or MMOP_OFFLINE)
*
- * For this memory, no entries in /sys/firmware/memmap ("raw firmware-provided
- * memory map") are created. Also, the created memory resource is flagged
- * with IORESOURCE_SYSRAM_DRIVER_MANAGED, so in-kernel users can special-case
- * this memory as well (esp., not place kexec images onto it).
+ * Add driver-managed memory with explicit online_type specification.
+ * The resource_name must have the format "System RAM ($DRIVER)".
*
- * The resource_name (visible via /proc/iomem) has to have the format
- * "System RAM ($DRIVER)".
+ * Return: 0 on success, negative error code on failure.
*/
-int add_memory_driver_managed(int nid, u64 start, u64 size,
- const char *resource_name, mhp_t mhp_flags)
+int __add_memory_driver_managed(int nid, u64 start, u64 size,
+ const char *resource_name, mhp_t mhp_flags,
+ int online_type)
{
struct resource *res;
int rc;
@@ -1661,6 +1661,9 @@ int add_memory_driver_managed(int nid, u64 start, u64 size,
resource_name[strlen(resource_name) - 1] != ')')
return -EINVAL;
+ if (online_type < 0 || online_type > MMOP_ONLINE_MOVABLE)
+ return -EINVAL;
+
lock_device_hotplug();
res = register_memory_resource(start, size, resource_name);
@@ -1669,7 +1672,7 @@ int add_memory_driver_managed(int nid, u64 start, u64 size,
goto out_unlock;
}
- rc = add_memory_resource(nid, res, mhp_flags);
+ rc = __add_memory_resource(nid, res, mhp_flags, online_type);
if (rc < 0)
release_memory_resource(res);
@@ -1677,6 +1680,40 @@ int add_memory_driver_managed(int nid, u64 start, u64 size,
unlock_device_hotplug();
return rc;
}
+EXPORT_SYMBOL_FOR_MODULES(__add_memory_driver_managed, "kmem");
+
+/*
+ * Add special, driver-managed memory to the system as system RAM. Such
+ * memory is not exposed via the raw firmware-provided memmap as system
+ * RAM, instead, it is detected and added by a driver - during cold boot,
+ * after a reboot, and after kexec.
+ *
+ * Reasons why this memory should not be used for the initial memmap of a
+ * kexec kernel or for placing kexec images:
+ * - The booting kernel is in charge of determining how this memory will be
+ * used (e.g., use persistent memory as system RAM)
+ * - Coordination with a hypervisor is required before this memory
+ * can be used (e.g., inaccessible parts).
+ *
+ * For this memory, no entries in /sys/firmware/memmap ("raw firmware-provided
+ * memory map") are created. Also, the created memory resource is flagged
+ * with IORESOURCE_SYSRAM_DRIVER_MANAGED, so in-kernel users can special-case
+ * this memory as well (esp., not place kexec images onto it).
+ *
+ * The resource_name (visible via /proc/iomem) has to have the format
+ * "System RAM ($DRIVER)".
+ *
+ * Memory will be onlined using the system default online type.
+ *
+ * Returns 0 on success, negative error code on failure.
+ */
+int add_memory_driver_managed(int nid, u64 start, u64 size,
+ const char *resource_name, mhp_t mhp_flags)
+{
+ return __add_memory_driver_managed(nid, start, size, resource_name,
+ mhp_flags,
+ mhp_get_default_online_type());
+}
EXPORT_SYMBOL_GPL(add_memory_driver_managed);
/*
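A minimal usage sketch of the new interface (not part of the patch; the nid,
range and resource name are placeholders, while MHP_NONE and
MMOP_ONLINE_MOVABLE are existing memory_hotplug constants):

   rc = __add_memory_driver_managed(nid, range.start, range_len(&range),
   				    "System RAM (example)", MHP_NONE,
   				    MMOP_ONLINE_MOVABLE);
   if (rc)
   	dev_warn(dev, "memory add failed: %d\n", rc);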
--
2.52.0
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Thu, 29 Jan 2026 16:04:35 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
There is no way for drivers leveraging dax_kmem to plumb through a
preferred auto-online policy - the system default policy is forced.
Add online_type field to DAX device creation path to allow drivers
to specify an auto-online policy when using the kmem driver.
Current callers initialize online_type to mhp_get_default_online_type()
which resolves to the system default (memhp_default_online_type).
No functional change to existing drivers.
Cc: David Hildenbrand <david@kernel.org>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
drivers/cxl/core/region.c | 2 ++
drivers/cxl/cxl.h | 1 +
drivers/dax/bus.c | 3 +++
drivers/dax/bus.h | 1 +
drivers/dax/cxl.c | 1 +
drivers/dax/dax-private.h | 2 ++
drivers/dax/hmem/hmem.c | 2 ++
drivers/dax/kmem.c | 13 +++++++++++--
drivers/dax/pmem.c | 2 ++
9 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 5bd1213737fa..eef5d5fe3f95 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. All rights reserved. */
#include <linux/memregion.h>
+#include <linux/memory_hotplug.h>
#include <linux/genalloc.h>
#include <linux/debugfs.h>
#include <linux/device.h>
@@ -3459,6 +3460,7 @@ static int devm_cxl_add_dax_region(struct cxl_region *cxlr)
if (IS_ERR(cxlr_dax))
return PTR_ERR(cxlr_dax);
+ cxlr_dax->online_type = mhp_get_default_online_type();
dev = &cxlr_dax->dev;
rc = dev_set_name(dev, "dax_region%d", cxlr->id);
if (rc)
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index ba17fa86d249..07d57d13f4c7 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -591,6 +591,7 @@ struct cxl_dax_region {
struct device dev;
struct cxl_region *cxlr;
struct range hpa_range;
+ int online_type; /* MMOP_ value for kmem driver */
};
/**
diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
index fde29e0ad68b..121a6dd0afe7 100644
--- a/drivers/dax/bus.c
+++ b/drivers/dax/bus.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2017-2018 Intel Corporation. All rights reserved. */
#include <linux/memremap.h>
+#include <linux/memory_hotplug.h>
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/list.h>
@@ -395,6 +396,7 @@ static ssize_t create_store(struct device *dev, struct device_attribute *attr,
.size = 0,
.id = -1,
.memmap_on_memory = false,
+ .online_type = mhp_get_default_online_type(),
};
struct dev_dax *dev_dax = __devm_create_dev_dax(&data);
@@ -1494,6 +1496,7 @@ static struct dev_dax *__devm_create_dev_dax(struct dev_dax_data *data)
ida_init(&dev_dax->ida);
dev_dax->memmap_on_memory = data->memmap_on_memory;
+ dev_dax->online_type = data->online_type;
inode = dax_inode(dax_dev);
dev->devt = inode->i_rdev;
diff --git a/drivers/dax/bus.h b/drivers/dax/bus.h
index cbbf64443098..4ac92a4edfe7 100644
--- a/drivers/dax/bus.h
+++ b/drivers/dax/bus.h
@@ -24,6 +24,7 @@ struct dev_dax_data {
resource_size_t size;
int id;
bool memmap_on_memory;
+ int online_type; /* MMOP_ value for kmem driver */
};
struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data);
diff --git a/drivers/dax/cxl.c b/drivers/dax/cxl.c
index 13cd94d32ff7..856a0cd24f3b 100644
--- a/drivers/dax/cxl.c
+++ b/drivers/dax/cxl.c
@@ -27,6 +27,7 @@ static int cxl_dax_region_probe(struct device *dev)
.id = -1,
.size = range_len(&cxlr_dax->hpa_range),
.memmap_on_memory = true,
+ .online_type = cxlr_dax->online_type,
};
return PTR_ERR_OR_ZERO(devm_create_dev_dax(&data));
diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h
index c6ae27c982f4..9559718cc988 100644
--- a/drivers/dax/dax-private.h
+++ b/drivers/dax/dax-private.h
@@ -77,6 +77,7 @@ struct dev_dax_range {
* @dev: device core
* @pgmap: pgmap for memmap setup / lifetime (driver owned)
* @memmap_on_memory: allow kmem to put the memmap in the memory
+ * @online_type: MMOP_* online type for memory hotplug
* @nr_range: size of @ranges
* @ranges: range tuples of memory used
*/
@@ -91,6 +92,7 @@ struct dev_dax {
struct device dev;
struct dev_pagemap *pgmap;
bool memmap_on_memory;
+ int online_type;
int nr_range;
struct dev_dax_range *ranges;
};
diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c
index c18451a37e4f..119914b08fd9 100644
--- a/drivers/dax/hmem/hmem.c
+++ b/drivers/dax/hmem/hmem.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/platform_device.h>
+#include <linux/memory_hotplug.h>
#include <linux/memregion.h>
#include <linux/module.h>
#include <linux/dax.h>
@@ -36,6 +37,7 @@ static int dax_hmem_probe(struct platform_device *pdev)
.id = -1,
.size = region_idle ? 0 : range_len(&mri->range),
.memmap_on_memory = false,
+ .online_type = mhp_get_default_online_type(),
};
return PTR_ERR_OR_ZERO(devm_create_dev_dax(&data));
diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
index c036e4d0b610..550dc605229e 100644
--- a/drivers/dax/kmem.c
+++ b/drivers/dax/kmem.c
@@ -16,6 +16,11 @@
#include "dax-private.h"
#include "bus.h"
+/* Internal function exported only to kmem module */
+extern int __add_memory_driver_managed(int nid, u64 start, u64 size,
+ const char *resource_name,
+ mhp_t mhp_flags, int online_type);
+
/*
* Default abstract distance assigned to the NUMA node onlined
* by DAX/kmem if the low level platform driver didn't initialize
@@ -72,6 +77,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
struct dax_kmem_data *data;
struct memory_dev_type *mtype;
int i, rc, mapped = 0;
+ int online_type;
mhp_t mhp_flags;
int numa_node;
int adist = MEMTIER_DEFAULT_DAX_ADISTANCE;
@@ -134,6 +140,8 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
goto err_reg_mgid;
data->mgid = rc;
+ online_type = dev_dax->online_type;
+
for (i = 0; i < dev_dax->nr_range; i++) {
struct resource *res;
struct range range;
@@ -174,8 +182,9 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
* Ensure that future kexec'd kernels will not treat
* this as RAM automatically.
*/
- rc = add_memory_driver_managed(data->mgid, range.start,
- range_len(&range), kmem_name, mhp_flags);
+ rc = __add_memory_driver_managed(data->mgid, range.start,
+ range_len(&range), kmem_name, mhp_flags,
+ online_type);
if (rc) {
dev_warn(dev, "mapping%d: %#llx-%#llx memory add failed\n",
diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c
index bee93066a849..a5925146b09f 100644
--- a/drivers/dax/pmem.c
+++ b/drivers/dax/pmem.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
+#include <linux/memory_hotplug.h>
#include <linux/memremap.h>
#include <linux/module.h>
#include "../nvdimm/pfn.h"
@@ -63,6 +64,7 @@ static struct dev_dax *__dax_pmem_probe(struct device *dev)
.pgmap = &pgmap,
.size = range_len(&range),
.memmap_on_memory = false,
+ .online_type = mhp_get_default_online_type(),
};
return devm_create_dev_dax(&data);
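A small sketch of what the new field enables (illustrative values, not taken
from this patch): a dax-region creator that wants movable onlining instead of
the system default could fill in

   struct dev_dax_data data = {
   	.dax_region = dax_region,
   	.id = -1,
   	.size = region_size,
   	.memmap_on_memory = false,
   	.online_type = MMOP_ONLINE_MOVABLE,	/* rather than mhp_get_default_online_type() */
   };

   dev_dax = devm_create_dev_dax(&data);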
--
2.52.0
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Thu, 29 Jan 2026 16:04:36 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
Move the pmem region driver logic from region.c into pmem_region.c.
No functional changes.
Signed-off-by: Gregory Price <gourry@gourry.net>
---
drivers/cxl/core/Makefile | 1 +
drivers/cxl/core/core.h | 1 +
drivers/cxl/core/pmem_region.c | 191 +++++++++++++++++++++++++++++++++
drivers/cxl/core/region.c | 184 -------------------------------
4 files changed, 193 insertions(+), 184 deletions(-)
create mode 100644 drivers/cxl/core/pmem_region.c
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index 5ad8fef210b5..23269c81fd44 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -17,6 +17,7 @@ cxl_core-y += cdat.o
cxl_core-y += ras.o
cxl_core-$(CONFIG_TRACING) += trace.o
cxl_core-$(CONFIG_CXL_REGION) += region.o
+cxl_core-$(CONFIG_CXL_REGION) += pmem_region.o
cxl_core-$(CONFIG_CXL_MCE) += mce.o
cxl_core-$(CONFIG_CXL_FEATURES) += features.o
cxl_core-$(CONFIG_CXL_EDAC_MEM_FEATURES) += edac.o
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index dd987ef2def5..26991de12d76 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -43,6 +43,7 @@ int cxl_get_poison_by_endpoint(struct cxl_port *port);
struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa);
u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
u64 dpa);
+int devm_cxl_add_pmem_region(struct cxl_region *cxlr);
#else
static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr,
diff --git a/drivers/cxl/core/pmem_region.c b/drivers/cxl/core/pmem_region.c
new file mode 100644
index 000000000000..81b66e548bb5
--- /dev/null
+++ b/drivers/cxl/core/pmem_region.c
@@ -0,0 +1,191 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2022 Intel Corporation. All rights reserved. */
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <cxlmem.h>
+#include <cxl.h>
+#include "core.h"
+
+static void cxl_pmem_region_release(struct device *dev)
+{
+ struct cxl_pmem_region *cxlr_pmem = to_cxl_pmem_region(dev);
+ int i;
+
+ for (i = 0; i < cxlr_pmem->nr_mappings; i++) {
+ struct cxl_memdev *cxlmd = cxlr_pmem->mapping[i].cxlmd;
+
+ put_device(&cxlmd->dev);
+ }
+
+ kfree(cxlr_pmem);
+}
+
+static const struct attribute_group *cxl_pmem_region_attribute_groups[] = {
+ &cxl_base_attribute_group,
+ NULL,
+};
+
+const struct device_type cxl_pmem_region_type = {
+ .name = "cxl_pmem_region",
+ .release = cxl_pmem_region_release,
+ .groups = cxl_pmem_region_attribute_groups,
+};
+bool is_cxl_pmem_region(struct device *dev)
+{
+ return dev->type == &cxl_pmem_region_type;
+}
+EXPORT_SYMBOL_NS_GPL(is_cxl_pmem_region, "CXL");
+
+struct cxl_pmem_region *to_cxl_pmem_region(struct device *dev)
+{
+ if (dev_WARN_ONCE(dev, !is_cxl_pmem_region(dev),
+ "not a cxl_pmem_region device\n"))
+ return NULL;
+ return container_of(dev, struct cxl_pmem_region, dev);
+}
+EXPORT_SYMBOL_NS_GPL(to_cxl_pmem_region, "CXL");
+static struct lock_class_key cxl_pmem_region_key;
+
+static int cxl_pmem_region_alloc(struct cxl_region *cxlr)
+{
+ struct cxl_region_params *p = &cxlr->params;
+ struct cxl_nvdimm_bridge *cxl_nvb;
+ struct device *dev;
+ int i;
+
+ guard(rwsem_read)(&cxl_rwsem.region);
+ if (p->state != CXL_CONFIG_COMMIT)
+ return -ENXIO;
+
+ struct cxl_pmem_region *cxlr_pmem __free(kfree) =
+ kzalloc(struct_size(cxlr_pmem, mapping, p->nr_targets), GFP_KERNEL);
+ if (!cxlr_pmem)
+ return -ENOMEM;
+
+ cxlr_pmem->hpa_range.start = p->res->start;
+ cxlr_pmem->hpa_range.end = p->res->end;
+
+ /* Snapshot the region configuration underneath the cxl_rwsem.region */
+ cxlr_pmem->nr_mappings = p->nr_targets;
+ for (i = 0; i < p->nr_targets; i++) {
+ struct cxl_endpoint_decoder *cxled = p->targets[i];
+ struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
+ struct cxl_pmem_region_mapping *m = &cxlr_pmem->mapping[i];
+
+ /*
+ * Regions never span CXL root devices, so by definition the
+ * bridge for one device is the same for all.
+ */
+ if (i == 0) {
+ cxl_nvb = cxl_find_nvdimm_bridge(cxlmd->endpoint);
+ if (!cxl_nvb)
+ return -ENODEV;
+ cxlr->cxl_nvb = cxl_nvb;
+ }
+ m->cxlmd = cxlmd;
+ get_device(&cxlmd->dev);
+ m->start = cxled->dpa_res->start;
+ m->size = resource_size(cxled->dpa_res);
+ m->position = i;
+ }
+
+ dev = &cxlr_pmem->dev;
+ device_initialize(dev);
+ lockdep_set_class(&dev->mutex, &cxl_pmem_region_key);
+ device_set_pm_not_required(dev);
+ dev->parent = &cxlr->dev;
+ dev->bus = &cxl_bus_type;
+ dev->type = &cxl_pmem_region_type;
+ cxlr_pmem->cxlr = cxlr;
+ cxlr->cxlr_pmem = no_free_ptr(cxlr_pmem);
+
+ return 0;
+}
+
+static void cxlr_pmem_unregister(void *_cxlr_pmem)
+{
+ struct cxl_pmem_region *cxlr_pmem = _cxlr_pmem;
+ struct cxl_region *cxlr = cxlr_pmem->cxlr;
+ struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb;
+
+ /*
+ * Either the bridge is in ->remove() context under the device_lock(),
+ * or cxlr_release_nvdimm() is cancelling the bridge's release action
+ * for @cxlr_pmem and doing it itself (while manually holding the bridge
+ * lock).
+ */
+ device_lock_assert(&cxl_nvb->dev);
+ cxlr->cxlr_pmem = NULL;
+ cxlr_pmem->cxlr = NULL;
+ device_unregister(&cxlr_pmem->dev);
+}
+
+static void cxlr_release_nvdimm(void *_cxlr)
+{
+ struct cxl_region *cxlr = _cxlr;
+ struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb;
+
+ scoped_guard(device, &cxl_nvb->dev) {
+ if (cxlr->cxlr_pmem)
+ devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister,
+ cxlr->cxlr_pmem);
+ }
+ cxlr->cxl_nvb = NULL;
+ put_device(&cxl_nvb->dev);
+}
+
+/**
+ * devm_cxl_add_pmem_region() - add a cxl_region-to-nd_region bridge
+ * @cxlr: parent CXL region for this pmem region bridge device
+ *
+ * Return: 0 on success negative error code on failure.
+ */
+int devm_cxl_add_pmem_region(struct cxl_region *cxlr)
+{
+ struct cxl_pmem_region *cxlr_pmem;
+ struct cxl_nvdimm_bridge *cxl_nvb;
+ struct device *dev;
+ int rc;
+
+ rc = cxl_pmem_region_alloc(cxlr);
+ if (rc)
+ return rc;
+ cxlr_pmem = cxlr->cxlr_pmem;
+ cxl_nvb = cxlr->cxl_nvb;
+
+ dev = &cxlr_pmem->dev;
+ rc = dev_set_name(dev, "pmem_region%d", cxlr->id);
+ if (rc)
+ goto err;
+
+ rc = device_add(dev);
+ if (rc)
+ goto err;
+
+ dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent),
+ dev_name(dev));
+
+ scoped_guard(device, &cxl_nvb->dev) {
+ if (cxl_nvb->dev.driver)
+ rc = devm_add_action_or_reset(&cxl_nvb->dev,
+ cxlr_pmem_unregister,
+ cxlr_pmem);
+ else
+ rc = -ENXIO;
+ }
+
+ if (rc)
+ goto err_bridge;
+
+ /* @cxlr carries a reference on @cxl_nvb until cxlr_release_nvdimm */
+ return devm_add_action_or_reset(&cxlr->dev, cxlr_release_nvdimm, cxlr);
+
+err:
+ put_device(dev);
+err_bridge:
+ put_device(&cxl_nvb->dev);
+ cxlr->cxl_nvb = NULL;
+ return rc;
+}
+
+
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index e4097c464ed3..fc56f8f03805 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -2747,46 +2747,6 @@ static ssize_t delete_region_store(struct device *dev,
}
DEVICE_ATTR_WO(delete_region);
-static void cxl_pmem_region_release(struct device *dev)
-{
- struct cxl_pmem_region *cxlr_pmem = to_cxl_pmem_region(dev);
- int i;
-
- for (i = 0; i < cxlr_pmem->nr_mappings; i++) {
- struct cxl_memdev *cxlmd = cxlr_pmem->mapping[i].cxlmd;
-
- put_device(&cxlmd->dev);
- }
-
- kfree(cxlr_pmem);
-}
-
-static const struct attribute_group *cxl_pmem_region_attribute_groups[] = {
- &cxl_base_attribute_group,
- NULL,
-};
-
-const struct device_type cxl_pmem_region_type = {
- .name = "cxl_pmem_region",
- .release = cxl_pmem_region_release,
- .groups = cxl_pmem_region_attribute_groups,
-};
-
-bool is_cxl_pmem_region(struct device *dev)
-{
- return dev->type == &cxl_pmem_region_type;
-}
-EXPORT_SYMBOL_NS_GPL(is_cxl_pmem_region, "CXL");
-
-struct cxl_pmem_region *to_cxl_pmem_region(struct device *dev)
-{
- if (dev_WARN_ONCE(dev, !is_cxl_pmem_region(dev),
- "not a cxl_pmem_region device\n"))
- return NULL;
- return container_of(dev, struct cxl_pmem_region, dev);
-}
-EXPORT_SYMBOL_NS_GPL(to_cxl_pmem_region, "CXL");
-
struct cxl_poison_context {
struct cxl_port *port;
int part;
@@ -3236,64 +3196,6 @@ static int region_offset_to_dpa_result(struct cxl_region *cxlr, u64 offset,
return -ENXIO;
}
-static struct lock_class_key cxl_pmem_region_key;
-
-static int cxl_pmem_region_alloc(struct cxl_region *cxlr)
-{
- struct cxl_region_params *p = &cxlr->params;
- struct cxl_nvdimm_bridge *cxl_nvb;
- struct device *dev;
- int i;
-
- guard(rwsem_read)(&cxl_rwsem.region);
- if (p->state != CXL_CONFIG_COMMIT)
- return -ENXIO;
-
- struct cxl_pmem_region *cxlr_pmem __free(kfree) =
- kzalloc(struct_size(cxlr_pmem, mapping, p->nr_targets), GFP_KERNEL);
- if (!cxlr_pmem)
- return -ENOMEM;
-
- cxlr_pmem->hpa_range.start = p->res->start;
- cxlr_pmem->hpa_range.end = p->res->end;
-
- /* Snapshot the region configuration underneath the cxl_rwsem.region */
- cxlr_pmem->nr_mappings = p->nr_targets;
- for (i = 0; i < p->nr_targets; i++) {
- struct cxl_endpoint_decoder *cxled = p->targets[i];
- struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
- struct cxl_pmem_region_mapping *m = &cxlr_pmem->mapping[i];
-
- /*
- * Regions never span CXL root devices, so by definition the
- * bridge for one device is the same for all.
- */
- if (i == 0) {
- cxl_nvb = cxl_find_nvdimm_bridge(cxlmd->endpoint);
- if (!cxl_nvb)
- return -ENODEV;
- cxlr->cxl_nvb = cxl_nvb;
- }
- m->cxlmd = cxlmd;
- get_device(&cxlmd->dev);
- m->start = cxled->dpa_res->start;
- m->size = resource_size(cxled->dpa_res);
- m->position = i;
- }
-
- dev = &cxlr_pmem->dev;
- device_initialize(dev);
- lockdep_set_class(&dev->mutex, &cxl_pmem_region_key);
- device_set_pm_not_required(dev);
- dev->parent = &cxlr->dev;
- dev->bus = &cxl_bus_type;
- dev->type = &cxl_pmem_region_type;
- cxlr_pmem->cxlr = cxlr;
- cxlr->cxlr_pmem = no_free_ptr(cxlr_pmem);
-
- return 0;
-}
-
static void cxl_dax_region_release(struct device *dev)
{
struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev);
@@ -3357,92 +3259,6 @@ static struct cxl_dax_region *cxl_dax_region_alloc(struct cxl_region *cxlr)
return cxlr_dax;
}
-static void cxlr_pmem_unregister(void *_cxlr_pmem)
-{
- struct cxl_pmem_region *cxlr_pmem = _cxlr_pmem;
- struct cxl_region *cxlr = cxlr_pmem->cxlr;
- struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb;
-
- /*
- * Either the bridge is in ->remove() context under the device_lock(),
- * or cxlr_release_nvdimm() is cancelling the bridge's release action
- * for @cxlr_pmem and doing it itself (while manually holding the bridge
- * lock).
- */
- device_lock_assert(&cxl_nvb->dev);
- cxlr->cxlr_pmem = NULL;
- cxlr_pmem->cxlr = NULL;
- device_unregister(&cxlr_pmem->dev);
-}
-
-static void cxlr_release_nvdimm(void *_cxlr)
-{
- struct cxl_region *cxlr = _cxlr;
- struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb;
-
- scoped_guard(device, &cxl_nvb->dev) {
- if (cxlr->cxlr_pmem)
- devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister,
- cxlr->cxlr_pmem);
- }
- cxlr->cxl_nvb = NULL;
- put_device(&cxl_nvb->dev);
-}
-
-/**
- * devm_cxl_add_pmem_region() - add a cxl_region-to-nd_region bridge
- * @cxlr: parent CXL region for this pmem region bridge device
- *
- * Return: 0 on success negative error code on failure.
- */
-static int devm_cxl_add_pmem_region(struct cxl_region *cxlr)
-{
- struct cxl_pmem_region *cxlr_pmem;
- struct cxl_nvdimm_bridge *cxl_nvb;
- struct device *dev;
- int rc;
-
- rc = cxl_pmem_region_alloc(cxlr);
- if (rc)
- return rc;
- cxlr_pmem = cxlr->cxlr_pmem;
- cxl_nvb = cxlr->cxl_nvb;
-
- dev = &cxlr_pmem->dev;
- rc = dev_set_name(dev, "pmem_region%d", cxlr->id);
- if (rc)
- goto err;
-
- rc = device_add(dev);
- if (rc)
- goto err;
-
- dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent),
- dev_name(dev));
-
- scoped_guard(device, &cxl_nvb->dev) {
- if (cxl_nvb->dev.driver)
- rc = devm_add_action_or_reset(&cxl_nvb->dev,
- cxlr_pmem_unregister,
- cxlr_pmem);
- else
- rc = -ENXIO;
- }
-
- if (rc)
- goto err_bridge;
-
- /* @cxlr carries a reference on @cxl_nvb until cxlr_release_nvdimm */
- return devm_add_action_or_reset(&cxlr->dev, cxlr_release_nvdimm, cxlr);
-
-err:
- put_device(dev);
-err_bridge:
- put_device(&cxl_nvb->dev);
- cxlr->cxl_nvb = NULL;
- return rc;
-}
-
static void cxlr_dax_unregister(void *_cxlr_dax)
{
struct cxl_dax_region *cxlr_dax = _cxlr_dax;
--
2.52.0
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Thu, 29 Jan 2026 16:04:38 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
Move the CXL DAX region device infrastructure from region.c into a
new dax_region.c file.
No functional changes.
Signed-off-by: Gregory Price <gourry@gourry.net>
---
drivers/cxl/core/Makefile | 1 +
drivers/cxl/core/core.h | 1 +
drivers/cxl/core/dax_region.c | 113 ++++++++++++++++++++++++++++++++++
drivers/cxl/core/region.c | 102 ------------------------------
4 files changed, 115 insertions(+), 102 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index 23269c81fd44..36f284d7c500 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -17,6 +17,7 @@ cxl_core-y += cdat.o
cxl_core-y += ras.o
cxl_core-$(CONFIG_TRACING) += trace.o
cxl_core-$(CONFIG_CXL_REGION) += region.o
+cxl_core-$(CONFIG_CXL_REGION) += dax_region.o
cxl_core-$(CONFIG_CXL_REGION) += pmem_region.o
cxl_core-$(CONFIG_CXL_MCE) += mce.o
cxl_core-$(CONFIG_CXL_FEATURES) += features.o
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 26991de12d76..217dd708a2a6 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -43,6 +43,7 @@ int cxl_get_poison_by_endpoint(struct cxl_port *port);
struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa);
u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
u64 dpa);
+int devm_cxl_add_dax_region(struct cxl_region *cxlr, enum dax_driver_type);
int devm_cxl_add_pmem_region(struct cxl_region *cxlr);
#else
diff --git a/drivers/cxl/core/dax_region.c b/drivers/cxl/core/dax_region.c
new file mode 100644
index 000000000000..0602db5f7248
--- /dev/null
+++ b/drivers/cxl/core/dax_region.c
@@ -0,0 +1,113 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright(c) 2022 Intel Corporation. All rights reserved.
+ * Copyright(c) 2026 Meta Technologies Inc. All rights reserved.
+ */
+#include <linux/memory_hotplug.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <cxlmem.h>
+#include <cxl.h>
+#include "core.h"
+
+static void cxl_dax_region_release(struct device *dev)
+{
+ struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev);
+
+ kfree(cxlr_dax);
+}
+
+static const struct attribute_group *cxl_dax_region_attribute_groups[] = {
+ &cxl_base_attribute_group,
+ NULL,
+};
+
+const struct device_type cxl_dax_region_type = {
+ .name = "cxl_dax_region",
+ .release = cxl_dax_region_release,
+ .groups = cxl_dax_region_attribute_groups,
+};
+
+static bool is_cxl_dax_region(struct device *dev)
+{
+ return dev->type == &cxl_dax_region_type;
+}
+
+struct cxl_dax_region *to_cxl_dax_region(struct device *dev)
+{
+ if (dev_WARN_ONCE(dev, !is_cxl_dax_region(dev),
+ "not a cxl_dax_region device\n"))
+ return NULL;
+ return container_of(dev, struct cxl_dax_region, dev);
+}
+EXPORT_SYMBOL_NS_GPL(to_cxl_dax_region, "CXL");
+
+static struct lock_class_key cxl_dax_region_key;
+
+static struct cxl_dax_region *cxl_dax_region_alloc(struct cxl_region *cxlr)
+{
+ struct cxl_region_params *p = &cxlr->params;
+ struct cxl_dax_region *cxlr_dax;
+ struct device *dev;
+
+ guard(rwsem_read)(&cxl_rwsem.region);
+ if (p->state != CXL_CONFIG_COMMIT)
+ return ERR_PTR(-ENXIO);
+
+ cxlr_dax = kzalloc(sizeof(*cxlr_dax), GFP_KERNEL);
+ if (!cxlr_dax)
+ return ERR_PTR(-ENOMEM);
+
+ cxlr_dax->hpa_range.start = p->res->start;
+ cxlr_dax->hpa_range.end = p->res->end;
+
+ dev = &cxlr_dax->dev;
+ cxlr_dax->cxlr = cxlr;
+ device_initialize(dev);
+ lockdep_set_class(&dev->mutex, &cxl_dax_region_key);
+ device_set_pm_not_required(dev);
+ dev->parent = &cxlr->dev;
+ dev->bus = &cxl_bus_type;
+ dev->type = &cxl_dax_region_type;
+
+ return cxlr_dax;
+}
+
+static void cxlr_dax_unregister(void *_cxlr_dax)
+{
+ struct cxl_dax_region *cxlr_dax = _cxlr_dax;
+
+ device_unregister(&cxlr_dax->dev);
+}
+
+int devm_cxl_add_dax_region(struct cxl_region *cxlr,
+ enum dax_driver_type dax_driver)
+{
+ struct cxl_dax_region *cxlr_dax;
+ struct device *dev;
+ int rc;
+
+ cxlr_dax = cxl_dax_region_alloc(cxlr);
+ if (IS_ERR(cxlr_dax))
+ return PTR_ERR(cxlr_dax);
+
+ cxlr_dax->online_type = mhp_get_default_online_type();
+ cxlr_dax->dax_driver = dax_driver;
+ dev = &cxlr_dax->dev;
+ rc = dev_set_name(dev, "dax_region%d", cxlr->id);
+ if (rc)
+ goto err;
+
+ rc = device_add(dev);
+ if (rc)
+ goto err;
+
+ dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent),
+ dev_name(dev));
+
+ return devm_add_action_or_reset(&cxlr->dev, cxlr_dax_unregister,
+ cxlr_dax);
+err:
+ put_device(dev);
+ return rc;
+}
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index fc56f8f03805..61ec939c1462 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -3196,108 +3196,6 @@ static int region_offset_to_dpa_result(struct cxl_region *cxlr, u64 offset,
return -ENXIO;
}
-static void cxl_dax_region_release(struct device *dev)
-{
- struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev);
-
- kfree(cxlr_dax);
-}
-
-static const struct attribute_group *cxl_dax_region_attribute_groups[] = {
- &cxl_base_attribute_group,
- NULL,
-};
-
-const struct device_type cxl_dax_region_type = {
- .name = "cxl_dax_region",
- .release = cxl_dax_region_release,
- .groups = cxl_dax_region_attribute_groups,
-};
-
-static bool is_cxl_dax_region(struct device *dev)
-{
- return dev->type == &cxl_dax_region_type;
-}
-
-struct cxl_dax_region *to_cxl_dax_region(struct device *dev)
-{
- if (dev_WARN_ONCE(dev, !is_cxl_dax_region(dev),
- "not a cxl_dax_region device\n"))
- return NULL;
- return container_of(dev, struct cxl_dax_region, dev);
-}
-EXPORT_SYMBOL_NS_GPL(to_cxl_dax_region, "CXL");
-
-static struct lock_class_key cxl_dax_region_key;
-
-static struct cxl_dax_region *cxl_dax_region_alloc(struct cxl_region *cxlr)
-{
- struct cxl_region_params *p = &cxlr->params;
- struct cxl_dax_region *cxlr_dax;
- struct device *dev;
-
- guard(rwsem_read)(&cxl_rwsem.region);
- if (p->state != CXL_CONFIG_COMMIT)
- return ERR_PTR(-ENXIO);
-
- cxlr_dax = kzalloc(sizeof(*cxlr_dax), GFP_KERNEL);
- if (!cxlr_dax)
- return ERR_PTR(-ENOMEM);
-
- cxlr_dax->hpa_range.start = p->res->start;
- cxlr_dax->hpa_range.end = p->res->end;
-
- dev = &cxlr_dax->dev;
- cxlr_dax->cxlr = cxlr;
- device_initialize(dev);
- lockdep_set_class(&dev->mutex, &cxl_dax_region_key);
- device_set_pm_not_required(dev);
- dev->parent = &cxlr->dev;
- dev->bus = &cxl_bus_type;
- dev->type = &cxl_dax_region_type;
-
- return cxlr_dax;
-}
-
-static void cxlr_dax_unregister(void *_cxlr_dax)
-{
- struct cxl_dax_region *cxlr_dax = _cxlr_dax;
-
- device_unregister(&cxlr_dax->dev);
-}
-
-static int devm_cxl_add_dax_region(struct cxl_region *cxlr,
- enum dax_driver_type dax_driver)
-{
- struct cxl_dax_region *cxlr_dax;
- struct device *dev;
- int rc;
-
- cxlr_dax = cxl_dax_region_alloc(cxlr);
- if (IS_ERR(cxlr_dax))
- return PTR_ERR(cxlr_dax);
-
- cxlr_dax->online_type = mhp_get_default_online_type();
- cxlr_dax->dax_driver = dax_driver;
- dev = &cxlr_dax->dev;
- rc = dev_set_name(dev, "dax_region%d", cxlr->id);
- if (rc)
- goto err;
-
- rc = device_add(dev);
- if (rc)
- goto err;
-
- dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent),
- dev_name(dev));
-
- return devm_add_action_or_reset(&cxlr->dev, cxlr_dax_unregister,
- cxlr_dax);
-err:
- put_device(dev);
- return rc;
-}
-
static int match_decoder_by_range(struct device *dev, const void *data)
{
const struct range *r1, *r2 = data;
--
2.52.0
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Thu, 29 Jan 2026 16:04:39 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
control or select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
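For contrast, the pre-series workflow of manually configuring each memory
block after hotplug looks roughly like the sketch below; the memory-block
sysfs paths are the standard memory hotplug ABI, but the loop itself is
only illustrative:

    # illustrative only: online every still-offline memory block as movable
    for blk in /sys/devices/system/memory/memory*; do
        grep -q offline "$blk/state" && echo online_movable > "$blk/state"
    done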
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
Add a new cxl_devdax_region driver that probes CXL regions in device
dax mode and creates dax_region devices. This allows explicit binding to
the device_dax dax driver instead of the kmem driver.
Exports to_cxl_region() in cxl.h so it can be used by the driver.
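A hypothetical usage sketch (the region name and device paths below are
assumed for illustration, not taken from this patch):

    # assumes region0 is a committed RAM region not already claimed by another driver
    echo region0 > /sys/bus/cxl/drivers/cxl_devdax_region/bind
    # the memory then surfaces as a device-dax character device such as /dev/dax0.0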
Signed-off-by: Gregory Price <gourry@gourry.net>
---
drivers/cxl/core/core.h | 2 ++
drivers/cxl/core/dax_region.c | 16 ++++++++++++++++
drivers/cxl/core/region.c | 21 +++++++++++++++++----
drivers/cxl/cxl.h | 1 +
4 files changed, 36 insertions(+), 4 deletions(-)
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 217dd708a2a6..ea4df8abc2ad 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -46,6 +46,8 @@ u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
int devm_cxl_add_dax_region(struct cxl_region *cxlr, enum dax_driver_type);
int devm_cxl_add_pmem_region(struct cxl_region *cxlr);
+extern struct cxl_driver cxl_devdax_region_driver;
+
#else
static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr,
const struct cxl_memdev *cxlmd, u64 dpa)
diff --git a/drivers/cxl/core/dax_region.c b/drivers/cxl/core/dax_region.c
index 0602db5f7248..391d51e5ec37 100644
--- a/drivers/cxl/core/dax_region.c
+++ b/drivers/cxl/core/dax_region.c
@@ -111,3 +111,19 @@ int devm_cxl_add_dax_region(struct cxl_region *cxlr,
put_device(dev);
return rc;
}
+
+static int cxl_devdax_region_driver_probe(struct device *dev)
+{
+ struct cxl_region *cxlr = to_cxl_region(dev);
+
+ if (cxlr->mode != CXL_PARTMODE_RAM)
+ return -ENODEV;
+
+ return devm_cxl_add_dax_region(cxlr, DAXDRV_DEVICE_TYPE);
+}
+
+struct cxl_driver cxl_devdax_region_driver = {
+ .name = "cxl_devdax_region",
+ .probe = cxl_devdax_region_driver_probe,
+ .id = CXL_DEVICE_REGION,
+};
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 61ec939c1462..6200ca1cc2dd 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -39,8 +39,6 @@
*/
static nodemask_t nodemask_region_seen = NODE_MASK_NONE;
-static struct cxl_region *to_cxl_region(struct device *dev);
-
#define __ACCESS_ATTR_RO(_level, _name) { \
.attr = { .name = __stringify(_name), .mode = 0444 }, \
.show = _name##_access##_level##_show, \
@@ -2430,7 +2428,7 @@ bool is_cxl_region(struct device *dev)
}
EXPORT_SYMBOL_NS_GPL(is_cxl_region, "CXL");
-static struct cxl_region *to_cxl_region(struct device *dev)
+struct cxl_region *to_cxl_region(struct device *dev)
{
if (dev_WARN_ONCE(dev, dev->type != &cxl_region_type,
"not a cxl_region device\n"))
@@ -3726,11 +3724,26 @@ static struct cxl_driver cxl_region_driver = {
int cxl_region_init(void)
{
- return cxl_driver_register(&cxl_region_driver);
+ int rc;
+
+ rc = cxl_driver_register(&cxl_region_driver);
+ if (rc)
+ return rc;
+
+ rc = cxl_driver_register(&cxl_devdax_region_driver);
+ if (rc)
+ goto err_dax;
+
+ return 0;
+
+err_dax:
+ cxl_driver_unregister(&cxl_region_driver);
+ return rc;
}
void cxl_region_exit(void)
{
+ cxl_driver_unregister(&cxl_devdax_region_driver);
cxl_driver_unregister(&cxl_region_driver);
}
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index c06a239c0008..674d5f870c70 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -859,6 +859,7 @@ int cxl_dvsec_rr_decode(struct cxl_dev_state *cxlds,
struct cxl_endpoint_dvsec_info *info);
bool is_cxl_region(struct device *dev);
+struct cxl_region *to_cxl_region(struct device *dev);
extern const struct bus_type cxl_bus_type;
--
2.52.0
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Thu, 29 Jan 2026 16:04:40 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
control or select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
CXL regions may wish not to auto-configure their memory as dax kmem,
but the current plumbing defaults all cxl-created dax devices to the
kmem driver. This exposes them to hotplug policy, even if the user
intends to use the memory as a dax device.
Add plumbing to allow CXL drivers to select whether a DAX region should
default to kmem (DAXDRV_KMEM_TYPE) or device (DAXDRV_DEVICE_TYPE).
Add a 'dax_driver' field to struct cxl_dax_region and update
devm_cxl_add_dax_region() to take a dax_driver_type parameter.
In drivers/dax/cxl.c, the IORESOURCE_DAX_KMEM flag used by dax driver
matching code is now set conditionally based on dax_region->dax_driver.
Exports `enum dax_driver_type` to linux/dax.h for use in the cxl driver.
All current callers pass DAXDRV_KMEM_TYPE for backward compatibility.
Cc: John Groves <john@jagalactic.com>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
drivers/cxl/core/core.h | 1 +
drivers/cxl/core/region.c | 6 ++++--
drivers/cxl/cxl.h | 2 ++
drivers/dax/bus.h | 6 +-----
drivers/dax/cxl.c | 6 +++++-
include/linux/dax.h | 5 +++++
6 files changed, 18 insertions(+), 8 deletions(-)
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 1fb66132b777..dd987ef2def5 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -6,6 +6,7 @@
#include <cxl/mailbox.h>
#include <linux/rwsem.h>
+#include <linux/dax.h>
extern const struct device_type cxl_nvdimm_bridge_type;
extern const struct device_type cxl_nvdimm_type;
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index eef5d5fe3f95..e4097c464ed3 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -3450,7 +3450,8 @@ static void cxlr_dax_unregister(void *_cxlr_dax)
device_unregister(&cxlr_dax->dev);
}
-static int devm_cxl_add_dax_region(struct cxl_region *cxlr)
+static int devm_cxl_add_dax_region(struct cxl_region *cxlr,
+ enum dax_driver_type dax_driver)
{
struct cxl_dax_region *cxlr_dax;
struct device *dev;
@@ -3461,6 +3462,7 @@ static int devm_cxl_add_dax_region(struct cxl_region *cxlr)
return PTR_ERR(cxlr_dax);
cxlr_dax->online_type = mhp_get_default_online_type();
+ cxlr_dax->dax_driver = dax_driver;
dev = &cxlr_dax->dev;
rc = dev_set_name(dev, "dax_region%d", cxlr->id);
if (rc)
@@ -3994,7 +3996,7 @@ static int cxl_region_probe(struct device *dev)
p->res->start, p->res->end, cxlr,
is_system_ram) > 0)
return 0;
- return devm_cxl_add_dax_region(cxlr);
+ return devm_cxl_add_dax_region(cxlr, DAXDRV_KMEM_TYPE);
default:
dev_dbg(&cxlr->dev, "unsupported region mode: %d\n",
cxlr->mode);
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 07d57d13f4c7..c06a239c0008 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -12,6 +12,7 @@
#include <linux/node.h>
#include <linux/io.h>
#include <linux/range.h>
+#include <linux/dax.h>
extern const struct nvdimm_security_ops *cxl_security_ops;
@@ -592,6 +593,7 @@ struct cxl_dax_region {
struct cxl_region *cxlr;
struct range hpa_range;
int online_type; /* MMOP_ value for kmem driver */
+ enum dax_driver_type dax_driver;
};
/**
diff --git a/drivers/dax/bus.h b/drivers/dax/bus.h
index 4ac92a4edfe7..9144593b4029 100644
--- a/drivers/dax/bus.h
+++ b/drivers/dax/bus.h
@@ -2,6 +2,7 @@
/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
#ifndef __DAX_BUS_H__
#define __DAX_BUS_H__
+#include <linux/dax.h>
#include <linux/device.h>
#include <linux/range.h>
@@ -29,11 +30,6 @@ struct dev_dax_data {
struct dev_dax *devm_create_dev_dax(struct dev_dax_data *data);
-enum dax_driver_type {
- DAXDRV_KMEM_TYPE,
- DAXDRV_DEVICE_TYPE,
-};
-
struct dax_device_driver {
struct device_driver drv;
struct list_head ids;
diff --git a/drivers/dax/cxl.c b/drivers/dax/cxl.c
index 856a0cd24f3b..b13ecc2f9806 100644
--- a/drivers/dax/cxl.c
+++ b/drivers/dax/cxl.c
@@ -11,14 +11,18 @@ static int cxl_dax_region_probe(struct device *dev)
struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev);
int nid = phys_to_target_node(cxlr_dax->hpa_range.start);
struct cxl_region *cxlr = cxlr_dax->cxlr;
+ unsigned long flags = 0;
struct dax_region *dax_region;
struct dev_dax_data data;
+ if (cxlr_dax->dax_driver == DAXDRV_KMEM_TYPE)
+ flags |= IORESOURCE_DAX_KMEM;
+
if (nid == NUMA_NO_NODE)
nid = memory_add_physaddr_to_nid(cxlr_dax->hpa_range.start);
dax_region = alloc_dax_region(dev, cxlr->id, &cxlr_dax->hpa_range, nid,
- PMD_SIZE, IORESOURCE_DAX_KMEM);
+ PMD_SIZE, flags);
if (!dax_region)
return -ENOMEM;
diff --git a/include/linux/dax.h b/include/linux/dax.h
index bf103f317cac..e62f92d0ace1 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -19,6 +19,11 @@ enum dax_access_mode {
DAX_RECOVERY_WRITE,
};
+enum dax_driver_type {
+ DAXDRV_KMEM_TYPE,
+ DAXDRV_DEVICE_TYPE,
+};
+
struct dax_operations {
/*
* direct_access: translate a device-relative
--
2.52.0
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Thu, 29 Jan 2026 16:04:37 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
control or select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
Explain the binding process for sysram and daxdev regions which are
explicit about which dax driver to use during region creation.
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Gregory Price <gourry@gourry.net>
---
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++++++++++++++++++
.../driver-api/cxl/linux/dax-driver.rst | 29 +++++++++++++
2 files changed, 72 insertions(+)
diff --git a/Documentation/driver-api/cxl/linux/cxl-driver.rst b/Documentation/driver-api/cxl/linux/cxl-driver.rst
index dd6dd17dc536..1f857345e896 100644
--- a/Documentation/driver-api/cxl/linux/cxl-driver.rst
+++ b/Documentation/driver-api/cxl/linux/cxl-driver.rst
@@ -445,6 +445,49 @@ for more details. ::
dax0.0 devtype modalias uevent
dax_region driver subsystem
+DAX regions are created when a CXL RAM region is bound to one of the
+following drivers:
+
+* :code:`cxl_devdax_region` - Creates a dax_region for device_dax mode.
+ The resulting DAX device provides direct userspace access via
+ :code:`/dev/daxN.Y`.
+
+* :code:`cxl_dax_kmem_region` - Creates a dax_region for kmem mode via a
+ sysram_region intermediate device. See `Sysram Region`_ below.
+
+Sysram Region
+~~~~~~~~~~~~~
+A `Sysram Region` is an intermediate device between a CXL `Memory Region`
+and a `DAX Region` for kmem mode. It is created when a CXL RAM region is
+bound to the :code:`cxl_sysram_region` driver.
+
+The sysram_region device provides an interposition point where users can
+configure memory hotplug policy before the underlying dax_region is created
+and memory is hotplugged to the system.
+
+The device hierarchy for kmem mode is::
+
+ regionX -> sysram_regionX -> dax_regionX -> daxX.Y
+
+The sysram_region exposes an :code:`online_type` attribute that controls
+how memory will be onlined when the dax_kmem driver binds:
+
+* :code:`invalid` - Not configured (default). Blocks driver binding.
+* :code:`offline` - Memory will not be onlined automatically.
+* :code:`online` - Memory will be onlined in ZONE_NORMAL.
+* :code:`online_movable` - Memory will be onlined in ZONE_MOVABLE.
+
+Example two-stage binding process::
+
+ # Bind region to sysram_region driver
+ echo region0 > /sys/bus/cxl/drivers/cxl_sysram_region/bind
+
+ # Configure memory online type
+ echo online_movable > /sys/bus/cxl/devices/sysram_region0/online_type
+
+ # Bind sysram_region to dax_kmem_region driver
+ echo sysram_region0 > /sys/bus/cxl/drivers/cxl_dax_kmem_region/bind
+
Mailbox Interfaces
------------------
A mailbox command interface for each device is exposed in ::
diff --git a/Documentation/driver-api/cxl/linux/dax-driver.rst b/Documentation/driver-api/cxl/linux/dax-driver.rst
index 10d953a2167b..2b8e21736292 100644
--- a/Documentation/driver-api/cxl/linux/dax-driver.rst
+++ b/Documentation/driver-api/cxl/linux/dax-driver.rst
@@ -17,6 +17,35 @@ The DAX subsystem exposes this ability through the `cxl_dax_region` driver.
A `dax_region` provides the translation between a CXL `memory_region` and
a `DAX Device`.
+CXL DAX Region Drivers
+======================
+CXL provides multiple drivers for creating DAX regions, each suited for
+different use cases:
+
+cxl_devdax_region
+-----------------
+The :code:`cxl_devdax_region` driver creates a dax_region configured for
+device_dax mode. When a CXL RAM region is bound to this driver, the
+resulting DAX device provides direct userspace access via :code:`/dev/daxN.Y`.
+
+Device hierarchy::
+
+ regionX -> dax_regionX -> daxX.Y
+
+This is the simplest path for applications that want to manage CXL memory
+directly from userspace.
+
+cxl_dax_kmem_region
+-------------------
+For kmem mode, CXL provides a two-stage binding process that allows users
+to configure memory hotplug policy before memory is added to the system.
+
+The :code:`cxl_dax_kmem_region` driver then binds a sysram_region
+device and creates a dax_region configured for kmem mode.
+
+The :code:`online_type` policy will be passed from sysram_region to
+the dax kmem driver for use when hotplugging the memory.
+
DAX Device
==========
A `DAX Device` is a file-like interface exposed in :code:`/dev/daxN.Y`. A
--
2.52.0
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Thu, 29 Jan 2026 16:04:42 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
control or select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
In the current kmem driver binding process, the only way for users
to define hotplug policy is via a build-time option, or by not
onlining memory by default and setting each individual memory block
online after hotplug occurs. We can solve this with a configuration
step between region-probe and dax-probe.
Add the infrastructure for a two-stage driver binding for kmem-mode
dax regions. The cxl_dax_kmem_region driver probes cxl_sysram_region
devices and creates cxl_dax_region with dax_driver=kmem.
This creates an interposition step where users can configure policy.
Device hierarchy:
region0 -> sysram_region0 -> dax_region0 -> dax0.0
The sysram_region device exposes a sysfs 'online_type' attribute
that allows users to configure the memory online type before the
underlying dax_region is created and memory is hotplugged.
sysram_region0/online_type:
invalid: not configured, blocks probe
offline: memory will not be onlined automatically
online: memory will be onlined in ZONE_NORMAL
online_movable: memory will be onlined in ZONE_MOVABLE
The device initializes with online_type=invalid which prevents the
cxl_dax_kmem_region driver from binding until the user explicitly
configures a valid online_type.
This enables a two-step binding process:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
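A possible way to verify the result afterwards (device names assumed):

    cat /sys/bus/cxl/devices/sysram_region0/online_type   # should report online_movable
    ls /sys/bus/cxl/devices/sysram_region0/                # a dax_regionN child appears after the kmem bind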
Signed-off-by: Gregory Price <gourry@gourry.net>
---
Documentation/ABI/testing/sysfs-bus-cxl | 21 +++
drivers/cxl/core/Makefile | 1 +
drivers/cxl/core/core.h | 6 +
drivers/cxl/core/dax_region.c | 50 +++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 14 ++
drivers/cxl/core/sysram_region.c | 180 ++++++++++++++++++++++++
drivers/cxl/cxl.h | 25 ++++
8 files changed, 299 insertions(+)
create mode 100644 drivers/cxl/core/sysram_region.c
diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
index c80a1b5a03db..a051cb86bdfc 100644
--- a/Documentation/ABI/testing/sysfs-bus-cxl
+++ b/Documentation/ABI/testing/sysfs-bus-cxl
@@ -624,3 +624,24 @@ Description:
The count is persistent across power loss and wraps back to 0
upon overflow. If this file is not present, the device does not
have the necessary support for dirty tracking.
+
+
+What: /sys/bus/cxl/devices/sysram_regionZ/online_type
+Date: January, 2026
+KernelVersion: v7.1
+Contact: linux-cxl@vger.kernel.org
+Description:
+ (RW) This attribute allows users to configure the memory online
+ type before the underlying dax_region engages in hotplug.
+
+ Valid values:
+ 'invalid': Not configured (default). Blocks probe.
+ 'offline': Memory will not be onlined automatically.
+ 'online' : Memory will be onlined in ZONE_NORMAL.
+ 'online_movable': Memory will be onlined in ZONE_MOVABLE.
+
+ The device initializes with online_type='invalid' which prevents
+ the cxl_dax_kmem_region driver from binding until the user
+ explicitly configures a valid online_type. This enables a
+ two-step binding process that gives users control over memory
+ hotplug policy before memory is added to the system.
diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index 36f284d7c500..faf662c7d88b 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -18,6 +18,7 @@ cxl_core-y += ras.o
cxl_core-$(CONFIG_TRACING) += trace.o
cxl_core-$(CONFIG_CXL_REGION) += region.o
cxl_core-$(CONFIG_CXL_REGION) += dax_region.o
+cxl_core-$(CONFIG_CXL_REGION) += sysram_region.o
cxl_core-$(CONFIG_CXL_REGION) += pmem_region.o
cxl_core-$(CONFIG_CXL_MCE) += mce.o
cxl_core-$(CONFIG_CXL_FEATURES) += features.o
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index ea4df8abc2ad..04b32015e9b1 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -26,6 +26,7 @@ extern struct device_attribute dev_attr_delete_region;
extern struct device_attribute dev_attr_region;
extern const struct device_type cxl_pmem_region_type;
extern const struct device_type cxl_dax_region_type;
+extern const struct device_type cxl_sysram_region_type;
extern const struct device_type cxl_region_type;
int cxl_decoder_detach(struct cxl_region *cxlr,
@@ -37,6 +38,7 @@ int cxl_decoder_detach(struct cxl_region *cxlr,
#define SET_CXL_REGION_ATTR(x) (&dev_attr_##x.attr),
#define CXL_PMEM_REGION_TYPE(x) (&cxl_pmem_region_type)
#define CXL_DAX_REGION_TYPE(x) (&cxl_dax_region_type)
+#define CXL_SYSRAM_REGION_TYPE(x) (&cxl_sysram_region_type)
int cxl_region_init(void);
void cxl_region_exit(void);
int cxl_get_poison_by_endpoint(struct cxl_port *port);
@@ -44,9 +46,12 @@ struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa);
u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
u64 dpa);
int devm_cxl_add_dax_region(struct cxl_region *cxlr, enum dax_driver_type);
+int devm_cxl_add_sysram_region(struct cxl_region *cxlr);
int devm_cxl_add_pmem_region(struct cxl_region *cxlr);
extern struct cxl_driver cxl_devdax_region_driver;
+extern struct cxl_driver cxl_dax_kmem_region_driver;
+extern struct cxl_driver cxl_sysram_region_driver;
#else
static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr,
@@ -81,6 +86,7 @@ static inline void cxl_region_exit(void)
#define SET_CXL_REGION_ATTR(x)
#define CXL_PMEM_REGION_TYPE(x) NULL
#define CXL_DAX_REGION_TYPE(x) NULL
+#define CXL_SYSRAM_REGION_TYPE(x) NULL
#endif
struct cxl_send_command;
diff --git a/drivers/cxl/core/dax_region.c b/drivers/cxl/core/dax_region.c
index 391d51e5ec37..a379f5b85e3d 100644
--- a/drivers/cxl/core/dax_region.c
+++ b/drivers/cxl/core/dax_region.c
@@ -127,3 +127,53 @@ struct cxl_driver cxl_devdax_region_driver = {
.probe = cxl_devdax_region_driver_probe,
.id = CXL_DEVICE_REGION,
};
+
+static int cxl_dax_kmem_region_driver_probe(struct device *dev)
+{
+ struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev);
+ struct cxl_dax_region *cxlr_dax;
+ struct cxl_region *cxlr;
+ int rc;
+
+ if (!cxlr_sysram)
+ return -ENODEV;
+
+ /* Require explicit online_type configuration before binding */
+ if (cxlr_sysram->online_type == -1)
+ return -ENODEV;
+
+ cxlr = cxlr_sysram->cxlr;
+
+ cxlr_dax = cxl_dax_region_alloc(cxlr);
+ if (IS_ERR(cxlr_dax))
+ return PTR_ERR(cxlr_dax);
+
+ /* Inherit online_type from parent sysram_region */
+ cxlr_dax->online_type = cxlr_sysram->online_type;
+ cxlr_dax->dax_driver = DAXDRV_KMEM_TYPE;
+
+ /* Parent is the sysram_region device */
+ cxlr_dax->dev.parent = dev;
+
+ rc = dev_set_name(&cxlr_dax->dev, "dax_region%d", cxlr->id);
+ if (rc)
+ goto err;
+
+ rc = device_add(&cxlr_dax->dev);
+ if (rc)
+ goto err;
+
+ dev_dbg(dev, "%s: register %s\n", dev_name(dev),
+ dev_name(&cxlr_dax->dev));
+
+ return devm_add_action_or_reset(dev, cxlr_dax_unregister, cxlr_dax);
+err:
+ put_device(&cxlr_dax->dev);
+ return rc;
+}
+
+struct cxl_driver cxl_dax_kmem_region_driver = {
+ .name = "cxl_dax_kmem_region",
+ .probe = cxl_dax_kmem_region_driver_probe,
+ .id = CXL_DEVICE_SYSRAM_REGION,
+};
diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
index 3310dbfae9d6..dc7262a5efd6 100644
--- a/drivers/cxl/core/port.c
+++ b/drivers/cxl/core/port.c
@@ -66,6 +66,8 @@ static int cxl_device_id(const struct device *dev)
return CXL_DEVICE_PMEM_REGION;
if (dev->type == CXL_DAX_REGION_TYPE())
return CXL_DEVICE_DAX_REGION;
+ if (dev->type == CXL_SYSRAM_REGION_TYPE())
+ return CXL_DEVICE_SYSRAM_REGION;
if (is_cxl_port(dev)) {
if (is_cxl_root(to_cxl_port(dev)))
return CXL_DEVICE_ROOT;
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 6200ca1cc2dd..8bef91dc726c 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -3734,8 +3734,20 @@ int cxl_region_init(void)
if (rc)
goto err_dax;
+ rc = cxl_driver_register(&cxl_sysram_region_driver);
+ if (rc)
+ goto err_sysram;
+
+ rc = cxl_driver_register(&cxl_dax_kmem_region_driver);
+ if (rc)
+ goto err_dax_kmem;
+
return 0;
+err_dax_kmem:
+ cxl_driver_unregister(&cxl_sysram_region_driver);
+err_sysram:
+ cxl_driver_unregister(&cxl_devdax_region_driver);
err_dax:
cxl_driver_unregister(&cxl_region_driver);
return rc;
@@ -3743,6 +3755,8 @@ int cxl_region_init(void)
void cxl_region_exit(void)
{
+ cxl_driver_unregister(&cxl_dax_kmem_region_driver);
+ cxl_driver_unregister(&cxl_sysram_region_driver);
cxl_driver_unregister(&cxl_devdax_region_driver);
cxl_driver_unregister(&cxl_region_driver);
}
diff --git a/drivers/cxl/core/sysram_region.c b/drivers/cxl/core/sysram_region.c
new file mode 100644
index 000000000000..5665db238d0f
--- /dev/null
+++ b/drivers/cxl/core/sysram_region.c
@@ -0,0 +1,180 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2026 Meta Platforms, Inc. All rights reserved. */
+/*
+ * CXL Sysram Region - Intermediate device for kmem hotplug configuration
+ *
+ * This provides an intermediate device between cxl_region and cxl_dax_region
+ * that allows users to configure memory hotplug parameters (like online_type)
+ * before the underlying dax_region is created and memory is hotplugged.
+ */
+
+#include <linux/memory_hotplug.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <cxlmem.h>
+#include <cxl.h>
+#include "core.h"
+
+static void cxl_sysram_region_release(struct device *dev)
+{
+ struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev);
+
+ kfree(cxlr_sysram);
+}
+
+static ssize_t online_type_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev);
+
+ switch (cxlr_sysram->online_type) {
+ case MMOP_OFFLINE:
+ return sysfs_emit(buf, "offline\n");
+ case MMOP_ONLINE:
+ return sysfs_emit(buf, "online\n");
+ case MMOP_ONLINE_MOVABLE:
+ return sysfs_emit(buf, "online_movable\n");
+ default:
+ return sysfs_emit(buf, "invalid\n");
+ }
+}
+
+static ssize_t online_type_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t len)
+{
+ struct cxl_sysram_region *cxlr_sysram = to_cxl_sysram_region(dev);
+
+ if (sysfs_streq(buf, "offline"))
+ cxlr_sysram->online_type = MMOP_OFFLINE;
+ else if (sysfs_streq(buf, "online"))
+ cxlr_sysram->online_type = MMOP_ONLINE;
+ else if (sysfs_streq(buf, "online_movable"))
+ cxlr_sysram->online_type = MMOP_ONLINE_MOVABLE;
+ else
+ return -EINVAL;
+
+ return len;
+}
+
+static DEVICE_ATTR_RW(online_type);
+
+static struct attribute *cxl_sysram_region_attrs[] = {
+ &dev_attr_online_type.attr,
+ NULL,
+};
+
+static const struct attribute_group cxl_sysram_region_attribute_group = {
+ .attrs = cxl_sysram_region_attrs,
+};
+
+static const struct attribute_group *cxl_sysram_region_attribute_groups[] = {
+ &cxl_base_attribute_group,
+ &cxl_sysram_region_attribute_group,
+ NULL,
+};
+
+const struct device_type cxl_sysram_region_type = {
+ .name = "cxl_sysram_region",
+ .release = cxl_sysram_region_release,
+ .groups = cxl_sysram_region_attribute_groups,
+};
+
+static bool is_cxl_sysram_region(struct device *dev)
+{
+ return dev->type == &cxl_sysram_region_type;
+}
+
+struct cxl_sysram_region *to_cxl_sysram_region(struct device *dev)
+{
+ if (dev_WARN_ONCE(dev, !is_cxl_sysram_region(dev),
+ "not a cxl_sysram_region device\n"))
+ return NULL;
+ return container_of(dev, struct cxl_sysram_region, dev);
+}
+EXPORT_SYMBOL_NS_GPL(to_cxl_sysram_region, "CXL");
+
+static struct lock_class_key cxl_sysram_region_key;
+
+static struct cxl_sysram_region *cxl_sysram_region_alloc(struct cxl_region *cxlr)
+{
+ struct cxl_region_params *p = &cxlr->params;
+ struct cxl_sysram_region *cxlr_sysram;
+ struct device *dev;
+
+ guard(rwsem_read)(&cxl_rwsem.region);
+ if (p->state != CXL_CONFIG_COMMIT)
+ return ERR_PTR(-ENXIO);
+
+ cxlr_sysram = kzalloc(sizeof(*cxlr_sysram), GFP_KERNEL);
+ if (!cxlr_sysram)
+ return ERR_PTR(-ENOMEM);
+
+ cxlr_sysram->hpa_range.start = p->res->start;
+ cxlr_sysram->hpa_range.end = p->res->end;
+ cxlr_sysram->online_type = -1; /* Require explicit configuration */
+
+ dev = &cxlr_sysram->dev;
+ cxlr_sysram->cxlr = cxlr;
+ device_initialize(dev);
+ lockdep_set_class(&dev->mutex, &cxl_sysram_region_key);
+ device_set_pm_not_required(dev);
+ dev->parent = &cxlr->dev;
+ dev->bus = &cxl_bus_type;
+ dev->type = &cxl_sysram_region_type;
+
+ return cxlr_sysram;
+}
+
+static void cxlr_sysram_unregister(void *_cxlr_sysram)
+{
+ struct cxl_sysram_region *cxlr_sysram = _cxlr_sysram;
+
+ device_unregister(&cxlr_sysram->dev);
+}
+
+int devm_cxl_add_sysram_region(struct cxl_region *cxlr)
+{
+ struct cxl_sysram_region *cxlr_sysram;
+ struct device *dev;
+ int rc;
+
+ cxlr_sysram = cxl_sysram_region_alloc(cxlr);
+ if (IS_ERR(cxlr_sysram))
+ return PTR_ERR(cxlr_sysram);
+
+ dev = &cxlr_sysram->dev;
+ rc = dev_set_name(dev, "sysram_region%d", cxlr->id);
+ if (rc)
+ goto err;
+
+ rc = device_add(dev);
+ if (rc)
+ goto err;
+
+ dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent),
+ dev_name(dev));
+
+ return devm_add_action_or_reset(&cxlr->dev, cxlr_sysram_unregister,
+ cxlr_sysram);
+err:
+ put_device(dev);
+ return rc;
+}
+
+static int cxl_sysram_region_driver_probe(struct device *dev)
+{
+ struct cxl_region *cxlr = to_cxl_region(dev);
+
+ /* Only handle RAM regions */
+ if (cxlr->mode != CXL_PARTMODE_RAM)
+ return -ENODEV;
+
+ return devm_cxl_add_sysram_region(cxlr);
+}
+
+struct cxl_driver cxl_sysram_region_driver = {
+ .name = "cxl_sysram_region",
+ .probe = cxl_sysram_region_driver_probe,
+ .id = CXL_DEVICE_REGION,
+};
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 674d5f870c70..1544c27e9c89 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -596,6 +596,25 @@ struct cxl_dax_region {
enum dax_driver_type dax_driver;
};
+/**
+ * struct cxl_sysram_region - CXL RAM region for system memory hotplug
+ * @dev: device for this sysram_region
+ * @cxlr: parent cxl_region
+ * @hpa_range: Host physical address range for the region
+ * @online_type: Memory online type (MMOP_* 0-3, or -1 if not configured)
+ *
+ * Intermediate device that allows configuration of memory hotplug
+ * parameters before the underlying dax_region is created. The device
+ * starts with online_type=-1 which prevents the cxl_dax_kmem_region
+ * driver from binding until the user explicitly sets online_type.
+ */
+struct cxl_sysram_region {
+ struct device dev;
+ struct cxl_region *cxlr;
+ struct range hpa_range;
+ int online_type;
+};
+
/**
* struct cxl_port - logical collection of upstream port devices and
* downstream port devices to construct a CXL memory
@@ -890,6 +909,7 @@ void cxl_driver_unregister(struct cxl_driver *cxl_drv);
#define CXL_DEVICE_PMEM_REGION 7
#define CXL_DEVICE_DAX_REGION 8
#define CXL_DEVICE_PMU 9
+#define CXL_DEVICE_SYSRAM_REGION 10
#define MODULE_ALIAS_CXL(type) MODULE_ALIAS("cxl:t" __stringify(type) "*")
#define CXL_MODALIAS_FMT "cxl:t%d"
@@ -907,6 +927,7 @@ bool is_cxl_pmem_region(struct device *dev);
struct cxl_pmem_region *to_cxl_pmem_region(struct device *dev);
int cxl_add_to_region(struct cxl_endpoint_decoder *cxled);
struct cxl_dax_region *to_cxl_dax_region(struct device *dev);
+struct cxl_sysram_region *to_cxl_sysram_region(struct device *dev);
u64 cxl_port_get_spa_cache_alias(struct cxl_port *endpoint, u64 spa);
#else
static inline bool is_cxl_pmem_region(struct device *dev)
@@ -925,6 +946,10 @@ static inline struct cxl_dax_region *to_cxl_dax_region(struct device *dev)
{
return NULL;
}
+static inline struct cxl_sysram_region *to_cxl_sysram_region(struct device *dev)
+{
+ return NULL;
+}
static inline u64 cxl_port_get_spa_cache_alias(struct cxl_port *endpoint,
u64 spa)
{
--
2.52.0
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Thu, 29 Jan 2026 16:04:41 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
control or select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
Annoyingly, my email client has been truncating my titles:
cxl: explicit DAX driver selection and hotplug policy for CXL regions
~Gregory
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Thu, 29 Jan 2026 16:17:55 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
control or select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
On Thu, Jan 29, 2026 at 04:04:33PM -0500, Gregory Price wrote:
Looks like build regression on configs without hotplug
MMOP_ defines and mhp_get_default_online_type() undefined
Will let this version sit for a bit before spinning a v2
~Gregory
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Fri, 30 Jan 2026 12:34:33 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
control or select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
On 1/29/2026 3:04 PM, Gregory Price wrote:
This technically comes up in the devdax_region driver patch first, but I noticed it here
so this is where I'm putting it:
I like the idea here, but the implementation is all off. Firstly, devm_cxl_add_sysram_region()
is never called outside of sysram_region_driver::probe(), so I'm not sure how they ever get
added to the system (same with devdax regions).
Second, there's this weird pattern of sub-region (sysram, devdax, etc.) devices being added
inside of the sub-region driver probe. I would expect the devices to be added and then the
probe function called. What I think should be going on here (and correct me if I'm wrong) is:
1. a cxl_region device is added to the system
2. cxl_region::probe() is called on said device (one in cxl/core/region.c)
3. Said probe function figures out the device is a dax_region or whatever else and creates that type of region device
(i.e. cxl_region::probe() -> device_add(&cxl_sysram_device))
4. if the device's dax driver type is DAXDRV_DEVICE_TYPE it gets sent to the daxdev_region driver
5a. if the device's dax driver type is DAXDRV_KMEM_TYPE it gets sent to the sysram_region driver which holds it until
the online_type is set
5b. Once the online_type is set, the device is forwarded to the dax_kmem_region driver? Not sure on this part
What seems to be happening is that the cxl_region is added, all of these region drivers try
to bind to it since they all use the same device id (CXL_DEVICE_REGION) and the correct one is
figured out by magic? I'm somewhat confused at this point :/.
This should be removed from the valid values section since it's not a valid value
to write to the attribute. The mention of the default in the paragraph below should
be enough.
You can use cleanup.h here to remove the goto's (I think). Following should work:
DEFINE_FREE(cxlr_dax_region_put, struct cxl_dax_region *, if (!IS_ERR_OR_NULL(_T)) put_device(&_T->dev))
static int cxl_dax_kmem_region_driver_probe(struct device *dev)
{
...
struct cxl_dax_region *cxlr_dax __free(cxlr_dax_region_put) = cxl_dax_region_alloc(cxlr);
if (IS_ERR(cxlr_dax))
return PTR_ERR(cxlr_dax);
...
rc = dev_set_name(&cxlr_dax->dev, "dax_region%d", cxlr->id);
if (rc)
return rc;
rc = device_add(&cxlr_dax->dev);
if (rc)
return rc;
dev_dbg(dev, "%s: register %s\n", dev_name(dev), dev_name(&cxlr_dax->dev));
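/* no_free_ptr() hands the reference over to the devm action and disarms the scoped cleanup */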
return devm_add_action_or_reset(dev, cxlr_dax_unregister, no_free_ptr(cxlr_dax));
}
Same thing as above
Thanks,
Ben
|
{
"author": "\"Cheatham, Benjamin\" <benjamin.cheatham@amd.com>",
"date": "Fri, 30 Jan 2026 15:27:12 -0600",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
On Fri, Jan 30, 2026 at 03:27:12PM -0600, Cheatham, Benjamin wrote:
I originally tried doing this with region0/region_driver, but that design
pattern is also confusing - and it creates differently bad patterns.
echo region0 > decoder0.0/create_ram_region -> creates region0
# Current pattern
echo region > driver/region/probe /* auto-region behavior */
# region_driver attribute pattern
echo "sysram" > region0/region_driver
echo region0 > driver/region/probe /* uses sysram region driver */
https://lore.kernel.org/linux-cxl/20260113202138.3021093-1-gourry@gourry.net/
Ira pointed out that this design makes the "implicit" design of the
driver worse. The user doesn't actually know what driver is being used
under the hood - it just knows something is being used.
This at least makes it explicit which driver is being used - and splits
the uses-case logic up into discrete drivers (dax users don't have to
worry about sysram users breaking their stuff).
If it makes more sense, you could swap the ordering of the names
echo region0 > region/bind
echo region0 > region_sysram/bind
echo region0 > region_daxdev/bind
echo region0 > region_dax_kmem/bind
echo region0 > region_pony/bind
---
The underlying issue is that region::probe() is trying to be a
god-function for every possible use case, and hiding the use case
behind an attribute vs a driver is not good.
(also the default behavior for region::probe() in an otherwise
unconfigured region is required for backwards compatibility)
For auto-regions:
region_probe() eats it and you get the default behavior.
For non-auto regions:
create_x_region generates an un-configured region and fails to probe
until the user commits it and probes it.
auto-regions are evil and should be discouraged.
~Gregory
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Fri, 30 Jan 2026 17:12:50 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
On 1/30/2026 4:12 PM, Gregory Price wrote:
Ok, that makes sense. I think I just got lost in the sauce while looking at this last
week and this explanation helped a lot.
I think this was the source of my misunderstanding. I was trying to understand how it
works for auto regions when it's never meant to apply to them.
Sorry if this is a stupid question, but what stops auto regions from binding to the
sysram/dax region drivers? They all bind to region devices, so I assume there's something
keeping them from binding before the core region driver gets a chance.
Thanks,
Ben
|
{
"author": "\"Cheatham, Benjamin\" <benjamin.cheatham@amd.com>",
"date": "Mon, 2 Feb 2026 11:02:37 -0600",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
On Thu, 29 Jan 2026 16:04:34 -0500
Gregory Price <gourry@gourry.net> wrote:
Trivial comment inline. I don't really care either way.
Pushing the policy up to the caller and ensuring it's explicitly constant
for all the memory blocks (as opposed to relying on locks) seems sensible to me
even without anything else.
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Maybe move the local variable outside the loop to avoid the double call.
|
{
"author": "Jonathan Cameron <jonathan.cameron@huawei.com>",
"date": "Mon, 2 Feb 2026 17:10:29 +0000",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
On Thu, 29 Jan 2026 16:04:35 -0500
Gregory Price <gourry@gourry.net> wrote:
Hi Gregory,
I think maybe I'd have left the export for the first user outside of
memory_hotplug.c. Not particularly important however.
Maybe talk about why a caller of __add_memory_driver_managed() might want
the default? Feels like that's for the people who don't...
Or is this all a dance to avoid an
if (special mode)
__add_memory_driver_managed();
else
add_memory_driver_managed();
?
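A minimal sketch of the wrapper shape being asked about here, assuming the __
variant gains an online_type argument; mhp_get_default_online_type() below is a
stand-in name for whatever returns the system default, not necessarily what the
series uses:
/* Sketch, not the series' code: existing callers keep the default online
 * policy, new callers pass an explicit online_type to the __ variant.
 */
int add_memory_driver_managed(int nid, u64 start, u64 size,
			      const char *resource_name, mhp_t mhp_flags)
{
	return __add_memory_driver_managed(nid, start, size, resource_name,
					   mhp_flags,
					   mhp_get_default_online_type());
}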
Other comments are mostly about using a named enum. I'm not sure
if there is some existing reason why that doesn't work? -Errno pushed through
this variable or anything like that?
Given online_type values are from an enum anyway, maybe we can name that enum and use
it explicitly?
Ah. Fair enough, ignore comment in previous patch. I should have read on...
It's a little odd to add nice kernel-doc formatted documentation
when the non __ variant has free form docs. Maybe tidy that up first
if we want to go kernel-doc in this file? (I'm in favor, but no idea
on general feelings...)
Given that's currently the full set, seems like enum wins out here over
an int.
This is where using an enum would help the compiler know what is going on
and maybe warn if anyone writes something that isn't defined.
|
{
"author": "Jonathan Cameron <jonathan.cameron@huawei.com>",
"date": "Mon, 2 Feb 2026 17:25:24 +0000",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
On Mon, Feb 02, 2026 at 11:02:37AM -0600, Cheatham, Benjamin wrote:
Auto regions explicitly use the dax_kmem path (all existing code,
unchanged)- which auto-plugs into dax/hotplug.
I do get what you're saying that everything binds on a region type,
I will look a little closer at this and see if there's something more
reasonable we can do.
I think i can update `region/bind` to use the sysram driver with
online_type=mhp_default_online_type
so you'd end up with effectively the auto-region logic:
cxlcli create-region -m ram ... existing argument set
------
echo region0 > create_ram_region
/* program decoders */
echo region0 > region/bind
/*
* region_bind():
* 1) alloc sysram_region object
* 2) sysram_regionN->online_type=mhp_default_online_type()
* 3) add device to bus
* 4) device auto-probes all the way down to dax
* 5) dax auto-onlines with system default setting
*/
------
and Non-auto-region logic (approximation)
cxlcli create-region -m ram --type sysram --online-type=movable
-----
echo region0 > create_ram_region
/* program decoders */
echo region0 > sysram/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > dax_kmem/bind
-----
I want to retain the dax_kmem driver because there may be multiple users
other than sysram. For example, a compressed memory region wants to
utilize dax_kmem, but has its own complex policy (via N_MEMORY_PRIVATE)
so it doesn't want to abstract through sysram_region, but it does want
to abstract through dax_kmem.
weeeee "software defined memory" weeeee
~Gregory
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Mon, 2 Feb 2026 12:41:31 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
On Mon, Feb 02, 2026 at 05:10:29PM +0000, Jonathan Cameron wrote:
ack. will update for next version w/ Ben's notes and the build fix.
Thanks!
~Gregory
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Mon, 2 Feb 2026 12:46:25 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
On Thu, 29 Jan 2026 16:04:37 -0500
Gregory Price <gourry@gourry.net> wrote:
LGTM
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
|
{
"author": "Jonathan Cameron <jonathan.cameron@huawei.com>",
"date": "Mon, 2 Feb 2026 17:54:17 +0000",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
On Thu, 29 Jan 2026 16:04:38 -0500
Gregory Price <gourry@gourry.net> wrote:
Needs to answer the question: Why?
Minor stuff inline.
Maybe sneak in dropping that trailing comma whilst you are moving it.
...
Bonus line...
|
{
"author": "Jonathan Cameron <jonathan.cameron@huawei.com>",
"date": "Mon, 2 Feb 2026 17:56:40 +0000",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
On Thu, 29 Jan 2026 16:04:39 -0500
Gregory Price <gourry@gourry.net> wrote:
Likewise. Why?
|
{
"author": "Jonathan Cameron <jonathan.cameron@huawei.com>",
"date": "Mon, 2 Feb 2026 17:57:11 +0000",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[PATCH 0/9] cxl: explicit DAX driver selection and hotplug
|
Currently, CXL regions that create DAX devices have no mechanism to
select the hotplug online policy for kmem regions at region
creation time. Users must either rely on a build-time default or
manually configure each memory block after hotplug occurs.
Additionally, there is no explicit way to choose between device_dax
and dax_kmem modes at region creation time - regions default to kmem.
This series addresses both issues by:
1. Plumbing an online_type parameter through the memory hotplug path,
from mm/memory_hotplug through the DAX layer, enabling drivers to
specify the desired policy (offline, online, online_movable).
2. Adding infrastructure for explicit dax driver selection (kmem vs
device) when creating CXL DAX regions.
3. Introducing new CXL region drivers that provide a two-stage binding
process with user-configurable policy between region creation and
memory hotplug.
The new drivers are:
- cxl_devdax_region: Creates dax_regions that bind to device_dax driver
- cxl_sysram_region: Creates sysram_region devices with hotplug policy
- cxl_dax_kmem_region: Probes sysram_regions to create kmem dax_regions
The sysram_region device exposes an 'online_type' sysfs attribute
allowing users to configure the memory online type before hotplug:
echo region0 > cxl_sysram_region/bind
echo online_movable > sysram_region0/online_type
echo sysram_region0 > cxl_dax_kmem_region/bind
This enables explicit control over both the dax driver mode and the
memory hotplug policy for CXL memory regions.
In the future, with DCD regions, this will also provide a policy step
which dictates how extents will be surfaced and managed (e.g. if the
dc region is bound to the sysram driver, it will surface as system
memory, while the devdax driver will surface extents as new devdax).
Gregory Price (9):
mm/memory_hotplug: pass online_type to online_memory_block() via arg
mm/memory_hotplug: add __add_memory_driver_managed() with online_type
arg
dax: plumb online_type from dax_kmem creators to hotplug
drivers/cxl,dax: add dax driver mode selection for dax regions
cxl/core/region: move pmem region driver logic into pmem_region
cxl/core/region: move dax region device logic into dax_region.c
cxl/core: add cxl_devdax_region driver for explicit userland region
binding
cxl/core: Add dax_kmem_region and sysram_region drivers
Documentation/driver-api/cxl: add dax and sysram driver documentation
Documentation/ABI/testing/sysfs-bus-cxl | 21 ++
.../driver-api/cxl/linux/cxl-driver.rst | 43 +++
.../driver-api/cxl/linux/dax-driver.rst | 29 ++
drivers/cxl/core/Makefile | 3 +
drivers/cxl/core/core.h | 11 +
drivers/cxl/core/dax_region.c | 179 ++++++++++
drivers/cxl/core/pmem_region.c | 191 +++++++++++
drivers/cxl/core/port.c | 2 +
drivers/cxl/core/region.c | 321 ++----------------
drivers/cxl/core/sysram_region.c | 180 ++++++++++
drivers/cxl/cxl.h | 29 ++
drivers/dax/bus.c | 3 +
drivers/dax/bus.h | 7 +-
drivers/dax/cxl.c | 7 +-
drivers/dax/dax-private.h | 2 +
drivers/dax/hmem/hmem.c | 2 +
drivers/dax/kmem.c | 13 +-
drivers/dax/pmem.c | 2 +
include/linux/dax.h | 5 +
include/linux/memory_hotplug.h | 3 +
mm/memory_hotplug.c | 95 ++++--
21 files changed, 826 insertions(+), 322 deletions(-)
create mode 100644 drivers/cxl/core/dax_region.c
create mode 100644 drivers/cxl/core/pmem_region.c
create mode 100644 drivers/cxl/core/sysram_region.c
--
2.52.0
|
On Mon, Feb 02, 2026 at 05:25:24PM +0000, Jonathan Cameron wrote:
Less about why they want the default, more about maintaining backward
compatibility.
In the cxl driver, Ben pointed out something that made me realize we can
change `region/bind()` to actually use the new `sysram/bind` path by
just adding a one line `sysram_regionN->online_type = default()`
I can add this detail to the changelog.
I can add a cleanup patch prior to this to use the enum, but I don't think this
actually enables the compiler to do anything new at the moment?
An enum just resolves to an int, and setting `enum thing val = -1` when
the enum definition doesn't include -1 doesn't actually fire any errors
(at least IIRC - maybe I'm just wrong). Same with
function(enum) -> function(-1), which wouldn't fire a compilation error.
It might actually be worth adding `MMOP_NOT_CONFIGURED = -1` so that the
cxl-sysram driver can set this explicitly rather than just setting -1
as an implicit version of this - but then why would memory_hotplug.c
ever want to expose a NOT_CONFIGURED option lol.
So, yeah, the enum looks nicer, but not sure how much it buys us beyond
that.
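For reference, a minimal sketch of a named enum with a sentinel; the MMOP_*
values mirror the existing online types, while the enum name and the
MMOP_NOT_CONFIGURED entry are assumptions, not something defined today:
/* Sketch only: name the existing online-type values and add a sentinel so
 * "not configured" is explicit rather than a bare -1.
 */
enum mhp_online_type {
	MMOP_NOT_CONFIGURED = -1,	/* assumed addition */
	MMOP_OFFLINE = 0,
	MMOP_ONLINE,
	MMOP_ONLINE_KERNEL,
	MMOP_ONLINE_MOVABLE,
};
C still won't reject an out-of-range assignment, but the named type at least
lets -Wswitch and reviewers see the full value set in one place.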
ack. Can add some more cleanups early in the series.
I think you still have to sanity check this, but maybe the code looks
cleaner, so will do.
~Gregory
|
{
"author": "Gregory Price <gourry@gourry.net>",
"date": "Mon, 2 Feb 2026 13:02:10 -0500",
"thread_id": "20260129210442.3951412-1-gourry@gourry.net.mbox.gz"
}
|
lkml
|
[RFC PATCH 0/5] Separate compound page from folio
|
Hi all,
Based on my discussion with Jason about device private folio
reinitialization[1], I realize that the concepts of compound page and folio
are mixed together and confusing, as people think a compound page is equal
to a folio. This is not true, since a compound page means a group of
pages is managed as a whole and it can be something other than a folio,
for example, a slab page. To avoid further confusing people, this
patchset separates compound page from folio by moving any folio related
code out of compound page functions.
The code is on top of mm-new (2026-01-28-20-27) and all mm selftests
passed.
The key change is that a compound page no longer sets:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
These fields are only used by folios that are rmappable, so the code
setting these fields is moved to page_rmappable_folio(). To make this
code move, the patchset also needs to change several places where
folio and compound page are used interchangeably or folio use is unusual:
1. in io_mem_alloc_compound(), a compound page is allocated, but later
it is mapped via vm_insert_pages() like a rmappable folio;
2. __split_folio_to_order() sets large_rmappable flag directly instead
of using page_rmappable_folio() for after-split folios;
3. hugetlb unsets large_rmappable to escape deferred_list unqueue
operation.
Finally, the page freeing path is also changed to have different checks
for compound page and folio.
One thing to note is that for compound page, I do not store compound
order in folio->_nr_pages, which overlaps with page[1].memcg_data and
use 1 << compound_order() instead, since I do not want to add a new
union to struct page and compound_nr() is not as widely used as
folio_nr_pages(). But let me know if there is a performance concern for
this.
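To make the trade-off concrete, a rough sketch of a compound_nr() that derives
the count from the stored order instead of reading folio->_nr_pages (a sketch
under the assumptions above, not necessarily the patch's exact code):
/* Sketch: head pages store their order, so the page count can be computed
 * rather than read from folio->_nr_pages (which overlaps page[1].memcg_data).
 */
static inline unsigned long compound_nr(struct page *page)
{
	if (!PageHead(page))
		return 1;
	return 1UL << compound_order(page);
}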
Comments and suggestions are welcome.
Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1]
Zi Yan (5):
io_uring: allocate folio in io_mem_alloc_compound() and function
rename
mm/huge_memory: use page_rmappable_folio() to convert after-split
folios
mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list
handling
mm: only use struct page in compound_nr() and compound_order()
mm: code separation for compound page and folio
include/linux/mm.h | 12 ++++--------
io_uring/memmap.c | 12 ++++++------
mm/huge_memory.c | 5 ++---
mm/hugetlb.c | 8 ++++----
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 47 +++++++++++++++++++++++++++-------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
8 files changed, 64 insertions(+), 47 deletions(-)
--
2.51.0
|
The page allocated in io_mem_alloc_compound() is actually used as a folio
later in io_region_mmap(). So allocate a folio instead of a compound page
and rename io_mem_alloc_compound() to io_mem_alloc_folio().
This prepares for code separation of compound page and folio in a follow-up
commit.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
io_uring/memmap.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/io_uring/memmap.c b/io_uring/memmap.c
index 7d3c5eb58480..8ed8a78d71cc 100644
--- a/io_uring/memmap.c
+++ b/io_uring/memmap.c
@@ -15,10 +15,10 @@
#include "rsrc.h"
#include "zcrx.h"
-static bool io_mem_alloc_compound(struct page **pages, int nr_pages,
+static bool io_mem_alloc_folio(struct page **pages, int nr_pages,
size_t size, gfp_t gfp)
{
- struct page *page;
+ struct folio *folio;
int i, order;
order = get_order(size);
@@ -27,12 +27,12 @@ static bool io_mem_alloc_compound(struct page **pages, int nr_pages,
else if (order)
gfp |= __GFP_COMP;
- page = alloc_pages(gfp, order);
- if (!page)
+ folio = folio_alloc(gfp, order);
+ if (!folio)
return false;
for (i = 0; i < nr_pages; i++)
- pages[i] = page + i;
+ pages[i] = folio_page(folio, i);
return true;
}
@@ -162,7 +162,7 @@ static int io_region_allocate_pages(struct io_mapped_region *mr,
if (!pages)
return -ENOMEM;
- if (io_mem_alloc_compound(pages, mr->nr_pages, size, gfp)) {
+ if (io_mem_alloc_folio(pages, mr->nr_pages, size, gfp)) {
mr->flags |= IO_REGION_F_SINGLE_REF;
goto done;
}
--
2.51.0
|
{
"author": "Zi Yan <ziy@nvidia.com>",
"date": "Thu, 29 Jan 2026 22:48:14 -0500",
"thread_id": "21EACA83-C358-4FE7-BE2F-415A7EDC1485@nvidia.com.mbox.gz"
}
|
lkml
|
[RFC PATCH 0/5] Separate compound page from folio
|
Hi all,
Based on my discussion with Jason about device private folio
reinitialization[1], I realize that the concepts of compound page and folio
are mixed together and confusing, as people think a compound page is equal
to a folio. This is not true, since a compound page means a group of
pages is managed as a whole and it can be something other than a folio,
for example, a slab page. To avoid further confusing people, this
patchset separates compound page from folio by moving any folio related
code out of compound page functions.
The code is on top of mm-new (2026-01-28-20-27) and all mm selftests
passed.
The key change is that a compound page no longer sets:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
These fields are only used by folios that are rmappable, so the code
setting these fields is moved to page_rmappable_folio(). To make this
code move, the patchset also needs to change several places where
folio and compound page are used interchangeably or folio use is unusual:
1. in io_mem_alloc_compound(), a compound page is allocated, but later
it is mapped via vm_insert_pages() like a rmappable folio;
2. __split_folio_to_order() sets large_rmappable flag directly instead
of using page_rmappable_folio() for after-split folios;
3. hugetlb unsets large_rmappable to escape deferred_list unqueue
operation.
Finally, the page freeing path is also changed to have different checks
for compound page and folio.
One thing to note is that for compound page, I do not store compound
order in folio->_nr_pages, which overlaps with page[1].memcg_data and
use 1 << compound_order() instead, since I do not want to add a new
union to struct page and compound_nr() is not as widely used as
folio_nr_pages(). But let me know if there is a performance concern for
this.
Comments and suggestions are welcome.
Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1]
Zi Yan (5):
io_uring: allocate folio in io_mem_alloc_compound() and function
rename
mm/huge_memory: use page_rmappable_folio() to convert after-split
folios
mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list
handling
mm: only use struct page in compound_nr() and compound_order()
mm: code separation for compound page and folio
include/linux/mm.h | 12 ++++--------
io_uring/memmap.c | 12 ++++++------
mm/huge_memory.c | 5 ++---
mm/hugetlb.c | 8 ++++----
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 47 +++++++++++++++++++++++++++-------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
8 files changed, 64 insertions(+), 47 deletions(-)
--
2.51.0
|
Current code uses folio_set_large_rmappable() on after-split folios, but
these folios should be treated as compound pages and converted to folios
with page_rmappable_folio().
This prepares for code separation of compound page and folio in a follow-up
commit.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/huge_memory.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 44ff8a648afd..74ba076e3fc0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3558,10 +3558,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
* which needs correct compound_head().
*/
clear_compound_head(new_head);
- if (new_order) {
+ if (new_order)
prep_compound_page(new_head, new_order);
- folio_set_large_rmappable(new_folio);
- }
+ page_rmappable_folio(new_head);
if (folio_test_young(folio))
folio_set_young(new_folio);
--
2.51.0
|
{
"author": "Zi Yan <ziy@nvidia.com>",
"date": "Thu, 29 Jan 2026 22:48:15 -0500",
"thread_id": "21EACA83-C358-4FE7-BE2F-415A7EDC1485@nvidia.com.mbox.gz"
}
|
lkml
|
[RFC PATCH 0/5] Separate compound page from folio
|
Hi all,
Based on my discussion with Jason about device private folio
reinitialization[1], I realize that the concepts of compound page and folio
are mixed together and confusing, as people think a compound page is equal
to a folio. This is not true, since a compound page means a group of
pages is managed as a whole and it can be something other than a folio,
for example, a slab page. To avoid further confusing people, this
patchset separates compound page from folio by moving any folio related
code out of compound page functions.
The code is on top of mm-new (2026-01-28-20-27) and all mm selftests
passed.
The key change is that a compound page no longer sets:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
These fields are only used by folios that are rmappable, so the code
setting these fields is moved to page_rmappable_folio(). To make this
code move, the patchset also needs to change several places where
folio and compound page are used interchangeably or folio use is unusual:
1. in io_mem_alloc_compound(), a compound page is allocated, but later
it is mapped via vm_insert_pages() like a rmappable folio;
2. __split_folio_to_order() sets large_rmappable flag directly instead
of using page_rmappable_folio() for after-split folios;
3. hugetlb unsets large_rmappable to escape deferred_list unqueue
operation.
Finally, the page freeing path is also changed to have different checks
for compound page and folio.
One thing to note is that for compound page, I do not store compound
order in folio->_nr_pages, which overlaps with page[1].memcg_data and
use 1 << compound_order() instead, since I do not want to add a new
union to struct page and compound_nr() is not as widely used as
folio_nr_pages(). But let me know if there is a performance concern for
this.
Comments and suggestions are welcome.
Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1]
Zi Yan (5):
io_uring: allocate folio in io_mem_alloc_compound() and function
rename
mm/huge_memory: use page_rmappable_folio() to convert after-split
folios
mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list
handling
mm: only use struct page in compound_nr() and compound_order()
mm: code separation for compound page and folio
include/linux/mm.h | 12 ++++--------
io_uring/memmap.c | 12 ++++++------
mm/huge_memory.c | 5 ++---
mm/hugetlb.c | 8 ++++----
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 47 +++++++++++++++++++++++++++-------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
8 files changed, 64 insertions(+), 47 deletions(-)
--
2.51.0
|
Commit f708f6970cc9 ("mm/hugetlb: fix kernel NULL pointer dereference when
migrating hugetlb folio") fixed a NULL pointer dereference when
folio_undo_large_rmappable(), now folio_unqueue_deferred_split(), is used on
hugetlb to clear deferred_list. It cleared the large_rmappable flag on hugetlb.
hugetlb is rmappable, so clearing the large_rmappable flag is misleading.
Instead, reject hugetlb in folio_unqueue_deferred_split() to avoid the
issue.
This prepares for code separation of compound page and folio in a follow-up
commit.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/hugetlb.c | 6 +++---
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 3 ++-
3 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6e855a32de3d..7466c7bf41a1 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1422,8 +1422,8 @@ static struct folio *alloc_gigantic_frozen_folio(int order, gfp_t gfp_mask,
if (hugetlb_cma_exclusive_alloc())
return NULL;
- folio = (struct folio *)alloc_contig_frozen_pages(1 << order, gfp_mask,
- nid, nodemask);
+ folio = page_rmappable_folio(alloc_contig_frozen_pages(1 << order, gfp_mask,
+ nid, nodemask));
return folio;
}
#else /* !CONFIG_ARCH_HAS_GIGANTIC_PAGE || !CONFIG_CONTIG_ALLOC */
@@ -1859,7 +1859,7 @@ static struct folio *alloc_buddy_frozen_folio(int order, gfp_t gfp_mask,
if (alloc_try_hard)
gfp_mask |= __GFP_RETRY_MAYFAIL;
- folio = (struct folio *)__alloc_frozen_pages(gfp_mask, order, nid, nmask);
+ folio = page_rmappable_folio(__alloc_frozen_pages(gfp_mask, order, nid, nmask));
/*
* If we did not specify __GFP_RETRY_MAYFAIL, but still got a
diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c
index f83ae4998990..4245b5dda4dc 100644
--- a/mm/hugetlb_cma.c
+++ b/mm/hugetlb_cma.c
@@ -51,7 +51,7 @@ struct folio *hugetlb_cma_alloc_frozen_folio(int order, gfp_t gfp_mask,
if (!page)
return NULL;
- folio = page_folio(page);
+ folio = page_rmappable_folio(page);
folio_set_hugetlb_cma(folio);
return folio;
}
diff --git a/mm/internal.h b/mm/internal.h
index d67e8bb75734..8bb22fb9a0e1 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -835,7 +835,8 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
bool __folio_unqueue_deferred_split(struct folio *folio);
static inline bool folio_unqueue_deferred_split(struct folio *folio)
{
- if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio))
+ if (folio_order(folio) <= 1 || !folio_test_large_rmappable(folio) ||
+ folio_test_hugetlb(folio))
return false;
/*
--
2.51.0
|
{
"author": "Zi Yan <ziy@nvidia.com>",
"date": "Thu, 29 Jan 2026 22:48:16 -0500",
"thread_id": "21EACA83-C358-4FE7-BE2F-415A7EDC1485@nvidia.com.mbox.gz"
}
|
lkml
|
[RFC PATCH 0/5] Separate compound page from folio
|
Hi all,
Based on my discussion with Jason about device private folio
reinitialization[1], I realize that the concepts of compound page and folio
are mixed together and confusing, as people think a compound page is equal
to a folio. This is not true, since a compound page means a group of
pages is managed as a whole and it can be something other than a folio,
for example, a slab page. To avoid further confusing people, this
patchset separates compound page from folio by moving any folio related
code out of compound page functions.
The code is on top of mm-new (2026-01-28-20-27) and all mm selftests
passed.
The key change is that a compound page no longer sets:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
These fields are only used by folios that are rmappable. The code
setting these fields is moved to page_rmappable_folio(). To make the
code move, this patchset also needs to change several places where
folio and compound page are used interchangeably or where folio use is
unusual:
1. in io_mem_alloc_compound(), a compound page is allocated, but later
it is mapped via vm_insert_pages() like a rmappable folio;
2. __split_folio_to_order() sets large_rmappable flag directly instead
of using page_rmappable_folio() for after-split folios;
3. hugetlb unsets large_rmappable to escape deferred_list unqueue
operation.
At last, the page freeing path is also changed to have different checks
for compound page and folio.
One thing to note is that for a compound page, I do not store the compound
order in folio->_nr_pages (which overlaps with page[1].memcg_data) and
use 1 << compound_order() instead, since I do not want to add a new
union to struct page and compound_nr() is not as widely used as
folio_nr_pages(). But let me know if there is a performance concern for
this.
Comments and suggestions are welcome.
Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1]
Zi Yan (5):
io_uring: allocate folio in io_mem_alloc_compound() and function
rename
mm/huge_memory: use page_rmappable_folio() to convert after-split
folios
mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list
handling
mm: only use struct page in compound_nr() and compound_order()
mm: code separation for compound page and folio
include/linux/mm.h | 12 ++++--------
io_uring/memmap.c | 12 ++++++------
mm/huge_memory.c | 5 ++---
mm/hugetlb.c | 8 ++++----
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 47 +++++++++++++++++++++++++++-------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
8 files changed, 64 insertions(+), 47 deletions(-)
--
2.51.0
|
A compound page is not a folio. Using struct folio in compound_nr() and
compound_order() is misleading. Use struct page and refer to the right
subpage of a compound page to read the compound page order. compound_nr() is
calculated using compound_order() instead of reading folio->_nr_pages.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
include/linux/mm.h | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f8a8fd47399c..f1c54d9f4620 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1428,11 +1428,9 @@ static inline unsigned long folio_large_nr_pages(const struct folio *folio)
*/
static inline unsigned int compound_order(const struct page *page)
{
- const struct folio *folio = (struct folio *)page;
-
- if (!test_bit(PG_head, &folio->flags.f))
+ if (!test_bit(PG_head, &page->flags.f))
return 0;
- return folio_large_order(folio);
+ return page[1].flags.f & 0xffUL;
}
/**
@@ -2514,11 +2512,9 @@ static inline unsigned long folio_nr_pages(const struct folio *folio)
*/
static inline unsigned long compound_nr(const struct page *page)
{
- const struct folio *folio = (struct folio *)page;
-
- if (!test_bit(PG_head, &folio->flags.f))
+ if (!test_bit(PG_head, &page->flags.f))
return 1;
- return folio_large_nr_pages(folio);
+ return 1 << compound_order(page);
}
/**
--
2.51.0
|
{
"author": "Zi Yan <ziy@nvidia.com>",
"date": "Thu, 29 Jan 2026 22:48:17 -0500",
"thread_id": "21EACA83-C358-4FE7-BE2F-415A7EDC1485@nvidia.com.mbox.gz"
}
|
lkml
|
[RFC PATCH 0/5] Separate compound page from folio
|
Hi all,
Based on my discussion with Jason about device private folio
reinitialization[1], I realize that the concepts of compound page and folio
are mixed together and confusing, as people think a compound page is equal
to a folio. This is not true, since a compound page means a group of
pages is managed as a whole and it can be something other than a folio,
for example, a slab page. To avoid further confusing people, this
patchset separates compound page from folio by moving any folio related
code out of compound page functions.
The code is on top of mm-new (2026-01-28-20-27) and all mm selftests
passed.
The key change is that a compound page no longer sets:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
These fields are only used by folios that are rmappable. The code
setting these fields is moved to page_rmappable_folio(). To make the
code move, this patchset also needs to change several places where
folio and compound page are used interchangeably or where folio use is
unusual:
1. in io_mem_alloc_compound(), a compound page is allocated, but later
it is mapped via vm_insert_pages() like a rmappable folio;
2. __split_folio_to_order() sets large_rmappable flag directly instead
of using page_rmappable_folio() for after-split folios;
3. hugetlb unsets large_rmappable to escape deferred_list unqueue
operation.
At last, the page freeing path is also changed to have different checks
for compound page and folio.
One thing to note is that for a compound page, I do not store the compound
order in folio->_nr_pages (which overlaps with page[1].memcg_data) and
use 1 << compound_order() instead, since I do not want to add a new
union to struct page and compound_nr() is not as widely used as
folio_nr_pages(). But let me know if there is a performance concern for
this.
Comments and suggestions are welcome.
Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1]
Zi Yan (5):
io_uring: allocate folio in io_mem_alloc_compound() and function
rename
mm/huge_memory: use page_rmappable_folio() to convert after-split
folios
mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list
handling
mm: only use struct page in compound_nr() and compound_order()
mm: code separation for compound page and folio
include/linux/mm.h | 12 ++++--------
io_uring/memmap.c | 12 ++++++------
mm/huge_memory.c | 5 ++---
mm/hugetlb.c | 8 ++++----
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 47 +++++++++++++++++++++++++++-------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
8 files changed, 64 insertions(+), 47 deletions(-)
--
2.51.0
|
A compound page is not a folio. Using struct folio in prep_compound_head()
causes confusion, since the input page is not a folio. The compound page to
folio conversion happens in page_rmappable_folio(). So move folio code from
prep_compound_head() to page_rmappable_folio().
After the change, a compound page no longer has the following folio fields
set:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
The page freeing path for compound pages does not need to check these
fields and now just checks ->mapping == TAIL_MAPPING for all subpages.
So free_tail_page_prepare() has a new large_rmappable input to distinguish
between a compound page and a folio.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/hugetlb.c | 2 +-
mm/internal.h | 44 ++++++++++++++++++++++++++------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
4 files changed, 46 insertions(+), 25 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7466c7bf41a1..231c91c3d93b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3204,7 +3204,7 @@ static void __init hugetlb_folio_init_vmemmap(struct folio *folio,
ret = folio_ref_freeze(folio, 1);
VM_BUG_ON(!ret);
hugetlb_folio_init_tail_vmemmap(folio, 1, nr_pages);
- prep_compound_head(&folio->page, huge_page_order(h));
+ set_compound_order(&folio->page, huge_page_order(h));
}
static bool __init hugetlb_bootmem_page_prehvo(struct huge_bootmem_page *m)
diff --git a/mm/internal.h b/mm/internal.h
index 8bb22fb9a0e1..4d72e915d623 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -854,30 +854,38 @@ static inline struct folio *page_rmappable_folio(struct page *page)
{
struct folio *folio = (struct folio *)page;
- if (folio && folio_test_large(folio))
+ if (folio && folio_test_large(folio)) {
+ unsigned int order = compound_order(page);
+
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+ folio->_nr_pages = 1U << order;
+#endif
+ atomic_set(&folio->_large_mapcount, -1);
+ if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT))
+ atomic_set(&folio->_nr_pages_mapped, 0);
+ if (IS_ENABLED(CONFIG_MM_ID)) {
+ folio->_mm_ids = 0;
+ folio->_mm_id_mapcount[0] = -1;
+ folio->_mm_id_mapcount[1] = -1;
+ }
+ if (IS_ENABLED(CONFIG_64BIT) || order > 1) {
+ atomic_set(&folio->_pincount, 0);
+ atomic_set(&folio->_entire_mapcount, -1);
+ }
+ if (order > 1)
+ INIT_LIST_HEAD(&folio->_deferred_list);
folio_set_large_rmappable(folio);
+ }
return folio;
}
-static inline void prep_compound_head(struct page *page, unsigned int order)
+static inline void set_compound_order(struct page *page, unsigned int order)
{
- struct folio *folio = (struct folio *)page;
+ if (WARN_ON_ONCE(!order || !PageHead(page)))
+ return;
+ VM_WARN_ON_ONCE(order > MAX_FOLIO_ORDER);
- folio_set_order(folio, order);
- atomic_set(&folio->_large_mapcount, -1);
- if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT))
- atomic_set(&folio->_nr_pages_mapped, 0);
- if (IS_ENABLED(CONFIG_MM_ID)) {
- folio->_mm_ids = 0;
- folio->_mm_id_mapcount[0] = -1;
- folio->_mm_id_mapcount[1] = -1;
- }
- if (IS_ENABLED(CONFIG_64BIT) || order > 1) {
- atomic_set(&folio->_pincount, 0);
- atomic_set(&folio->_entire_mapcount, -1);
- }
- if (order > 1)
- INIT_LIST_HEAD(&folio->_deferred_list);
+ page[1].flags.f = (page[1].flags.f & ~0xffUL) | order;
}
static inline void prep_compound_tail(struct page *head, int tail_idx)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 1a29a719af58..23a42a4af77b 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1102,7 +1102,7 @@ static void __ref memmap_init_compound(struct page *head,
prep_compound_tail(head, pfn - head_pfn);
set_page_count(page, 0);
}
- prep_compound_head(head, order);
+ set_compound_order(head, order);
}
void __ref memmap_init_zone_device(struct zone *zone,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e4104973e22f..2194a6b3a062 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -746,7 +746,7 @@ void prep_compound_page(struct page *page, unsigned int order)
for (i = 1; i < nr_pages; i++)
prep_compound_tail(page, i);
- prep_compound_head(page, order);
+ set_compound_order(page, order);
}
static inline void set_buddy_order(struct page *page, unsigned int order)
@@ -1126,7 +1126,8 @@ static inline bool is_check_pages_enabled(void)
return static_branch_unlikely(&check_pages_enabled);
}
-static int free_tail_page_prepare(struct page *head_page, struct page *page)
+static int free_tail_page_prepare(struct page *head_page, struct page *page,
+ bool large_rmappable)
{
struct folio *folio = (struct folio *)head_page;
int ret = 1;
@@ -1141,6 +1142,13 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
ret = 0;
goto out;
}
+ if (!large_rmappable) {
+ if (page->mapping != TAIL_MAPPING) {
+ bad_page(page, "corrupted mapping in compound page's tail page");
+ goto out;
+ }
+ goto skip_rmappable_checks;
+ }
switch (page - head_page) {
case 1:
/* the first tail page: these may be in place of ->mapping */
@@ -1198,11 +1206,12 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
fallthrough;
default:
if (page->mapping != TAIL_MAPPING) {
- bad_page(page, "corrupted mapping in tail page");
+ bad_page(page, "corrupted mapping in folio's tail page");
goto out;
}
break;
}
+skip_rmappable_checks:
if (unlikely(!PageTail(page))) {
bad_page(page, "PageTail not set");
goto out;
@@ -1392,17 +1401,21 @@ __always_inline bool free_pages_prepare(struct page *page,
* avoid checking PageCompound for order-0 pages.
*/
if (unlikely(order)) {
+ bool large_rmappable = false;
int i;
if (compound) {
+ large_rmappable = folio_test_large_rmappable(folio);
+ /* clear compound order */
page[1].flags.f &= ~PAGE_FLAGS_SECOND;
#ifdef NR_PAGES_IN_LARGE_FOLIO
- folio->_nr_pages = 0;
+ if (large_rmappable)
+ folio->_nr_pages = 0;
#endif
}
for (i = 1; i < (1 << order); i++) {
if (compound)
- bad += free_tail_page_prepare(page, page + i);
+ bad += free_tail_page_prepare(page, page + i, large_rmappable);
if (is_check_pages_enabled()) {
if (free_page_is_bad(page + i)) {
bad++;
--
2.51.0
|
{
"author": "Zi Yan <ziy@nvidia.com>",
"date": "Thu, 29 Jan 2026 22:48:18 -0500",
"thread_id": "21EACA83-C358-4FE7-BE2F-415A7EDC1485@nvidia.com.mbox.gz"
}
|
lkml
|
[RFC PATCH 0/5] Separate compound page from folio
|
Hi all,
Based on my discussion with Jason about device private folio
reinitialization[1], I realize that the concepts of compound page and folio
are mixed together and confusing, as people think a compound page is equal
to a folio. This is not true, since a compound page means a group of
pages is managed as a whole and it can be something other than a folio,
for example, a slab page. To avoid further confusing people, this
patchset separates compound page from folio by moving any folio related
code out of compound page functions.
The code is on top of mm-new (2026-01-28-20-27) and all mm selftests
passed.
The key change is that a compound page no longer sets:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
These fields are only used by folios that are rmappable. The code
setting these fields is moved to page_rmappable_folio(). To make the
code move, this patchset also needs to change several places where
folio and compound page are used interchangeably or where folio use is
unusual:
1. in io_mem_alloc_compound(), a compound page is allocated, but later
it is mapped via vm_insert_pages() like a rmappable folio;
2. __split_folio_to_order() sets large_rmappable flag directly instead
of using page_rmappable_folio() for after-split folios;
3. hugetlb unsets large_rmappable to escape deferred_list unqueue
operation.
At last, the page freeing path is also changed to have different checks
for compound page and folio.
One thing to note is that for a compound page, I do not store the compound
order in folio->_nr_pages (which overlaps with page[1].memcg_data) and
use 1 << compound_order() instead, since I do not want to add a new
union to struct page and compound_nr() is not as widely used as
folio_nr_pages(). But let me know if there is a performance concern for
this.
Comments and suggestions are welcome.
Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1]
Zi Yan (5):
io_uring: allocate folio in io_mem_alloc_compound() and function
rename
mm/huge_memory: use page_rmappable_folio() to convert after-split
folios
mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list
handling
mm: only use struct page in compound_nr() and compound_order()
mm: code separation for compound page and folio
include/linux/mm.h | 12 ++++--------
io_uring/memmap.c | 12 ++++++------
mm/huge_memory.c | 5 ++---
mm/hugetlb.c | 8 ++++----
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 47 +++++++++++++++++++++++++++-------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
8 files changed, 64 insertions(+), 47 deletions(-)
--
2.51.0
|
syzbot ci has tested the following series
[v1] Separate compound page from folio
https://lore.kernel.org/all/20260130034818.472804-1-ziy@nvidia.com
* [RFC PATCH 1/5] io_uring: allocate folio in io_mem_alloc_compound() and function rename
* [RFC PATCH 2/5] mm/huge_memory: use page_rmappable_folio() to convert after-split folios
* [RFC PATCH 3/5] mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list handling
* [RFC PATCH 4/5] mm: only use struct page in compound_nr() and compound_order()
* [RFC PATCH 5/5] mm: code separation for compound page and folio
and found the following issue:
WARNING in __folio_large_mapcount_sanity_checks
Full report is available here:
https://ci.syzbot.org/series/f64f0297-d388-4cfa-b3be-f05819d0ce34
***
WARNING in __folio_large_mapcount_sanity_checks
tree: mm-new
URL: https://kernel.googlesource.com/pub/scm/linux/kernel/git/akpm/mm.git
base: 0241748f8b68fc2bf637f4901b9d7ca660d177ca
arch: amd64
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config: https://ci.syzbot.org/builds/76dc5ea6-0ff5-410b-8b1f-72e5607a704e/config
C repro: https://ci.syzbot.org/findings/a308f1d6-69e2-4ebc-80a9-b51d9dc02851/c_repro
syz repro: https://ci.syzbot.org/findings/a308f1d6-69e2-4ebc-80a9-b51d9dc02851/syz_repro
------------[ cut here ]------------
diff > folio_large_nr_pages(folio)
WARNING: ./include/linux/rmap.h:148 at __folio_large_mapcount_sanity_checks+0x499/0x6b0 include/linux/rmap.h:148, CPU#1: syz.0.17/5988
Modules linked in:
CPU: 1 UID: 0 PID: 5988 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:__folio_large_mapcount_sanity_checks+0x499/0x6b0 include/linux/rmap.h:148
Code: 5f 5d e9 4a 4e 64 09 cc e8 84 d8 aa ff 90 0f 0b 90 e9 82 fc ff ff e8 76 d8 aa ff 90 0f 0b 90 e9 8f fc ff ff e8 68 d8 aa ff 90 <0f> 0b 90 e9 b8 fc ff ff e8 5a d8 aa ff 90 0f 0b 90 e9 f2 fc ff ff
RSP: 0018:ffffc900040e72f8 EFLAGS: 00010293
RAX: ffffffff8217c0f8 RBX: ffffea0006ef5c00 RCX: ffff888105fdba80
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000000
RBP: 0000000000000001 R08: ffffea0006ef5c07 R09: 1ffffd4000ddeb80
R10: dffffc0000000000 R11: fffff94000ddeb81 R12: 0000000000000001
R13: 0000000000000000 R14: 1ffffd4000ddeb8f R15: ffffea0006ef5c78
FS: 00005555867b3500(0000) GS:ffff8882a9923000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00002000000000c0 CR3: 0000000103ab0000 CR4: 00000000000006f0
Call Trace:
<TASK>
folio_add_return_large_mapcount include/linux/rmap.h:184 [inline]
__folio_add_rmap mm/rmap.c:1377 [inline]
__folio_add_file_rmap mm/rmap.c:1696 [inline]
folio_add_file_rmap_ptes+0x4c2/0xe60 mm/rmap.c:1722
insert_page_into_pte_locked+0x5ab/0x910 mm/memory.c:2378
insert_page+0x186/0x2d0 mm/memory.c:2398
packet_mmap+0x360/0x530 net/packet/af_packet.c:4622
vfs_mmap include/linux/fs.h:2053 [inline]
mmap_file mm/internal.h:167 [inline]
__mmap_new_file_vma mm/vma.c:2468 [inline]
__mmap_new_vma mm/vma.c:2532 [inline]
__mmap_region mm/vma.c:2759 [inline]
mmap_region+0x18fe/0x2240 mm/vma.c:2837
do_mmap+0xc39/0x10c0 mm/mmap.c:559
vm_mmap_pgoff+0x2c9/0x4f0 mm/util.c:581
ksys_mmap_pgoff+0x51e/0x760 mm/mmap.c:605
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f5d7399acb9
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe9f3eea78 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f5d73c15fa0 RCX: 00007f5d7399acb9
RDX: 0000000000000002 RSI: 0000000000030000 RDI: 0000200000000000
RBP: 00007f5d73a08bf7 R08: 0000000000000003 R09: 0000000000000000
R10: 0000000000000011 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f5d73c15fac R14: 00007f5d73c15fa0 R15: 00007f5d73c15fa0
</TASK>
***
If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com
---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.
|
{
"author": "syzbot ci <syzbot+ci7f632827e1b1c91b@syzkaller.appspotmail.com>",
"date": "Fri, 30 Jan 2026 00:15:47 -0800",
"thread_id": "21EACA83-C358-4FE7-BE2F-415A7EDC1485@nvidia.com.mbox.gz"
}
|
lkml
|
[RFC PATCH 0/5] Separate compound page from folio
|
Hi all,
Based on my discussion with Jason about device private folio
reinitialization[1], I realize that the concepts of compound page and folio
are mixed together and confusing, as people think a compound page is equal
to a folio. This is not true, since a compound page means a group of
pages is managed as a whole and it can be something other than a folio,
for example, a slab page. To avoid further confusing people, this
patchset separates compound page from folio by moving any folio related
code out of compound page functions.
The code is on top of mm-new (2026-01-28-20-27) and all mm selftests
passed.
The key change is that a compound page no longer sets:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
These fields are only used by folios that are rmappable. The code
setting these fields is moved to page_rmappable_folio(). To make the
code move, this patchset also needs to change several places where
folio and compound page are used interchangeably or where folio use is
unusual:
1. in io_mem_alloc_compound(), a compound page is allocated, but later
it is mapped via vm_insert_pages() like a rmappable folio;
2. __split_folio_to_order() sets large_rmappable flag directly instead
of using page_rmappable_folio() for after-split folios;
3. hugetlb unsets large_rmappable to escape deferred_list unqueue
operation.
At last, the page freeing path is also changed to have different checks
for compound page and folio.
One thing to note is that for a compound page, I do not store the compound
order in folio->_nr_pages (which overlaps with page[1].memcg_data) and
use 1 << compound_order() instead, since I do not want to add a new
union to struct page and compound_nr() is not as widely used as
folio_nr_pages(). But let me know if there is a performance concern for
this.
Comments and suggestions are welcome.
Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1]
Zi Yan (5):
io_uring: allocate folio in io_mem_alloc_compound() and function
rename
mm/huge_memory: use page_rmappable_folio() to convert after-split
folios
mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list
handling
mm: only use struct page in compound_nr() and compound_order()
mm: code separation for compound page and folio
include/linux/mm.h | 12 ++++--------
io_uring/memmap.c | 12 ++++++------
mm/huge_memory.c | 5 ++---
mm/hugetlb.c | 8 ++++----
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 47 +++++++++++++++++++++++++++-------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
8 files changed, 64 insertions(+), 47 deletions(-)
--
2.51.0
|
On 30 Jan 2026, at 3:15, syzbot ci wrote:
The issue comes from alloc_one_pg_vec_page() in net/packet/af_packet.c.
It allocates a compound page with __GFP_COMP, but later does vm_insert_page()
in packet_mmap(), using it as a folio.
The fix below is a hack. We will need a get_free_folios() instead.
I will check all __GFP_COMP callers to find out which ones are using it
as a folio and which ones are using it as a compound page. I suspect
most are using it as a folio.
#syz test
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2194a6b3a062..90858d20dfbe 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5311,6 +5311,8 @@ unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order)
page = alloc_pages_noprof(gfp_mask & ~__GFP_HIGHMEM, order);
if (!page)
return 0;
+ if (gfp_mask & __GFP_COMP)
+ return (unsigned long)folio_address(page_rmappable_folio(page));
return (unsigned long) page_address(page);
}
EXPORT_SYMBOL(get_free_pages_noprof);
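For reference, a rough sketch of what a dedicated helper could look like (the
name, signature, and placement are assumptions; no such helper exists in this
series yet):

/*
 * Sketch only: allocate an rmappable folio and return its kernel address,
 * mirroring get_free_pages() semantics for callers that really want a folio.
 */
static unsigned long get_free_folio_sketch(gfp_t gfp_mask, unsigned int order)
{
	struct folio *folio = folio_alloc(gfp_mask & ~__GFP_HIGHMEM, order);

	if (!folio)
		return 0;
	return (unsigned long)folio_address(folio);
}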
Best Regards,
Yan, Zi
|
{
"author": "Zi Yan <ziy@nvidia.com>",
"date": "Fri, 30 Jan 2026 11:39:40 -0500",
"thread_id": "21EACA83-C358-4FE7-BE2F-415A7EDC1485@nvidia.com.mbox.gz"
}
|
lkml
|
[RFC PATCH 0/5] Separate compound page from folio
|
Hi all,
Based on my discussion with Jason about device private folio
reinitialization[1], I realize that the concepts of compound page and folio
are mixed together and confusing, as people think a compound page is equal
to a folio. This is not true, since a compound page means a group of
pages is managed as a whole and it can be something other than a folio,
for example, a slab page. To avoid further confusing people, this
patchset separates compound page from folio by moving any folio related
code out of compound page functions.
The code is on top of mm-new (2026-01-28-20-27) and all mm selftests
passed.
The key change is that a compound page no longer sets:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
These fields are only used by folios that are rmappable. The code
setting these fields is moved to page_rmappable_folio(). To make the
code move, this patchset also needs to change several places where
folio and compound page are used interchangeably or where folio use is
unusual:
1. in io_mem_alloc_compound(), a compound page is allocated, but later
it is mapped via vm_insert_pages() like a rmappable folio;
2. __split_folio_to_order() sets large_rmappable flag directly instead
of using page_rmappable_folio() for after-split folios;
3. hugetlb unsets large_rmappable to escape deferred_list unqueue
operation.
At last, the page freeing path is also changed to have different checks
for compound page and folio.
One thing to note is that for a compound page, I do not store the compound
order in folio->_nr_pages (which overlaps with page[1].memcg_data) and
use 1 << compound_order() instead, since I do not want to add a new
union to struct page and compound_nr() is not as widely used as
folio_nr_pages(). But let me know if there is a performance concern for
this.
Comments and suggestions are welcome.
Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1]
Zi Yan (5):
io_uring: allocate folio in io_mem_alloc_compound() and function
rename
mm/huge_memory: use page_rmappable_folio() to convert after-split
folios
mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list
handling
mm: only use struct page in compound_nr() and compound_order()
mm: code separation for compound page and folio
include/linux/mm.h | 12 ++++--------
io_uring/memmap.c | 12 ++++++------
mm/huge_memory.c | 5 ++---
mm/hugetlb.c | 8 ++++----
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 47 +++++++++++++++++++++++++++-------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
8 files changed, 64 insertions(+), 47 deletions(-)
--
2.51.0
|
On 2026/1/30 11:48, Zi Yan wrote:
Nit:
Since we're switching to folio_alloc(), which already adds __GFP_COMP
internally, the "else if (order)" part above can be dropped while at it.
IIUC, for order == 0, __GFP_COMP gets ignored anyway:
- prep_new_page() won't call prep_compound_page() (since order is zero)
- page_rmappable_folio() sees a non-compound page and does nothing
So no behavior change there :)
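A minimal sketch of the simplification being suggested (the function name and
surrounding details are assumptions based on this thread, not the actual
patch):

/*
 * Sketch only: folio_alloc() adds __GFP_COMP internally, so no separate
 * "else if (order)" branch is needed; for order == 0 the flag is ignored.
 */
static void *io_mem_alloc_folio_sketch(size_t size, gfp_t gfp)
{
	unsigned int order = get_order(size);
	struct folio *folio = folio_alloc(gfp, order);

	if (!folio)
		return NULL;
	return folio_address(folio);
}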
|
{
"author": "Lance Yang <lance.yang@linux.dev>",
"date": "Sat, 31 Jan 2026 23:30:35 +0800",
"thread_id": "21EACA83-C358-4FE7-BE2F-415A7EDC1485@nvidia.com.mbox.gz"
}
|
lkml
|
[RFC PATCH 0/5] Separate compound page from folio
|
Hi all,
Based on my discussion with Jason about device private folio
reinitialization[1], I realize that the concepts of compound page and folio
are mixed together and confusing, as people think a compound page is equal
to a folio. This is not true, since a compound page means a group of
pages is managed as a whole and it can be something other than a folio,
for example, a slab page. To avoid further confusing people, this
patchset separates compound page from folio by moving any folio related
code out of compound page functions.
The code is on top of mm-new (2026-01-28-20-27) and all mm selftests
passed.
The key change is that a compound page no longer sets:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
These fields are only used by folios that are rmappable. The code
setting these fields is moved to page_rmappable_folio(). To make the
code move, this patchset also needs to change several places where
folio and compound page are used interchangeably or where folio use is
unusual:
1. in io_mem_alloc_compound(), a compound page is allocated, but later
it is mapped via vm_insert_pages() like a rmappable folio;
2. __split_folio_to_order() sets large_rmappable flag directly instead
of using page_rmappable_folio() for after-split folios;
3. hugetlb unsets large_rmappable to escape deferred_list unqueue
operation.
At last, the page freeing path is also changed to have different checks
for compound page and folio.
One thing to note is that for a compound page, I do not store the compound
order in folio->_nr_pages (which overlaps with page[1].memcg_data) and
use 1 << compound_order() instead, since I do not want to add a new
union to struct page and compound_nr() is not as widely used as
folio_nr_pages(). But let me know if there is a performance concern for
this.
Comments and suggestions are welcome.
Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1]
Zi Yan (5):
io_uring: allocate folio in io_mem_alloc_compound() and function
rename
mm/huge_memory: use page_rmappable_folio() to convert after-split
folios
mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list
handling
mm: only use struct page in compound_nr() and compound_order()
mm: code separation for compound page and folio
include/linux/mm.h | 12 ++++--------
io_uring/memmap.c | 12 ++++++------
mm/huge_memory.c | 5 ++---
mm/hugetlb.c | 8 ++++----
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 47 +++++++++++++++++++++++++++-------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
8 files changed, 64 insertions(+), 47 deletions(-)
--
2.51.0
|
On 31 Jan 2026, at 10:30, Lance Yang wrote:
Sure. Will update it in the next version. Thanks.
--
Best Regards,
Yan, Zi
|
{
"author": "Zi Yan <ziy@nvidia.com>",
"date": "Sat, 31 Jan 2026 21:04:53 -0500",
"thread_id": "21EACA83-C358-4FE7-BE2F-415A7EDC1485@nvidia.com.mbox.gz"
}
|
lkml
|
[RFC PATCH 0/5] Separate compound page from folio
|
Hi all,
Based on my discussion with Jason about device private folio
reinitialization[1], I realize that the concepts of compound page and folio
are mixed together and confusing, as people think a compound page is equal
to a folio. This is not true, since a compound page means a group of
pages is managed as a whole and it can be something other than a folio,
for example, a slab page. To avoid further confusing people, this
patchset separates compound page from folio by moving any folio related
code out of compound page functions.
The code is on top of mm-new (2026-01-28-20-27) and all mm selftests
passed.
The key change is that a compound page no longer sets:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
These fields are only used by folios that are rmappable. The code
setting these fields is moved to page_rmappable_folio(). To make the
code move, this patchset also needs to change several places where
folio and compound page are used interchangeably or where folio use is
unusual:
1. in io_mem_alloc_compound(), a compound page is allocated, but later
it is mapped via vm_insert_pages() like a rmappable folio;
2. __split_folio_to_order() sets large_rmappable flag directly instead
of using page_rmappable_folio() for after-split folios;
3. hugetlb unsets large_rmappable to escape deferred_list unqueue
operation.
At last, the page freeing path is also changed to have different checks
for compound page and folio.
One thing to note is that for a compound page, I do not store the compound
order in folio->_nr_pages (which overlaps with page[1].memcg_data) and
use 1 << compound_order() instead, since I do not want to add a new
union to struct page and compound_nr() is not as widely used as
folio_nr_pages(). But let me know if there is a performance concern for
this.
Comments and suggestions are welcome.
Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1]
Zi Yan (5):
io_uring: allocate folio in io_mem_alloc_compound() and function
rename
mm/huge_memory: use page_rmappable_folio() to convert after-split
folios
mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list
handling
mm: only use struct page in compound_nr() and compound_order()
mm: code separation for compound page and folio
include/linux/mm.h | 12 ++++--------
io_uring/memmap.c | 12 ++++++------
mm/huge_memory.c | 5 ++---
mm/hugetlb.c | 8 ++++----
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 47 +++++++++++++++++++++++++++-------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
8 files changed, 64 insertions(+), 47 deletions(-)
--
2.51.0
|
On 1/30/26 11:48 AM, Zi Yan wrote:
IIUC, this will break the semantics of is_transparent_hugepage() and
might trigger a split of a hugetlb folio, right?
static inline bool is_transparent_hugepage(const struct folio *folio)
{
if (!folio_test_large(folio))
return false;
return is_huge_zero_folio(folio) ||
folio_test_large_rmappable(folio);
}
|
{
"author": "Baolin Wang <baolin.wang@linux.alibaba.com>",
"date": "Mon, 2 Feb 2026 11:59:39 +0800",
"thread_id": "21EACA83-C358-4FE7-BE2F-415A7EDC1485@nvidia.com.mbox.gz"
}
|
lkml
|
[RFC PATCH 0/5] Separate compound page from folio
|
Hi all,
Based on my discussion with Jason about device private folio
reinitialization[1], I realize that the concepts of compound page and folio
are mixed together and confusing, as people think a compound page is equal
to a folio. This is not true, since a compound page means a group of
pages is managed as a whole and it can be something other than a folio,
for example, a slab page. To avoid further confusing people, this
patchset separates compound page from folio by moving any folio related
code out of compound page functions.
The code is on top of mm-new (2026-01-28-20-27) and all mm selftests
passed.
The key change is that a compound page no longer sets:
1. folio->_nr_pages,
2. folio->_large_mapcount,
3. folio->_nr_pages_mapped,
4. folio->_mm_ids,
5. folio->_mm_id_mapcount,
6. folio->_pincount,
7. folio->_entire_mapcount,
8. folio->_deferred_list.
These fields are only used by folios that are rmappable. The code
setting these fields is moved to page_rmappable_folio(). To make the
code move, this patchset also needs to change several places where
folio and compound page are used interchangeably or where folio use is
unusual:
1. in io_mem_alloc_compound(), a compound page is allocated, but later
it is mapped via vm_insert_pages() like a rmappable folio;
2. __split_folio_to_order() sets large_rmappable flag directly instead
of using page_rmappable_folio() for after-split folios;
3. hugetlb unsets large_rmappable to escape deferred_list unqueue
operation.
At last, the page freeing path is also changed to have different checks
for compound page and folio.
One thing to note is that for a compound page, I do not store the compound
order in folio->_nr_pages (which overlaps with page[1].memcg_data) and
use 1 << compound_order() instead, since I do not want to add a new
union to struct page and compound_nr() is not as widely used as
folio_nr_pages(). But let me know if there is a performance concern for
this.
Comments and suggestions are welcome.
Link: https://lore.kernel.org/all/F7E3DF24-A37B-40A0-A507-CEF4AB76C44D@nvidia.com/ [1]
Zi Yan (5):
io_uring: allocate folio in io_mem_alloc_compound() and function
rename
mm/huge_memory: use page_rmappable_folio() to convert after-split
folios
mm/hugetlb: set large_rmappable on hugetlb and avoid deferred_list
handling
mm: only use struct page in compound_nr() and compound_order()
mm: code separation for compound page and folio
include/linux/mm.h | 12 ++++--------
io_uring/memmap.c | 12 ++++++------
mm/huge_memory.c | 5 ++---
mm/hugetlb.c | 8 ++++----
mm/hugetlb_cma.c | 2 +-
mm/internal.h | 47 +++++++++++++++++++++++++++-------------------
mm/mm_init.c | 2 +-
mm/page_alloc.c | 23 ++++++++++++++++++-----
8 files changed, 64 insertions(+), 47 deletions(-)
--
2.51.0
|
On 1 Feb 2026, at 22:59, Baolin Wang wrote:
Oh, I missed this. I will check all folio_test_large_rmappable() callers
and filter out hugetlb if necessary.
Thank you for pointing this out.
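For illustration, one way such filtering could look for the caller Baolin
quoted (a sketch only, assuming an explicit hugetlb check is the chosen fix;
this is not a patch from this thread):

/*
 * Sketch: keep is_transparent_hugepage() false for hugetlb folios even once
 * hugetlb starts setting large_rmappable.
 */
static inline bool is_transparent_hugepage(const struct folio *folio)
{
	if (!folio_test_large(folio) || folio_test_hugetlb(folio))
		return false;

	return is_huge_zero_folio(folio) ||
	       folio_test_large_rmappable(folio);
}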
Best Regards,
Yan, Zi
|
{
"author": "Zi Yan <ziy@nvidia.com>",
"date": "Mon, 02 Feb 2026 12:11:45 -0500",
"thread_id": "21EACA83-C358-4FE7-BE2F-415A7EDC1485@nvidia.com.mbox.gz"
}
|
lkml
|
[PATCH v2 0/4] Improve Hyper-V memory deposit error handling
|
This series extends the MSHV driver to properly handle additional
memory-related error codes from the Microsoft Hypervisor by depositing
memory pages when needed.
Currently, when the hypervisor returns HV_STATUS_INSUFFICIENT_MEMORY
during partition creation, the driver calls hv_call_deposit_pages() to
provide the necessary memory. However, there are other memory-related
error codes that indicate the hypervisor needs additional memory
resources, but the driver does not attempt to deposit pages for these
cases.
This series introduces a dedicated helper function to identify all
memory-related error codes (HV_STATUS_INSUFFICIENT_MEMORY,
HV_STATUS_INSUFFICIENT_BUFFERS, HV_STATUS_INSUFFICIENT_DEVICE_DOMAINS, and
HV_STATUS_INSUFFICIENT_ROOT_MEMORY) and ensures the driver attempts to
deposit pages for all of them via a new hv_deposit_memory() helper.
With these changes, partition creation becomes more robust by handling
all scenarios where the hypervisor requires additional memory deposits.
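The resulting call-site pattern (a simplified sketch distilled from the diffs
below; the variable names and the surrounding function are illustrative) looks
roughly like:

	do {
		status = hv_do_hypercall(code, input, output);
		if (!hv_result_needs_memory(status)) {
			/* Success or a non-memory error: stop retrying. */
			ret = hv_result_to_errno(status);
			break;
		}
		/* The hypervisor wants more memory: deposit according to the
		 * specific status, then retry the hypercall. */
		ret = hv_deposit_memory(partition_id, status);
	} while (!ret);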
v2:
- Rename hv_result_oom() into hv_result_needs_memory()
---
Stanislav Kinsburskii (4):
mshv: Introduce hv_result_needs_memory() helper function
mshv: Introduce hv_deposit_memory helper functions
mshv: Handle insufficient contiguous memory hypervisor status
mshv: Handle insufficient root memory hypervisor statuses
drivers/hv/hv_common.c | 3 ++
drivers/hv/hv_proc.c | 54 +++++++++++++++++++++++++++++++++++---
drivers/hv/mshv_root_hv_call.c | 45 +++++++++++++-------------------
drivers/hv/mshv_root_main.c | 5 +---
include/asm-generic/mshyperv.h | 13 +++++++++
include/hyperv/hvgdk_mini.h | 57 +++++++++++++++++++++-------------------
include/hyperv/hvhdk_mini.h | 2 +
7 files changed, 119 insertions(+), 60 deletions(-)
|
Replace direct comparisons of hv_result(status) against
HV_STATUS_INSUFFICIENT_MEMORY with a new hv_result_needs_memory() helper
function.
This improves code readability and provides a consistent and extendable
interface for checking out-of-memory conditions in hypercall results.
No functional changes intended.
Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
---
drivers/hv/hv_proc.c | 14 ++++++++++++--
drivers/hv/mshv_root_hv_call.c | 20 ++++++++++----------
drivers/hv/mshv_root_main.c | 2 +-
include/asm-generic/mshyperv.h | 3 +++
4 files changed, 26 insertions(+), 13 deletions(-)
diff --git a/drivers/hv/hv_proc.c b/drivers/hv/hv_proc.c
index fbb4eb3901bb..e53204b9e05d 100644
--- a/drivers/hv/hv_proc.c
+++ b/drivers/hv/hv_proc.c
@@ -110,6 +110,16 @@ int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages)
}
EXPORT_SYMBOL_GPL(hv_call_deposit_pages);
+bool hv_result_needs_memory(u64 status)
+{
+ switch (hv_result(status)) {
+ case HV_STATUS_INSUFFICIENT_MEMORY:
+ return true;
+ }
+ return false;
+}
+EXPORT_SYMBOL_GPL(hv_result_needs_memory);
+
int hv_call_add_logical_proc(int node, u32 lp_index, u32 apic_id)
{
struct hv_input_add_logical_processor *input;
@@ -137,7 +147,7 @@ int hv_call_add_logical_proc(int node, u32 lp_index, u32 apic_id)
input, output);
local_irq_restore(flags);
- if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) {
+ if (!hv_result_needs_memory(status)) {
if (!hv_result_success(status)) {
hv_status_err(status, "cpu %u apic ID: %u\n",
lp_index, apic_id);
@@ -179,7 +189,7 @@ int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags)
status = hv_do_hypercall(HVCALL_CREATE_VP, input, NULL);
local_irq_restore(irq_flags);
- if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) {
+ if (!hv_result_needs_memory(status)) {
if (!hv_result_success(status)) {
hv_status_err(status, "vcpu: %u, lp: %u\n",
vp_index, flags);
diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c
index 598eaff4ff29..89afeeda21dd 100644
--- a/drivers/hv/mshv_root_hv_call.c
+++ b/drivers/hv/mshv_root_hv_call.c
@@ -115,7 +115,7 @@ int hv_call_create_partition(u64 flags,
status = hv_do_hypercall(HVCALL_CREATE_PARTITION,
input, output);
- if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) {
+ if (!hv_result_needs_memory(status)) {
if (hv_result_success(status))
*partition_id = output->partition_id;
local_irq_restore(irq_flags);
@@ -147,7 +147,7 @@ int hv_call_initialize_partition(u64 partition_id)
status = hv_do_fast_hypercall8(HVCALL_INITIALIZE_PARTITION,
*(u64 *)&input);
- if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) {
+ if (!hv_result_needs_memory(status)) {
ret = hv_result_to_errno(status);
break;
}
@@ -239,7 +239,7 @@ static int hv_do_map_gpa_hcall(u64 partition_id, u64 gfn, u64 page_struct_count,
completed = hv_repcomp(status);
- if (hv_result(status) == HV_STATUS_INSUFFICIENT_MEMORY) {
+ if (hv_result_needs_memory(status)) {
ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id,
HV_MAP_GPA_DEPOSIT_PAGES);
if (ret)
@@ -455,7 +455,7 @@ int hv_call_get_vp_state(u32 vp_index, u64 partition_id,
status = hv_do_hypercall(control, input, output);
- if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) {
+ if (!hv_result_needs_memory(status)) {
if (hv_result_success(status) && ret_output)
memcpy(ret_output, output, sizeof(*output));
@@ -518,7 +518,7 @@ int hv_call_set_vp_state(u32 vp_index, u64 partition_id,
status = hv_do_hypercall(control, input, NULL);
- if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) {
+ if (!hv_result_needs_memory(status)) {
local_irq_restore(flags);
ret = hv_result_to_errno(status);
break;
@@ -563,7 +563,7 @@ static int hv_call_map_vp_state_page(u64 partition_id, u32 vp_index, u32 type,
status = hv_do_hypercall(HVCALL_MAP_VP_STATE_PAGE, input,
output);
- if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) {
+ if (!hv_result_needs_memory(status)) {
if (hv_result_success(status))
*state_page = pfn_to_page(output->map_location);
local_irq_restore(flags);
@@ -718,7 +718,7 @@ hv_call_create_port(u64 port_partition_id, union hv_port_id port_id,
if (hv_result_success(status))
break;
- if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) {
+ if (!hv_result_needs_memory(status)) {
ret = hv_result_to_errno(status);
break;
}
@@ -772,7 +772,7 @@ hv_call_connect_port(u64 port_partition_id, union hv_port_id port_id,
if (hv_result_success(status))
break;
- if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) {
+ if (!hv_result_needs_memory(status)) {
ret = hv_result_to_errno(status);
break;
}
@@ -843,7 +843,7 @@ static int hv_call_map_stats_page2(enum hv_stats_object_type type,
if (!ret)
break;
- if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) {
+ if (!hv_result_needs_memory(status)) {
hv_status_debug(status, "\n");
break;
}
@@ -878,7 +878,7 @@ static int hv_call_map_stats_page(enum hv_stats_object_type type,
pfn = output->map_location;
local_irq_restore(flags);
- if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY) {
+ if (!hv_result_needs_memory(status)) {
ret = hv_result_to_errno(status);
if (hv_result_success(status))
break;
diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index 6a6bf641b352..ee30bfa6bb2e 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -261,7 +261,7 @@ static int mshv_ioctl_passthru_hvcall(struct mshv_partition *partition,
if (hv_result_success(status))
break;
- if (hv_result(status) != HV_STATUS_INSUFFICIENT_MEMORY)
+ if (!hv_result_needs_memory(status))
ret = hv_result_to_errno(status);
else
ret = hv_call_deposit_pages(NUMA_NO_NODE,
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index ecedab554c80..452426d5b2ab 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -342,6 +342,8 @@ static inline bool hv_parent_partition(void)
{
return hv_root_partition() || hv_l1vh_partition();
}
+
+bool hv_result_needs_memory(u64 status);
int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages);
int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id);
int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags);
@@ -350,6 +352,7 @@ int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags);
static inline bool hv_root_partition(void) { return false; }
static inline bool hv_l1vh_partition(void) { return false; }
static inline bool hv_parent_partition(void) { return false; }
+static inline bool hv_result_needs_memory(u64 status) { return false; }
static inline int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages)
{
return -EOPNOTSUPP;
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Mon, 02 Feb 2026 17:58:57 +0000",
"thread_id": "177005514346.120041.5702271891856790910.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net.mbox.gz"
}
|
lkml
|
[PATCH v2 0/4] Improve Hyper-V memory deposit error handling
|
This series extends the MSHV driver to properly handle additional
memory-related error codes from the Microsoft Hypervisor by depositing
memory pages when needed.
Currently, when the hypervisor returns HV_STATUS_INSUFFICIENT_MEMORY
during partition creation, the driver calls hv_call_deposit_pages() to
provide the necessary memory. However, there are other memory-related
error codes that indicate the hypervisor needs additional memory
resources, but the driver does not attempt to deposit pages for these
cases.
This series introduces a dedicated helper function to identify all
memory-related error codes (HV_STATUS_INSUFFICIENT_MEMORY,
HV_STATUS_INSUFFICIENT_BUFFERS, HV_STATUS_INSUFFICIENT_DEVICE_DOMAINS, and
HV_STATUS_INSUFFICIENT_ROOT_MEMORY) and ensures the driver attempts to
deposit pages for all of them via a new hv_deposit_memory() helper.
With these changes, partition creation becomes more robust by handling
all scenarios where the hypervisor requires additional memory deposits.
v2:
- Rename hv_result_oom() into hv_result_needs_memory()
---
Stanislav Kinsburskii (4):
mshv: Introduce hv_result_needs_memory() helper function
mshv: Introduce hv_deposit_memory helper functions
mshv: Handle insufficient contiguous memory hypervisor status
mshv: Handle insufficient root memory hypervisor statuses
drivers/hv/hv_common.c | 3 ++
drivers/hv/hv_proc.c | 54 +++++++++++++++++++++++++++++++++++---
drivers/hv/mshv_root_hv_call.c | 45 +++++++++++++-------------------
drivers/hv/mshv_root_main.c | 5 +---
include/asm-generic/mshyperv.h | 13 +++++++++
include/hyperv/hvgdk_mini.h | 57 +++++++++++++++++++++-------------------
include/hyperv/hvhdk_mini.h | 2 +
7 files changed, 119 insertions(+), 60 deletions(-)
|
Introduce hv_deposit_memory_node() and hv_deposit_memory() helper
functions to handle memory deposition with proper error handling.
The new hv_deposit_memory_node() function takes the hypervisor status
as a parameter and validates it before depositing pages. It checks for
HV_STATUS_INSUFFICIENT_MEMORY specifically and returns an error for
unexpected status codes.
This is a precursor patch for supporting new out-of-memory error codes.
No functional changes intended.
Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
---
drivers/hv/hv_proc.c | 22 ++++++++++++++++++++--
drivers/hv/mshv_root_hv_call.c | 25 +++++++++----------------
drivers/hv/mshv_root_main.c | 3 +--
include/asm-generic/mshyperv.h | 10 ++++++++++
4 files changed, 40 insertions(+), 20 deletions(-)
diff --git a/drivers/hv/hv_proc.c b/drivers/hv/hv_proc.c
index e53204b9e05d..ffa25cd6e4e9 100644
--- a/drivers/hv/hv_proc.c
+++ b/drivers/hv/hv_proc.c
@@ -110,6 +110,23 @@ int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages)
}
EXPORT_SYMBOL_GPL(hv_call_deposit_pages);
+int hv_deposit_memory_node(int node, u64 partition_id,
+ u64 hv_status)
+{
+ u32 num_pages;
+
+ switch (hv_result(hv_status)) {
+ case HV_STATUS_INSUFFICIENT_MEMORY:
+ num_pages = 1;
+ break;
+ default:
+ hv_status_err(hv_status, "Unexpected!\n");
+ return -ENOMEM;
+ }
+ return hv_call_deposit_pages(node, partition_id, num_pages);
+}
+EXPORT_SYMBOL_GPL(hv_deposit_memory_node);
+
bool hv_result_needs_memory(u64 status)
{
switch (hv_result(status)) {
@@ -155,7 +172,8 @@ int hv_call_add_logical_proc(int node, u32 lp_index, u32 apic_id)
}
break;
}
- ret = hv_call_deposit_pages(node, hv_current_partition_id, 1);
+ ret = hv_deposit_memory_node(node, hv_current_partition_id,
+ status);
} while (!ret);
return ret;
@@ -197,7 +215,7 @@ int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags)
}
break;
}
- ret = hv_call_deposit_pages(node, partition_id, 1);
+ ret = hv_deposit_memory_node(node, partition_id, status);
} while (!ret);
diff --git a/drivers/hv/mshv_root_hv_call.c b/drivers/hv/mshv_root_hv_call.c
index 89afeeda21dd..174431cb5e0e 100644
--- a/drivers/hv/mshv_root_hv_call.c
+++ b/drivers/hv/mshv_root_hv_call.c
@@ -123,8 +123,7 @@ int hv_call_create_partition(u64 flags,
break;
}
local_irq_restore(irq_flags);
- ret = hv_call_deposit_pages(NUMA_NO_NODE,
- hv_current_partition_id, 1);
+ ret = hv_deposit_memory(hv_current_partition_id, status);
} while (!ret);
return ret;
@@ -151,7 +150,7 @@ int hv_call_initialize_partition(u64 partition_id)
ret = hv_result_to_errno(status);
break;
}
- ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id, 1);
+ ret = hv_deposit_memory(partition_id, status);
} while (!ret);
return ret;
@@ -465,8 +464,7 @@ int hv_call_get_vp_state(u32 vp_index, u64 partition_id,
}
local_irq_restore(flags);
- ret = hv_call_deposit_pages(NUMA_NO_NODE,
- partition_id, 1);
+ ret = hv_deposit_memory(partition_id, status);
} while (!ret);
return ret;
@@ -525,8 +523,7 @@ int hv_call_set_vp_state(u32 vp_index, u64 partition_id,
}
local_irq_restore(flags);
- ret = hv_call_deposit_pages(NUMA_NO_NODE,
- partition_id, 1);
+ ret = hv_deposit_memory(partition_id, status);
} while (!ret);
return ret;
@@ -573,7 +570,7 @@ static int hv_call_map_vp_state_page(u64 partition_id, u32 vp_index, u32 type,
local_irq_restore(flags);
- ret = hv_call_deposit_pages(NUMA_NO_NODE, partition_id, 1);
+ ret = hv_deposit_memory(partition_id, status);
} while (!ret);
return ret;
@@ -722,8 +719,7 @@ hv_call_create_port(u64 port_partition_id, union hv_port_id port_id,
ret = hv_result_to_errno(status);
break;
}
- ret = hv_call_deposit_pages(NUMA_NO_NODE, port_partition_id, 1);
-
+ ret = hv_deposit_memory(port_partition_id, status);
} while (!ret);
return ret;
@@ -776,8 +772,7 @@ hv_call_connect_port(u64 port_partition_id, union hv_port_id port_id,
ret = hv_result_to_errno(status);
break;
}
- ret = hv_call_deposit_pages(NUMA_NO_NODE,
- connection_partition_id, 1);
+ ret = hv_deposit_memory(connection_partition_id, status);
} while (!ret);
return ret;
@@ -848,8 +843,7 @@ static int hv_call_map_stats_page2(enum hv_stats_object_type type,
break;
}
- ret = hv_call_deposit_pages(NUMA_NO_NODE,
- hv_current_partition_id, 1);
+ ret = hv_deposit_memory(hv_current_partition_id, status);
} while (!ret);
return ret;
@@ -885,8 +879,7 @@ static int hv_call_map_stats_page(enum hv_stats_object_type type,
return ret;
}
- ret = hv_call_deposit_pages(NUMA_NO_NODE,
- hv_current_partition_id, 1);
+ ret = hv_deposit_memory(hv_current_partition_id, status);
if (ret)
return ret;
} while (!ret);
diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index ee30bfa6bb2e..dce255c94f9e 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -264,8 +264,7 @@ static int mshv_ioctl_passthru_hvcall(struct mshv_partition *partition,
if (!hv_result_needs_memory(status))
ret = hv_result_to_errno(status);
else
- ret = hv_call_deposit_pages(NUMA_NO_NODE,
- pt_id, 1);
+ ret = hv_deposit_memory(pt_id, status);
} while (!ret);
args.status = hv_result(status);
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index 452426d5b2ab..d37b68238c97 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -344,6 +344,7 @@ static inline bool hv_parent_partition(void)
}
bool hv_result_needs_memory(u64 status);
+int hv_deposit_memory_node(int node, u64 partition_id, u64 status);
int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages);
int hv_call_add_logical_proc(int node, u32 lp_index, u32 acpi_id);
int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u32 flags);
@@ -353,6 +354,10 @@ static inline bool hv_root_partition(void) { return false; }
static inline bool hv_l1vh_partition(void) { return false; }
static inline bool hv_parent_partition(void) { return false; }
static inline bool hv_result_needs_memory(u64 status) { return false; }
+static inline int hv_deposit_memory_node(int node, u64 partition_id, u64 status)
+{
+ return -EOPNOTSUPP;
+}
static inline int hv_call_deposit_pages(int node, u64 partition_id, u32 num_pages)
{
return -EOPNOTSUPP;
@@ -367,6 +372,11 @@ static inline int hv_call_create_vp(int node, u64 partition_id, u32 vp_index, u3
}
#endif /* CONFIG_MSHV_ROOT */
+static inline int hv_deposit_memory(u64 partition_id, u64 status)
+{
+ return hv_deposit_memory_node(NUMA_NO_NODE, partition_id, status);
+}
+
#if IS_ENABLED(CONFIG_HYPERV_VTL_MODE)
u8 __init get_vtl(void);
#else
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Mon, 02 Feb 2026 17:59:03 +0000",
"thread_id": "177005514346.120041.5702271891856790910.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net.mbox.gz"
}
|
lkml
|
[PATCH v2 0/4] Improve Hyper-V memory deposit error handling
|
This series extends the MSHV driver to properly handle additional
memory-related error codes from the Microsoft Hypervisor by depositing
memory pages when needed.
Currently, when the hypervisor returns HV_STATUS_INSUFFICIENT_MEMORY
during partition creation, the driver calls hv_call_deposit_pages() to
provide the necessary memory. However, there are other memory-related
error codes that indicate the hypervisor needs additional memory
resources, but the driver does not attempt to deposit pages for these
cases.
This series introduces a dedicated helper function to identify all
memory-related error codes (HV_STATUS_INSUFFICIENT_MEMORY,
HV_STATUS_INSUFFICIENT_BUFFERS, HV_STATUS_INSUFFICIENT_DEVICE_DOMAINS, and
HV_STATUS_INSUFFICIENT_ROOT_MEMORY) and ensures the driver attempts to
deposit pages for all of them via the new hv_deposit_memory() helper.
With these changes, partition creation becomes more robust by handling
all scenarios where the hypervisor requires additional memory deposits.
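As a rough illustration of the pattern the new helpers are aimed at (not
part of the series itself), a hypercall site ends up looking something like
the sketch below. hv_result_needs_memory(), hv_deposit_memory() and
hv_result_to_errno() are the names used by the patches; the surrounding
function, the HVCALL_CREATE_PORT control code and the input argument are
only placeholders for this example:

	/* Hypothetical caller: retry a hypercall, depositing memory on demand. */
	static int example_hv_call(u64 partition_id, void *input)
	{
		u64 status;
		int ret;

		do {
			status = hv_do_hypercall(HVCALL_CREATE_PORT, input, NULL);
			if (!hv_result_needs_memory(status)) {
				/* Success or a non-memory failure: map to errno and stop. */
				ret = hv_result_to_errno(status);
				break;
			}
			/* The hypervisor wants memory: deposit based on the status code. */
			ret = hv_deposit_memory(partition_id, status);
		} while (!ret);

		return ret;
	}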
v2:
- Rename hv_result_oom() into hv_result_needs_memory()
---
Stanislav Kinsburskii (4):
mshv: Introduce hv_result_needs_memory() helper function
mshv: Introduce hv_deposit_memory helper functions
mshv: Handle insufficient contiguous memory hypervisor status
mshv: Handle insufficient root memory hypervisor statuses
drivers/hv/hv_common.c | 3 ++
drivers/hv/hv_proc.c | 54 +++++++++++++++++++++++++++++++++++---
drivers/hv/mshv_root_hv_call.c | 45 +++++++++++++-------------------
drivers/hv/mshv_root_main.c | 5 +---
include/asm-generic/mshyperv.h | 13 +++++++++
include/hyperv/hvgdk_mini.h | 57 +++++++++++++++++++++-------------------
include/hyperv/hvhdk_mini.h | 2 +
7 files changed, 119 insertions(+), 60 deletions(-)
|
The HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY status indicates that the
hypervisor lacks sufficient contiguous memory for its internal allocations.
When this status is encountered, allocate and deposit
HV_MAX_CONTIGUOUS_ALLOCATION_PAGES contiguous pages to the hypervisor.
HV_MAX_CONTIGUOUS_ALLOCATION_PAGES is defined in the hypervisor headers; a
deposit of this size will always satisfy the hypervisor's requirements.
Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
---
drivers/hv/hv_common.c | 1 +
drivers/hv/hv_proc.c | 4 ++++
include/hyperv/hvgdk_mini.h | 1 +
include/hyperv/hvhdk_mini.h | 2 ++
4 files changed, 8 insertions(+)
diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
index 0a3ab7efed46..c7f63c9de503 100644
--- a/drivers/hv/hv_common.c
+++ b/drivers/hv/hv_common.c
@@ -791,6 +791,7 @@ static const struct hv_status_info hv_status_infos[] = {
_STATUS_INFO(HV_STATUS_UNKNOWN_PROPERTY, -EIO),
_STATUS_INFO(HV_STATUS_PROPERTY_VALUE_OUT_OF_RANGE, -EIO),
_STATUS_INFO(HV_STATUS_INSUFFICIENT_MEMORY, -ENOMEM),
+ _STATUS_INFO(HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY, -ENOMEM),
_STATUS_INFO(HV_STATUS_INVALID_PARTITION_ID, -EINVAL),
_STATUS_INFO(HV_STATUS_INVALID_VP_INDEX, -EINVAL),
_STATUS_INFO(HV_STATUS_NOT_FOUND, -EIO),
diff --git a/drivers/hv/hv_proc.c b/drivers/hv/hv_proc.c
index ffa25cd6e4e9..dfa27be66ff7 100644
--- a/drivers/hv/hv_proc.c
+++ b/drivers/hv/hv_proc.c
@@ -119,6 +119,9 @@ int hv_deposit_memory_node(int node, u64 partition_id,
case HV_STATUS_INSUFFICIENT_MEMORY:
num_pages = 1;
break;
+ case HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY:
+ num_pages = HV_MAX_CONTIGUOUS_ALLOCATION_PAGES;
+ break;
default:
hv_status_err(hv_status, "Unexpected!\n");
return -ENOMEM;
@@ -131,6 +134,7 @@ bool hv_result_needs_memory(u64 status)
{
switch (hv_result(status)) {
case HV_STATUS_INSUFFICIENT_MEMORY:
+ case HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY:
return true;
}
return false;
diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h
index 04b18d0e37af..70f22ef44948 100644
--- a/include/hyperv/hvgdk_mini.h
+++ b/include/hyperv/hvgdk_mini.h
@@ -38,6 +38,7 @@ struct hv_u128 {
#define HV_STATUS_INVALID_LP_INDEX 0x41
#define HV_STATUS_INVALID_REGISTER_VALUE 0x50
#define HV_STATUS_OPERATION_FAILED 0x71
+#define HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY 0x75
#define HV_STATUS_TIME_OUT 0x78
#define HV_STATUS_CALL_PENDING 0x79
#define HV_STATUS_VTL_ALREADY_ENABLED 0x86
diff --git a/include/hyperv/hvhdk_mini.h b/include/hyperv/hvhdk_mini.h
index c0300910808b..091c03e26046 100644
--- a/include/hyperv/hvhdk_mini.h
+++ b/include/hyperv/hvhdk_mini.h
@@ -7,6 +7,8 @@
#include "hvgdk_mini.h"
+#define HV_MAX_CONTIGUOUS_ALLOCATION_PAGES 8
+
/*
* Doorbell connection_info flags.
*/
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Mon, 02 Feb 2026 17:59:09 +0000",
"thread_id": "177005514346.120041.5702271891856790910.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net.mbox.gz"
}
|
lkml
|
[PATCH v2 0/4] Improve Hyper-V memory deposit error handling
|
This series extends the MSHV driver to properly handle additional
memory-related error codes from the Microsoft Hypervisor by depositing
memory pages when needed.
Currently, when the hypervisor returns HV_STATUS_INSUFFICIENT_MEMORY
during partition creation, the driver calls hv_call_deposit_pages() to
provide the necessary memory. However, there are other memory-related
error codes that indicate the hypervisor needs additional memory
resources, but the driver does not attempt to deposit pages for these
cases.
This series introduces a dedicated helper function to identify all
memory-related error codes (HV_STATUS_INSUFFICIENT_MEMORY,
HV_STATUS_INSUFFICIENT_BUFFERS, HV_STATUS_INSUFFICIENT_DEVICE_DOMAINS, and
HV_STATUS_INSUFFICIENT_ROOT_MEMORY) and ensures the driver attempts to
deposit pages for all of them via the new hv_deposit_memory() helper.
With these changes, partition creation becomes more robust by handling
all scenarios where the hypervisor requires additional memory deposits.
v2:
- Rename hv_result_oom() into hv_result_needs_memory()
---
Stanislav Kinsburskii (4):
mshv: Introduce hv_result_needs_memory() helper function
mshv: Introduce hv_deposit_memory helper functions
mshv: Handle insufficient contiguous memory hypervisor status
mshv: Handle insufficient root memory hypervisor statuses
drivers/hv/hv_common.c | 3 ++
drivers/hv/hv_proc.c | 54 +++++++++++++++++++++++++++++++++++---
drivers/hv/mshv_root_hv_call.c | 45 +++++++++++++-------------------
drivers/hv/mshv_root_main.c | 5 +---
include/asm-generic/mshyperv.h | 13 +++++++++
include/hyperv/hvgdk_mini.h | 57 +++++++++++++++++++++-------------------
include/hyperv/hvhdk_mini.h | 2 +
7 files changed, 119 insertions(+), 60 deletions(-)
|
When creating guest partition objects, the hypervisor may fail to
allocate root partition pages and return an insufficient memory status.
In this case, deposit memory using the root partition ID instead.
Note: This error should never occur in a guest or L1VH partition context.
Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
---
drivers/hv/hv_common.c | 2 +
drivers/hv/hv_proc.c | 14 ++++++++++
include/hyperv/hvgdk_mini.h | 58 ++++++++++++++++++++++---------------------
3 files changed, 46 insertions(+), 28 deletions(-)
diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
index c7f63c9de503..cab0d1733607 100644
--- a/drivers/hv/hv_common.c
+++ b/drivers/hv/hv_common.c
@@ -792,6 +792,8 @@ static const struct hv_status_info hv_status_infos[] = {
_STATUS_INFO(HV_STATUS_PROPERTY_VALUE_OUT_OF_RANGE, -EIO),
_STATUS_INFO(HV_STATUS_INSUFFICIENT_MEMORY, -ENOMEM),
_STATUS_INFO(HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY, -ENOMEM),
+ _STATUS_INFO(HV_STATUS_INSUFFICIENT_ROOT_MEMORY, -ENOMEM),
+ _STATUS_INFO(HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY, -ENOMEM),
_STATUS_INFO(HV_STATUS_INVALID_PARTITION_ID, -EINVAL),
_STATUS_INFO(HV_STATUS_INVALID_VP_INDEX, -EINVAL),
_STATUS_INFO(HV_STATUS_NOT_FOUND, -EIO),
diff --git a/drivers/hv/hv_proc.c b/drivers/hv/hv_proc.c
index dfa27be66ff7..935129e0b39d 100644
--- a/drivers/hv/hv_proc.c
+++ b/drivers/hv/hv_proc.c
@@ -122,6 +122,18 @@ int hv_deposit_memory_node(int node, u64 partition_id,
case HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY:
num_pages = HV_MAX_CONTIGUOUS_ALLOCATION_PAGES;
break;
+
+ case HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY:
+ num_pages = HV_MAX_CONTIGUOUS_ALLOCATION_PAGES;
+ fallthrough;
+ case HV_STATUS_INSUFFICIENT_ROOT_MEMORY:
+ if (!hv_root_partition()) {
+ hv_status_err(hv_status, "Unexpected root memory deposit\n");
+ return -ENOMEM;
+ }
+ partition_id = HV_PARTITION_ID_SELF;
+ break;
+
default:
hv_status_err(hv_status, "Unexpected!\n");
return -ENOMEM;
@@ -135,6 +147,8 @@ bool hv_result_needs_memory(u64 status)
switch (hv_result(status)) {
case HV_STATUS_INSUFFICIENT_MEMORY:
case HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY:
+ case HV_STATUS_INSUFFICIENT_ROOT_MEMORY:
+ case HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY:
return true;
}
return false;
diff --git a/include/hyperv/hvgdk_mini.h b/include/hyperv/hvgdk_mini.h
index 70f22ef44948..5b74a857ef43 100644
--- a/include/hyperv/hvgdk_mini.h
+++ b/include/hyperv/hvgdk_mini.h
@@ -14,34 +14,36 @@ struct hv_u128 {
} __packed;
/* NOTE: when adding below, update hv_result_to_string() */
-#define HV_STATUS_SUCCESS 0x0
-#define HV_STATUS_INVALID_HYPERCALL_CODE 0x2
-#define HV_STATUS_INVALID_HYPERCALL_INPUT 0x3
-#define HV_STATUS_INVALID_ALIGNMENT 0x4
-#define HV_STATUS_INVALID_PARAMETER 0x5
-#define HV_STATUS_ACCESS_DENIED 0x6
-#define HV_STATUS_INVALID_PARTITION_STATE 0x7
-#define HV_STATUS_OPERATION_DENIED 0x8
-#define HV_STATUS_UNKNOWN_PROPERTY 0x9
-#define HV_STATUS_PROPERTY_VALUE_OUT_OF_RANGE 0xA
-#define HV_STATUS_INSUFFICIENT_MEMORY 0xB
-#define HV_STATUS_INVALID_PARTITION_ID 0xD
-#define HV_STATUS_INVALID_VP_INDEX 0xE
-#define HV_STATUS_NOT_FOUND 0x10
-#define HV_STATUS_INVALID_PORT_ID 0x11
-#define HV_STATUS_INVALID_CONNECTION_ID 0x12
-#define HV_STATUS_INSUFFICIENT_BUFFERS 0x13
-#define HV_STATUS_NOT_ACKNOWLEDGED 0x14
-#define HV_STATUS_INVALID_VP_STATE 0x15
-#define HV_STATUS_NO_RESOURCES 0x1D
-#define HV_STATUS_PROCESSOR_FEATURE_NOT_SUPPORTED 0x20
-#define HV_STATUS_INVALID_LP_INDEX 0x41
-#define HV_STATUS_INVALID_REGISTER_VALUE 0x50
-#define HV_STATUS_OPERATION_FAILED 0x71
-#define HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY 0x75
-#define HV_STATUS_TIME_OUT 0x78
-#define HV_STATUS_CALL_PENDING 0x79
-#define HV_STATUS_VTL_ALREADY_ENABLED 0x86
+#define HV_STATUS_SUCCESS 0x0
+#define HV_STATUS_INVALID_HYPERCALL_CODE 0x2
+#define HV_STATUS_INVALID_HYPERCALL_INPUT 0x3
+#define HV_STATUS_INVALID_ALIGNMENT 0x4
+#define HV_STATUS_INVALID_PARAMETER 0x5
+#define HV_STATUS_ACCESS_DENIED 0x6
+#define HV_STATUS_INVALID_PARTITION_STATE 0x7
+#define HV_STATUS_OPERATION_DENIED 0x8
+#define HV_STATUS_UNKNOWN_PROPERTY 0x9
+#define HV_STATUS_PROPERTY_VALUE_OUT_OF_RANGE 0xA
+#define HV_STATUS_INSUFFICIENT_MEMORY 0xB
+#define HV_STATUS_INVALID_PARTITION_ID 0xD
+#define HV_STATUS_INVALID_VP_INDEX 0xE
+#define HV_STATUS_NOT_FOUND 0x10
+#define HV_STATUS_INVALID_PORT_ID 0x11
+#define HV_STATUS_INVALID_CONNECTION_ID 0x12
+#define HV_STATUS_INSUFFICIENT_BUFFERS 0x13
+#define HV_STATUS_NOT_ACKNOWLEDGED 0x14
+#define HV_STATUS_INVALID_VP_STATE 0x15
+#define HV_STATUS_NO_RESOURCES 0x1D
+#define HV_STATUS_PROCESSOR_FEATURE_NOT_SUPPORTED 0x20
+#define HV_STATUS_INVALID_LP_INDEX 0x41
+#define HV_STATUS_INVALID_REGISTER_VALUE 0x50
+#define HV_STATUS_OPERATION_FAILED 0x71
+#define HV_STATUS_INSUFFICIENT_ROOT_MEMORY 0x73
+#define HV_STATUS_INSUFFICIENT_CONTIGUOUS_MEMORY 0x75
+#define HV_STATUS_TIME_OUT 0x78
+#define HV_STATUS_CALL_PENDING 0x79
+#define HV_STATUS_INSUFFICIENT_CONTIGUOUS_ROOT_MEMORY 0x83
+#define HV_STATUS_VTL_ALREADY_ENABLED 0x86
/*
* The Hyper-V TimeRefCount register and the TSC
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Mon, 02 Feb 2026 17:59:14 +0000",
"thread_id": "177005514346.120041.5702271891856790910.stgit@skinsburskii-cloud-desktop.internal.cloudapp.net.mbox.gz"
}
|
lkml
|
[PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
|
Hi Lukas, Ignat,
[Note this is based on Eric Biggers' libcrypto-next branch].
These patches add ML-DSA module signing:
(1) Add a crypto_sig interface for ML-DSA, verification only.
(2) Generate a SHA256 hash of the X.509 TBSCertificate and check that against
the blacklist. Direct-sign ML-DSA doesn't generate an easily
accessible hash. Note that this changes behaviour as we no longer use
whatever hash is specified in the certificate for this.
(3) Rename the public_key_signature struct's "digest" and "digest_size"
members to "m" and "m_size" to reflect that it's not necessarily a
digest, but it is an input to the public key algorithm.
(4) Modify PKCS#7 support to allow kernel module signatures to carry
authenticatedAttributes as OpenSSL refuses to let them be opted out of
for ML-DSA (CMS_NOATTR). This adds an extra digest calculation to the
process.
Modify PKCS#7 to pass the authenticatedAttributes directly to the
ML-DSA algorithm rather than passing over a digest as is done with RSA
as ML-DSA wants to do its own hashing and will add other stuff into
the hash. We could use hashML-DSA or an external mu instead, but they
aren't standardised for CMS yet.
(5) Add support to the PKCS#7 and X.509 parsers for ML-DSA.
(6) Modify sign-file to handle OpenSSL not permitting CMS_NOATTR with
ML-DSA and add ML-DSA to the choice of algorithm with which to sign
modules. Note that this might need some more 'select' lines in the
Kconfig to select the lib stuff as well.
(7) Add a config option to allow authenticatedAttributes to be used with
ML-DSA for module signing. Ordinarily, authenticatedAttributes are
not permitted for this purpose, however direct signing with ML-DSA
will not be supported by OpenSSL until v4 is released.
The patches can also be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=keys-pqc
David
Changes
=======
ver #16)
- Make the selection of ML-DSA for module signing when configuring
contingent on openssl saying it supports ML-DSA (fix from Arnd
Bergmann).
- Make ML-DSA-related bits of sign-file contingent on openssl >= 3.0.0.
ver #15)
- Undo a removed blank line to simplify the X.509 patch.
- Split the rename of ->digest to ->m into its own patch.
- In pkcs7_digest(), always copy the signedAttrs and modify rather than
passing the replacement tag byte in a separate shash update call to the
rest of the data. That way the ->m buffer is very likely to be
optimally aligned for the crypto.
- Only allow authenticatedAttributes with ML-DSA for module signing and
only if permission is given in the kernel config.
ver #14)
- public_key:
- Rename public_key::digest to public_key::m.
- X.509:
- Independently calculate the SHA256 hash for the blacklist check as
an ML-DSA-signed X.509 cert doesn't generate a digest we can use.
- Point public_key::m at the TBS data for ML-DSA.
- PKCS#7:
- Allocate a big enough digest buffer rather than reallocating in order
to store the authattrs/signedattrs instead.
- Merge the two patches that add direct signing support.
- ML-DSA:
- Use bool instead of u8.
- Remove references to SHAKE in Kconfig and mention OpenSSL requirements
there.
- Limit ML-DSA with an intermediate hash (e.g. signedAttrs) to using
SHA512 only.
- Don't select CRYPTO_LIB_SHA3 for CRYPTO_MLDSA.
- RSASSA-PSS:
- Allow use with SHA256 and SHA384.
- Fix calculation of emBits to be number of bits in the RSA modulus 'n'.
- Use strncmp() not memcmp() to avoid reading beyond end of string.
- Use correct destructor in rsassa_params_parse().
- Drop this algo for the moment.
- Drop the pefile_context::digest_free for now - it's only set to true and
is unrelated to public_key::digest_free.
ver #13)
- Allow a zero-length salt in RSASSA-PSS.
- Don't reject ECDSA/ECRDSA with SHA256 and SHA384 otherwise the FIPS
selftest panics when used.
- Add a FIPS test for RSASSA-PSS (from NIST's SigVerPSS_186-3.rsp).
- Add a FIPS test for ML-DSA (from NIST's FIPS204 JSON set).
ver #12)
- Rebased on Eric's libcrypto-next branch.
- Delete references to Dilithium (ML-DSA derived from this).
- Made sign-file supply CMS_NOATTR for ML-DSA if openssl >= v4.
- Made it possible to do ML-DSA over the data without signedAttrs.
- Made RSASSA-PSS info parser use strsep() and match_token().
- Cleaned the RSASSA-PSS param parsing.
- Added limitation on what hashes can be used with what algos.
- Moved __free()-marked variables to the point of setting.
ver #11)
- Rebased on Eric's libcrypto-next branch.
- Added RSASSA-PSS support patches.
ver #10)
- Replaced the Leancrypto ML-DSA implementation with Eric's.
- Fixed Eric's implementation to have MODULE_* info.
- Added a patch to drive Eric's ML-DSA implementation from crypto_sig.
- Removed SHAKE256 from the list of available module hash algorithms.
- Changed some more ML_DSA to MLDSA in config symbols.
ver #9)
- ML-DSA changes:
- Separate output into four modules (1 common, 3 strength-specific).
- Solves Kconfig issue with needing to select at least one strength.
- Separate the strength-specific crypto-lib APIs.
- This is now generated by preprocessor-templating.
- Remove the multiplexor code.
- Multiplex the crypto-lib APIs by C type.
- Fix the PKCS#7/X.509 code to have the correct algo names.
ver #8)
- Moved the ML-DSA code to lib/crypto/mldsa/.
- Renamed some bits from ml-dsa to mldsa.
- Created a simplified API and placed that in include/crypto/mldsa.h.
- Made the testing code use the simplified API.
- Fixed a warning about implicitly casting between uint16_t and __le16.
ver #7)
- Rebased on Eric's tree as that now contains all the necessary SHA-3
infrastructure and drop the SHA-3 patches from here.
- Added a minimal patch to provide shake256 support for crypto_sig.
- Got rid of the memory allocation wrappers.
- Removed the ML-DSA keypair generation code and the signing code, leaving
only the signature verification code.
- Removed the secret key handling code.
- Removed the secret keys from the kunit tests and the signing testing.
- Removed some unused bits from the ML-DSA code.
- Downgraded the kdoc comments to ordinary comments, but keep the markup
for easier comparison to Leancrypto.
ver #6)
- Added a patch to make the jitterentropy RNG use lib/sha3.
- Added back the crypto/sha3_generic changes.
- Added ML-DSA implementation (still needs more cleanup).
- Added kunit test for ML-DSA.
- Modified PKCS#7 to accommodate ML-DSA.
- Modified PKCS#7 and X.509 to allow ML-DSA to be specified and used.
- Modified sign-file to not use CMS_NOATTR with ML-DSA.
- Allowed SHA3 and SHAKE* algorithms for module signing default.
- Allowed ML-DSA-{44,65,87} to be selected as the module signing default.
ver #5)
- Fix gen-hash-testvecs.py to correctly handle algo names that contain a
dash.
- Fix gen-hash-testvecs.py to not generate HMAC for SHA3-* or SHAKE* as
these don't currently have HMAC variants implemented.
- Fix algo names to be correct.
- Fix kunit module description as it now tests all SHA3 variants.
ver #4)
- Fix a couple of arm64 build problems.
- Doc fixes:
- Fix the description of the algorithm to be closer to the NIST spec's
terminology.
- Don't talk of finalising the context for XOFs.
- Don't say "Return: None".
- Declare the "Context" to be "Any context" and make no mention of the
fact that it might use the FPU.
- Change "initialise" to "initialize".
- Don't warn that the context is relatively large for stack use.
- Use size_t for size parameters/variables.
- Make the module_exit unconditional.
- Dropped the crypto/ dir-affecting patches for the moment.
ver #3)
- Renamed conflicting arm64 functions.
- Made a separate wrapper API for each algorithm in the family.
- Removed sha3_init(), sha3_reinit() and sha3_final().
- Removed sha3_ctx::digest_size.
- Renamed sha3_ctx::partial to sha3_ctx::absorb_offset.
- Refer to the output of SHAKE* as "output" not "digest".
- Moved the Iota transform into the one-round function.
- Made sha3_update() warn if called after sha3_squeeze().
- Simplified the module-load test to not do update after squeeze.
- Added Return: and Context: kdoc statements and expanded the kdoc
headers.
- Added an API description document.
- Overhauled the kunit tests.
- Only have one kunit test.
- Only call the general hash tester on one algo.
- Add separate simple cursory checks for the other algos.
- Add resqueezing tests.
- Add some NIST example tests.
- Changed crypto/sha3_generic to use this
- Added SHAKE128/256 to crypto/sha3_generic and crypto/testmgr
- Folded struct sha3_state into struct sha3_ctx.
ver #2)
- Simplify the endianness handling.
- Rename sha3_final() to sha3_squeeze() and don't clear the context at the
end as it's permitted to continue calling sha3_final() to extract
continuations of the digest (needed by ML-DSA).
- Don't reapply the end marker to the hash state in continuation
sha3_squeeze() unless sha3_update() gets called again (needed by
ML-DSA).
- Give sha3_squeeze() the amount of digest to produce as a parameter
rather than using ctx->digest_size and don't return the amount digested.
- Reimplement sha3_final() as a wrapper around sha3_squeeze() that
extracts ctx->digest_size amount of digest and then zeroes out the
context. The latter is necessary to avoid upsetting
hash-test-template.h.
- Provide a sha3_reinit() function to clear the state, but to leave the
parameters that indicate the hash properties unaffected, allowing for
reuse.
- Provide a sha3_set_digestsize() function to change the size of the
digest to be extracted by sha3_final(). sha3_squeeze() takes a
parameter for this instead.
- Don't pass the digest size as a parameter to shake128/256_init() but
rather default to 128/256 bits as per the function name.
- Provide a sha3_clear() function to zero out the context.
David Howells (7):
crypto: Add ML-DSA crypto_sig support
x509: Separately calculate sha256 for blacklist
pkcs7, x509: Rename ->digest to ->m
pkcs7: Allow the signing algo to do whatever digestion it wants itself
pkcs7, x509: Add ML-DSA support
modsign: Enable ML-DSA module signing
pkcs7: Allow authenticatedAttributes for ML-DSA
Documentation/admin-guide/module-signing.rst | 16 +-
certs/Kconfig | 40 ++++
certs/Makefile | 3 +
crypto/Kconfig | 9 +
crypto/Makefile | 2 +
crypto/asymmetric_keys/Kconfig | 11 +
crypto/asymmetric_keys/asymmetric_type.c | 4 +-
crypto/asymmetric_keys/pkcs7_parser.c | 36 +++-
crypto/asymmetric_keys/pkcs7_parser.h | 3 +
crypto/asymmetric_keys/pkcs7_verify.c | 78 ++++---
crypto/asymmetric_keys/public_key.c | 13 +-
crypto/asymmetric_keys/signature.c | 3 +-
crypto/asymmetric_keys/x509_cert_parser.c | 27 ++-
crypto/asymmetric_keys/x509_parser.h | 2 +
crypto/asymmetric_keys/x509_public_key.c | 42 ++--
crypto/mldsa.c | 201 +++++++++++++++++++
include/crypto/public_key.h | 6 +-
include/linux/oid_registry.h | 5 +
scripts/sign-file.c | 39 +++-
security/integrity/digsig_asymmetric.c | 4 +-
20 files changed, 473 insertions(+), 71 deletions(-)
create mode 100644 crypto/mldsa.c
|
Add verify-only public key crypto support for ML-DSA so that the
X.509/PKCS#7 signature verification code, as used by module signing,
amongst other things, can make use of it through the common crypto_sig API.
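As an aside (not part of this patch), a kernel caller could exercise the
new algorithms through the generic crypto_sig API roughly as in the sketch
below. The "mldsa65" string is one of the cra_names this patch registers;
the helper itself, and the assumption that the raw public key, signature
and message are already in hand, are purely illustrative:

	#include <linux/err.h>
	#include <crypto/sig.h>

	/* Hypothetical helper: verify an ML-DSA-65 signature over msg. */
	static int example_mldsa65_verify(const void *pk, unsigned int pk_len,
					  const void *sig, unsigned int sig_len,
					  const void *msg, unsigned int msg_len)
	{
		struct crypto_sig *tfm;
		int ret;

		tfm = crypto_alloc_sig("mldsa65", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		ret = crypto_sig_set_pubkey(tfm, pk, pk_len);
		if (!ret)
			ret = crypto_sig_verify(tfm, sig, sig_len, msg, msg_len);

		crypto_free_sig(tfm);
		return ret;
	}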
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
cc: Eric Biggers <ebiggers@kernel.org>
cc: Lukas Wunner <lukas@wunner.de>
cc: Ignat Korchagin <ignat@cloudflare.com>
cc: Stephan Mueller <smueller@chronox.de>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: keyrings@vger.kernel.org
cc: linux-crypto@vger.kernel.org
---
crypto/Kconfig | 9 +++
crypto/Makefile | 2 +
crypto/mldsa.c | 201 ++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 212 insertions(+)
create mode 100644 crypto/mldsa.c
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 12a87f7cf150..a210575fa5e0 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -344,6 +344,15 @@ config CRYPTO_ECRDSA
One of the Russian cryptographic standard algorithms (called GOST
algorithms). Only signature verification is implemented.
+config CRYPTO_MLDSA
+ tristate "ML-DSA (Module-Lattice-Based Digital Signature Algorithm)"
+ select CRYPTO_SIG
+ select CRYPTO_LIB_MLDSA
+ help
+ ML-DSA (Module-Lattice-Based Digital Signature Algorithm) (FIPS-204).
+
+ Only signature verification is implemented.
+
endmenu
menu "Block ciphers"
diff --git a/crypto/Makefile b/crypto/Makefile
index 23d3db7be425..267d5403045b 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -60,6 +60,8 @@ ecdsa_generic-y += ecdsa-p1363.o
ecdsa_generic-y += ecdsasignature.asn1.o
obj-$(CONFIG_CRYPTO_ECDSA) += ecdsa_generic.o
+obj-$(CONFIG_CRYPTO_MLDSA) += mldsa.o
+
crypto_acompress-y := acompress.o
crypto_acompress-y += scompress.o
obj-$(CONFIG_CRYPTO_ACOMP2) += crypto_acompress.o
diff --git a/crypto/mldsa.c b/crypto/mldsa.c
new file mode 100644
index 000000000000..d8de082cc67a
--- /dev/null
+++ b/crypto/mldsa.c
@@ -0,0 +1,201 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * crypto_sig wrapper around ML-DSA library.
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <crypto/internal/sig.h>
+#include <crypto/mldsa.h>
+
+struct crypto_mldsa_ctx {
+ u8 pk[MAX(MAX(MLDSA44_PUBLIC_KEY_SIZE,
+ MLDSA65_PUBLIC_KEY_SIZE),
+ MLDSA87_PUBLIC_KEY_SIZE)];
+ unsigned int pk_len;
+ enum mldsa_alg strength;
+ bool key_set;
+};
+
+static int crypto_mldsa_sign(struct crypto_sig *tfm,
+ const void *msg, unsigned int msg_len,
+ void *sig, unsigned int sig_len)
+{
+ return -EOPNOTSUPP;
+}
+
+static int crypto_mldsa_verify(struct crypto_sig *tfm,
+ const void *sig, unsigned int sig_len,
+ const void *msg, unsigned int msg_len)
+{
+ const struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+
+ if (unlikely(!ctx->key_set))
+ return -EINVAL;
+
+ return mldsa_verify(ctx->strength, sig, sig_len, msg, msg_len,
+ ctx->pk, ctx->pk_len);
+}
+
+static unsigned int crypto_mldsa_key_size(struct crypto_sig *tfm)
+{
+ struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+
+ switch (ctx->strength) {
+ case MLDSA44:
+ return MLDSA44_PUBLIC_KEY_SIZE;
+ case MLDSA65:
+ return MLDSA65_PUBLIC_KEY_SIZE;
+ case MLDSA87:
+ return MLDSA87_PUBLIC_KEY_SIZE;
+ default:
+ WARN_ON_ONCE(1);
+ return 0;
+ }
+}
+
+static int crypto_mldsa_set_pub_key(struct crypto_sig *tfm,
+ const void *key, unsigned int keylen)
+{
+ struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+ unsigned int expected_len = crypto_mldsa_key_size(tfm);
+
+ if (keylen != expected_len)
+ return -EINVAL;
+
+ ctx->pk_len = keylen;
+ memcpy(ctx->pk, key, keylen);
+ ctx->key_set = true;
+ return 0;
+}
+
+static int crypto_mldsa_set_priv_key(struct crypto_sig *tfm,
+ const void *key, unsigned int keylen)
+{
+ return -EOPNOTSUPP;
+}
+
+static unsigned int crypto_mldsa_max_size(struct crypto_sig *tfm)
+{
+ struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+
+ switch (ctx->strength) {
+ case MLDSA44:
+ return MLDSA44_SIGNATURE_SIZE;
+ case MLDSA65:
+ return MLDSA65_SIGNATURE_SIZE;
+ case MLDSA87:
+ return MLDSA87_SIGNATURE_SIZE;
+ default:
+ WARN_ON_ONCE(1);
+ return 0;
+ }
+}
+
+static int crypto_mldsa44_alg_init(struct crypto_sig *tfm)
+{
+ struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+
+ ctx->strength = MLDSA44;
+ ctx->key_set = false;
+ return 0;
+}
+
+static int crypto_mldsa65_alg_init(struct crypto_sig *tfm)
+{
+ struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+
+ ctx->strength = MLDSA65;
+ ctx->key_set = false;
+ return 0;
+}
+
+static int crypto_mldsa87_alg_init(struct crypto_sig *tfm)
+{
+ struct crypto_mldsa_ctx *ctx = crypto_sig_ctx(tfm);
+
+ ctx->strength = MLDSA87;
+ ctx->key_set = false;
+ return 0;
+}
+
+static void crypto_mldsa_alg_exit(struct crypto_sig *tfm)
+{
+}
+
+static struct sig_alg crypto_mldsa_algs[] = {
+ {
+ .sign = crypto_mldsa_sign,
+ .verify = crypto_mldsa_verify,
+ .set_pub_key = crypto_mldsa_set_pub_key,
+ .set_priv_key = crypto_mldsa_set_priv_key,
+ .key_size = crypto_mldsa_key_size,
+ .max_size = crypto_mldsa_max_size,
+ .init = crypto_mldsa44_alg_init,
+ .exit = crypto_mldsa_alg_exit,
+ .base.cra_name = "mldsa44",
+ .base.cra_driver_name = "mldsa44-lib",
+ .base.cra_ctxsize = sizeof(struct crypto_mldsa_ctx),
+ .base.cra_module = THIS_MODULE,
+ .base.cra_priority = 5000,
+ }, {
+ .sign = crypto_mldsa_sign,
+ .verify = crypto_mldsa_verify,
+ .set_pub_key = crypto_mldsa_set_pub_key,
+ .set_priv_key = crypto_mldsa_set_priv_key,
+ .key_size = crypto_mldsa_key_size,
+ .max_size = crypto_mldsa_max_size,
+ .init = crypto_mldsa65_alg_init,
+ .exit = crypto_mldsa_alg_exit,
+ .base.cra_name = "mldsa65",
+ .base.cra_driver_name = "mldsa65-lib",
+ .base.cra_ctxsize = sizeof(struct crypto_mldsa_ctx),
+ .base.cra_module = THIS_MODULE,
+ .base.cra_priority = 5000,
+ }, {
+ .sign = crypto_mldsa_sign,
+ .verify = crypto_mldsa_verify,
+ .set_pub_key = crypto_mldsa_set_pub_key,
+ .set_priv_key = crypto_mldsa_set_priv_key,
+ .key_size = crypto_mldsa_key_size,
+ .max_size = crypto_mldsa_max_size,
+ .init = crypto_mldsa87_alg_init,
+ .exit = crypto_mldsa_alg_exit,
+ .base.cra_name = "mldsa87",
+ .base.cra_driver_name = "mldsa87-lib",
+ .base.cra_ctxsize = sizeof(struct crypto_mldsa_ctx),
+ .base.cra_module = THIS_MODULE,
+ .base.cra_priority = 5000,
+ },
+};
+
+static int __init mldsa_init(void)
+{
+ int ret, i;
+
+ for (i = 0; i < ARRAY_SIZE(crypto_mldsa_algs); i++) {
+ ret = crypto_register_sig(&crypto_mldsa_algs[i]);
+ if (ret < 0)
+ goto error;
+ }
+ return 0;
+
+error:
+ pr_err("Failed to register (%d)\n", ret);
+ for (i--; i >= 0; i--)
+ crypto_unregister_sig(&crypto_mldsa_algs[i]);
+ return ret;
+}
+module_init(mldsa_init);
+
+static void mldsa_exit(void)
+{
+ for (int i = 0; i < ARRAY_SIZE(crypto_mldsa_algs); i++)
+ crypto_unregister_sig(&crypto_mldsa_algs[i]);
+}
+module_exit(mldsa_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Crypto API support for ML-DSA signature verification");
+MODULE_ALIAS_CRYPTO("mldsa44");
+MODULE_ALIAS_CRYPTO("mldsa65");
+MODULE_ALIAS_CRYPTO("mldsa87");
|
{
"author": "David Howells <dhowells@redhat.com>",
"date": "Mon, 2 Feb 2026 17:02:06 +0000",
"thread_id": "20260202170216.2467036-2-dhowells@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
|
Hi Lukas, Ignat,
[Note this is based on Eric Biggers' libcrypto-next branch].
These patches add ML-DSA module signing:
(1) Add a crypto_sig interface for ML-DSA, verification only.
(2) Generate a SHA256 hash of the X.509 TBSCertificate and check that against
the blacklist. Direct-sign ML-DSA doesn't generate an easily
accessible hash. Note that this changes behaviour as we no longer use
whatever hash is specified in the certificate for this.
(3) Rename the public_key_signature struct's "digest" and "digest_size"
members to "m" and "m_size" to reflect that it's not necessarily a
digest, but it is an input to the public key algorithm.
(4) Modify PKCS#7 support to allow kernel module signatures to carry
authenticatedAttributes as OpenSSL refuses to let them be opted out of
for ML-DSA (CMS_NOATTR). This adds an extra digest calculation to the
process.
Modify PKCS#7 to pass the authenticatedAttributes directly to the
ML-DSA algorithm rather than passing over a digest as is done with RSA
as ML-DSA wants to do its own hashing and will add other stuff into
the hash. We could use hashML-DSA or an external mu instead, but they
aren't standardised for CMS yet.
(5) Add support to the PKCS#7 and X.509 parsers for ML-DSA.
(6) Modify sign-file to handle OpenSSL not permitting CMS_NOATTR with
ML-DSA and add ML-DSA to the choice of algorithm with which to sign
modules. Note that this might need some more 'select' lines in the
Kconfig to select the lib stuff as well.
(7) Add a config option to allow authenticatedAttributes to be used with
ML-DSA for module signing. Ordinarily, authenticatedAttributes are
not permitted for this purpose, however direct signing with ML-DSA
will not be supported by OpenSSL until v4 is released.
The patches can also be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=keys-pqc
David
Changes
=======
ver #16)
- Make the selection of ML-DSA for module signing when configuring
contingent on openssl saying it supports ML-DSA (fix from Arnd
Bergmann).
- Make ML-DSA-related bits of sign-file contingent on openssl >= 3.0.0.
ver #15)
- Undo a removed blank line to simplify the X.509 patch.
- Split the rename of ->digest to ->m into its own patch.
- In pkcs7_digest(), always copy the signedAttrs and modify rather than
passing the replacement tag byte in a separate shash update call to the
rest of the data. That way the ->m buffer is very likely to be
optimally aligned for the crypto.
- Only allow authenticatedAttributes with ML-DSA for module signing and
only if permission is given in the kernel config.
ver #14)
- public_key:
- Rename public_key::digest to public_key::m.
- X.509:
- Independently calculate the SHA256 hash for the blacklist check as
an ML-DSA-signed X.509 cert doesn't generate a digest we can use.
- Point public_key::m at the TBS data for ML-DSA.
- PKCS#7:
- Allocate a big enough digest buffer rather than reallocating in order
to store the authattrs/signedattrs instead.
- Merge the two patches that add direct signing support.
- ML-DSA:
- Use bool instead of u8.
- Remove references to SHAKE in Kconfig and mention OpenSSL requirements
there.
- Limit ML-DSA with an intermediate hash (e.g. signedAttrs) to using
SHA512 only.
- Don't select CRYPTO_LIB_SHA3 for CRYPTO_MLDSA.
- RSASSA-PSS:
- Allow use with SHA256 and SHA384.
- Fix calculation of emBits to be number of bits in the RSA modulus 'n'.
- Use strncmp() not memcmp() to avoid reading beyond end of string.
- Use correct destructor in rsassa_params_parse().
- Drop this algo for the moment.
- Drop the pefile_context::digest_free for now - it's only set to true and
is unrelated to public_key::digest_free.
ver #13)
- Allow a zero-length salt in RSASSA-PSS.
- Don't reject ECDSA/ECRDSA with SHA256 and SHA384 otherwise the FIPS
selftest panics when used.
- Add a FIPS test for RSASSA-PSS (from NIST's SigVerPSS_186-3.rsp).
- Add a FIPS test for ML-DSA (from NIST's FIPS204 JSON set).
ver #12)
- Rebased on Eric's libcrypto-next branch.
- Delete references to Dilithium (ML-DSA derived from this).
- Made sign-file supply CMS_NOATTR for ML-DSA if openssl >= v4.
- Made it possible to do ML-DSA over the data without signedAttrs.
- Made RSASSA-PSS info parser use strsep() and match_token().
- Cleaned the RSASSA-PSS param parsing.
- Added limitation on what hashes can be used with what algos.
- Moved __free()-marked variables to the point of setting.
ver #11)
- Rebased on Eric's libcrypto-next branch.
- Added RSASSA-PSS support patches.
ver #10)
- Replaced the Leancrypto ML-DSA implementation with Eric's.
- Fixed Eric's implementation to have MODULE_* info.
- Added a patch to drive Eric's ML-DSA implementation from crypto_sig.
- Removed SHAKE256 from the list of available module hash algorithms.
- Changed some more ML_DSA to MLDSA in config symbols.
ver #9)
- ML-DSA changes:
- Separate output into four modules (1 common, 3 strength-specific).
- Solves Kconfig issue with needing to select at least one strength.
- Separate the strength-specific crypto-lib APIs.
- This is now generated by preprocessor-templating.
- Remove the multiplexor code.
- Multiplex the crypto-lib APIs by C type.
- Fix the PKCS#7/X.509 code to have the correct algo names.
ver #8)
- Moved the ML-DSA code to lib/crypto/mldsa/.
- Renamed some bits from ml-dsa to mldsa.
- Created a simplified API and placed that in include/crypto/mldsa.h.
- Made the testing code use the simplified API.
- Fixed a warning about implicitly casting between uint16_t and __le16.
ver #7)
- Rebased on Eric's tree as that now contains all the necessary SHA-3
infrastructure and drop the SHA-3 patches from here.
- Added a minimal patch to provide shake256 support for crypto_sig.
- Got rid of the memory allocation wrappers.
- Removed the ML-DSA keypair generation code and the signing code, leaving
only the signature verification code.
- Removed the secret key handling code.
- Removed the secret keys from the kunit tests and the signing testing.
- Removed some unused bits from the ML-DSA code.
- Downgraded the kdoc comments to ordinary comments, but keep the markup
for easier comparison to Leancrypto.
ver #6)
- Added a patch to make the jitterentropy RNG use lib/sha3.
- Added back the crypto/sha3_generic changes.
- Added ML-DSA implementation (still needs more cleanup).
- Added kunit test for ML-DSA.
- Modified PKCS#7 to accommodate ML-DSA.
- Modified PKCS#7 and X.509 to allow ML-DSA to be specified and used.
- Modified sign-file to not use CMS_NOATTR with ML-DSA.
- Allowed SHA3 and SHAKE* algorithms for module signing default.
- Allowed ML-DSA-{44,65,87} to be selected as the module signing default.
ver #5)
- Fix gen-hash-testvecs.py to correctly handle algo names that contain a
dash.
- Fix gen-hash-testvecs.py to not generate HMAC for SHA3-* or SHAKE* as
these don't currently have HMAC variants implemented.
- Fix algo names to be correct.
- Fix kunit module description as it now tests all SHA3 variants.
ver #4)
- Fix a couple of arm64 build problems.
- Doc fixes:
- Fix the description of the algorithm to be closer to the NIST spec's
terminology.
- Don't talk of finalising the context for XOFs.
- Don't say "Return: None".
- Declare the "Context" to be "Any context" and make no mention of the
fact that it might use the FPU.
- Change "initialise" to "initialize".
- Don't warn that the context is relatively large for stack use.
- Use size_t for size parameters/variables.
- Make the module_exit unconditional.
- Dropped the crypto/ dir-affecting patches for the moment.
ver #3)
- Renamed conflicting arm64 functions.
- Made a separate wrapper API for each algorithm in the family.
- Removed sha3_init(), sha3_reinit() and sha3_final().
- Removed sha3_ctx::digest_size.
- Renamed sha3_ctx::partial to sha3_ctx::absorb_offset.
- Refer to the output of SHAKE* as "output" not "digest".
- Moved the Iota transform into the one-round function.
- Made sha3_update() warn if called after sha3_squeeze().
- Simplified the module-load test to not do update after squeeze.
- Added Return: and Context: kdoc statements and expanded the kdoc
headers.
- Added an API description document.
- Overhauled the kunit tests.
- Only have one kunit test.
- Only call the general hash tester on one algo.
- Add separate simple cursory checks for the other algos.
- Add resqueezing tests.
- Add some NIST example tests.
- Changed crypto/sha3_generic to use this
- Added SHAKE128/256 to crypto/sha3_generic and crypto/testmgr
- Folded struct sha3_state into struct sha3_ctx.
ver #2)
- Simplify the endianness handling.
- Rename sha3_final() to sha3_squeeze() and don't clear the context at the
end as it's permitted to continue calling sha3_final() to extract
continuations of the digest (needed by ML-DSA).
- Don't reapply the end marker to the hash state in continuation
sha3_squeeze() unless sha3_update() gets called again (needed by
ML-DSA).
- Give sha3_squeeze() the amount of digest to produce as a parameter
rather than using ctx->digest_size and don't return the amount digested.
- Reimplement sha3_final() as a wrapper around sha3_squeeze() that
extracts ctx->digest_size amount of digest and then zeroes out the
context. The latter is necessary to avoid upsetting
hash-test-template.h.
- Provide a sha3_reinit() function to clear the state, but to leave the
parameters that indicate the hash properties unaffected, allowing for
reuse.
- Provide a sha3_set_digestsize() function to change the size of the
digest to be extracted by sha3_final(). sha3_squeeze() takes a
parameter for this instead.
- Don't pass the digest size as a parameter to shake128/256_init() but
rather default to 128/256 bits as per the function name.
- Provide a sha3_clear() function to zero out the context.
David Howells (7):
crypto: Add ML-DSA crypto_sig support
x509: Separately calculate sha256 for blacklist
pkcs7, x509: Rename ->digest to ->m
pkcs7: Allow the signing algo to do whatever digestion it wants itself
pkcs7, x509: Add ML-DSA support
modsign: Enable ML-DSA module signing
pkcs7: Allow authenticatedAttributes for ML-DSA
Documentation/admin-guide/module-signing.rst | 16 +-
certs/Kconfig | 40 ++++
certs/Makefile | 3 +
crypto/Kconfig | 9 +
crypto/Makefile | 2 +
crypto/asymmetric_keys/Kconfig | 11 +
crypto/asymmetric_keys/asymmetric_type.c | 4 +-
crypto/asymmetric_keys/pkcs7_parser.c | 36 +++-
crypto/asymmetric_keys/pkcs7_parser.h | 3 +
crypto/asymmetric_keys/pkcs7_verify.c | 78 ++++---
crypto/asymmetric_keys/public_key.c | 13 +-
crypto/asymmetric_keys/signature.c | 3 +-
crypto/asymmetric_keys/x509_cert_parser.c | 27 ++-
crypto/asymmetric_keys/x509_parser.h | 2 +
crypto/asymmetric_keys/x509_public_key.c | 42 ++--
crypto/mldsa.c | 201 +++++++++++++++++++
include/crypto/public_key.h | 6 +-
include/linux/oid_registry.h | 5 +
scripts/sign-file.c | 39 +++-
security/integrity/digsig_asymmetric.c | 4 +-
20 files changed, 473 insertions(+), 71 deletions(-)
create mode 100644 crypto/mldsa.c
|
Calculate the SHA256 hash for blacklisting purposes independently of the
signature hash (which may be something other than SHA256).
This is necessary because when ML-DSA is used, no digest is calculated.
Note that this represents a change of behaviour in that the hash used for
the blacklist check would previously have been whatever digest was used
for, say, RSA-based signatures. It may be that this is inadvisable.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
cc: Lukas Wunner <lukas@wunner.de>
cc: Ignat Korchagin <ignat@cloudflare.com>
cc: Stephan Mueller <smueller@chronox.de>
cc: Eric Biggers <ebiggers@kernel.org>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: keyrings@vger.kernel.org
cc: linux-crypto@vger.kernel.org
---
crypto/asymmetric_keys/x509_parser.h | 2 ++
crypto/asymmetric_keys/x509_public_key.c | 22 +++++++++++++---------
2 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/crypto/asymmetric_keys/x509_parser.h b/crypto/asymmetric_keys/x509_parser.h
index 0688c222806b..b7aeebdddb36 100644
--- a/crypto/asymmetric_keys/x509_parser.h
+++ b/crypto/asymmetric_keys/x509_parser.h
@@ -9,12 +9,14 @@
#include <linux/time.h>
#include <crypto/public_key.h>
#include <keys/asymmetric-type.h>
+#include <crypto/sha2.h>
struct x509_certificate {
struct x509_certificate *next;
struct x509_certificate *signer; /* Certificate that signed this one */
struct public_key *pub; /* Public key details */
struct public_key_signature *sig; /* Signature parameters */
+ u8 sha256[SHA256_DIGEST_SIZE]; /* Hash for blacklist purposes */
char *issuer; /* Name of certificate issuer */
char *subject; /* Name of certificate subject */
struct asymmetric_key_id *id; /* Issuer + Serial number */
diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c
index 12e3341e806b..79cc7b7a0630 100644
--- a/crypto/asymmetric_keys/x509_public_key.c
+++ b/crypto/asymmetric_keys/x509_public_key.c
@@ -31,6 +31,19 @@ int x509_get_sig_params(struct x509_certificate *cert)
pr_devel("==>%s()\n", __func__);
+ /* Calculate a SHA256 hash of the TBS and check it against the
+ * blacklist.
+ */
+ sha256(cert->tbs, cert->tbs_size, cert->sha256);
+ ret = is_hash_blacklisted(cert->sha256, sizeof(cert->sha256),
+ BLACKLIST_HASH_X509_TBS);
+ if (ret == -EKEYREJECTED) {
+ pr_err("Cert %*phN is blacklisted\n",
+ (int)sizeof(cert->sha256), cert->sha256);
+ cert->blacklisted = true;
+ ret = 0;
+ }
+
sig->s = kmemdup(cert->raw_sig, cert->raw_sig_size, GFP_KERNEL);
if (!sig->s)
return -ENOMEM;
@@ -69,15 +82,6 @@ int x509_get_sig_params(struct x509_certificate *cert)
if (ret < 0)
goto error_2;
- ret = is_hash_blacklisted(sig->digest, sig->digest_size,
- BLACKLIST_HASH_X509_TBS);
- if (ret == -EKEYREJECTED) {
- pr_err("Cert %*phN is blacklisted\n",
- sig->digest_size, sig->digest);
- cert->blacklisted = true;
- ret = 0;
- }
-
error_2:
kfree(desc);
error:
|
{
"author": "David Howells <dhowells@redhat.com>",
"date": "Mon, 2 Feb 2026 17:02:07 +0000",
"thread_id": "20260202170216.2467036-2-dhowells@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
|
Hi Lukas, Ignat,
[Note this is based on Eric Biggers' libcrypto-next branch].
These patches add ML-DSA module signing:
(1) Add a crypto_sig interface for ML-DSA, verification only.
(2) Generate a SHA256 hash of the X.509 TBSCertificate and check that against
the blacklist. Direct-sign ML-DSA doesn't generate an easily
accessible hash. Note that this changes behaviour as we no longer use
whatever hash is specified in the certificate for this.
(3) Rename the public_key_signature struct's "digest" and "digest_size"
members to "m" and "m_size" to reflect that it's not necessarily a
digest, but it is an input to the public key algorithm.
(4) Modify PKCS#7 support to allow kernel module signatures to carry
authenticatedAttributes as OpenSSL refuses to let them be opted out of
for ML-DSA (CMS_NOATTR). This adds an extra digest calculation to the
process.
Modify PKCS#7 to pass the authenticatedAttributes directly to the
ML-DSA algorithm rather than passing over a digest as is done with RSA
as ML-DSA wants to do its own hashing and will add other stuff into
the hash. We could use hashML-DSA or an external mu instead, but they
aren't standardised for CMS yet.
(5) Add support to the PKCS#7 and X.509 parsers for ML-DSA.
(6) Modify sign-file to handle OpenSSL not permitting CMS_NOATTR with
ML-DSA and add ML-DSA to the choice of algorithm with which to sign
modules. Note that this might need some more 'select' lines in the
Kconfig to select the lib stuff as well.
(7) Add a config option to allow authenticatedAttributes to be used with
ML-DSA for module signing. Ordinarily, authenticatedAttributes are
not permitted for this purpose, however direct signing with ML-DSA
will not be supported by OpenSSL until v4 is released.
The patches can also be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=keys-pqc
David
Changes
=======
ver #16)
- Make the selection of ML-DSA for module signing when configuring
contingent on openssl saying it supports ML-DSA (fix from Arnd
Bergmann).
- Make ML-DSA-related bits of sign-file contingent on openssl >= 3.0.0.
ver #15)
- Undo a removed blank line to simplify the X.509 patch.
- Split the rename of ->digest to ->m into its own patch.
- In pkcs7_digest(), always copy the signedAttrs and modify rather than
passing the replacement tag byte in a separate shash update call to the
rest of the data. That way the ->m buffer is very likely to be
optimally aligned for the crypto.
- Only allow authenticatedAttributes with ML-DSA for module signing and
only if permission is given in the kernel config.
ver #14)
- public_key:
- Rename public_key::digest to public_key::m.
- X.509:
- Independently calculate the SHA256 hash for the blacklist check as
an ML-DSA-signed X.509 cert doesn't generate a digest we can use.
- Point public_key::m at the TBS data for ML-DSA.
- PKCS#7:
- Allocate a big enough digest buffer rather than reallocating in order
to store the authattrs/signedattrs instead.
- Merge the two patches that add direct signing support.
- ML-DSA:
- Use bool instead of u8.
- Remove references to SHAKE in Kconfig and mention OpenSSL requirements
there.
- Limit ML-DSA with an intermediate hash (e.g. signedAttrs) to using
SHA512 only.
- Don't select CRYPTO_LIB_SHA3 for CRYPTO_MLDSA.
- RSASSA-PSS:
- Allow use with SHA256 and SHA384.
- Fix calculation of emBits to be number of bits in the RSA modulus 'n'.
- Use strncmp() not memcmp() to avoid reading beyond end of string.
- Use correct destructor in rsassa_params_parse().
- Drop this algo for the moment.
- Drop the pefile_context::digest_free for now - it's only set to true and
is unrelated to public_key::digest_free.
ver #13)
- Allow a zero-length salt in RSASSA-PSS.
- Don't reject ECDSA/ECRDSA with SHA256 and SHA384 otherwise the FIPS
selftest panics when used.
- Add a FIPS test for RSASSA-PSS (from NIST's SigVerPSS_186-3.rsp).
- Add a FIPS test for ML-DSA (from NIST's FIPS204 JSON set).
ver #12)
- Rebased on Eric's libcrypto-next branch.
- Delete references to Dilithium (ML-DSA derived from this).
- Made sign-file supply CMS_NOATTR for ML-DSA if openssl >= v4.
- Made it possible to do ML-DSA over the data without signedAttrs.
- Made RSASSA-PSS info parser use strsep() and match_token().
- Cleaned the RSASSA-PSS param parsing.
- Added limitation on what hashes can be used with what algos.
- Moved __free()-marked variables to the point of setting.
ver #11)
- Rebased on Eric's libcrypto-next branch.
- Added RSASSA-PSS support patches.
ver #10)
- Replaced the Leancrypto ML-DSA implementation with Eric's.
- Fixed Eric's implementation to have MODULE_* info.
- Added a patch to drive Eric's ML-DSA implementation from crypto_sig.
- Removed SHAKE256 from the list of available module hash algorithms.
- Changed some more ML_DSA to MLDSA in config symbols.
ver #9)
- ML-DSA changes:
- Separate output into four modules (1 common, 3 strength-specific).
- Solves Kconfig issue with needing to select at least one strength.
- Separate the strength-specific crypto-lib APIs.
- This is now generated by preprocessor-templating.
- Remove the multiplexor code.
- Multiplex the crypto-lib APIs by C type.
- Fix the PKCS#7/X.509 code to have the correct algo names.
ver #8)
- Moved the ML-DSA code to lib/crypto/mldsa/.
- Renamed some bits from ml-dsa to mldsa.
- Created a simplified API and placed that in include/crypto/mldsa.h.
- Made the testing code use the simplified API.
- Fixed a warning about implicitly casting between uint16_t and __le16.
ver #7)
- Rebased on Eric's tree as that now contains all the necessary SHA-3
infrastructure and drop the SHA-3 patches from here.
- Added a minimal patch to provide shake256 support for crypto_sig.
- Got rid of the memory allocation wrappers.
- Removed the ML-DSA keypair generation code and the signing code, leaving
only the signature verification code.
- Removed the secret key handling code.
- Removed the secret keys from the kunit tests and the signing testing.
- Removed some unused bits from the ML-DSA code.
- Downgraded the kdoc comments to ordinary comments, but keep the markup
for easier comparison to Leancrypto.
ver #6)
- Added a patch to make the jitterentropy RNG use lib/sha3.
- Added back the crypto/sha3_generic changes.
- Added ML-DSA implementation (still needs more cleanup).
- Added kunit test for ML-DSA.
- Modified PKCS#7 to accommodate ML-DSA.
- Modified PKCS#7 and X.509 to allow ML-DSA to be specified and used.
- Modified sign-file to not use CMS_NOATTR with ML-DSA.
- Allowed SHA3 and SHAKE* algorithms for module signing default.
- Allowed ML-DSA-{44,65,87} to be selected as the module signing default.
ver #5)
- Fix gen-hash-testvecs.py to correctly handle algo names that contain a
dash.
- Fix gen-hash-testvecs.py to not generate HMAC for SHA3-* or SHAKE* as
these don't currently have HMAC variants implemented.
- Fix algo names to be correct.
- Fix kunit module description as it now tests all SHA3 variants.
ver #4)
- Fix a couple of arm64 build problems.
- Doc fixes:
- Fix the description of the algorithm to be closer to the NIST spec's
terminology.
- Don't talk of finalising the context for XOFs.
- Don't say "Return: None".
- Declare the "Context" to be "Any context" and make no mention of the
fact that it might use the FPU.
- Change "initialise" to "initialize".
- Don't warn that the context is relatively large for stack use.
- Use size_t for size parameters/variables.
- Make the module_exit unconditional.
- Dropped the crypto/ dir-affecting patches for the moment.
ver #3)
- Renamed conflicting arm64 functions.
- Made a separate wrapper API for each algorithm in the family.
- Removed sha3_init(), sha3_reinit() and sha3_final().
- Removed sha3_ctx::digest_size.
- Renamed sha3_ctx::partial to sha3_ctx::absorb_offset.
- Refer to the output of SHAKE* as "output" not "digest".
- Moved the Iota transform into the one-round function.
- Made sha3_update() warn if called after sha3_squeeze().
- Simplified the module-load test to not do update after squeeze.
- Added Return: and Context: kdoc statements and expanded the kdoc
headers.
- Added an API description document.
- Overhauled the kunit tests.
- Only have one kunit test.
- Only call the general hash tester on one algo.
- Add separate simple cursory checks for the other algos.
- Add resqueezing tests.
- Add some NIST example tests.
- Changed crypto/sha3_generic to use this
- Added SHAKE128/256 to crypto/sha3_generic and crypto/testmgr
- Folded struct sha3_state into struct sha3_ctx.
ver #2)
- Simplify the endianness handling.
- Rename sha3_final() to sha3_squeeze() and don't clear the context at the
end as it's permitted to continue calling sha3_final() to extract
continuations of the digest (needed by ML-DSA).
- Don't reapply the end marker to the hash state in continuation
sha3_squeeze() unless sha3_update() gets called again (needed by
ML-DSA).
- Give sha3_squeeze() the amount of digest to produce as a parameter
rather than using ctx->digest_size and don't return the amount digested.
- Reimplement sha3_final() as a wrapper around sha3_squeeze() that
extracts ctx->digest_size amount of digest and then zeroes out the
context. The latter is necessary to avoid upsetting
hash-test-template.h.
- Provide a sha3_reinit() function to clear the state, but to leave the
parameters that indicate the hash properties unaffected, allowing for
reuse.
- Provide a sha3_set_digestsize() function to change the size of the
digest to be extracted by sha3_final(). sha3_squeeze() takes a
parameter for this instead.
- Don't pass the digest size as a parameter to shake128/256_init() but
rather default to 128/256 bits as per the function name.
- Provide a sha3_clear() function to zero out the context.
David Howells (7):
crypto: Add ML-DSA crypto_sig support
x509: Separately calculate sha256 for blacklist
pkcs7, x509: Rename ->digest to ->m
pkcs7: Allow the signing algo to do whatever digestion it wants itself
pkcs7, x509: Add ML-DSA support
modsign: Enable ML-DSA module signing
pkcs7: Allow authenticatedAttributes for ML-DSA
Documentation/admin-guide/module-signing.rst | 16 +-
certs/Kconfig | 40 ++++
certs/Makefile | 3 +
crypto/Kconfig | 9 +
crypto/Makefile | 2 +
crypto/asymmetric_keys/Kconfig | 11 +
crypto/asymmetric_keys/asymmetric_type.c | 4 +-
crypto/asymmetric_keys/pkcs7_parser.c | 36 +++-
crypto/asymmetric_keys/pkcs7_parser.h | 3 +
crypto/asymmetric_keys/pkcs7_verify.c | 78 ++++---
crypto/asymmetric_keys/public_key.c | 13 +-
crypto/asymmetric_keys/signature.c | 3 +-
crypto/asymmetric_keys/x509_cert_parser.c | 27 ++-
crypto/asymmetric_keys/x509_parser.h | 2 +
crypto/asymmetric_keys/x509_public_key.c | 42 ++--
crypto/mldsa.c | 201 +++++++++++++++++++
include/crypto/public_key.h | 6 +-
include/linux/oid_registry.h | 5 +
scripts/sign-file.c | 39 +++-
security/integrity/digsig_asymmetric.c | 4 +-
20 files changed, 473 insertions(+), 71 deletions(-)
create mode 100644 crypto/mldsa.c
|
Rename ->digest and ->digest_size to ->m and ->m_size to represent the input
to the signature verification algorithm, reflecting that ->digest may no
longer actually *be* a digest.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
cc: Lukas Wunner <lukas@wunner.de>
cc: Ignat Korchagin <ignat@cloudflare.com>
cc: Stephan Mueller <smueller@chronox.de>
cc: Eric Biggers <ebiggers@kernel.org>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: keyrings@vger.kernel.org
cc: linux-crypto@vger.kernel.org
---
crypto/asymmetric_keys/asymmetric_type.c | 4 ++--
crypto/asymmetric_keys/pkcs7_verify.c | 28 ++++++++++++------------
crypto/asymmetric_keys/public_key.c | 3 +--
crypto/asymmetric_keys/signature.c | 2 +-
crypto/asymmetric_keys/x509_public_key.c | 10 ++++-----
include/crypto/public_key.h | 4 ++--
security/integrity/digsig_asymmetric.c | 4 ++--
7 files changed, 26 insertions(+), 29 deletions(-)
diff --git a/crypto/asymmetric_keys/asymmetric_type.c b/crypto/asymmetric_keys/asymmetric_type.c
index 348966ea2175..2326743310b1 100644
--- a/crypto/asymmetric_keys/asymmetric_type.c
+++ b/crypto/asymmetric_keys/asymmetric_type.c
@@ -593,10 +593,10 @@ static int asymmetric_key_verify_signature(struct kernel_pkey_params *params,
{
struct public_key_signature sig = {
.s_size = params->in2_len,
- .digest_size = params->in_len,
+ .m_size = params->in_len,
.encoding = params->encoding,
.hash_algo = params->hash_algo,
- .digest = (void *)in,
+ .m = (void *)in,
.s = (void *)in2,
};
diff --git a/crypto/asymmetric_keys/pkcs7_verify.c b/crypto/asymmetric_keys/pkcs7_verify.c
index 6d6475e3a9bf..aa085ec6fb1c 100644
--- a/crypto/asymmetric_keys/pkcs7_verify.c
+++ b/crypto/asymmetric_keys/pkcs7_verify.c
@@ -31,7 +31,7 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
kenter(",%u,%s", sinfo->index, sinfo->sig->hash_algo);
/* The digest was calculated already. */
- if (sig->digest)
+ if (sig->m)
return 0;
if (!sinfo->sig->hash_algo)
@@ -45,11 +45,11 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
return (PTR_ERR(tfm) == -ENOENT) ? -ENOPKG : PTR_ERR(tfm);
desc_size = crypto_shash_descsize(tfm) + sizeof(*desc);
- sig->digest_size = crypto_shash_digestsize(tfm);
+ sig->m_size = crypto_shash_digestsize(tfm);
ret = -ENOMEM;
- sig->digest = kmalloc(sig->digest_size, GFP_KERNEL);
- if (!sig->digest)
+ sig->m = kmalloc(sig->m_size, GFP_KERNEL);
+ if (!sig->m)
goto error_no_desc;
desc = kzalloc(desc_size, GFP_KERNEL);
@@ -59,11 +59,10 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
desc->tfm = tfm;
/* Digest the message [RFC2315 9.3] */
- ret = crypto_shash_digest(desc, pkcs7->data, pkcs7->data_len,
- sig->digest);
+ ret = crypto_shash_digest(desc, pkcs7->data, pkcs7->data_len, sig->m);
if (ret < 0)
goto error;
- pr_devel("MsgDigest = [%*ph]\n", 8, sig->digest);
+ pr_devel("MsgDigest = [%*ph]\n", 8, sig->m);
/* However, if there are authenticated attributes, there must be a
* message digest attribute amongst them which corresponds to the
@@ -78,14 +77,14 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
goto error;
}
- if (sinfo->msgdigest_len != sig->digest_size) {
+ if (sinfo->msgdigest_len != sig->m_size) {
pr_warn("Sig %u: Invalid digest size (%u)\n",
sinfo->index, sinfo->msgdigest_len);
ret = -EBADMSG;
goto error;
}
- if (memcmp(sig->digest, sinfo->msgdigest,
+ if (memcmp(sig->m, sinfo->msgdigest,
sinfo->msgdigest_len) != 0) {
pr_warn("Sig %u: Message digest doesn't match\n",
sinfo->index);
@@ -98,7 +97,8 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
* convert the attributes from a CONT.0 into a SET before we
* hash it.
*/
- memset(sig->digest, 0, sig->digest_size);
+ memset(sig->m, 0, sig->m_size);
+
ret = crypto_shash_init(desc);
if (ret < 0)
@@ -108,10 +108,10 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
if (ret < 0)
goto error;
ret = crypto_shash_finup(desc, sinfo->authattrs,
- sinfo->authattrs_len, sig->digest);
+ sinfo->authattrs_len, sig->m);
if (ret < 0)
goto error;
- pr_devel("AADigest = [%*ph]\n", 8, sig->digest);
+ pr_devel("AADigest = [%*ph]\n", 8, sig->m);
}
error:
@@ -138,8 +138,8 @@ int pkcs7_get_digest(struct pkcs7_message *pkcs7, const u8 **buf, u32 *len,
if (ret)
return ret;
- *buf = sinfo->sig->digest;
- *len = sinfo->sig->digest_size;
+ *buf = sinfo->sig->m;
+ *len = sinfo->sig->m_size;
i = match_string(hash_algo_name, HASH_ALGO__LAST,
sinfo->sig->hash_algo);
diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
index e5b177c8e842..a46356e0c08b 100644
--- a/crypto/asymmetric_keys/public_key.c
+++ b/crypto/asymmetric_keys/public_key.c
@@ -425,8 +425,7 @@ int public_key_verify_signature(const struct public_key *pkey,
if (ret)
goto error_free_key;
- ret = crypto_sig_verify(tfm, sig->s, sig->s_size,
- sig->digest, sig->digest_size);
+ ret = crypto_sig_verify(tfm, sig->s, sig->s_size, sig->m, sig->m_size);
error_free_key:
kfree_sensitive(key);
diff --git a/crypto/asymmetric_keys/signature.c b/crypto/asymmetric_keys/signature.c
index 041d04b5c953..f4ec126121b3 100644
--- a/crypto/asymmetric_keys/signature.c
+++ b/crypto/asymmetric_keys/signature.c
@@ -28,7 +28,7 @@ void public_key_signature_free(struct public_key_signature *sig)
for (i = 0; i < ARRAY_SIZE(sig->auth_ids); i++)
kfree(sig->auth_ids[i]);
kfree(sig->s);
- kfree(sig->digest);
+ kfree(sig->m);
kfree(sig);
}
}
diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c
index 79cc7b7a0630..3854f7ae4ed0 100644
--- a/crypto/asymmetric_keys/x509_public_key.c
+++ b/crypto/asymmetric_keys/x509_public_key.c
@@ -63,11 +63,11 @@ int x509_get_sig_params(struct x509_certificate *cert)
}
desc_size = crypto_shash_descsize(tfm) + sizeof(*desc);
- sig->digest_size = crypto_shash_digestsize(tfm);
+ sig->m_size = crypto_shash_digestsize(tfm);
ret = -ENOMEM;
- sig->digest = kmalloc(sig->digest_size, GFP_KERNEL);
- if (!sig->digest)
+ sig->m = kmalloc(sig->m_size, GFP_KERNEL);
+ if (!sig->m)
goto error;
desc = kzalloc(desc_size, GFP_KERNEL);
@@ -76,9 +76,7 @@ int x509_get_sig_params(struct x509_certificate *cert)
desc->tfm = tfm;
- ret = crypto_shash_digest(desc, cert->tbs, cert->tbs_size,
- sig->digest);
-
+ ret = crypto_shash_digest(desc, cert->tbs, cert->tbs_size, sig->m);
if (ret < 0)
goto error_2;
diff --git a/include/crypto/public_key.h b/include/crypto/public_key.h
index 81098e00c08f..bd38ba4d217d 100644
--- a/include/crypto/public_key.h
+++ b/include/crypto/public_key.h
@@ -43,9 +43,9 @@ extern void public_key_free(struct public_key *key);
struct public_key_signature {
struct asymmetric_key_id *auth_ids[3];
u8 *s; /* Signature */
- u8 *digest;
+ u8 *m; /* Message data to pass to verifier */
u32 s_size; /* Number of bytes in signature */
- u32 digest_size; /* Number of bytes in digest */
+ u32 m_size; /* Number of bytes in ->m */
const char *pkey_algo;
const char *hash_algo;
const char *encoding;
diff --git a/security/integrity/digsig_asymmetric.c b/security/integrity/digsig_asymmetric.c
index 457c0a396caf..87be85f477d1 100644
--- a/security/integrity/digsig_asymmetric.c
+++ b/security/integrity/digsig_asymmetric.c
@@ -121,8 +121,8 @@ int asymmetric_verify(struct key *keyring, const char *sig,
goto out;
}
- pks.digest = (u8 *)data;
- pks.digest_size = datalen;
+ pks.m = (u8 *)data;
+ pks.m_size = datalen;
pks.s = hdr->sig;
pks.s_size = siglen;
ret = verify_signature(key, &pks);
|
{
"author": "David Howells <dhowells@redhat.com>",
"date": "Mon, 2 Feb 2026 17:02:08 +0000",
"thread_id": "20260202170216.2467036-2-dhowells@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
|
Hi Lukas, Ignat,
[Note this is based on Eric Biggers' libcrypto-next branch].
These patches add ML-DSA module signing:
(1) Add a crypto_sig interface for ML-DSA, verification only.
(2) Generate a SHA256 hash of the X.509 TBSCertificate and check that in
the blacklist. Direct-sign ML-DSA doesn't generate an easily
accessible hash. Note that this changes behaviour as we no longer use
whatever hash is specified in the certificate for this.
(3) Rename the public_key_signature struct's "digest" and "digest_size"
members to "m" and "m_size" to reflect that it's not necessarily a
digest, but it is an input to the public key algorithm.
(4) Modify PKCS#7 support to allow kernel module signatures to carry
authenticatedAttributes as OpenSSL refuses to let them be opted out of
for ML-DSA (CMS_NOATTR). This adds an extra digest calculation to the
process.
Modify PKCS#7 to pass the authenticatedAttributes directly to the
ML-DSA algorithm rather than passing over a digest as is done with RSA
as ML-DSA wants to do its own hashing and will add other stuff into
the hash. We could use hashML-DSA or an external mu instead, but they
aren't standardised for CMS yet.
(5) Add support to the PKCS#7 and X.509 parsers for ML-DSA.
(6) Modify sign-file to handle OpenSSL not permitting CMS_NOATTR with
ML-DSA and add ML-DSA to the choice of algorithm with which to sign
modules. Note that this might need some more 'select' lines in the
Kconfig to select the lib stuff as well.
(7) Add a config option to allow authenticatedAttributes to be used with
ML-DSA for module signing. Ordinarily, authenticatedAttributes are
not permitted for this purpose, however direct signing with ML-DSA
will not be supported by OpenSSL until v4 is released.
The patches can also be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=keys-pqc
David
Changes
=======
ver #16)
- Make the selection of ML-DSA for module signing when configuring
contingent on openssl saying it supports ML-DSA (fix from Arnd
Bergmann).
- Make ML-DSA-related bits of sign-file contingent on openssl >= 3.0.0.
ver #15)
- Undo a removed blank line to simplify the X.509 patch.
- Split the rename of ->digest to ->m into its own patch.
- In pkcs7_digest(), always copy the signedAttrs and modify rather than
passing the replacement tag byte in a separate shash update call to the
rest of the data. That way the ->m buffer is very likely to be
optimally aligned for the crypto.
- Only allow authenticatedAttributes with ML-DSA for module signing and
only if permission is given in the kernel config.
ver #14)
- public_key:
- Rename public_key::digest to public_key::m.
- X.509:
- Independently calculate the SHA256 hash for the blacklist check as
an ML-DSA-signed X.509 cert doesn't generate a digest we can use.
- Point public_key::m at the TBS data for ML-DSA.
- PKCS#7:
- Allocate a big enough digest buffer rather than reallocating in order
to store the authattrs/signedattrs instead.
- Merge the two patches that add direct signing support.
- ML-DSA:
- Use bool instead of u8.
- Remove references to SHAKE in Kconfig and mention OpenSSL requirements
there.
- Limit ML-DSA with an intermediate hash (e.g. signedAttrs) to using
SHA512 only.
- Don't select CRYPTO_LIB_SHA3 for CRYPTO_MLDSA.
- RSASSA-PSS:
- Allow use with SHA256 and SHA384.
- Fix calculation of emBits to be number of bits in the RSA modulus 'n'.
- Use strncmp() not memcmp() to avoid reading beyond end of string.
- Use correct destructor in rsassa_params_parse().
- Drop this algo for the moment.
- Drop the pefile_context::digest_free for now - it's only set to true and
is unrelated to public_key::digest_free.
ver #13)
- Allow a zero-length salt in RSASSA-PSS.
- Don't reject ECDSA/ECRDSA with SHA256 and SHA384 otherwise the FIPS
selftest panics when used.
- Add a FIPS test for RSASSA-PSS (from NIST's SigVerPSS_186-3.rsp).
- Add a FIPS test for ML-DSA (from NIST's FIPS204 JSON set).
ver #12)
- Rebased on Eric's libcrypto-next branch.
- Delete references to Dilithium (ML-DSA derived from this).
- Made sign-file supply CMS_NOATTR for ML-DSA if openssl >= v4.
- Made it possible to do ML-DSA over the data without signedAttrs.
- Made RSASSA-PSS info parser use strsep() and match_token().
- Cleaned the RSASSA-PSS param parsing.
- Added limitation on what hashes can be used with what algos.
- Moved __free()-marked variables to the point of setting.
ver #11)
- Rebased on Eric's libcrypto-next branch.
- Added RSASSA-PSS support patches.
ver #10)
- Replaced the Leancrypto ML-DSA implementation with Eric's.
- Fixed Eric's implementation to have MODULE_* info.
- Added a patch to drive Eric's ML-DSA implementation from crypto_sig.
- Removed SHAKE256 from the list of available module hash algorithms.
- Changed some more ML_DSA to MLDSA in config symbols.
ver #9)
- ML-DSA changes:
- Separate output into four modules (1 common, 3 strength-specific).
- Solves Kconfig issue with needing to select at least one strength.
- Separate the strength-specific crypto-lib APIs.
- This is now generated by preprocessor-templating.
- Remove the multiplexor code.
- Multiplex the crypto-lib APIs by C type.
- Fix the PKCS#7/X.509 code to have the correct algo names.
ver #8)
- Moved the ML-DSA code to lib/crypto/mldsa/.
- Renamed some bits from ml-dsa to mldsa.
- Created a simplified API and placed that in include/crypto/mldsa.h.
- Made the testing code use the simplified API.
- Fixed a warning about implicitly casting between uint16_t and __le16.
ver #7)
- Rebased on Eric's tree as that now contains all the necessary SHA-3
infrastructure and drop the SHA-3 patches from here.
- Added a minimal patch to provide shake256 support for crypto_sig.
- Got rid of the memory allocation wrappers.
- Removed the ML-DSA keypair generation code and the signing code, leaving
only the signature verification code.
- Removed the secret key handling code.
- Removed the secret keys from the kunit tests and the signing testing.
- Removed some unused bits from the ML-DSA code.
- Downgraded the kdoc comments to ordinary comments, but keep the markup
for easier comparison to Leancrypto.
ver #6)
- Added a patch to make the jitterentropy RNG use lib/sha3.
- Added back the crypto/sha3_generic changes.
- Added ML-DSA implementation (still needs more cleanup).
- Added kunit test for ML-DSA.
- Modified PKCS#7 to accommodate ML-DSA.
- Modified PKCS#7 and X.509 to allow ML-DSA to be specified and used.
- Modified sign-file to not use CMS_NOATTR with ML-DSA.
- Allowed SHA3 and SHAKE* algorithms for module signing default.
- Allowed ML-DSA-{44,65,87} to be selected as the module signing default.
ver #5)
- Fix gen-hash-testvecs.py to correctly handle algo names that contain a
dash.
- Fix gen-hash-testvecs.py to not generate HMAC for SHA3-* or SHAKE* as
these don't currently have HMAC variants implemented.
- Fix algo names to be correct.
- Fix kunit module description as it now tests all SHA3 variants.
ver #4)
- Fix a couple of arm64 build problems.
- Doc fixes:
- Fix the description of the algorithm to be closer to the NIST spec's
terminology.
- Don't talk of finalising the context for XOFs.
- Don't say "Return: None".
- Declare the "Context" to be "Any context" and make no mention of the
fact that it might use the FPU.
- Change "initialise" to "initialize".
- Don't warn that the context is relatively large for stack use.
- Use size_t for size parameters/variables.
- Make the module_exit unconditional.
- Dropped the crypto/ dir-affecting patches for the moment.
ver #3)
- Renamed conflicting arm64 functions.
- Made a separate wrapper API for each algorithm in the family.
- Removed sha3_init(), sha3_reinit() and sha3_final().
- Removed sha3_ctx::digest_size.
- Renamed sha3_ctx::partial to sha3_ctx::absorb_offset.
- Refer to the output of SHAKE* as "output" not "digest".
- Moved the Iota transform into the one-round function.
- Made sha3_update() warn if called after sha3_squeeze().
- Simplified the module-load test to not do update after squeeze.
- Added Return: and Context: kdoc statements and expanded the kdoc
headers.
- Added an API description document.
- Overhauled the kunit tests.
- Only have one kunit test.
- Only call the general hash tester on one algo.
- Add separate simple cursory checks for the other algos.
- Add resqueezing tests.
- Add some NIST example tests.
- Changed crypto/sha3_generic to use this
- Added SHAKE128/256 to crypto/sha3_generic and crypto/testmgr
- Folded struct sha3_state into struct sha3_ctx.
ver #2)
- Simplify the endianness handling.
- Rename sha3_final() to sha3_squeeze() and don't clear the context at the
end as it's permitted to continue calling sha3_final() to extract
continuations of the digest (needed by ML-DSA).
- Don't reapply the end marker to the hash state in continuation
sha3_squeeze() unless sha3_update() gets called again (needed by
ML-DSA).
- Give sha3_squeeze() the amount of digest to produce as a parameter
rather than using ctx->digest_size and don't return the amount digested.
- Reimplement sha3_final() as a wrapper around sha3_squeeze() that
extracts ctx->digest_size amount of digest and then zeroes out the
context. The latter is necessary to avoid upsetting
hash-test-template.h.
- Provide a sha3_reinit() function to clear the state, but to leave the
parameters that indicate the hash properties unaffected, allowing for
reuse.
- Provide a sha3_set_digestsize() function to change the size of the
digest to be extracted by sha3_final(). sha3_squeeze() takes a
parameter for this instead.
- Don't pass the digest size as a parameter to shake128/256_init() but
rather default to 128/256 bits as per the function name.
- Provide a sha3_clear() function to zero out the context.
David Howells (7):
crypto: Add ML-DSA crypto_sig support
x509: Separately calculate sha256 for blacklist
pkcs7, x509: Rename ->digest to ->m
pkcs7: Allow the signing algo to do whatever digestion it wants itself
pkcs7, x509: Add ML-DSA support
modsign: Enable ML-DSA module signing
pkcs7: Allow authenticatedAttributes for ML-DSA
Documentation/admin-guide/module-signing.rst | 16 +-
certs/Kconfig | 40 ++++
certs/Makefile | 3 +
crypto/Kconfig | 9 +
crypto/Makefile | 2 +
crypto/asymmetric_keys/Kconfig | 11 +
crypto/asymmetric_keys/asymmetric_type.c | 4 +-
crypto/asymmetric_keys/pkcs7_parser.c | 36 +++-
crypto/asymmetric_keys/pkcs7_parser.h | 3 +
crypto/asymmetric_keys/pkcs7_verify.c | 78 ++++---
crypto/asymmetric_keys/public_key.c | 13 +-
crypto/asymmetric_keys/signature.c | 3 +-
crypto/asymmetric_keys/x509_cert_parser.c | 27 ++-
crypto/asymmetric_keys/x509_parser.h | 2 +
crypto/asymmetric_keys/x509_public_key.c | 42 ++--
crypto/mldsa.c | 201 +++++++++++++++++++
include/crypto/public_key.h | 6 +-
include/linux/oid_registry.h | 5 +
scripts/sign-file.c | 39 +++-
security/integrity/digsig_asymmetric.c | 4 +-
20 files changed, 473 insertions(+), 71 deletions(-)
create mode 100644 crypto/mldsa.c
|
Allow the data to be verified in a PKCS#7 or CMS message to be passed
directly to an asymmetric cipher algorithm (e.g. ML-DSA) if that algorithm
wants to do whatever passes for hashing/digestion itself. The normal
digestion of the data is then skipped, as the digest would be ignored
unless another signed info in the message uses an algorithm that needs it.
The 'data to be verified' is the content of the PKCS#7 message or, if they
are present, the (modified) authenticatedAttributes (signedAttrs in CMS).
This is done by:
(1) Make ->m and ->m_size point to the data to be verified rather than
making public_key_verify_signature() access the data directly. This
is so that keyctl(KEYCTL_PKEY_VERIFY) will still work.
(2) Add a flag, ->algo_takes_data, to indicate that the verification
algorithm wants to access the data to be verified directly rather than
having it digested first.
(3) If the PKCS#7 message has authenticatedAttributes (or CMS
signedAttrs), then the digest contained therein will be validated as
now, and the modified attrs blob will either be digested or assigned
to ->m as appropriate.
(4) If present, always copy and modify the authenticatedAttributes (or
signedAttrs) then digest that in one go rather than calling the shash
update twice (once for the tag and once for the rest).
(5) For ML-DSA, point ->m to the TBSCertificate instead of digesting it
and using the digest.
Note that whilst ML-DSA does allow for an "external mu", CMS doesn't yet
have that standardised.
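As a condensed sketch of the flow described above (allocation sizing, error
handling and the plain no-authattrs hashing path are omitted; see the
pkcs7_verify.c changes below for the real code):
        if (!sinfo->authattrs && sig->algo_takes_data) {
                /* No signedAttrs and the algo hashes for itself:
                 * verify the message content directly.
                 */
                sig->m = (void *)pkcs7->data;
                sig->m_size = pkcs7->data_len;
                sig->m_free = false;
        } else if (sinfo->authattrs) {
                /* Copy the attrs and switch the CONT.0 tag to SET OF. */
                memcpy(sig->m, sinfo->authattrs, sinfo->authattrs_len);
                sig->m[0] = ASN1_CONS_BIT | ASN1_SET;
                if (sig->algo_takes_data)
                        sig->m_size = sinfo->authattrs_len;
                else
                        ret = crypto_shash_digest(desc, sig->m,
                                                  sinfo->authattrs_len, sig->m);
        }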
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Lukas Wunner <lukas@wunner.de>
cc: Ignat Korchagin <ignat@cloudflare.com>
cc: Stephan Mueller <smueller@chronox.de>
cc: Eric Biggers <ebiggers@kernel.org>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: keyrings@vger.kernel.org
cc: linux-crypto@vger.kernel.org
---
crypto/asymmetric_keys/pkcs7_parser.c | 4 +-
crypto/asymmetric_keys/pkcs7_verify.c | 52 ++++++++++++++++--------
crypto/asymmetric_keys/signature.c | 3 +-
crypto/asymmetric_keys/x509_public_key.c | 10 +++++
include/crypto/public_key.h | 2 +
5 files changed, 51 insertions(+), 20 deletions(-)
diff --git a/crypto/asymmetric_keys/pkcs7_parser.c b/crypto/asymmetric_keys/pkcs7_parser.c
index 423d13c47545..3cdbab3b9f50 100644
--- a/crypto/asymmetric_keys/pkcs7_parser.c
+++ b/crypto/asymmetric_keys/pkcs7_parser.c
@@ -599,8 +599,8 @@ int pkcs7_sig_note_set_of_authattrs(void *context, size_t hdrlen,
}
/* We need to switch the 'CONT 0' to a 'SET OF' when we digest */
- sinfo->authattrs = value - (hdrlen - 1);
- sinfo->authattrs_len = vlen + (hdrlen - 1);
+ sinfo->authattrs = value - hdrlen;
+ sinfo->authattrs_len = vlen + hdrlen;
return 0;
}
diff --git a/crypto/asymmetric_keys/pkcs7_verify.c b/crypto/asymmetric_keys/pkcs7_verify.c
index aa085ec6fb1c..06abb9838f95 100644
--- a/crypto/asymmetric_keys/pkcs7_verify.c
+++ b/crypto/asymmetric_keys/pkcs7_verify.c
@@ -30,6 +30,16 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
kenter(",%u,%s", sinfo->index, sinfo->sig->hash_algo);
+ if (!sinfo->authattrs && sig->algo_takes_data) {
+ /* There's no intermediate digest and the signature algo
+ * doesn't want the data prehashing.
+ */
+ sig->m = (void *)pkcs7->data;
+ sig->m_size = pkcs7->data_len;
+ sig->m_free = false;
+ return 0;
+ }
+
/* The digest was calculated already. */
if (sig->m)
return 0;
@@ -48,9 +58,10 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
sig->m_size = crypto_shash_digestsize(tfm);
ret = -ENOMEM;
- sig->m = kmalloc(sig->m_size, GFP_KERNEL);
+ sig->m = kmalloc(umax(sinfo->authattrs_len, sig->m_size), GFP_KERNEL);
if (!sig->m)
goto error_no_desc;
+ sig->m_free = true;
desc = kzalloc(desc_size, GFP_KERNEL);
if (!desc)
@@ -69,8 +80,6 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
* digest we just calculated.
*/
if (sinfo->authattrs) {
- u8 tag;
-
if (!sinfo->msgdigest) {
pr_warn("Sig %u: No messageDigest\n", sinfo->index);
ret = -EKEYREJECTED;
@@ -96,21 +105,25 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
* as the contents of the digest instead. Note that we need to
* convert the attributes from a CONT.0 into a SET before we
* hash it.
+ *
+ * However, for certain algorithms, such as ML-DSA, the digest
+ * is integrated into the signing algorithm. In such a case,
+ * we copy the authattrs, modifying the tag type, and set that
+ * as the digest.
*/
- memset(sig->m, 0, sig->m_size);
-
-
- ret = crypto_shash_init(desc);
- if (ret < 0)
- goto error;
- tag = ASN1_CONS_BIT | ASN1_SET;
- ret = crypto_shash_update(desc, &tag, 1);
- if (ret < 0)
- goto error;
- ret = crypto_shash_finup(desc, sinfo->authattrs,
- sinfo->authattrs_len, sig->m);
- if (ret < 0)
- goto error;
+ memcpy(sig->m, sinfo->authattrs, sinfo->authattrs_len);
+ sig->m[0] = ASN1_CONS_BIT | ASN1_SET;
+
+ if (sig->algo_takes_data) {
+ sig->m_size = sinfo->authattrs_len;
+ ret = 0;
+ } else {
+ ret = crypto_shash_digest(desc, sig->m,
+ sinfo->authattrs_len,
+ sig->m);
+ if (ret < 0)
+ goto error;
+ }
pr_devel("AADigest = [%*ph]\n", 8, sig->m);
}
@@ -137,6 +150,11 @@ int pkcs7_get_digest(struct pkcs7_message *pkcs7, const u8 **buf, u32 *len,
ret = pkcs7_digest(pkcs7, sinfo);
if (ret)
return ret;
+ if (!sinfo->sig->m_free) {
+ pr_notice_once("%s: No digest available\n", __func__);
+ return -EINVAL; /* TODO: MLDSA doesn't necessarily calculate an
+ * intermediate digest. */
+ }
*buf = sinfo->sig->m;
*len = sinfo->sig->m_size;
diff --git a/crypto/asymmetric_keys/signature.c b/crypto/asymmetric_keys/signature.c
index f4ec126121b3..a5ac7a53b670 100644
--- a/crypto/asymmetric_keys/signature.c
+++ b/crypto/asymmetric_keys/signature.c
@@ -28,7 +28,8 @@ void public_key_signature_free(struct public_key_signature *sig)
for (i = 0; i < ARRAY_SIZE(sig->auth_ids); i++)
kfree(sig->auth_ids[i]);
kfree(sig->s);
- kfree(sig->m);
+ if (sig->m_free)
+ kfree(sig->m);
kfree(sig);
}
}
diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c
index 3854f7ae4ed0..27b4fea37845 100644
--- a/crypto/asymmetric_keys/x509_public_key.c
+++ b/crypto/asymmetric_keys/x509_public_key.c
@@ -50,6 +50,14 @@ int x509_get_sig_params(struct x509_certificate *cert)
sig->s_size = cert->raw_sig_size;
+ if (sig->algo_takes_data) {
+ /* The signature algorithm does whatever passes for hashing. */
+ sig->m = (u8 *)cert->tbs;
+ sig->m_size = cert->tbs_size;
+ sig->m_free = false;
+ goto out;
+ }
+
/* Allocate the hashing algorithm we're going to need and find out how
* big the hash operational data will be.
*/
@@ -69,6 +77,7 @@ int x509_get_sig_params(struct x509_certificate *cert)
sig->m = kmalloc(sig->m_size, GFP_KERNEL);
if (!sig->m)
goto error;
+ sig->m_free = true;
desc = kzalloc(desc_size, GFP_KERNEL);
if (!desc)
@@ -84,6 +93,7 @@ int x509_get_sig_params(struct x509_certificate *cert)
kfree(desc);
error:
crypto_free_shash(tfm);
+out:
pr_devel("<==%s() = %d\n", __func__, ret);
return ret;
}
diff --git a/include/crypto/public_key.h b/include/crypto/public_key.h
index bd38ba4d217d..4c5199b20338 100644
--- a/include/crypto/public_key.h
+++ b/include/crypto/public_key.h
@@ -46,6 +46,8 @@ struct public_key_signature {
u8 *m; /* Message data to pass to verifier */
u32 s_size; /* Number of bytes in signature */
u32 m_size; /* Number of bytes in ->m */
+ bool m_free; /* T if ->m needs freeing */
+ bool algo_takes_data; /* T if public key algo operates on data, not a hash */
const char *pkey_algo;
const char *hash_algo;
const char *encoding;
|
{
"author": "David Howells <dhowells@redhat.com>",
"date": "Mon, 2 Feb 2026 17:02:09 +0000",
"thread_id": "20260202170216.2467036-2-dhowells@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
|
Hi Lukas, Ignat,
[Note this is based on Eric Biggers' libcrypto-next branch].
These patches add ML-DSA module signing:
(1) Add a crypto_sig interface for ML-DSA, verification only.
(2) Generate a SHA256 hash of the X.509 TBSCertificate and check that in
the blacklist. Direct-sign ML-DSA doesn't generate an easily
accessible hash. Note that this changes behaviour as we no longer use
whatever hash is specified in the certificate for this.
(3) Rename the public_key_signature struct's "digest" and "digest_size"
members to "m" and "m_size" to reflect that it's not necessarily a
digest, but it is an input to the public key algorithm.
(4) Modify PKCS#7 support to allow kernel module signatures to carry
authenticatedAttributes as OpenSSL refuses to let them be opted out of
for ML-DSA (CMS_NOATTR). This adds an extra digest calculation to the
process.
Modify PKCS#7 to pass the authenticatedAttributes directly to the
ML-DSA algorithm rather than passing over a digest as is done with RSA
as ML-DSA wants to do its own hashing and will add other stuff into
the hash. We could use hashML-DSA or an external mu instead, but they
aren't standardised for CMS yet.
(5) Add support to the PKCS#7 and X.509 parsers for ML-DSA.
(6) Modify sign-file to handle OpenSSL not permitting CMS_NOATTR with
ML-DSA and add ML-DSA to the choice of algorithm with which to sign
modules. Note that this might need some more 'select' lines in the
Kconfig to select the lib stuff as well.
(7) Add a config option to allow authenticatedAttributes to be used with
ML-DSA for module signing. Ordinarily, authenticatedAttributes are
not permitted for this purpose, however direct signing with ML-DSA
will not be supported by OpenSSL until v4 is released.
The patches can also be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=keys-pqc
David
Changes
=======
ver #16)
- Make the selection of ML-DSA for module signing when configuring
contingent on openssl saying it supports ML-DSA (fix from Arnd
Bergmann).
- Make ML-DSA-related bits of sign-file contingent on openssl >= 3.0.0.
ver #15)
- Undo a removed blank line to simplify the X.509 patch.
- Split the rename of ->digest to ->m into its own patch.
- In pkcs7_digest(), always copy the signedAttrs and modify rather than
passing the replacement tag byte in a separate shash update call to the
rest of the data. That way the ->m buffer is very likely to be
optimally aligned for the crypto.
- Only allow authenticatedAttributes with ML-DSA for module signing and
only if permission is given in the kernel config.
ver #14)
- public_key:
- Rename public_key::digest to public_key::m.
- X.509:
- Independently calculate the SHA256 hash for the blacklist check as
an ML-DSA-signed X.509 cert doesn't generate a digest we can use.
- Point public_key::m at the TBS data for ML-DSA.
- PKCS#7:
- Allocate a big enough digest buffer rather than reallocating in order
to store the authattrs/signedattrs instead.
- Merge the two patches that add direct signing support.
- ML-DSA:
- Use bool instead of u8.
- Remove references to SHAKE in Kconfig and mention OpenSSL requirements
there.
- Limit ML-DSA with an intermediate hash (e.g. signedAttrs) to using
SHA512 only.
- Don't select CRYPTO_LIB_SHA3 for CRYPTO_MLDSA.
- RSASSA-PSS:
- Allow use with SHA256 and SHA384.
- Fix calculation of emBits to be number of bits in the RSA modulus 'n'.
- Use strncmp() not memcmp() to avoid reading beyond end of string.
- Use correct destructor in rsassa_params_parse().
- Drop this algo for the moment.
- Drop the pefile_context::digest_free for now - it's only set to true and
is unrelated to public_key::digest_free.
ver #13)
- Allow a zero-length salt in RSASSA-PSS.
- Don't reject ECDSA/ECRDSA with SHA256 and SHA384 otherwise the FIPS
selftest panics when used.
- Add a FIPS test for RSASSA-PSS (from NIST's SigVerPSS_186-3.rsp).
- Add a FIPS test for ML-DSA (from NIST's FIPS204 JSON set).
ver #12)
- Rebased on Eric's libcrypto-next branch.
- Delete references to Dilithium (ML-DSA derived from this).
- Made sign-file supply CMS_NOATTR for ML-DSA if openssl >= v4.
- Made it possible to do ML-DSA over the data without signedAttrs.
- Made RSASSA-PSS info parser use strsep() and match_token().
- Cleaned the RSASSA-PSS param parsing.
- Added limitation on what hashes can be used with what algos.
- Moved __free()-marked variables to the point of setting.
ver #11)
- Rebased on Eric's libcrypto-next branch.
- Added RSASSA-PSS support patches.
ver #10)
- Replaced the Leancrypto ML-DSA implementation with Eric's.
- Fixed Eric's implementation to have MODULE_* info.
- Added a patch to drive Eric's ML-DSA implementation from crypto_sig.
- Removed SHAKE256 from the list of available module hash algorithms.
- Changed some more ML_DSA to MLDSA in config symbols.
ver #9)
- ML-DSA changes:
- Separate output into four modules (1 common, 3 strength-specific).
- Solves Kconfig issue with needing to select at least one strength.
- Separate the strength-specific crypto-lib APIs.
- This is now generated by preprocessor-templating.
- Remove the multiplexor code.
- Multiplex the crypto-lib APIs by C type.
- Fix the PKCS#7/X.509 code to have the correct algo names.
ver #8)
- Moved the ML-DSA code to lib/crypto/mldsa/.
- Renamed some bits from ml-dsa to mldsa.
- Created a simplified API and placed that in include/crypto/mldsa.h.
- Made the testing code use the simplified API.
- Fixed a warning about implicitly casting between uint16_t and __le16.
ver #7)
- Rebased on Eric's tree as that now contains all the necessary SHA-3
infrastructure and drop the SHA-3 patches from here.
- Added a minimal patch to provide shake256 support for crypto_sig.
- Got rid of the memory allocation wrappers.
- Removed the ML-DSA keypair generation code and the signing code, leaving
only the signature verification code.
- Removed the secret key handling code.
- Removed the secret keys from the kunit tests and the signing testing.
- Removed some unused bits from the ML-DSA code.
- Downgraded the kdoc comments to ordinary comments, but keep the markup
for easier comparison to Leancrypto.
ver #6)
- Added a patch to make the jitterentropy RNG use lib/sha3.
- Added back the crypto/sha3_generic changes.
- Added ML-DSA implementation (still needs more cleanup).
- Added kunit test for ML-DSA.
- Modified PKCS#7 to accommodate ML-DSA.
- Modified PKCS#7 and X.509 to allow ML-DSA to be specified and used.
- Modified sign-file to not use CMS_NOATTR with ML-DSA.
- Allowed SHA3 and SHAKE* algorithms for module signing default.
- Allowed ML-DSA-{44,65,87} to be selected as the module signing default.
ver #5)
- Fix gen-hash-testvecs.py to correctly handle algo names that contain a
dash.
- Fix gen-hash-testvecs.py to not generate HMAC for SHA3-* or SHAKE* as
these don't currently have HMAC variants implemented.
- Fix algo names to be correct.
- Fix kunit module description as it now tests all SHA3 variants.
ver #4)
- Fix a couple of arm64 build problems.
- Doc fixes:
- Fix the description of the algorithm to be closer to the NIST spec's
terminology.
- Don't talk of finalising the context for XOFs.
- Don't say "Return: None".
- Declare the "Context" to be "Any context" and make no mention of the
fact that it might use the FPU.
- Change "initialise" to "initialize".
- Don't warn that the context is relatively large for stack use.
- Use size_t for size parameters/variables.
- Make the module_exit unconditional.
- Dropped the crypto/ dir-affecting patches for the moment.
ver #3)
- Renamed conflicting arm64 functions.
- Made a separate wrapper API for each algorithm in the family.
- Removed sha3_init(), sha3_reinit() and sha3_final().
- Removed sha3_ctx::digest_size.
- Renamed sha3_ctx::partial to sha3_ctx::absorb_offset.
- Refer to the output of SHAKE* as "output" not "digest".
- Moved the Iota transform into the one-round function.
- Made sha3_update() warn if called after sha3_squeeze().
- Simplified the module-load test to not do update after squeeze.
- Added Return: and Context: kdoc statements and expanded the kdoc
headers.
- Added an API description document.
- Overhauled the kunit tests.
- Only have one kunit test.
- Only call the general hash tester on one algo.
- Add separate simple cursory checks for the other algos.
- Add resqueezing tests.
- Add some NIST example tests.
- Changed crypto/sha3_generic to use this
- Added SHAKE128/256 to crypto/sha3_generic and crypto/testmgr
- Folded struct sha3_state into struct sha3_ctx.
ver #2)
- Simplify the endianness handling.
- Rename sha3_final() to sha3_squeeze() and don't clear the context at the
end as it's permitted to continue calling sha3_final() to extract
continuations of the digest (needed by ML-DSA).
- Don't reapply the end marker to the hash state in continuation
sha3_squeeze() unless sha3_update() gets called again (needed by
ML-DSA).
- Give sha3_squeeze() the amount of digest to produce as a parameter
rather than using ctx->digest_size and don't return the amount digested.
- Reimplement sha3_final() as a wrapper around sha3_squeeze() that
extracts ctx->digest_size amount of digest and then zeroes out the
context. The latter is necessary to avoid upsetting
hash-test-template.h.
- Provide a sha3_reinit() function to clear the state, but to leave the
parameters that indicate the hash properties unaffected, allowing for
reuse.
- Provide a sha3_set_digestsize() function to change the size of the
digest to be extracted by sha3_final(). sha3_squeeze() takes a
parameter for this instead.
- Don't pass the digest size as a parameter to shake128/256_init() but
rather default to 128/256 bits as per the function name.
- Provide a sha3_clear() function to zero out the context.
David Howells (7):
crypto: Add ML-DSA crypto_sig support
x509: Separately calculate sha256 for blacklist
pkcs7, x509: Rename ->digest to ->m
pkcs7: Allow the signing algo to do whatever digestion it wants itself
pkcs7, x509: Add ML-DSA support
modsign: Enable ML-DSA module signing
pkcs7: Allow authenticatedAttributes for ML-DSA
Documentation/admin-guide/module-signing.rst | 16 +-
certs/Kconfig | 40 ++++
certs/Makefile | 3 +
crypto/Kconfig | 9 +
crypto/Makefile | 2 +
crypto/asymmetric_keys/Kconfig | 11 +
crypto/asymmetric_keys/asymmetric_type.c | 4 +-
crypto/asymmetric_keys/pkcs7_parser.c | 36 +++-
crypto/asymmetric_keys/pkcs7_parser.h | 3 +
crypto/asymmetric_keys/pkcs7_verify.c | 78 ++++---
crypto/asymmetric_keys/public_key.c | 13 +-
crypto/asymmetric_keys/signature.c | 3 +-
crypto/asymmetric_keys/x509_cert_parser.c | 27 ++-
crypto/asymmetric_keys/x509_parser.h | 2 +
crypto/asymmetric_keys/x509_public_key.c | 42 ++--
crypto/mldsa.c | 201 +++++++++++++++++++
include/crypto/public_key.h | 6 +-
include/linux/oid_registry.h | 5 +
scripts/sign-file.c | 39 +++-
security/integrity/digsig_asymmetric.c | 4 +-
20 files changed, 473 insertions(+), 71 deletions(-)
create mode 100644 crypto/mldsa.c
|
Add support for ML-DSA keys and signatures to the CMS/PKCS#7 and X.509
implementations. ML-DSA-44, -65 and -87 are all supported. For X.509
certificates, the TBSCertificate is required to be signed directly; for
CMS, direct signing of the data is preferred, though use of SHA512 (and
only that) as an intermediate hash of the content is permitted with
signedAttrs.
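The core of the parser changes is the OID-to-algorithm mapping sketched
below (condensed from the x509_cert_parser.c and pkcs7_parser.c hunks, not
the literal code; in the X.509 case hash_algo is also forced to "none", and
public_key.c additionally restricts ML-DSA-with-signedAttrs to sha512):
        switch (ctx->last_oid) {
        case OID_id_ml_dsa_44:
                sig->pkey_algo = "mldsa44";
                break;
        case OID_id_ml_dsa_65:
                sig->pkey_algo = "mldsa65";
                break;
        case OID_id_ml_dsa_87:
                sig->pkey_algo = "mldsa87";
                break;
        }
        sig->encoding = "raw";
        sig->algo_takes_data = true;    /* ML-DSA hashes the data itself */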
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Lukas Wunner <lukas@wunner.de>
cc: Ignat Korchagin <ignat@cloudflare.com>
cc: Stephan Mueller <smueller@chronox.de>
cc: Eric Biggers <ebiggers@kernel.org>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: keyrings@vger.kernel.org
cc: linux-crypto@vger.kernel.org
---
crypto/asymmetric_keys/pkcs7_parser.c | 24 +++++++++++++++++++-
crypto/asymmetric_keys/public_key.c | 10 +++++++++
crypto/asymmetric_keys/x509_cert_parser.c | 27 ++++++++++++++++++++++-
include/linux/oid_registry.h | 5 +++++
4 files changed, 64 insertions(+), 2 deletions(-)
diff --git a/crypto/asymmetric_keys/pkcs7_parser.c b/crypto/asymmetric_keys/pkcs7_parser.c
index 3cdbab3b9f50..594a8f1d9dfb 100644
--- a/crypto/asymmetric_keys/pkcs7_parser.c
+++ b/crypto/asymmetric_keys/pkcs7_parser.c
@@ -95,11 +95,18 @@ static int pkcs7_check_authattrs(struct pkcs7_message *msg)
if (sinfo->authattrs) {
want = true;
msg->have_authattrs = true;
+ } else if (sinfo->sig->algo_takes_data) {
+ sinfo->sig->hash_algo = "none";
}
- for (sinfo = sinfo->next; sinfo; sinfo = sinfo->next)
+ for (sinfo = sinfo->next; sinfo; sinfo = sinfo->next) {
if (!!sinfo->authattrs != want)
goto inconsistent;
+
+ if (!sinfo->authattrs &&
+ sinfo->sig->algo_takes_data)
+ sinfo->sig->hash_algo = "none";
+ }
return 0;
inconsistent:
@@ -297,6 +304,21 @@ int pkcs7_sig_note_pkey_algo(void *context, size_t hdrlen,
ctx->sinfo->sig->pkey_algo = "ecrdsa";
ctx->sinfo->sig->encoding = "raw";
break;
+ case OID_id_ml_dsa_44:
+ ctx->sinfo->sig->pkey_algo = "mldsa44";
+ ctx->sinfo->sig->encoding = "raw";
+ ctx->sinfo->sig->algo_takes_data = true;
+ break;
+ case OID_id_ml_dsa_65:
+ ctx->sinfo->sig->pkey_algo = "mldsa65";
+ ctx->sinfo->sig->encoding = "raw";
+ ctx->sinfo->sig->algo_takes_data = true;
+ break;
+ case OID_id_ml_dsa_87:
+ ctx->sinfo->sig->pkey_algo = "mldsa87";
+ ctx->sinfo->sig->encoding = "raw";
+ ctx->sinfo->sig->algo_takes_data = true;
+ break;
default:
printk("Unsupported pkey algo: %u\n", ctx->last_oid);
return -ENOPKG;
diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
index a46356e0c08b..09a0b83d5d77 100644
--- a/crypto/asymmetric_keys/public_key.c
+++ b/crypto/asymmetric_keys/public_key.c
@@ -142,6 +142,16 @@ software_key_determine_akcipher(const struct public_key *pkey,
if (strcmp(hash_algo, "streebog256") != 0 &&
strcmp(hash_algo, "streebog512") != 0)
return -EINVAL;
+ } else if (strcmp(pkey->pkey_algo, "mldsa44") == 0 ||
+ strcmp(pkey->pkey_algo, "mldsa65") == 0 ||
+ strcmp(pkey->pkey_algo, "mldsa87") == 0) {
+ if (strcmp(encoding, "raw") != 0)
+ return -EINVAL;
+ if (!hash_algo)
+ return -EINVAL;
+ if (strcmp(hash_algo, "none") != 0 &&
+ strcmp(hash_algo, "sha512") != 0)
+ return -EINVAL;
} else {
/* Unknown public key algorithm */
return -ENOPKG;
diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c
index b37cae914987..2fe094f5caf3 100644
--- a/crypto/asymmetric_keys/x509_cert_parser.c
+++ b/crypto/asymmetric_keys/x509_cert_parser.c
@@ -257,6 +257,15 @@ int x509_note_sig_algo(void *context, size_t hdrlen, unsigned char tag,
case OID_gost2012Signature512:
ctx->cert->sig->hash_algo = "streebog512";
goto ecrdsa;
+ case OID_id_ml_dsa_44:
+ ctx->cert->sig->pkey_algo = "mldsa44";
+ goto ml_dsa;
+ case OID_id_ml_dsa_65:
+ ctx->cert->sig->pkey_algo = "mldsa65";
+ goto ml_dsa;
+ case OID_id_ml_dsa_87:
+ ctx->cert->sig->pkey_algo = "mldsa87";
+ goto ml_dsa;
}
rsa_pkcs1:
@@ -274,6 +283,12 @@ int x509_note_sig_algo(void *context, size_t hdrlen, unsigned char tag,
ctx->cert->sig->encoding = "x962";
ctx->sig_algo = ctx->last_oid;
return 0;
+ml_dsa:
+ ctx->cert->sig->algo_takes_data = true;
+ ctx->cert->sig->hash_algo = "none";
+ ctx->cert->sig->encoding = "raw";
+ ctx->sig_algo = ctx->last_oid;
+ return 0;
}
/*
@@ -300,7 +315,8 @@ int x509_note_signature(void *context, size_t hdrlen,
if (strcmp(ctx->cert->sig->pkey_algo, "rsa") == 0 ||
strcmp(ctx->cert->sig->pkey_algo, "ecrdsa") == 0 ||
- strcmp(ctx->cert->sig->pkey_algo, "ecdsa") == 0) {
+ strcmp(ctx->cert->sig->pkey_algo, "ecdsa") == 0 ||
+ strncmp(ctx->cert->sig->pkey_algo, "mldsa", 5) == 0) {
/* Discard the BIT STRING metadata */
if (vlen < 1 || *(const u8 *)value != 0)
return -EBADMSG;
@@ -524,6 +540,15 @@ int x509_extract_key_data(void *context, size_t hdrlen,
return -ENOPKG;
}
break;
+ case OID_id_ml_dsa_44:
+ ctx->cert->pub->pkey_algo = "mldsa44";
+ break;
+ case OID_id_ml_dsa_65:
+ ctx->cert->pub->pkey_algo = "mldsa65";
+ break;
+ case OID_id_ml_dsa_87:
+ ctx->cert->pub->pkey_algo = "mldsa87";
+ break;
default:
return -ENOPKG;
}
diff --git a/include/linux/oid_registry.h b/include/linux/oid_registry.h
index 6de479ebbe5d..ebce402854de 100644
--- a/include/linux/oid_registry.h
+++ b/include/linux/oid_registry.h
@@ -145,6 +145,11 @@ enum OID {
OID_id_rsassa_pkcs1_v1_5_with_sha3_384, /* 2.16.840.1.101.3.4.3.15 */
OID_id_rsassa_pkcs1_v1_5_with_sha3_512, /* 2.16.840.1.101.3.4.3.16 */
+ /* NIST FIPS-204 ML-DSA */
+ OID_id_ml_dsa_44, /* 2.16.840.1.101.3.4.3.17 */
+ OID_id_ml_dsa_65, /* 2.16.840.1.101.3.4.3.18 */
+ OID_id_ml_dsa_87, /* 2.16.840.1.101.3.4.3.19 */
+
OID__NR
};
|
{
"author": "David Howells <dhowells@redhat.com>",
"date": "Mon, 2 Feb 2026 17:02:10 +0000",
"thread_id": "20260202170216.2467036-2-dhowells@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
|
Hi Lukas, Ignat,
[Note this is based on Eric Biggers' libcrypto-next branch].
These patches add ML-DSA module signing:
(1) Add a crypto_sig interface for ML-DSA, verification only.
(2) Generate a SHA256 hash of the X.509 TBSCertificate and check that in
the blacklist. Direct-sign ML-DSA doesn't generate an easily
accessible hash. Note that this changes behaviour as we no longer use
whatever hash is specified in the certificate for this.
(3) Rename the public_key_signature struct's "digest" and "digest_size"
members to "m" and "m_size" to reflect that it's not necessarily a
digest, but it is an input to the public key algorithm.
(4) Modify PKCS#7 support to allow kernel module signatures to carry
authenticatedAttributes as OpenSSL refuses to let them be opted out of
for ML-DSA (CMS_NOATTR). This adds an extra digest calculation to the
process.
Modify PKCS#7 to pass the authenticatedAttributes directly to the
ML-DSA algorithm rather than passing over a digest as is done with RSA
as ML-DSA wants to do its own hashing and will add other stuff into
the hash. We could use hashML-DSA or an external mu instead, but they
aren't standardised for CMS yet.
(5) Add support to the PKCS#7 and X.509 parsers for ML-DSA.
(6) Modify sign-file to handle OpenSSL not permitting CMS_NOATTR with
ML-DSA and add ML-DSA to the choice of algorithm with which to sign
modules. Note that this might need some more 'select' lines in the
Kconfig to select the lib stuff as well.
(7) Add a config option to allow authenticatedAttributes to be used with
ML-DSA for module signing. Ordinarily, authenticatedAttributes are
not permitted for this purpose, however direct signing with ML-DSA
will not be supported by OpenSSL until v4 is released.
The patches can also be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=keys-pqc
David
Changes
=======
ver #16)
- Make the selection of ML-DSA for module signing when configuring
contingent on openssl saying it supports ML-DSA (fix from Arnd
Bergmann).
- Make ML-DSA-related bits of sign-file contingent on openssl >= 3.0.0.
ver #15)
- Undo a removed blank line to simplify the X.509 patch.
- Split the rename of ->digest to ->m into its own patch.
- In pkcs7_digest(), always copy the signedAttrs and modify rather than
passing the replacement tag byte in a separate shash update call to the
rest of the data. That way the ->m buffer is very likely to be
optimally aligned for the crypto.
- Only allow authenticatedAttributes with ML-DSA for module signing and
only if permission is given in the kernel config.
ver #14)
- public_key:
- Rename public_key::digest to public_key::m.
- X.509:
- Independently calculate the SHA256 hash for the blacklist check as
an ML-DSA-signed X.509 cert doesn't generate a digest we can use.
- Point public_key::m at the TBS data for ML-DSA.
- PKCS#7:
- Allocate a big enough digest buffer rather than reallocating in order
to store the authattrs/signedattrs instead.
- Merge the two patches that add direct signing support.
- ML-DSA:
- Use bool instead of u8.
- Remove references to SHAKE in Kconfig and mention OpenSSL requirements
there.
- Limit ML-DSA with an intermediate hash (e.g. signedAttrs) to using
SHA512 only.
- Don't select CRYPTO_LIB_SHA3 for CRYPTO_MLDSA.
- RSASSA-PSS:
- Allow use with SHA256 and SHA384.
- Fix calculation of emBits to be number of bits in the RSA modulus 'n'.
- Use strncmp() not memcmp() to avoid reading beyond end of string.
- Use correct destructor in rsassa_params_parse().
- Drop this algo for the moment.
- Drop the pefile_context::digest_free for now - it's only set to true and
is unrelated to public_key::digest_free.
ver #13)
- Allow a zero-length salt in RSASSA-PSS.
- Don't reject ECDSA/ECRDSA with SHA256 and SHA384 otherwise the FIPS
selftest panics when used.
- Add a FIPS test for RSASSA-PSS (from NIST's SigVerPSS_186-3.rsp).
- Add a FIPS test for ML-DSA (from NIST's FIPS204 JSON set).
ver #12)
- Rebased on Eric's libcrypto-next branch.
- Delete references to Dilithium (ML-DSA derived from this).
- Made sign-file supply CMS_NOATTR for ML-DSA if openssl >= v4.
- Made it possible to do ML-DSA over the data without signedAttrs.
- Made RSASSA-PSS info parser use strsep() and match_token().
- Cleaned the RSASSA-PSS param parsing.
- Added limitation on what hashes can be used with what algos.
- Moved __free()-marked variables to the point of setting.
ver #11)
- Rebased on Eric's libcrypto-next branch.
- Added RSASSA-PSS support patches.
ver #10)
- Replaced the Leancrypto ML-DSA implementation with Eric's.
- Fixed Eric's implementation to have MODULE_* info.
- Added a patch to drive Eric's ML-DSA implementation from crypto_sig.
- Removed SHAKE256 from the list of available module hash algorithms.
- Changed some more ML_DSA to MLDSA in config symbols.
ver #9)
- ML-DSA changes:
- Separate output into four modules (1 common, 3 strength-specific).
- Solves Kconfig issue with needing to select at least one strength.
- Separate the strength-specific crypto-lib APIs.
- This is now generated by preprocessor-templating.
- Remove the multiplexor code.
- Multiplex the crypto-lib APIs by C type.
- Fix the PKCS#7/X.509 code to have the correct algo names.
ver #8)
- Moved the ML-DSA code to lib/crypto/mldsa/.
- Renamed some bits from ml-dsa to mldsa.
- Created a simplified API and placed that in include/crypto/mldsa.h.
- Made the testing code use the simplified API.
- Fixed a warning about implicitly casting between uint16_t and __le16.
ver #7)
- Rebased on Eric's tree as that now contains all the necessary SHA-3
infrastructure and drop the SHA-3 patches from here.
- Added a minimal patch to provide shake256 support for crypto_sig.
- Got rid of the memory allocation wrappers.
- Removed the ML-DSA keypair generation code and the signing code, leaving
only the signature verification code.
- Removed the secret key handling code.
- Removed the secret keys from the kunit tests and the signing testing.
- Removed some unused bits from the ML-DSA code.
- Downgraded the kdoc comments to ordinary comments, but keep the markup
for easier comparison to Leancrypto.
ver #6)
- Added a patch to make the jitterentropy RNG use lib/sha3.
- Added back the crypto/sha3_generic changes.
- Added ML-DSA implementation (still needs more cleanup).
- Added kunit test for ML-DSA.
- Modified PKCS#7 to accommodate ML-DSA.
- Modified PKCS#7 and X.509 to allow ML-DSA to be specified and used.
- Modified sign-file to not use CMS_NOATTR with ML-DSA.
- Allowed SHA3 and SHAKE* algorithms for module signing default.
- Allowed ML-DSA-{44,65,87} to be selected as the module signing default.
ver #5)
- Fix gen-hash-testvecs.py to correctly handle algo names that contain a
dash.
- Fix gen-hash-testvecs.py to not generate HMAC for SHA3-* or SHAKE* as
these don't currently have HMAC variants implemented.
- Fix algo names to be correct.
- Fix kunit module description as it now tests all SHA3 variants.
ver #4)
- Fix a couple of arm64 build problems.
- Doc fixes:
- Fix the description of the algorithm to be closer to the NIST spec's
terminology.
- Don't talk of finalising the context for XOFs.
- Don't say "Return: None".
- Declare the "Context" to be "Any context" and make no mention of the
fact that it might use the FPU.
- Change "initialise" to "initialize".
- Don't warn that the context is relatively large for stack use.
- Use size_t for size parameters/variables.
- Make the module_exit unconditional.
- Dropped the crypto/ dir-affecting patches for the moment.
ver #3)
- Renamed conflicting arm64 functions.
- Made a separate wrapper API for each algorithm in the family.
- Removed sha3_init(), sha3_reinit() and sha3_final().
- Removed sha3_ctx::digest_size.
- Renamed sha3_ctx::partial to sha3_ctx::absorb_offset.
- Refer to the output of SHAKE* as "output" not "digest".
- Moved the Iota transform into the one-round function.
- Made sha3_update() warn if called after sha3_squeeze().
- Simplified the module-load test to not do update after squeeze.
- Added Return: and Context: kdoc statements and expanded the kdoc
headers.
- Added an API description document.
- Overhauled the kunit tests.
- Only have one kunit test.
- Only call the general hash tester on one algo.
- Add separate simple cursory checks for the other algos.
- Add resqueezing tests.
- Add some NIST example tests.
- Changed crypto/sha3_generic to use this
- Added SHAKE128/256 to crypto/sha3_generic and crypto/testmgr
- Folded struct sha3_state into struct sha3_ctx.
ver #2)
- Simplify the endianness handling.
- Rename sha3_final() to sha3_squeeze() and don't clear the context at the
end as it's permitted to continue calling sha3_final() to extract
continuations of the digest (needed by ML-DSA).
- Don't reapply the end marker to the hash state in continuation
sha3_squeeze() unless sha3_update() gets called again (needed by
ML-DSA).
- Give sha3_squeeze() the amount of digest to produce as a parameter
rather than using ctx->digest_size and don't return the amount digested.
- Reimplement sha3_final() as a wrapper around sha3_squeeze() that
extracts ctx->digest_size amount of digest and then zeroes out the
context. The latter is necessary to avoid upsetting
hash-test-template.h.
- Provide a sha3_reinit() function to clear the state, but to leave the
parameters that indicate the hash properties unaffected, allowing for
reuse.
- Provide a sha3_set_digestsize() function to change the size of the
digest to be extracted by sha3_final(). sha3_squeeze() takes a
parameter for this instead.
- Don't pass the digest size as a parameter to shake128/256_init() but
rather default to 128/256 bits as per the function name.
- Provide a sha3_clear() function to zero out the context.
David Howells (7):
crypto: Add ML-DSA crypto_sig support
x509: Separately calculate sha256 for blacklist
pkcs7, x509: Rename ->digest to ->m
pkcs7: Allow the signing algo to do whatever digestion it wants itself
pkcs7, x509: Add ML-DSA support
modsign: Enable ML-DSA module signing
pkcs7: Allow authenticatedAttributes for ML-DSA
Documentation/admin-guide/module-signing.rst | 16 +-
certs/Kconfig | 40 ++++
certs/Makefile | 3 +
crypto/Kconfig | 9 +
crypto/Makefile | 2 +
crypto/asymmetric_keys/Kconfig | 11 +
crypto/asymmetric_keys/asymmetric_type.c | 4 +-
crypto/asymmetric_keys/pkcs7_parser.c | 36 +++-
crypto/asymmetric_keys/pkcs7_parser.h | 3 +
crypto/asymmetric_keys/pkcs7_verify.c | 78 ++++---
crypto/asymmetric_keys/public_key.c | 13 +-
crypto/asymmetric_keys/signature.c | 3 +-
crypto/asymmetric_keys/x509_cert_parser.c | 27 ++-
crypto/asymmetric_keys/x509_parser.h | 2 +
crypto/asymmetric_keys/x509_public_key.c | 42 ++--
crypto/mldsa.c | 201 +++++++++++++++++++
include/crypto/public_key.h | 6 +-
include/linux/oid_registry.h | 5 +
scripts/sign-file.c | 39 +++-
security/integrity/digsig_asymmetric.c | 4 +-
20 files changed, 473 insertions(+), 71 deletions(-)
create mode 100644 crypto/mldsa.c
|
Allow ML-DSA module signing to be enabled.
Note that OpenSSL's CMS_*() function suite does not, as of OpenSSL-3.6,
support the use of CMS_NOATTR with ML-DSA, so the prohibition against using
signedAttrs with module signing has to be removed. The selected digest
algorithm then applies only to calculating the digest stored in the
messageDigest attribute. The OpenSSL development branch has patches
applied that fix this[1], but it appears that that will only be available
in OpenSSL-4.
[1] https://github.com/openssl/openssl/pull/28923
sign-file won't set CMS_NOATTR if openssl is earlier than v4, resulting in
the use of signed attributes.
The ML-DSA algorithm takes the raw data to be signed without regard to what
digest algorithm is specified in the CMS message. The CMS-specified digest
algorithm is ignored unless signedAttrs are used; in such a case, only
SHA512 is permitted.
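A condensed sketch of the corresponding sign-file behaviour (mirroring the
scripts/sign-file.c hunk below, OpenSSL 3.x only; use_signed_attrs normally
carries CMS_NOATTR, so clearing it makes OpenSSL emit signedAttrs):
        if (EVP_PKEY_is_a(private_key, "ML-DSA-44") ||
            EVP_PKEY_is_a(private_key, "ML-DSA-65") ||
            EVP_PKEY_is_a(private_key, "ML-DSA-87")) {
                /* ML-DSA + CMS_NOATTR isn't supported before OpenSSL 4,
                 * so fall back to signed attributes.
                 */
                use_signed_attrs = 0;
        }
        flags |= use_signed_attrs;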
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jarkko Sakkinen <jarkko@kernel.org>
cc: Eric Biggers <ebiggers@kernel.org>
cc: Lukas Wunner <lukas@wunner.de>
cc: Ignat Korchagin <ignat@cloudflare.com>
cc: Stephan Mueller <smueller@chronox.de>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: keyrings@vger.kernel.org
cc: linux-crypto@vger.kernel.org
---
Documentation/admin-guide/module-signing.rst | 16 ++++----
certs/Kconfig | 40 ++++++++++++++++++++
certs/Makefile | 3 ++
scripts/sign-file.c | 39 ++++++++++++++-----
4 files changed, 82 insertions(+), 16 deletions(-)
diff --git a/Documentation/admin-guide/module-signing.rst b/Documentation/admin-guide/module-signing.rst
index a8667a777490..7f2f127dc76f 100644
--- a/Documentation/admin-guide/module-signing.rst
+++ b/Documentation/admin-guide/module-signing.rst
@@ -28,10 +28,12 @@ trusted userspace bits.
This facility uses X.509 ITU-T standard certificates to encode the public keys
involved. The signatures are not themselves encoded in any industrial standard
-type. The built-in facility currently only supports the RSA & NIST P-384 ECDSA
-public key signing standard (though it is pluggable and permits others to be
-used). The possible hash algorithms that can be used are SHA-2 and SHA-3 of
-sizes 256, 384, and 512 (the algorithm is selected by data in the signature).
+type. The built-in facility currently only supports the RSA, NIST P-384 ECDSA
+and NIST FIPS-204 ML-DSA public key signing standards (though it is pluggable
+and permits others to be used). For RSA and ECDSA, the possible hash
+algorithms that can be used are SHA-2 and SHA-3 of sizes 256, 384, and 512 (the
+algorithm is selected by data in the signature); ML-DSA does its own hashing,
+but is allowed to be used with a SHA512 hash for signed attributes.
==========================
@@ -146,9 +148,9 @@ into vmlinux) using parameters in the::
file (which is also generated if it does not already exist).
-One can select between RSA (``MODULE_SIG_KEY_TYPE_RSA``) and ECDSA
-(``MODULE_SIG_KEY_TYPE_ECDSA``) to generate either RSA 4k or NIST
-P-384 keypair.
+One can select between RSA (``MODULE_SIG_KEY_TYPE_RSA``), ECDSA
+(``MODULE_SIG_KEY_TYPE_ECDSA``) and ML-DSA (``MODULE_SIG_KEY_TYPE_MLDSA_*``) to
+generate an RSA 4k, a NIST P-384 keypair or an ML-DSA 44, 65 or 87 keypair.
It is strongly recommended that you provide your own x509.genkey file.
diff --git a/certs/Kconfig b/certs/Kconfig
index 78307dc25559..8e39a80c7abe 100644
--- a/certs/Kconfig
+++ b/certs/Kconfig
@@ -39,6 +39,39 @@ config MODULE_SIG_KEY_TYPE_ECDSA
Note: Remove all ECDSA signing keys, e.g. certs/signing_key.pem,
when falling back to building Linux 5.14 and older kernels.
+config MODULE_SIG_KEY_TYPE_MLDSA_44
+ bool "ML-DSA-44"
+ select CRYPTO_MLDSA
+ depends on OPENSSL_SUPPORTS_ML_DSA
+ help
+ Use an ML-DSA-44 key (NIST FIPS 204) for module signing. ML-DSA
+ support requires OpenSSL-3.5 minimum; preferably OpenSSL-4+. With
+ the latter, the entire module body will be signed; with the former,
+ signedAttrs will be used as it lacks support for CMS_NOATTR with
+ ML-DSA.
+
+config MODULE_SIG_KEY_TYPE_MLDSA_65
+ bool "ML-DSA-65"
+ select CRYPTO_MLDSA
+ depends on OPENSSL_SUPPORTS_ML_DSA
+ help
+ Use an ML-DSA-65 key (NIST FIPS 204) for module signing. ML-DSA
+ support requires OpenSSL-3.5 minimum; preferably OpenSSL-4+. With
+ the latter, the entire module body will be signed; with the former,
+ signedAttrs will be used as it lacks support for CMS_NOATTR with
+ ML-DSA.
+
+config MODULE_SIG_KEY_TYPE_MLDSA_87
+ bool "ML-DSA-87"
+ select CRYPTO_MLDSA
+ depends on OPENSSL_SUPPORTS_ML_DSA
+ help
+ Use an ML-DSA-87 key (NIST FIPS 204) for module signing. ML-DSA
+ support requires OpenSSL-3.5 minimum; preferably OpenSSL-4+. With
+ the latter, the entire module body will be signed; with the former,
+ signedAttrs will be used as it lacks support for CMS_NOATTR with
+ ML-DSA.
+
endchoice
config SYSTEM_TRUSTED_KEYRING
@@ -154,4 +187,11 @@ config SYSTEM_BLACKLIST_AUTH_UPDATE
keyring. The PKCS#7 signature of the description is set in the key
payload. Blacklist keys cannot be removed.
+config OPENSSL_SUPPORTS_ML_DSA
+ def_bool $(success, openssl list -key-managers | grep -q ML-DSA-87)
+ help
+ Support for ML-DSA-44/65/87 was added in openssl-3.5, so as long
+ as older versions are supported, the key types may only be
+ set after testing the installed binary for support.
+
endmenu
diff --git a/certs/Makefile b/certs/Makefile
index f6fa4d8d75e0..3ee1960f9f4a 100644
--- a/certs/Makefile
+++ b/certs/Makefile
@@ -43,6 +43,9 @@ targets += x509_certificate_list
ifeq ($(CONFIG_MODULE_SIG_KEY),certs/signing_key.pem)
keytype-$(CONFIG_MODULE_SIG_KEY_TYPE_ECDSA) := -newkey ec -pkeyopt ec_paramgen_curve:secp384r1
+keytype-$(CONFIG_MODULE_SIG_KEY_TYPE_MLDSA_44) := -newkey ml-dsa-44
+keytype-$(CONFIG_MODULE_SIG_KEY_TYPE_MLDSA_65) := -newkey ml-dsa-65
+keytype-$(CONFIG_MODULE_SIG_KEY_TYPE_MLDSA_87) := -newkey ml-dsa-87
quiet_cmd_gen_key = GENKEY $@
cmd_gen_key = openssl req -new -nodes -utf8 -$(CONFIG_MODULE_SIG_HASH) -days 36500 \
diff --git a/scripts/sign-file.c b/scripts/sign-file.c
index 7070245edfc1..78276b15ab23 100644
--- a/scripts/sign-file.c
+++ b/scripts/sign-file.c
@@ -27,7 +27,7 @@
#include <openssl/evp.h>
#include <openssl/pem.h>
#include <openssl/err.h>
-#if OPENSSL_VERSION_MAJOR >= 3
+#if OPENSSL_VERSION_NUMBER >= 0x30000000L
# define USE_PKCS11_PROVIDER
# include <openssl/provider.h>
# include <openssl/store.h>
@@ -315,18 +315,39 @@ int main(int argc, char **argv)
ERR(!digest_algo, "EVP_get_digestbyname");
#ifndef USE_PKCS7
+
+ unsigned int flags =
+ CMS_NOCERTS |
+ CMS_PARTIAL |
+ CMS_BINARY |
+ CMS_DETACHED |
+ CMS_STREAM |
+ CMS_NOSMIMECAP |
+#ifdef CMS_NO_SIGNING_TIME
+ CMS_NO_SIGNING_TIME |
+#endif
+ use_keyid;
+
+#if OPENSSL_VERSION_NUMBER >= 0x30000000L && OPENSSL_VERSION_NUMBER < 0x40000000L
+ if (EVP_PKEY_is_a(private_key, "ML-DSA-44") ||
+ EVP_PKEY_is_a(private_key, "ML-DSA-65") ||
+ EVP_PKEY_is_a(private_key, "ML-DSA-87")) {
+ /* ML-DSA + CMS_NOATTR is not supported in openssl-3.5
+ * and before.
+ */
+ use_signed_attrs = 0;
+ }
+#endif
+
+ flags |= use_signed_attrs;
+
/* Load the signature message from the digest buffer. */
- cms = CMS_sign(NULL, NULL, NULL, NULL,
- CMS_NOCERTS | CMS_PARTIAL | CMS_BINARY |
- CMS_DETACHED | CMS_STREAM);
+ cms = CMS_sign(NULL, NULL, NULL, NULL, flags);
ERR(!cms, "CMS_sign");
- ERR(!CMS_add1_signer(cms, x509, private_key, digest_algo,
- CMS_NOCERTS | CMS_BINARY |
- CMS_NOSMIMECAP | use_keyid |
- use_signed_attrs),
+ ERR(!CMS_add1_signer(cms, x509, private_key, digest_algo, flags),
"CMS_add1_signer");
- ERR(CMS_final(cms, bm, NULL, CMS_NOCERTS | CMS_BINARY) != 1,
+ ERR(CMS_final(cms, bm, NULL, flags) != 1,
"CMS_final");
#else
|
{
"author": "David Howells <dhowells@redhat.com>",
"date": "Mon, 2 Feb 2026 17:02:11 +0000",
"thread_id": "20260202170216.2467036-2-dhowells@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH v16 0/7] x509, pkcs7, crypto: Add ML-DSA signing
|
Hi Lukas, Ignat,
[Note this is based on Eric Bigger's libcrypto-next branch].
These patches add ML-DSA module signing signing:
(1) Add a crypto_sig interface for ML-DSA, verification only.
(2) Generate a SHA256 hash of the X.509 TBSCertificate and check that in
the blacklist. Direct-sign ML-DSA doesn't generate an easily
accessible hash. Note that this changes behaviour as we no longer use
whatever hash is specified in the certificate for this.
(3) Rename the public_key_signature struct's "digest" and "digest_size"
members to "m" and "m_size" to reflect that it's not necessarily a
digest, but it is an input to the public key algorithm.
(4) Modify PKCS#7 support to allow kernel module signatures to carry
authenticatedAttributes as OpenSSL refuses to let them be opted out of
for ML-DSA (CMS_NOATTR). This adds an extra digest calculation to the
process.
Modify PKCS#7 to pass the authenticatedAttributes directly to the
ML-DSA algorithm rather than passing over a digest as is done with RSA
as ML-DSA wants to do its own hashing and will add other stuff into
the hash. We could use hashML-DSA or an external mu instead, but they
aren't standardised for CMS yet.
(5) Add support to the PKCS#7 and X.509 parsers for ML-DSA.
(6) Modify sign-file to handle OpenSSL not permitting CMS_NOATTR with
ML-DSA and add ML-DSA to the choice of algorithm with which to sign
modules. Note that this might need some more 'select' lines in the
Kconfig to select the lib stuff as well.
(7) Add a config option to allow authenticatedAttributes to be used with
ML-DSA for module signing. Ordinarily, authenticatedAttributes are
not permitted for this purpose, however direct signing with ML-DSA
will not be supported by OpenSSL until v4 is released.
The patches can also be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=keys-pqc
David
Changes
=======
ver #16)
- Make the selection of ML-DSA for module signing when configuring
contingent on openssl saying it supports ML-DSA (fix from Arnd
Bergmann).
- Make ML-DSA-related bits of sign-file contingent on openssl >= 3.0.0.
ver #15)
- Undo a removed blank line to simplify the X.509 patch.
- Split the rename of ->digest to ->m into its own patch.
- In pkcs7_digest(), always copy the signedAttrs and modify rather than
passing the replacement tag byte in a separate shash update call to the
rest of the data. That way the ->m buffer is very likely to be
optimally aligned for the crypto.
- Only allow authenticatedAttributes with ML-DSA for module signing and
only if permission is given in the kernel config.
ver #14)
- public_key:
- Rename public_key::digest to public_key::m.
- X.509:
- Independently calculate the SHA256 hash for the blacklist check as
an ML-DSA-signed X.509 cert doesn't generate a digest we can use.
- Point public_key::m at the TBS data for ML-DSA.
- PKCS#7:
- Allocate a big enough digest buffer rather than reallocating in order
to store the authattrs/signedattrs instead.
- Merge the two patches that add direct signing support.
- ML-DSA:
- Use bool instead of u8.
- Remove references to SHAKE in Kconfig and mention OpenSSL requirements
there.
- Limit ML-DSA with an intermediate hash (e.g. signedAttrs) to using
SHA512 only.
- Don't select CRYPTO_LIB_SHA3 for CRYPTO_MLDSA.
- RSASSA-PSS:
- Allow use with SHA256 and SHA384.
- Fix calculation of emBits to be number of bits in the RSA modulus 'n'.
- Use strncmp() not memcmp() to avoid reading beyond end of string.
- Use correct destructor in rsassa_params_parse().
- Drop this algo for the moment.
- Drop the pefile_context::digest_free for now - it's only set to true and
is unrelated to public_key::digest_free.
ver #13)
- Allow a zero-length salt in RSASSA-PSS.
- Don't reject ECDSA/ECRDSA with SHA256 and SHA384 otherwise the FIPS
selftest panics when used.
- Add a FIPS test for RSASSA-PSS (from NIST's SigVerPSS_186-3.rsp).
- Add a FIPS test for ML-DSA (from NIST's FIPS204 JSON set).
ver #12)
- Rebased on Eric's libcrypto-next branch.
- Delete references to Dilithium (ML-DSA derived from this).
- Made sign-file supply CMS_NOATTR for ML-DSA if openssl >= v4.
- Made it possible to do ML-DSA over the data without signedAttrs.
- Made RSASSA-PSS info parser use strsep() and match_token().
- Cleaned the RSASSA-PSS param parsing.
- Added limitation on what hashes can be used with what algos.
- Moved __free()-marked variables to the point of setting.
ver #11)
- Rebased on Eric's libcrypto-next branch.
- Added RSASSA-PSS support patches.
ver #10)
- Replaced the Leancrypto ML-DSA implementation with Eric's.
- Fixed Eric's implementation to have MODULE_* info.
- Added a patch to drive Eric's ML-DSA implementation from crypto_sig.
- Removed SHAKE256 from the list of available module hash algorithms.
- Changed a some more ML_DSA to MLDSA in config symbols.
ver #9)
- ML-DSA changes:
- Separate output into four modules (1 common, 3 strength-specific).
- Solves Kconfig issue with needing to select at least one strength.
- Separate the strength-specific crypto-lib APIs.
- This is now generated by preprocessor-templating.
- Remove the multiplexor code.
- Multiplex the crypto-lib APIs by C type.
- Fix the PKCS#7/X.509 code to have the correct algo names.
ver #8)
- Moved the ML-DSA code to lib/crypto/mldsa/.
- Renamed some bits from ml-dsa to mldsa.
- Created a simplified API and placed that in include/crypto/mldsa.h.
- Made the testing code use the simplified API.
- Fixed a warning about implicitly casting between uint16_t and __le16.
ver #7)
- Rebased on Eric's tree as that now contains all the necessary SHA-3
infrastructure and drop the SHA-3 patches from here.
- Added a minimal patch to provide shake256 support for crypto_sig.
- Got rid of the memory allocation wrappers.
- Removed the ML-DSA keypair generation code and the signing code, leaving
only the signature verification code.
- Removed the secret key handling code.
- Removed the secret keys from the kunit tests and the signing testing.
- Removed some unused bits from the ML-DSA code.
- Downgraded the kdoc comments to ordinary comments, but keep the markup
for easier comparison to Leancrypto.
ver #6)
- Added a patch to make the jitterentropy RNG use lib/sha3.
- Added back the crypto/sha3_generic changes.
- Added ML-DSA implementation (still needs more cleanup).
- Added kunit test for ML-DSA.
- Modified PKCS#7 to accommodate ML-DSA.
- Modified PKCS#7 and X.509 to allow ML-DSA to be specified and used.
- Modified sign-file to not use CMS_NOATTR with ML-DSA.
- Allowed SHA3 and SHAKE* algorithms for module signing default.
- Allowed ML-DSA-{44,65,87} to be selected as the module signing default.
ver #5)
- Fix gen-hash-testvecs.py to correctly handle algo names that contain a
dash.
- Fix gen-hash-testvecs.py to not generate HMAC for SHA3-* or SHAKE* as
these don't currently have HMAC variants implemented.
- Fix algo names to be correct.
- Fix kunit module description as it now tests all SHA3 variants.
ver #4)
- Fix a couple of arm64 build problems.
- Doc fixes:
- Fix the description of the algorithm to be closer to the NIST spec's
terminology.
- Don't talk of finialising the context for XOFs.
- Don't say "Return: None".
- Declare the "Context" to be "Any context" and make no mention of the
fact that it might use the FPU.
- Change "initialise" to "initialize".
- Don't warn that the context is relatively large for stack use.
- Use size_t for size parameters/variables.
- Make the module_exit unconditional.
- Dropped the crypto/ dir-affecting patches for the moment.
ver #3)
- Renamed conflicting arm64 functions.
- Made a separate wrapper API for each algorithm in the family.
- Removed sha3_init(), sha3_reinit() and sha3_final().
- Removed sha3_ctx::digest_size.
- Renamed sha3_ctx::partial to sha3_ctx::absorb_offset.
- Refer to the output of SHAKE* as "output" not "digest".
- Moved the Iota transform into the one-round function.
- Made sha3_update() warn if called after sha3_squeeze().
- Simplified the module-load test to not do update after squeeze.
- Added Return: and Context: kdoc statements and expanded the kdoc
headers.
- Added an API description document.
- Overhauled the kunit tests.
- Only have one kunit test.
- Only call the general hash tester on one algo.
- Add separate simple cursory checks for the other algos.
- Add resqueezing tests.
- Add some NIST example tests.
- Changed crypto/sha3_generic to use this
- Added SHAKE128/256 to crypto/sha3_generic and crypto/testmgr
- Folded struct sha3_state into struct sha3_ctx.
ver #2)
- Simplify the endianness handling.
- Rename sha3_final() to sha3_squeeze() and don't clear the context at the
end as it's permitted to continue calling sha3_final() to extract
continuations of the digest (needed by ML-DSA).
- Don't reapply the end marker to the hash state in continuation
sha3_squeeze() unless sha3_update() gets called again (needed by
ML-DSA).
- Give sha3_squeeze() the amount of digest to produce as a parameter
rather than using ctx->digest_size and don't return the amount digested.
- Reimplement sha3_final() as a wrapper around sha3_squeeze() that
extracts ctx->digest_size amount of digest and then zeroes out the
context. The latter is necessary to avoid upsetting
hash-test-template.h.
- Provide a sha3_reinit() function to clear the state, but to leave the
parameters that indicate the hash properties unaffected, allowing for
reuse.
- Provide a sha3_set_digestsize() function to change the size of the
digest to be extracted by sha3_final(). sha3_squeeze() takes a
parameter for this instead.
- Don't pass the digest size as a parameter to shake128/256_init() but
rather default to 128/256 bits as per the function name.
- Provide a sha3_clear() function to zero out the context.
David Howells (7):
crypto: Add ML-DSA crypto_sig support
x509: Separately calculate sha256 for blacklist
pkcs7, x509: Rename ->digest to ->m
pkcs7: Allow the signing algo to do whatever digestion it wants itself
pkcs7, x509: Add ML-DSA support
modsign: Enable ML-DSA module signing
pkcs7: Allow authenticatedAttributes for ML-DSA
Documentation/admin-guide/module-signing.rst | 16 +-
certs/Kconfig | 40 ++++
certs/Makefile | 3 +
crypto/Kconfig | 9 +
crypto/Makefile | 2 +
crypto/asymmetric_keys/Kconfig | 11 +
crypto/asymmetric_keys/asymmetric_type.c | 4 +-
crypto/asymmetric_keys/pkcs7_parser.c | 36 +++-
crypto/asymmetric_keys/pkcs7_parser.h | 3 +
crypto/asymmetric_keys/pkcs7_verify.c | 78 ++++---
crypto/asymmetric_keys/public_key.c | 13 +-
crypto/asymmetric_keys/signature.c | 3 +-
crypto/asymmetric_keys/x509_cert_parser.c | 27 ++-
crypto/asymmetric_keys/x509_parser.h | 2 +
crypto/asymmetric_keys/x509_public_key.c | 42 ++--
crypto/mldsa.c | 201 +++++++++++++++++++
include/crypto/public_key.h | 6 +-
include/linux/oid_registry.h | 5 +
scripts/sign-file.c | 39 +++-
security/integrity/digsig_asymmetric.c | 4 +-
20 files changed, 473 insertions(+), 71 deletions(-)
create mode 100644 crypto/mldsa.c
|
Allow the rejection of authenticatedAttributes in PKCS#7 (signedAttrs in
CMS) to be waived in the kernel config for ML-DSA when used for module
signing. This reflects the issue that openssl < 4.0 cannot do this and
openssl-4 has not yet been released.
This does not permit RSA, ECDSA or ECRDSA to be so waived (behaviour
unchanged).
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Lukas Wunner <lukas@wunner.de>
cc: Ignat Korchagin <ignat@cloudflare.com>
cc: Jarkko Sakkinen <jarkko@kernel.org>
cc: Stephan Mueller <smueller@chronox.de>
cc: Eric Biggers <ebiggers@kernel.org>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: keyrings@vger.kernel.org
cc: linux-crypto@vger.kernel.org
---
crypto/asymmetric_keys/Kconfig | 11 +++++++++++
crypto/asymmetric_keys/pkcs7_parser.c | 8 ++++++++
crypto/asymmetric_keys/pkcs7_parser.h | 3 +++
crypto/asymmetric_keys/pkcs7_verify.c | 6 ++++++
4 files changed, 28 insertions(+)
diff --git a/crypto/asymmetric_keys/Kconfig b/crypto/asymmetric_keys/Kconfig
index e1345b8f39f1..1dae2232fe9a 100644
--- a/crypto/asymmetric_keys/Kconfig
+++ b/crypto/asymmetric_keys/Kconfig
@@ -53,6 +53,17 @@ config PKCS7_MESSAGE_PARSER
This option provides support for parsing PKCS#7 format messages for
signature data and provides the ability to verify the signature.
+config PKCS7_WAIVE_AUTHATTRS_REJECTION_FOR_MLDSA
+ bool "Waive rejection of authenticatedAttributes for ML-DSA"
+ depends on PKCS7_MESSAGE_PARSER
+ depends on CRYPTO_MLDSA
+ help
+ Due to use of CMS_NOATTR with ML-DSA not being supported in
+ OpenSSL < 4.0 (and thus any released version), enabling this
+ allows authenticatedAttributes to be used with ML-DSA for
+ module signing. Use of authenticatedAttributes in this
+ context is normally rejected.
+
config PKCS7_TEST_KEY
tristate "PKCS#7 testing key type"
depends on SYSTEM_DATA_VERIFICATION
diff --git a/crypto/asymmetric_keys/pkcs7_parser.c b/crypto/asymmetric_keys/pkcs7_parser.c
index 594a8f1d9dfb..db1c90ca6fc1 100644
--- a/crypto/asymmetric_keys/pkcs7_parser.c
+++ b/crypto/asymmetric_keys/pkcs7_parser.c
@@ -92,9 +92,17 @@ static int pkcs7_check_authattrs(struct pkcs7_message *msg)
if (!sinfo)
goto inconsistent;
+#ifdef CONFIG_PKCS7_WAIVE_AUTHATTRS_REJECTION_FOR_MLDSA
+ msg->authattrs_rej_waivable = true;
+#endif
+
if (sinfo->authattrs) {
want = true;
msg->have_authattrs = true;
+#ifdef CONFIG_PKCS7_WAIVE_AUTHATTRS_REJECTION_FOR_MLDSA
+ if (strncmp(sinfo->sig->pkey_algo, "mldsa", 5) != 0)
+ msg->authattrs_rej_waivable = false;
+#endif
} else if (sinfo->sig->algo_takes_data) {
sinfo->sig->hash_algo = "none";
}
diff --git a/crypto/asymmetric_keys/pkcs7_parser.h b/crypto/asymmetric_keys/pkcs7_parser.h
index e17f7ce4fb43..6ef9f335bb17 100644
--- a/crypto/asymmetric_keys/pkcs7_parser.h
+++ b/crypto/asymmetric_keys/pkcs7_parser.h
@@ -55,6 +55,9 @@ struct pkcs7_message {
struct pkcs7_signed_info *signed_infos;
u8 version; /* Version of cert (1 -> PKCS#7 or CMS; 3 -> CMS) */
bool have_authattrs; /* T if have authattrs */
+#ifdef CONFIG_PKCS7_WAIVE_AUTHATTRS_REJECTION_FOR_MLDSA
+ bool authattrs_rej_waivable; /* T if authatts rejection can be waived */
+#endif
/* Content Data (or NULL) */
enum OID data_type; /* Type of Data */
diff --git a/crypto/asymmetric_keys/pkcs7_verify.c b/crypto/asymmetric_keys/pkcs7_verify.c
index 06abb9838f95..519eecfe6778 100644
--- a/crypto/asymmetric_keys/pkcs7_verify.c
+++ b/crypto/asymmetric_keys/pkcs7_verify.c
@@ -425,6 +425,12 @@ int pkcs7_verify(struct pkcs7_message *pkcs7,
return -EKEYREJECTED;
}
if (pkcs7->have_authattrs) {
+#ifdef CONFIG_PKCS7_WAIVE_AUTHATTRS_REJECTION_FOR_MLDSA
+ if (pkcs7->authattrs_rej_waivable) {
+ pr_warn("Waived invalid module sig (has authattrs)\n");
+ break;
+ }
+#endif
pr_warn("Invalid module sig (has authattrs)\n");
return -EKEYREJECTED;
}
|
{
"author": "David Howells <dhowells@redhat.com>",
"date": "Mon, 2 Feb 2026 17:02:12 +0000",
"thread_id": "20260202170216.2467036-2-dhowells@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
This series adds READ_ONCE() for existing lockless reads of
jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2.
This is based on Jan's suggestion in the review of the ext4 jinode
publication race fix. [1]
[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/
Thanks,
Li
Li Chen (3):
jbd2: use READ_ONCE for lockless jinode reads
ext4: use READ_ONCE for lockless jinode reads
ocfs2: use READ_ONCE for lockless jinode reads
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
fs/ocfs2/journal.c | 7 +++++--
5 files changed, 50 insertions(+), 17 deletions(-)
--
2.52.0
|
jbd2_inode fields are updated under journal->j_list_lock, but some
paths read them without holding the lock (e.g. fast commit
helpers and the ordered truncate fast path).
Use READ_ONCE() for these lockless reads to correct the
concurrency assumptions.
Suggested-by: Jan Kara <jack@suse.com>
Signed-off-by: Li Chen <me@linux.beauty>
---
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
2 files changed, 33 insertions(+), 8 deletions(-)
diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
index 7203d2d2624d..3347d75da2f8 100644
--- a/fs/jbd2/commit.c
+++ b/fs/jbd2/commit.c
@@ -180,7 +180,13 @@ static int journal_wait_on_commit_record(journal_t *journal,
/* Send all the data buffers related to an inode */
int jbd2_submit_inode_data(journal_t *journal, struct jbd2_inode *jinode)
{
- if (!jinode || !(jinode->i_flags & JI_WRITE_DATA))
+ unsigned long flags;
+
+ if (!jinode)
+ return 0;
+
+ flags = READ_ONCE(jinode->i_flags);
+ if (!(flags & JI_WRITE_DATA))
return 0;
trace_jbd2_submit_inode_data(jinode->i_vfs_inode);
@@ -191,12 +197,30 @@ EXPORT_SYMBOL(jbd2_submit_inode_data);
int jbd2_wait_inode_data(journal_t *journal, struct jbd2_inode *jinode)
{
- if (!jinode || !(jinode->i_flags & JI_WAIT_DATA) ||
- !jinode->i_vfs_inode || !jinode->i_vfs_inode->i_mapping)
+ struct address_space *mapping;
+ struct inode *inode;
+ unsigned long flags;
+ loff_t start, end;
+
+ if (!jinode)
+ return 0;
+
+ flags = READ_ONCE(jinode->i_flags);
+ if (!(flags & JI_WAIT_DATA))
+ return 0;
+
+ inode = READ_ONCE(jinode->i_vfs_inode);
+ if (!inode)
+ return 0;
+
+ mapping = inode->i_mapping;
+ start = READ_ONCE(jinode->i_dirty_start);
+ end = READ_ONCE(jinode->i_dirty_end);
+
+ if (!mapping)
return 0;
return filemap_fdatawait_range_keep_errors(
- jinode->i_vfs_inode->i_mapping, jinode->i_dirty_start,
- jinode->i_dirty_end);
+ mapping, start, end);
}
EXPORT_SYMBOL(jbd2_wait_inode_data);
@@ -240,10 +264,11 @@ static int journal_submit_data_buffers(journal_t *journal,
int jbd2_journal_finish_inode_data_buffers(struct jbd2_inode *jinode)
{
struct address_space *mapping = jinode->i_vfs_inode->i_mapping;
+ loff_t start = READ_ONCE(jinode->i_dirty_start);
+ loff_t end = READ_ONCE(jinode->i_dirty_end);
return filemap_fdatawait_range_keep_errors(mapping,
- jinode->i_dirty_start,
- jinode->i_dirty_end);
+ start, end);
}
/*
diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index dca4b5d8aaaa..302b2090eea7 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -2739,7 +2739,7 @@ int jbd2_journal_begin_ordered_truncate(journal_t *journal,
int ret = 0;
/* This is a quick check to avoid locking if not necessary */
- if (!jinode->i_transaction)
+ if (!READ_ONCE(jinode->i_transaction))
goto out;
/* Locks are here just to force reading of recent values, it is
* enough that the transaction was not committing before we started
--
2.52.0
|
{
"author": "Li Chen <me@linux.beauty>",
"date": "Fri, 30 Jan 2026 11:12:30 +0800",
"thread_id": "emoxxh6xn5mm5dl2ra5vz2g7t553z4kxricolekz6umiwcu5ys@ogxvdjfq66u3.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
This series adds READ_ONCE() for existing lockless reads of
jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2.
This is based on Jan's suggestion in the review of the ext4 jinode
publication race fix. [1]
[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/
Thanks,
Li
Li Chen (3):
jbd2: use READ_ONCE for lockless jinode reads
ext4: use READ_ONCE for lockless jinode reads
ocfs2: use READ_ONCE for lockless jinode reads
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
fs/ocfs2/journal.c | 7 +++++--
5 files changed, 50 insertions(+), 17 deletions(-)
--
2.52.0
|
ext4 journal commit callbacks access jbd2_inode fields such as
i_transaction and i_dirty_start/end without holding journal->j_list_lock.
Use READ_ONCE() for these reads to correct the concurrency assumptions.
Suggested-by: Jan Kara <jack@suse.com>
Signed-off-by: Li Chen <me@linux.beauty>
---
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
2 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index d99296d7315f..2d451388e080 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3033,11 +3033,13 @@ static int ext4_writepages(struct address_space *mapping,
int ext4_normal_submit_inode_data_buffers(struct jbd2_inode *jinode)
{
+ loff_t dirty_start = READ_ONCE(jinode->i_dirty_start);
+ loff_t dirty_end = READ_ONCE(jinode->i_dirty_end);
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
.nr_to_write = LONG_MAX,
- .range_start = jinode->i_dirty_start,
- .range_end = jinode->i_dirty_end,
+ .range_start = dirty_start,
+ .range_end = dirty_end,
};
struct mpage_da_data mpd = {
.inode = jinode->i_vfs_inode,
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 5cf6c2b54bbb..acb2bc016fd4 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -521,6 +521,7 @@ static bool ext4_journalled_writepage_needs_redirty(struct jbd2_inode *jinode,
{
struct buffer_head *bh, *head;
struct journal_head *jh;
+ transaction_t *trans = READ_ONCE(jinode->i_transaction);
bh = head = folio_buffers(folio);
do {
@@ -539,7 +540,7 @@ static bool ext4_journalled_writepage_needs_redirty(struct jbd2_inode *jinode,
*/
jh = bh2jh(bh);
if (buffer_dirty(bh) ||
- (jh && (jh->b_transaction != jinode->i_transaction ||
+ (jh && (jh->b_transaction != trans ||
jh->b_next_transaction)))
return true;
} while ((bh = bh->b_this_page) != head);
@@ -550,12 +551,14 @@ static bool ext4_journalled_writepage_needs_redirty(struct jbd2_inode *jinode,
static int ext4_journalled_submit_inode_data_buffers(struct jbd2_inode *jinode)
{
struct address_space *mapping = jinode->i_vfs_inode->i_mapping;
+ loff_t dirty_start = READ_ONCE(jinode->i_dirty_start);
+ loff_t dirty_end = READ_ONCE(jinode->i_dirty_end);
struct writeback_control wbc = {
- .sync_mode = WB_SYNC_ALL,
+ .sync_mode = WB_SYNC_ALL,
.nr_to_write = LONG_MAX,
- .range_start = jinode->i_dirty_start,
- .range_end = jinode->i_dirty_end,
- };
+ .range_start = dirty_start,
+ .range_end = dirty_end,
+ };
struct folio *folio = NULL;
int error;
--
2.52.0
|
{
"author": "Li Chen <me@linux.beauty>",
"date": "Fri, 30 Jan 2026 11:12:31 +0800",
"thread_id": "emoxxh6xn5mm5dl2ra5vz2g7t553z4kxricolekz6umiwcu5ys@ogxvdjfq66u3.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
This series adds READ_ONCE() for existing lockless reads of
jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2.
This is based on Jan's suggestion in the review of the ext4 jinode
publication race fix. [1]
[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/
Thanks,
Li
Li Chen (3):
jbd2: use READ_ONCE for lockless jinode reads
ext4: use READ_ONCE for lockless jinode reads
ocfs2: use READ_ONCE for lockless jinode reads
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
fs/ocfs2/journal.c | 7 +++++--
5 files changed, 50 insertions(+), 17 deletions(-)
--
2.52.0
|
ocfs2 journal commit callback reads jbd2_inode dirty range fields without
holding journal->j_list_lock.
Use READ_ONCE() for these reads to correct the concurrency assumptions.
Suggested-by: Jan Kara <jack@suse.com>
Signed-off-by: Li Chen <me@linux.beauty>
---
fs/ocfs2/journal.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
index 85239807dec7..7032284cdbd6 100644
--- a/fs/ocfs2/journal.c
+++ b/fs/ocfs2/journal.c
@@ -902,8 +902,11 @@ int ocfs2_journal_alloc(struct ocfs2_super *osb)
static int ocfs2_journal_submit_inode_data_buffers(struct jbd2_inode *jinode)
{
- return filemap_fdatawrite_range(jinode->i_vfs_inode->i_mapping,
- jinode->i_dirty_start, jinode->i_dirty_end);
+ struct address_space *mapping = jinode->i_vfs_inode->i_mapping;
+ loff_t dirty_start = READ_ONCE(jinode->i_dirty_start);
+ loff_t dirty_end = READ_ONCE(jinode->i_dirty_end);
+
+ return filemap_fdatawrite_range(mapping, dirty_start, dirty_end);
}
int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty)
--
2.52.0
|
{
"author": "Li Chen <me@linux.beauty>",
"date": "Fri, 30 Jan 2026 11:12:32 +0800",
"thread_id": "emoxxh6xn5mm5dl2ra5vz2g7t553z4kxricolekz6umiwcu5ys@ogxvdjfq66u3.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
This series adds READ_ONCE() for existing lockless reads of
jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2.
This is based on Jan's suggestion in the review of the ext4 jinode
publication race fix. [1]
[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/
Thanks,
Li
Li Chen (3):
jbd2: use READ_ONCE for lockless jinode reads
ext4: use READ_ONCE for lockless jinode reads
ocfs2: use READ_ONCE for lockless jinode reads
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
fs/ocfs2/journal.c | 7 +++++--
5 files changed, 50 insertions(+), 17 deletions(-)
--
2.52.0
|
On Fri, Jan 30, 2026 at 11:12:32AM +0800, Li Chen wrote:
I don't think this is the right solution to the problem. If it is,
there needs to be much better argumentation in the commit message.
As I understand it, jbd2_journal_file_inode() initialises jinode,
then adds it to the t_inode_list, then drops the j_list_lock. So the
actual problem we need to address is that there's no memory barrier
between the store to i_dirty_start and the list_add(). Once that's
added, there's no need for a READ_ONCE here.
Or have I misunderstood the problem?
|
{
"author": "Matthew Wilcox <willy@infradead.org>",
"date": "Fri, 30 Jan 2026 05:27:59 +0000",
"thread_id": "emoxxh6xn5mm5dl2ra5vz2g7t553z4kxricolekz6umiwcu5ys@ogxvdjfq66u3.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
This series adds READ_ONCE() for existing lockless reads of
jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2.
This is based on Jan's suggestion in the review of the ext4 jinode
publication race fix. [1]
[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/
Thanks,
Li
Li Chen (3):
jbd2: use READ_ONCE for lockless jinode reads
ext4: use READ_ONCE for lockless jinode reads
ocfs2: use READ_ONCE for lockless jinode reads
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
fs/ocfs2/journal.c | 7 +++++--
5 files changed, 50 insertions(+), 17 deletions(-)
--
2.52.0
|
Hi Matthew,
> On Fri, Jan 30, 2026 at 11:12:32AM +0800, Li Chen wrote:
> > ocfs2 journal commit callback reads jbd2_inode dirty range fields without
> > holding journal->j_list_lock.
> >
> > Use READ_ONCE() for these reads to correct the concurrency assumptions.
>
> I don't think this is the right solution to the problem. If it is,
> there needs to be much better argumentation in the commit message.
>
> As I understand it, jbd2_journal_file_inode() initialises jinode,
> then adds it to the t_inode_list, then drops the j_list_lock. So the
> actual problem we need to address is that there's no memory barrier
> between the store to i_dirty_start and the list_add(). Once that's
> added, there's no need for a READ_ONCE here.
>
> Or have I misunderstood the problem?
Thanks for the review.
My understanding of your point is that you're worried about a missing
"publish" ordering in jbd2_journal_file_inode(): we store
jinode->i_dirty_start/end and then list_add() the jinode to
t_inode_list, and a core which observes the list entry might miss the prior
i_dirty_* stores. Is that the issue you had in mind?
If so, for the normal commit path where the list is walked under
journal->j_list_lock (e.g. journal_submit_data_buffers() in
fs/jbd2/commit.c), spin_lock()/spin_unlock() should already provide the
necessary ordering, since both the i_dirty_* updates and the list_add()
happen inside the same critical section.
The ocfs2 case I was aiming at is different: the filesystem callback is
invoked after unlocking journal->j_list_lock and may sleep, so it can't hold
j_list_lock but it still reads jinode->i_dirty_start/end while other
threads update these fields under the lock. Adding a barrier between the
stores and list_add() would not address that concurrent update window.
So the intent of READ_ONCE() in ocfs2 is to take a single snapshot of the
dirty range values from memory (i.e. to keep the compiler from reusing a value
held in a register or folding multiple reads). I'm not trying to claim any additional
memory ordering from this change.
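For illustration only (not part of the series), the kind of compiler
transformation this is meant to rule out is roughly:

	/* what the source says */
	loff_t dirty_start = jinode->i_dirty_start;
	loff_t dirty_end = jinode->i_dirty_end;

	return filemap_fdatawrite_range(mapping, dirty_start, dirty_end);

	/*
	 * Without READ_ONCE() the compiler may drop the locals and re-load
	 * jinode->i_dirty_start/end at the call site, so a concurrent update
	 * made under j_list_lock could be observed half way through.
	 * READ_ONCE() forces exactly one load per field.
	 */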
I'll respin and adjust the commit message accordingly. The updated part will
say along the lines of:
"ocfs2 reads jinode->i_dirty_start/end without journal->j_list_lock
(callback may sleep); these fields are updated under j_list_lock in jbd2.
Use READ_ONCE() so the callback takes a single snapshot via actual loads
from the variable (i.e. don't let the compiler reuse a value kept in a register
or fold multiple reads)."
Does that match your understanding?
Regards,
Li
> > Suggested-by: Jan Kara <jack@suse.com>
> > Signed-off-by: Li Chen <me@linux.beauty>
> > ---
> > fs/ocfs2/journal.c | 7 +++++--
> > 1 file changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
> > index 85239807dec7..7032284cdbd6 100644
> > --- a/fs/ocfs2/journal.c
> > +++ b/fs/ocfs2/journal.c
> > @@ -902,8 +902,11 @@ int ocfs2_journal_alloc(struct ocfs2_super *osb)
> >
> > static int ocfs2_journal_submit_inode_data_buffers(struct jbd2_inode *jinode)
> > {
> > - return filemap_fdatawrite_range(jinode->i_vfs_inode->i_mapping,
> > - jinode->i_dirty_start, jinode->i_dirty_end);
> > + struct address_space *mapping = jinode->i_vfs_inode->i_mapping;
> > + loff_t dirty_start = READ_ONCE(jinode->i_dirty_start);
> > + loff_t dirty_end = READ_ONCE(jinode->i_dirty_end);
> > +
> > + return filemap_fdatawrite_range(mapping, dirty_start, dirty_end);
> > }
> >
> > int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty)
> > --
> > 2.52.0
> >
>
|
{
"author": "Li Chen <me@linux.beauty>",
"date": "Fri, 30 Jan 2026 20:26:40 +0800",
"thread_id": "emoxxh6xn5mm5dl2ra5vz2g7t553z4kxricolekz6umiwcu5ys@ogxvdjfq66u3.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
This series adds READ_ONCE() for existing lockless reads of
jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2.
This is based on Jan's suggestion in the review of the ext4 jinode
publication race fix. [1]
[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/
Thanks,
Li
Li Chen (3):
jbd2: use READ_ONCE for lockless jinode reads
ext4: use READ_ONCE for lockless jinode reads
ocfs2: use READ_ONCE for lockless jinode reads
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
fs/ocfs2/journal.c | 7 +++++--
5 files changed, 50 insertions(+), 17 deletions(-)
--
2.52.0
|
On Fri, Jan 30, 2026 at 08:26:40PM +0800, Li Chen wrote:
I think that's the only issue that exists ...
I don't think that's true. I think what you're asserting is that:
int *pi;
int **ppi;
spin_lock(&lock);
*pi = 1;
*ppi = pi;
spin_unlock(&lock);
that the store to *pi must be observed before the store to *ppi, and
that's not true for a reader which doesn't read the value of lock.
The store to *ppi needs a store barrier before it.
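A minimal sketch of the pairing that would give that ordering (illustrative
only, reusing the pi/ppi example above, not the actual jbd2 code):

	/* publisher */
	*pi = 1;
	smp_store_release(ppi, pi);	/* orders the store to *pi first */

	/* lockless reader */
	int *p = smp_load_acquire(ppi);
	if (p)
		use(*p);		/* guaranteed to observe *p == 1 */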
I don't think that race exists. If it does exist, the READ_ONCE will
not help (on 32 bit platforms) because it's a 64-bit quantity and 32-bit
platforms do not, in general, have a way to do an atomic 64-bit load
(look at the implementation of i_size_read() for the gyrations we go
through to assure a non-torn read of that value).
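(For reference, the BITS_PER_LONG==32 && SMP case of i_size_read() does
roughly this to avoid a torn read:

	do {
		seq = read_seqcount_begin(&inode->i_size_seqcount);
		i_size = inode->i_size;
	} while (read_seqcount_retry(&inode->i_size_seqcount, seq));

so a 64-bit dirty range would need something similar, not just READ_ONCE.)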
I think the prevention of this race occurs at a higher level than
"it's updated under a lock". That is, jbd2_journal_file_inode()
is never called for a jinode which is currently being operated on by
j_submit_inode_data_buffers(). Now, I'm not an expert on the jbd code,
so I may be wrong here.
|
{
"author": "Matthew Wilcox <willy@infradead.org>",
"date": "Fri, 30 Jan 2026 16:36:28 +0000",
"thread_id": "emoxxh6xn5mm5dl2ra5vz2g7t553z4kxricolekz6umiwcu5ys@ogxvdjfq66u3.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
This series adds READ_ONCE() for existing lockless reads of
jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2.
This is based on Jan's suggestion in the review of the ext4 jinode
publication race fix. [1]
[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/
Thanks,
Li
Li Chen (3):
jbd2: use READ_ONCE for lockless jinode reads
ext4: use READ_ONCE for lockless jinode reads
ocfs2: use READ_ONCE for lockless jinode reads
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
fs/ocfs2/journal.c | 7 +++++--
5 files changed, 50 insertions(+), 17 deletions(-)
--
2.52.0
|
Hi Matthew,
Thank you very much for the detailed explanation and for your patience.
On Sat, 31 Jan 2026 00:36:28 +0800,
Matthew Wilcox wrote:
Understood.
Yes, agreed, thank you. I was implicitly assuming the reader had taken the same lock
at some point, which is not a valid assumption for a lockless reader.
Thanks. I tried to sanity-check whether that "never called" invariant holds
in practice.
I added a small local-only tracepoint (not for upstream) which fires from
jbd2_journal_file_inode() when it observes JI_COMMIT_RUNNING already set
on the same jinode:
/* fs/jbd2/transaction.c */
if (unlikely(jinode->i_flags & JI_COMMIT_RUNNING))
trace_jbd2_file_inode_commit_running(...);
The trace event prints dev, ino, current tid, jinode flags, and the
i_transaction / i_next_transaction tids.
With an ext4 test (ordered mode) I do see repeated hits. Trace output:
... jbd2_submit_inode_data: dev 7,0 ino 20
... jbd2_file_inode_commit_running: dev 7,0 ino 20 tid 3 op 0x6 i_flags 0x7
j_tid 2 j_next 3 ... comm python3
So it looks like jbd2_journal_file_inode() can run while JI_COMMIT_RUNNING
is set for that inode, i.e. during the window where the commit thread drops
j_list_lock around ->j_submit_inode_data_buffers() / ->j_finish_inode_data_buffers().
Given this, would you prefer the series to move towards something like:
1. taking a snapshot of i_dirty_start/end under j_list_lock in the commit path and passing the snapshot
to the filesystem callback (so callbacks never read jinode->i_dirty_* locklessly), or
2. introducing a real synchronization mechanism for the dirty range itself (seqcount/atomic64/etc)?
3. something else.
I'd be very grateful for guidance on whichever direction you consider most appropriate, or for pointing out anything I've got wrong.
Thanks again.
Regards,
Li
|
{
"author": "Li Chen <me@linux.beauty>",
"date": "Sun, 01 Feb 2026 12:37:36 +0800",
"thread_id": "emoxxh6xn5mm5dl2ra5vz2g7t553z4kxricolekz6umiwcu5ys@ogxvdjfq66u3.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
This series adds READ_ONCE() for existing lockless reads of
jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2.
This is based on Jan's suggestion in the review of the ext4 jinode
publication race fix. [1]
[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/
Thanks,
Li
Li Chen (3):
jbd2: use READ_ONCE for lockless jinode reads
ext4: use READ_ONCE for lockless jinode reads
ocfs2: use READ_ONCE for lockless jinode reads
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
fs/ocfs2/journal.c | 7 +++++--
5 files changed, 50 insertions(+), 17 deletions(-)
--
2.52.0
|
On Fri 30-01-26 11:12:30, Li Chen wrote:
Just one nit below. With that fixed feel free to add:
Reviewed-by: Jan Kara <jack@suse.cz>
i_vfs_inode never changes so READ_ONCE is pointless here.
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
|
{
"author": "Jan Kara <jack@suse.cz>",
"date": "Mon, 2 Feb 2026 17:40:45 +0100",
"thread_id": "emoxxh6xn5mm5dl2ra5vz2g7t553z4kxricolekz6umiwcu5ys@ogxvdjfq66u3.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
This series adds READ_ONCE() for existing lockless reads of
jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2.
This is based on Jan's suggestion in the review of the ext4 jinode
publication race fix. [1]
[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/
Thanks,
Li
Li Chen (3):
jbd2: use READ_ONCE for lockless jinode reads
ext4: use READ_ONCE for lockless jinode reads
ocfs2: use READ_ONCE for lockless jinode reads
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
fs/ocfs2/journal.c | 7 +++++--
5 files changed, 50 insertions(+), 17 deletions(-)
--
2.52.0
|
On Fri 30-01-26 11:12:31, Li Chen wrote:
Looks good. Feel free to add:
Reviewed-by: Jan Kara <jack@suse.cz>
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
|
{
"author": "Jan Kara <jack@suse.cz>",
"date": "Mon, 2 Feb 2026 17:41:39 +0100",
"thread_id": "emoxxh6xn5mm5dl2ra5vz2g7t553z4kxricolekz6umiwcu5ys@ogxvdjfq66u3.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
This series adds READ_ONCE() for existing lockless reads of
jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2.
This is based on Jan's suggestion in the review of the ext4 jinode
publication race fix. [1]
[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/
Thanks,
Li
Li Chen (3):
jbd2: use READ_ONCE for lockless jinode reads
ext4: use READ_ONCE for lockless jinode reads
ocfs2: use READ_ONCE for lockless jinode reads
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
fs/ocfs2/journal.c | 7 +++++--
5 files changed, 50 insertions(+), 17 deletions(-)
--
2.52.0
|
On Mon 02-02-26 17:40:45, Jan Kara wrote:
One more note: I've realized that for this to work you also need to make
jbd2_journal_file_inode() use WRITE_ONCE() when updating i_dirty_start,
i_dirty_end and i_flags.
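Something like this on the update side (sketch only, not checked against the
exact range-merging logic in the current tree):

	spin_lock(&journal->j_list_lock);
	WRITE_ONCE(jinode->i_flags, jinode->i_flags | flags);
	WRITE_ONCE(jinode->i_dirty_start,
		   min(jinode->i_dirty_start, start_byte));
	WRITE_ONCE(jinode->i_dirty_end,
		   max(jinode->i_dirty_end, end_byte));
	spin_unlock(&journal->j_list_lock);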
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
|
{
"author": "Jan Kara <jack@suse.cz>",
"date": "Mon, 2 Feb 2026 17:52:30 +0100",
"thread_id": "emoxxh6xn5mm5dl2ra5vz2g7t553z4kxricolekz6umiwcu5ys@ogxvdjfq66u3.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
This series adds READ_ONCE() for existing lockless reads of
jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2.
This is based on Jan's suggestion in the review of the ext4 jinode
publication race fix. [1]
[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/
Thanks,
Li
Li Chen (3):
jbd2: use READ_ONCE for lockless jinode reads
ext4: use READ_ONCE for lockless jinode reads
ocfs2: use READ_ONCE for lockless jinode reads
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
fs/ocfs2/journal.c | 7 +++++--
5 files changed, 50 insertions(+), 17 deletions(-)
--
2.52.0
|
On Fri 30-01-26 16:36:28, Matthew Wilcox wrote:
Well, the above reasonably accurately describes the code making jinode
visible. The reader code is like:
spin_lock(&lock);
pi = *ppi;
spin_unlock(&lock);
work with pi
so it is guaranteed to see pi properly initialized. The problem is that
"work with pi" can race with other code updating the content of pi which is
what this patch is trying to deal with.
Sadly the race does exist - journal_submit_data_buffers() on the committing
transaction can run in parallel with jbd2_journal_file_inode() in the
running transaction. There's nothing preventing that. The problems arising
out of that are mostly theoretical but they do exist. In particular you're
correct that on 32-bit platforms this will be racy even with READ_ONCE /
WRITE_ONCE which I didn't realize.
Li, the best way to address this concern would be to modify jbd2_inode to
switch i_dirty_start / i_dirty_end to account in PAGE_SIZE units instead of
bytes and be of type pgoff_t. jbd2_journal_file_inode() just needs to round
the passed ranges properly...
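I.e. something like this when filing the inode (sketch only):

	pgoff_t first = start_byte >> PAGE_SHIFT;
	pgoff_t last = end_byte >> PAGE_SHIFT;

	jinode->i_dirty_start = min(jinode->i_dirty_start, first);
	jinode->i_dirty_end = max(jinode->i_dirty_end, last);

with the commit-side callbacks converting back to a byte range, e.g.:

	filemap_fdatawrite_range(mapping,
				 (loff_t)jinode->i_dirty_start << PAGE_SHIFT,
				 ((loff_t)jinode->i_dirty_end << PAGE_SHIFT) +
				 PAGE_SIZE - 1);

pgoff_t is a single word everywhere, so plain READ_ONCE / WRITE_ONCE would
then be sufficient even on 32-bit.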
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
|
{
"author": "Jan Kara <jack@suse.cz>",
"date": "Mon, 2 Feb 2026 18:17:49 +0100",
"thread_id": "emoxxh6xn5mm5dl2ra5vz2g7t553z4kxricolekz6umiwcu5ys@ogxvdjfq66u3.mbox.gz"
}
|
lkml
|
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
|
This series introduces Synchronous Ethernet (SyncE) support for the Intel
E825-C Ethernet controller. Unlike previous generations where DPLL
connections were implicitly assumed, the E825-C architecture relies
on the platform firmware (ACPI) to describe the physical connections
between the Ethernet controller and external DPLLs (such as the ZL3073x).
To accommodate this, the series extends the DPLL subsystem to support
firmware node (fwnode) associations, asynchronous discovery via notifiers,
and dynamic pin management. Additionally, a significant refactor of
the DPLL reference counting logic is included to ensure robustness and
debuggability.
DPLL Core Extensions:
* Firmware Node Association: Pins can now be associated with a struct
fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows
drivers to link pin objects with their corresponding DT/ACPI nodes.
* Asynchronous Notifiers: A raw notifier chain is added to the DPLL core.
This allows the Ethernet driver to subscribe to events and react when
the platform DPLL driver registers the parent pins, resolving probe
ordering dependencies.
* Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have
the core automatically allocate a unique pin index.
Reference Counting & Debugging:
* Refactor: The reference counting logic in the core is consolidated.
Internal list management helpers now automatically handle hold/put
operations, removing fragile open-coded logic in the registration paths.
* Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added.
This allows developers to instrument and debug reference leaks by
recording stack traces for every get/put operation.
Driver Updates:
* zl3073x: Updated to associate pins with fwnode handles using the new
setter and support the 'mux' pin type.
* ice: Implements the E825-C specific hardware configuration for SyncE
(CGU registers). It utilizes the new notifier and fwnode APIs to
dynamically discover and attach to the platform DPLLs.
Patch Summary:
Patch 1: DPLL Core (fwnode association).
Patch 2: Driver zl3073x (Set fwnode).
Patch 3-4: DPLL Core (Notifiers and dynamic IDs).
Patch 5: Driver zl3073x (Mux type).
Patch 6: DPLL Core (Refcount refactor).
Patch 7-8: Refcount tracking infrastructure and driver updates.
Patch 9: Driver ice (E825-C SyncE logic).
Changes in v4:
* Fixed documentation and function stub issues found by AI
Arkadiusz Kubalewski (1):
ice: dpll: Support E825-C SyncE and dynamic pin discovery
Ivan Vecera (7):
dpll: Allow associating dpll pin with a firmware node
dpll: zl3073x: Associate pin with fwnode handle
dpll: Support dynamic pin index allocation
dpll: zl3073x: Add support for mux pin type
dpll: Enhance and consolidate reference counting logic
dpll: Add reference count tracking support
drivers: Add support for DPLL reference count tracking
Petr Oros (1):
dpll: Add notifier chain for dpll events
drivers/dpll/Kconfig | 15 +
drivers/dpll/dpll_core.c | 288 ++++++-
drivers/dpll/dpll_core.h | 11 +
drivers/dpll/dpll_netlink.c | 6 +
drivers/dpll/zl3073x/dpll.c | 15 +-
drivers/dpll/zl3073x/dpll.h | 2 +
drivers/dpll/zl3073x/prop.c | 2 +
drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++---
drivers/net/ethernet/intel/ice/ice_dpll.h | 30 +
drivers/net/ethernet/intel/ice/ice_lib.c | 3 +
drivers/net/ethernet/intel/ice/ice_ptp.c | 32 +
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +-
drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++
drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +-
drivers/net/ethernet/intel/ice/ice_type.h | 6 +
.../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +-
drivers/ptp/ptp_ocp.c | 18 +-
include/linux/dpll.h | 59 +-
18 files changed, 1347 insertions(+), 150 deletions(-)
--
2.52.0
|
Extend the DPLL core to support associating a DPLL pin with a firmware
node. This association is required to allow other subsystems (such as
network drivers) to locate and request specific DPLL pins defined in
the Device Tree or ACPI.
* Add a .fwnode field to the struct dpll_pin
* Introduce dpll_pin_fwnode_set() helper to allow the provider driver
to associate a pin with a fwnode after the pin has been allocated
* Introduce fwnode_dpll_pin_find() helper to allow consumers to search
for a registered DPLL pin using its associated fwnode handle
* Ensure the fwnode reference is properly released in dpll_pin_put()
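An illustrative consumer-side lookup with fwnode_dpll_pin_find() (hypothetical
caller, not part of this patch):

	struct dpll_pin *pin;

	pin = fwnode_dpll_pin_find(fwnode);	/* takes a reference */
	if (!pin)
		return -EPROBE_DEFER;		/* provider not registered yet */

	/* ... use the pin ... */

	dpll_pin_put(pin);			/* drop the reference again */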
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
---
v4:
* fixed fwnode_dpll_pin_find() return value description
---
drivers/dpll/dpll_core.c | 49 ++++++++++++++++++++++++++++++++++++++++
drivers/dpll/dpll_core.h | 2 ++
include/linux/dpll.h | 11 +++++++++
3 files changed, 62 insertions(+)
diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
index 8879a72351561..f04ed7195cadd 100644
--- a/drivers/dpll/dpll_core.c
+++ b/drivers/dpll/dpll_core.c
@@ -10,6 +10,7 @@
#include <linux/device.h>
#include <linux/err.h>
+#include <linux/property.h>
#include <linux/slab.h>
#include <linux/string.h>
@@ -595,12 +596,60 @@ void dpll_pin_put(struct dpll_pin *pin)
xa_destroy(&pin->parent_refs);
xa_destroy(&pin->ref_sync_pins);
dpll_pin_prop_free(&pin->prop);
+ fwnode_handle_put(pin->fwnode);
kfree_rcu(pin, rcu);
}
mutex_unlock(&dpll_lock);
}
EXPORT_SYMBOL_GPL(dpll_pin_put);
+/**
+ * dpll_pin_fwnode_set - set dpll pin firmware node reference
+ * @pin: pointer to a dpll pin
+ * @fwnode: firmware node handle
+ *
+ * Set firmware node handle for the given dpll pin.
+ */
+void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode)
+{
+ mutex_lock(&dpll_lock);
+ fwnode_handle_put(pin->fwnode); /* Drop fwnode previously set */
+ pin->fwnode = fwnode_handle_get(fwnode);
+ mutex_unlock(&dpll_lock);
+}
+EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set);
+
+/**
+ * fwnode_dpll_pin_find - find dpll pin by firmware node reference
+ * @fwnode: reference to firmware node
+ *
+ * Get existing object of a pin that is associated with given firmware node
+ * reference.
+ *
+ * Context: Acquires a lock (dpll_lock)
+ * Return:
+ * * valid dpll_pin pointer on success
+ * * NULL when no such pin exists
+ */
+struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode)
+{
+ struct dpll_pin *pin, *ret = NULL;
+ unsigned long index;
+
+ mutex_lock(&dpll_lock);
+ xa_for_each(&dpll_pin_xa, index, pin) {
+ if (pin->fwnode == fwnode) {
+ ret = pin;
+ refcount_inc(&ret->refcount);
+ break;
+ }
+ }
+ mutex_unlock(&dpll_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(fwnode_dpll_pin_find);
+
static int
__dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin,
const struct dpll_pin_ops *ops, void *priv, void *cookie)
diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h
index 8ce969bbeb64e..d3e17ff0ecef0 100644
--- a/drivers/dpll/dpll_core.h
+++ b/drivers/dpll/dpll_core.h
@@ -42,6 +42,7 @@ struct dpll_device {
* @pin_idx: index of a pin given by dev driver
* @clock_id: clock_id of creator
* @module: module of creator
+ * @fwnode: optional reference to firmware node
* @dpll_refs: hold referencees to dplls pin was registered with
* @parent_refs: hold references to parent pins pin was registered with
* @ref_sync_pins: hold references to pins for Reference SYNC feature
@@ -54,6 +55,7 @@ struct dpll_pin {
u32 pin_idx;
u64 clock_id;
struct module *module;
+ struct fwnode_handle *fwnode;
struct xarray dpll_refs;
struct xarray parent_refs;
struct xarray ref_sync_pins;
diff --git a/include/linux/dpll.h b/include/linux/dpll.h
index c6d0248fa5273..f2e8660e90cdf 100644
--- a/include/linux/dpll.h
+++ b/include/linux/dpll.h
@@ -16,6 +16,7 @@
struct dpll_device;
struct dpll_pin;
struct dpll_pin_esync;
+struct fwnode_handle;
struct dpll_device_ops {
int (*mode_get)(const struct dpll_device *dpll, void *dpll_priv,
@@ -178,6 +179,8 @@ void dpll_netdev_pin_clear(struct net_device *dev);
size_t dpll_netdev_pin_handle_size(const struct net_device *dev);
int dpll_netdev_add_pin_handle(struct sk_buff *msg,
const struct net_device *dev);
+
+struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode);
#else
static inline void
dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin) { }
@@ -193,6 +196,12 @@ dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev)
{
return 0;
}
+
+static inline struct dpll_pin *
+fwnode_dpll_pin_find(struct fwnode_handle *fwnode)
+{
+ return NULL;
+}
#endif
struct dpll_device *
@@ -218,6 +227,8 @@ void dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin,
void dpll_pin_put(struct dpll_pin *pin);
+void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode);
+
int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin,
const struct dpll_pin_ops *ops, void *priv);
--
2.52.0
|
{
"author": "Ivan Vecera <ivecera@redhat.com>",
"date": "Mon, 2 Feb 2026 18:16:30 +0100",
"thread_id": "20260202171638.17427-7-ivecera@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
|
This series introduces Synchronous Ethernet (SyncE) support for the Intel
E825-C Ethernet controller. Unlike previous generations where DPLL
connections were implicitly assumed, the E825-C architecture relies
on the platform firmware (ACPI) to describe the physical connections
between the Ethernet controller and external DPLLs (such as the ZL3073x).
To accommodate this, the series extends the DPLL subsystem to support
firmware node (fwnode) associations, asynchronous discovery via notifiers,
and dynamic pin management. Additionally, a significant refactor of
the DPLL reference counting logic is included to ensure robustness and
debuggability.
DPLL Core Extensions:
* Firmware Node Association: Pins can now be associated with a struct
fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows
drivers to link pin objects with their corresponding DT/ACPI nodes.
* Asynchronous Notifiers: A raw notifier chain is added to the DPLL core.
This allows the Ethernet driver to subscribe to events and react when
the platform DPLL driver registers the parent pins, resolving probe
ordering dependencies.
* Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have
the core automatically allocate a unique pin index.
Reference Counting & Debugging:
* Refactor: The reference counting logic in the core is consolidated.
Internal list management helpers now automatically handle hold/put
operations, removing fragile open-coded logic in the registration paths.
* Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added.
This allows developers to instrument and debug reference leaks by
recording stack traces for every get/put operation.
Driver Updates:
* zl3073x: Updated to associate pins with fwnode handles using the new
setter and support the 'mux' pin type.
* ice: Implements the E825-C specific hardware configuration for SyncE
(CGU registers). It utilizes the new notifier and fwnode APIs to
dynamically discover and attach to the platform DPLLs.
Patch Summary:
Patch 1: DPLL Core (fwnode association).
Patch 2: Driver zl3073x (Set fwnode).
Patch 3-4: DPLL Core (Notifiers and dynamic IDs).
Patch 5: Driver zl3073x (Mux type).
Patch 6: DPLL Core (Refcount refactor).
Patch 7-8: Refcount tracking infrastructure and driver updates.
Patch 9: Driver ice (E825-C SyncE logic).
Changes in v4:
* Fixed documentation and function stub issues found by AI
Arkadiusz Kubalewski (1):
ice: dpll: Support E825-C SyncE and dynamic pin discovery
Ivan Vecera (7):
dpll: Allow associating dpll pin with a firmware node
dpll: zl3073x: Associate pin with fwnode handle
dpll: Support dynamic pin index allocation
dpll: zl3073x: Add support for mux pin type
dpll: Enhance and consolidate reference counting logic
dpll: Add reference count tracking support
drivers: Add support for DPLL reference count tracking
Petr Oros (1):
dpll: Add notifier chain for dpll events
drivers/dpll/Kconfig | 15 +
drivers/dpll/dpll_core.c | 288 ++++++-
drivers/dpll/dpll_core.h | 11 +
drivers/dpll/dpll_netlink.c | 6 +
drivers/dpll/zl3073x/dpll.c | 15 +-
drivers/dpll/zl3073x/dpll.h | 2 +
drivers/dpll/zl3073x/prop.c | 2 +
drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++---
drivers/net/ethernet/intel/ice/ice_dpll.h | 30 +
drivers/net/ethernet/intel/ice/ice_lib.c | 3 +
drivers/net/ethernet/intel/ice/ice_ptp.c | 32 +
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +-
drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++
drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +-
drivers/net/ethernet/intel/ice/ice_type.h | 6 +
.../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +-
drivers/ptp/ptp_ocp.c | 18 +-
include/linux/dpll.h | 59 +-
18 files changed, 1347 insertions(+), 150 deletions(-)
--
2.52.0
|
Associate the registered DPLL pin with its firmware node by calling
dpll_pin_fwnode_set().
This links the created pin object to its corresponding DT/ACPI node
in the DPLL core. Consequently, this enables consumer drivers (such as
network drivers) to locate and request this specific pin using the
fwnode_dpll_pin_find() helper.
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
---
drivers/dpll/zl3073x/dpll.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c
index 7d8ed948b9706..9eed21088adac 100644
--- a/drivers/dpll/zl3073x/dpll.c
+++ b/drivers/dpll/zl3073x/dpll.c
@@ -1485,6 +1485,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index)
rc = PTR_ERR(pin->dpll_pin);
goto err_pin_get;
}
+ dpll_pin_fwnode_set(pin->dpll_pin, props->fwnode);
if (zl3073x_dpll_is_input_pin(pin))
ops = &zl3073x_dpll_input_pin_ops;
--
2.52.0
|
{
"author": "Ivan Vecera <ivecera@redhat.com>",
"date": "Mon, 2 Feb 2026 18:16:31 +0100",
"thread_id": "20260202171638.17427-7-ivecera@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
|
This series introduces Synchronous Ethernet (SyncE) support for the Intel
E825-C Ethernet controller. Unlike previous generations where DPLL
connections were implicitly assumed, the E825-C architecture relies
on the platform firmware (ACPI) to describe the physical connections
between the Ethernet controller and external DPLLs (such as the ZL3073x).
To accommodate this, the series extends the DPLL subsystem to support
firmware node (fwnode) associations, asynchronous discovery via notifiers,
and dynamic pin management. Additionally, a significant refactor of
the DPLL reference counting logic is included to ensure robustness and
debuggability.
DPLL Core Extensions:
* Firmware Node Association: Pins can now be associated with a struct
fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows
drivers to link pin objects with their corresponding DT/ACPI nodes.
* Asynchronous Notifiers: A raw notifier chain is added to the DPLL core.
This allows the Ethernet driver to subscribe to events and react when
the platform DPLL driver registers the parent pins, resolving probe
ordering dependencies.
* Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have
the core automatically allocate a unique pin index.
Reference Counting & Debugging:
* Refactor: The reference counting logic in the core is consolidated.
Internal list management helpers now automatically handle hold/put
operations, removing fragile open-coded logic in the registration paths.
* Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added.
This allows developers to instrument and debug reference leaks by
recording stack traces for every get/put operation.
Driver Updates:
* zl3073x: Updated to associate pins with fwnode handles using the new
setter and support the 'mux' pin type.
* ice: Implements the E825-C specific hardware configuration for SyncE
(CGU registers). It utilizes the new notifier and fwnode APIs to
dynamically discover and attach to the platform DPLLs.
Patch Summary:
Patch 1: DPLL Core (fwnode association).
Patch 2: Driver zl3073x (Set fwnode).
Patch 3-4: DPLL Core (Notifiers and dynamic IDs).
Patch 5: Driver zl3073x (Mux type).
Patch 6: DPLL Core (Refcount refactor).
Patch 7-8: Refcount tracking infrastructure and driver updates.
Patch 9: Driver ice (E825-C SyncE logic).
Changes in v4:
* Fixed documentation and function stub issues found by AI
Arkadiusz Kubalewski (1):
ice: dpll: Support E825-C SyncE and dynamic pin discovery
Ivan Vecera (7):
dpll: Allow associating dpll pin with a firmware node
dpll: zl3073x: Associate pin with fwnode handle
dpll: Support dynamic pin index allocation
dpll: zl3073x: Add support for mux pin type
dpll: Enhance and consolidate reference counting logic
dpll: Add reference count tracking support
drivers: Add support for DPLL reference count tracking
Petr Oros (1):
dpll: Add notifier chain for dpll events
drivers/dpll/Kconfig | 15 +
drivers/dpll/dpll_core.c | 288 ++++++-
drivers/dpll/dpll_core.h | 11 +
drivers/dpll/dpll_netlink.c | 6 +
drivers/dpll/zl3073x/dpll.c | 15 +-
drivers/dpll/zl3073x/dpll.h | 2 +
drivers/dpll/zl3073x/prop.c | 2 +
drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++---
drivers/net/ethernet/intel/ice/ice_dpll.h | 30 +
drivers/net/ethernet/intel/ice/ice_lib.c | 3 +
drivers/net/ethernet/intel/ice/ice_ptp.c | 32 +
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +-
drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++
drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +-
drivers/net/ethernet/intel/ice/ice_type.h | 6 +
.../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +-
drivers/ptp/ptp_ocp.c | 18 +-
include/linux/dpll.h | 59 +-
18 files changed, 1347 insertions(+), 150 deletions(-)
--
2.52.0
|
From: Petr Oros <poros@redhat.com>
Currently, the DPLL subsystem reports events (creation, deletion, changes)
to userspace via Netlink. However, there is no mechanism for other kernel
components to be notified of these events directly.
Add a raw notifier chain to the DPLL core protected by dpll_lock. This
allows other kernel subsystems or drivers to register callbacks and
receive notifications when DPLL devices or pins are created, deleted,
or modified.
Define the following:
- Registration helpers: {,un}register_dpll_notifier()
- Event types: DPLL_DEVICE_CREATED, DPLL_PIN_CREATED, etc.
- Context structures: dpll_{device,pin}_notifier_info to pass relevant
data to the listeners.
The notification chain is invoked alongside the existing Netlink event
generation to ensure in-kernel listeners are kept in sync with the
subsystem state.
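A minimal sketch of an in-kernel listener, using only the helpers and event
constants introduced here (the callback name and the pr_debug() handling are
illustrative):

/* Illustrative consumer: react to newly created pins. */
static int example_dpll_event(struct notifier_block *nb, unsigned long action,
			      void *data)
{
	struct dpll_pin_notifier_info *info = data;

	if (action == DPLL_PIN_CREATED)
		pr_debug("dpll pin idx %u created\n", info->idx);

	return NOTIFY_DONE;
}

static struct notifier_block example_dpll_nb = {
	.notifier_call = example_dpll_event,
};

/* probe:  register_dpll_notifier(&example_dpll_nb);   */
/* remove: unregister_dpll_notifier(&example_dpll_nb); */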
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Co-developed-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Petr Oros <poros@redhat.com>
---
drivers/dpll/dpll_core.c | 57 +++++++++++++++++++++++++++++++++++++
drivers/dpll/dpll_core.h | 4 +++
drivers/dpll/dpll_netlink.c | 6 ++++
include/linux/dpll.h | 29 +++++++++++++++++++
4 files changed, 96 insertions(+)
diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
index f04ed7195cadd..b05fe2ba46d91 100644
--- a/drivers/dpll/dpll_core.c
+++ b/drivers/dpll/dpll_core.c
@@ -23,6 +23,8 @@ DEFINE_MUTEX(dpll_lock);
DEFINE_XARRAY_FLAGS(dpll_device_xa, XA_FLAGS_ALLOC);
DEFINE_XARRAY_FLAGS(dpll_pin_xa, XA_FLAGS_ALLOC);
+static RAW_NOTIFIER_HEAD(dpll_notifier_chain);
+
static u32 dpll_device_xa_id;
static u32 dpll_pin_xa_id;
@@ -46,6 +48,39 @@ struct dpll_pin_registration {
void *cookie;
};
+static int call_dpll_notifiers(unsigned long action, void *info)
+{
+ lockdep_assert_held(&dpll_lock);
+ return raw_notifier_call_chain(&dpll_notifier_chain, action, info);
+}
+
+void dpll_device_notify(struct dpll_device *dpll, unsigned long action)
+{
+ struct dpll_device_notifier_info info = {
+ .dpll = dpll,
+ .id = dpll->id,
+ .idx = dpll->device_idx,
+ .clock_id = dpll->clock_id,
+ .type = dpll->type,
+ };
+
+ call_dpll_notifiers(action, &info);
+}
+
+void dpll_pin_notify(struct dpll_pin *pin, unsigned long action)
+{
+ struct dpll_pin_notifier_info info = {
+ .pin = pin,
+ .id = pin->id,
+ .idx = pin->pin_idx,
+ .clock_id = pin->clock_id,
+ .fwnode = pin->fwnode,
+ .prop = &pin->prop,
+ };
+
+ call_dpll_notifiers(action, &info);
+}
+
struct dpll_device *dpll_device_get_by_id(int id)
{
if (xa_get_mark(&dpll_device_xa, id, DPLL_REGISTERED))
@@ -539,6 +574,28 @@ void dpll_netdev_pin_clear(struct net_device *dev)
}
EXPORT_SYMBOL(dpll_netdev_pin_clear);
+int register_dpll_notifier(struct notifier_block *nb)
+{
+ int ret;
+
+ mutex_lock(&dpll_lock);
+ ret = raw_notifier_chain_register(&dpll_notifier_chain, nb);
+ mutex_unlock(&dpll_lock);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(register_dpll_notifier);
+
+int unregister_dpll_notifier(struct notifier_block *nb)
+{
+ int ret;
+
+ mutex_lock(&dpll_lock);
+ ret = raw_notifier_chain_unregister(&dpll_notifier_chain, nb);
+ mutex_unlock(&dpll_lock);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(unregister_dpll_notifier);
+
/**
* dpll_pin_get - find existing or create new dpll pin
* @clock_id: clock_id of creator
diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h
index d3e17ff0ecef0..b7b4bb251f739 100644
--- a/drivers/dpll/dpll_core.h
+++ b/drivers/dpll/dpll_core.h
@@ -91,4 +91,8 @@ struct dpll_pin_ref *dpll_xa_ref_dpll_first(struct xarray *xa_refs);
extern struct xarray dpll_device_xa;
extern struct xarray dpll_pin_xa;
extern struct mutex dpll_lock;
+
+void dpll_device_notify(struct dpll_device *dpll, unsigned long action);
+void dpll_pin_notify(struct dpll_pin *pin, unsigned long action);
+
#endif
diff --git a/drivers/dpll/dpll_netlink.c b/drivers/dpll/dpll_netlink.c
index 904199ddd1781..83cbd64abf5a4 100644
--- a/drivers/dpll/dpll_netlink.c
+++ b/drivers/dpll/dpll_netlink.c
@@ -761,17 +761,20 @@ dpll_device_event_send(enum dpll_cmd event, struct dpll_device *dpll)
int dpll_device_create_ntf(struct dpll_device *dpll)
{
+ dpll_device_notify(dpll, DPLL_DEVICE_CREATED);
return dpll_device_event_send(DPLL_CMD_DEVICE_CREATE_NTF, dpll);
}
int dpll_device_delete_ntf(struct dpll_device *dpll)
{
+ dpll_device_notify(dpll, DPLL_DEVICE_DELETED);
return dpll_device_event_send(DPLL_CMD_DEVICE_DELETE_NTF, dpll);
}
static int
__dpll_device_change_ntf(struct dpll_device *dpll)
{
+ dpll_device_notify(dpll, DPLL_DEVICE_CHANGED);
return dpll_device_event_send(DPLL_CMD_DEVICE_CHANGE_NTF, dpll);
}
@@ -829,16 +832,19 @@ dpll_pin_event_send(enum dpll_cmd event, struct dpll_pin *pin)
int dpll_pin_create_ntf(struct dpll_pin *pin)
{
+ dpll_pin_notify(pin, DPLL_PIN_CREATED);
return dpll_pin_event_send(DPLL_CMD_PIN_CREATE_NTF, pin);
}
int dpll_pin_delete_ntf(struct dpll_pin *pin)
{
+ dpll_pin_notify(pin, DPLL_PIN_DELETED);
return dpll_pin_event_send(DPLL_CMD_PIN_DELETE_NTF, pin);
}
int __dpll_pin_change_ntf(struct dpll_pin *pin)
{
+ dpll_pin_notify(pin, DPLL_PIN_CHANGED);
return dpll_pin_event_send(DPLL_CMD_PIN_CHANGE_NTF, pin);
}
diff --git a/include/linux/dpll.h b/include/linux/dpll.h
index f2e8660e90cdf..8ed90dfc65f05 100644
--- a/include/linux/dpll.h
+++ b/include/linux/dpll.h
@@ -11,6 +11,7 @@
#include <linux/device.h>
#include <linux/netlink.h>
#include <linux/netdevice.h>
+#include <linux/notifier.h>
#include <linux/rtnetlink.h>
struct dpll_device;
@@ -172,6 +173,30 @@ struct dpll_pin_properties {
u32 phase_gran;
};
+#define DPLL_DEVICE_CREATED 1
+#define DPLL_DEVICE_DELETED 2
+#define DPLL_DEVICE_CHANGED 3
+#define DPLL_PIN_CREATED 4
+#define DPLL_PIN_DELETED 5
+#define DPLL_PIN_CHANGED 6
+
+struct dpll_device_notifier_info {
+ struct dpll_device *dpll;
+ u32 id;
+ u32 idx;
+ u64 clock_id;
+ enum dpll_type type;
+};
+
+struct dpll_pin_notifier_info {
+ struct dpll_pin *pin;
+ u32 id;
+ u32 idx;
+ u64 clock_id;
+ const struct fwnode_handle *fwnode;
+ const struct dpll_pin_properties *prop;
+};
+
#if IS_ENABLED(CONFIG_DPLL)
void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin);
void dpll_netdev_pin_clear(struct net_device *dev);
@@ -242,4 +267,8 @@ int dpll_device_change_ntf(struct dpll_device *dpll);
int dpll_pin_change_ntf(struct dpll_pin *pin);
+int register_dpll_notifier(struct notifier_block *nb);
+
+int unregister_dpll_notifier(struct notifier_block *nb);
+
#endif
--
2.52.0
|
{
"author": "Ivan Vecera <ivecera@redhat.com>",
"date": "Mon, 2 Feb 2026 18:16:32 +0100",
"thread_id": "20260202171638.17427-7-ivecera@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
|
This series introduces Synchronous Ethernet (SyncE) support for the Intel
E825-C Ethernet controller. Unlike previous generations where DPLL
connections were implicitly assumed, the E825-C architecture relies
on the platform firmware (ACPI) to describe the physical connections
between the Ethernet controller and external DPLLs (such as the ZL3073x).
To accommodate this, the series extends the DPLL subsystem to support
firmware node (fwnode) associations, asynchronous discovery via notifiers,
and dynamic pin management. Additionally, a significant refactor of
the DPLL reference counting logic is included to ensure robustness and
debuggability.
DPLL Core Extensions:
* Firmware Node Association: Pins can now be associated with a struct
fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows
drivers to link pin objects with their corresponding DT/ACPI nodes.
* Asynchronous Notifiers: A raw notifier chain is added to the DPLL core.
This allows the Ethernet driver to subscribe to events and react when
the platform DPLL driver registers the parent pins, resolving probe
ordering dependencies.
* Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have
the core automatically allocate a unique pin index.
Reference Counting & Debugging:
* Refactor: The reference counting logic in the core is consolidated.
Internal list management helpers now automatically handle hold/put
operations, removing fragile open-coded logic in the registration paths.
* Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added.
This allows developers to instrument and debug reference leaks by
recording stack traces for every get/put operation.
Driver Updates:
* zl3073x: Updated to associate pins with fwnode handles using the new
setter and support the 'mux' pin type.
* ice: Implements the E825-C specific hardware configuration for SyncE
(CGU registers). It utilizes the new notifier and fwnode APIs to
dynamically discover and attach to the platform DPLLs.
Patch Summary:
Patch 1: DPLL Core (fwnode association).
Patch 2: Driver zl3073x (Set fwnode).
Patch 3-4: DPLL Core (Notifiers and dynamic IDs).
Patch 5: Driver zl3073x (Mux type).
Patch 6: DPLL Core (Refcount refactor).
Patch 7-8: Refcount tracking infrastructure and driver updates.
Patch 9: Driver ice (E825-C SyncE logic).
Changes in v4:
* Fixed documentation and function stub issues found by AI
Arkadiusz Kubalewski (1):
ice: dpll: Support E825-C SyncE and dynamic pin discovery
Ivan Vecera (7):
dpll: Allow associating dpll pin with a firmware node
dpll: zl3073x: Associate pin with fwnode handle
dpll: Support dynamic pin index allocation
dpll: zl3073x: Add support for mux pin type
dpll: Enhance and consolidate reference counting logic
dpll: Add reference count tracking support
drivers: Add support for DPLL reference count tracking
Petr Oros (1):
dpll: Add notifier chain for dpll events
drivers/dpll/Kconfig | 15 +
drivers/dpll/dpll_core.c | 288 ++++++-
drivers/dpll/dpll_core.h | 11 +
drivers/dpll/dpll_netlink.c | 6 +
drivers/dpll/zl3073x/dpll.c | 15 +-
drivers/dpll/zl3073x/dpll.h | 2 +
drivers/dpll/zl3073x/prop.c | 2 +
drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++---
drivers/net/ethernet/intel/ice/ice_dpll.h | 30 +
drivers/net/ethernet/intel/ice/ice_lib.c | 3 +
drivers/net/ethernet/intel/ice/ice_ptp.c | 32 +
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +-
drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++
drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +-
drivers/net/ethernet/intel/ice/ice_type.h | 6 +
.../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +-
drivers/ptp/ptp_ocp.c | 18 +-
include/linux/dpll.h | 59 +-
18 files changed, 1347 insertions(+), 150 deletions(-)
--
2.52.0
|
Allow drivers to register DPLL pins without manually specifying a pin
index.
Currently, drivers must provide a unique pin index when calling
dpll_pin_get(). This works well for hardware-mapped pins but creates
friction for drivers handling virtual pins or those without a strict
hardware indexing scheme.
Introduce DPLL_PIN_IDX_UNSPEC (U32_MAX). When a driver passes this
value as the pin index:
1. The core allocates a unique index using an IDA
2. The allocated index is mapped to a range starting above `INT_MAX`
This separation ensures that dynamically allocated indices never collide
with standard driver-provided hardware indices, which are assumed to be
within the `0` to `INT_MAX` range. The index is automatically freed when
the pin is released in dpll_pin_put().
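A minimal sketch of the caller side, assuming the current (pre-tracker)
dpll_pin_get() signature and placeholder clock_id/properties values:

/* Illustrative only: let the core assign the pin index dynamically. */
static struct dpll_pin *
example_dynamic_pin_get(u64 clock_id, const struct dpll_pin_properties *props)
{
	/* The index lands above INT_MAX and is freed again on dpll_pin_put(). */
	return dpll_pin_get(clock_id, DPLL_PIN_IDX_UNSPEC, THIS_MODULE, props);
}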
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
---
v2:
* fixed integer overflow in dpll_pin_idx_free()
---
drivers/dpll/dpll_core.c | 48 ++++++++++++++++++++++++++++++++++++++--
include/linux/dpll.h | 2 ++
2 files changed, 48 insertions(+), 2 deletions(-)
diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
index b05fe2ba46d91..59081cf2c73ae 100644
--- a/drivers/dpll/dpll_core.c
+++ b/drivers/dpll/dpll_core.c
@@ -10,6 +10,7 @@
#include <linux/device.h>
#include <linux/err.h>
+#include <linux/idr.h>
#include <linux/property.h>
#include <linux/slab.h>
#include <linux/string.h>
@@ -24,6 +25,7 @@ DEFINE_XARRAY_FLAGS(dpll_device_xa, XA_FLAGS_ALLOC);
DEFINE_XARRAY_FLAGS(dpll_pin_xa, XA_FLAGS_ALLOC);
static RAW_NOTIFIER_HEAD(dpll_notifier_chain);
+static DEFINE_IDA(dpll_pin_idx_ida);
static u32 dpll_device_xa_id;
static u32 dpll_pin_xa_id;
@@ -464,6 +466,36 @@ void dpll_device_unregister(struct dpll_device *dpll,
}
EXPORT_SYMBOL_GPL(dpll_device_unregister);
+static int dpll_pin_idx_alloc(u32 *pin_idx)
+{
+ int ret;
+
+ if (!pin_idx)
+ return -EINVAL;
+
+ /* Alloc unique number from IDA. Number belongs to <0, INT_MAX> range */
+ ret = ida_alloc(&dpll_pin_idx_ida, GFP_KERNEL);
+ if (ret < 0)
+ return ret;
+
+ /* Map the value to dynamic pin index range <INT_MAX+1, U32_MAX> */
+ *pin_idx = (u32)ret + INT_MAX + 1;
+
+ return 0;
+}
+
+static void dpll_pin_idx_free(u32 pin_idx)
+{
+ if (pin_idx <= INT_MAX)
+ return; /* Not a dynamic pin index */
+
+ /* Map the index value from dynamic pin index range to IDA range and
+ * free it.
+ */
+ pin_idx -= (u32)INT_MAX + 1;
+ ida_free(&dpll_pin_idx_ida, pin_idx);
+}
+
static void dpll_pin_prop_free(struct dpll_pin_properties *prop)
{
kfree(prop->package_label);
@@ -521,9 +553,18 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
struct dpll_pin *pin;
int ret;
+ if (pin_idx == DPLL_PIN_IDX_UNSPEC) {
+ ret = dpll_pin_idx_alloc(&pin_idx);
+ if (ret)
+ return ERR_PTR(ret);
+ } else if (pin_idx > INT_MAX) {
+ return ERR_PTR(-EINVAL);
+ }
pin = kzalloc(sizeof(*pin), GFP_KERNEL);
- if (!pin)
- return ERR_PTR(-ENOMEM);
+ if (!pin) {
+ ret = -ENOMEM;
+ goto err_pin_alloc;
+ }
pin->pin_idx = pin_idx;
pin->clock_id = clock_id;
pin->module = module;
@@ -551,6 +592,8 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
dpll_pin_prop_free(&pin->prop);
err_pin_prop:
kfree(pin);
+err_pin_alloc:
+ dpll_pin_idx_free(pin_idx);
return ERR_PTR(ret);
}
@@ -654,6 +697,7 @@ void dpll_pin_put(struct dpll_pin *pin)
xa_destroy(&pin->ref_sync_pins);
dpll_pin_prop_free(&pin->prop);
fwnode_handle_put(pin->fwnode);
+ dpll_pin_idx_free(pin->pin_idx);
kfree_rcu(pin, rcu);
}
mutex_unlock(&dpll_lock);
diff --git a/include/linux/dpll.h b/include/linux/dpll.h
index 8ed90dfc65f05..8fff048131f1d 100644
--- a/include/linux/dpll.h
+++ b/include/linux/dpll.h
@@ -240,6 +240,8 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type,
void dpll_device_unregister(struct dpll_device *dpll,
const struct dpll_device_ops *ops, void *priv);
+#define DPLL_PIN_IDX_UNSPEC U32_MAX
+
struct dpll_pin *
dpll_pin_get(u64 clock_id, u32 dev_driver_id, struct module *module,
const struct dpll_pin_properties *prop);
--
2.52.0
|
{
"author": "Ivan Vecera <ivecera@redhat.com>",
"date": "Mon, 2 Feb 2026 18:16:33 +0100",
"thread_id": "20260202171638.17427-7-ivecera@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
|
This series introduces Synchronous Ethernet (SyncE) support for the Intel
E825-C Ethernet controller. Unlike previous generations where DPLL
connections were implicitly assumed, the E825-C architecture relies
on the platform firmware (ACPI) to describe the physical connections
between the Ethernet controller and external DPLLs (such as the ZL3073x).
To accommodate this, the series extends the DPLL subsystem to support
firmware node (fwnode) associations, asynchronous discovery via notifiers,
and dynamic pin management. Additionally, a significant refactor of
the DPLL reference counting logic is included to ensure robustness and
debuggability.
DPLL Core Extensions:
* Firmware Node Association: Pins can now be associated with a struct
fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows
drivers to link pin objects with their corresponding DT/ACPI nodes.
* Asynchronous Notifiers: A raw notifier chain is added to the DPLL core.
This allows the Ethernet driver to subscribe to events and react when
the platform DPLL driver registers the parent pins, resolving probe
ordering dependencies.
* Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have
the core automatically allocate a unique pin index.
Reference Counting & Debugging:
* Refactor: The reference counting logic in the core is consolidated.
Internal list management helpers now automatically handle hold/put
operations, removing fragile open-coded logic in the registration paths.
* Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added.
This allows developers to instrument and debug reference leaks by
recording stack traces for every get/put operation.
Driver Updates:
* zl3073x: Updated to associate pins with fwnode handles using the new
setter and support the 'mux' pin type.
* ice: Implements the E825-C specific hardware configuration for SyncE
(CGU registers). It utilizes the new notifier and fwnode APIs to
dynamically discover and attach to the platform DPLLs.
Patch Summary:
Patch 1: DPLL Core (fwnode association).
Patch 2: Driver zl3073x (Set fwnode).
Patch 3-4: DPLL Core (Notifiers and dynamic IDs).
Patch 5: Driver zl3073x (Mux type).
Patch 6: DPLL Core (Refcount refactor).
Patch 7-8: Refcount tracking infrastructure and driver updates.
Patch 9: Driver ice (E825-C SyncE logic).
Changes in v4:
* Fixed documentation and function stub issues found by AI
Arkadiusz Kubalewski (1):
ice: dpll: Support E825-C SyncE and dynamic pin discovery
Ivan Vecera (7):
dpll: Allow associating dpll pin with a firmware node
dpll: zl3073x: Associate pin with fwnode handle
dpll: Support dynamic pin index allocation
dpll: zl3073x: Add support for mux pin type
dpll: Enhance and consolidate reference counting logic
dpll: Add reference count tracking support
drivers: Add support for DPLL reference count tracking
Petr Oros (1):
dpll: Add notifier chain for dpll events
drivers/dpll/Kconfig | 15 +
drivers/dpll/dpll_core.c | 288 ++++++-
drivers/dpll/dpll_core.h | 11 +
drivers/dpll/dpll_netlink.c | 6 +
drivers/dpll/zl3073x/dpll.c | 15 +-
drivers/dpll/zl3073x/dpll.h | 2 +
drivers/dpll/zl3073x/prop.c | 2 +
drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++---
drivers/net/ethernet/intel/ice/ice_dpll.h | 30 +
drivers/net/ethernet/intel/ice/ice_lib.c | 3 +
drivers/net/ethernet/intel/ice/ice_ptp.c | 32 +
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +-
drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++
drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +-
drivers/net/ethernet/intel/ice/ice_type.h | 6 +
.../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +-
drivers/ptp/ptp_ocp.c | 18 +-
include/linux/dpll.h | 59 +-
18 files changed, 1347 insertions(+), 150 deletions(-)
--
2.52.0
|
Add parsing for the "mux" string in the 'connection-type' pin property
mapping it to DPLL_PIN_TYPE_MUX.
Recognizing this type in the driver allows these pins to be taken as
parent pins for pin-on-pin pins coming from different modules (e.g.
network drivers).
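A minimal sketch of how a consumer might use such a mux pin as a parent,
relying only on the existing dpll_pin_on_pin_register() API (the ops/priv
arguments are driver-specific placeholders):

/* Illustrative only: attach a port pin under a mux-type parent pin. */
static int example_attach_to_mux(struct dpll_pin *mux_parent,
				 struct dpll_pin *port_pin,
				 const struct dpll_pin_ops *ops, void *priv)
{
	/* port_pin becomes selectable through the mux parent pin. */
	return dpll_pin_on_pin_register(mux_parent, port_pin, ops, priv);
}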
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
---
drivers/dpll/zl3073x/prop.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/dpll/zl3073x/prop.c b/drivers/dpll/zl3073x/prop.c
index 4ed153087570b..ad1f099cbe2b5 100644
--- a/drivers/dpll/zl3073x/prop.c
+++ b/drivers/dpll/zl3073x/prop.c
@@ -249,6 +249,8 @@ struct zl3073x_pin_props *zl3073x_pin_props_get(struct zl3073x_dev *zldev,
props->dpll_props.type = DPLL_PIN_TYPE_INT_OSCILLATOR;
else if (!strcmp(type, "synce"))
props->dpll_props.type = DPLL_PIN_TYPE_SYNCE_ETH_PORT;
+ else if (!strcmp(type, "mux"))
+ props->dpll_props.type = DPLL_PIN_TYPE_MUX;
else
dev_warn(zldev->dev,
"Unknown or unsupported pin type '%s'\n",
--
2.52.0
|
{
"author": "Ivan Vecera <ivecera@redhat.com>",
"date": "Mon, 2 Feb 2026 18:16:34 +0100",
"thread_id": "20260202171638.17427-7-ivecera@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
|
This series introduces Synchronous Ethernet (SyncE) support for the Intel
E825-C Ethernet controller. Unlike previous generations where DPLL
connections were implicitly assumed, the E825-C architecture relies
on the platform firmware (ACPI) to describe the physical connections
between the Ethernet controller and external DPLLs (such as the ZL3073x).
To accommodate this, the series extends the DPLL subsystem to support
firmware node (fwnode) associations, asynchronous discovery via notifiers,
and dynamic pin management. Additionally, a significant refactor of
the DPLL reference counting logic is included to ensure robustness and
debuggability.
DPLL Core Extensions:
* Firmware Node Association: Pins can now be associated with a struct
fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows
drivers to link pin objects with their corresponding DT/ACPI nodes.
* Asynchronous Notifiers: A raw notifier chain is added to the DPLL core.
This allows the Ethernet driver to subscribe to events and react when
the platform DPLL driver registers the parent pins, resolving probe
ordering dependencies.
* Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have
the core automatically allocate a unique pin index.
Reference Counting & Debugging:
* Refactor: The reference counting logic in the core is consolidated.
Internal list management helpers now automatically handle hold/put
operations, removing fragile open-coded logic in the registration paths.
* Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added.
This allows developers to instrument and debug reference leaks by
recording stack traces for every get/put operation.
Driver Updates:
* zl3073x: Updated to associate pins with fwnode handles using the new
setter and support the 'mux' pin type.
* ice: Implements the E825-C specific hardware configuration for SyncE
(CGU registers). It utilizes the new notifier and fwnode APIs to
dynamically discover and attach to the platform DPLLs.
Patch Summary:
Patch 1: DPLL Core (fwnode association).
Patch 2: Driver zl3073x (Set fwnode).
Patch 3-4: DPLL Core (Notifiers and dynamic IDs).
Patch 5: Driver zl3073x (Mux type).
Patch 6: DPLL Core (Refcount refactor).
Patch 7-8: Refcount tracking infrastructure and driver updates.
Patch 9: Driver ice (E825-C SyncE logic).
Changes in v4:
* Fixed documentation and function stub issues found by AI
Arkadiusz Kubalewski (1):
ice: dpll: Support E825-C SyncE and dynamic pin discovery
Ivan Vecera (7):
dpll: Allow associating dpll pin with a firmware node
dpll: zl3073x: Associate pin with fwnode handle
dpll: Support dynamic pin index allocation
dpll: zl3073x: Add support for mux pin type
dpll: Enhance and consolidate reference counting logic
dpll: Add reference count tracking support
drivers: Add support for DPLL reference count tracking
Petr Oros (1):
dpll: Add notifier chain for dpll events
drivers/dpll/Kconfig | 15 +
drivers/dpll/dpll_core.c | 288 ++++++-
drivers/dpll/dpll_core.h | 11 +
drivers/dpll/dpll_netlink.c | 6 +
drivers/dpll/zl3073x/dpll.c | 15 +-
drivers/dpll/zl3073x/dpll.h | 2 +
drivers/dpll/zl3073x/prop.c | 2 +
drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++---
drivers/net/ethernet/intel/ice/ice_dpll.h | 30 +
drivers/net/ethernet/intel/ice/ice_lib.c | 3 +
drivers/net/ethernet/intel/ice/ice_ptp.c | 32 +
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +-
drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++
drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +-
drivers/net/ethernet/intel/ice/ice_type.h | 6 +
.../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +-
drivers/ptp/ptp_ocp.c | 18 +-
include/linux/dpll.h | 59 +-
18 files changed, 1347 insertions(+), 150 deletions(-)
--
2.52.0
|
Refactor the reference counting mechanism for DPLL devices and pins to
improve consistency and prevent potential lifetime issues.
Introduce internal helpers __dpll_{device,pin}_{hold,put}() to
centralize reference management.
Update the internal XArray reference helpers (dpll_xa_ref_*) to
automatically grab a reference to the target object when it is added to
a list, and release it when removed. This ensures that objects linked
internally (e.g., pins referenced by parent pins) are properly kept
alive without relying on the caller to manually manage the count.
Consequently, remove the now-redundant manual `refcount_inc/dec` calls
in dpll_pin_on_pin_{,un}register(), as ownership is now correctly handled
by the dpll_xa_ref_* functions.
Additionally, ensure that dpll_device_{,un}register() takes/releases
a reference to the device, so the device object remains valid for
the duration of its registration.
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
---
drivers/dpll/dpll_core.c | 74 +++++++++++++++++++++++++++-------------
1 file changed, 50 insertions(+), 24 deletions(-)
diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
index 59081cf2c73ae..f6ab4f0cad84d 100644
--- a/drivers/dpll/dpll_core.c
+++ b/drivers/dpll/dpll_core.c
@@ -83,6 +83,45 @@ void dpll_pin_notify(struct dpll_pin *pin, unsigned long action)
call_dpll_notifiers(action, &info);
}
+static void __dpll_device_hold(struct dpll_device *dpll)
+{
+ refcount_inc(&dpll->refcount);
+}
+
+static void __dpll_device_put(struct dpll_device *dpll)
+{
+ if (refcount_dec_and_test(&dpll->refcount)) {
+ ASSERT_DPLL_NOT_REGISTERED(dpll);
+ WARN_ON_ONCE(!xa_empty(&dpll->pin_refs));
+ xa_destroy(&dpll->pin_refs);
+ xa_erase(&dpll_device_xa, dpll->id);
+ WARN_ON(!list_empty(&dpll->registration_list));
+ kfree(dpll);
+ }
+}
+
+static void __dpll_pin_hold(struct dpll_pin *pin)
+{
+ refcount_inc(&pin->refcount);
+}
+
+static void dpll_pin_idx_free(u32 pin_idx);
+static void dpll_pin_prop_free(struct dpll_pin_properties *prop);
+
+static void __dpll_pin_put(struct dpll_pin *pin)
+{
+ if (refcount_dec_and_test(&pin->refcount)) {
+ xa_erase(&dpll_pin_xa, pin->id);
+ xa_destroy(&pin->dpll_refs);
+ xa_destroy(&pin->parent_refs);
+ xa_destroy(&pin->ref_sync_pins);
+ dpll_pin_prop_free(&pin->prop);
+ fwnode_handle_put(pin->fwnode);
+ dpll_pin_idx_free(pin->pin_idx);
+ kfree_rcu(pin, rcu);
+ }
+}
+
struct dpll_device *dpll_device_get_by_id(int id)
{
if (xa_get_mark(&dpll_device_xa, id, DPLL_REGISTERED))
@@ -152,6 +191,7 @@ dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin,
reg->ops = ops;
reg->priv = priv;
reg->cookie = cookie;
+ __dpll_pin_hold(pin);
if (ref_exists)
refcount_inc(&ref->refcount);
list_add_tail(&reg->list, &ref->registration_list);
@@ -174,6 +214,7 @@ static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin,
if (WARN_ON(!reg))
return -EINVAL;
list_del(&reg->list);
+ __dpll_pin_put(pin);
kfree(reg);
if (refcount_dec_and_test(&ref->refcount)) {
xa_erase(xa_pins, i);
@@ -231,6 +272,7 @@ dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll,
reg->ops = ops;
reg->priv = priv;
reg->cookie = cookie;
+ __dpll_device_hold(dpll);
if (ref_exists)
refcount_inc(&ref->refcount);
list_add_tail(&reg->list, &ref->registration_list);
@@ -253,6 +295,7 @@ dpll_xa_ref_dpll_del(struct xarray *xa_dplls, struct dpll_device *dpll,
if (WARN_ON(!reg))
return;
list_del(&reg->list);
+ __dpll_device_put(dpll);
kfree(reg);
if (refcount_dec_and_test(&ref->refcount)) {
xa_erase(xa_dplls, i);
@@ -323,8 +366,8 @@ dpll_device_get(u64 clock_id, u32 device_idx, struct module *module)
if (dpll->clock_id == clock_id &&
dpll->device_idx == device_idx &&
dpll->module == module) {
+ __dpll_device_hold(dpll);
ret = dpll;
- refcount_inc(&ret->refcount);
break;
}
}
@@ -347,14 +390,7 @@ EXPORT_SYMBOL_GPL(dpll_device_get);
void dpll_device_put(struct dpll_device *dpll)
{
mutex_lock(&dpll_lock);
- if (refcount_dec_and_test(&dpll->refcount)) {
- ASSERT_DPLL_NOT_REGISTERED(dpll);
- WARN_ON_ONCE(!xa_empty(&dpll->pin_refs));
- xa_destroy(&dpll->pin_refs);
- xa_erase(&dpll_device_xa, dpll->id);
- WARN_ON(!list_empty(&dpll->registration_list));
- kfree(dpll);
- }
+ __dpll_device_put(dpll);
mutex_unlock(&dpll_lock);
}
EXPORT_SYMBOL_GPL(dpll_device_put);
@@ -416,6 +452,7 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type,
reg->ops = ops;
reg->priv = priv;
dpll->type = type;
+ __dpll_device_hold(dpll);
first_registration = list_empty(&dpll->registration_list);
list_add_tail(&reg->list, &dpll->registration_list);
if (!first_registration) {
@@ -455,6 +492,7 @@ void dpll_device_unregister(struct dpll_device *dpll,
return;
}
list_del(&reg->list);
+ __dpll_device_put(dpll);
kfree(reg);
if (!list_empty(&dpll->registration_list)) {
@@ -666,8 +704,8 @@ dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module,
if (pos->clock_id == clock_id &&
pos->pin_idx == pin_idx &&
pos->module == module) {
+ __dpll_pin_hold(pos);
ret = pos;
- refcount_inc(&ret->refcount);
break;
}
}
@@ -690,16 +728,7 @@ EXPORT_SYMBOL_GPL(dpll_pin_get);
void dpll_pin_put(struct dpll_pin *pin)
{
mutex_lock(&dpll_lock);
- if (refcount_dec_and_test(&pin->refcount)) {
- xa_erase(&dpll_pin_xa, pin->id);
- xa_destroy(&pin->dpll_refs);
- xa_destroy(&pin->parent_refs);
- xa_destroy(&pin->ref_sync_pins);
- dpll_pin_prop_free(&pin->prop);
- fwnode_handle_put(pin->fwnode);
- dpll_pin_idx_free(pin->pin_idx);
- kfree_rcu(pin, rcu);
- }
+ __dpll_pin_put(pin);
mutex_unlock(&dpll_lock);
}
EXPORT_SYMBOL_GPL(dpll_pin_put);
@@ -740,8 +769,8 @@ struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode)
mutex_lock(&dpll_lock);
xa_for_each(&dpll_pin_xa, index, pin) {
if (pin->fwnode == fwnode) {
+ __dpll_pin_hold(pin);
ret = pin;
- refcount_inc(&ret->refcount);
break;
}
}
@@ -893,7 +922,6 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin,
ret = dpll_xa_ref_pin_add(&pin->parent_refs, parent, ops, priv, pin);
if (ret)
goto unlock;
- refcount_inc(&pin->refcount);
xa_for_each(&parent->dpll_refs, i, ref) {
ret = __dpll_pin_register(ref->dpll, pin, ops, priv, parent);
if (ret) {
@@ -913,7 +941,6 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin,
parent);
dpll_pin_delete_ntf(pin);
}
- refcount_dec(&pin->refcount);
dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv, pin);
unlock:
mutex_unlock(&dpll_lock);
@@ -940,7 +967,6 @@ void dpll_pin_on_pin_unregister(struct dpll_pin *parent, struct dpll_pin *pin,
mutex_lock(&dpll_lock);
dpll_pin_delete_ntf(pin);
dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv, pin);
- refcount_dec(&pin->refcount);
xa_for_each(&pin->dpll_refs, i, ref)
__dpll_pin_unregister(ref->dpll, pin, ops, priv, parent);
mutex_unlock(&dpll_lock);
--
2.52.0
|
{
"author": "Ivan Vecera <ivecera@redhat.com>",
"date": "Mon, 2 Feb 2026 18:16:35 +0100",
"thread_id": "20260202171638.17427-7-ivecera@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
|
This series introduces Synchronous Ethernet (SyncE) support for the Intel
E825-C Ethernet controller. Unlike previous generations where DPLL
connections were implicitly assumed, the E825-C architecture relies
on the platform firmware (ACPI) to describe the physical connections
between the Ethernet controller and external DPLLs (such as the ZL3073x).
To accommodate this, the series extends the DPLL subsystem to support
firmware node (fwnode) associations, asynchronous discovery via notifiers,
and dynamic pin management. Additionally, a significant refactor of
the DPLL reference counting logic is included to ensure robustness and
debuggability.
DPLL Core Extensions:
* Firmware Node Association: Pins can now be associated with a struct
fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows
drivers to link pin objects with their corresponding DT/ACPI nodes.
* Asynchronous Notifiers: A raw notifier chain is added to the DPLL core.
This allows the Ethernet driver to subscribe to events and react when
the platform DPLL driver registers the parent pins, resolving probe
ordering dependencies.
* Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have
the core automatically allocate a unique pin index.
Reference Counting & Debugging:
* Refactor: The reference counting logic in the core is consolidated.
Internal list management helpers now automatically handle hold/put
operations, removing fragile open-coded logic in the registration paths.
* Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added.
This allows developers to instrument and debug reference leaks by
recording stack traces for every get/put operation.
Driver Updates:
* zl3073x: Updated to associate pins with fwnode handles using the new
setter and support the 'mux' pin type.
* ice: Implements the E825-C specific hardware configuration for SyncE
(CGU registers). It utilizes the new notifier and fwnode APIs to
dynamically discover and attach to the platform DPLLs.
Patch Summary:
Patch 1: DPLL Core (fwnode association).
Patch 2: Driver zl3073x (Set fwnode).
Patch 3-4: DPLL Core (Notifiers and dynamic IDs).
Patch 5: Driver zl3073x (Mux type).
Patch 6: DPLL Core (Refcount refactor).
Patch 7-8: Refcount tracking infrastructure and driver updates.
Patch 9: Driver ice (E825-C SyncE logic).
Changes in v4:
* Fixed documentation and function stub issues found by AI
Arkadiusz Kubalewski (1):
ice: dpll: Support E825-C SyncE and dynamic pin discovery
Ivan Vecera (7):
dpll: Allow associating dpll pin with a firmware node
dpll: zl3073x: Associate pin with fwnode handle
dpll: Support dynamic pin index allocation
dpll: zl3073x: Add support for mux pin type
dpll: Enhance and consolidate reference counting logic
dpll: Add reference count tracking support
drivers: Add support for DPLL reference count tracking
Petr Oros (1):
dpll: Add notifier chain for dpll events
drivers/dpll/Kconfig | 15 +
drivers/dpll/dpll_core.c | 288 ++++++-
drivers/dpll/dpll_core.h | 11 +
drivers/dpll/dpll_netlink.c | 6 +
drivers/dpll/zl3073x/dpll.c | 15 +-
drivers/dpll/zl3073x/dpll.h | 2 +
drivers/dpll/zl3073x/prop.c | 2 +
drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++---
drivers/net/ethernet/intel/ice/ice_dpll.h | 30 +
drivers/net/ethernet/intel/ice/ice_lib.c | 3 +
drivers/net/ethernet/intel/ice/ice_ptp.c | 32 +
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +-
drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++
drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +-
drivers/net/ethernet/intel/ice/ice_type.h | 6 +
.../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +-
drivers/ptp/ptp_ocp.c | 18 +-
include/linux/dpll.h | 59 +-
18 files changed, 1347 insertions(+), 150 deletions(-)
--
2.52.0
|
Add support for the REF_TRACKER infrastructure to the DPLL subsystem.
When enabled, this allows developers to track and debug reference counting
leaks or imbalances for dpll_device and dpll_pin objects. It records stack
traces for every get/put operation and exposes this information via
debugfs at:
/sys/kernel/debug/ref_tracker/dpll_device_*
/sys/kernel/debug/ref_tracker/dpll_pin_*
The following API changes are made to support this:
1. dpll_device_get() / dpll_device_put() now accept a 'dpll_tracker *'
(which is a typedef to 'struct ref_tracker *' when enabled, or an empty
struct otherwise).
2. dpll_pin_get() / dpll_pin_put() and fwnode_dpll_pin_find() similarly
accept the tracker argument.
3. Internal registration structures now hold a tracker to associate the
reference held by the registration with the specific owner.
All existing in-tree drivers (ice, mlx5, ptp_ocp, zl3073x) are updated
to pass NULL for the new tracker argument, maintaining current behavior
while enabling future debugging capabilities.
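A minimal sketch of a driver opting in to tracking, assuming a hypothetical
private structure (existing drivers simply pass NULL instead):

/* Illustrative only: keep the tracker next to the reference it accounts for. */
struct example_priv {
	struct dpll_device *dpll;
	dpll_tracker dpll_trk;
};

static int example_get_dpll(struct example_priv *priv, u64 clock_id)
{
	priv->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, &priv->dpll_trk);
	return PTR_ERR_OR_ZERO(priv->dpll);
}

static void example_put_dpll(struct example_priv *priv)
{
	dpll_device_put(priv->dpll, &priv->dpll_trk);
}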
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Co-developed-by: Petr Oros <poros@redhat.com>
Signed-off-by: Petr Oros <poros@redhat.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
---
v4:
* added missing tracker parameter to fwnode_dpll_pin_find() stub
v3:
* added Kconfig dependency on STACKTRACE_SUPPORT and DEBUG_KERNEL
---
drivers/dpll/Kconfig | 15 +++
drivers/dpll/dpll_core.c | 98 ++++++++++++++-----
drivers/dpll/dpll_core.h | 5 +
drivers/dpll/zl3073x/dpll.c | 12 +--
drivers/net/ethernet/intel/ice/ice_dpll.c | 14 +--
.../net/ethernet/mellanox/mlx5/core/dpll.c | 13 +--
drivers/ptp/ptp_ocp.c | 15 +--
include/linux/dpll.h | 21 ++--
8 files changed, 139 insertions(+), 54 deletions(-)
diff --git a/drivers/dpll/Kconfig b/drivers/dpll/Kconfig
index ade872c915ac6..be98969f040ab 100644
--- a/drivers/dpll/Kconfig
+++ b/drivers/dpll/Kconfig
@@ -8,6 +8,21 @@ menu "DPLL device support"
config DPLL
bool
+config DPLL_REFCNT_TRACKER
+ bool "DPLL reference count tracking"
+ depends on DEBUG_KERNEL && STACKTRACE_SUPPORT && DPLL
+ select REF_TRACKER
+ help
+ Enable reference count tracking for DPLL devices and pins.
+ This helps debugging reference leaks and use-after-free bugs
+ by recording stack traces for each get/put operation.
+
+ The tracking information is exposed via debugfs at:
+ /sys/kernel/debug/ref_tracker/dpll_device_*
+ /sys/kernel/debug/ref_tracker/dpll_pin_*
+
+ If unsure, say N.
+
source "drivers/dpll/zl3073x/Kconfig"
endmenu
diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
index f6ab4f0cad84d..627a5b39a0efd 100644
--- a/drivers/dpll/dpll_core.c
+++ b/drivers/dpll/dpll_core.c
@@ -41,6 +41,7 @@ struct dpll_device_registration {
struct list_head list;
const struct dpll_device_ops *ops;
void *priv;
+ dpll_tracker tracker;
};
struct dpll_pin_registration {
@@ -48,6 +49,7 @@ struct dpll_pin_registration {
const struct dpll_pin_ops *ops;
void *priv;
void *cookie;
+ dpll_tracker tracker;
};
static int call_dpll_notifiers(unsigned long action, void *info)
@@ -83,33 +85,68 @@ void dpll_pin_notify(struct dpll_pin *pin, unsigned long action)
call_dpll_notifiers(action, &info);
}
-static void __dpll_device_hold(struct dpll_device *dpll)
+static void dpll_device_tracker_alloc(struct dpll_device *dpll,
+ dpll_tracker *tracker)
{
+#ifdef CONFIG_DPLL_REFCNT_TRACKER
+ ref_tracker_alloc(&dpll->refcnt_tracker, tracker, GFP_KERNEL);
+#endif
+}
+
+static void dpll_device_tracker_free(struct dpll_device *dpll,
+ dpll_tracker *tracker)
+{
+#ifdef CONFIG_DPLL_REFCNT_TRACKER
+ ref_tracker_free(&dpll->refcnt_tracker, tracker);
+#endif
+}
+
+static void __dpll_device_hold(struct dpll_device *dpll, dpll_tracker *tracker)
+{
+ dpll_device_tracker_alloc(dpll, tracker);
refcount_inc(&dpll->refcount);
}
-static void __dpll_device_put(struct dpll_device *dpll)
+static void __dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker)
{
+ dpll_device_tracker_free(dpll, tracker);
if (refcount_dec_and_test(&dpll->refcount)) {
ASSERT_DPLL_NOT_REGISTERED(dpll);
WARN_ON_ONCE(!xa_empty(&dpll->pin_refs));
xa_destroy(&dpll->pin_refs);
xa_erase(&dpll_device_xa, dpll->id);
WARN_ON(!list_empty(&dpll->registration_list));
+ ref_tracker_dir_exit(&dpll->refcnt_tracker);
kfree(dpll);
}
}
-static void __dpll_pin_hold(struct dpll_pin *pin)
+static void dpll_pin_tracker_alloc(struct dpll_pin *pin, dpll_tracker *tracker)
{
+#ifdef CONFIG_DPLL_REFCNT_TRACKER
+ ref_tracker_alloc(&pin->refcnt_tracker, tracker, GFP_KERNEL);
+#endif
+}
+
+static void dpll_pin_tracker_free(struct dpll_pin *pin, dpll_tracker *tracker)
+{
+#ifdef CONFIG_DPLL_REFCNT_TRACKER
+ ref_tracker_free(&pin->refcnt_tracker, tracker);
+#endif
+}
+
+static void __dpll_pin_hold(struct dpll_pin *pin, dpll_tracker *tracker)
+{
+ dpll_pin_tracker_alloc(pin, tracker);
refcount_inc(&pin->refcount);
}
static void dpll_pin_idx_free(u32 pin_idx);
static void dpll_pin_prop_free(struct dpll_pin_properties *prop);
-static void __dpll_pin_put(struct dpll_pin *pin)
+static void __dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker)
{
+ dpll_pin_tracker_free(pin, tracker);
if (refcount_dec_and_test(&pin->refcount)) {
xa_erase(&dpll_pin_xa, pin->id);
xa_destroy(&pin->dpll_refs);
@@ -118,6 +155,7 @@ static void __dpll_pin_put(struct dpll_pin *pin)
dpll_pin_prop_free(&pin->prop);
fwnode_handle_put(pin->fwnode);
dpll_pin_idx_free(pin->pin_idx);
+ ref_tracker_dir_exit(&pin->refcnt_tracker);
kfree_rcu(pin, rcu);
}
}
@@ -191,7 +229,7 @@ dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin,
reg->ops = ops;
reg->priv = priv;
reg->cookie = cookie;
- __dpll_pin_hold(pin);
+ __dpll_pin_hold(pin, &reg->tracker);
if (ref_exists)
refcount_inc(&ref->refcount);
list_add_tail(&reg->list, &ref->registration_list);
@@ -214,7 +252,7 @@ static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin,
if (WARN_ON(!reg))
return -EINVAL;
list_del(&reg->list);
- __dpll_pin_put(pin);
+ __dpll_pin_put(pin, &reg->tracker);
kfree(reg);
if (refcount_dec_and_test(&ref->refcount)) {
xa_erase(xa_pins, i);
@@ -272,7 +310,7 @@ dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll,
reg->ops = ops;
reg->priv = priv;
reg->cookie = cookie;
- __dpll_device_hold(dpll);
+ __dpll_device_hold(dpll, &reg->tracker);
if (ref_exists)
refcount_inc(&ref->refcount);
list_add_tail(&reg->list, &ref->registration_list);
@@ -295,7 +333,7 @@ dpll_xa_ref_dpll_del(struct xarray *xa_dplls, struct dpll_device *dpll,
if (WARN_ON(!reg))
return;
list_del(&reg->list);
- __dpll_device_put(dpll);
+ __dpll_device_put(dpll, &reg->tracker);
kfree(reg);
if (refcount_dec_and_test(&ref->refcount)) {
xa_erase(xa_dplls, i);
@@ -337,6 +375,7 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module)
return ERR_PTR(ret);
}
xa_init_flags(&dpll->pin_refs, XA_FLAGS_ALLOC);
+ ref_tracker_dir_init(&dpll->refcnt_tracker, 128, "dpll_device");
return dpll;
}
@@ -346,6 +385,7 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module)
* @clock_id: clock_id of creator
* @device_idx: idx given by device driver
* @module: reference to registering module
+ * @tracker: tracking object for the acquired reference
*
* Get existing object of a dpll device, unique for given arguments.
* Create new if doesn't exist yet.
@@ -356,7 +396,8 @@ dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module)
* * ERR_PTR(X) - error
*/
struct dpll_device *
-dpll_device_get(u64 clock_id, u32 device_idx, struct module *module)
+dpll_device_get(u64 clock_id, u32 device_idx, struct module *module,
+ dpll_tracker *tracker)
{
struct dpll_device *dpll, *ret = NULL;
unsigned long index;
@@ -366,13 +407,17 @@ dpll_device_get(u64 clock_id, u32 device_idx, struct module *module)
if (dpll->clock_id == clock_id &&
dpll->device_idx == device_idx &&
dpll->module == module) {
- __dpll_device_hold(dpll);
+ __dpll_device_hold(dpll, tracker);
ret = dpll;
break;
}
}
- if (!ret)
+ if (!ret) {
ret = dpll_device_alloc(clock_id, device_idx, module);
+ if (!IS_ERR(ret))
+ dpll_device_tracker_alloc(ret, tracker);
+ }
+
mutex_unlock(&dpll_lock);
return ret;
@@ -382,15 +427,16 @@ EXPORT_SYMBOL_GPL(dpll_device_get);
/**
* dpll_device_put - decrease the refcount and free memory if possible
* @dpll: dpll_device struct pointer
+ * @tracker: tracking object for the acquired reference
*
* Context: Acquires a lock (dpll_lock)
* Drop reference for a dpll device, if all references are gone, delete
* dpll device object.
*/
-void dpll_device_put(struct dpll_device *dpll)
+void dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker)
{
mutex_lock(&dpll_lock);
- __dpll_device_put(dpll);
+ __dpll_device_put(dpll, tracker);
mutex_unlock(&dpll_lock);
}
EXPORT_SYMBOL_GPL(dpll_device_put);
@@ -452,7 +498,7 @@ int dpll_device_register(struct dpll_device *dpll, enum dpll_type type,
reg->ops = ops;
reg->priv = priv;
dpll->type = type;
- __dpll_device_hold(dpll);
+ __dpll_device_hold(dpll, &reg->tracker);
first_registration = list_empty(&dpll->registration_list);
list_add_tail(&reg->list, &dpll->registration_list);
if (!first_registration) {
@@ -492,7 +538,7 @@ void dpll_device_unregister(struct dpll_device *dpll,
return;
}
list_del(&reg->list);
- __dpll_device_put(dpll);
+ __dpll_device_put(dpll, &reg->tracker);
kfree(reg);
if (!list_empty(&dpll->registration_list)) {
@@ -622,6 +668,7 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
&dpll_pin_xa_id, GFP_KERNEL);
if (ret < 0)
goto err_xa_alloc;
+ ref_tracker_dir_init(&pin->refcnt_tracker, 128, "dpll_pin");
return pin;
err_xa_alloc:
xa_destroy(&pin->dpll_refs);
@@ -683,6 +730,7 @@ EXPORT_SYMBOL_GPL(unregister_dpll_notifier);
* @pin_idx: idx given by dev driver
* @module: reference to registering module
* @prop: dpll pin properties
+ * @tracker: tracking object for the acquired reference
*
* Get existing object of a pin (unique for given arguments) or create new
* if doesn't exist yet.
@@ -694,7 +742,7 @@ EXPORT_SYMBOL_GPL(unregister_dpll_notifier);
*/
struct dpll_pin *
dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module,
- const struct dpll_pin_properties *prop)
+ const struct dpll_pin_properties *prop, dpll_tracker *tracker)
{
struct dpll_pin *pos, *ret = NULL;
unsigned long i;
@@ -704,13 +752,16 @@ dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module,
if (pos->clock_id == clock_id &&
pos->pin_idx == pin_idx &&
pos->module == module) {
- __dpll_pin_hold(pos);
+ __dpll_pin_hold(pos, tracker);
ret = pos;
break;
}
}
- if (!ret)
+ if (!ret) {
ret = dpll_pin_alloc(clock_id, pin_idx, module, prop);
+ if (!IS_ERR(ret))
+ dpll_pin_tracker_alloc(ret, tracker);
+ }
mutex_unlock(&dpll_lock);
return ret;
@@ -720,15 +771,16 @@ EXPORT_SYMBOL_GPL(dpll_pin_get);
/**
* dpll_pin_put - decrease the refcount and free memory if possible
* @pin: pointer to a pin to be put
+ * @tracker: tracking object for the acquired reference
*
* Drop reference for a pin, if all references are gone, delete pin object.
*
* Context: Acquires a lock (dpll_lock)
*/
-void dpll_pin_put(struct dpll_pin *pin)
+void dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker)
{
mutex_lock(&dpll_lock);
- __dpll_pin_put(pin);
+ __dpll_pin_put(pin, tracker);
mutex_unlock(&dpll_lock);
}
EXPORT_SYMBOL_GPL(dpll_pin_put);
@@ -752,6 +804,7 @@ EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set);
/**
* fwnode_dpll_pin_find - find dpll pin by firmware node reference
* @fwnode: reference to firmware node
+ * @tracker: tracking object for the acquired reference
*
* Get existing object of a pin that is associated with given firmware node
* reference.
@@ -761,7 +814,8 @@ EXPORT_SYMBOL_GPL(dpll_pin_fwnode_set);
* * valid dpll_pin pointer on success
* * NULL when no such pin exists
*/
-struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode)
+struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode,
+ dpll_tracker *tracker)
{
struct dpll_pin *pin, *ret = NULL;
unsigned long index;
@@ -769,7 +823,7 @@ struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode)
mutex_lock(&dpll_lock);
xa_for_each(&dpll_pin_xa, index, pin) {
if (pin->fwnode == fwnode) {
- __dpll_pin_hold(pin);
+ __dpll_pin_hold(pin, tracker);
ret = pin;
break;
}
diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h
index b7b4bb251f739..71ac88ef20172 100644
--- a/drivers/dpll/dpll_core.h
+++ b/drivers/dpll/dpll_core.h
@@ -10,6 +10,7 @@
#include <linux/dpll.h>
#include <linux/list.h>
#include <linux/refcount.h>
+#include <linux/ref_tracker.h>
#include "dpll_nl.h"
#define DPLL_REGISTERED XA_MARK_1
@@ -23,6 +24,7 @@
* @type: type of a dpll
* @pin_refs: stores pins registered within a dpll
* @refcount: refcount
+ * @refcnt_tracker: ref_tracker directory for debugging reference leaks
* @registration_list: list of registered ops and priv data of dpll owners
**/
struct dpll_device {
@@ -33,6 +35,7 @@ struct dpll_device {
enum dpll_type type;
struct xarray pin_refs;
refcount_t refcount;
+ struct ref_tracker_dir refcnt_tracker;
struct list_head registration_list;
};
@@ -48,6 +51,7 @@ struct dpll_device {
* @ref_sync_pins: hold references to pins for Reference SYNC feature
* @prop: pin properties copied from the registerer
* @refcount: refcount
+ * @refcnt_tracker: ref_tracker directory for debugging reference leaks
* @rcu: rcu_head for kfree_rcu()
**/
struct dpll_pin {
@@ -61,6 +65,7 @@ struct dpll_pin {
struct xarray ref_sync_pins;
struct dpll_pin_properties prop;
refcount_t refcount;
+ struct ref_tracker_dir refcnt_tracker;
struct rcu_head rcu;
};
diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c
index 9eed21088adac..8788bcab7ec53 100644
--- a/drivers/dpll/zl3073x/dpll.c
+++ b/drivers/dpll/zl3073x/dpll.c
@@ -1480,7 +1480,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index)
/* Create or get existing DPLL pin */
pin->dpll_pin = dpll_pin_get(zldpll->dev->clock_id, index, THIS_MODULE,
- &props->dpll_props);
+ &props->dpll_props, NULL);
if (IS_ERR(pin->dpll_pin)) {
rc = PTR_ERR(pin->dpll_pin);
goto err_pin_get;
@@ -1503,7 +1503,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index)
return 0;
err_register:
- dpll_pin_put(pin->dpll_pin);
+ dpll_pin_put(pin->dpll_pin, NULL);
err_prio_get:
pin->dpll_pin = NULL;
err_pin_get:
@@ -1534,7 +1534,7 @@ zl3073x_dpll_pin_unregister(struct zl3073x_dpll_pin *pin)
/* Unregister the pin */
dpll_pin_unregister(zldpll->dpll_dev, pin->dpll_pin, ops, pin);
- dpll_pin_put(pin->dpll_pin);
+ dpll_pin_put(pin->dpll_pin, NULL);
pin->dpll_pin = NULL;
}
@@ -1708,7 +1708,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll)
dpll_mode_refsel);
zldpll->dpll_dev = dpll_device_get(zldev->clock_id, zldpll->id,
- THIS_MODULE);
+ THIS_MODULE, NULL);
if (IS_ERR(zldpll->dpll_dev)) {
rc = PTR_ERR(zldpll->dpll_dev);
zldpll->dpll_dev = NULL;
@@ -1720,7 +1720,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll)
zl3073x_prop_dpll_type_get(zldev, zldpll->id),
&zl3073x_dpll_device_ops, zldpll);
if (rc) {
- dpll_device_put(zldpll->dpll_dev);
+ dpll_device_put(zldpll->dpll_dev, NULL);
zldpll->dpll_dev = NULL;
}
@@ -1743,7 +1743,7 @@ zl3073x_dpll_device_unregister(struct zl3073x_dpll *zldpll)
dpll_device_unregister(zldpll->dpll_dev, &zl3073x_dpll_device_ops,
zldpll);
- dpll_device_put(zldpll->dpll_dev);
+ dpll_device_put(zldpll->dpll_dev, NULL);
zldpll->dpll_dev = NULL;
}
diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c
index 53b54e395a2ed..64b7b045ecd58 100644
--- a/drivers/net/ethernet/intel/ice/ice_dpll.c
+++ b/drivers/net/ethernet/intel/ice/ice_dpll.c
@@ -2814,7 +2814,7 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count)
int i;
for (i = 0; i < count; i++)
- dpll_pin_put(pins[i].pin);
+ dpll_pin_put(pins[i].pin, NULL);
}
/**
@@ -2840,7 +2840,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins,
for (i = 0; i < count; i++) {
pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE,
- &pins[i].prop);
+ &pins[i].prop, NULL);
if (IS_ERR(pins[i].pin)) {
ret = PTR_ERR(pins[i].pin);
goto release_pins;
@@ -2851,7 +2851,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins,
release_pins:
while (--i >= 0)
- dpll_pin_put(pins[i].pin);
+ dpll_pin_put(pins[i].pin, NULL);
return ret;
}
@@ -3037,7 +3037,7 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf)
if (WARN_ON_ONCE(!vsi || !vsi->netdev))
return;
dpll_netdev_pin_clear(vsi->netdev);
- dpll_pin_put(rclk->pin);
+ dpll_pin_put(rclk->pin, NULL);
}
/**
@@ -3247,7 +3247,7 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu)
{
if (cgu)
dpll_device_unregister(d->dpll, d->ops, d);
- dpll_device_put(d->dpll);
+ dpll_device_put(d->dpll, NULL);
}
/**
@@ -3271,7 +3271,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu,
u64 clock_id = pf->dplls.clock_id;
int ret;
- d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE);
+ d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, NULL);
if (IS_ERR(d->dpll)) {
ret = PTR_ERR(d->dpll);
dev_err(ice_pf_to_dev(pf),
@@ -3287,7 +3287,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu,
ice_dpll_update_state(pf, d, true);
ret = dpll_device_register(d->dpll, type, ops, d);
if (ret) {
- dpll_device_put(d->dpll);
+ dpll_device_put(d->dpll, NULL);
return ret;
}
d->ops = ops;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c
index 3ea8a1766ae28..541d83e5d7183 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c
@@ -438,7 +438,7 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev,
auxiliary_set_drvdata(adev, mdpll);
/* Multiple mdev instances might share one DPLL device. */
- mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE);
+ mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, NULL);
if (IS_ERR(mdpll->dpll)) {
err = PTR_ERR(mdpll->dpll);
goto err_free_mdpll;
@@ -451,7 +451,8 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev,
/* Multiple mdev instances might share one DPLL pin. */
mdpll->dpll_pin = dpll_pin_get(clock_id, mlx5_get_dev_index(mdev),
- THIS_MODULE, &mlx5_dpll_pin_properties);
+ THIS_MODULE, &mlx5_dpll_pin_properties,
+ NULL);
if (IS_ERR(mdpll->dpll_pin)) {
err = PTR_ERR(mdpll->dpll_pin);
goto err_unregister_dpll_device;
@@ -479,11 +480,11 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev,
dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin,
&mlx5_dpll_pins_ops, mdpll);
err_put_dpll_pin:
- dpll_pin_put(mdpll->dpll_pin);
+ dpll_pin_put(mdpll->dpll_pin, NULL);
err_unregister_dpll_device:
dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll);
err_put_dpll_device:
- dpll_device_put(mdpll->dpll);
+ dpll_device_put(mdpll->dpll, NULL);
err_free_mdpll:
kfree(mdpll);
return err;
@@ -499,9 +500,9 @@ static void mlx5_dpll_remove(struct auxiliary_device *adev)
destroy_workqueue(mdpll->wq);
dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin,
&mlx5_dpll_pins_ops, mdpll);
- dpll_pin_put(mdpll->dpll_pin);
+ dpll_pin_put(mdpll->dpll_pin, NULL);
dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll);
- dpll_device_put(mdpll->dpll);
+ dpll_device_put(mdpll->dpll, NULL);
kfree(mdpll);
mlx5_dpll_synce_status_set(mdev,
diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
index 65fe05cac8c42..f39b3966b3e8c 100644
--- a/drivers/ptp/ptp_ocp.c
+++ b/drivers/ptp/ptp_ocp.c
@@ -4788,7 +4788,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
devlink_register(devlink);
clkid = pci_get_dsn(pdev);
- bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE);
+ bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, NULL);
if (IS_ERR(bp->dpll)) {
err = PTR_ERR(bp->dpll);
dev_err(&pdev->dev, "dpll_device_alloc failed\n");
@@ -4800,7 +4800,8 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto out;
for (i = 0; i < OCP_SMA_NUM; i++) {
- bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, &bp->sma[i].dpll_prop);
+ bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE,
+ &bp->sma[i].dpll_prop, NULL);
if (IS_ERR(bp->sma[i].dpll_pin)) {
err = PTR_ERR(bp->sma[i].dpll_pin);
goto out_dpll;
@@ -4809,7 +4810,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
err = dpll_pin_register(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops,
&bp->sma[i]);
if (err) {
- dpll_pin_put(bp->sma[i].dpll_pin);
+ dpll_pin_put(bp->sma[i].dpll_pin, NULL);
goto out_dpll;
}
}
@@ -4819,9 +4820,9 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
out_dpll:
while (i--) {
dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]);
- dpll_pin_put(bp->sma[i].dpll_pin);
+ dpll_pin_put(bp->sma[i].dpll_pin, NULL);
}
- dpll_device_put(bp->dpll);
+ dpll_device_put(bp->dpll, NULL);
out:
ptp_ocp_detach(bp);
out_disable:
@@ -4842,11 +4843,11 @@ ptp_ocp_remove(struct pci_dev *pdev)
for (i = 0; i < OCP_SMA_NUM; i++) {
if (bp->sma[i].dpll_pin) {
dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]);
- dpll_pin_put(bp->sma[i].dpll_pin);
+ dpll_pin_put(bp->sma[i].dpll_pin, NULL);
}
}
dpll_device_unregister(bp->dpll, &dpll_ops, bp);
- dpll_device_put(bp->dpll);
+ dpll_device_put(bp->dpll, NULL);
devlink_unregister(devlink);
ptp_ocp_detach(bp);
pci_disable_device(pdev);
diff --git a/include/linux/dpll.h b/include/linux/dpll.h
index 8fff048131f1d..5c80cdab0c180 100644
--- a/include/linux/dpll.h
+++ b/include/linux/dpll.h
@@ -18,6 +18,7 @@ struct dpll_device;
struct dpll_pin;
struct dpll_pin_esync;
struct fwnode_handle;
+struct ref_tracker;
struct dpll_device_ops {
int (*mode_get)(const struct dpll_device *dpll, void *dpll_priv,
@@ -173,6 +174,12 @@ struct dpll_pin_properties {
u32 phase_gran;
};
+#ifdef CONFIG_DPLL_REFCNT_TRACKER
+typedef struct ref_tracker *dpll_tracker;
+#else
+typedef struct {} dpll_tracker;
+#endif
+
#define DPLL_DEVICE_CREATED 1
#define DPLL_DEVICE_DELETED 2
#define DPLL_DEVICE_CHANGED 3
@@ -205,7 +212,8 @@ size_t dpll_netdev_pin_handle_size(const struct net_device *dev);
int dpll_netdev_add_pin_handle(struct sk_buff *msg,
const struct net_device *dev);
-struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode);
+struct dpll_pin *fwnode_dpll_pin_find(struct fwnode_handle *fwnode,
+ dpll_tracker *tracker);
#else
static inline void
dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin) { }
@@ -223,16 +231,17 @@ dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev)
}
static inline struct dpll_pin *
-fwnode_dpll_pin_find(struct fwnode_handle *fwnode)
+fwnode_dpll_pin_find(struct fwnode_handle *fwnode, dpll_tracker *tracker);
{
return NULL;
}
#endif
struct dpll_device *
-dpll_device_get(u64 clock_id, u32 dev_driver_id, struct module *module);
+dpll_device_get(u64 clock_id, u32 dev_driver_id, struct module *module,
+ dpll_tracker *tracker);
-void dpll_device_put(struct dpll_device *dpll);
+void dpll_device_put(struct dpll_device *dpll, dpll_tracker *tracker);
int dpll_device_register(struct dpll_device *dpll, enum dpll_type type,
const struct dpll_device_ops *ops, void *priv);
@@ -244,7 +253,7 @@ void dpll_device_unregister(struct dpll_device *dpll,
struct dpll_pin *
dpll_pin_get(u64 clock_id, u32 dev_driver_id, struct module *module,
- const struct dpll_pin_properties *prop);
+ const struct dpll_pin_properties *prop, dpll_tracker *tracker);
int dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin,
const struct dpll_pin_ops *ops, void *priv);
@@ -252,7 +261,7 @@ int dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin,
void dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin,
const struct dpll_pin_ops *ops, void *priv);
-void dpll_pin_put(struct dpll_pin *pin);
+void dpll_pin_put(struct dpll_pin *pin, dpll_tracker *tracker);
void dpll_pin_fwnode_set(struct dpll_pin *pin, struct fwnode_handle *fwnode);
--
2.52.0
|
{
"author": "Ivan Vecera <ivecera@redhat.com>",
"date": "Mon, 2 Feb 2026 18:16:36 +0100",
"thread_id": "20260202171638.17427-7-ivecera@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
|
This series introduces Synchronous Ethernet (SyncE) support for the Intel
E825-C Ethernet controller. Unlike previous generations where DPLL
connections were implicitly assumed, the E825-C architecture relies
on the platform firmware (ACPI) to describe the physical connections
between the Ethernet controller and external DPLLs (such as the ZL3073x).
To accommodate this, the series extends the DPLL subsystem to support
firmware node (fwnode) associations, asynchronous discovery via notifiers,
and dynamic pin management. Additionally, a significant refactor of
the DPLL reference counting logic is included to ensure robustness and
debuggability.
DPLL Core Extensions:
* Firmware Node Association: Pins can now be associated with a struct
fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows
drivers to link pin objects with their corresponding DT/ACPI nodes.
* Asynchronous Notifiers: A raw notifier chain is added to the DPLL core.
This allows the Ethernet driver to subscribe to events and react when
the platform DPLL driver registers the parent pins, resolving probe
ordering dependencies.
* Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have
the core automatically allocate a unique pin index (a minimal usage sketch follows this list).
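A minimal sketch of how a driver might combine the firmware-node association and
dynamic indexing described above (illustrative only; dpll_pin_get(),
dpll_pin_fwnode_set() and DPLL_PIN_IDX_UNSPEC are the names introduced by this
series, the example_* names are hypothetical):

#include <linux/dpll.h>
#include <linux/err.h>
#include <linux/fwnode.h>
#include <linux/module.h>

static struct dpll_pin *
example_pin_create(u64 clock_id, struct fwnode_handle *fwnode,
		   const struct dpll_pin_properties *prop)
{
	struct dpll_pin *pin;

	/* Let the core pick a free pin index instead of hard-coding one.
	 * The trailing NULL skips reference tracking (see the refcount
	 * tracking patches later in the series).
	 */
	pin = dpll_pin_get(clock_id, DPLL_PIN_IDX_UNSPEC, THIS_MODULE,
			   prop, NULL);
	if (IS_ERR(pin))
		return pin;

	/* Tie the pin to its DT/ACPI node so that consumers can look it
	 * up later through fwnode_dpll_pin_find().
	 */
	dpll_pin_fwnode_set(pin, fwnode);

	return pin;
}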
Reference Counting & Debugging:
* Refactor: The reference counting logic in the core is consolidated.
Internal list management helpers now automatically handle hold/put
operations, removing fragile open-coded logic in the registration paths.
* Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added.
This allows developers to instrument and debug reference leaks by
recording stack traces for every get/put operation.
Driver Updates:
* zl3073x: Updated to associate pins with fwnode handles using the new
setter and support the 'mux' pin type.
* ice: Implements the E825-C specific hardware configuration for SyncE
(CGU registers). It utilizes the new notifier and fwnode APIs to
dynamically discover and attach to the platform DPLLs.
Patch Summary:
Patch 1: DPLL Core (fwnode association).
Patch 2: Driver zl3073x (Set fwnode).
Patch 3-4: DPLL Core (Notifiers and dynamic IDs).
Patch 5: Driver zl3073x (Mux type).
Patch 6: DPLL Core (Refcount refactor).
Patch 7-8: Refcount tracking infrastructure and driver updates.
Patch 9: Driver ice (E825-C SyncE logic).
Changes in v4:
* Fixed documentation and function stub issues found by AI
Arkadiusz Kubalewski (1):
ice: dpll: Support E825-C SyncE and dynamic pin discovery
Ivan Vecera (7):
dpll: Allow associating dpll pin with a firmware node
dpll: zl3073x: Associate pin with fwnode handle
dpll: Support dynamic pin index allocation
dpll: zl3073x: Add support for mux pin type
dpll: Enhance and consolidate reference counting logic
dpll: Add reference count tracking support
drivers: Add support for DPLL reference count tracking
Petr Oros (1):
dpll: Add notifier chain for dpll events
drivers/dpll/Kconfig | 15 +
drivers/dpll/dpll_core.c | 288 ++++++-
drivers/dpll/dpll_core.h | 11 +
drivers/dpll/dpll_netlink.c | 6 +
drivers/dpll/zl3073x/dpll.c | 15 +-
drivers/dpll/zl3073x/dpll.h | 2 +
drivers/dpll/zl3073x/prop.c | 2 +
drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++---
drivers/net/ethernet/intel/ice/ice_dpll.h | 30 +
drivers/net/ethernet/intel/ice/ice_lib.c | 3 +
drivers/net/ethernet/intel/ice/ice_ptp.c | 32 +
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +-
drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++
drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +-
drivers/net/ethernet/intel/ice/ice_type.h | 6 +
.../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +-
drivers/ptp/ptp_ocp.c | 18 +-
include/linux/dpll.h | 59 +-
18 files changed, 1347 insertions(+), 150 deletions(-)
--
2.52.0
|
Update existing DPLL drivers to utilize the DPLL reference count
tracking infrastructure.
Add dpll_tracker fields to the drivers' internal device and pin
structures. Pass pointers to these trackers when calling
dpll_device_get/put() and dpll_pin_get/put().
This allows developers to inspect the specific references held by this
driver via debugfs when CONFIG_DPLL_REFCNT_TRACKER is enabled, aiding
in the debugging of resource leaks.
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
---
drivers/dpll/zl3073x/dpll.c | 14 ++++++++------
drivers/dpll/zl3073x/dpll.h | 2 ++
drivers/net/ethernet/intel/ice/ice_dpll.c | 15 ++++++++-------
drivers/net/ethernet/intel/ice/ice_dpll.h | 4 ++++
drivers/net/ethernet/mellanox/mlx5/core/dpll.c | 15 +++++++++------
drivers/ptp/ptp_ocp.c | 17 ++++++++++-------
6 files changed, 41 insertions(+), 26 deletions(-)
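The conversion in each driver is mechanical; a minimal sketch of the resulting
pattern (hypothetical foo_* driver, dpll_* signatures as changed by this series):

#include <linux/dpll.h>
#include <linux/err.h>
#include <linux/module.h>

struct foo_dpll {
	struct dpll_device *dpll;
	dpll_tracker dpll_tracker;	/* records this driver's device ref */
	struct dpll_pin *pin;
	dpll_tracker pin_tracker;	/* records this driver's pin ref */
};

static int foo_dpll_acquire(struct foo_dpll *fd, u64 clock_id,
			    const struct dpll_pin_properties *prop)
{
	fd->dpll = dpll_device_get(clock_id, 0, THIS_MODULE,
				   &fd->dpll_tracker);
	if (IS_ERR(fd->dpll))
		return PTR_ERR(fd->dpll);

	fd->pin = dpll_pin_get(clock_id, 0, THIS_MODULE, prop,
			       &fd->pin_tracker);
	if (IS_ERR(fd->pin)) {
		dpll_device_put(fd->dpll, &fd->dpll_tracker);
		return PTR_ERR(fd->pin);
	}
	return 0;
}

static void foo_dpll_release(struct foo_dpll *fd)
{
	/* Each put is paired with the tracker used for the matching get. */
	dpll_pin_put(fd->pin, &fd->pin_tracker);
	dpll_device_put(fd->dpll, &fd->dpll_tracker);
}

With CONFIG_DPLL_REFCNT_TRACKER disabled, dpll_tracker is an empty struct, so
the added fields are effectively free.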
diff --git a/drivers/dpll/zl3073x/dpll.c b/drivers/dpll/zl3073x/dpll.c
index 8788bcab7ec53..a99d143a7acde 100644
--- a/drivers/dpll/zl3073x/dpll.c
+++ b/drivers/dpll/zl3073x/dpll.c
@@ -29,6 +29,7 @@
* @list: this DPLL pin list entry
* @dpll: DPLL the pin is registered to
* @dpll_pin: pointer to registered dpll_pin
+ * @tracker: tracking object for the acquired reference
* @label: package label
* @dir: pin direction
* @id: pin id
@@ -44,6 +45,7 @@ struct zl3073x_dpll_pin {
struct list_head list;
struct zl3073x_dpll *dpll;
struct dpll_pin *dpll_pin;
+ dpll_tracker tracker;
char label[8];
enum dpll_pin_direction dir;
u8 id;
@@ -1480,7 +1482,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index)
/* Create or get existing DPLL pin */
pin->dpll_pin = dpll_pin_get(zldpll->dev->clock_id, index, THIS_MODULE,
- &props->dpll_props, NULL);
+ &props->dpll_props, &pin->tracker);
if (IS_ERR(pin->dpll_pin)) {
rc = PTR_ERR(pin->dpll_pin);
goto err_pin_get;
@@ -1503,7 +1505,7 @@ zl3073x_dpll_pin_register(struct zl3073x_dpll_pin *pin, u32 index)
return 0;
err_register:
- dpll_pin_put(pin->dpll_pin, NULL);
+ dpll_pin_put(pin->dpll_pin, &pin->tracker);
err_prio_get:
pin->dpll_pin = NULL;
err_pin_get:
@@ -1534,7 +1536,7 @@ zl3073x_dpll_pin_unregister(struct zl3073x_dpll_pin *pin)
/* Unregister the pin */
dpll_pin_unregister(zldpll->dpll_dev, pin->dpll_pin, ops, pin);
- dpll_pin_put(pin->dpll_pin, NULL);
+ dpll_pin_put(pin->dpll_pin, &pin->tracker);
pin->dpll_pin = NULL;
}
@@ -1708,7 +1710,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll)
dpll_mode_refsel);
zldpll->dpll_dev = dpll_device_get(zldev->clock_id, zldpll->id,
- THIS_MODULE, NULL);
+ THIS_MODULE, &zldpll->tracker);
if (IS_ERR(zldpll->dpll_dev)) {
rc = PTR_ERR(zldpll->dpll_dev);
zldpll->dpll_dev = NULL;
@@ -1720,7 +1722,7 @@ zl3073x_dpll_device_register(struct zl3073x_dpll *zldpll)
zl3073x_prop_dpll_type_get(zldev, zldpll->id),
&zl3073x_dpll_device_ops, zldpll);
if (rc) {
- dpll_device_put(zldpll->dpll_dev, NULL);
+ dpll_device_put(zldpll->dpll_dev, &zldpll->tracker);
zldpll->dpll_dev = NULL;
}
@@ -1743,7 +1745,7 @@ zl3073x_dpll_device_unregister(struct zl3073x_dpll *zldpll)
dpll_device_unregister(zldpll->dpll_dev, &zl3073x_dpll_device_ops,
zldpll);
- dpll_device_put(zldpll->dpll_dev, NULL);
+ dpll_device_put(zldpll->dpll_dev, &zldpll->tracker);
zldpll->dpll_dev = NULL;
}
diff --git a/drivers/dpll/zl3073x/dpll.h b/drivers/dpll/zl3073x/dpll.h
index e8c39b44b356c..c65c798c37927 100644
--- a/drivers/dpll/zl3073x/dpll.h
+++ b/drivers/dpll/zl3073x/dpll.h
@@ -18,6 +18,7 @@
* @check_count: periodic check counter
* @phase_monitor: is phase offset monitor enabled
* @dpll_dev: pointer to registered DPLL device
+ * @tracker: tracking object for the acquired reference
* @lock_status: last saved DPLL lock status
* @pins: list of pins
* @change_work: device change notification work
@@ -31,6 +32,7 @@ struct zl3073x_dpll {
u8 check_count;
bool phase_monitor;
struct dpll_device *dpll_dev;
+ dpll_tracker tracker;
enum dpll_lock_status lock_status;
struct list_head pins;
struct work_struct change_work;
diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c
index 64b7b045ecd58..4eca62688d834 100644
--- a/drivers/net/ethernet/intel/ice/ice_dpll.c
+++ b/drivers/net/ethernet/intel/ice/ice_dpll.c
@@ -2814,7 +2814,7 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count)
int i;
for (i = 0; i < count; i++)
- dpll_pin_put(pins[i].pin, NULL);
+ dpll_pin_put(pins[i].pin, &pins[i].tracker);
}
/**
@@ -2840,7 +2840,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins,
for (i = 0; i < count; i++) {
pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE,
- &pins[i].prop, NULL);
+ &pins[i].prop, &pins[i].tracker);
if (IS_ERR(pins[i].pin)) {
ret = PTR_ERR(pins[i].pin);
goto release_pins;
@@ -2851,7 +2851,7 @@ ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins,
release_pins:
while (--i >= 0)
- dpll_pin_put(pins[i].pin, NULL);
+ dpll_pin_put(pins[i].pin, &pins[i].tracker);
return ret;
}
@@ -3037,7 +3037,7 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf)
if (WARN_ON_ONCE(!vsi || !vsi->netdev))
return;
dpll_netdev_pin_clear(vsi->netdev);
- dpll_pin_put(rclk->pin, NULL);
+ dpll_pin_put(rclk->pin, &rclk->tracker);
}
/**
@@ -3247,7 +3247,7 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu)
{
if (cgu)
dpll_device_unregister(d->dpll, d->ops, d);
- dpll_device_put(d->dpll, NULL);
+ dpll_device_put(d->dpll, &d->tracker);
}
/**
@@ -3271,7 +3271,8 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu,
u64 clock_id = pf->dplls.clock_id;
int ret;
- d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE, NULL);
+ d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE,
+ &d->tracker);
if (IS_ERR(d->dpll)) {
ret = PTR_ERR(d->dpll);
dev_err(ice_pf_to_dev(pf),
@@ -3287,7 +3288,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu,
ice_dpll_update_state(pf, d, true);
ret = dpll_device_register(d->dpll, type, ops, d);
if (ret) {
- dpll_device_put(d->dpll, NULL);
+ dpll_device_put(d->dpll, &d->tracker);
return ret;
}
d->ops = ops;
diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h
index c0da03384ce91..63fac6510df6e 100644
--- a/drivers/net/ethernet/intel/ice/ice_dpll.h
+++ b/drivers/net/ethernet/intel/ice/ice_dpll.h
@@ -23,6 +23,7 @@ enum ice_dpll_pin_sw {
/** ice_dpll_pin - store info about pins
* @pin: dpll pin structure
* @pf: pointer to pf, which has registered the dpll_pin
+ * @tracker: reference count tracker
* @idx: ice pin private idx
* @num_parents: hols number of parent pins
* @parent_idx: hold indexes of parent pins
@@ -37,6 +38,7 @@ enum ice_dpll_pin_sw {
struct ice_dpll_pin {
struct dpll_pin *pin;
struct ice_pf *pf;
+ dpll_tracker tracker;
u8 idx;
u8 num_parents;
u8 parent_idx[ICE_DPLL_RCLK_NUM_MAX];
@@ -58,6 +60,7 @@ struct ice_dpll_pin {
/** ice_dpll - store info required for DPLL control
* @dpll: pointer to dpll dev
* @pf: pointer to pf, which has registered the dpll_device
+ * @tracker: reference count tracker
* @dpll_idx: index of dpll on the NIC
* @input_idx: currently selected input index
* @prev_input_idx: previously selected input index
@@ -76,6 +79,7 @@ struct ice_dpll_pin {
struct ice_dpll {
struct dpll_device *dpll;
struct ice_pf *pf;
+ dpll_tracker tracker;
u8 dpll_idx;
u8 input_idx;
u8 prev_input_idx;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c
index 541d83e5d7183..3981dd81d4c17 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c
@@ -9,7 +9,9 @@
*/
struct mlx5_dpll {
struct dpll_device *dpll;
+ dpll_tracker dpll_tracker;
struct dpll_pin *dpll_pin;
+ dpll_tracker pin_tracker;
struct mlx5_core_dev *mdev;
struct workqueue_struct *wq;
struct delayed_work work;
@@ -438,7 +440,8 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev,
auxiliary_set_drvdata(adev, mdpll);
/* Multiple mdev instances might share one DPLL device. */
- mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE, NULL);
+ mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE,
+ &mdpll->dpll_tracker);
if (IS_ERR(mdpll->dpll)) {
err = PTR_ERR(mdpll->dpll);
goto err_free_mdpll;
@@ -452,7 +455,7 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev,
/* Multiple mdev instances might share one DPLL pin. */
mdpll->dpll_pin = dpll_pin_get(clock_id, mlx5_get_dev_index(mdev),
THIS_MODULE, &mlx5_dpll_pin_properties,
- NULL);
+ &mdpll->pin_tracker);
if (IS_ERR(mdpll->dpll_pin)) {
err = PTR_ERR(mdpll->dpll_pin);
goto err_unregister_dpll_device;
@@ -480,11 +483,11 @@ static int mlx5_dpll_probe(struct auxiliary_device *adev,
dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin,
&mlx5_dpll_pins_ops, mdpll);
err_put_dpll_pin:
- dpll_pin_put(mdpll->dpll_pin, NULL);
+ dpll_pin_put(mdpll->dpll_pin, &mdpll->pin_tracker);
err_unregister_dpll_device:
dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll);
err_put_dpll_device:
- dpll_device_put(mdpll->dpll, NULL);
+ dpll_device_put(mdpll->dpll, &mdpll->dpll_tracker);
err_free_mdpll:
kfree(mdpll);
return err;
@@ -500,9 +503,9 @@ static void mlx5_dpll_remove(struct auxiliary_device *adev)
destroy_workqueue(mdpll->wq);
dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin,
&mlx5_dpll_pins_ops, mdpll);
- dpll_pin_put(mdpll->dpll_pin, NULL);
+ dpll_pin_put(mdpll->dpll_pin, &mdpll->pin_tracker);
dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll);
- dpll_device_put(mdpll->dpll, NULL);
+ dpll_device_put(mdpll->dpll, &mdpll->dpll_tracker);
kfree(mdpll);
mlx5_dpll_synce_status_set(mdev,
diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
index f39b3966b3e8c..1b16a9c3d7fdc 100644
--- a/drivers/ptp/ptp_ocp.c
+++ b/drivers/ptp/ptp_ocp.c
@@ -285,6 +285,7 @@ struct ptp_ocp_sma_connector {
u8 default_fcn;
struct dpll_pin *dpll_pin;
struct dpll_pin_properties dpll_prop;
+ dpll_tracker tracker;
};
struct ocp_attr_group {
@@ -383,6 +384,7 @@ struct ptp_ocp {
struct ptp_ocp_sma_connector sma[OCP_SMA_NUM];
const struct ocp_sma_op *sma_op;
struct dpll_device *dpll;
+ dpll_tracker tracker;
int signals_nr;
int freq_in_nr;
};
@@ -4788,7 +4790,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
devlink_register(devlink);
clkid = pci_get_dsn(pdev);
- bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, NULL);
+ bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE, &bp->tracker);
if (IS_ERR(bp->dpll)) {
err = PTR_ERR(bp->dpll);
dev_err(&pdev->dev, "dpll_device_alloc failed\n");
@@ -4801,7 +4803,8 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
for (i = 0; i < OCP_SMA_NUM; i++) {
bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE,
- &bp->sma[i].dpll_prop, NULL);
+ &bp->sma[i].dpll_prop,
+ &bp->sma[i].tracker);
if (IS_ERR(bp->sma[i].dpll_pin)) {
err = PTR_ERR(bp->sma[i].dpll_pin);
goto out_dpll;
@@ -4810,7 +4813,7 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
err = dpll_pin_register(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops,
&bp->sma[i]);
if (err) {
- dpll_pin_put(bp->sma[i].dpll_pin, NULL);
+ dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker);
goto out_dpll;
}
}
@@ -4820,9 +4823,9 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
out_dpll:
while (i--) {
dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]);
- dpll_pin_put(bp->sma[i].dpll_pin, NULL);
+ dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker);
}
- dpll_device_put(bp->dpll, NULL);
+ dpll_device_put(bp->dpll, &bp->tracker);
out:
ptp_ocp_detach(bp);
out_disable:
@@ -4843,11 +4846,11 @@ ptp_ocp_remove(struct pci_dev *pdev)
for (i = 0; i < OCP_SMA_NUM; i++) {
if (bp->sma[i].dpll_pin) {
dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]);
- dpll_pin_put(bp->sma[i].dpll_pin, NULL);
+ dpll_pin_put(bp->sma[i].dpll_pin, &bp->sma[i].tracker);
}
}
dpll_device_unregister(bp->dpll, &dpll_ops, bp);
- dpll_device_put(bp->dpll, NULL);
+ dpll_device_put(bp->dpll, &bp->tracker);
devlink_unregister(devlink);
ptp_ocp_detach(bp);
pci_disable_device(pdev);
--
2.52.0
|
{
"author": "Ivan Vecera <ivecera@redhat.com>",
"date": "Mon, 2 Feb 2026 18:16:37 +0100",
"thread_id": "20260202171638.17427-7-ivecera@redhat.com.mbox.gz"
}
|
lkml
|
[PATCH net-next v4 0/9] dpll: Core improvements and ice E825-C SyncE support
|
This series introduces Synchronous Ethernet (SyncE) support for the Intel
E825-C Ethernet controller. Unlike previous generations where DPLL
connections were implicitly assumed, the E825-C architecture relies
on the platform firmware (ACPI) to describe the physical connections
between the Ethernet controller and external DPLLs (such as the ZL3073x).
To accommodate this, the series extends the DPLL subsystem to support
firmware node (fwnode) associations, asynchronous discovery via notifiers,
and dynamic pin management. Additionally, a significant refactor of
the DPLL reference counting logic is included to ensure robustness and
debuggability.
DPLL Core Extensions:
* Firmware Node Association: Pins can now be associated with a struct
fwnode_handle after allocation via dpll_pin_fwnode_set(). This allows
drivers to link pin objects with their corresponding DT/ACPI nodes.
* Asynchronous Notifiers: A raw notifier chain is added to the DPLL core.
This allows the Ethernet driver to subscribe to events and react when
the platform DPLL driver registers the parent pins, resolving probe
ordering dependencies.
* Dynamic Indexing: Drivers can now request DPLL_PIN_IDX_UNSPEC to have
the core automatically allocate a unique pin index.
Reference Counting & Debugging:
* Refactor: The reference counting logic in the core is consolidated.
Internal list management helpers now automatically handle hold/put
operations, removing fragile open-coded logic in the registration paths.
* Reference Tracking: A new Kconfig option DPLL_REFCNT_TRACKER is added.
This allows developers to instrument and debug reference leaks by
recording stack traces for every get/put operation.
Driver Updates:
* zl3073x: Updated to associate pins with fwnode handles using the new
setter and support the 'mux' pin type.
* ice: Implements the E825-C specific hardware configuration for SyncE
(CGU registers). It utilizes the new notifier and fwnode APIs to
dynamically discover and attach to the platform DPLLs.
Patch Summary:
Patch 1: DPLL Core (fwnode association).
Patch 2: Driver zl3073x (Set fwnode).
Patch 3-4: DPLL Core (Notifiers and dynamic IDs).
Patch 5: Driver zl3073x (Mux type).
Patch 6: DPLL Core (Refcount refactor).
Patch 7-8: Refcount tracking infrastructure and driver updates.
Patch 9: Driver ice (E825-C SyncE logic).
Changes in v4:
* Fixed documentation and function stub issues found by AI
Arkadiusz Kubalewski (1):
ice: dpll: Support E825-C SyncE and dynamic pin discovery
Ivan Vecera (7):
dpll: Allow associating dpll pin with a firmware node
dpll: zl3073x: Associate pin with fwnode handle
dpll: Support dynamic pin index allocation
dpll: zl3073x: Add support for mux pin type
dpll: Enhance and consolidate reference counting logic
dpll: Add reference count tracking support
drivers: Add support for DPLL reference count tracking
Petr Oros (1):
dpll: Add notifier chain for dpll events
drivers/dpll/Kconfig | 15 +
drivers/dpll/dpll_core.c | 288 ++++++-
drivers/dpll/dpll_core.h | 11 +
drivers/dpll/dpll_netlink.c | 6 +
drivers/dpll/zl3073x/dpll.c | 15 +-
drivers/dpll/zl3073x/dpll.h | 2 +
drivers/dpll/zl3073x/prop.c | 2 +
drivers/net/ethernet/intel/ice/ice_dpll.c | 755 +++++++++++++++---
drivers/net/ethernet/intel/ice/ice_dpll.h | 30 +
drivers/net/ethernet/intel/ice/ice_lib.c | 3 +
drivers/net/ethernet/intel/ice/ice_ptp.c | 32 +
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +-
drivers/net/ethernet/intel/ice/ice_tspll.c | 217 +++++
drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +-
drivers/net/ethernet/intel/ice/ice_type.h | 6 +
.../net/ethernet/mellanox/mlx5/core/dpll.c | 16 +-
drivers/ptp/ptp_ocp.c | 18 +-
include/linux/dpll.h | 59 +-
18 files changed, 1347 insertions(+), 150 deletions(-)
--
2.52.0
|
From: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
Implement SyncE support for the E825-C Ethernet controller using the
DPLL subsystem. Unlike E810, the E825-C architecture relies on platform
firmware (ACPI) to describe connections between the NIC's recovered clock
outputs and external DPLL inputs.
Implement the following mechanisms to support this architecture:
1. Discovery Mechanism: The driver parses the 'dpll-pins' and 'dpll-pin-names'
firmware properties to identify the external DPLL pins (parents)
corresponding to its RCLK outputs ("rclk0", "rclk1"). It uses
fwnode_dpll_pin_find() to locate these parent pins in the DPLL core.
2. Asynchronous Registration: Since the platform DPLL driver (e.g.
zl3073x) may probe independently of the network driver, utilize
the DPLL notifier chain. The driver listens for DPLL_PIN_CREATED
events to detect when the parent MUX pins become available, then
registers its own Recovered Clock (RCLK) pins as children of those
parents (a short sketch of this flow follows the diffstat below).
3. Hardware Configuration: Implement the specific register access logic
for E825-C CGU (Clock Generation Unit) registers (R10, R11). This
includes configuring the bypass MUXes and clock dividers required to
drive SyncE signals.
4. Split Initialization: Refactor `ice_dpll_init()` to separate the
static initialization path of E810 from the dynamic, firmware-driven
path required for E825-C.
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Co-developed-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Co-developed-by: Grzegorz Nitka <grzegorz.nitka@intel.com>
Signed-off-by: Grzegorz Nitka <grzegorz.nitka@intel.com>
Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
---
v3:
* DPLL init check in ice_ptp_link_change()
* using completion for dpll initialization to avoid races with DPLL
notifier scheduled works
* added parsing of dpll-pin-names and dpll-pins properties
v2:
* fixed error path in ice_dpll_init_pins_e825()
* fixed misleading comment referring to 'device tree'
---
drivers/net/ethernet/intel/ice/ice_dpll.c | 742 +++++++++++++++++---
drivers/net/ethernet/intel/ice/ice_dpll.h | 26 +
drivers/net/ethernet/intel/ice/ice_lib.c | 3 +
drivers/net/ethernet/intel/ice/ice_ptp.c | 32 +
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 9 +-
drivers/net/ethernet/intel/ice/ice_tspll.c | 217 ++++++
drivers/net/ethernet/intel/ice/ice_tspll.h | 13 +-
drivers/net/ethernet/intel/ice/ice_type.h | 6 +
8 files changed, 956 insertions(+), 92 deletions(-)
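Points 1 and 2 above reduce to the pattern sketched below; the dpll_* and
fwnode_* calls are the ones used in this patch, while the foo_* names and the
single "rclk0" pin are placeholders:

#include <linux/dpll.h>
#include <linux/err.h>
#include <linux/notifier.h>
#include <linux/property.h>
#include <linux/workqueue.h>

static struct fwnode_handle *parent_node;
static struct dpll_pin *parent_pin;
static dpll_tracker parent_tracker;

/* 1. Resolve a pin name to a firmware node via dpll-pin-names/dpll-pins. */
static struct fwnode_handle *foo_pin_node_get(struct device *dev,
					      const char *name)
{
	struct fwnode_handle *fwnode = dev_fwnode(dev);
	int idx;

	idx = fwnode_property_match_string(fwnode, "dpll-pin-names", name);
	if (idx < 0)
		return ERR_PTR(idx);
	return fwnode_find_reference(fwnode, "dpll-pins", idx);
}

/* 2. Attach to the parent pin once it exists; deferred out of notifier
 * context, as ice_dpll_pin_notify_work() does below.
 */
static void foo_pin_attach_work(struct work_struct *work)
{
	parent_pin = fwnode_dpll_pin_find(parent_node, &parent_tracker);
	/* dpll_pin_on_pin_register() of the RCLK child pin would follow. */
}
static DECLARE_WORK(foo_attach_work, foo_pin_attach_work);

static int foo_pin_notify(struct notifier_block *nb, unsigned long action,
			  void *data)
{
	struct dpll_pin_notifier_info *info = data;

	if (action != DPLL_PIN_CREATED || info->fwnode != parent_node)
		return NOTIFY_DONE;

	schedule_work(&foo_attach_work);
	return NOTIFY_OK;
}

static struct notifier_block foo_nb = { .notifier_call = foo_pin_notify };

static int foo_pin_discover(struct device *dev)
{
	/* fwnode reference dropped on teardown (not shown) */
	parent_node = foo_pin_node_get(dev, "rclk0");
	if (IS_ERR(parent_node))
		return PTR_ERR(parent_node);

	return register_dpll_notifier(&foo_nb);
}

The driver below does the same for both rclk parents and performs the child
pin (un)registration from the deferred work.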
diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c
index 4eca62688d834..a8c99e49bfae6 100644
--- a/drivers/net/ethernet/intel/ice/ice_dpll.c
+++ b/drivers/net/ethernet/intel/ice/ice_dpll.c
@@ -5,6 +5,7 @@
#include "ice_lib.h"
#include "ice_trace.h"
#include <linux/dpll.h>
+#include <linux/property.h>
#define ICE_CGU_STATE_ACQ_ERR_THRESHOLD 50
#define ICE_DPLL_PIN_IDX_INVALID 0xff
@@ -528,6 +529,92 @@ ice_dpll_pin_disable(struct ice_hw *hw, struct ice_dpll_pin *pin,
return ret;
}
+/**
+ * ice_dpll_pin_store_state - updates the state of pin in SW bookkeeping
+ * @pin: pointer to a pin
+ * @parent: parent pin index
+ * @state: pin state (connected or disconnected)
+ */
+static void
+ice_dpll_pin_store_state(struct ice_dpll_pin *pin, int parent, bool state)
+{
+ pin->state[parent] = state ? DPLL_PIN_STATE_CONNECTED :
+ DPLL_PIN_STATE_DISCONNECTED;
+}
+
+/**
+ * ice_dpll_rclk_update_e825c - updates the state of rclk pin on e825c device
+ * @pf: private board struct
+ * @pin: pointer to a pin
+ *
+ * Update struct holding pin states info, states are separate for each parent
+ *
+ * Context: Called under pf->dplls.lock
+ * Return:
+ * * 0 - OK
+ * * negative - error
+ */
+static int ice_dpll_rclk_update_e825c(struct ice_pf *pf,
+ struct ice_dpll_pin *pin)
+{
+ u8 rclk_bits;
+ int err;
+ u32 reg;
+
+ if (pf->dplls.rclk.num_parents > ICE_SYNCE_CLK_NUM)
+ return -EINVAL;
+
+ err = ice_read_cgu_reg(&pf->hw, ICE_CGU_R10, &reg);
+ if (err)
+ return err;
+
+ rclk_bits = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, reg);
+ ice_dpll_pin_store_state(pin, ICE_SYNCE_CLK0, rclk_bits ==
+ (pf->ptp.port.port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C));
+
+ err = ice_read_cgu_reg(&pf->hw, ICE_CGU_R11, &reg);
+ if (err)
+ return err;
+
+ rclk_bits = FIELD_GET(ICE_CGU_R11_SYNCE_S_BYP_CLK, reg);
+ ice_dpll_pin_store_state(pin, ICE_SYNCE_CLK1, rclk_bits ==
+ (pf->ptp.port.port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C));
+
+ return 0;
+}
+
+/**
+ * ice_dpll_rclk_update - updates the state of rclk pin on a device
+ * @pf: private board struct
+ * @pin: pointer to a pin
+ * @port_num: port number
+ *
+ * Update struct holding pin states info, states are separate for each parent
+ *
+ * Context: Called under pf->dplls.lock
+ * Return:
+ * * 0 - OK
+ * * negative - error
+ */
+static int ice_dpll_rclk_update(struct ice_pf *pf, struct ice_dpll_pin *pin,
+ u8 port_num)
+{
+ int ret;
+
+ for (u8 parent = 0; parent < pf->dplls.rclk.num_parents; parent++) {
+ ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &parent, &port_num,
+ &pin->flags[parent], NULL);
+ if (ret)
+ return ret;
+
+ ice_dpll_pin_store_state(pin, parent,
+ ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN &
+ pin->flags[parent]);
+ }
+
+ return 0;
+}
+
/**
* ice_dpll_sw_pins_update - update status of all SW pins
* @pf: private board struct
@@ -668,22 +755,14 @@ ice_dpll_pin_state_update(struct ice_pf *pf, struct ice_dpll_pin *pin,
}
break;
case ICE_DPLL_PIN_TYPE_RCLK_INPUT:
- for (parent = 0; parent < pf->dplls.rclk.num_parents;
- parent++) {
- u8 p = parent;
-
- ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &p,
- &port_num,
- &pin->flags[parent],
- NULL);
+ if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825) {
+ ret = ice_dpll_rclk_update_e825c(pf, pin);
+ if (ret)
+ goto err;
+ } else {
+ ret = ice_dpll_rclk_update(pf, pin, port_num);
if (ret)
goto err;
- if (ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN &
- pin->flags[parent])
- pin->state[parent] = DPLL_PIN_STATE_CONNECTED;
- else
- pin->state[parent] =
- DPLL_PIN_STATE_DISCONNECTED;
}
break;
case ICE_DPLL_PIN_TYPE_SOFTWARE:
@@ -1842,6 +1921,40 @@ ice_dpll_phase_offset_get(const struct dpll_pin *pin, void *pin_priv,
return 0;
}
+/**
+ * ice_dpll_synce_update_e825c - setting PHY recovered clock pins on e825c
+ * @hw: Pointer to the HW struct
+ * @ena: true if enable, false in disable
+ * @port_num: port number
+ * @output: output pin, we have two in E825C
+ *
+ * DPLL subsystem callback. Set proper signals to recover clock from port.
+ *
+ * Context: Called under pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error
+ */
+static int ice_dpll_synce_update_e825c(struct ice_hw *hw, bool ena,
+ u32 port_num, enum ice_synce_clk output)
+{
+ int err;
+
+ /* configure the mux to deliver proper signal to DPLL from the MUX */
+ err = ice_tspll_cfg_bypass_mux_e825c(hw, ena, port_num, output);
+ if (err)
+ return err;
+
+ err = ice_tspll_cfg_synce_ethdiv_e825c(hw, output);
+ if (err)
+ return err;
+
+ dev_dbg(ice_hw_to_dev(hw), "CLK_SYNCE%u recovered clock: pin %s\n",
+ output, str_enabled_disabled(ena));
+
+ return 0;
+}
+
/**
* ice_dpll_output_esync_set - callback for setting embedded sync
* @pin: pointer to a pin
@@ -2263,6 +2376,28 @@ ice_dpll_sw_input_ref_sync_get(const struct dpll_pin *pin, void *pin_priv,
state, extack);
}
+static int
+ice_dpll_pin_get_parent_num(struct ice_dpll_pin *pin,
+ const struct dpll_pin *parent)
+{
+ int i;
+
+ for (i = 0; i < pin->num_parents; i++)
+ if (pin->pf->dplls.inputs[pin->parent_idx[i]].pin == parent)
+ return i;
+
+ return -ENOENT;
+}
+
+static int
+ice_dpll_pin_get_parent_idx(struct ice_dpll_pin *pin,
+ const struct dpll_pin *parent)
+{
+ int num = ice_dpll_pin_get_parent_num(pin, parent);
+
+ return num < 0 ? num : pin->parent_idx[num];
+}
+
/**
* ice_dpll_rclk_state_on_pin_set - set a state on rclk pin
* @pin: pointer to a pin
@@ -2286,35 +2421,44 @@ ice_dpll_rclk_state_on_pin_set(const struct dpll_pin *pin, void *pin_priv,
enum dpll_pin_state state,
struct netlink_ext_ack *extack)
{
- struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv;
bool enable = state == DPLL_PIN_STATE_CONNECTED;
+ struct ice_dpll_pin *p = pin_priv;
struct ice_pf *pf = p->pf;
+ struct ice_hw *hw;
int ret = -EINVAL;
- u32 hw_idx;
+ int hw_idx;
+
+ hw = &pf->hw;
if (ice_dpll_is_reset(pf, extack))
return -EBUSY;
mutex_lock(&pf->dplls.lock);
- hw_idx = parent->idx - pf->dplls.base_rclk_idx;
- if (hw_idx >= pf->dplls.num_inputs)
+ hw_idx = ice_dpll_pin_get_parent_idx(p, parent_pin);
+ if (hw_idx < 0)
goto unlock;
if ((enable && p->state[hw_idx] == DPLL_PIN_STATE_CONNECTED) ||
(!enable && p->state[hw_idx] == DPLL_PIN_STATE_DISCONNECTED)) {
NL_SET_ERR_MSG_FMT(extack,
"pin:%u state:%u on parent:%u already set",
- p->idx, state, parent->idx);
+ p->idx, state,
+ ice_dpll_pin_get_parent_num(p, parent_pin));
goto unlock;
}
- ret = ice_aq_set_phy_rec_clk_out(&pf->hw, hw_idx, enable,
- &p->freq);
+
+ ret = hw->mac_type == ICE_MAC_GENERIC_3K_E825 ?
+ ice_dpll_synce_update_e825c(hw, enable,
+ pf->ptp.port.port_num,
+ (enum ice_synce_clk)hw_idx) :
+ ice_aq_set_phy_rec_clk_out(hw, hw_idx, enable, &p->freq);
if (ret)
NL_SET_ERR_MSG_FMT(extack,
"err:%d %s failed to set pin state:%u for pin:%u on parent:%u",
ret,
- libie_aq_str(pf->hw.adminq.sq_last_status),
- state, p->idx, parent->idx);
+ libie_aq_str(hw->adminq.sq_last_status),
+ state, p->idx,
+ ice_dpll_pin_get_parent_num(p, parent_pin));
unlock:
mutex_unlock(&pf->dplls.lock);
@@ -2344,17 +2488,17 @@ ice_dpll_rclk_state_on_pin_get(const struct dpll_pin *pin, void *pin_priv,
enum dpll_pin_state *state,
struct netlink_ext_ack *extack)
{
- struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv;
+ struct ice_dpll_pin *p = pin_priv;
struct ice_pf *pf = p->pf;
int ret = -EINVAL;
- u32 hw_idx;
+ int hw_idx;
if (ice_dpll_is_reset(pf, extack))
return -EBUSY;
mutex_lock(&pf->dplls.lock);
- hw_idx = parent->idx - pf->dplls.base_rclk_idx;
- if (hw_idx >= pf->dplls.num_inputs)
+ hw_idx = ice_dpll_pin_get_parent_idx(p, parent_pin);
+ if (hw_idx < 0)
goto unlock;
ret = ice_dpll_pin_state_update(pf, p, ICE_DPLL_PIN_TYPE_RCLK_INPUT,
@@ -2814,7 +2958,8 @@ static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count)
int i;
for (i = 0; i < count; i++)
- dpll_pin_put(pins[i].pin, &pins[i].tracker);
+ if (!IS_ERR_OR_NULL(pins[i].pin))
+ dpll_pin_put(pins[i].pin, &pins[i].tracker);
}
/**
@@ -2836,10 +2981,14 @@ static int
ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins,
int start_idx, int count, u64 clock_id)
{
+ u32 pin_index;
int i, ret;
for (i = 0; i < count; i++) {
- pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE,
+ pin_index = start_idx;
+ if (start_idx != DPLL_PIN_IDX_UNSPEC)
+ pin_index += i;
+ pins[i].pin = dpll_pin_get(clock_id, pin_index, THIS_MODULE,
&pins[i].prop, &pins[i].tracker);
if (IS_ERR(pins[i].pin)) {
ret = PTR_ERR(pins[i].pin);
@@ -2944,6 +3093,7 @@ ice_dpll_register_pins(struct dpll_device *dpll, struct ice_dpll_pin *pins,
/**
* ice_dpll_deinit_direct_pins - deinitialize direct pins
+ * @pf: board private structure
* @cgu: if cgu is present and controlled by this NIC
* @pins: pointer to pins array
* @count: number of pins
@@ -2955,7 +3105,8 @@ ice_dpll_register_pins(struct dpll_device *dpll, struct ice_dpll_pin *pins,
* Release pins resources to the dpll subsystem.
*/
static void
-ice_dpll_deinit_direct_pins(bool cgu, struct ice_dpll_pin *pins, int count,
+ice_dpll_deinit_direct_pins(struct ice_pf *pf, bool cgu,
+ struct ice_dpll_pin *pins, int count,
const struct dpll_pin_ops *ops,
struct dpll_device *first,
struct dpll_device *second)
@@ -3024,14 +3175,14 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf)
{
struct ice_dpll_pin *rclk = &pf->dplls.rclk;
struct ice_vsi *vsi = ice_get_main_vsi(pf);
- struct dpll_pin *parent;
+ struct ice_dpll_pin *parent;
int i;
for (i = 0; i < rclk->num_parents; i++) {
- parent = pf->dplls.inputs[rclk->parent_idx[i]].pin;
- if (!parent)
+ parent = &pf->dplls.inputs[rclk->parent_idx[i]];
+ if (IS_ERR_OR_NULL(parent->pin))
continue;
- dpll_pin_on_pin_unregister(parent, rclk->pin,
+ dpll_pin_on_pin_unregister(parent->pin, rclk->pin,
&ice_dpll_rclk_ops, rclk);
}
if (WARN_ON_ONCE(!vsi || !vsi->netdev))
@@ -3040,60 +3191,213 @@ static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf)
dpll_pin_put(rclk->pin, &rclk->tracker);
}
+static bool ice_dpll_is_fwnode_pin(struct ice_dpll_pin *pin)
+{
+ return !IS_ERR_OR_NULL(pin->fwnode);
+}
+
+static void ice_dpll_pin_notify_work(struct work_struct *work)
+{
+ struct ice_dpll_pin_work *w = container_of(work,
+ struct ice_dpll_pin_work,
+ work);
+ struct ice_dpll_pin *pin, *parent = w->pin;
+ struct ice_pf *pf = parent->pf;
+ int ret;
+
+ wait_for_completion(&pf->dplls.dpll_init);
+ if (!test_bit(ICE_FLAG_DPLL, pf->flags))
+ return; /* DPLL initialization failed */
+
+ switch (w->action) {
+ case DPLL_PIN_CREATED:
+ if (!IS_ERR_OR_NULL(parent->pin)) {
+ /* We have already our pin registered */
+ goto out;
+ }
+
+ /* Grab reference on fwnode pin */
+ parent->pin = fwnode_dpll_pin_find(parent->fwnode,
+ &parent->tracker);
+ if (IS_ERR_OR_NULL(parent->pin)) {
+ dev_err(ice_pf_to_dev(pf),
+ "Cannot get fwnode pin reference\n");
+ goto out;
+ }
+
+ /* Register rclk pin */
+ pin = &pf->dplls.rclk;
+ ret = dpll_pin_on_pin_register(parent->pin, pin->pin,
+ &ice_dpll_rclk_ops, pin);
+ if (ret) {
+ dev_err(ice_pf_to_dev(pf),
+ "Failed to register pin: %pe\n", ERR_PTR(ret));
+ dpll_pin_put(parent->pin, &parent->tracker);
+ parent->pin = NULL;
+ goto out;
+ }
+ break;
+ case DPLL_PIN_DELETED:
+ if (IS_ERR_OR_NULL(parent->pin)) {
+ /* We have already our pin unregistered */
+ goto out;
+ }
+
+ /* Unregister rclk pin */
+ pin = &pf->dplls.rclk;
+ dpll_pin_on_pin_unregister(parent->pin, pin->pin,
+ &ice_dpll_rclk_ops, pin);
+
+ /* Drop fwnode pin reference */
+ dpll_pin_put(parent->pin, &parent->tracker);
+ parent->pin = NULL;
+ break;
+ default:
+ break;
+ }
+out:
+ kfree(w);
+}
+
+static int ice_dpll_pin_notify(struct notifier_block *nb, unsigned long action,
+ void *data)
+{
+ struct ice_dpll_pin *pin = container_of(nb, struct ice_dpll_pin, nb);
+ struct dpll_pin_notifier_info *info = data;
+ struct ice_dpll_pin_work *work;
+
+ if (action != DPLL_PIN_CREATED && action != DPLL_PIN_DELETED)
+ return NOTIFY_DONE;
+
+ /* Check if the reported pin is this one */
+ if (pin->fwnode != info->fwnode)
+ return NOTIFY_DONE; /* Not this pin */
+
+ work = kzalloc(sizeof(*work), GFP_KERNEL);
+ if (!work)
+ return NOTIFY_DONE;
+
+ INIT_WORK(&work->work, ice_dpll_pin_notify_work);
+ work->action = action;
+ work->pin = pin;
+
+ queue_work(pin->pf->dplls.wq, &work->work);
+
+ return NOTIFY_OK;
+}
+
/**
- * ice_dpll_init_rclk_pins - initialize recovered clock pin
+ * ice_dpll_init_pin_common - initialize pin
* @pf: board private structure
* @pin: pin to register
* @start_idx: on which index shall allocation start in dpll subsystem
* @ops: callback ops registered with the pins
*
- * Allocate resource for recovered clock pin in dpll subsystem. Register the
- * pin with the parents it has in the info. Register pin with the pf's main vsi
- * netdev.
+ * Allocate resource for given pin in dpll subsystem. Register the pin with
+ * the parents it has in the info.
*
* Return:
* * 0 - success
* * negative - registration failure reason
*/
static int
-ice_dpll_init_rclk_pins(struct ice_pf *pf, struct ice_dpll_pin *pin,
- int start_idx, const struct dpll_pin_ops *ops)
+ice_dpll_init_pin_common(struct ice_pf *pf, struct ice_dpll_pin *pin,
+ int start_idx, const struct dpll_pin_ops *ops)
{
- struct ice_vsi *vsi = ice_get_main_vsi(pf);
- struct dpll_pin *parent;
+ struct ice_dpll_pin *parent;
int ret, i;
- if (WARN_ON((!vsi || !vsi->netdev)))
- return -EINVAL;
- ret = ice_dpll_get_pins(pf, pin, start_idx, ICE_DPLL_RCLK_NUM_PER_PF,
- pf->dplls.clock_id);
+ ret = ice_dpll_get_pins(pf, pin, start_idx, 1, pf->dplls.clock_id);
if (ret)
return ret;
- for (i = 0; i < pf->dplls.rclk.num_parents; i++) {
- parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[i]].pin;
- if (!parent) {
- ret = -ENODEV;
- goto unregister_pins;
+
+ for (i = 0; i < pin->num_parents; i++) {
+ parent = &pf->dplls.inputs[pin->parent_idx[i]];
+ if (IS_ERR_OR_NULL(parent->pin)) {
+ if (!ice_dpll_is_fwnode_pin(parent)) {
+ ret = -ENODEV;
+ goto unregister_pins;
+ }
+ parent->pin = fwnode_dpll_pin_find(parent->fwnode,
+ &parent->tracker);
+ if (IS_ERR_OR_NULL(parent->pin)) {
+ dev_info(ice_pf_to_dev(pf),
+ "Mux pin not registered yet\n");
+ continue;
+ }
}
- ret = dpll_pin_on_pin_register(parent, pf->dplls.rclk.pin,
- ops, &pf->dplls.rclk);
+ ret = dpll_pin_on_pin_register(parent->pin, pin->pin, ops, pin);
if (ret)
goto unregister_pins;
}
- dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin);
return 0;
unregister_pins:
while (i) {
- parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[--i]].pin;
- dpll_pin_on_pin_unregister(parent, pf->dplls.rclk.pin,
- &ice_dpll_rclk_ops, &pf->dplls.rclk);
+ parent = &pf->dplls.inputs[pin->parent_idx[--i]];
+ if (IS_ERR_OR_NULL(parent->pin))
+ continue;
+ dpll_pin_on_pin_unregister(parent->pin, pin->pin, ops, pin);
}
- ice_dpll_release_pins(pin, ICE_DPLL_RCLK_NUM_PER_PF);
+ ice_dpll_release_pins(pin, 1);
+
return ret;
}
+/**
+ * ice_dpll_init_rclk_pin - initialize recovered clock pin
+ * @pf: board private structure
+ * @start_idx: on which index shall allocation start in dpll subsystem
+ * @ops: callback ops registered with the pins
+ *
+ * Allocate resource for recovered clock pin in dpll subsystem. Register the
+ * pin with the parents it has in the info.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - registration failure reason
+ */
+static int
+ice_dpll_init_rclk_pin(struct ice_pf *pf, int start_idx,
+ const struct dpll_pin_ops *ops)
+{
+ struct ice_vsi *vsi = ice_get_main_vsi(pf);
+ int ret;
+
+ ret = ice_dpll_init_pin_common(pf, &pf->dplls.rclk, start_idx, ops);
+ if (ret)
+ return ret;
+
+ dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin);
+
+ return 0;
+}
+
+static void
+ice_dpll_deinit_fwnode_pin(struct ice_dpll_pin *pin)
+{
+ unregister_dpll_notifier(&pin->nb);
+ flush_workqueue(pin->pf->dplls.wq);
+ if (!IS_ERR_OR_NULL(pin->pin)) {
+ dpll_pin_put(pin->pin, &pin->tracker);
+ pin->pin = NULL;
+ }
+ fwnode_handle_put(pin->fwnode);
+ pin->fwnode = NULL;
+}
+
+static void
+ice_dpll_deinit_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins,
+ int start_idx)
+{
+ int i;
+
+ for (i = 0; i < pf->dplls.rclk.num_parents; i++)
+ ice_dpll_deinit_fwnode_pin(&pins[start_idx + i]);
+ destroy_workqueue(pf->dplls.wq);
+}
+
/**
* ice_dpll_deinit_pins - deinitialize direct pins
* @pf: board private structure
@@ -3113,6 +3417,8 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu)
struct ice_dpll *dp = &d->pps;
ice_dpll_deinit_rclk_pin(pf);
+ if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825)
+ ice_dpll_deinit_fwnode_pins(pf, pf->dplls.inputs, 0);
if (cgu) {
ice_dpll_unregister_pins(dp->dpll, inputs, &ice_dpll_input_ops,
num_inputs);
@@ -3127,12 +3433,12 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu)
&ice_dpll_output_ops, num_outputs);
ice_dpll_release_pins(outputs, num_outputs);
if (!pf->dplls.generic) {
- ice_dpll_deinit_direct_pins(cgu, pf->dplls.ufl,
+ ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.ufl,
ICE_DPLL_PIN_SW_NUM,
&ice_dpll_pin_ufl_ops,
pf->dplls.pps.dpll,
pf->dplls.eec.dpll);
- ice_dpll_deinit_direct_pins(cgu, pf->dplls.sma,
+ ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.sma,
ICE_DPLL_PIN_SW_NUM,
&ice_dpll_pin_sma_ops,
pf->dplls.pps.dpll,
@@ -3141,6 +3447,141 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu)
}
}
+static struct fwnode_handle *
+ice_dpll_pin_node_get(struct ice_pf *pf, const char *name)
+{
+ struct fwnode_handle *fwnode = dev_fwnode(ice_pf_to_dev(pf));
+ int index;
+
+ index = fwnode_property_match_string(fwnode, "dpll-pin-names", name);
+ if (index < 0)
+ return ERR_PTR(-ENOENT);
+
+ return fwnode_find_reference(fwnode, "dpll-pins", index);
+}
+
+static int
+ice_dpll_init_fwnode_pin(struct ice_dpll_pin *pin, const char *name)
+{
+ struct ice_pf *pf = pin->pf;
+ int ret;
+
+ pin->fwnode = ice_dpll_pin_node_get(pf, name);
+ if (IS_ERR(pin->fwnode)) {
+ dev_err(ice_pf_to_dev(pf),
+ "Failed to find %s firmware node: %pe\n", name,
+ pin->fwnode);
+ pin->fwnode = NULL;
+ return -ENODEV;
+ }
+
+ dev_dbg(ice_pf_to_dev(pf), "Found fwnode node for %s\n", name);
+
+ pin->pin = fwnode_dpll_pin_find(pin->fwnode, &pin->tracker);
+ if (IS_ERR_OR_NULL(pin->pin)) {
+ dev_info(ice_pf_to_dev(pf),
+ "DPLL pin for %pfwp not registered yet\n",
+ pin->fwnode);
+ pin->pin = NULL;
+ }
+
+ pin->nb.notifier_call = ice_dpll_pin_notify;
+ ret = register_dpll_notifier(&pin->nb);
+ if (ret) {
+ dev_err(ice_pf_to_dev(pf),
+ "Failed to subscribe for DPLL notifications\n");
+
+ if (!IS_ERR_OR_NULL(pin->pin)) {
+ dpll_pin_put(pin->pin, &pin->tracker);
+ pin->pin = NULL;
+ }
+ fwnode_handle_put(pin->fwnode);
+ pin->fwnode = NULL;
+
+ return ret;
+ }
+
+ return ret;
+}
+
+/**
+ * ice_dpll_init_fwnode_pins - initialize pins defined by system firmware
+ * @pf: board private structure
+ * @pins: pointer to pins array
+ * @start_idx: starting index for pins
+ * @count: number of pins to initialize
+ *
+ * Initialize input pins for E825 RCLK support. The parent pins (rclk0, rclk1)
+ * are expected to be defined by the system firmware (ACPI). This function
+ * allocates them in the dpll subsystem and stores their indices for later
+ * registration with the rclk pin.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - initialization failure reason
+ */
+static int
+ice_dpll_init_fwnode_pins(struct ice_pf *pf, struct ice_dpll_pin *pins,
+ int start_idx)
+{
+ char pin_name[8];
+ int i, ret;
+
+ pf->dplls.wq = create_singlethread_workqueue("ice_dpll_wq");
+ if (!pf->dplls.wq)
+ return -ENOMEM;
+
+ for (i = 0; i < pf->dplls.rclk.num_parents; i++) {
+ pins[start_idx + i].pf = pf;
+ snprintf(pin_name, sizeof(pin_name), "rclk%u", i);
+ ret = ice_dpll_init_fwnode_pin(&pins[start_idx + i], pin_name);
+ if (ret)
+ goto error;
+ }
+
+ return 0;
+error:
+ while (i--)
+ ice_dpll_deinit_fwnode_pin(&pins[start_idx + i]);
+
+ destroy_workqueue(pf->dplls.wq);
+
+ return ret;
+}
+
+/**
+ * ice_dpll_init_pins_e825 - init pins and register pins with a dplls
+ * @pf: board private structure
+ * @cgu: if cgu is present and controlled by this NIC
+ *
+ * Initialize directly connected pf's pins within pf's dplls in a Linux dpll
+ * subsystem.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - initialization failure reason
+ */
+static int ice_dpll_init_pins_e825(struct ice_pf *pf)
+{
+ int ret;
+
+ ret = ice_dpll_init_fwnode_pins(pf, pf->dplls.inputs, 0);
+ if (ret)
+ return ret;
+
+ ret = ice_dpll_init_rclk_pin(pf, DPLL_PIN_IDX_UNSPEC,
+ &ice_dpll_rclk_ops);
+ if (ret) {
+ /* Inform DPLL notifier works that DPLL init was finished
+ * unsuccessfully (ICE_DPLL_FLAG not set).
+ */
+ complete_all(&pf->dplls.dpll_init);
+ ice_dpll_deinit_fwnode_pins(pf, pf->dplls.inputs, 0);
+ }
+
+ return ret;
+}
+
/**
* ice_dpll_init_pins - init pins and register pins with a dplls
* @pf: board private structure
@@ -3155,21 +3596,24 @@ static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu)
*/
static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu)
{
+ const struct dpll_pin_ops *output_ops;
+ const struct dpll_pin_ops *input_ops;
int ret, count;
+ input_ops = &ice_dpll_input_ops;
+ output_ops = &ice_dpll_output_ops;
+
ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.inputs, 0,
- pf->dplls.num_inputs,
- &ice_dpll_input_ops,
- pf->dplls.eec.dpll, pf->dplls.pps.dpll);
+ pf->dplls.num_inputs, input_ops,
+ pf->dplls.eec.dpll,
+ pf->dplls.pps.dpll);
if (ret)
return ret;
count = pf->dplls.num_inputs;
if (cgu) {
ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.outputs,
- count,
- pf->dplls.num_outputs,
- &ice_dpll_output_ops,
- pf->dplls.eec.dpll,
+ count, pf->dplls.num_outputs,
+ output_ops, pf->dplls.eec.dpll,
pf->dplls.pps.dpll);
if (ret)
goto deinit_inputs;
@@ -3205,30 +3649,30 @@ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu)
} else {
count += pf->dplls.num_outputs + 2 * ICE_DPLL_PIN_SW_NUM;
}
- ret = ice_dpll_init_rclk_pins(pf, &pf->dplls.rclk, count + pf->hw.pf_id,
- &ice_dpll_rclk_ops);
+
+ ret = ice_dpll_init_rclk_pin(pf, count + pf->ptp.port.port_num,
+ &ice_dpll_rclk_ops);
if (ret)
goto deinit_ufl;
return 0;
deinit_ufl:
- ice_dpll_deinit_direct_pins(cgu, pf->dplls.ufl,
- ICE_DPLL_PIN_SW_NUM,
- &ice_dpll_pin_ufl_ops,
- pf->dplls.pps.dpll, pf->dplls.eec.dpll);
+ ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.ufl, ICE_DPLL_PIN_SW_NUM,
+ &ice_dpll_pin_ufl_ops, pf->dplls.pps.dpll,
+ pf->dplls.eec.dpll);
deinit_sma:
- ice_dpll_deinit_direct_pins(cgu, pf->dplls.sma,
- ICE_DPLL_PIN_SW_NUM,
- &ice_dpll_pin_sma_ops,
- pf->dplls.pps.dpll, pf->dplls.eec.dpll);
+ ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.sma, ICE_DPLL_PIN_SW_NUM,
+ &ice_dpll_pin_sma_ops, pf->dplls.pps.dpll,
+ pf->dplls.eec.dpll);
deinit_outputs:
- ice_dpll_deinit_direct_pins(cgu, pf->dplls.outputs,
+ ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.outputs,
pf->dplls.num_outputs,
- &ice_dpll_output_ops, pf->dplls.pps.dpll,
+ output_ops, pf->dplls.pps.dpll,
pf->dplls.eec.dpll);
deinit_inputs:
- ice_dpll_deinit_direct_pins(cgu, pf->dplls.inputs, pf->dplls.num_inputs,
- &ice_dpll_input_ops, pf->dplls.pps.dpll,
+ ice_dpll_deinit_direct_pins(pf, cgu, pf->dplls.inputs,
+ pf->dplls.num_inputs,
+ input_ops, pf->dplls.pps.dpll,
pf->dplls.eec.dpll);
return ret;
}
@@ -3239,8 +3683,8 @@ static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu)
* @d: pointer to ice_dpll
* @cgu: if cgu is present and controlled by this NIC
*
- * If cgu is owned unregister the dpll from dpll subsystem.
- * Release resources of dpll device from dpll subsystem.
+ * If cgu is owned, unregister the DPLL from the DPLL subsystem.
+ * Release resources of DPLL device from DPLL subsystem.
*/
static void
ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu)
@@ -3257,8 +3701,8 @@ ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu)
* @cgu: if cgu is present and controlled by this NIC
* @type: type of dpll being initialized
*
- * Allocate dpll instance for this board in dpll subsystem, if cgu is controlled
- * by this NIC, register dpll with the callback ops.
+ * Allocate DPLL instance for this board in dpll subsystem, if cgu is controlled
+ * by this NIC, register DPLL with the callback ops.
*
* Return:
* * 0 - success
@@ -3289,6 +3733,7 @@ ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu,
ret = dpll_device_register(d->dpll, type, ops, d);
if (ret) {
dpll_device_put(d->dpll, &d->tracker);
+ d->dpll = NULL;
return ret;
}
d->ops = ops;
@@ -3506,6 +3951,26 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf,
return ret;
}
+/**
+ * ice_dpll_init_info_pin_on_pin_e825c - initializes rclk pin information
+ * @pf: board private structure
+ *
+ * Init information for rclk pin, cache them in pf->dplls.rclk.
+ *
+ * Return:
+ * * 0 - success
+ */
+static int ice_dpll_init_info_pin_on_pin_e825c(struct ice_pf *pf)
+{
+ struct ice_dpll_pin *rclk_pin = &pf->dplls.rclk;
+
+ rclk_pin->prop.type = DPLL_PIN_TYPE_SYNCE_ETH_PORT;
+ rclk_pin->prop.capabilities |= DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE;
+ rclk_pin->pf = pf;
+
+ return 0;
+}
+
/**
* ice_dpll_init_info_rclk_pin - initializes rclk pin information
* @pf: board private structure
@@ -3632,7 +4097,10 @@ ice_dpll_init_pins_info(struct ice_pf *pf, enum ice_dpll_pin_type pin_type)
case ICE_DPLL_PIN_TYPE_OUTPUT:
return ice_dpll_init_info_direct_pins(pf, pin_type);
case ICE_DPLL_PIN_TYPE_RCLK_INPUT:
- return ice_dpll_init_info_rclk_pin(pf);
+ if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825)
+ return ice_dpll_init_info_pin_on_pin_e825c(pf);
+ else
+ return ice_dpll_init_info_rclk_pin(pf);
case ICE_DPLL_PIN_TYPE_SOFTWARE:
return ice_dpll_init_info_sw_pins(pf);
default:
@@ -3654,6 +4122,50 @@ static void ice_dpll_deinit_info(struct ice_pf *pf)
kfree(pf->dplls.pps.input_prio);
}
+/**
+ * ice_dpll_init_info_e825c - prepare pf's dpll information structure for e825c
+ * device
+ * @pf: board private structure
+ *
+ * Acquire (from HW) and set basic DPLL information (on pf->dplls struct).
+ *
+ * Return:
+ * * 0 - success
+ * * negative - init failure reason
+ */
+static int ice_dpll_init_info_e825c(struct ice_pf *pf)
+{
+ struct ice_dplls *d = &pf->dplls;
+ int ret = 0;
+ int i;
+
+ d->clock_id = ice_generate_clock_id(pf);
+ d->num_inputs = ICE_SYNCE_CLK_NUM;
+
+ d->inputs = kcalloc(d->num_inputs, sizeof(*d->inputs), GFP_KERNEL);
+ if (!d->inputs)
+ return -ENOMEM;
+
+ ret = ice_get_cgu_rclk_pin_info(&pf->hw, &d->base_rclk_idx,
+ &pf->dplls.rclk.num_parents);
+ if (ret)
+ goto deinit_info;
+
+ for (i = 0; i < pf->dplls.rclk.num_parents; i++)
+ pf->dplls.rclk.parent_idx[i] = d->base_rclk_idx + i;
+
+ ret = ice_dpll_init_pins_info(pf, ICE_DPLL_PIN_TYPE_RCLK_INPUT);
+ if (ret)
+ goto deinit_info;
+ dev_dbg(ice_pf_to_dev(pf),
+ "%s - success, inputs: %u, outputs: %u, rclk-parents: %u\n",
+ __func__, d->num_inputs, d->num_outputs, d->rclk.num_parents);
+ return 0;
+deinit_info:
+ ice_dpll_deinit_info(pf);
+ return ret;
+}
+
/**
* ice_dpll_init_info - prepare pf's dpll information structure
* @pf: board private structure
@@ -3773,14 +4285,16 @@ void ice_dpll_deinit(struct ice_pf *pf)
ice_dpll_deinit_worker(pf);
ice_dpll_deinit_pins(pf, cgu);
- ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu);
- ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu);
+ if (!IS_ERR_OR_NULL(pf->dplls.pps.dpll))
+ ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu);
+ if (!IS_ERR_OR_NULL(pf->dplls.eec.dpll))
+ ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu);
ice_dpll_deinit_info(pf);
mutex_destroy(&pf->dplls.lock);
}
/**
- * ice_dpll_init - initialize support for dpll subsystem
+ * ice_dpll_init_e825 - initialize support for dpll subsystem
* @pf: board private structure
*
* Set up the device dplls, register them and pins connected within Linux dpll
@@ -3789,7 +4303,43 @@ void ice_dpll_deinit(struct ice_pf *pf)
*
* Context: Initializes pf->dplls.lock mutex.
*/
-void ice_dpll_init(struct ice_pf *pf)
+static void ice_dpll_init_e825(struct ice_pf *pf)
+{
+ struct ice_dplls *d = &pf->dplls;
+ int err;
+
+ mutex_init(&d->lock);
+ init_completion(&d->dpll_init);
+
+ err = ice_dpll_init_info_e825c(pf);
+ if (err)
+ goto err_exit;
+ err = ice_dpll_init_pins_e825(pf);
+ if (err)
+ goto deinit_info;
+ set_bit(ICE_FLAG_DPLL, pf->flags);
+ complete_all(&d->dpll_init);
+
+ return;
+
+deinit_info:
+ ice_dpll_deinit_info(pf);
+err_exit:
+ mutex_destroy(&d->lock);
+ dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err);
+}
+
+/**
+ * ice_dpll_init_e810 - initialize support for dpll subsystem
+ * @pf: board private structure
+ *
+ * Set up the device dplls, register them and pins connected within Linux dpll
+ * subsystem. Allow userspace to obtain state of DPLL and handling of DPLL
+ * configuration requests.
+ *
+ * Context: Initializes pf->dplls.lock mutex.
+ */
+static void ice_dpll_init_e810(struct ice_pf *pf)
{
bool cgu = ice_is_feature_supported(pf, ICE_F_CGU);
struct ice_dplls *d = &pf->dplls;
@@ -3829,3 +4379,15 @@ void ice_dpll_init(struct ice_pf *pf)
mutex_destroy(&d->lock);
dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err);
}
+
+void ice_dpll_init(struct ice_pf *pf)
+{
+ switch (pf->hw.mac_type) {
+ case ICE_MAC_GENERIC_3K_E825:
+ ice_dpll_init_e825(pf);
+ break;
+ default:
+ ice_dpll_init_e810(pf);
+ break;
+ }
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h
index 63fac6510df6e..ae42cdea0ee14 100644
--- a/drivers/net/ethernet/intel/ice/ice_dpll.h
+++ b/drivers/net/ethernet/intel/ice/ice_dpll.h
@@ -20,6 +20,12 @@ enum ice_dpll_pin_sw {
ICE_DPLL_PIN_SW_NUM
};
+struct ice_dpll_pin_work {
+ struct work_struct work;
+ unsigned long action;
+ struct ice_dpll_pin *pin;
+};
+
/** ice_dpll_pin - store info about pins
* @pin: dpll pin structure
* @pf: pointer to pf, which has registered the dpll_pin
@@ -39,6 +45,8 @@ struct ice_dpll_pin {
struct dpll_pin *pin;
struct ice_pf *pf;
dpll_tracker tracker;
+ struct fwnode_handle *fwnode;
+ struct notifier_block nb;
u8 idx;
u8 num_parents;
u8 parent_idx[ICE_DPLL_RCLK_NUM_MAX];
@@ -118,7 +126,9 @@ struct ice_dpll {
struct ice_dplls {
struct kthread_worker *kworker;
struct kthread_delayed_work work;
+ struct workqueue_struct *wq;
struct mutex lock;
+ struct completion dpll_init;
struct ice_dpll eec;
struct ice_dpll pps;
struct ice_dpll_pin *inputs;
@@ -147,3 +157,19 @@ static inline void ice_dpll_deinit(struct ice_pf *pf) { }
#endif
#endif
+
+#define ICE_CGU_R10 0x28
+#define ICE_CGU_R10_SYNCE_CLKO_SEL GENMASK(8, 5)
+#define ICE_CGU_R10_SYNCE_CLKODIV_M1 GENMASK(13, 9)
+#define ICE_CGU_R10_SYNCE_CLKODIV_LOAD BIT(14)
+#define ICE_CGU_R10_SYNCE_DCK_RST BIT(15)
+#define ICE_CGU_R10_SYNCE_ETHCLKO_SEL GENMASK(18, 16)
+#define ICE_CGU_R10_SYNCE_ETHDIV_M1 GENMASK(23, 19)
+#define ICE_CGU_R10_SYNCE_ETHDIV_LOAD BIT(24)
+#define ICE_CGU_R10_SYNCE_DCK2_RST BIT(25)
+#define ICE_CGU_R10_SYNCE_S_REF_CLK GENMASK(31, 27)
+
+#define ICE_CGU_R11 0x2C
+#define ICE_CGU_R11_SYNCE_S_BYP_CLK GENMASK(6, 1)
+
+#define ICE_CGU_BYPASS_MUX_OFFSET_E825C 3
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 2522ebdea9139..d921269e1fe71 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -3989,6 +3989,9 @@ void ice_init_feature_support(struct ice_pf *pf)
break;
}
+ if (pf->hw.mac_type == ICE_MAC_GENERIC_3K_E825)
+ ice_set_feature_support(pf, ICE_F_PHY_RCLK);
+
if (pf->hw.mac_type == ICE_MAC_E830) {
ice_set_feature_support(pf, ICE_F_MBX_LIMIT);
ice_set_feature_support(pf, ICE_F_GCS);
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
index 4c8d20f2d2c0a..1d26be58e29a0 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
+++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
@@ -1341,6 +1341,38 @@ void ice_ptp_link_change(struct ice_pf *pf, bool linkup)
if (pf->hw.reset_ongoing)
return;
+ if (hw->mac_type == ICE_MAC_GENERIC_3K_E825) {
+ int pin, err;
+
+ if (!test_bit(ICE_FLAG_DPLL, pf->flags))
+ return;
+
+ mutex_lock(&pf->dplls.lock);
+ for (pin = 0; pin < ICE_SYNCE_CLK_NUM; pin++) {
+ enum ice_synce_clk clk_pin;
+ bool active;
+ u8 port_num;
+
+ port_num = ptp_port->port_num;
+ clk_pin = (enum ice_synce_clk)pin;
+ err = ice_tspll_bypass_mux_active_e825c(hw,
+ port_num,
+ &active,
+ clk_pin);
+ if (WARN_ON_ONCE(err)) {
+ mutex_unlock(&pf->dplls.lock);
+ return;
+ }
+
+ err = ice_tspll_cfg_synce_ethdiv_e825c(hw, clk_pin);
+ if (active && WARN_ON_ONCE(err)) {
+ mutex_unlock(&pf->dplls.lock);
+ return;
+ }
+ }
+ mutex_unlock(&pf->dplls.lock);
+ }
+
switch (hw->mac_type) {
case ICE_MAC_E810:
case ICE_MAC_E830:
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
index 35680dbe4a7f7..61c0a0d93ea89 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
@@ -5903,7 +5903,14 @@ int ice_get_cgu_rclk_pin_info(struct ice_hw *hw, u8 *base_idx, u8 *pin_num)
*base_idx = SI_REF1P;
else
ret = -ENODEV;
-
+ break;
+ case ICE_DEV_ID_E825C_BACKPLANE:
+ case ICE_DEV_ID_E825C_QSFP:
+ case ICE_DEV_ID_E825C_SFP:
+ case ICE_DEV_ID_E825C_SGMII:
+ *pin_num = ICE_SYNCE_CLK_NUM;
+ *base_idx = 0;
+ ret = 0;
break;
default:
ret = -ENODEV;
diff --git a/drivers/net/ethernet/intel/ice/ice_tspll.c b/drivers/net/ethernet/intel/ice/ice_tspll.c
index 66320a4ab86fd..fd4b58eb9bc00 100644
--- a/drivers/net/ethernet/intel/ice/ice_tspll.c
+++ b/drivers/net/ethernet/intel/ice/ice_tspll.c
@@ -624,3 +624,220 @@ int ice_tspll_init(struct ice_hw *hw)
return err;
}
+
+/**
+ * ice_tspll_bypass_mux_active_e825c - check if the given port is set active
+ * @hw: Pointer to the HW struct
+ * @port: Number of the port
+ * @active: Output flag showing if port is active
+ * @output: Output pin, we have two in E825C
+ *
+ * Check if given port is selected as recovered clock source for given output.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - error
+ */
+int ice_tspll_bypass_mux_active_e825c(struct ice_hw *hw, u8 port, bool *active,
+ enum ice_synce_clk output)
+{
+ u8 active_clk;
+ u32 val;
+ int err;
+
+ switch (output) {
+ case ICE_SYNCE_CLK0:
+ err = ice_read_cgu_reg(hw, ICE_CGU_R10, &val);
+ if (err)
+ return err;
+ active_clk = FIELD_GET(ICE_CGU_R10_SYNCE_S_REF_CLK, val);
+ break;
+ case ICE_SYNCE_CLK1:
+ err = ice_read_cgu_reg(hw, ICE_CGU_R11, &val);
+ if (err)
+ return err;
+ active_clk = FIELD_GET(ICE_CGU_R11_SYNCE_S_BYP_CLK, val);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (active_clk == port % hw->ptp.ports_per_phy +
+ ICE_CGU_BYPASS_MUX_OFFSET_E825C)
+ *active = true;
+ else
+ *active = false;
+
+ return 0;
+}
+
+/**
+ * ice_tspll_cfg_bypass_mux_e825c - configure reference clock mux
+ * @hw: Pointer to the HW struct
+ * @ena: true to enable the reference, false if disable
+ * @port_num: Number of the port
+ * @output: Output pin, we have two in E825C
+ *
+ * Set reference clock source and output clock selection.
+ *
+ * Context: Called under pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error
+ */
+int ice_tspll_cfg_bypass_mux_e825c(struct ice_hw *hw, bool ena, u32 port_num,
+ enum ice_synce_clk output)
+{
+ u8 first_mux;
+ int err;
+ u32 r10;
+
+ err = ice_read_cgu_reg(hw, ICE_CGU_R10, &r10);
+ if (err)
+ return err;
+
+ if (!ena)
+ first_mux = ICE_CGU_NET_REF_CLK0;
+ else
+ first_mux = port_num + ICE_CGU_BYPASS_MUX_OFFSET_E825C;
+
+ r10 &= ~(ICE_CGU_R10_SYNCE_DCK_RST | ICE_CGU_R10_SYNCE_DCK2_RST);
+
+ switch (output) {
+ case ICE_SYNCE_CLK0:
+ r10 &= ~(ICE_CGU_R10_SYNCE_ETHCLKO_SEL |
+ ICE_CGU_R10_SYNCE_ETHDIV_LOAD |
+ ICE_CGU_R10_SYNCE_S_REF_CLK);
+ r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_S_REF_CLK, first_mux);
+ r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_ETHCLKO_SEL,
+ ICE_CGU_REF_CLK_BYP0_DIV);
+ break;
+ case ICE_SYNCE_CLK1:
+ {
+ u32 val;
+
+ err = ice_read_cgu_reg(hw, ICE_CGU_R11, &val);
+ if (err)
+ return err;
+ val &= ~ICE_CGU_R11_SYNCE_S_BYP_CLK;
+ val |= FIELD_PREP(ICE_CGU_R11_SYNCE_S_BYP_CLK, first_mux);
+ err = ice_write_cgu_reg(hw, ICE_CGU_R11, val);
+ if (err)
+ return err;
+ r10 &= ~(ICE_CGU_R10_SYNCE_CLKODIV_LOAD |
+ ICE_CGU_R10_SYNCE_CLKO_SEL);
+ r10 |= FIELD_PREP(ICE_CGU_R10_SYNCE_CLKO_SEL,
+ ICE_CGU_REF_CLK_BYP1_DIV);
+ break;
+ }
+ default:
+ return -EINVAL;
+ }
+
+ err = ice_write_cgu_reg(hw, ICE_CGU_R10, r10);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+/**
+ * ice_tspll_get_div_e825c - get the divider for the given speed
+ * @link_speed: link speed of the port
+ * @divider: output value, calculated divider
+ *
+ * Get CGU divider value based on the link speed.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - error
+ */
+static int ice_tspll_get_div_e825c(u16 link_speed, unsigned int *divider)
+{
+ switch (link_speed) {
+ case ICE_AQ_LINK_SPEED_100GB:
+ case ICE_AQ_LINK_SPEED_50GB:
+ case ICE_AQ_LINK_SPEED_25GB:
+ *divider = 10;
+ break;
+ case ICE_AQ_LINK_SPEED_40GB:
+ case ICE_AQ_LINK_SPEED_10GB:
+ *divider = 4;
+ break;
+ case ICE_AQ_LINK_SPEED_5GB:
+ case ICE_AQ_LINK_SPEED_2500MB:
+ case ICE_AQ_LINK_SPEED_1000MB:
+ *divider = 2;
+ break;
+ case ICE_AQ_LINK_SPEED_100MB:
+ *divider = 1;
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+/**
+ * ice_tspll_cfg_synce_ethdiv_e825c - set the divider on the mux
+ * @hw: Pointer to the HW struct
+ * @output: Output pin, we have two in E825C
+ *
+ * Set the correct CGU divider for RCLKA or RCLKB.
+ *
+ * Context: Called under pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error
+ */
+int ice_tspll_cfg_synce_ethdiv_e825c(struct ice_hw *hw,
+ enum ice_synce_clk output)
+{
+ unsigned int divider;
+ u16 link_speed;
+ u32 val;
+ int err;
+
+ link_speed = hw->port_info->phy.link_info.link_speed;
+ if (!link_speed)
+ return 0;
+
+ err = ice_tspll_get_div_e825c(link_speed, &divider);
+ if (err)
+ return err;
+
+ err = ice_read_cgu_reg(hw, ICE_CGU_R10, &val);
+ if (err)
+ return err;
+
+ /* programmable divider value (from 2 to 16) minus 1 for ETHCLKOUT */
+ switch (output) {
+ case ICE_SYNCE_CLK0:
+ val &= ~(ICE_CGU_R10_SYNCE_ETHDIV_M1 |
+ ICE_CGU_R10_SYNCE_ETHDIV_LOAD);
+ val |= FIELD_PREP(ICE_CGU_R10_SYNCE_ETHDIV_M1, divider - 1);
+ err = ice_write_cgu_reg(hw, ICE_CGU_R10, val);
+ if (err)
+ return err;
+ val |= ICE_CGU_R10_SYNCE_ETHDIV_LOAD;
+ break;
+ case ICE_SYNCE_CLK1:
+ val &= ~(ICE_CGU_R10_SYNCE_CLKODIV_M1 |
+ ICE_CGU_R10_SYNCE_CLKODIV_LOAD);
+ val |= FIELD_PREP(ICE_CGU_R10_SYNCE_CLKODIV_M1, divider - 1);
+ err = ice_write_cgu_reg(hw, ICE_CGU_R10, val);
+ if (err)
+ return err;
+ val |= ICE_CGU_R10_SYNCE_CLKODIV_LOAD;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ err = ice_write_cgu_reg(hw, ICE_CGU_R10, val);
+ if (err)
+ return err;
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_tspll.h b/drivers/net/ethernet/intel/ice/ice_tspll.h
index c0b1232cc07c3..d650867004d1f 100644
--- a/drivers/net/ethernet/intel/ice/ice_tspll.h
+++ b/drivers/net/ethernet/intel/ice/ice_tspll.h
@@ -21,11 +21,22 @@ struct ice_tspll_params_e82x {
u32 frac_n_div;
};
+#define ICE_CGU_NET_REF_CLK0 0x0
+#define ICE_CGU_REF_CLK_BYP0 0x5
+#define ICE_CGU_REF_CLK_BYP0_DIV 0x0
+#define ICE_CGU_REF_CLK_BYP1 0x4
+#define ICE_CGU_REF_CLK_BYP1_DIV 0x1
+
#define ICE_TSPLL_CK_REFCLKFREQ_E825 0x1F
#define ICE_TSPLL_NDIVRATIO_E825 5
#define ICE_TSPLL_FBDIV_INTGR_E825 256
int ice_tspll_cfg_pps_out_e825c(struct ice_hw *hw, bool enable);
int ice_tspll_init(struct ice_hw *hw);
-
+int ice_tspll_bypass_mux_active_e825c(struct ice_hw *hw, u8 port, bool *active,
+ enum ice_synce_clk output);
+int ice_tspll_cfg_bypass_mux_e825c(struct ice_hw *hw, bool ena, u32 port_num,
+ enum ice_synce_clk output);
+int ice_tspll_cfg_synce_ethdiv_e825c(struct ice_hw *hw,
+ enum ice_synce_clk output);
#endif /* _ICE_TSPLL_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index 6a2ec8389a8f3..1e82f4c40b326 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -349,6 +349,12 @@ enum ice_clk_src {
NUM_ICE_CLK_SRC
};
+enum ice_synce_clk {
+ ICE_SYNCE_CLK0,
+ ICE_SYNCE_CLK1,
+ ICE_SYNCE_CLK_NUM
+};
+
struct ice_ts_func_info {
/* Function specific info */
enum ice_tspll_freq time_ref;
--
2.52.0
|
{
"author": "Ivan Vecera <ivecera@redhat.com>",
"date": "Mon, 2 Feb 2026 18:16:38 +0100",
"thread_id": "20260202171638.17427-7-ivecera@redhat.com.mbox.gz"
}
|
lkml
|
[PATCHSET v12 sched_ext/for-6.20] Add a deadline server for sched_ext tasks
|
sched_ext tasks can be starved by long-running RT tasks, especially since
RT throttling was replaced by deadline servers to boost only SCHED_NORMAL
tasks.
Several users in the community have reported issues with RT stalling
sched_ext tasks. This is fairly common on distributions or environments
where applications like video compositors, audio services, etc. run as RT
tasks by default.
Example trace (showing a per-CPU kthread stalled due to the sway Wayland
compositor running as an RT task):
runnable task stall (kworker/0:0[106377] failed to run for 5.043s)
...
CPU 0 : nr_run=3 flags=0xd cpu_rel=0 ops_qseq=20646200 pnt_seq=45388738
curr=sway[994] class=rt_sched_class
R kworker/0:0[106377] -5043ms
scx_state/flags=3/0x1 dsq_flags=0x0 ops_state/qseq=0/0
sticky/holding_cpu=-1/-1 dsq_id=0x8000000000000002 dsq_vtime=0 slice=20000000
cpus=01
This is often perceived as a bug in the BPF schedulers, but in reality they
can't do much: RT tasks run outside their control and can potentially
consume 100% of the CPU bandwidth.
Fix this by adding a sched_ext deadline server, so that sched_ext tasks are
also boosted and do not suffer starvation.
Two kselftests are also provided to verify the starvation fixes and that
bandwidth allocation is correct.
== Design ==
- The EXT server is initialized at boot time and remains configured
throughout the system's lifetime
- It starts automatically when the first sched_ext task is enqueued
(rq->scx.nr_running == 1)
- The server's pick function (ext_server_pick_task) always selects
sched_ext tasks when active
- Runtime accounting happens in update_curr_scx() during task execution
and update_curr_idle() when idle
- Bandwidth accounting includes both fair and ext servers in root domain
calculations
- A debugfs interface (/sys/kernel/debug/sched/ext_server/) allows runtime
tuning of server parameters (see notes below)
== Notes ==
1) As discussed during the sched_ext microconference at LPC Tokyo, the plan
is to start with a simple approach, avoiding automatically creating or
tearing down the EXT server bandwidth reservation when a BPF scheduler is
loaded or unloaded. Instead, the reservation is kept permanently active.
This significantly simplifies the logic while still addressing the
starvation issue.
Any fine-tuning of the bandwidth reservation is delegated to the system
administrator, who can adjust it via the debugfs interface. In the future,
a more suitable interface can be introduced and automatic removal of the
reservation when the BPF scheduler is unloaded can be revisited.
A better interface to adjust the dl_server bandwidth reservation can be
discussed at the upcoming OSPM
(https://lore.kernel.org/lkml/aULDwbALUj0V7cVk@jlelli-thinkpadt14gen4.remote.csb/).
2) IMPORTANT: this patch requires [1] to function properly (sent
separately, not included in this patch set).
[1] https://lore.kernel.org/all/20260123161645.2181752-1-arighi@nvidia.com/
This patchset is also available in the following git branch:
git://git.kernel.org/pub/scm/linux/kernel/git/arighi/linux.git scx-dl-server
Changes in v12:
- Move dl_server execution state reset on stop fix to a separate patch
(https://lore.kernel.org/all/20260123161645.2181752-1-arighi@nvidia.com/)
- Removed per-patch changelog (keeping a global changelog here)
- Link to v11: https://lore.kernel.org/all/20260120215808.188032-1-arighi@nvidia.com/
Changes in v11:
- do not create/remove the bandwidth reservation for the ext server when a
BPF scheduler is loaded/unloaded, but keep the reservation bandwidth
always active
- change rt_stall kselftest to validate both FAIR and EXT DL servers
- Link to v10: https://lore.kernel.org/all/20250903095008.162049-1-arighi@nvidia.com/
Changes in v10:
- reordered patches to better isolate sched_ext changes vs sched/deadline
changes (Andrea Righi)
- define ext_server only with CONFIG_SCHED_CLASS_EXT=y (Andrea Righi)
- add WARN_ON_ONCE(!cpus) check in dl_server_apply_params() (Andrea Righi)
- wait for inactive_task_timer to fire before removing the bandwidth
reservation (Juri Lelli)
- remove explicit dl_server_stop() in dequeue_task_scx() to reduce timer
reprogramming overhead (Juri Lelli)
- do not restart pick_task() when invoked by the dl_server (Tejun Heo)
- rename rq_dl_server to dl_server (Peter Zijlstra)
- fixed a missing dl_server start in dl_server_on() (Christian Loehle)
- add a comment to the rt_stall selftest to better explain the 4%
threshold (Emil Tsalapatis)
- Link to v9: https://lore.kernel.org/all/20251017093214.70029-1-arighi@nvidia.com/
Changes in v9:
- Drop the ->balance() logic as its functionality is now integrated into
->pick_task(), allowing dl_server to call pick_task_scx() directly
- Link to v8: https://lore.kernel.org/all/20250903095008.162049-1-arighi@nvidia.com/
Changes in v8:
- Add tj's patch to de-couple balance and pick_task and avoid changing
sched/core callbacks to propagate @rf
- Simplify dl_se->dl_server check (suggested by PeterZ)
- Small coding style fixes in the kselftests
- Link to v7: https://lore.kernel.org/all/20250809184800.129831-1-joelagnelf@nvidia.com/
Changes in v7:
- Rebased to Linus master
- Link to v6: https://lore.kernel.org/all/20250702232944.3221001-1-joelagnelf@nvidia.com/
Changes in v6:
- Added Acks to a few patches
- Fixes for a few nits suggested by Tejun
- Link to v5: https://lore.kernel.org/all/20250620203234.3349930-1-joelagnelf@nvidia.com/
Changes in v5:
- Added a kselftest (total_bw) to sched_ext to verify bandwidth values
from debugfs
- Address comment from Andrea about redundant rq clock invalidation
- Link to v4: https://lore.kernel.org/all/20250617200523.1261231-1-joelagnelf@nvidia.com/
Changes in v4:
- Fixed issues with hotplugged CPUs having their DL server bandwidth
altered due to loading SCX
- Fixed other issues
- Rebased on Linus master
- All sched_ext kselftests reliably pass now, also verified that the
total_bw in debugfs (CONFIG_SCHED_DEBUG) is conserved with these patches
- Link to v3: https://lore.kernel.org/all/20250613051734.4023260-1-joelagnelf@nvidia.com/
Changes in v3:
- Removed code duplication in debugfs. Made ext interface separate
- Fixed issue where rq_lock_irqsave was not used in the relinquish patch
- Fixed running bw accounting issue in dl_server_remove_params
- Link to v2: https://lore.kernel.org/all/20250602180110.816225-1-joelagnelf@nvidia.com/
Changes in v2:
- Fixed a hang related to using rq_lock instead of rq_lock_irqsave
- Added support to remove BW of DL servers when they are switched to/from EXT
- Link to v1: https://lore.kernel.org/all/20250315022158.2354454-1-joelagnelf@nvidia.com/
Andrea Righi (2):
sched_ext: Add a DL server for sched_ext tasks
selftests/sched_ext: Add test for sched_ext dl_server
Joel Fernandes (5):
sched/deadline: Clear the defer params
sched/debug: Fix updating of ppos on server write ops
sched/debug: Stop and start server based on if it was active
sched/debug: Add support to change sched_ext server params
selftests/sched_ext: Add test for DL server total_bw consistency
kernel/sched/core.c | 6 +
kernel/sched/deadline.c | 86 +++++--
kernel/sched/debug.c | 171 +++++++++++---
kernel/sched/ext.c | 33 +++
kernel/sched/idle.c | 3 +
kernel/sched/sched.h | 2 +
kernel/sched/topology.c | 5 +
tools/testing/selftests/sched_ext/Makefile | 2 +
tools/testing/selftests/sched_ext/rt_stall.bpf.c | 23 ++
tools/testing/selftests/sched_ext/rt_stall.c | 240 +++++++++++++++++++
tools/testing/selftests/sched_ext/total_bw.c | 281 +++++++++++++++++++++++
11 files changed, 801 insertions(+), 51 deletions(-)
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.c
create mode 100644 tools/testing/selftests/sched_ext/total_bw.c
|
From: Joel Fernandes <joelagnelf@nvidia.com>
The defer params were not cleared in __dl_clear_params. Clear them.
Without this, some of my test cases are flaking and the DL timer is
not starting correctly AFAICS.
Fixes: a110a81c52a9 ("sched/deadline: Deferrable dl server")
Tested-by: Christian Loehle <christian.loehle@arm.com>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Reviewed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
kernel/sched/deadline.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index e42867061ea77..28823f7eb8667 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -3646,6 +3646,9 @@ static void __dl_clear_params(struct sched_dl_entity *dl_se)
dl_se->dl_non_contending = 0;
dl_se->dl_overrun = 0;
dl_se->dl_server = 0;
+ dl_se->dl_defer = 0;
+ dl_se->dl_defer_running = 0;
+ dl_se->dl_defer_armed = 0;
#ifdef CONFIG_RT_MUTEXES
dl_se->pi_se = dl_se;
--
2.52.0
|
{
"author": "Andrea Righi <arighi@nvidia.com>",
"date": "Mon, 26 Jan 2026 10:58:59 +0100",
"thread_id": "aYDUqdQquFcqj7rQ@slm.duckdns.org.mbox.gz"
}
|
lkml
|
[PATCHSET v12 sched_ext/for-6.20] Add a deadline server for sched_ext tasks
|
sched_ext tasks can be starved by long-running RT tasks, especially since
RT throttling was replaced by deadline servers to boost only SCHED_NORMAL
tasks.
Several users in the community have reported issues with RT stalling
sched_ext tasks. This is fairly common on distributions or environments
where applications like video compositors, audio services, etc. run as RT
tasks by default.
Example trace (showing a per-CPU kthread stalled due to the sway Wayland
compositor running as an RT task):
runnable task stall (kworker/0:0[106377] failed to run for 5.043s)
...
CPU 0 : nr_run=3 flags=0xd cpu_rel=0 ops_qseq=20646200 pnt_seq=45388738
curr=sway[994] class=rt_sched_class
R kworker/0:0[106377] -5043ms
scx_state/flags=3/0x1 dsq_flags=0x0 ops_state/qseq=0/0
sticky/holding_cpu=-1/-1 dsq_id=0x8000000000000002 dsq_vtime=0 slice=20000000
cpus=01
This is often perceived as a bug in the BPF schedulers, but in reality they
can't do much: RT tasks run outside their control and can potentially
consume 100% of the CPU bandwidth.
Fix this by adding a sched_ext deadline server, so that sched_ext tasks are
also boosted and do not suffer starvation.
Two kselftests are also provided to verify the starvation fixes and that
bandwidth allocation is correct.
== Design ==
- The EXT server is initialized at boot time and remains configured
throughout the system's lifetime
- It starts automatically when the first sched_ext task is enqueued
(rq->scx.nr_running == 1)
- The server's pick function (ext_server_pick_task) always selects
sched_ext tasks when active
- Runtime accounting happens in update_curr_scx() during task execution
and update_curr_idle() when idle
- Bandwidth accounting includes both fair and ext servers in root domain
calculations
- A debugfs interface (/sys/kernel/debug/sched/ext_server/) allows runtime
tuning of server parameters (see notes below)
== Notes ==
1) As discussed during the sched_ext microconference at LPC Tokyo, the plan
is to start with a simple approach, avoiding automatically creating or
tearing down the EXT server bandwidth reservation when a BPF scheduler is
loaded or unloaded. Instead, the reservation is kept permanently active.
This significantly simplifies the logic while still addressing the
starvation issue.
Any fine-tuning of the bandwidth reservation is delegated to the system
administrator, who can adjust it via the debugfs interface. In the future,
a more suitable interface can be introduced and automatic removal of the
reservation when the BPF scheduler is unloaded can be revisited.
A better interface to adjust the dl_server bandwidth reservation can be
discussed at the upcoming OSPM
(https://lore.kernel.org/lkml/aULDwbALUj0V7cVk@jlelli-thinkpadt14gen4.remote.csb/).
2) IMPORTANT: this patch requires [1] to function properly (sent
separately, not included in this patch set).
[1] https://lore.kernel.org/all/20260123161645.2181752-1-arighi@nvidia.com/
This patchset is also available in the following git branch:
git://git.kernel.org/pub/scm/linux/kernel/git/arighi/linux.git scx-dl-server
Changes in v12:
- Move dl_server execution state reset on stop fix to a separate patch
(https://lore.kernel.org/all/20260123161645.2181752-1-arighi@nvidia.com/)
- Removed per-patch changelog (keeping a global changelog here)
- Link to v11: https://lore.kernel.org/all/20260120215808.188032-1-arighi@nvidia.com/
Changes in v11:
- do not create/remove the bandwidth reservation for the ext server when a
BPF scheduler is loaded/unloaded, but keep the reservation bandwidth
always active
- change rt_stall kselftest to validate both FAIR and EXT DL servers
- Link to v10: https://lore.kernel.org/all/20250903095008.162049-1-arighi@nvidia.com/
Changes in v10:
- reordered patches to better isolate sched_ext changes vs sched/deadline
changes (Andrea Righi)
- define ext_server only with CONFIG_SCHED_CLASS_EXT=y (Andrea Righi)
- add WARN_ON_ONCE(!cpus) check in dl_server_apply_params() (Andrea Righi)
- wait for inactive_task_timer to fire before removing the bandwidth
reservation (Juri Lelli)
- remove explicit dl_server_stop() in dequeue_task_scx() to reduce timer
reprogramming overhead (Juri Lelli)
- do not restart pick_task() when invoked by the dl_server (Tejun Heo)
- rename rq_dl_server to dl_server (Peter Zijlstra)
- fixed a missing dl_server start in dl_server_on() (Christian Loehle)
- add a comment to the rt_stall selftest to better explain the 4%
threshold (Emil Tsalapatis)
- Link to v9: https://lore.kernel.org/all/20251017093214.70029-1-arighi@nvidia.com/
Changes in v9:
- Drop the ->balance() logic as its functionality is now integrated into
->pick_task(), allowing dl_server to call pick_task_scx() directly
- Link to v8: https://lore.kernel.org/all/20250903095008.162049-1-arighi@nvidia.com/
Changes in v8:
- Add tj's patch to de-couple balance and pick_task and avoid changing
sched/core callbacks to propagate @rf
- Simplify dl_se->dl_server check (suggested by PeterZ)
- Small coding style fixes in the kselftests
- Link to v7: https://lore.kernel.org/all/20250809184800.129831-1-joelagnelf@nvidia.com/
Changes in v7:
- Rebased to Linus master
- Link to v6: https://lore.kernel.org/all/20250702232944.3221001-1-joelagnelf@nvidia.com/
Changes in v6:
- Added Acks to a few patches
- Fixes for a few nits suggested by Tejun
- Link to v5: https://lore.kernel.org/all/20250620203234.3349930-1-joelagnelf@nvidia.com/
Changes in v5:
- Added a kselftest (total_bw) to sched_ext to verify bandwidth values
from debugfs
- Address comment from Andrea about redundant rq clock invalidation
- Link to v4: https://lore.kernel.org/all/20250617200523.1261231-1-joelagnelf@nvidia.com/
Changes in v4:
- Fixed issues with hotplugged CPUs having their DL server bandwidth
altered due to loading SCX
- Fixed other issues
- Rebased on Linus master
- All sched_ext kselftests reliably pass now, also verified that the
total_bw in debugfs (CONFIG_SCHED_DEBUG) is conserved with these patches
- Link to v3: https://lore.kernel.org/all/20250613051734.4023260-1-joelagnelf@nvidia.com/
Changes in v3:
- Removed code duplication in debugfs. Made ext interface separate
- Fixed issue where rq_lock_irqsave was not used in the relinquish patch
- Fixed running bw accounting issue in dl_server_remove_params
- Link to v2: https://lore.kernel.org/all/20250602180110.816225-1-joelagnelf@nvidia.com/
Changes in v2:
- Fixed a hang related to using rq_lock instead of rq_lock_irqsave
- Added support to remove BW of DL servers when they are switched to/from EXT
- Link to v1: https://lore.kernel.org/all/20250315022158.2354454-1-joelagnelf@nvidia.com/
Andrea Righi (2):
sched_ext: Add a DL server for sched_ext tasks
selftests/sched_ext: Add test for sched_ext dl_server
Joel Fernandes (5):
sched/deadline: Clear the defer params
sched/debug: Fix updating of ppos on server write ops
sched/debug: Stop and start server based on if it was active
sched/debug: Add support to change sched_ext server params
selftests/sched_ext: Add test for DL server total_bw consistency
kernel/sched/core.c | 6 +
kernel/sched/deadline.c | 86 +++++--
kernel/sched/debug.c | 171 +++++++++++---
kernel/sched/ext.c | 33 +++
kernel/sched/idle.c | 3 +
kernel/sched/sched.h | 2 +
kernel/sched/topology.c | 5 +
tools/testing/selftests/sched_ext/Makefile | 2 +
tools/testing/selftests/sched_ext/rt_stall.bpf.c | 23 ++
tools/testing/selftests/sched_ext/rt_stall.c | 240 +++++++++++++++++++
tools/testing/selftests/sched_ext/total_bw.c | 281 +++++++++++++++++++++++
11 files changed, 801 insertions(+), 51 deletions(-)
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.c
create mode 100644 tools/testing/selftests/sched_ext/total_bw.c
|
From: Joel Fernandes <joelagnelf@nvidia.com>
Updating "ppos" on error conditions does not make much sense. The pattern
is to return the error code directly without modifying the position, or
modify the position on success and return the number of bytes written.
Since dl_server_apply_params() returns 0 on success, there is no point
in folding its return value into cnt (and thus ppos) either. Fix it by
removing all this and just returning the error code, or the number of
bytes written on success.
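For illustration only (a condensed sketch, not part of the patch), the
resulting shape of the write handler is roughly:

	retval = dl_server_apply_params(&rq->fair_server, runtime, period, 0);
	/* ... intervening lines omitted ... */
	if (retval < 0)
		return retval;		/* error: *ppos is left untouched */

	*ppos += cnt;			/* success: advance the file position */
	return cnt;			/* report the number of bytes consumed */

The trailing return of cnt is the pre-existing tail of
sched_fair_server_write() and is assumed here only to complete the picture.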
Tested-by: Christian Loehle <christian.loehle@arm.com>
Acked-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Juri Lelli <juri.lelli@redhat.com>
Reviewed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
kernel/sched/debug.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 41caa22e0680a..93f009e1076d8 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -345,8 +345,8 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
long cpu = (long) ((struct seq_file *) filp->private_data)->private;
struct rq *rq = cpu_rq(cpu);
u64 runtime, period;
+ int retval = 0;
size_t err;
- int retval;
u64 value;
err = kstrtoull_from_user(ubuf, cnt, 10, &value);
@@ -380,8 +380,6 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
dl_server_stop(&rq->fair_server);
retval = dl_server_apply_params(&rq->fair_server, runtime, period, 0);
- if (retval)
- cnt = retval;
if (!runtime)
printk_deferred("Fair server disabled in CPU %d, system may crash due to starvation.\n",
@@ -389,6 +387,9 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
if (rq->cfs.h_nr_queued)
dl_server_start(&rq->fair_server);
+
+ if (retval < 0)
+ return retval;
}
*ppos += cnt;
--
2.52.0
|
{
"author": "Andrea Righi <arighi@nvidia.com>",
"date": "Mon, 26 Jan 2026 10:59:00 +0100",
"thread_id": "aYDUqdQquFcqj7rQ@slm.duckdns.org.mbox.gz"
}
|
lkml
|
[PATCHSET v12 sched_ext/for-6.20] Add a deadline server for sched_ext tasks
|
sched_ext tasks can be starved by long-running RT tasks, especially since
RT throttling was replaced by deadline servers to boost only SCHED_NORMAL
tasks.
Several users in the community have reported issues with RT stalling
sched_ext tasks. This is fairly common on distributions or environments
where applications like video compositors, audio services, etc. run as RT
tasks by default.
Example trace (showing a per-CPU kthread stalled due to the sway Wayland
compositor running as an RT task):
runnable task stall (kworker/0:0[106377] failed to run for 5.043s)
...
CPU 0 : nr_run=3 flags=0xd cpu_rel=0 ops_qseq=20646200 pnt_seq=45388738
curr=sway[994] class=rt_sched_class
R kworker/0:0[106377] -5043ms
scx_state/flags=3/0x1 dsq_flags=0x0 ops_state/qseq=0/0
sticky/holding_cpu=-1/-1 dsq_id=0x8000000000000002 dsq_vtime=0 slice=20000000
cpus=01
This is often perceived as a bug in the BPF schedulers, but in reality they
can't do much: RT tasks run outside their control and can potentially
consume 100% of the CPU bandwidth.
Fix this by adding a sched_ext deadline server, so that sched_ext tasks are
also boosted and do not suffer starvation.
Two kselftests are also provided to verify the starvation fixes and that
bandwidth allocation is correct.
== Design ==
- The EXT server is initialized at boot time and remains configured
throughout the system's lifetime
- It starts automatically when the first sched_ext task is enqueued
(rq->scx.nr_running == 1)
- The server's pick function (ext_server_pick_task) always selects
sched_ext tasks when active
- Runtime accounting happens in update_curr_scx() during task execution
and update_curr_idle() when idle
- Bandwidth accounting includes both fair and ext servers in root domain
calculations
- A debugfs interface (/sys/kernel/debug/sched/ext_server/) allows runtime
tuning of server parameters (see notes below)
== Notes ==
1) As discussed during the sched_ext microconference at LPC Tokyo, the plan
is to start with a simple approach, avoiding automatically creating or
tearing down the EXT server bandwidth reservation when a BPF scheduler is
loaded or unloaded. Instead, the reservation is kept permanently active.
This significantly simplifies the logic while still addressing the
starvation issue.
Any fine-tuning of the bandwidth reservation is delegated to the system
administrator, who can adjust it via the debugfs interface. In the future,
a more suitable interface can be introduced and automatic removal of the
reservation when the BPF scheduler is unloaded can be revisited.
A better interface to adjust the dl_server bandwidth reservation can be
discussed at the upcoming OSPM
(https://lore.kernel.org/lkml/aULDwbALUj0V7cVk@jlelli-thinkpadt14gen4.remote.csb/).
2) IMPORTANT: this patch requires [1] to function properly (sent
separately, not included in this patch set).
[1] https://lore.kernel.org/all/20260123161645.2181752-1-arighi@nvidia.com/
This patchset is also available in the following git branch:
git://git.kernel.org/pub/scm/linux/kernel/git/arighi/linux.git scx-dl-server
Changes in v12:
- Move dl_server execution state reset on stop fix to a separate patch
(https://lore.kernel.org/all/20260123161645.2181752-1-arighi@nvidia.com/)
- Removed per-patch changelog (keeping a global changelog here)
- Link to v11: https://lore.kernel.org/all/20260120215808.188032-1-arighi@nvidia.com/
Changes in v11:
- do not create/remove the bandwidth reservation for the ext server when a
BPF scheduler is loaded/unloaded, but keep the reservation bandwidth
always active
- change rt_stall kselftest to validate both FAIR and EXT DL servers
- Link to v10: https://lore.kernel.org/all/20250903095008.162049-1-arighi@nvidia.com/
Changes in v10:
- reordered patches to better isolate sched_ext changes vs sched/deadline
changes (Andrea Righi)
- define ext_server only with CONFIG_SCHED_CLASS_EXT=y (Andrea Righi)
- add WARN_ON_ONCE(!cpus) check in dl_server_apply_params() (Andrea Righi)
- wait for inactive_task_timer to fire before removing the bandwidth
reservation (Juri Lelli)
- remove explicit dl_server_stop() in dequeue_task_scx() to reduce timer
reprogramming overhead (Juri Lelli)
- do not restart pick_task() when invoked by the dl_server (Tejun Heo)
- rename rq_dl_server to dl_server (Peter Zijlstra)
- fixed a missing dl_server start in dl_server_on() (Christian Loehle)
- add a comment to the rt_stall selftest to better explain the 4%
threshold (Emil Tsalapatis)
- Link to v9: https://lore.kernel.org/all/20251017093214.70029-1-arighi@nvidia.com/
Changes in v9:
- Drop the ->balance() logic as its functionality is now integrated into
->pick_task(), allowing dl_server to call pick_task_scx() directly
- Link to v8: https://lore.kernel.org/all/20250903095008.162049-1-arighi@nvidia.com/
Changes in v8:
- Add tj's patch to de-couple balance and pick_task and avoid changing
sched/core callbacks to propagate @rf
- Simplify dl_se->dl_server check (suggested by PeterZ)
- Small coding style fixes in the kselftests
- Link to v7: https://lore.kernel.org/all/20250809184800.129831-1-joelagnelf@nvidia.com/
Changes in v7:
- Rebased to Linus master
- Link to v6: https://lore.kernel.org/all/20250702232944.3221001-1-joelagnelf@nvidia.com/
Changes in v6:
- Added Acks to a few patches
- Fixes for a few nits suggested by Tejun
- Link to v5: https://lore.kernel.org/all/20250620203234.3349930-1-joelagnelf@nvidia.com/
Changes in v5:
- Added a kselftest (total_bw) to sched_ext to verify bandwidth values
from debugfs
- Address comment from Andrea about redundant rq clock invalidation
- Link to v4: https://lore.kernel.org/all/20250617200523.1261231-1-joelagnelf@nvidia.com/
Changes in v4:
- Fixed issues with hotplugged CPUs having their DL server bandwidth
altered due to loading SCX
- Fixed other issues
- Rebased on Linus master
- All sched_ext kselftests reliably pass now, also verified that the
total_bw in debugfs (CONFIG_SCHED_DEBUG) is conserved with these patches
- Link to v3: https://lore.kernel.org/all/20250613051734.4023260-1-joelagnelf@nvidia.com/
Changes in v3:
- Removed code duplication in debugfs. Made ext interface separate
- Fixed issue where rq_lock_irqsave was not used in the relinquish patch
- Fixed running bw accounting issue in dl_server_remove_params
- Link to v2: https://lore.kernel.org/all/20250602180110.816225-1-joelagnelf@nvidia.com/
Changes in v2:
- Fixed a hang related to using rq_lock instead of rq_lock_irqsave
- Added support to remove BW of DL servers when they are switched to/from EXT
- Link to v1: https://lore.kernel.org/all/20250315022158.2354454-1-joelagnelf@nvidia.com/
Andrea Righi (2):
sched_ext: Add a DL server for sched_ext tasks
selftests/sched_ext: Add test for sched_ext dl_server
Joel Fernandes (5):
sched/deadline: Clear the defer params
sched/debug: Fix updating of ppos on server write ops
sched/debug: Stop and start server based on if it was active
sched/debug: Add support to change sched_ext server params
selftests/sched_ext: Add test for DL server total_bw consistency
kernel/sched/core.c | 6 +
kernel/sched/deadline.c | 86 +++++--
kernel/sched/debug.c | 171 +++++++++++---
kernel/sched/ext.c | 33 +++
kernel/sched/idle.c | 3 +
kernel/sched/sched.h | 2 +
kernel/sched/topology.c | 5 +
tools/testing/selftests/sched_ext/Makefile | 2 +
tools/testing/selftests/sched_ext/rt_stall.bpf.c | 23 ++
tools/testing/selftests/sched_ext/rt_stall.c | 240 +++++++++++++++++++
tools/testing/selftests/sched_ext/total_bw.c | 281 +++++++++++++++++++++++
11 files changed, 801 insertions(+), 51 deletions(-)
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.c
create mode 100644 tools/testing/selftests/sched_ext/total_bw.c
|
From: Joel Fernandes <joelagnelf@nvidia.com>
Currently the DL server interface for applying parameters checks
CFS-internals to identify if the server is active. This is error-prone
and makes it harder to add new servers in the future.
Fix it by using dl_server_active(), which is also what the DL server
code uses to determine whether the server was started.
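Condensed from the hunk below (sketch only, not a literal quote), the new
flow is:

	is_active = dl_server_active(&rq->fair_server);
	if (is_active) {
		update_rq_clock(rq);
		dl_server_stop(&rq->fair_server);
	}

	retval = dl_server_apply_params(&rq->fair_server, runtime, period, 0);
	/* ... */
	if (is_active && runtime)
		dl_server_start(&rq->fair_server);

so a server that was never started is neither stopped nor restarted, and a
write that disables the server (runtime == 0) does not start it again.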
Tested-by: Christian Loehle <christian.loehle@arm.com>
Acked-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Juri Lelli <juri.lelli@redhat.com>
Reviewed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
---
kernel/sched/debug.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 93f009e1076d8..dd793f8f3858a 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -354,6 +354,8 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
return err;
scoped_guard (rq_lock_irqsave, rq) {
+ bool is_active;
+
runtime = rq->fair_server.dl_runtime;
period = rq->fair_server.dl_period;
@@ -376,8 +378,11 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
return -EINVAL;
}
- update_rq_clock(rq);
- dl_server_stop(&rq->fair_server);
+ is_active = dl_server_active(&rq->fair_server);
+ if (is_active) {
+ update_rq_clock(rq);
+ dl_server_stop(&rq->fair_server);
+ }
retval = dl_server_apply_params(&rq->fair_server, runtime, period, 0);
@@ -385,7 +390,7 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
printk_deferred("Fair server disabled in CPU %d, system may crash due to starvation.\n",
cpu_of(rq));
- if (rq->cfs.h_nr_queued)
+ if (is_active && runtime)
dl_server_start(&rq->fair_server);
if (retval < 0)
--
2.52.0
|
{
"author": "Andrea Righi <arighi@nvidia.com>",
"date": "Mon, 26 Jan 2026 10:59:01 +0100",
"thread_id": "aYDUqdQquFcqj7rQ@slm.duckdns.org.mbox.gz"
}
|
lkml
|
[PATCHSET v12 sched_ext/for-6.20] Add a deadline server for sched_ext tasks
|
sched_ext tasks can be starved by long-running RT tasks, especially since
RT throttling was replaced by deadline servers to boost only SCHED_NORMAL
tasks.
Several users in the community have reported issues with RT stalling
sched_ext tasks. This is fairly common on distributions or environments
where applications like video compositors, audio services, etc. run as RT
tasks by default.
Example trace (showing a per-CPU kthread stalled due to the sway Wayland
compositor running as an RT task):
runnable task stall (kworker/0:0[106377] failed to run for 5.043s)
...
CPU 0 : nr_run=3 flags=0xd cpu_rel=0 ops_qseq=20646200 pnt_seq=45388738
curr=sway[994] class=rt_sched_class
R kworker/0:0[106377] -5043ms
scx_state/flags=3/0x1 dsq_flags=0x0 ops_state/qseq=0/0
sticky/holding_cpu=-1/-1 dsq_id=0x8000000000000002 dsq_vtime=0 slice=20000000
cpus=01
This is often perceived as a bug in the BPF schedulers, but in reality they
can't do much: RT tasks run outside their control and can potentially
consume 100% of the CPU bandwidth.
Fix this by adding a sched_ext deadline server, so that sched_ext tasks are
also boosted and do not suffer starvation.
Two kselftests are also provided to verify the starvation fixes and that
bandwidth allocation is correct.
== Design ==
- The EXT server is initialized at boot time and remains configured
throughout the system's lifetime
- It starts automatically when the first sched_ext task is enqueued
(rq->scx.nr_running == 1)
- The server's pick function (ext_server_pick_task) always selects
sched_ext tasks when active
- Runtime accounting happens in update_curr_scx() during task execution
and update_curr_idle() when idle
- Bandwidth accounting includes both fair and ext servers in root domain
calculations
- A debugfs interface (/sys/kernel/debug/sched/ext_server/) allows runtime
tuning of server parameters (see notes below)
== Notes ==
1) As discussed during the sched_ext microconference at LPC Tokyo, the plan
is to start with a simple approach, avoiding automatically creating or
tearing down the EXT server bandwidth reservation when a BPF scheduler is
loaded or unloaded. Instead, the reservation is kept permanently active.
This significantly simplifies the logic while still addressing the
starvation issue.
Any fine-tuning of the bandwidth reservation is delegated to the system
administrator, who can adjust it via the debugfs interface. In the future,
a more suitable interface can be introduced and automatic removal of the
reservation when the BPF scheduler is unloaded can be revisited.
A better interface to adjust the dl_server bandwidth reservation can be
discussed at the upcoming OSPM
(https://lore.kernel.org/lkml/aULDwbALUj0V7cVk@jlelli-thinkpadt14gen4.remote.csb/).
2) IMPORTANT: this patch requires [1] to function properly (sent
separately, not included in this patch set).
[1] https://lore.kernel.org/all/20260123161645.2181752-1-arighi@nvidia.com/
This patchset is also available in the following git branch:
git://git.kernel.org/pub/scm/linux/kernel/git/arighi/linux.git scx-dl-server
Changes in v12:
- Move dl_server execution state reset on stop fix to a separate patch
(https://lore.kernel.org/all/20260123161645.2181752-1-arighi@nvidia.com/)
- Removed per-patch changelog (keeping a global changelog here)
- Link to v11: https://lore.kernel.org/all/20260120215808.188032-1-arighi@nvidia.com/
Changes in v11:
- do not create/remove the bandwidth reservation for the ext server when a
BPF scheduler is loaded/unloaded, but keep the reservation bandwidth
always active
- change rt_stall kselftest to validate both FAIR and EXT DL servers
- Link to v10: https://lore.kernel.org/all/20250903095008.162049-1-arighi@nvidia.com/
Changes in v10:
- reordered patches to better isolate sched_ext changes vs sched/deadline
changes (Andrea Righi)
- define ext_server only with CONFIG_SCHED_CLASS_EXT=y (Andrea Righi)
- add WARN_ON_ONCE(!cpus) check in dl_server_apply_params() (Andrea Righi)
- wait for inactive_task_timer to fire before removing the bandwidth
reservation (Juri Lelli)
- remove explicit dl_server_stop() in dequeue_task_scx() to reduce timer
reprogramming overhead (Juri Lelli)
- do not restart pick_task() when invoked by the dl_server (Tejun Heo)
- rename rq_dl_server to dl_server (Peter Zijlstra)
- fixed a missing dl_server start in dl_server_on() (Christian Loehle)
- add a comment to the rt_stall selftest to better explain the 4%
threshold (Emil Tsalapatis)
- Link to v9: https://lore.kernel.org/all/20251017093214.70029-1-arighi@nvidia.com/
Changes in v9:
- Drop the ->balance() logic as its functionality is now integrated into
->pick_task(), allowing dl_server to call pick_task_scx() directly
- Link to v8: https://lore.kernel.org/all/20250903095008.162049-1-arighi@nvidia.com/
Changes in v8:
- Add tj's patch to de-couple balance and pick_task and avoid changing
sched/core callbacks to propagate @rf
- Simplify dl_se->dl_server check (suggested by PeterZ)
- Small coding style fixes in the kselftests
- Link to v7: https://lore.kernel.org/all/20250809184800.129831-1-joelagnelf@nvidia.com/
Changes in v7:
- Rebased to Linus master
- Link to v6: https://lore.kernel.org/all/20250702232944.3221001-1-joelagnelf@nvidia.com/
Changes in v6:
- Added Acks to a few patches
- Fixes for a few nits suggested by Tejun
- Link to v5: https://lore.kernel.org/all/20250620203234.3349930-1-joelagnelf@nvidia.com/
Changes in v5:
- Added a kselftest (total_bw) to sched_ext to verify bandwidth values
from debugfs
- Address comment from Andrea about redundant rq clock invalidation
- Link to v4: https://lore.kernel.org/all/20250617200523.1261231-1-joelagnelf@nvidia.com/
Changes in v4:
- Fixed issues with hotplugged CPUs having their DL server bandwidth
altered due to loading SCX
- Fixed other issues
- Rebased on Linus master
- All sched_ext kselftests reliably pass now, also verified that the
total_bw in debugfs (CONFIG_SCHED_DEBUG) is conserved with these patches
- Link to v3: https://lore.kernel.org/all/20250613051734.4023260-1-joelagnelf@nvidia.com/
Changes in v3:
- Removed code duplication in debugfs. Made ext interface separate
- Fixed issue where rq_lock_irqsave was not used in the relinquish patch
- Fixed running bw accounting issue in dl_server_remove_params
- Link to v2: https://lore.kernel.org/all/20250602180110.816225-1-joelagnelf@nvidia.com/
Changes in v2:
- Fixed a hang related to using rq_lock instead of rq_lock_irqsave
- Added support to remove BW of DL servers when they are switched to/from EXT
- Link to v1: https://lore.kernel.org/all/20250315022158.2354454-1-joelagnelf@nvidia.com/
Andrea Righi (2):
sched_ext: Add a DL server for sched_ext tasks
selftests/sched_ext: Add test for sched_ext dl_server
Joel Fernandes (5):
sched/deadline: Clear the defer params
sched/debug: Fix updating of ppos on server write ops
sched/debug: Stop and start server based on if it was active
sched/debug: Add support to change sched_ext server params
selftests/sched_ext: Add test for DL server total_bw consistency
kernel/sched/core.c | 6 +
kernel/sched/deadline.c | 86 +++++--
kernel/sched/debug.c | 171 +++++++++++---
kernel/sched/ext.c | 33 +++
kernel/sched/idle.c | 3 +
kernel/sched/sched.h | 2 +
kernel/sched/topology.c | 5 +
tools/testing/selftests/sched_ext/Makefile | 2 +
tools/testing/selftests/sched_ext/rt_stall.bpf.c | 23 ++
tools/testing/selftests/sched_ext/rt_stall.c | 240 +++++++++++++++++++
tools/testing/selftests/sched_ext/total_bw.c | 281 +++++++++++++++++++++++
11 files changed, 801 insertions(+), 51 deletions(-)
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/rt_stall.c
create mode 100644 tools/testing/selftests/sched_ext/total_bw.c
|
sched_ext currently suffers starvation due to RT. The same workload, when
converted to EXT, can get zero runtime if RT runs 100% of the time, causing
EXT processes to stall. Fix it by adding a DL server for EXT.
A kselftest is also included later to confirm that both DL servers are
functioning correctly:
# ./runner -t rt_stall
===== START =====
TEST: rt_stall
DESCRIPTION: Verify that RT tasks cannot stall SCHED_EXT tasks
OUTPUT:
TAP version 13
1..1
# Runtime of FAIR task (PID 1511) is 0.250000 seconds
# Runtime of RT task (PID 1512) is 4.750000 seconds
# FAIR task got 5.00% of total runtime
ok 1 PASS: FAIR task got more than 4.00% of runtime
TAP version 13
1..1
# Runtime of EXT task (PID 1514) is 0.250000 seconds
# Runtime of RT task (PID 1515) is 4.750000 seconds
# EXT task got 5.00% of total runtime
ok 2 PASS: EXT task got more than 4.00% of runtime
TAP version 13
1..1
# Runtime of FAIR task (PID 1517) is 0.250000 seconds
# Runtime of RT task (PID 1518) is 4.750000 seconds
# FAIR task got 5.00% of total runtime
ok 3 PASS: FAIR task got more than 4.00% of runtime
TAP version 13
1..1
# Runtime of EXT task (PID 1521) is 0.250000 seconds
# Runtime of RT task (PID 1522) is 4.750000 seconds
# EXT task got 5.00% of total runtime
ok 4 PASS: EXT task got more than 4.00% of runtime
ok 1 rt_stall #
===== END =====
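As a sanity check on the numbers above: the EXT task runs for 0.25 s out of
0.25 s + 4.75 s = 5.0 s, i.e. 0.25 / 5.0 = 5% of the total runtime. The 4%
pass threshold sits deliberately below the assumed default dl_server
reservation of 50 ms per 1 s period (5%), leaving some slack for measurement
noise.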
Reviewed-by: Juri Lelli <juri.lelli@redhat.com>
Tested-by: Christian Loehle <christian.loehle@arm.com>
Co-developed-by: Joel Fernandes <joelagnelf@nvidia.com>
Signed-off-by: Joel Fernandes <joelagnelf@nvidia.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/core.c | 6 +++
kernel/sched/deadline.c | 83 +++++++++++++++++++++++++++++------------
kernel/sched/ext.c | 33 ++++++++++++++++
kernel/sched/idle.c | 3 ++
kernel/sched/sched.h | 2 +
kernel/sched/topology.c | 5 +++
6 files changed, 109 insertions(+), 23 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 045f83ad261e2..88476d8b4e3d2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8477,6 +8477,9 @@ int sched_cpu_dying(unsigned int cpu)
dump_rq_tasks(rq, KERN_WARNING);
}
dl_server_stop(&rq->fair_server);
+#ifdef CONFIG_SCHED_CLASS_EXT
+ dl_server_stop(&rq->ext_server);
+#endif
rq_unlock_irqrestore(rq, &rf);
calc_load_migrate(rq);
@@ -8680,6 +8683,9 @@ void __init sched_init(void)
hrtick_rq_init(rq);
atomic_set(&rq->nr_iowait, 0);
fair_server_init(rq);
+#ifdef CONFIG_SCHED_CLASS_EXT
+ ext_server_init(rq);
+#endif
#ifdef CONFIG_SCHED_CORE
rq->core = rq;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 28823f7eb8667..fda77512c6e47 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1443,8 +1443,8 @@ static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64
dl_se->dl_defer_idle = 0;
/*
- * The fair server can consume its runtime while throttled (not queued/
- * running as regular CFS).
+ * The DL server can consume its runtime while throttled (not
+ * queued / running as regular CFS).
*
* If the server consumes its entire runtime in this state. The server
* is not required for the current period. Thus, reset the server by
@@ -1529,10 +1529,10 @@ static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64
}
/*
- * The fair server (sole dl_server) does not account for real-time
- * workload because it is running fair work.
+ * The dl_server does not account for real-time workload because it
+ * is running fair work.
*/
- if (dl_se == &rq->fair_server)
+ if (dl_se->dl_server)
return;
#ifdef CONFIG_RT_GROUP_SCHED
@@ -1567,9 +1567,9 @@ static void update_curr_dl_se(struct rq *rq, struct sched_dl_entity *dl_se, s64
* In the non-defer mode, the idle time is not accounted, as the
* server provides a guarantee.
*
- * If the dl_server is in defer mode, the idle time is also considered
- * as time available for the fair server, avoiding a penalty for the
- * rt scheduler that did not consumed that time.
+ * If the dl_server is in defer mode, the idle time is also considered as
+ * time available for the dl_server, avoiding a penalty for the rt
+ * scheduler that did not consumed that time.
*/
void dl_server_update_idle(struct sched_dl_entity *dl_se, s64 delta_exec)
{
@@ -1850,6 +1850,18 @@ void sched_init_dl_servers(void)
dl_se->dl_server = 1;
dl_se->dl_defer = 1;
setup_new_dl_entity(dl_se);
+
+#ifdef CONFIG_SCHED_CLASS_EXT
+ dl_se = &rq->ext_server;
+
+ WARN_ON(dl_server(dl_se));
+
+ dl_server_apply_params(dl_se, runtime, period, 1);
+
+ dl_se->dl_server = 1;
+ dl_se->dl_defer = 1;
+ setup_new_dl_entity(dl_se);
+#endif
}
}
@@ -3181,6 +3193,36 @@ void dl_add_task_root_domain(struct task_struct *p)
raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
}
+static void dl_server_add_bw(struct root_domain *rd, int cpu)
+{
+ struct sched_dl_entity *dl_se;
+
+ dl_se = &cpu_rq(cpu)->fair_server;
+ if (dl_server(dl_se) && cpu_active(cpu))
+ __dl_add(&rd->dl_bw, dl_se->dl_bw, dl_bw_cpus(cpu));
+
+#ifdef CONFIG_SCHED_CLASS_EXT
+ dl_se = &cpu_rq(cpu)->ext_server;
+ if (dl_server(dl_se) && cpu_active(cpu))
+ __dl_add(&rd->dl_bw, dl_se->dl_bw, dl_bw_cpus(cpu));
+#endif
+}
+
+static u64 dl_server_read_bw(int cpu)
+{
+ u64 dl_bw = 0;
+
+ if (cpu_rq(cpu)->fair_server.dl_server)
+ dl_bw += cpu_rq(cpu)->fair_server.dl_bw;
+
+#ifdef CONFIG_SCHED_CLASS_EXT
+ if (cpu_rq(cpu)->ext_server.dl_server)
+ dl_bw += cpu_rq(cpu)->ext_server.dl_bw;
+#endif
+
+ return dl_bw;
+}
+
void dl_clear_root_domain(struct root_domain *rd)
{
int i;
@@ -3199,12 +3241,8 @@ void dl_clear_root_domain(struct root_domain *rd)
* dl_servers are not tasks. Since dl_add_task_root_domain ignores
* them, we need to account for them here explicitly.
*/
- for_each_cpu(i, rd->span) {
- struct sched_dl_entity *dl_se = &cpu_rq(i)->fair_server;
-
- if (dl_server(dl_se) && cpu_active(i))
- __dl_add(&rd->dl_bw, dl_se->dl_bw, dl_bw_cpus(i));
- }
+ for_each_cpu(i, rd->span)
+ dl_server_add_bw(rd, i);
}
void dl_clear_root_domain_cpu(int cpu)
@@ -3706,7 +3744,7 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
unsigned long flags, cap;
struct dl_bw *dl_b;
bool overflow = 0;
- u64 fair_server_bw = 0;
+ u64 dl_server_bw = 0;
rcu_read_lock_sched();
dl_b = dl_bw_of(cpu);
@@ -3739,27 +3777,26 @@ static int dl_bw_manage(enum dl_bw_request req, int cpu, u64 dl_bw)
cap -= arch_scale_cpu_capacity(cpu);
/*
- * cpu is going offline and NORMAL tasks will be moved away
- * from it. We can thus discount dl_server bandwidth
- * contribution as it won't need to be servicing tasks after
- * the cpu is off.
+ * cpu is going offline and NORMAL and EXT tasks will be
+ * moved away from it. We can thus discount dl_server
+ * bandwidth contribution as it won't need to be servicing
+ * tasks after the cpu is off.
*/
- if (cpu_rq(cpu)->fair_server.dl_server)
- fair_server_bw = cpu_rq(cpu)->fair_server.dl_bw;
+ dl_server_bw = dl_server_read_bw(cpu);
/*
* Not much to check if no DEADLINE bandwidth is present.
* dl_servers we can discount, as tasks will be moved out the
* offlined CPUs anyway.
*/
- if (dl_b->total_bw - fair_server_bw > 0) {
+ if (dl_b->total_bw - dl_server_bw > 0) {
/*
* Leaving at least one CPU for DEADLINE tasks seems a
* wise thing to do. As said above, cpu is not offline
* yet, so account for that.
*/
if (dl_bw_cpus(cpu) - 1)
- overflow = __dl_overflow(dl_b, cap, fair_server_bw, 0);
+ overflow = __dl_overflow(dl_b, cap, dl_server_bw, 0);
else
overflow = 1;
}
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index afe28c04d5aa7..809f774183202 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -958,6 +958,8 @@ static void update_curr_scx(struct rq *rq)
if (!curr->scx.slice)
touch_core_sched(rq, curr);
}
+
+ dl_server_update(&rq->ext_server, delta_exec);
}
static bool scx_dsq_priq_less(struct rb_node *node_a,
@@ -1501,6 +1503,10 @@ static void enqueue_task_scx(struct rq *rq, struct task_struct *p, int enq_flags
if (enq_flags & SCX_ENQ_WAKEUP)
touch_core_sched(rq, p);
+ /* Start dl_server if this is the first task being enqueued */
+ if (rq->scx.nr_running == 1)
+ dl_server_start(&rq->ext_server);
+
do_enqueue_task(rq, p, enq_flags, sticky_cpu);
out:
rq->scx.flags &= ~SCX_RQ_IN_WAKEUP;
@@ -2512,6 +2518,33 @@ static struct task_struct *pick_task_scx(struct rq *rq, struct rq_flags *rf)
return do_pick_task_scx(rq, rf, false);
}
+/*
+ * Select the next task to run from the ext scheduling class.
+ *
+ * Use do_pick_task_scx() directly with @force_scx enabled, since the
+ * dl_server must always select a sched_ext task.
+ */
+static struct task_struct *
+ext_server_pick_task(struct sched_dl_entity *dl_se, struct rq_flags *rf)
+{
+ if (!scx_enabled())
+ return NULL;
+
+ return do_pick_task_scx(dl_se->rq, rf, true);
+}
+
+/*
+ * Initialize the ext server deadline entity.
+ */
+void ext_server_init(struct rq *rq)
+{
+ struct sched_dl_entity *dl_se = &rq->ext_server;
+
+ init_dl_entity(dl_se);
+
+ dl_server_init(dl_se, rq, ext_server_pick_task);
+}
+
#ifdef CONFIG_SCHED_CORE
/**
* scx_prio_less - Task ordering for core-sched
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index abf8f15d60c9e..d6b4cda176ccf 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -536,6 +536,9 @@ static void update_curr_idle(struct rq *rq)
se->exec_start = now;
dl_server_update_idle(&rq->fair_server, delta_exec);
+#ifdef CONFIG_SCHED_CLASS_EXT
+ dl_server_update_idle(&rq->ext_server, delta_exec);
+#endif
}
/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 93fce4bbff5ea..d630f46325379 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -414,6 +414,7 @@ extern void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
extern void sched_init_dl_servers(void);
extern void fair_server_init(struct rq *rq);
+extern void ext_server_init(struct rq *rq);
extern void __dl_server_attach_root(struct sched_dl_entity *dl_se, struct rq *rq);
extern int dl_server_apply_params(struct sched_dl_entity *dl_se,
u64 runtime, u64 period, bool init);
@@ -1151,6 +1152,7 @@ struct rq {
struct dl_rq dl;
#ifdef CONFIG_SCHED_CLASS_EXT
struct scx_rq scx;
+ struct sched_dl_entity ext_server;
#endif
struct sched_dl_entity fair_server;
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index cf643a5ddedd2..ac268da917781 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -508,6 +508,11 @@ void rq_attach_root(struct rq *rq, struct root_domain *rd)
if (rq->fair_server.dl_server)
__dl_server_attach_root(&rq->fair_server, rq);
+#ifdef CONFIG_SCHED_CLASS_EXT
+ if (rq->ext_server.dl_server)
+ __dl_server_attach_root(&rq->ext_server, rq);
+#endif
+
rq_unlock_irqrestore(rq, &rf);
if (old_rd)
--
2.52.0
|
{
"author": "Andrea Righi <arighi@nvidia.com>",
"date": "Mon, 26 Jan 2026 10:59:02 +0100",
"thread_id": "aYDUqdQquFcqj7rQ@slm.duckdns.org.mbox.gz"
}
|