lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources such as clocks,
interconnect paths, and TLMM (GPIOs) through runtime PM framework APIs,
such as resume/suspend, to control power states (on/off).
The SCMI performance protocol manages the I2C frequency, with each
frequency represented by a performance level. The driver uses the
geni_se_set_perf_opp() API to request the desired frequency.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
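The "floor" selection rule described above can be modeled in standalone C. This is only an illustrative sketch of what dev_pm_opp_find_freq_floor() does conceptually (pick the highest supported rate not exceeding the request); the table contents and the helper name are invented for this example, not taken from the kernel.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative table of supported rates, sorted ascending (invented values). */
const unsigned long opp_freqs[] = {
	100000UL, 9600000UL, 19200000UL, 32000000UL,
};

/*
 * Return the highest supported frequency that does not exceed the
 * requested one, or -1 when no entry fits (the kernel helper instead
 * returns an OPP handle or an ERR_PTR).
 */
long opp_find_freq_floor(unsigned long req)
{
	long best = -1;
	size_t i;

	for (i = 0; i < sizeof(opp_freqs) / sizeof(opp_freqs[0]); i++) {
		if (opp_freqs[i] <= req)
			best = (long)opp_freqs[i];
	}
	return best;
}
```

An exact match is returned as-is; a request between two levels falls back to the lower one, which is why the cover letter can map every I2C clock rate onto a discrete performance level.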
Praveen Talari (13):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and
geni_se_clks_on()
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
---
v3->v4
- Added a new patch(4/13) to handle core clk as part of
geni_se_clks_off/on().
---
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++--
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 476 insertions(+), 175 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d
--
2.34.1
|
The GENI Serial Engine drivers (I2C, SPI, and SERIAL) currently each
handle the attachment of power domains, which duplicates the same
logic across their probe functions.
Introduce a new helper API, geni_se_domain_attach(), to centralize
the logic for attaching the "power" and "perf" domains to the GENI SE
device.
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3->v4
Konrad
- Updated function documentation
---
drivers/soc/qcom/qcom-geni-se.c | 29 +++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 4 ++++
2 files changed, 33 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index 17ab5bbeb621..d80ae6c36582 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -19,6 +19,7 @@
#include <linux/of_platform.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
+#include <linux/pm_domain.h>
#include <linux/pm_opp.h>
#include <linux/soc/qcom/geni-se.h>
@@ -1092,6 +1093,34 @@ int geni_se_resources_activate(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_se_resources_activate);
+/**
+ * geni_se_domain_attach() - Attach power domains to a GENI SE device.
+ * @se: Pointer to the geni_se structure representing the GENI SE device.
+ *
+ * This function attaches the power domains ("power" and "perf") required
+ * in the SCMI auto-VM environment to the GENI Serial Engine device. It
+ * initializes se->pd_list with the attached domains.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int geni_se_domain_attach(struct geni_se *se)
+{
+ struct dev_pm_domain_attach_data pd_data = {
+ .pd_flags = PD_FLAG_DEV_LINK_ON,
+ .pd_names = (const char*[]) { "power", "perf" },
+ .num_pd_names = 2,
+ };
+ int ret;
+
+ ret = dev_pm_domain_attach_list(se->dev,
+ &pd_data, &se->pd_list);
+ if (ret <= 0)
+ return -EINVAL;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(geni_se_domain_attach);
+
/**
* geni_se_resources_init() - Initialize resources for a GENI SE device.
* @se: Pointer to the geni_se structure representing the GENI SE device.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 36a68149345c..5f75159c5531 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -64,6 +64,7 @@ struct geni_icc_path {
* @num_clk_levels: Number of valid clock levels in clk_perf_tbl
* @clk_perf_tbl: Table of clock frequency input to serial engine clock
* @icc_paths: Array of ICC paths for SE
+ * @pd_list: Power domain list for managing power domains
* @has_opp: Indicates if OPP is supported
*/
struct geni_se {
@@ -75,6 +76,7 @@ struct geni_se {
unsigned int num_clk_levels;
unsigned long *clk_perf_tbl;
struct geni_icc_path icc_paths[3];
+ struct dev_pm_domain_list *pd_list;
bool has_opp;
};
@@ -546,5 +548,7 @@ int geni_se_resources_activate(struct geni_se *se);
int geni_se_resources_deactivate(struct geni_se *se);
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
+
+int geni_se_domain_attach(struct geni_se *se);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:15 +0530",
"thread_id": "20260202180922.1692428-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
The GENI Serial Engine (SE) drivers (I2C, SPI, and SERIAL) currently
manage performance levels and operating points directly, such as
configuring a specific level or finding and applying an OPP based on
a clock frequency. This results in code duplication across drivers.
Introduce two new helper APIs, geni_se_set_perf_level() and
geni_se_set_perf_opp(), which give the GENI SE drivers a streamlined
way to find and set the OPP for the desired performance level, thereby
eliminating the redundancy.
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
drivers/soc/qcom/qcom-geni-se.c | 50 ++++++++++++++++++++++++++++++++
include/linux/soc/qcom/geni-se.h | 4 +++
2 files changed, 54 insertions(+)
diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
index d80ae6c36582..2241d1487031 100644
--- a/drivers/soc/qcom/qcom-geni-se.c
+++ b/drivers/soc/qcom/qcom-geni-se.c
@@ -282,6 +282,12 @@ struct se_fw_hdr {
#define geni_setbits32(_addr, _v) writel(readl(_addr) | (_v), _addr)
#define geni_clrbits32(_addr, _v) writel(readl(_addr) & ~(_v), _addr)
+enum domain_idx {
+ DOMAIN_IDX_POWER,
+ DOMAIN_IDX_PERF,
+ DOMAIN_IDX_MAX
+};
+
/**
* geni_se_get_qup_hw_version() - Read the QUP wrapper Hardware version
* @se: Pointer to the corresponding serial engine.
@@ -1093,6 +1099,50 @@ int geni_se_resources_activate(struct geni_se *se)
}
EXPORT_SYMBOL_GPL(geni_se_resources_activate);
+/**
+ * geni_se_set_perf_level() - Set performance level for GENI SE.
+ * @se: Pointer to the struct geni_se instance.
+ * @level: The desired performance level.
+ *
+ * Sets the performance level by directly calling dev_pm_opp_set_level
+ * on the performance device associated with the SE.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int geni_se_set_perf_level(struct geni_se *se, unsigned long level)
+{
+ return dev_pm_opp_set_level(se->pd_list->pd_devs[DOMAIN_IDX_PERF], level);
+}
+EXPORT_SYMBOL_GPL(geni_se_set_perf_level);
+
+/**
+ * geni_se_set_perf_opp() - Set performance OPP for GENI SE by frequency.
+ * @se: Pointer to the struct geni_se instance.
+ * @clk_freq: The requested clock frequency.
+ *
+ * Finds the nearest operating performance point (OPP) for the given
+ * clock frequency and applies it to the SE's performance device.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int geni_se_set_perf_opp(struct geni_se *se, unsigned long clk_freq)
+{
+ struct device *perf_dev = se->pd_list->pd_devs[DOMAIN_IDX_PERF];
+ struct dev_pm_opp *opp;
+ int ret;
+
+ opp = dev_pm_opp_find_freq_floor(perf_dev, &clk_freq);
+ if (IS_ERR(opp)) {
+ dev_err(se->dev, "failed to find opp for freq %lu\n", clk_freq);
+ return PTR_ERR(opp);
+ }
+
+ ret = dev_pm_opp_set_opp(perf_dev, opp);
+ dev_pm_opp_put(opp);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(geni_se_set_perf_opp);
+
/**
* geni_se_domain_attach() - Attach power domains to a GENI SE device.
* @se: Pointer to the geni_se structure representing the GENI SE device.
diff --git a/include/linux/soc/qcom/geni-se.h b/include/linux/soc/qcom/geni-se.h
index 5f75159c5531..c5e6ab85df09 100644
--- a/include/linux/soc/qcom/geni-se.h
+++ b/include/linux/soc/qcom/geni-se.h
@@ -550,5 +550,9 @@ int geni_se_resources_deactivate(struct geni_se *se);
int geni_load_se_firmware(struct geni_se *se, enum geni_se_protocol_type protocol);
int geni_se_domain_attach(struct geni_se *se);
+
+int geni_se_set_perf_level(struct geni_se *se, unsigned long level);
+
+int geni_se_set_perf_opp(struct geni_se *se, unsigned long clk_freq);
#endif
#endif
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:16 +0530",
"thread_id": "20260202180922.1692428-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
Add DT bindings for the QUP GENI I2C controller on SA8255p platforms.
The SA8255p platform abstracts resources such as clocks, interconnects
and GPIO pin configuration in firmware; the SCMI power and perf
protocols are used to request resource configurations.
The SA8255p platform does not require the Serial Engine (SE) common
properties, as the SE firmware is loaded and managed by the TrustZone
(TZ) secure environment.
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>
Co-developed-by: Nikunj Kela <quic_nkela@quicinc.com>
Signed-off-by: Nikunj Kela <quic_nkela@quicinc.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v2->v3:
- Added Reviewed-by tag
v1->v2:
Krzysztof:
- Added dma properties in example node
- Removed minItems from power-domains property
- Added in commit text about common property
---
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 +++++++++++++++++++
1 file changed, 64 insertions(+)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
diff --git a/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml b/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
new file mode 100644
index 000000000000..a61e40b5cbc1
--- /dev/null
+++ b/Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
@@ -0,0 +1,64 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/i2c/qcom,sa8255p-geni-i2c.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm SA8255p QUP GENI I2C Controller
+
+maintainers:
+ - Praveen Talari <praveen.talari@oss.qualcomm.com>
+
+properties:
+ compatible:
+ const: qcom,sa8255p-geni-i2c
+
+ reg:
+ maxItems: 1
+
+ dmas:
+ maxItems: 2
+
+ dma-names:
+ items:
+ - const: tx
+ - const: rx
+
+ interrupts:
+ maxItems: 1
+
+ power-domains:
+ maxItems: 2
+
+ power-domain-names:
+ items:
+ - const: power
+ - const: perf
+
+required:
+ - compatible
+ - reg
+ - interrupts
+ - power-domains
+
+allOf:
+ - $ref: /schemas/i2c/i2c-controller.yaml#
+
+unevaluatedProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/interrupt-controller/arm-gic.h>
+ #include <dt-bindings/dma/qcom-gpi.h>
+
+ i2c@a90000 {
+ compatible = "qcom,sa8255p-geni-i2c";
+ reg = <0xa90000 0x4000>;
+ interrupts = <GIC_SPI 357 IRQ_TYPE_LEVEL_HIGH>;
+ dmas = <&gpi_dma0 0 0 QCOM_GPI_I2C>,
+ <&gpi_dma0 1 0 QCOM_GPI_I2C>;
+ dma-names = "tx", "rx";
+ power-domains = <&scmi0_pd 0>, <&scmi0_dvfs 0>;
+ power-domain-names = "power", "perf";
+ };
+...
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:17 +0530",
"thread_id": "20260202180922.1692428-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
Move the serial engine setup into a new geni_i2c_init() function for a
cleaner probe path, and use the runtime PM APIs instead of direct
clock-related calls for better resource management.
This makes the serial engine initialization reusable for features such
as hibernation and deep sleep, where hardware context is lost.
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3->v4:
viken:
- Added Acked-by tag
- Removed extra space before invoke of geni_i2c_init().
v1->v2:
Bjorn:
- Updated commit text.
---
drivers/i2c/busses/i2c-qcom-geni.c | 158 ++++++++++++++---------------
1 file changed, 75 insertions(+), 83 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index ae609bdd2ec4..81ed1596ac9f 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -977,10 +977,77 @@ static int setup_gpi_dma(struct geni_i2c_dev *gi2c)
return ret;
}
+static int geni_i2c_init(struct geni_i2c_dev *gi2c)
+{
+ const struct geni_i2c_desc *desc = NULL;
+ u32 proto, tx_depth;
+ bool fifo_disable;
+ int ret;
+
+ ret = pm_runtime_resume_and_get(gi2c->se.dev);
+ if (ret < 0) {
+ dev_err(gi2c->se.dev, "error turning on device :%d\n", ret);
+ return ret;
+ }
+
+ proto = geni_se_read_proto(&gi2c->se);
+ if (proto == GENI_SE_INVALID_PROTO) {
+ ret = geni_load_se_firmware(&gi2c->se, GENI_SE_I2C);
+ if (ret) {
+ dev_err_probe(gi2c->se.dev, ret, "i2c firmware load failed ret: %d\n", ret);
+ goto err;
+ }
+ } else if (proto != GENI_SE_I2C) {
+ ret = dev_err_probe(gi2c->se.dev, -ENXIO, "Invalid proto %d\n", proto);
+ goto err;
+ }
+
+ desc = device_get_match_data(gi2c->se.dev);
+ if (desc && desc->no_dma_support) {
+ fifo_disable = false;
+ gi2c->no_dma = true;
+ } else {
+ fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE;
+ }
+
+ if (fifo_disable) {
+ /* FIFO is disabled, so we can only use GPI DMA */
+ gi2c->gpi_mode = true;
+ ret = setup_gpi_dma(gi2c);
+ if (ret)
+ goto err;
+
+ dev_dbg(gi2c->se.dev, "Using GPI DMA mode for I2C\n");
+ } else {
+ gi2c->gpi_mode = false;
+ tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se);
+
+ /* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */
+ if (!tx_depth && desc)
+ tx_depth = desc->tx_fifo_depth;
+
+ if (!tx_depth) {
+ ret = dev_err_probe(gi2c->se.dev, -EINVAL,
+ "Invalid TX FIFO depth\n");
+ goto err;
+ }
+
+ gi2c->tx_wm = tx_depth - 1;
+ geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth);
+ geni_se_config_packing(&gi2c->se, BITS_PER_BYTE,
+ PACKING_BYTES_PW, true, true, true);
+
+ dev_dbg(gi2c->se.dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth);
+ }
+
+err:
+ pm_runtime_put(gi2c->se.dev);
+ return ret;
+}
+
static int geni_i2c_probe(struct platform_device *pdev)
{
struct geni_i2c_dev *gi2c;
- u32 proto, tx_depth, fifo_disable;
int ret;
struct device *dev = &pdev->dev;
const struct geni_i2c_desc *desc = NULL;
@@ -1060,102 +1127,27 @@ static int geni_i2c_probe(struct platform_device *pdev)
if (ret)
return ret;
- ret = clk_prepare_enable(gi2c->core_clk);
- if (ret)
- return ret;
-
- ret = geni_se_resources_on(&gi2c->se);
- if (ret) {
- dev_err_probe(dev, ret, "Error turning on resources\n");
- goto err_clk;
- }
- proto = geni_se_read_proto(&gi2c->se);
- if (proto == GENI_SE_INVALID_PROTO) {
- ret = geni_load_se_firmware(&gi2c->se, GENI_SE_I2C);
- if (ret) {
- dev_err_probe(dev, ret, "i2c firmware load failed ret: %d\n", ret);
- goto err_resources;
- }
- } else if (proto != GENI_SE_I2C) {
- ret = dev_err_probe(dev, -ENXIO, "Invalid proto %d\n", proto);
- goto err_resources;
- }
-
- if (desc && desc->no_dma_support) {
- fifo_disable = false;
- gi2c->no_dma = true;
- } else {
- fifo_disable = readl_relaxed(gi2c->se.base + GENI_IF_DISABLE_RO) & FIFO_IF_DISABLE;
- }
-
- if (fifo_disable) {
- /* FIFO is disabled, so we can only use GPI DMA */
- gi2c->gpi_mode = true;
- ret = setup_gpi_dma(gi2c);
- if (ret)
- goto err_resources;
-
- dev_dbg(dev, "Using GPI DMA mode for I2C\n");
- } else {
- gi2c->gpi_mode = false;
- tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se);
-
- /* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */
- if (!tx_depth && desc)
- tx_depth = desc->tx_fifo_depth;
-
- if (!tx_depth) {
- ret = dev_err_probe(dev, -EINVAL,
- "Invalid TX FIFO depth\n");
- goto err_resources;
- }
-
- gi2c->tx_wm = tx_depth - 1;
- geni_se_init(&gi2c->se, gi2c->tx_wm, tx_depth);
- geni_se_config_packing(&gi2c->se, BITS_PER_BYTE,
- PACKING_BYTES_PW, true, true, true);
-
- dev_dbg(dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth);
- }
-
- clk_disable_unprepare(gi2c->core_clk);
- ret = geni_se_resources_off(&gi2c->se);
- if (ret) {
- dev_err_probe(dev, ret, "Error turning off resources\n");
- goto err_dma;
- }
-
- ret = geni_icc_disable(&gi2c->se);
- if (ret)
- goto err_dma;
-
gi2c->suspended = 1;
pm_runtime_set_suspended(gi2c->se.dev);
pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY);
pm_runtime_use_autosuspend(gi2c->se.dev);
pm_runtime_enable(gi2c->se.dev);
+ ret = geni_i2c_init(gi2c);
+ if (ret < 0) {
+ pm_runtime_disable(gi2c->se.dev);
+ return ret;
+ }
+
ret = i2c_add_adapter(&gi2c->adap);
if (ret) {
dev_err_probe(dev, ret, "Error adding i2c adapter\n");
pm_runtime_disable(gi2c->se.dev);
- goto err_dma;
+ return ret;
}
dev_dbg(dev, "Geni-I2C adaptor successfully added\n");
- return ret;
-
-err_resources:
- geni_se_resources_off(&gi2c->se);
-err_clk:
- clk_disable_unprepare(gi2c->core_clk);
-
- return ret;
-
-err_dma:
- release_gpi_dma(gi2c);
-
return ret;
}
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:18 +0530",
"thread_id": "20260202180922.1692428-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
Refactor the resource initialization in geni_i2c_probe() by introducing
a new geni_i2c_resources_init() function that uses the common
geni_se_resources_init() framework and the clock frequency mapping,
making the probe function cleaner.
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3->v4:
- Added Acked-by tag.
v1->v2:
- Updated commit text.
---
drivers/i2c/busses/i2c-qcom-geni.c | 53 ++++++++++++------------------
1 file changed, 21 insertions(+), 32 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 81ed1596ac9f..56eebefda75f 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -1045,6 +1045,23 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
return ret;
}
+static int geni_i2c_resources_init(struct geni_i2c_dev *gi2c)
+{
+ int ret;
+
+ ret = geni_se_resources_init(&gi2c->se);
+ if (ret)
+ return ret;
+
+ ret = geni_i2c_clk_map_idx(gi2c);
+ if (ret)
+ return dev_err_probe(gi2c->se.dev, ret, "Invalid clk frequency %d Hz\n",
+ gi2c->clk_freq_out);
+
+ return geni_icc_set_bw_ab(&gi2c->se, GENI_DEFAULT_BW, GENI_DEFAULT_BW,
+ Bps_to_icc(gi2c->clk_freq_out));
+}
+
static int geni_i2c_probe(struct platform_device *pdev)
{
struct geni_i2c_dev *gi2c;
@@ -1064,16 +1081,6 @@ static int geni_i2c_probe(struct platform_device *pdev)
desc = device_get_match_data(&pdev->dev);
- if (desc && desc->has_core_clk) {
- gi2c->core_clk = devm_clk_get(dev, "core");
- if (IS_ERR(gi2c->core_clk))
- return PTR_ERR(gi2c->core_clk);
- }
-
- gi2c->se.clk = devm_clk_get(dev, "se");
- if (IS_ERR(gi2c->se.clk) && !has_acpi_companion(dev))
- return PTR_ERR(gi2c->se.clk);
-
ret = device_property_read_u32(dev, "clock-frequency",
&gi2c->clk_freq_out);
if (ret) {
@@ -1088,16 +1095,15 @@ static int geni_i2c_probe(struct platform_device *pdev)
if (gi2c->irq < 0)
return gi2c->irq;
- ret = geni_i2c_clk_map_idx(gi2c);
- if (ret)
- return dev_err_probe(dev, ret, "Invalid clk frequency %d Hz\n",
- gi2c->clk_freq_out);
-
gi2c->adap.algo = &geni_i2c_algo;
init_completion(&gi2c->done);
spin_lock_init(&gi2c->lock);
platform_set_drvdata(pdev, gi2c);
+ ret = geni_i2c_resources_init(gi2c);
+ if (ret)
+ return ret;
+
/* Keep interrupts disabled initially to allow for low-power modes */
ret = devm_request_irq(dev, gi2c->irq, geni_i2c_irq, IRQF_NO_AUTOEN,
dev_name(dev), gi2c);
@@ -1110,23 +1116,6 @@ static int geni_i2c_probe(struct platform_device *pdev)
gi2c->adap.dev.of_node = dev->of_node;
strscpy(gi2c->adap.name, "Geni-I2C", sizeof(gi2c->adap.name));
- ret = geni_icc_get(&gi2c->se, desc ? desc->icc_ddr : "qup-memory");
- if (ret)
- return ret;
- /*
- * Set the bus quota for core and cpu to a reasonable value for
- * register access.
- * Set quota for DDR based on bus speed.
- */
- gi2c->se.icc_paths[GENI_TO_CORE].avg_bw = GENI_DEFAULT_BW;
- gi2c->se.icc_paths[CPU_TO_GENI].avg_bw = GENI_DEFAULT_BW;
- if (!desc || desc->icc_ddr)
- gi2c->se.icc_paths[GENI_TO_DDR].avg_bw = Bps_to_icc(gi2c->clk_freq_out);
-
- ret = geni_icc_set_bw(&gi2c->se);
- if (ret)
- return ret;
-
gi2c->suspended = 1;
pm_runtime_set_suspended(gi2c->se.dev);
pm_runtime_set_autosuspend_delay(gi2c->se.dev, I2C_AUTO_SUSPEND_DELAY);
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:19 +0530",
"thread_id": "20260202180922.1692428-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resources operations over SCMI using power
and performance protocols.
The SCMI power protocol enables or disables resources like clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power states(on/off).
The SCMI performance protocol manages I2C frequency, with each
frequency rate represented by a performance level. The driver uses
geni_se_set_perf_opp() API to request the desired frequency rate..
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Praveen Talari (13):
soc: qcom: geni-se: Refactor geni_icc_get() and make qup-memory ICC
path optional
soc: qcom: geni-se: Add geni_icc_set_bw_ab() function
soc: qcom: geni-se: Introduce helper API for resource initialization
soc: qcom: geni-se: Handle core clk in geni_se_clks_off() and
geni_se_clks_on()
soc: qcom: geni-se: Add resources activation/deactivation helpers
soc: qcom: geni-se: Introduce helper API for attaching power domains
soc: qcom: geni-se: Introduce helper APIs for performance control
dt-bindings: i2c: Describe SA8255p
i2c: qcom-geni: Isolate serial engine setup
i2c: qcom-geni: Move resource initialization to separate function
i2c: qcom-geni: Use resources helper APIs in runtime PM functions
i2c: qcom-geni: Store of_device_id data in driver private struct
i2c: qcom-geni: Enable I2C on SA8255p Qualcomm platforms
---
v3->v4
- Added a new patch (4/13) to handle the core clk as part of
  geni_se_clks_off/on().
---
.../bindings/i2c/qcom,sa8255p-geni-i2c.yaml | 64 ++++
drivers/i2c/busses/i2c-qcom-geni.c | 303 +++++++++---------
drivers/soc/qcom/qcom-geni-se.c | 265 +++++++++++++--
include/linux/soc/qcom/geni-se.h | 19 ++
4 files changed, 476 insertions(+), 175 deletions(-)
create mode 100644 Documentation/devicetree/bindings/i2c/qcom,sa8255p-geni-i2c.yaml
base-commit: 193579fe01389bc21aff0051d13f24e8ea95b47d
--
2.34.1
|
To manage GENI serial engine resources during runtime power management,
drivers currently need to call separate functions for ICC, clock, and
SE resource operations in both the suspend and resume paths, resulting in
code duplication across drivers.
The new geni_se_resources_activate() and geni_se_resources_deactivate()
helper APIs address this issue by providing a single call to enable or
disable all resources, thereby eliminating redundancy
across drivers.
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3->v4:
- Added Acked-by tag.
v1->v2:
Bjorn:
- Remove geni_se_resources_state() API.
- Used geni_se_resources_activate() and geni_se_resources_deactivate()
to enable/disable resources.
---
drivers/i2c/busses/i2c-qcom-geni.c | 28 +++++-----------------------
1 file changed, 5 insertions(+), 23 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 56eebefda75f..4ff84bb0fff5 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -1163,18 +1163,15 @@ static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev)
struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
disable_irq(gi2c->irq);
- ret = geni_se_resources_off(&gi2c->se);
+
+ ret = geni_se_resources_deactivate(&gi2c->se);
if (ret) {
enable_irq(gi2c->irq);
return ret;
-
- } else {
- gi2c->suspended = 1;
}
- clk_disable_unprepare(gi2c->core_clk);
-
- return geni_icc_disable(&gi2c->se);
+ gi2c->suspended = 1;
+ return ret;
}
static int __maybe_unused geni_i2c_runtime_resume(struct device *dev)
@@ -1182,28 +1179,13 @@ static int __maybe_unused geni_i2c_runtime_resume(struct device *dev)
int ret;
struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
- ret = geni_icc_enable(&gi2c->se);
+ ret = geni_se_resources_activate(&gi2c->se);
if (ret)
return ret;
- ret = clk_prepare_enable(gi2c->core_clk);
- if (ret)
- goto out_icc_disable;
-
- ret = geni_se_resources_on(&gi2c->se);
- if (ret)
- goto out_clk_disable;
-
enable_irq(gi2c->irq);
gi2c->suspended = 0;
- return 0;
-
-out_clk_disable:
- clk_disable_unprepare(gi2c->core_clk);
-out_icc_disable:
- geni_icc_disable(&gi2c->se);
-
return ret;
}
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:20 +0530",
"thread_id": "20260202180922.1692428-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
To avoid repeatedly fetching and checking platform data across various
functions, store the struct of_device_id data directly in the i2c
private structure. This change enhances code maintainability and reduces
redundancy.
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3->v4
- Added Acked-by tag.
Konrad:
- Removed icc_ddr from platform data struct
---
drivers/i2c/busses/i2c-qcom-geni.c | 30 ++++++++++++++----------------
1 file changed, 14 insertions(+), 16 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 4ff84bb0fff5..8fd62d659c2a 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -77,6 +77,12 @@ enum geni_i2c_err_code {
#define XFER_TIMEOUT HZ
#define RST_TIMEOUT HZ
+struct geni_i2c_desc {
+ bool has_core_clk;
+ bool no_dma_support;
+ unsigned int tx_fifo_depth;
+};
+
#define QCOM_I2C_MIN_NUM_OF_MSGS_MULTI_DESC 2
/**
@@ -122,13 +128,7 @@ struct geni_i2c_dev {
bool is_tx_multi_desc_xfer;
u32 num_msgs;
struct geni_i2c_gpi_multi_desc_xfer i2c_multi_desc_config;
-};
-
-struct geni_i2c_desc {
- bool has_core_clk;
- char *icc_ddr;
- bool no_dma_support;
- unsigned int tx_fifo_depth;
+ const struct geni_i2c_desc *dev_data;
};
struct geni_i2c_err_log {
@@ -979,7 +979,6 @@ static int setup_gpi_dma(struct geni_i2c_dev *gi2c)
static int geni_i2c_init(struct geni_i2c_dev *gi2c)
{
- const struct geni_i2c_desc *desc = NULL;
u32 proto, tx_depth;
bool fifo_disable;
int ret;
@@ -1002,8 +1001,7 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
goto err;
}
- desc = device_get_match_data(gi2c->se.dev);
- if (desc && desc->no_dma_support) {
+ if (gi2c->dev_data->no_dma_support) {
fifo_disable = false;
gi2c->no_dma = true;
} else {
@@ -1023,8 +1021,8 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
tx_depth = geni_se_get_tx_fifo_depth(&gi2c->se);
/* I2C Master Hub Serial Elements doesn't have the HW_PARAM_0 register */
- if (!tx_depth && desc)
- tx_depth = desc->tx_fifo_depth;
+ if (!tx_depth && gi2c->dev_data->has_core_clk)
+ tx_depth = gi2c->dev_data->tx_fifo_depth;
if (!tx_depth) {
ret = dev_err_probe(gi2c->se.dev, -EINVAL,
@@ -1067,7 +1065,6 @@ static int geni_i2c_probe(struct platform_device *pdev)
struct geni_i2c_dev *gi2c;
int ret;
struct device *dev = &pdev->dev;
- const struct geni_i2c_desc *desc = NULL;
gi2c = devm_kzalloc(dev, sizeof(*gi2c), GFP_KERNEL);
if (!gi2c)
@@ -1079,7 +1076,7 @@ static int geni_i2c_probe(struct platform_device *pdev)
if (IS_ERR(gi2c->se.base))
return PTR_ERR(gi2c->se.base);
- desc = device_get_match_data(&pdev->dev);
+ gi2c->dev_data = device_get_match_data(&pdev->dev);
ret = device_property_read_u32(dev, "clock-frequency",
&gi2c->clk_freq_out);
@@ -1218,15 +1215,16 @@ static const struct dev_pm_ops geni_i2c_pm_ops = {
NULL)
};
+static const struct geni_i2c_desc geni_i2c = {};
+
static const struct geni_i2c_desc i2c_master_hub = {
.has_core_clk = true,
- .icc_ddr = NULL,
.no_dma_support = true,
.tx_fifo_depth = 16,
};
static const struct of_device_id geni_i2c_dt_match[] = {
- { .compatible = "qcom,geni-i2c" },
+ { .compatible = "qcom,geni-i2c", .data = &geni_i2c },
{ .compatible = "qcom,geni-i2c-master-hub", .data = &i2c_master_hub },
{}
};
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:21 +0530",
"thread_id": "20260202180922.1692428-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH v4 00/13] Enable I2C on SA8255p Qualcomm platforms
|
|
The Qualcomm automotive SA8255p SoC relies on firmware to configure
platform resources, including clocks, interconnects and TLMM.
The driver requests resource operations over SCMI using the power
and performance protocols.
The SCMI power protocol enables or disables resources such as clocks,
interconnect paths, and TLMM (GPIOs) using runtime PM framework APIs,
such as resume/suspend, to control power on/off.
The SCMI performance protocol manages the I2C frequency, with each
frequency rate represented by a performance level. The driver uses
the geni_se_set_perf_opp() API to request the desired frequency rate.
As part of geni_se_set_perf_opp(), the OPP for the requested frequency
is obtained using dev_pm_opp_find_freq_floor() and the performance
level is set using dev_pm_opp_set_opp().
Acked-by: Viken Dadhaniya <viken.dadhaniya@oss.qualcomm.com>
Signed-off-by: Praveen Talari <praveen.talari@oss.qualcomm.com>
---
v3->v4:
- Added Acked-by tag.
v1->v2:
- Initialized ret to "0" in resume/suspend callbacks.
Bjorn:
- Used separate APIs for the resource enable/disable.
---
drivers/i2c/busses/i2c-qcom-geni.c | 56 ++++++++++++++++++++++--------
1 file changed, 42 insertions(+), 14 deletions(-)
diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c
index 8fd62d659c2a..2ad31e412b96 100644
--- a/drivers/i2c/busses/i2c-qcom-geni.c
+++ b/drivers/i2c/busses/i2c-qcom-geni.c
@@ -81,6 +81,10 @@ struct geni_i2c_desc {
bool has_core_clk;
bool no_dma_support;
unsigned int tx_fifo_depth;
+ int (*resources_init)(struct geni_se *se);
+ int (*set_rate)(struct geni_se *se, unsigned long freq);
+ int (*power_on)(struct geni_se *se);
+ int (*power_off)(struct geni_se *se);
};
#define QCOM_I2C_MIN_NUM_OF_MSGS_MULTI_DESC 2
@@ -203,8 +207,9 @@ static int geni_i2c_clk_map_idx(struct geni_i2c_dev *gi2c)
return -EINVAL;
}
-static void qcom_geni_i2c_conf(struct geni_i2c_dev *gi2c)
+static int qcom_geni_i2c_conf(struct geni_se *se, unsigned long freq)
{
+ struct geni_i2c_dev *gi2c = dev_get_drvdata(se->dev);
const struct geni_i2c_clk_fld *itr = gi2c->clk_fld;
u32 val;
@@ -217,6 +222,7 @@ static void qcom_geni_i2c_conf(struct geni_i2c_dev *gi2c)
val |= itr->t_low_cnt << LOW_COUNTER_SHFT;
val |= itr->t_cycle_cnt;
writel_relaxed(val, gi2c->se.base + SE_I2C_SCL_COUNTERS);
+ return 0;
}
static void geni_i2c_err_misc(struct geni_i2c_dev *gi2c)
@@ -908,7 +914,9 @@ static int geni_i2c_xfer(struct i2c_adapter *adap,
return ret;
}
- qcom_geni_i2c_conf(gi2c);
+ ret = gi2c->dev_data->set_rate(&gi2c->se, gi2c->clk_freq_out);
+ if (ret)
+ return ret;
if (gi2c->gpi_mode)
ret = geni_i2c_gpi_xfer(gi2c, msgs, num);
@@ -1043,8 +1051,9 @@ static int geni_i2c_init(struct geni_i2c_dev *gi2c)
return ret;
}
-static int geni_i2c_resources_init(struct geni_i2c_dev *gi2c)
+static int geni_i2c_resources_init(struct geni_se *se)
{
+ struct geni_i2c_dev *gi2c = dev_get_drvdata(se->dev);
int ret;
ret = geni_se_resources_init(&gi2c->se);
@@ -1097,7 +1106,7 @@ static int geni_i2c_probe(struct platform_device *pdev)
spin_lock_init(&gi2c->lock);
platform_set_drvdata(pdev, gi2c);
- ret = geni_i2c_resources_init(gi2c);
+ ret = gi2c->dev_data->resources_init(&gi2c->se);
if (ret)
return ret;
@@ -1156,15 +1165,17 @@ static void geni_i2c_shutdown(struct platform_device *pdev)
static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev)
{
- int ret;
+ int ret = 0;
struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
disable_irq(gi2c->irq);
- ret = geni_se_resources_deactivate(&gi2c->se);
- if (ret) {
- enable_irq(gi2c->irq);
- return ret;
+ if (gi2c->dev_data->power_off) {
+ ret = gi2c->dev_data->power_off(&gi2c->se);
+ if (ret) {
+ enable_irq(gi2c->irq);
+ return ret;
+ }
}
gi2c->suspended = 1;
@@ -1173,12 +1184,14 @@ static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev)
static int __maybe_unused geni_i2c_runtime_resume(struct device *dev)
{
- int ret;
+ int ret = 0;
struct geni_i2c_dev *gi2c = dev_get_drvdata(dev);
- ret = geni_se_resources_activate(&gi2c->se);
- if (ret)
- return ret;
+ if (gi2c->dev_data->power_on) {
+ ret = gi2c->dev_data->power_on(&gi2c->se);
+ if (ret)
+ return ret;
+ }
enable_irq(gi2c->irq);
gi2c->suspended = 0;
@@ -1215,17 +1228,32 @@ static const struct dev_pm_ops geni_i2c_pm_ops = {
NULL)
};
-static const struct geni_i2c_desc geni_i2c = {};
+static const struct geni_i2c_desc geni_i2c = {
+ .resources_init = geni_i2c_resources_init,
+ .set_rate = qcom_geni_i2c_conf,
+ .power_on = geni_se_resources_activate,
+ .power_off = geni_se_resources_deactivate,
+};
static const struct geni_i2c_desc i2c_master_hub = {
.has_core_clk = true,
.no_dma_support = true,
.tx_fifo_depth = 16,
+ .resources_init = geni_i2c_resources_init,
+ .set_rate = qcom_geni_i2c_conf,
+ .power_on = geni_se_resources_activate,
+ .power_off = geni_se_resources_deactivate,
+};
+
+static const struct geni_i2c_desc sa8255p_geni_i2c = {
+ .resources_init = geni_se_domain_attach,
+ .set_rate = geni_se_set_perf_opp,
};
static const struct of_device_id geni_i2c_dt_match[] = {
{ .compatible = "qcom,geni-i2c", .data = &geni_i2c },
{ .compatible = "qcom,geni-i2c-master-hub", .data = &i2c_master_hub },
+ { .compatible = "qcom,sa8255p-geni-i2c", .data = &sa8255p_geni_i2c },
{}
};
MODULE_DEVICE_TABLE(of, geni_i2c_dt_match);
--
2.34.1
|
{
"author": "Praveen Talari <praveen.talari@oss.qualcomm.com>",
"date": "Mon, 2 Feb 2026 23:39:22 +0530",
"thread_id": "20260202180922.1692428-1-praveen.talari@oss.qualcomm.com.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
The MSHV driver deposits kernel-allocated pages to the hypervisor during
runtime and never withdraws them. This creates a fundamental incompatibility
with KEXEC, as these deposited pages remain unavailable to the new kernel
loaded via KEXEC, leading to potential system crashes when that kernel
accesses hypervisor-deposited pages.
Make MSHV mutually exclusive with KEXEC until proper page lifecycle
management is implemented.
Signed-off-by: Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>
---
drivers/hv/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
index 7937ac0cbd0f..cfd4501db0fa 100644
--- a/drivers/hv/Kconfig
+++ b/drivers/hv/Kconfig
@@ -74,6 +74,7 @@ config MSHV_ROOT
# e.g. When withdrawing memory, the hypervisor gives back 4k pages in
# no particular order, making it impossible to reassemble larger pages
depends on PAGE_SIZE_4KB
+ depends on !KEXEC
select EVENTFD
select VIRT_XFER_TO_GUEST_WORK
select HMM_MIRROR
|
On 1/23/2026 2:20 PM, Stanislav Kinsburskii wrote:
Reviewed-by: Nuno Das Neves <nunodasneves@linux.microsoft.com>
|
{
"author": "Nuno Das Neves <nunodasneves@linux.microsoft.com>",
"date": "Fri, 23 Jan 2026 16:09:49 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
|
On 1/23/26 14:20, Stanislav Kinsburskii wrote:
Will this affect CRASH kexec? I see a few CONFIG_CRASH_DUMP references in
kexec.c, implying that crash dump might be involved. Or did you test kdump
and it was fine?
Thanks,
-Mukesh
|
{
"author": "Mukesh R <mrathor@linux.microsoft.com>",
"date": "Fri, 23 Jan 2026 16:16:33 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
|
On Fri, Jan 23, 2026 at 04:16:33PM -0800, Mukesh R wrote:
Yes, it will. Crash kexec depends on normal kexec functionality, so it
will be affected as well.
Thanks,
Stanislav
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Sun, 25 Jan 2026 14:39:26 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
|
On Fri, Jan 23, 2026 at 10:20:53PM +0000, Stanislav Kinsburskii wrote:
Someone might want to stop all guest VMs and do a kexec. Which is valid
and would work without any issue for L1VH.
Also, I don't think it is reasonable at all that someone needs to
disable basic kernel functionality such as kexec in order to use our
driver.
Thanks,
Anirudh.
|
{
"author": "Anirudh Rayabharam <anirudh@anirudhrb.com>",
"date": "Tue, 27 Jan 2026 00:19:24 +0530",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
|
On 1/25/26 14:39, Stanislav Kinsburskii wrote:
So I'm not sure I understand the reason for this patch. We can just block
kexec if there are any VMs running, right? Doing this would mean any
further development would be without a very important and major feature,
right?
|
{
"author": "Mukesh R <mrathor@linux.microsoft.com>",
"date": "Mon, 26 Jan 2026 12:20:09 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
|
On Mon, Jan 26, 2026 at 12:20:09PM -0800, Mukesh R wrote:
This is an option. But until it's implemented and merged, a user of the mshv
driver gets into a situation where kexec is broken in a non-obvious way.
The system may crash at any time after kexec, depending on whether the
new kernel touches the pages deposited to the hypervisor or not. This is a
bad user experience.
Therefore it should be explicitly forbidden, as it's essentially not
supported yet.
Thanks,
Stanislav
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Mon, 26 Jan 2026 12:43:58 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
|
On Tue, Jan 27, 2026 at 12:19:24AM +0530, Anirudh Rayabharam wrote:
No, it won't work, and hypervisor-deposited pages won't be withdrawn.
Also, kernel consistency must not depend on userspace behavior.
It's a temporary measure until proper page lifecycle management is
supported in the driver.
Mutual exclusion of the driver and kexec is a given and thus should be
explicitly stated in the Kconfig.
Thanks,
Stanislav
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Mon, 26 Jan 2026 12:46:44 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
|
On 1/26/26 12:43, Stanislav Kinsburskii wrote:
I understand that. But with this we cannot collect a core dump and debug any
crashes. I was thinking there would be a quick way to prohibit kexec
for update via a notifier or some other quick hack. Did you already
explore that and didn't find anything, hence this?
Thanks,
-Mukesh
|
{
"author": "Mukesh R <mrathor@linux.microsoft.com>",
"date": "Mon, 26 Jan 2026 15:07:18 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
|
On Mon, Jan 26, 2026 at 03:07:18PM -0800, Mukesh R wrote:
This quick hack you mention isn't quick in the upstream kernel, as there
is no hook to interrupt the kexec process except the live update one.
I sent an RFC for that one, but given today's conversation it
won't be accepted as is.
Making mshv mutually exclusive with kexec is the only viable option for
now given time constraints.
It is intended to be replaced with proper page lifecycle management in
the future.
Thanks,
Stanislav
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Mon, 26 Jan 2026 16:21:43 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
|
On 1/26/26 16:21, Stanislav Kinsburskii wrote:
That's the one we want to interrupt and block, right? Crash kexec
is ok and should be allowed. We can document that we don't support kexec
for update for now.
Are you talking about this?
"mshv: Add kexec safety for deposited pages"
Yeah, that could take a long time, and imo we cannot just disable KEXEC
completely. What we want is to just block kexec for updates from some
mshv file for now; we can print during boot that kexec for updates is
not supported on mshv. Hope that makes sense.
Thanks,
-Mukesh
|
{
"author": "Mukesh R <mrathor@linux.microsoft.com>",
"date": "Mon, 26 Jan 2026 17:39:49 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
|
On Mon, Jan 26, 2026 at 05:39:49PM -0800, Mukesh R wrote:
Yes.
The trade-off here is between disabling kexec support and having the
kernel crash after kexec in a non-obvious way. This affects both regular
kexec and crash kexec.
It’s a pity we can’t apply a quick hack to disable only regular kexec.
However, since crash kexec would hit the same issues, until we have a
proper state transition for deposited pages, the best workaround for now
is to reset the hypervisor state on every kexec, which needs design,
work, and testing.
Disabling kexec is the only consistent way to handle this in the
upstream kernel at the moment.
Thanks, Stanislav
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Tue, 27 Jan 2026 09:47:01 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On 1/27/26 09:47, Stanislav Kinsburskii wrote:
crash kexec on bare metal is not affected, hence disabling that
doesn't make sense, as we then can't debug crashes on bare metal.
Let me think and explore a bit, and if I come up with something, I'll
send a patch here. If nothing, then we can do this as last resort.
Thanks,
-Mukesh
|
{
"author": "Mukesh R <mrathor@linux.microsoft.com>",
"date": "Tue, 27 Jan 2026 11:56:02 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
From: Mukesh R <mrathor@linux.microsoft.com> Sent: Tuesday, January 27, 2026 11:56 AM
Maybe you've already looked at this, but there's a sysctl parameter
kernel.kexec_load_limit_reboot that prevents loading a kexec
kernel for reboot if the value is zero. Separately, there is
kernel.kexec_load_limit_panic that controls whether a kexec
kernel can be loaded for kdump purposes.
kernel.kexec_load_limit_reboot defaults to -1, which allows a kexec
kernel for reboot to be loaded an unlimited number of times. But the value
can be set to zero with this kernel boot line parameter:
sysctl.kernel.kexec_load_limit_reboot=0
Alternatively, the mshv driver initialization could add code along
the lines of process_sysctl_arg() to open
/proc/sys/kernel/kexec_load_limit_reboot and write a value of zero.
Then there's no dependency on setting the kernel boot line.
The downside to either method is that after Linux in the root partition
is up-and-running, it is possible to change the sysctl to a non-zero value,
and then load a kexec kernel for reboot. So this approach isn't absolute
protection against doing a kexec for reboot. But it makes it harder, and
until there's a mechanism to reclaim the deposited pages, it might be
a viable compromise to allow kdump to still be used.
Just a thought ....
Michael
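For illustration, the sysctl knobs Michael mentions could be pinned at boot with a drop-in file; the file path and comments below are hypothetical, while the two sysctl names are the real kernel parameters:

```
# /etc/sysctl.d/99-mshv-kexec.conf (hypothetical drop-in)
# Disallow loading a kexec kernel for reboot:
kernel.kexec_load_limit_reboot = 0
# Leave kdump (panic) kexec loads unlimited so crash dumps still work:
kernel.kexec_load_limit_panic = -1
```

As noted above, this is not absolute protection: a privileged user can still raise the reboot limit at runtime.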
|
{
"author": "Michael Kelley <mhklinux@outlook.com>",
"date": "Wed, 28 Jan 2026 15:53:04 +0000",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On Mon, Jan 26, 2026 at 12:46:44PM -0800, Stanislav Kinsburskii wrote:
All pages that were deposited in the context of a guest partition (i.e.
with the guest partition ID), would be withdrawn when you kill the VMs,
right? What other deposited pages would be left?
Thanks,
Anirudh.
|
{
"author": "Anirudh Rayabharam <anirudh@anirudhrb.com>",
"date": "Wed, 28 Jan 2026 16:16:31 +0000",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On Tue, Jan 27, 2026 at 11:56:02AM -0800, Mukesh R wrote:
Bare metal support is not currently relevant, as it is not available.
This is the upstream kernel, and this driver will be accessible to
third-party customers beginning with kernel 6.19 for running their
kernels in Azure L1VH, so consistency is required.
Thanks,
Stanislav
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Wed, 28 Jan 2026 15:08:30 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On Wed, Jan 28, 2026 at 04:16:31PM +0000, Anirudh Rayabharam wrote:
The driver deposits two types of pages: one for the guests (withdrawn
upon guest shutdown) and the other for the host itself (never
withdrawn).
See hv_call_create_partition, for example: it deposits pages for the
host partition.
Thanks,
Stanislav
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Wed, 28 Jan 2026 15:11:14 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On 1/28/26 07:53, Michael Kelley wrote:
Mmm...eee...weelll... I think I see a much easier way to do this by
just hijacking __kexec_lock. I will resume my normal work tmrw/Fri,
so let me test it out. If it works, I'll send a patch Monday.
Thanks,
-Mukesh
|
{
"author": "Mukesh R <mrathor@linux.microsoft.com>",
"date": "Thu, 29 Jan 2026 18:52:59 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On 1/28/26 15:08, Stanislav Kinsburskii wrote:
Well, without crashdump support, customers will not be running anything
anywhere.
Thanks,
-Mukesh
|
{
"author": "Mukesh R <mrathor@linux.microsoft.com>",
"date": "Thu, 29 Jan 2026 18:59:31 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On Wed, Jan 28, 2026 at 03:11:14PM -0800, Stanislav Kinsburskii wrote:
Hmm.. I see. Is it not possible to reclaim this memory in module_exit?
Also, can't we forcefully kill all running partitions in module_exit and
then reclaim memory? Would this help with kernel consistency
irrespective of userspace behavior?
Thanks,
Anirudh.
|
{
"author": "Anirudh Rayabharam <anirudh@anirudhrb.com>",
"date": "Fri, 30 Jan 2026 17:11:12 +0000",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On Thu, Jan 29, 2026 at 06:59:31PM -0800, Mukesh R wrote:
This is my concern too. I don't think customers will be particularly
happy that kexec doesn't work with our driver.
Thanks,
Anirudh
|
{
"author": "Anirudh Rayabharam <anirudh@anirudhrb.com>",
"date": "Fri, 30 Jan 2026 17:17:52 +0000",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On Fri, Jan 30, 2026 at 05:17:52PM +0000, Anirudh Rayabharam wrote:
I wasn’t clear earlier, so let me restate it. Today, kexec is not
supported in L1VH. This is a bug we have not fixed yet. Disabling kexec
is not a long-term solution. But it is better to disable it explicitly
than to have kernel crashes after kexec.
This does not mean the bug should not be fixed. But the upstream kernel
has its own policies and merge windows. For kernel 6.19, it is better to
have a clear kexec error than random crashes after kexec.
Thanks,
Stanislav
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Fri, 30 Jan 2026 10:41:39 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On Fri, Jan 30, 2026 at 05:11:12PM +0000, Anirudh Rayabharam wrote:
It would, but this is sloppy and cannot be a long-term solution.
It is also not reliable. We have no hook to prevent kexec. So if we fail
to kill the guest or reclaim the memory for any reason, the new kernel
may still crash.
There are two long-term solutions:
1. Add a way to prevent kexec when there is shared state between the hypervisor and the kernel.
2. Hand the shared kernel state over to the new kernel.
I sent a series for the first one. The second one is not ready yet.
Anything else is neither robust nor reliable, so I don’t think it makes
sense to pursue it.
Thanks,
Stanislav
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Fri, 30 Jan 2026 10:46:45 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On 1/30/26 10:41, Stanislav Kinsburskii wrote:
I don't think there is disagreement on this. The undesired part is turning
off KEXEC config completely.
Thanks,
-Mukesh
|
{
"author": "Mukesh R <mrathor@linux.microsoft.com>",
"date": "Fri, 30 Jan 2026 11:47:48 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On Fri, Jan 30, 2026 at 10:46:45AM -0800, Stanislav Kinsburskii wrote:
Actually guests won't be running by the time we reach our module_exit
function during a kexec. Userspace processes would've been killed by
then.
Also, why is this sloppy? Isn't this what module_exit should be
doing anyway? If someone unloads our module we should be trying to
clean everything up (including killing guests) and reclaim memory.
In any case, we can BUG() out if we fail to reclaim the memory. That would
stop the kexec.
This is a better solution than disabling KEXEC outright: our
driver makes the best possible effort to make kexec work.
I honestly think we should focus efforts on making kexec work rather
than finding ways to prevent it.
Thanks,
Anirudh
|
{
"author": "Anirudh Rayabharam <anirudh@anirudhrb.com>",
"date": "Fri, 30 Jan 2026 20:32:45 +0000",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On Fri, Jan 30, 2026 at 11:47:48AM -0800, Mukesh R wrote:
There is no disagreement on this either. If you have a better solution
that can be implemented and merged before next kernel merge window,
please propose it. Otherwise, this patch will remain as is for now.
Thanks,
Stanislav
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Mon, 2 Feb 2026 08:43:37 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On 1/24/2026 3:50 AM, Stanislav Kinsburskii wrote:
I have not gone through entire conversation that has happened already on
this, but if you send a next version for this, please change commit msg
and subject to include MSHV_ROOT instead of MSHV, to avoid confusion.
Regards,
Naman
|
{
"author": "Naman Jain <namjain@linux.microsoft.com>",
"date": "Mon, 2 Feb 2026 22:26:10 +0530",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On Fri, Jan 30, 2026 at 08:32:45PM +0000, Anirudh Rayabharam wrote:
No, they will not: "kexec -e" doesn't kill user processes.
We must not rely on the OS to do a graceful shutdown before doing
kexec.
Kexec does not unload modules, but it doesn't really matter even if it
would.
There are other means to plug into the reboot flow, but neither of them
is robust or reliable.
By killing the whole system? This is not a good user experience and I
don't see how can this be justified.
How is an unreliable feature leading to potential system crashes better
than disabling kexec outright?
It's the complete opposite for me: the latter provides limited
but robust functionality, while the former provides unreliable and
unpredictable behavior.
There is no argument about it. But until we have it fixed properly, we
have two options: either disable kexec or stop claiming we have our
driver up and ready for external customers. Given the importance of
this driver for current projects, I believe the better way is to
explicitly limit the functionality instead of postponing the
productization of the driver.
In other words, this is not about our feelings about kexec support: it's
about what we can reliably provide to our customers today.
Thanks,
Stanislav
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Mon, 2 Feb 2026 09:10:00 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH] mshv: Make MSHV mutually exclusive with KEXEC
|
On Fri, Jan 30, 2026 at 05:11:12PM +0000, Anirudh Rayabharam wrote:
First, module_exit is not called during kexec. Second, forcefully
killing all partitions during a kexec reboot would be bulky,
error-prone, and slow. It also does not guarantee robust behavior. Too
many things can go wrong, and we could still end up in the same broken
state.
To reiterate: today, the only safe way to use kexec is to avoid any
shared state between the kernel and the hypervisor. In other words, that
state should never be created, or it must be destroyed before issuing
kexec.
Neither of these states is controlled by our driver, so the only safe
option for now is to disable kexec.
Thanks,
Stanislav
|
{
"author": "Stanislav Kinsburskii <skinsburskii@linux.microsoft.com>",
"date": "Mon, 2 Feb 2026 10:09:38 -0800",
"thread_id": "aYDUOeXIoOV4qtRk@skinsburskii.localdomain.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
From: Arnd Bergmann <arnd@arndb.de>
The vduse_iova_range_v2 and vduse_iotlb_entry_v2 structures are both
defined in a way that adds implicit padding and is incompatible between
i386 and x86_64 userspace because of the different structure alignment
requirements. Building the header with -Wpadded shows these new warnings:
vduse.h:305:1: error: padding struct size to alignment boundary with 4 bytes [-Werror=padded]
vduse.h:374:1: error: padding struct size to alignment boundary with 4 bytes [-Werror=padded]
Change the amount of padding in these two structures to align them to
64 bit words and avoid those problems. Since the v1 vduse_iotlb_entry
already has an inconsistent size, do not attempt to reuse the structure
but rather list the members individually, with a fixed amount of
padding.
Fixes: 079212f6877e ("vduse: add vq group asid support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
drivers/vdpa/vdpa_user/vduse_dev.c | 40 +++++++++++-------------------
include/uapi/linux/vduse.h | 9 +++++--
2 files changed, 21 insertions(+), 28 deletions(-)
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 73d1d517dc6c..405d59610f76 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -1301,7 +1301,7 @@ static int vduse_dev_iotlb_entry(struct vduse_dev *dev,
int r = -EINVAL;
struct vhost_iotlb_map *map;
- if (entry->v1.start > entry->v1.last || entry->asid >= dev->nas)
+ if (entry->start > entry->last || entry->asid >= dev->nas)
return -EINVAL;
asid = array_index_nospec(entry->asid, dev->nas);
@@ -1312,18 +1312,18 @@ static int vduse_dev_iotlb_entry(struct vduse_dev *dev,
spin_lock(&dev->as[asid].domain->iotlb_lock);
map = vhost_iotlb_itree_first(dev->as[asid].domain->iotlb,
- entry->v1.start, entry->v1.last);
+ entry->start, entry->last);
if (map) {
if (f) {
const struct vdpa_map_file *map_file;
map_file = (struct vdpa_map_file *)map->opaque;
- entry->v1.offset = map_file->offset;
+ entry->offset = map_file->offset;
*f = get_file(map_file->file);
}
- entry->v1.start = map->start;
- entry->v1.last = map->last;
- entry->v1.perm = map->perm;
+ entry->start = map->start;
+ entry->last = map->last;
+ entry->perm = map->perm;
if (capability) {
*capability = 0;
@@ -1363,14 +1363,8 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
break;
ret = -EFAULT;
- if (cmd == VDUSE_IOTLB_GET_FD2) {
- if (copy_from_user(&entry, argp, sizeof(entry)))
- break;
- } else {
- if (copy_from_user(&entry.v1, argp,
- sizeof(entry.v1)))
- break;
- }
+ if (copy_from_user(&entry, argp, _IOC_SIZE(cmd)))
+ break;
ret = -EINVAL;
if (!is_mem_zero((const char *)entry.reserved,
@@ -1385,19 +1379,13 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
if (!f)
break;
- if (cmd == VDUSE_IOTLB_GET_FD2)
- ret = copy_to_user(argp, &entry,
- sizeof(entry));
- else
- ret = copy_to_user(argp, &entry.v1,
- sizeof(entry.v1));
-
+ ret = copy_to_user(argp, &entry, _IOC_SIZE(cmd));
if (ret) {
ret = -EFAULT;
fput(f);
break;
}
- ret = receive_fd(f, NULL, perm_to_file_flags(entry.v1.perm));
+ ret = receive_fd(f, NULL, perm_to_file_flags(entry.perm));
fput(f);
break;
}
@@ -1603,16 +1591,16 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
} else if (info.asid >= dev->nas)
break;
- entry.v1.start = info.start;
- entry.v1.last = info.last;
+ entry.start = info.start;
+ entry.last = info.last;
entry.asid = info.asid;
ret = vduse_dev_iotlb_entry(dev, &entry, NULL,
&info.capability);
if (ret < 0)
break;
- info.start = entry.v1.start;
- info.last = entry.v1.last;
+ info.start = entry.start;
+ info.last = entry.last;
info.asid = entry.asid;
ret = -EFAULT;
diff --git a/include/uapi/linux/vduse.h b/include/uapi/linux/vduse.h
index faae7718bd2e..deca8c7b9178 100644
--- a/include/uapi/linux/vduse.h
+++ b/include/uapi/linux/vduse.h
@@ -299,9 +299,13 @@ struct vduse_iova_info {
* Structure used by VDUSE_IOTLB_GET_FD2 ioctl to find an overlapped IOVA region.
*/
struct vduse_iotlb_entry_v2 {
- struct vduse_iotlb_entry v1;
+ __u64 offset;
+ __u64 start;
+ __u64 last;
+ __u8 perm;
+ __u8 padding[7];
__u32 asid;
- __u32 reserved[12];
+ __u32 reserved[11];
};
/*
@@ -371,6 +375,7 @@ struct vduse_iova_range_v2 {
__u64 start;
__u64 last;
__u32 asid;
+ __u32 padding;
};
/**
--
2.39.5
|
From: Arnd Bergmann <arnd@arndb.de>
These two ioctls are incompatible on 32-bit x86 userspace, because
the data structures are shorter than they are on 64-bit.
Add compat handling to the regular ioctl handler to just handle
them the same way and ignore the extra padding. This could be
done in a separate .compat_ioctl handler, but the main one already
handles two versions of VDUSE_IOTLB_GET_FD, so adding a third one
fits in rather well.
Fixes: ad146355bfad ("vduse: Support querying information of IOVA regions")
Fixes: c8a6153b6c59 ("vduse: Introduce VDUSE - vDPA Device in Userspace")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
drivers/vdpa/vdpa_user/vduse_dev.c | 43 +++++++++++++++++++++++++++---
1 file changed, 40 insertions(+), 3 deletions(-)
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 405d59610f76..39cbff2f379d 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -1341,6 +1341,37 @@ static int vduse_dev_iotlb_entry(struct vduse_dev *dev,
return r;
}
+#if defined(CONFIG_X86_64) && defined(CONFIG_COMPAT)
+/*
+ * i386 has different alignment constraints than x86_64,
+ * so there are only 3 bytes of padding instead of 7.
+ */
+struct compat_vduse_iotlb_entry {
+ compat_u64 offset;
+ compat_u64 start;
+ compat_u64 last;
+ __u8 perm;
+ __u8 padding[__alignof__(compat_u64) - 1];
+};
+#define COMPAT_VDUSE_IOTLB_GET_FD _IOWR(VDUSE_BASE, 0x10, struct compat_vduse_iotlb_entry)
+
+struct compat_vduse_vq_info {
+ __u32 index;
+ __u32 num;
+ compat_u64 desc_addr;
+ compat_u64 driver_addr;
+ compat_u64 device_addr;
+ union {
+ struct vduse_vq_state_split split;
+ struct vduse_vq_state_packed packed;
+ };
+ __u8 ready;
+ __u8 padding[__alignof__(compat_u64) - 1];
+} __uapi_arch_align;
+#define COMPAT_VDUSE_VQ_GET_INFO _IOWR(VDUSE_BASE, 0x15, struct compat_vduse_vq_info)
+
+#endif
+
static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
@@ -1352,6 +1383,9 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
return -EPERM;
switch (cmd) {
+#if defined(CONFIG_X86_64) && defined(CONFIG_COMPAT)
+ case COMPAT_VDUSE_IOTLB_GET_FD:
+#endif
case VDUSE_IOTLB_GET_FD:
case VDUSE_IOTLB_GET_FD2: {
struct vduse_iotlb_entry_v2 entry = {0};
@@ -1455,13 +1489,16 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
ret = 0;
break;
}
+#if defined(CONFIG_X86_64) && defined(CONFIG_COMPAT)
+ case COMPAT_VDUSE_VQ_GET_INFO:
+#endif
case VDUSE_VQ_GET_INFO: {
- struct vduse_vq_info vq_info;
+ struct vduse_vq_info vq_info = {};
struct vduse_virtqueue *vq;
u32 index;
ret = -EFAULT;
- if (copy_from_user(&vq_info, argp, sizeof(vq_info)))
+ if (copy_from_user(&vq_info, argp, _IOC_SIZE(cmd)))
break;
ret = -EINVAL;
@@ -1491,7 +1528,7 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
vq_info.ready = vq->ready;
ret = -EFAULT;
- if (copy_to_user(argp, &vq_info, sizeof(vq_info)))
+ if (copy_to_user(argp, &vq_info, _IOC_SIZE(cmd)))
break;
ret = 0;
--
2.39.5
|
{
"author": "Arnd Bergmann <arnd@kernel.org>",
"date": "Mon, 2 Feb 2026 10:59:32 +0100",
"thread_id": "20260202095940.1358613-1-arnd@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
[quoted patch elided; identical to the first copy of "vduse: avoid adding implicit padding" above]
|
On Mon, Feb 2, 2026 at 11:06 AM Arnd Bergmann <arnd@kernel.org> wrote:
s/indiviudally/individually/ if v2
That's something I didn't take into account, thanks!
I did not know about _IOC_SIZE and I like how it reduces the complexity, thanks!
As a proposal, maybe we can use MIN(_IOC_SIZE(cmd), sizeof(entry))? Not
sure if it is too much boilerplate for nothing, as the compiler should
generate identical code and the uapi ioctl part should never change.
But it seems to me that future changes to the code are better
protected with the MIN.
I'm ok with not including MIN() either way.
|
{
"author": "Eugenio Perez Martin <eperezma@redhat.com>",
"date": "Mon, 2 Feb 2026 12:28:26 +0100",
"thread_id": "20260202095940.1358613-1-arnd@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
[quoted patch elided; identical to the first copy of "vduse: avoid adding implicit padding" above]
|
On Mon, Feb 2, 2026 at 11:07 AM Arnd Bergmann <arnd@kernel.org> wrote:
I'm just learning about the COMPAT_ stuff, but does this mean the
userland app needs to call a different ioctl depending on whether it is
compiled for 32 bits or 64 bits? I guess that is not the case, but how
is that handled?
|
{
"author": "Eugenio Perez Martin <eperezma@redhat.com>",
"date": "Mon, 2 Feb 2026 12:34:48 +0100",
"thread_id": "20260202095940.1358613-1-arnd@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
[quoted patch elided; identical to the first copy of "vduse: avoid adding implicit padding" above]
|
On Mon, Feb 2, 2026 at 12:28 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
(I hit "Send" too early).
We could make this padding[3] so that reserved keeps being [12]. This way
the struct members keep the same offsets between the commits. Not
super important, as there should not be many users of this right
now; we're just introducing it.
|
{
"author": "Eugenio Perez Martin <eperezma@redhat.com>",
"date": "Mon, 2 Feb 2026 12:50:49 +0100",
"thread_id": "20260202095940.1358613-1-arnd@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
[quoted patch elided; identical to the first copy of "vduse: avoid adding implicit padding" above]
|
On Mon, Feb 2, 2026, at 12:34, Eugenio Perez Martin wrote:
In a definition like
#define VDUSE_IOTLB_GET_FD _IOWR(VDUSE_BASE, 0x10, struct vduse_iotlb_entry)
The resulting integer value encodes sizeof(struct vduse_iotlb_entry)
in some of the bits. Since x86-32 and x86-64 have different
sizes for this particular structure, the command codes are
different for the same macro. The recommendation from
Documentation/driver-api/ioctl.rst is to use structures with
a consistent layout across all architectures to avoid that.
The normal way to handle this once it has gone wrong is to split
out the actual handler into a function that takes the kernel
structure, and a .compat_ioctl() handler that copies the
32-bit structure to the stack in the correct format.
Since the v1 structures here are /almost/ compatible aside from
the padding at the end, my patch here takes a shortcut and does
not add a custom .compat_ioctl handler but instead changes
the native version on x86-64 to deal with both layouts.
This does mean that the kernel driver now also accepts the
64-bit layout coming from compat tasks, and the compat layout
coming from 64-bit tasks.
Nothing in userspace changes, as it still just uses the existing
VDUSE_IOTLB_GET_FD macro, and the kernel continues to handle
the native layout as before.
Arnd
|
{
"author": "\"Arnd Bergmann\" <arnd@arndb.de>",
"date": "Mon, 02 Feb 2026 12:59:03 +0100",
"thread_id": "20260202095940.1358613-1-arnd@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
[quoted patch elided; identical to the first copy of "vduse: avoid adding implicit padding" above]
|
On Mon, Feb 2, 2026, at 12:50, Eugenio Perez Martin wrote:
I think it's more readable without the MIN(), but I don't mind
adding it either.
I think that is too risky, as it would overlay 'asid' on top of
previously uninitialized padding fields coming from userspace
on most architectures. Since there was previously no is_mem_zero()
check for the padding, I don't think it should be reused at all.
Arnd
|
{
"author": "\"Arnd Bergmann\" <arnd@arndb.de>",
"date": "Mon, 02 Feb 2026 13:06:54 +0100",
"thread_id": "20260202095940.1358613-1-arnd@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 1/2] vduse: avoid adding implicit padding
|
From: Arnd Bergmann <arnd@arndb.de>
The vduse_iova_range_v2 and vduse_iotlb_entry_v2 structures are both
defined in a way that adds implicit padding and is incompatible between
i386 and x86_64 userspace because of the different structure alignment
requirements. Building the header with -Wpadded shows these new warnings:
vduse.h:305:1: error: padding struct size to alignment boundary with 4 bytes [-Werror=padded]
vduse.h:374:1: error: padding struct size to alignment boundary with 4 bytes [-Werror=padded]
Change the amount of padding in these two structures to align them to
64 bit words and avoid those problems. Since the v1 vduse_iotlb_entry
already has an inconsistent size, do not attempt to reuse the structure
but rather list the members individually, with a fixed amount of
padding.
Fixes: 079212f6877e ("vduse: add vq group asid support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
drivers/vdpa/vdpa_user/vduse_dev.c | 40 +++++++++++-------------------
include/uapi/linux/vduse.h | 9 +++++--
2 files changed, 21 insertions(+), 28 deletions(-)
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 73d1d517dc6c..405d59610f76 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -1301,7 +1301,7 @@ static int vduse_dev_iotlb_entry(struct vduse_dev *dev,
int r = -EINVAL;
struct vhost_iotlb_map *map;
- if (entry->v1.start > entry->v1.last || entry->asid >= dev->nas)
+ if (entry->start > entry->last || entry->asid >= dev->nas)
return -EINVAL;
asid = array_index_nospec(entry->asid, dev->nas);
@@ -1312,18 +1312,18 @@ static int vduse_dev_iotlb_entry(struct vduse_dev *dev,
spin_lock(&dev->as[asid].domain->iotlb_lock);
map = vhost_iotlb_itree_first(dev->as[asid].domain->iotlb,
- entry->v1.start, entry->v1.last);
+ entry->start, entry->last);
if (map) {
if (f) {
const struct vdpa_map_file *map_file;
map_file = (struct vdpa_map_file *)map->opaque;
- entry->v1.offset = map_file->offset;
+ entry->offset = map_file->offset;
*f = get_file(map_file->file);
}
- entry->v1.start = map->start;
- entry->v1.last = map->last;
- entry->v1.perm = map->perm;
+ entry->start = map->start;
+ entry->last = map->last;
+ entry->perm = map->perm;
if (capability) {
*capability = 0;
@@ -1363,14 +1363,8 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
break;
ret = -EFAULT;
- if (cmd == VDUSE_IOTLB_GET_FD2) {
- if (copy_from_user(&entry, argp, sizeof(entry)))
- break;
- } else {
- if (copy_from_user(&entry.v1, argp,
- sizeof(entry.v1)))
- break;
- }
+ if (copy_from_user(&entry, argp, _IOC_SIZE(cmd)))
+ break;
ret = -EINVAL;
if (!is_mem_zero((const char *)entry.reserved,
@@ -1385,19 +1379,13 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
if (!f)
break;
- if (cmd == VDUSE_IOTLB_GET_FD2)
- ret = copy_to_user(argp, &entry,
- sizeof(entry));
- else
- ret = copy_to_user(argp, &entry.v1,
- sizeof(entry.v1));
-
+ ret = copy_to_user(argp, &entry, _IOC_SIZE(cmd));
if (ret) {
ret = -EFAULT;
fput(f);
break;
}
- ret = receive_fd(f, NULL, perm_to_file_flags(entry.v1.perm));
+ ret = receive_fd(f, NULL, perm_to_file_flags(entry.perm));
fput(f);
break;
}
@@ -1603,16 +1591,16 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
} else if (info.asid >= dev->nas)
break;
- entry.v1.start = info.start;
- entry.v1.last = info.last;
+ entry.start = info.start;
+ entry.last = info.last;
entry.asid = info.asid;
ret = vduse_dev_iotlb_entry(dev, &entry, NULL,
&info.capability);
if (ret < 0)
break;
- info.start = entry.v1.start;
- info.last = entry.v1.last;
+ info.start = entry.start;
+ info.last = entry.last;
info.asid = entry.asid;
ret = -EFAULT;
diff --git a/include/uapi/linux/vduse.h b/include/uapi/linux/vduse.h
index faae7718bd2e..deca8c7b9178 100644
--- a/include/uapi/linux/vduse.h
+++ b/include/uapi/linux/vduse.h
@@ -299,9 +299,13 @@ struct vduse_iova_info {
* Structure used by VDUSE_IOTLB_GET_FD2 ioctl to find an overlapped IOVA region.
*/
struct vduse_iotlb_entry_v2 {
- struct vduse_iotlb_entry v1;
+ __u64 offset;
+ __u64 start;
+ __u64 last;
+ __u8 perm;
+ __u8 padding[7];
__u32 asid;
- __u32 reserved[12];
+ __u32 reserved[11];
};
/*
@@ -371,6 +375,7 @@ struct vduse_iova_range_v2 {
__u64 start;
__u64 last;
__u32 asid;
+ __u32 padding;
};
/**
--
2.39.5
|
On Mon, Feb 02, 2026 at 12:59:03PM +0100, Arnd Bergmann wrote:
I think .compat_ioctl would be cleaner frankly. Just look at
all the ifdefery. And who knows what broken-ness userspace
comes up with with this approach. Better use the standard approach.
|
{
"author": "\"Michael S. Tsirkin\" <mst@redhat.com>",
"date": "Mon, 2 Feb 2026 11:45:13 -0500",
"thread_id": "20260202095940.1358613-1-arnd@kernel.org.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
This series adds READ_ONCE() for existing lockless reads of
jbd2_inode fields in jbd2 and filesystem callbacks used by ext4 and ocfs2.
This is based on Jan's suggestion in the review of the ext4 jinode
publication race fix. [1]
[1]: https://lore.kernel.org/all/4jxwogttddiaoqbstlgou5ox6zs27ngjjz5ukrxafm2z5ijxod@so4eqnykiegj/
Thanks,
Li
Li Chen (3):
jbd2: use READ_ONCE for lockless jinode reads
ext4: use READ_ONCE for lockless jinode reads
ocfs2: use READ_ONCE for lockless jinode reads
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
fs/ocfs2/journal.c | 7 +++++--
5 files changed, 50 insertions(+), 17 deletions(-)
--
2.52.0
|
jbd2_inode fields are updated under journal->j_list_lock, but some
paths read them without holding the lock (e.g. fast commit
helpers and the ordered truncate fast path).
Use READ_ONCE() for these lockless reads to correct the
concurrency assumptions.
Suggested-by: Jan Kara <jack@suse.com>
Signed-off-by: Li Chen <me@linux.beauty>
---
fs/jbd2/commit.c | 39 ++++++++++++++++++++++++++++++++-------
fs/jbd2/transaction.c | 2 +-
2 files changed, 33 insertions(+), 8 deletions(-)
diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
index 7203d2d2624d..3347d75da2f8 100644
--- a/fs/jbd2/commit.c
+++ b/fs/jbd2/commit.c
@@ -180,7 +180,13 @@ static int journal_wait_on_commit_record(journal_t *journal,
/* Send all the data buffers related to an inode */
int jbd2_submit_inode_data(journal_t *journal, struct jbd2_inode *jinode)
{
- if (!jinode || !(jinode->i_flags & JI_WRITE_DATA))
+ unsigned long flags;
+
+ if (!jinode)
+ return 0;
+
+ flags = READ_ONCE(jinode->i_flags);
+ if (!(flags & JI_WRITE_DATA))
return 0;
trace_jbd2_submit_inode_data(jinode->i_vfs_inode);
@@ -191,12 +197,30 @@ EXPORT_SYMBOL(jbd2_submit_inode_data);
int jbd2_wait_inode_data(journal_t *journal, struct jbd2_inode *jinode)
{
- if (!jinode || !(jinode->i_flags & JI_WAIT_DATA) ||
- !jinode->i_vfs_inode || !jinode->i_vfs_inode->i_mapping)
+ struct address_space *mapping;
+ struct inode *inode;
+ unsigned long flags;
+ loff_t start, end;
+
+ if (!jinode)
+ return 0;
+
+ flags = READ_ONCE(jinode->i_flags);
+ if (!(flags & JI_WAIT_DATA))
+ return 0;
+
+ inode = READ_ONCE(jinode->i_vfs_inode);
+ if (!inode)
+ return 0;
+
+ mapping = inode->i_mapping;
+ start = READ_ONCE(jinode->i_dirty_start);
+ end = READ_ONCE(jinode->i_dirty_end);
+
+ if (!mapping)
return 0;
return filemap_fdatawait_range_keep_errors(
- jinode->i_vfs_inode->i_mapping, jinode->i_dirty_start,
- jinode->i_dirty_end);
+ mapping, start, end);
}
EXPORT_SYMBOL(jbd2_wait_inode_data);
@@ -240,10 +264,11 @@ static int journal_submit_data_buffers(journal_t *journal,
int jbd2_journal_finish_inode_data_buffers(struct jbd2_inode *jinode)
{
struct address_space *mapping = jinode->i_vfs_inode->i_mapping;
+ loff_t start = READ_ONCE(jinode->i_dirty_start);
+ loff_t end = READ_ONCE(jinode->i_dirty_end);
return filemap_fdatawait_range_keep_errors(mapping,
- jinode->i_dirty_start,
- jinode->i_dirty_end);
+ start, end);
}
/*
diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index dca4b5d8aaaa..302b2090eea7 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -2739,7 +2739,7 @@ int jbd2_journal_begin_ordered_truncate(journal_t *journal,
int ret = 0;
/* This is a quick check to avoid locking if not necessary */
- if (!jinode->i_transaction)
+ if (!READ_ONCE(jinode->i_transaction))
goto out;
/* Locks are here just to force reading of recent values, it is
* enough that the transaction was not committing before we started
--
2.52.0
|
{
"author": "Li Chen <me@linux.beauty>",
"date": "Fri, 30 Jan 2026 11:12:30 +0800",
"thread_id": "nxltvmkavegi5tedwzb5g4gt5vzyjvsmkmg24sej74q7b5nvfm@o5u6uivv7sm7.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
ext4 journal commit callbacks access jbd2_inode fields such as
i_transaction and i_dirty_start/end without holding journal->j_list_lock.
Use READ_ONCE() for these reads to correct the concurrency assumptions.
Suggested-by: Jan Kara <jack@suse.com>
Signed-off-by: Li Chen <me@linux.beauty>
---
fs/ext4/inode.c | 6 ++++--
fs/ext4/super.c | 13 ++++++++-----
2 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index d99296d7315f..2d451388e080 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3033,11 +3033,13 @@ static int ext4_writepages(struct address_space *mapping,
int ext4_normal_submit_inode_data_buffers(struct jbd2_inode *jinode)
{
+ loff_t dirty_start = READ_ONCE(jinode->i_dirty_start);
+ loff_t dirty_end = READ_ONCE(jinode->i_dirty_end);
struct writeback_control wbc = {
.sync_mode = WB_SYNC_ALL,
.nr_to_write = LONG_MAX,
- .range_start = jinode->i_dirty_start,
- .range_end = jinode->i_dirty_end,
+ .range_start = dirty_start,
+ .range_end = dirty_end,
};
struct mpage_da_data mpd = {
.inode = jinode->i_vfs_inode,
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 5cf6c2b54bbb..acb2bc016fd4 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -521,6 +521,7 @@ static bool ext4_journalled_writepage_needs_redirty(struct jbd2_inode *jinode,
{
struct buffer_head *bh, *head;
struct journal_head *jh;
+ transaction_t *trans = READ_ONCE(jinode->i_transaction);
bh = head = folio_buffers(folio);
do {
@@ -539,7 +540,7 @@ static bool ext4_journalled_writepage_needs_redirty(struct jbd2_inode *jinode,
*/
jh = bh2jh(bh);
if (buffer_dirty(bh) ||
- (jh && (jh->b_transaction != jinode->i_transaction ||
+ (jh && (jh->b_transaction != trans ||
jh->b_next_transaction)))
return true;
} while ((bh = bh->b_this_page) != head);
@@ -550,12 +551,14 @@ static bool ext4_journalled_writepage_needs_redirty(struct jbd2_inode *jinode,
static int ext4_journalled_submit_inode_data_buffers(struct jbd2_inode *jinode)
{
struct address_space *mapping = jinode->i_vfs_inode->i_mapping;
+ loff_t dirty_start = READ_ONCE(jinode->i_dirty_start);
+ loff_t dirty_end = READ_ONCE(jinode->i_dirty_end);
struct writeback_control wbc = {
- .sync_mode = WB_SYNC_ALL,
+ .sync_mode = WB_SYNC_ALL,
.nr_to_write = LONG_MAX,
- .range_start = jinode->i_dirty_start,
- .range_end = jinode->i_dirty_end,
- };
+ .range_start = dirty_start,
+ .range_end = dirty_end,
+ };
struct folio *folio = NULL;
int error;
--
2.52.0
|
{
"author": "Li Chen <me@linux.beauty>",
"date": "Fri, 30 Jan 2026 11:12:31 +0800",
"thread_id": "nxltvmkavegi5tedwzb5g4gt5vzyjvsmkmg24sej74q7b5nvfm@o5u6uivv7sm7.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
ocfs2 journal commit callback reads jbd2_inode dirty range fields without
holding journal->j_list_lock.
Use READ_ONCE() for these reads to correct the concurrency assumptions.
Suggested-by: Jan Kara <jack@suse.com>
Signed-off-by: Li Chen <me@linux.beauty>
---
fs/ocfs2/journal.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
index 85239807dec7..7032284cdbd6 100644
--- a/fs/ocfs2/journal.c
+++ b/fs/ocfs2/journal.c
@@ -902,8 +902,11 @@ int ocfs2_journal_alloc(struct ocfs2_super *osb)
static int ocfs2_journal_submit_inode_data_buffers(struct jbd2_inode *jinode)
{
- return filemap_fdatawrite_range(jinode->i_vfs_inode->i_mapping,
- jinode->i_dirty_start, jinode->i_dirty_end);
+ struct address_space *mapping = jinode->i_vfs_inode->i_mapping;
+ loff_t dirty_start = READ_ONCE(jinode->i_dirty_start);
+ loff_t dirty_end = READ_ONCE(jinode->i_dirty_end);
+
+ return filemap_fdatawrite_range(mapping, dirty_start, dirty_end);
}
int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty)
--
2.52.0
|
{
"author": "Li Chen <me@linux.beauty>",
"date": "Fri, 30 Jan 2026 11:12:32 +0800",
"thread_id": "nxltvmkavegi5tedwzb5g4gt5vzyjvsmkmg24sej74q7b5nvfm@o5u6uivv7sm7.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
On Fri, Jan 30, 2026 at 11:12:32AM +0800, Li Chen wrote:
I don't think this is the right solution to the problem. If it is,
there needs to be much better argumentation in the commit message.
As I understand it, jbd2_journal_file_inode() initialises jinode,
then adds it to the t_inode_list, then drops the j_list_lock. So the
actual problem we need to address is that there's no memory barrier
between the store to i_dirty_start and the list_add(). Once that's
added, there's no need for a READ_ONCE here.
Or have I misunderstood the problem?
|
{
"author": "Matthew Wilcox <willy@infradead.org>",
"date": "Fri, 30 Jan 2026 05:27:59 +0000",
"thread_id": "nxltvmkavegi5tedwzb5g4gt5vzyjvsmkmg24sej74q7b5nvfm@o5u6uivv7sm7.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
Hi Matthew,
> On Fri, Jan 30, 2026 at 11:12:32AM +0800, Li Chen wrote:
> > ocfs2 journal commit callback reads jbd2_inode dirty range fields without
> > holding journal->j_list_lock.
> >
> > Use READ_ONCE() for these reads to correct the concurrency assumptions.
>
> I don't think this is the right solution to the problem. If it is,
> there needs to be much better argumentation in the commit message.
>
> As I understand it, jbd2_journal_file_inode() initialises jinode,
> then adds it to the t_inode_list, then drops the j_list_lock. So the
> actual problem we need to address is that there's no memory barrier
> between the store to i_dirty_start and the list_add(). Once that's
> added, there's no need for a READ_ONCE here.
>
> Or have I misunderstood the problem?
Thanks for the review.
My understanding of your point is that you're worried about a missing
"publish" ordering in jbd2_journal_file_inode(): we store
jinode->i_dirty_start/end and then list_add() the jinode to
t_inode_list, and a core which observes the list entry might miss the prior
i_dirty_* stores. Is that the issue you had in mind?
If so, for the normal commit path where the list is walked under
journal->j_list_lock (e.g. journal_submit_data_buffers() in
fs/jbd2/commit.c), spin_lock()/spin_unlock() should already provide the
necessary ordering, since both the i_dirty_* updates and the list_add()
happen inside the same critical section.
The ocfs2 case I was aiming at is different: the filesystem callback is
invoked after unlocking journal->j_list_lock and may sleep, so it can't hold
j_list_lock but it still reads jinode->i_dirty_start/end while other
threads update these fields under the lock. Adding a barrier between the
stores and list_add() would not address that concurrent update window.
So the intent of READ_ONCE() in ocfs2 is to take a single snapshot of the
dirty range values from memory (preventing the compiler from reusing a value
kept in a register or folding multiple reads). I'm not trying to claim any additional
memory ordering from this change.
I'll respin and adjust the commit message accordingly. The updated part will
say something along the lines of:
"ocfs2 reads jinode->i_dirty_start/end without journal->j_list_lock
(callback may sleep); these fields are updated under j_list_lock in jbd2.
Use READ_ONCE() so the callback takes a single snapshot via actual loads
from the variable (i.e. don't let the compiler reuse a value kept in a register
or fold multiple reads)."
Does that match your understanding?
Regards,
Li
> > Suggested-by: Jan Kara <jack@suse.com>
> > Signed-off-by: Li Chen <me@linux.beauty>
> > ---
> > fs/ocfs2/journal.c | 7 +++++--
> > 1 file changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
> > index 85239807dec7..7032284cdbd6 100644
> > --- a/fs/ocfs2/journal.c
> > +++ b/fs/ocfs2/journal.c
> > @@ -902,8 +902,11 @@ int ocfs2_journal_alloc(struct ocfs2_super *osb)
> >
> > static int ocfs2_journal_submit_inode_data_buffers(struct jbd2_inode *jinode)
> > {
> > - return filemap_fdatawrite_range(jinode->i_vfs_inode->i_mapping,
> > - jinode->i_dirty_start, jinode->i_dirty_end);
> > + struct address_space *mapping = jinode->i_vfs_inode->i_mapping;
> > + loff_t dirty_start = READ_ONCE(jinode->i_dirty_start);
> > + loff_t dirty_end = READ_ONCE(jinode->i_dirty_end);
> > +
> > + return filemap_fdatawrite_range(mapping, dirty_start, dirty_end);
> > }
> >
> > int ocfs2_journal_init(struct ocfs2_super *osb, int *dirty)
> > --
> > 2.52.0
> >
>
|
{
"author": "Li Chen <me@linux.beauty>",
"date": "Fri, 30 Jan 2026 20:26:40 +0800",
"thread_id": "nxltvmkavegi5tedwzb5g4gt5vzyjvsmkmg24sej74q7b5nvfm@o5u6uivv7sm7.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
On Fri, Jan 30, 2026 at 08:26:40PM +0800, Li Chen wrote:
I think that's the only issue that exists ...
I don't think that's true. I think what you're asserting is that:
int *pi;
int **ppi;
spin_lock(&lock);
*pi = 1;
*ppi = pi;
spin_unlock(&lock);
that the store to *pi must be observed before the store to *ppi, and
that's not true for a reader which doesn't read the value of lock.
The store to *ppi needs a store barrier before it.
I don't think that race exists. If it does exist, the READ_ONCE will
not help (on 32 bit platforms) because it's a 64-bit quantity and 32-bit
platforms do not, in general, have a way to do an atomic 64-bit load
(look at the implementation of i_size_read() for the gyrations we go
through to assure a non-torn read of that value).
I think the prevention of this race occurs at a higher level than
"it's updated under a lock". That is, jbd2_journal_file_inode()
is never called for a jinode which is currently being operated on by
j_submit_inode_data_buffers(). Now, I'm not an expert on the jbd code,
so I may be wrong here.
|
{
"author": "Matthew Wilcox <willy@infradead.org>",
"date": "Fri, 30 Jan 2026 16:36:28 +0000",
"thread_id": "nxltvmkavegi5tedwzb5g4gt5vzyjvsmkmg24sej74q7b5nvfm@o5u6uivv7sm7.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
Hi Matthew,
Thank you very much for the detailed explanation and for your patience.
On Sat, 31 Jan 2026 00:36:28 +0800,
Matthew Wilcox wrote:
Understood.
Yes, agreed; thank you. I was implicitly assuming the reader had taken the same lock
at some point, which is not a valid assumption for a lockless reader.
Thanks. I tried to sanity-check whether that "never called" invariant holds
in practice.
I added a small local-only tracepoint (not for upstream) which fires from
jbd2_journal_file_inode() when it observes JI_COMMIT_RUNNING already set
on the same jinode:
/* fs/jbd2/transaction.c */
if (unlikely(jinode->i_flags & JI_COMMIT_RUNNING))
trace_jbd2_file_inode_commit_running(...);
The trace event prints dev, ino, current tid, jinode flags, and the
i_transaction / i_next_transaction tids.
With an ext4 test (ordered mode) I do see repeated hits. Trace output:
... jbd2_submit_inode_data: dev 7,0 ino 20
... jbd2_file_inode_commit_running: dev 7,0 ino 20 tid 3 op 0x6 i_flags 0x7
j_tid 2 j_next 3 ... comm python3
So it looks like jbd2_journal_file_inode() can run while JI_COMMIT_RUNNING
is set for that inode, i.e. during the window where the commit thread drops
j_list_lock around ->j_submit_inode_data_buffers() / ->j_finish_inode_data_buffers().
Given this, would you prefer the series to move towards something like:
1. taking a snapshot of i_dirty_start/end under j_list_lock in the commit path and passing the snapshot
to the filesystem callback (so callbacks never read jinode->i_dirty_* locklessly), or
2. introducing a real synchronization mechanism for the dirty range itself (seqcount/atomic64/etc)?
3. something else.
I'd be very grateful for guidance on what you consider the most appropriate direction, or for you to point out where I'm wrong.
Thanks again.
Regards,
Li
|
{
"author": "Li Chen <me@linux.beauty>",
"date": "Sun, 01 Feb 2026 12:37:36 +0800",
"thread_id": "nxltvmkavegi5tedwzb5g4gt5vzyjvsmkmg24sej74q7b5nvfm@o5u6uivv7sm7.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
On Fri 30-01-26 11:12:30, Li Chen wrote:
Just one nit below. With that fixed feel free to add:
Reviewed-by: Jan Kara <jack@suse.cz>
i_vfs_inode never changes so READ_ONCE is pointless here.
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
|
{
"author": "Jan Kara <jack@suse.cz>",
"date": "Mon, 2 Feb 2026 17:40:45 +0100",
"thread_id": "nxltvmkavegi5tedwzb5g4gt5vzyjvsmkmg24sej74q7b5nvfm@o5u6uivv7sm7.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
On Fri 30-01-26 11:12:31, Li Chen wrote:
Looks good. Feel free to add:
Reviewed-by: Jan Kara <jack@suse.cz>
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
|
{
"author": "Jan Kara <jack@suse.cz>",
"date": "Mon, 2 Feb 2026 17:41:39 +0100",
"thread_id": "nxltvmkavegi5tedwzb5g4gt5vzyjvsmkmg24sej74q7b5nvfm@o5u6uivv7sm7.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
On Mon 02-02-26 17:40:45, Jan Kara wrote:
One more note: I've realized that for this to work you also need to make
jbd2_journal_file_inode() use WRITE_ONCE() when updating i_dirty_start,
i_dirty_end and i_flags.
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
|
{
"author": "Jan Kara <jack@suse.cz>",
"date": "Mon, 2 Feb 2026 17:52:30 +0100",
"thread_id": "nxltvmkavegi5tedwzb5g4gt5vzyjvsmkmg24sej74q7b5nvfm@o5u6uivv7sm7.mbox.gz"
}
|
lkml
|
[PATCH 0/3] jbd2/ext4/ocfs2: READ_ONCE for lockless jinode reads
|
On Fri 30-01-26 16:36:28, Matthew Wilcox wrote:
Well, the above reasonably accurately describes the code making jinode
visible. The reader code is like:
spin_lock(&lock);
pi = *ppi;
spin_unlock(&lock);
work with pi
so it is guaranteed to see pi properly initialized. The problem is that
"work with pi" can race with other code updating the content of pi which is
what this patch is trying to deal with.
Sadly the race does exist - journal_submit_data_buffers() on the committing
transaction can run in parallel with jbd2_journal_file_inode() in the
running transaction. There's nothing preventing that. The problems arising
out of that are mostly theoretical but they do exist. In particular you're
correct that on 32-bit platforms this will be racy even with READ_ONCE /
WRITE_ONCE which I didn't realize.
Li, the best way to address this concern would be to modify jbd2_inode to
switch i_dirty_start / i_dirty_end to account in PAGE_SIZE units instead of
bytes and be of type pgoff_t. jbd2_journal_file_inode() just needs to round
the passed ranges properly...
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
|
{
"author": "Jan Kara <jack@suse.cz>",
"date": "Mon, 2 Feb 2026 18:17:49 +0100",
"thread_id": "nxltvmkavegi5tedwzb5g4gt5vzyjvsmkmg24sej74q7b5nvfm@o5u6uivv7sm7.mbox.gz"
}
|
lkml
|
[PATCH 0/4] ASoC: ti: davinci-mcasp: Add asynchronous mode support for McASP
|
This series adds asynchronous mode support to the McASP driver, which
enables independent configuration of bit clocks, frame sync, and audio
configuration between tx (playback) and rx (record), achieving
simultaneous playback and record with different audio configurations.
It also adds two cleanup patches to the McASP driver that disambiguate
and simplify the logic, keeping the async enhancement from becoming
too convoluted to review and analyze.
The implementation is based on vendor documentation and patches tested in
both SK-AM62P-LP (sync mode, McASP slave) and AM62D-EVM
(async mode, McASP master, rx & tx has different TDM configs).
Testing verifies async mode functionality while maintaining backward
compatibility with the default sync mode.
Bootlog and Async mode tests on AM62D-EVM: [0]
[0]: https://gist.github.com/SenWang125/f31f9172b186d414695e37c8b9ef127d
Signed-off-by: Sen Wang <sen@ti.com>
Sen Wang (4):
dt-bindings: sound: davinci-mcasp: Add optional properties for asynchronous mode
ASoC: ti: davinci-mcasp: Disambiguate mcasp_is_synchronous function
ASoC: ti: davinci-mcasp: Streamline pdir behavior across rx & tx streams
ASoC: ti: davinci-mcasp: Add asynchronous mode support
.../bindings/sound/davinci-mcasp-audio.yaml | 71 ++-
include/linux/platform_data/davinci_asp.h | 3 +-
sound/soc/ti/davinci-mcasp.c | 510 ++++++++++++++----
sound/soc/ti/davinci-mcasp.h | 10 +
4 files changed, 479 insertions(+), 115 deletions(-)
base-commit: dbf8fe85a16a33d6b6bd01f2bc606fc017771465
--
2.43.0
|
Simplify the mcasp_set_clk_pdir() caller convention in the start/stop
stream functions, so that set_clk_pdir is called unconditionally when a
stream starts and the pins are disabled when the last stream ends.
Functionality-wise, everything remains the same: the previously skipped
calls are now either correctly configured
(when McASP is SND_SOC_DAIFMT_BP_FC - pdir needs to be enabled)
or called with a bitmask of zero (when McASP is SND_SOC_DAIFMT_BC_FC - pdir
stays disabled).
For a brief overview of McASP clock and frame sync configuration, refer to [0].
[0]: TRM Section 12.1.1.4.2 https://www.ti.com/lit/ug/sprujd4a/sprujd4a.pdf
Signed-off-by: Sen Wang <sen@ti.com>
---
sound/soc/ti/davinci-mcasp.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
index aa14fc1c8011..4f8a2ce6ce78 100644
--- a/sound/soc/ti/davinci-mcasp.c
+++ b/sound/soc/ti/davinci-mcasp.c
@@ -235,8 +235,8 @@ static void mcasp_start_rx(struct davinci_mcasp *mcasp)
if (mcasp_is_frame_producer(mcasp) && mcasp_is_synchronous(mcasp)) {
mcasp_set_ctl_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, TXHCLKRST);
mcasp_set_ctl_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, TXCLKRST);
- mcasp_set_clk_pdir(mcasp, true);
}
+ mcasp_set_clk_pdir(mcasp, true);
/* Activate serializer(s) */
mcasp_set_reg(mcasp, DAVINCI_MCASP_RXSTAT_REG, 0xFFFFFFFF);
@@ -311,10 +311,10 @@ static void mcasp_stop_rx(struct davinci_mcasp *mcasp)
* In synchronous mode stop the TX clocks if no other stream is
* running
*/
- if (mcasp_is_frame_producer(mcasp) && mcasp_is_synchronous(mcasp) && !mcasp->streams) {
- mcasp_set_clk_pdir(mcasp, false);
+ if (mcasp_is_frame_producer(mcasp) && mcasp_is_synchronous(mcasp) && !mcasp->streams)
mcasp_set_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, 0);
- }
+ if (!mcasp->streams)
+ mcasp_set_clk_pdir(mcasp, false);
mcasp_set_reg(mcasp, DAVINCI_MCASP_GBLCTLR_REG, 0);
mcasp_set_reg(mcasp, DAVINCI_MCASP_RXSTAT_REG, 0xFFFFFFFF);
@@ -340,7 +340,7 @@ static void mcasp_stop_tx(struct davinci_mcasp *mcasp)
*/
if (mcasp_is_frame_producer(mcasp) && mcasp_is_synchronous(mcasp) && mcasp->streams)
val = TXHCLKRST | TXCLKRST | TXFSRST;
- else
+ if (!mcasp->streams)
mcasp_set_clk_pdir(mcasp, false);
--
2.43.0
|
{
"author": "Sen Wang <sen@ti.com>",
"date": "Thu, 29 Jan 2026 23:10:43 -0600",
"thread_id": "d7ed59c4-2262-4cd5-978f-e9e5c0e8a9a9@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH 0/4] ASoC: ti: davinci-mcasp: Add asynchronous mode support for McASP
|
This series adds asynchronous mode support to the McASP driver, which
enables independent configuration of bitclocks, frame sync, and audio
configurations between tx (playback) and rx (record), achieving
simultaneous playback & record with different audio configurations.
It also adds two cleanup patches to the McASP driver that disambiguate
and simplify the logic, keeping the async enhancement from becoming
too convoluted to review and analyze.
The implementation is based on vendor documentation, and the patches were
tested on both SK-AM62P-LP (sync mode, McASP slave) and AM62D-EVM
(async mode, McASP master, rx & tx have different TDM configs).
Testing verifies async mode functionality while maintaining backward
compatibility with the default sync mode.
Bootlog and Async mode tests on AM62D-EVM: [0]
[0]: https://gist.github.com/SenWang125/f31f9172b186d414695e37c8b9ef127d
Signed-off-by: Sen Wang <sen@ti.com>
Sen Wang (4):
dt-bindings: sound: davinci-mcasp: Add optional properties for asynchronous mode
ASoC: ti: davinci-mcasp: Disambiguate mcasp_is_synchronous function
ASoC: ti: davinci-mcasp: Streamline pdir behavior across rx & tx streams
ASoC: ti: davinci-mcasp: Add asynchronous mode support
.../bindings/sound/davinci-mcasp-audio.yaml | 71 ++-
include/linux/platform_data/davinci_asp.h | 3 +-
sound/soc/ti/davinci-mcasp.c | 510 ++++++++++++++----
sound/soc/ti/davinci-mcasp.h | 10 +
4 files changed, 479 insertions(+), 115 deletions(-)
base-commit: dbf8fe85a16a33d6b6bd01f2bc606fc017771465
--
2.43.0
|
The current mcasp_is_synchronous() function does more than its name
suggests: it also checks whether McASP is a frame producer.
Therefore split the original function into two separate ones and
replace all occurrences with the new equivalent logic, so the functions
can be re-used for checking async/sync status in light of the upcoming
async mode enhancements.
Signed-off-by: Sen Wang <sen@ti.com>
---
sound/soc/ti/davinci-mcasp.c | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
index 621a9d5f9377..aa14fc1c8011 100644
--- a/sound/soc/ti/davinci-mcasp.c
+++ b/sound/soc/ti/davinci-mcasp.c
@@ -179,10 +179,16 @@ static void mcasp_set_ctl_reg(struct davinci_mcasp *mcasp, u32 ctl_reg, u32 val)
static bool mcasp_is_synchronous(struct davinci_mcasp *mcasp)
{
- u32 rxfmctl = mcasp_get_reg(mcasp, DAVINCI_MCASP_RXFMCTL_REG);
u32 aclkxctl = mcasp_get_reg(mcasp, DAVINCI_MCASP_ACLKXCTL_REG);
- return !(aclkxctl & TX_ASYNC) && rxfmctl & AFSRE;
+ return !(aclkxctl & TX_ASYNC);
+}
+
+static bool mcasp_is_frame_producer(struct davinci_mcasp *mcasp)
+{
+ u32 rxfmctl = mcasp_get_reg(mcasp, DAVINCI_MCASP_RXFMCTL_REG);
+
+ return rxfmctl & AFSRE;
}
static inline void mcasp_set_clk_pdir(struct davinci_mcasp *mcasp, bool enable)
@@ -226,7 +232,7 @@ static void mcasp_start_rx(struct davinci_mcasp *mcasp)
* synchronously from the transmit clock and frame sync. We need to make
* sure that the TX signlas are enabled when starting reception.
*/
- if (mcasp_is_synchronous(mcasp)) {
+ if (mcasp_is_frame_producer(mcasp) && mcasp_is_synchronous(mcasp)) {
mcasp_set_ctl_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, TXHCLKRST);
mcasp_set_ctl_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, TXCLKRST);
mcasp_set_clk_pdir(mcasp, true);
@@ -239,7 +245,7 @@ static void mcasp_start_rx(struct davinci_mcasp *mcasp)
mcasp_set_ctl_reg(mcasp, DAVINCI_MCASP_GBLCTLR_REG, RXSMRST);
/* Release Frame Sync generator */
mcasp_set_ctl_reg(mcasp, DAVINCI_MCASP_GBLCTLR_REG, RXFSRST);
- if (mcasp_is_synchronous(mcasp))
+ if (mcasp_is_frame_producer(mcasp) && mcasp_is_synchronous(mcasp))
mcasp_set_ctl_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, TXFSRST);
/* enable receive IRQs */
@@ -305,7 +311,7 @@ static void mcasp_stop_rx(struct davinci_mcasp *mcasp)
* In synchronous mode stop the TX clocks if no other stream is
* running
*/
- if (mcasp_is_synchronous(mcasp) && !mcasp->streams) {
+ if (mcasp_is_frame_producer(mcasp) && mcasp_is_synchronous(mcasp) && !mcasp->streams) {
mcasp_set_clk_pdir(mcasp, false);
mcasp_set_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, 0);
}
@@ -332,7 +338,7 @@ static void mcasp_stop_tx(struct davinci_mcasp *mcasp)
* In synchronous mode keep TX clocks running if the capture stream is
* still running.
*/
- if (mcasp_is_synchronous(mcasp) && mcasp->streams)
+ if (mcasp_is_frame_producer(mcasp) && mcasp_is_synchronous(mcasp) && mcasp->streams)
val = TXHCLKRST | TXCLKRST | TXFSRST;
else
mcasp_set_clk_pdir(mcasp, false);
@@ -1041,7 +1047,8 @@ static int mcasp_i2s_hw_param(struct davinci_mcasp *mcasp, int stream,
* not running already we need to configure the TX slots in
* order to have correct FSX on the bus
*/
- if (mcasp_is_synchronous(mcasp) && !mcasp->channels)
+ if (mcasp_is_frame_producer(mcasp) && mcasp_is_synchronous(mcasp) &&
+ !mcasp->channels)
mcasp_mod_bits(mcasp, DAVINCI_MCASP_TXFMCTL_REG,
FSXMOD(total_slots), FSXMOD(0x1FF));
}
--
2.43.0
|
{
"author": "Sen Wang <sen@ti.com>",
"date": "Thu, 29 Jan 2026 23:10:42 -0600",
"thread_id": "d7ed59c4-2262-4cd5-978f-e9e5c0e8a9a9@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH 0/4] ASoC: ti: davinci-mcasp: Add asynchronous mode support for McASP
|
This series adds asynchronous mode support to the McASP driver, which
enables independent configuration of bitclocks, frame sync, and audio
configurations between tx (playback) and rx (record), achieving
simultaneous playback & record with different audio configurations.
It also adds two cleanup patches to the McASP driver that disambiguate
and simplify the logic, keeping the async enhancement from becoming
too convoluted to review and analyze.
The implementation is based on vendor documentation, and the patches were
tested on both SK-AM62P-LP (sync mode, McASP slave) and AM62D-EVM
(async mode, McASP master, rx & tx have different TDM configs).
Testing verifies async mode functionality while maintaining backward
compatibility with the default sync mode.
Bootlog and Async mode tests on AM62D-EVM: [0]
[0]: https://gist.github.com/SenWang125/f31f9172b186d414695e37c8b9ef127d
Signed-off-by: Sen Wang <sen@ti.com>
Sen Wang (4):
dt-bindings: sound: davinci-mcasp: Add optional properties for asynchronous mode
ASoC: ti: davinci-mcasp: Disambiguate mcasp_is_synchronous function
ASoC: ti: davinci-mcasp: Streamline pdir behavior across rx & tx streams
ASoC: ti: davinci-mcasp: Add asynchronous mode support
.../bindings/sound/davinci-mcasp-audio.yaml | 71 ++-
include/linux/platform_data/davinci_asp.h | 3 +-
sound/soc/ti/davinci-mcasp.c | 510 ++++++++++++++----
sound/soc/ti/davinci-mcasp.h | 10 +
4 files changed, 479 insertions(+), 115 deletions(-)
base-commit: dbf8fe85a16a33d6b6bd01f2bc606fc017771465
--
2.43.0
|
McASP supports independent configuration of the TX & RX clock and frame
sync registers. By default, the driver is configured in synchronous mode,
where the RX clock generator is disabled and the transmit clock signals
are used as bit clock and frame sync. Therefore add the optional
properties needed for asynchronous mode.
Add the ti,async-mode boolean binding to provide a way to decouple the
default behavior and allow independent TX & RX clocking.
Add the tdm-slots-rx uint32 binding to provide an alternative hardware
specifier stating the RX TDM slot count.
The existing tdm-slots property will still dictate the TX TDM slot
count, and the RX count too if tdm-slots-rx isn't given, for backwards
compatibility.
Add auxclk-fs-ratio-rx, which allows specifying the ratio just for RX.
The driver can be supplied with two different ratios
(auxclk-fs-ratio and auxclk-fs-ratio-rx in tandem) to achieve two
different sampling rates for tx & rx.
Signed-off-by: Sen Wang <sen@ti.com>
---
.../bindings/sound/davinci-mcasp-audio.yaml | 71 +++++++++++++++++--
1 file changed, 66 insertions(+), 5 deletions(-)
diff --git a/Documentation/devicetree/bindings/sound/davinci-mcasp-audio.yaml b/Documentation/devicetree/bindings/sound/davinci-mcasp-audio.yaml
index beef193aaaeb..87559d0d079a 100644
--- a/Documentation/devicetree/bindings/sound/davinci-mcasp-audio.yaml
+++ b/Documentation/devicetree/bindings/sound/davinci-mcasp-audio.yaml
@@ -40,11 +40,33 @@ properties:
tdm-slots:
$ref: /schemas/types.yaml#/definitions/uint32
description:
- number of channels over one serializer
- the property is ignored in DIT mode
+ Number of channels over one serializer. This property
+ specifies the TX playback TDM slot count, along with default RX slot count
+ if tdm-slots-rx is not specified.
+ The property is ignored in DIT mode.
minimum: 2
maximum: 32
+ tdm-slots-rx:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description:
+ Number of RX capture channels over one serializer. If specified,
+ allows independent RX TDM slot count separate from TX. Requires
+ ti,async-mode to be enabled for independent TX/RX clock rates.
+ The property is ignored in DIT mode.
+ minimum: 2
+ maximum: 32
+
+ ti,async-mode:
+ description:
+ Specify to allow independent TX & RX clocking,
+ to enable audio playback & record with different sampling rate,
+ and different number of bits per frame.
+ if property is omitted, TX and RX will share same bit clock and frame clock signals,
+ thus RX need to use same bits per frame and sampling rate as TX in synchronous mode.
+ the property is ignored in DIT mode (as DIT is TX-only)
+ type: boolean
+
serial-dir:
description:
A list of serializer configuration
@@ -125,7 +147,21 @@ properties:
auxclk-fs-ratio:
$ref: /schemas/types.yaml#/definitions/uint32
- description: ratio of AUCLK and FS rate if applicable
+ description:
+ Ratio of AUCLK and FS rate if applicable. This property specifies
+ the TX ratio, along with default RX ratio if auxclk-fs-ratio-rx
+ is not specified.
+ When not specified, the inputted system clock frequency via set_sysclk
+ callback by the machine driver is used for divider calculation.
+
+ auxclk-fs-ratio-rx:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description:
+ Ratio of AUCLK and FS rate for RX. If specified, allows
+ for a different RX ratio. Requires ti,async-mode to be
+ enabled when the ratio differs from auxclk-fs-ratio.
+ When not specified, it defaults to the value of auxclk-fs-ratio.
+ The property is ignored in DIT mode.
gpio-controller: true
@@ -170,14 +206,38 @@ allOf:
- $ref: dai-common.yaml#
- if:
properties:
- opmode:
+ op-mode:
enum:
- 0
-
then:
required:
- tdm-slots
+ - if:
+ properties:
+ op-mode:
+ const: 1
+ then:
+ properties:
+ tdm-slots: false
+ tdm-slots-rx: false
+ ti,async-mode: false
+ auxclk-fs-ratio-rx: false
+
+ - if:
+ required:
+ - tdm-slots-rx
+ then:
+ required:
+ - ti,async-mode
+
+ - if:
+ required:
+ - auxclk-fs-ratio-rx
+ then:
+ required:
+ - ti,async-mode
+
unevaluatedProperties: false
examples:
@@ -190,6 +250,7 @@ examples:
interrupt-names = "tx", "rx";
op-mode = <0>; /* MCASP_IIS_MODE */
tdm-slots = <2>;
+ ti,async-mode;
dmas = <&main_udmap 0xc400>, <&main_udmap 0x4400>;
dma-names = "tx", "rx";
serial-dir = <
--
2.43.0
|
{
"author": "Sen Wang <sen@ti.com>",
"date": "Thu, 29 Jan 2026 23:10:41 -0600",
"thread_id": "d7ed59c4-2262-4cd5-978f-e9e5c0e8a9a9@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH 0/4] ASoC: ti: davinci-mcasp: Add asynchronous mode support for McASP
|
This series adds asynchronous mode support to the McASP driver, which
enables independent configuration of bitclocks, frame sync, and audio
configurations between tx (playback) and rx (record), achieving
simultaneous playback & record with different audio configurations.
It also adds two cleanup patches to the McASP driver that disambiguate
and simplify the logic, keeping the async enhancement from becoming
too convoluted to review and analyze.
The implementation is based on vendor documentation, and the patches were
tested on both SK-AM62P-LP (sync mode, McASP slave) and AM62D-EVM
(async mode, McASP master, rx & tx have different TDM configs).
Testing verifies async mode functionality while maintaining backward
compatibility with the default sync mode.
Bootlog and Async mode tests on AM62D-EVM: [0]
[0]: https://gist.github.com/SenWang125/f31f9172b186d414695e37c8b9ef127d
Signed-off-by: Sen Wang <sen@ti.com>
Sen Wang (4):
dt-bindings: sound: davinci-mcasp: Add optional properties for asynchronous mode
ASoC: ti: davinci-mcasp: Disambiguate mcasp_is_synchronous function
ASoC: ti: davinci-mcasp: Streamline pdir behavior across rx & tx streams
ASoC: ti: davinci-mcasp: Add asynchronous mode support
.../bindings/sound/davinci-mcasp-audio.yaml | 71 ++-
include/linux/platform_data/davinci_asp.h | 3 +-
sound/soc/ti/davinci-mcasp.c | 510 ++++++++++++++----
sound/soc/ti/davinci-mcasp.h | 10 +
4 files changed, 479 insertions(+), 115 deletions(-)
base-commit: dbf8fe85a16a33d6b6bd01f2bc606fc017771465
--
2.43.0
|
McASP has dedicated clock & frame sync registers for both transmit
and receive. Currently McASP driver only supports synchronous behavior and
couples both TX & RX settings.
Add logic that enables asynchronous mode via ti,async-mode property. In
async mode, playback & record can be done simultaneously with different
audio configurations (tdm slots, tdm width, audio bit depth).
Note that the ability to have different tx/rx DSP formats (i2s, dsp_a,
etc.), while possible in hardware, remains a gap, as it requires changes
to the corresponding machine driver interface.
Existing IIS (sync mode) and DIT mode logic remains mostly unchanged.
The exceptions are IIS mode logic that previously assumed sync mode,
which has now been made aware of the distinction, and shared logic
across all modes, which now checks the McASP tx/rx-specific driver
attributes. Those attributes have been populated to match the original
behavior, ensuring no divergence in functionality.
Constraints no longer applicable for async mode are skipped.
Clock selection options have also been added to include rx/tx-only clk_ids,
exposing independent configuration via the machine driver as well.
Note that asynchronous mode is not applicable for McASP in DIT mode,
which is a transmitter-only mode for interfacing with self-clocking formats.
Signed-off-by: Sen Wang <sen@ti.com>
---
include/linux/platform_data/davinci_asp.h | 3 +-
sound/soc/ti/davinci-mcasp.c | 487 +++++++++++++++++-----
sound/soc/ti/davinci-mcasp.h | 10 +
3 files changed, 398 insertions(+), 102 deletions(-)
diff --git a/include/linux/platform_data/davinci_asp.h b/include/linux/platform_data/davinci_asp.h
index b9c8520b4bd3..509c5592aab0 100644
--- a/include/linux/platform_data/davinci_asp.h
+++ b/include/linux/platform_data/davinci_asp.h
@@ -59,7 +59,8 @@ struct davinci_mcasp_pdata {
bool i2s_accurate_sck;
/* McASP specific fields */
- int tdm_slots;
+ int tdm_slots_tx;
+ int tdm_slots_rx;
u8 op_mode;
u8 dismod;
u8 num_serializer;
diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
index 4f8a2ce6ce78..ef7fa23d30bf 100644
--- a/sound/soc/ti/davinci-mcasp.c
+++ b/sound/soc/ti/davinci-mcasp.c
@@ -70,6 +70,7 @@ struct davinci_mcasp_context {
struct davinci_mcasp_ruledata {
struct davinci_mcasp *mcasp;
int serializers;
+ int stream;
};
struct davinci_mcasp {
@@ -87,21 +88,27 @@ struct davinci_mcasp {
bool missing_audio_param;
/* McASP specific data */
- int tdm_slots;
+ int tdm_slots_tx;
+ int tdm_slots_rx;
u32 tdm_mask[2];
- int slot_width;
+ int slot_width_tx;
+ int slot_width_rx;
u8 op_mode;
u8 dismod;
u8 num_serializer;
u8 *serial_dir;
u8 version;
- u8 bclk_div;
+ u8 bclk_div_tx;
+ u8 bclk_div_rx;
int streams;
u32 irq_request[2];
- int sysclk_freq;
+ unsigned int sysclk_freq_tx;
+ unsigned int sysclk_freq_rx;
bool bclk_master;
- u32 auxclk_fs_ratio;
+ bool async_mode;
+ u32 auxclk_fs_ratio_tx;
+ u32 auxclk_fs_ratio_rx;
unsigned long pdir; /* Pin direction bitfield */
@@ -203,6 +210,27 @@ static inline void mcasp_set_clk_pdir(struct davinci_mcasp *mcasp, bool enable)
}
}
+static inline void mcasp_set_clk_pdir_stream(struct davinci_mcasp *mcasp,
+ int stream, bool enable)
+{
+ u32 bit, bit_end;
+
+ if (stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ bit = PIN_BIT_ACLKX;
+ bit_end = PIN_BIT_AFSX + 1;
+ } else {
+ bit = PIN_BIT_ACLKR;
+ bit_end = PIN_BIT_AFSR + 1;
+ }
+
+ for_each_set_bit_from(bit, &mcasp->pdir, bit_end) {
+ if (enable)
+ mcasp_set_bits(mcasp, DAVINCI_MCASP_PDIR_REG, BIT(bit));
+ else
+ mcasp_clr_bits(mcasp, DAVINCI_MCASP_PDIR_REG, BIT(bit));
+ }
+}
+
static inline void mcasp_set_axr_pdir(struct davinci_mcasp *mcasp, bool enable)
{
u32 bit;
@@ -215,6 +243,36 @@ static inline void mcasp_set_axr_pdir(struct davinci_mcasp *mcasp, bool enable)
}
}
+static inline int mcasp_get_tdm_slots(struct davinci_mcasp *mcasp, int stream)
+{
+ return (stream == SNDRV_PCM_STREAM_PLAYBACK) ?
+ mcasp->tdm_slots_tx : mcasp->tdm_slots_rx;
+}
+
+static inline int mcasp_get_slot_width(struct davinci_mcasp *mcasp, int stream)
+{
+ return (stream == SNDRV_PCM_STREAM_PLAYBACK) ?
+ mcasp->slot_width_tx : mcasp->slot_width_rx;
+}
+
+static inline unsigned int mcasp_get_sysclk_freq(struct davinci_mcasp *mcasp, int stream)
+{
+ return (stream == SNDRV_PCM_STREAM_PLAYBACK) ?
+ mcasp->sysclk_freq_tx : mcasp->sysclk_freq_rx;
+}
+
+static inline unsigned int mcasp_get_bclk_div(struct davinci_mcasp *mcasp, int stream)
+{
+ return (stream == SNDRV_PCM_STREAM_PLAYBACK) ?
+ mcasp->bclk_div_tx : mcasp->bclk_div_rx;
+}
+
+static inline unsigned int mcasp_get_auxclk_fs_ratio(struct davinci_mcasp *mcasp, int stream)
+{
+ return (stream == SNDRV_PCM_STREAM_PLAYBACK) ?
+ mcasp->auxclk_fs_ratio_tx : mcasp->auxclk_fs_ratio_rx;
+}
+
static void mcasp_start_rx(struct davinci_mcasp *mcasp)
{
if (mcasp->rxnumevt) { /* enable FIFO */
@@ -230,13 +288,17 @@ static void mcasp_start_rx(struct davinci_mcasp *mcasp)
/*
* When ASYNC == 0 the transmit and receive sections operate
* synchronously from the transmit clock and frame sync. We need to make
- * sure that the TX signlas are enabled when starting reception.
+ * sure that the TX signals are enabled when starting reception.
+ * Else set pin to be output when McASP is the master
*/
if (mcasp_is_frame_producer(mcasp) && mcasp_is_synchronous(mcasp)) {
mcasp_set_ctl_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, TXHCLKRST);
mcasp_set_ctl_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, TXCLKRST);
}
- mcasp_set_clk_pdir(mcasp, true);
+ if (mcasp_is_synchronous(mcasp))
+ mcasp_set_clk_pdir(mcasp, true);
+ else
+ mcasp_set_clk_pdir_stream(mcasp, SNDRV_PCM_STREAM_CAPTURE, true);
/* Activate serializer(s) */
mcasp_set_reg(mcasp, DAVINCI_MCASP_RXSTAT_REG, 0xFFFFFFFF);
@@ -267,7 +329,10 @@ static void mcasp_start_tx(struct davinci_mcasp *mcasp)
/* Start clocks */
mcasp_set_ctl_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, TXHCLKRST);
mcasp_set_ctl_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, TXCLKRST);
- mcasp_set_clk_pdir(mcasp, true);
+ if (mcasp_is_synchronous(mcasp))
+ mcasp_set_clk_pdir(mcasp, true);
+ else
+ mcasp_set_clk_pdir_stream(mcasp, SNDRV_PCM_STREAM_PLAYBACK, true);
/* Activate serializer(s) */
mcasp_set_reg(mcasp, DAVINCI_MCASP_TXSTAT_REG, 0xFFFFFFFF);
@@ -310,11 +375,14 @@ static void mcasp_stop_rx(struct davinci_mcasp *mcasp)
/*
* In synchronous mode stop the TX clocks if no other stream is
* running
+ * Otherwise in async mode only stop RX clocks
*/
if (mcasp_is_frame_producer(mcasp) && mcasp_is_synchronous(mcasp) && !mcasp->streams)
mcasp_set_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, 0);
- if (!mcasp->streams)
+ if (mcasp_is_synchronous(mcasp) && !mcasp->streams)
mcasp_set_clk_pdir(mcasp, false);
+ else if (!mcasp_is_synchronous(mcasp))
+ mcasp_set_clk_pdir_stream(mcasp, SNDRV_PCM_STREAM_CAPTURE, false);
mcasp_set_reg(mcasp, DAVINCI_MCASP_GBLCTLR_REG, 0);
mcasp_set_reg(mcasp, DAVINCI_MCASP_RXSTAT_REG, 0xFFFFFFFF);
@@ -337,11 +405,14 @@ static void mcasp_stop_tx(struct davinci_mcasp *mcasp)
/*
* In synchronous mode keep TX clocks running if the capture stream is
* still running.
+ * Otherwise in async mode only stop TX clocks
*/
if (mcasp_is_frame_producer(mcasp) && mcasp_is_synchronous(mcasp) && mcasp->streams)
val = TXHCLKRST | TXCLKRST | TXFSRST;
- if (!mcasp->streams)
+ if (mcasp_is_synchronous(mcasp) && !mcasp->streams)
mcasp_set_clk_pdir(mcasp, false);
+ else if (!mcasp_is_synchronous(mcasp))
+ mcasp_set_clk_pdir_stream(mcasp, SNDRV_PCM_STREAM_PLAYBACK, false);
mcasp_set_reg(mcasp, DAVINCI_MCASP_GBLCTLX_REG, val);
@@ -353,7 +424,8 @@ static void mcasp_stop_tx(struct davinci_mcasp *mcasp)
mcasp_clr_bits(mcasp, reg, FIFO_ENABLE);
}
- mcasp_set_axr_pdir(mcasp, false);
+ if (!mcasp->streams)
+ mcasp_set_axr_pdir(mcasp, false);
}
static void davinci_mcasp_stop(struct davinci_mcasp *mcasp, int stream)
@@ -625,13 +697,39 @@ static int __davinci_mcasp_set_clkdiv(struct davinci_mcasp *mcasp, int div_id,
AHCLKRDIV(div - 1), AHCLKRDIV_MASK);
break;
+ case MCASP_CLKDIV_AUXCLK_TXONLY: /* MCLK divider for TX only */
+ mcasp_mod_bits(mcasp, DAVINCI_MCASP_AHCLKXCTL_REG,
+ AHCLKXDIV(div - 1), AHCLKXDIV_MASK);
+ break;
+
+ case MCASP_CLKDIV_AUXCLK_RXONLY: /* MCLK divider for RX only */
+ mcasp_mod_bits(mcasp, DAVINCI_MCASP_AHCLKRCTL_REG,
+ AHCLKRDIV(div - 1), AHCLKRDIV_MASK);
+ break;
+
case MCASP_CLKDIV_BCLK: /* BCLK divider */
mcasp_mod_bits(mcasp, DAVINCI_MCASP_ACLKXCTL_REG,
ACLKXDIV(div - 1), ACLKXDIV_MASK);
+ mcasp_mod_bits(mcasp, DAVINCI_MCASP_ACLKRCTL_REG,
+ ACLKRDIV(div - 1), ACLKRDIV_MASK);
+ if (explicit) {
+ mcasp->bclk_div_tx = div;
+ mcasp->bclk_div_rx = div;
+ }
+ break;
+
+ case MCASP_CLKDIV_BCLK_TXONLY: /* BCLK divider for TX only */
+ mcasp_mod_bits(mcasp, DAVINCI_MCASP_ACLKXCTL_REG,
+ ACLKXDIV(div - 1), ACLKXDIV_MASK);
+ if (explicit)
+ mcasp->bclk_div_tx = div;
+ break;
+
+ case MCASP_CLKDIV_BCLK_RXONLY: /* BCLK divider for RX only */
mcasp_mod_bits(mcasp, DAVINCI_MCASP_ACLKRCTL_REG,
ACLKRDIV(div - 1), ACLKRDIV_MASK);
if (explicit)
- mcasp->bclk_div = div;
+ mcasp->bclk_div_rx = div;
break;
case MCASP_CLKDIV_BCLK_FS_RATIO:
@@ -645,11 +743,33 @@ static int __davinci_mcasp_set_clkdiv(struct davinci_mcasp *mcasp, int div_id,
* tdm_slot width by dividing the ratio by the
* number of configured tdm slots.
*/
- mcasp->slot_width = div / mcasp->tdm_slots;
- if (div % mcasp->tdm_slots)
+ mcasp->slot_width_tx = div / mcasp->tdm_slots_tx;
+ if (div % mcasp->tdm_slots_tx)
+ dev_warn(mcasp->dev,
+ "%s(): BCLK/LRCLK %d is not divisible by %d tx tdm slots",
+ __func__, div, mcasp->tdm_slots_tx);
+
+ mcasp->slot_width_rx = div / mcasp->tdm_slots_rx;
+ if (div % mcasp->tdm_slots_rx)
+ dev_warn(mcasp->dev,
+ "%s(): BCLK/LRCLK %d is not divisible by %d rx tdm slots",
+ __func__, div, mcasp->tdm_slots_rx);
+ break;
+
+ case MCASP_CLKDIV_BCLK_FS_RATIO_TXONLY:
+ mcasp->slot_width_tx = div / mcasp->tdm_slots_tx;
+ if (div % mcasp->tdm_slots_tx)
+ dev_warn(mcasp->dev,
+ "%s(): BCLK/LRCLK %d is not divisible by %d tx tdm slots",
+ __func__, div, mcasp->tdm_slots_tx);
+ break;
+
+ case MCASP_CLKDIV_BCLK_FS_RATIO_RXONLY:
+ mcasp->slot_width_rx = div / mcasp->tdm_slots_rx;
+ if (div % mcasp->tdm_slots_rx)
dev_warn(mcasp->dev,
- "%s(): BCLK/LRCLK %d is not divisible by %d tdm slots",
- __func__, div, mcasp->tdm_slots);
+ "%s(): BCLK/LRCLK %d is not divisible by %d rx tdm slots",
+ __func__, div, mcasp->tdm_slots_rx);
break;
default:
@@ -683,6 +803,20 @@ static int davinci_mcasp_set_sysclk(struct snd_soc_dai *dai, int clk_id,
mcasp_clr_bits(mcasp, DAVINCI_MCASP_AHCLKRCTL_REG,
AHCLKRE);
clear_bit(PIN_BIT_AHCLKX, &mcasp->pdir);
+ mcasp->sysclk_freq_tx = freq;
+ mcasp->sysclk_freq_rx = freq;
+ break;
+ case MCASP_CLK_HCLK_AHCLK_TXONLY:
+ mcasp_clr_bits(mcasp, DAVINCI_MCASP_AHCLKXCTL_REG,
+ AHCLKXE);
+ clear_bit(PIN_BIT_AHCLKX, &mcasp->pdir);
+ mcasp->sysclk_freq_tx = freq;
+ break;
+ case MCASP_CLK_HCLK_AHCLK_RXONLY:
+ mcasp_clr_bits(mcasp, DAVINCI_MCASP_AHCLKRCTL_REG,
+ AHCLKRE);
+ clear_bit(PIN_BIT_AHCLKR, &mcasp->pdir);
+ mcasp->sysclk_freq_rx = freq;
break;
case MCASP_CLK_HCLK_AUXCLK:
mcasp_set_bits(mcasp, DAVINCI_MCASP_AHCLKXCTL_REG,
@@ -690,22 +824,56 @@ static int davinci_mcasp_set_sysclk(struct snd_soc_dai *dai, int clk_id,
mcasp_set_bits(mcasp, DAVINCI_MCASP_AHCLKRCTL_REG,
AHCLKRE);
set_bit(PIN_BIT_AHCLKX, &mcasp->pdir);
+ mcasp->sysclk_freq_tx = freq;
+ mcasp->sysclk_freq_rx = freq;
+ break;
+ case MCASP_CLK_HCLK_AUXCLK_TXONLY:
+ mcasp_set_bits(mcasp, DAVINCI_MCASP_AHCLKXCTL_REG,
+ AHCLKXE);
+ set_bit(PIN_BIT_AHCLKX, &mcasp->pdir);
+ mcasp->sysclk_freq_tx = freq;
+ break;
+ case MCASP_CLK_HCLK_AUXCLK_RXONLY:
+ mcasp_set_bits(mcasp, DAVINCI_MCASP_AHCLKRCTL_REG,
+ AHCLKRE);
+ set_bit(PIN_BIT_AHCLKR, &mcasp->pdir);
+ mcasp->sysclk_freq_rx = freq;
break;
default:
dev_err(mcasp->dev, "Invalid clk id: %d\n", clk_id);
goto out;
}
} else {
- /* Select AUXCLK as HCLK */
- mcasp_set_bits(mcasp, DAVINCI_MCASP_AHCLKXCTL_REG, AHCLKXE);
- mcasp_set_bits(mcasp, DAVINCI_MCASP_AHCLKRCTL_REG, AHCLKRE);
- set_bit(PIN_BIT_AHCLKX, &mcasp->pdir);
+ /* McASP is clock master, select AUXCLK as HCLK */
+ switch (clk_id) {
+ case MCASP_CLK_HCLK_AUXCLK_TXONLY:
+ mcasp_set_bits(mcasp, DAVINCI_MCASP_AHCLKXCTL_REG,
+ AHCLKXE);
+ set_bit(PIN_BIT_AHCLKX, &mcasp->pdir);
+ mcasp->sysclk_freq_tx = freq;
+ break;
+ case MCASP_CLK_HCLK_AUXCLK_RXONLY:
+ mcasp_set_bits(mcasp, DAVINCI_MCASP_AHCLKRCTL_REG,
+ AHCLKRE);
+ set_bit(PIN_BIT_AHCLKR, &mcasp->pdir);
+ mcasp->sysclk_freq_rx = freq;
+ break;
+ default:
+ mcasp_set_bits(mcasp, DAVINCI_MCASP_AHCLKXCTL_REG,
+ AHCLKXE);
+ mcasp_set_bits(mcasp, DAVINCI_MCASP_AHCLKRCTL_REG,
+ AHCLKRE);
+ set_bit(PIN_BIT_AHCLKX, &mcasp->pdir);
+ set_bit(PIN_BIT_AHCLKR, &mcasp->pdir);
+ mcasp->sysclk_freq_tx = freq;
+ mcasp->sysclk_freq_rx = freq;
+ break;
+ }
}
/*
* When AHCLK X/R is selected to be output it means that the HCLK is
* the same clock - coming via AUXCLK.
*/
- mcasp->sysclk_freq = freq;
out:
pm_runtime_put(mcasp->dev);
return 0;
@@ -717,9 +885,11 @@ static int davinci_mcasp_ch_constraint(struct davinci_mcasp *mcasp, int stream,
{
struct snd_pcm_hw_constraint_list *cl = &mcasp->chconstr[stream];
unsigned int *list = (unsigned int *) cl->list;
- int slots = mcasp->tdm_slots;
+ int slots;
int i, count = 0;
+ slots = mcasp_get_tdm_slots(mcasp, stream);
+
if (mcasp->tdm_mask[stream])
slots = hweight32(mcasp->tdm_mask[stream]);
@@ -784,27 +954,42 @@ static int davinci_mcasp_set_tdm_slot(struct snd_soc_dai *dai,
return -EINVAL;
}
- mcasp->tdm_slots = slots;
+ if (mcasp->async_mode) {
+ if (tx_mask) {
+ mcasp->tdm_slots_tx = slots;
+ mcasp->slot_width_tx = slot_width;
+ }
+ if (rx_mask) {
+ mcasp->tdm_slots_rx = slots;
+ mcasp->slot_width_rx = slot_width;
+ }
+ } else {
+ mcasp->tdm_slots_tx = slots;
+ mcasp->tdm_slots_rx = slots;
+ mcasp->slot_width_tx = slot_width;
+ mcasp->slot_width_rx = slot_width;
+ }
+
mcasp->tdm_mask[SNDRV_PCM_STREAM_PLAYBACK] = tx_mask;
mcasp->tdm_mask[SNDRV_PCM_STREAM_CAPTURE] = rx_mask;
- mcasp->slot_width = slot_width;
return davinci_mcasp_set_ch_constraints(mcasp);
}
static int davinci_config_channel_size(struct davinci_mcasp *mcasp,
- int sample_width)
+ int sample_width, int stream)
{
u32 fmt;
u32 tx_rotate, rx_rotate, slot_width;
u32 mask = (1ULL << sample_width) - 1;
- if (mcasp->slot_width)
- slot_width = mcasp->slot_width;
- else if (mcasp->max_format_width)
- slot_width = mcasp->max_format_width;
- else
- slot_width = sample_width;
+ slot_width = mcasp_get_slot_width(mcasp, stream);
+ if (!slot_width) {
+ if (mcasp->max_format_width)
+ slot_width = mcasp->max_format_width;
+ else
+ slot_width = sample_width;
+ }
/*
* TX rotation:
* right aligned formats: rotate w/ slot_width
@@ -827,17 +1012,23 @@ static int davinci_config_channel_size(struct davinci_mcasp *mcasp,
fmt = (slot_width >> 1) - 1;
if (mcasp->op_mode != DAVINCI_MCASP_DIT_MODE) {
- mcasp_mod_bits(mcasp, DAVINCI_MCASP_RXFMT_REG, RXSSZ(fmt),
- RXSSZ(0x0F));
- mcasp_mod_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXSSZ(fmt),
- TXSSZ(0x0F));
- mcasp_mod_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXROT(tx_rotate),
- TXROT(7));
- mcasp_mod_bits(mcasp, DAVINCI_MCASP_RXFMT_REG, RXROT(rx_rotate),
- RXROT(7));
- mcasp_set_reg(mcasp, DAVINCI_MCASP_RXMASK_REG, mask);
+ if (!mcasp->async_mode || stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ mcasp_mod_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXSSZ(fmt),
+ TXSSZ(0x0F));
+ mcasp_mod_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXROT(tx_rotate),
+ TXROT(7));
+ mcasp_set_reg(mcasp, DAVINCI_MCASP_TXMASK_REG, mask);
+ }
+ if (!mcasp->async_mode || stream == SNDRV_PCM_STREAM_CAPTURE) {
+ mcasp_mod_bits(mcasp, DAVINCI_MCASP_RXFMT_REG, RXSSZ(fmt),
+ RXSSZ(0x0F));
+ mcasp_mod_bits(mcasp, DAVINCI_MCASP_RXFMT_REG, RXROT(rx_rotate),
+ RXROT(7));
+ mcasp_set_reg(mcasp, DAVINCI_MCASP_RXMASK_REG, mask);
+ }
} else {
/*
+ * DIT mode only use TX serializers
* according to the TRM it should be TXROT=0, this one works:
* 16 bit to 23-8 (TXROT=6, rotate 24 bits)
* 24 bit to 23-0 (TXROT=0, rotate 0 bits)
@@ -850,10 +1041,9 @@ static int davinci_config_channel_size(struct davinci_mcasp *mcasp,
TXROT(7));
mcasp_mod_bits(mcasp, DAVINCI_MCASP_TXFMT_REG, TXSSZ(15),
TXSSZ(0x0F));
+ mcasp_set_reg(mcasp, DAVINCI_MCASP_TXMASK_REG, mask);
}
- mcasp_set_reg(mcasp, DAVINCI_MCASP_TXMASK_REG, mask);
-
return 0;
}
@@ -864,11 +1054,13 @@ static int mcasp_common_hw_param(struct davinci_mcasp *mcasp, int stream,
int i;
u8 tx_ser = 0;
u8 rx_ser = 0;
- u8 slots = mcasp->tdm_slots;
+ int slots;
u8 max_active_serializers, max_rx_serializers, max_tx_serializers;
int active_serializers, numevt;
u32 reg;
+ slots = mcasp_get_tdm_slots(mcasp, stream);
+
/* In DIT mode we only allow maximum of one serializers for now */
if (mcasp->op_mode == DAVINCI_MCASP_DIT_MODE)
max_active_serializers = 1;
@@ -996,7 +1188,7 @@ static int mcasp_i2s_hw_param(struct davinci_mcasp *mcasp, int stream,
u32 mask = 0;
u32 busel = 0;
- total_slots = mcasp->tdm_slots;
+ total_slots = mcasp_get_tdm_slots(mcasp, stream);
/*
* If more than one serializer is needed, then use them with
@@ -1027,7 +1219,10 @@ static int mcasp_i2s_hw_param(struct davinci_mcasp *mcasp, int stream,
mask |= (1 << i);
}
- mcasp_clr_bits(mcasp, DAVINCI_MCASP_ACLKXCTL_REG, TX_ASYNC);
+ if (mcasp->async_mode)
+ mcasp_set_bits(mcasp, DAVINCI_MCASP_ACLKXCTL_REG, TX_ASYNC);
+ else
+ mcasp_clr_bits(mcasp, DAVINCI_MCASP_ACLKXCTL_REG, TX_ASYNC);
if (!mcasp->dat_port)
busel = TXSEL;
@@ -1126,16 +1321,33 @@ static int mcasp_dit_hw_param(struct davinci_mcasp *mcasp,
static int davinci_mcasp_calc_clk_div(struct davinci_mcasp *mcasp,
unsigned int sysclk_freq,
- unsigned int bclk_freq, bool set)
+ unsigned int bclk_freq,
+ int stream,
+ bool set)
{
- u32 reg = mcasp_get_reg(mcasp, DAVINCI_MCASP_AHCLKXCTL_REG);
int div = sysclk_freq / bclk_freq;
int rem = sysclk_freq % bclk_freq;
int error_ppm;
int aux_div = 1;
+ int bclk_div_id, auxclk_div_id;
+ bool auxclk_enabled;
+
+ if (mcasp->async_mode && stream == SNDRV_PCM_STREAM_CAPTURE) {
+ auxclk_enabled = mcasp_get_reg(mcasp, DAVINCI_MCASP_AHCLKRCTL_REG) & AHCLKRE;
+ bclk_div_id = MCASP_CLKDIV_BCLK_RXONLY;
+ auxclk_div_id = MCASP_CLKDIV_AUXCLK_RXONLY;
+ } else if (mcasp->async_mode && stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ auxclk_enabled = mcasp_get_reg(mcasp, DAVINCI_MCASP_AHCLKXCTL_REG) & AHCLKXE;
+ bclk_div_id = MCASP_CLKDIV_BCLK_TXONLY;
+ auxclk_div_id = MCASP_CLKDIV_AUXCLK_TXONLY;
+ } else {
+ auxclk_enabled = mcasp_get_reg(mcasp, DAVINCI_MCASP_AHCLKXCTL_REG) & AHCLKXE;
+ bclk_div_id = MCASP_CLKDIV_BCLK;
+ auxclk_div_id = MCASP_CLKDIV_AUXCLK;
+ }
if (div > (ACLKXDIV_MASK + 1)) {
- if (reg & AHCLKXE) {
+ if (auxclk_enabled) {
aux_div = div / (ACLKXDIV_MASK + 1);
if (div % (ACLKXDIV_MASK + 1))
aux_div++;
@@ -1165,10 +1377,10 @@ static int davinci_mcasp_calc_clk_div(struct davinci_mcasp *mcasp,
dev_info(mcasp->dev, "Sample-rate is off by %d PPM\n",
error_ppm);
- __davinci_mcasp_set_clkdiv(mcasp, MCASP_CLKDIV_BCLK, div, 0);
- if (reg & AHCLKXE)
- __davinci_mcasp_set_clkdiv(mcasp, MCASP_CLKDIV_AUXCLK,
- aux_div, 0);
+ __davinci_mcasp_set_clkdiv(mcasp, bclk_div_id, div, false);
+ if (auxclk_enabled)
+ __davinci_mcasp_set_clkdiv(mcasp, auxclk_div_id,
+ aux_div, false);
}
return error_ppm;
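The divider math above can be sketched as a small userspace model (hypothetical helper and struct names, not the driver's exact code; the real driver additionally rounds to the nearest divider and warns on large error): split an oversized divider across the AUXCLK pre-divider, then report the residual bit-clock error in PPM.

```c
#include <assert.h>

/* Userspace sketch of the divider selection; calc_clk_div() and
 * struct clkdiv_result are illustrative names, not the driver API. */
#define ACLKXDIV_MASK 0x1f	/* 5-bit BCLK divider field */

struct clkdiv_result {
	int aux_div;		/* AUXCLK pre-divider (1 = unused) */
	int bclk_div;		/* BCLK divider from (pre-divided) sysclk */
	long long error_ppm;	/* resulting bit-clock error */
};

static struct clkdiv_result calc_clk_div(unsigned int sysclk,
					 unsigned int bclk_target,
					 int auxclk_enabled)
{
	struct clkdiv_result r = { .aux_div = 1 };
	int div = sysclk / bclk_target;

	/* If the divider overflows the 5-bit field, move part of it into
	 * the AUXCLK pre-divider (only possible when AUXCLK is enabled). */
	if (div > ACLKXDIV_MASK + 1 && auxclk_enabled) {
		r.aux_div = div / (ACLKXDIV_MASK + 1);
		if (div % (ACLKXDIV_MASK + 1))
			r.aux_div++;
		sysclk /= r.aux_div;
		div = sysclk / bclk_target;
	}
	r.bclk_div = div;

	long long actual = (long long)sysclk / div;
	r.error_ppm = (actual - (long long)bclk_target) * 1000000LL
		      / bclk_target;
	return r;
}
```

For a 24.576 MHz sysclk and a 48 kHz, 2-slot, 32-bit frame (bclk target 48000 * 32 * 2 = 3.072 MHz) the divider is exactly 8 with zero error; a much lower bit-clock target forces the AUXCLK pre-divider into play.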
@@ -1219,6 +1431,7 @@ static int davinci_mcasp_hw_params(struct snd_pcm_substream *substream,
int channels = params_channels(params);
int period_size = params_period_size(params);
int ret;
+ unsigned int sysclk_freq = mcasp_get_sysclk_freq(mcasp, substream->stream);
switch (params_format(params)) {
case SNDRV_PCM_FORMAT_U8:
@@ -1259,22 +1472,26 @@ static int davinci_mcasp_hw_params(struct snd_pcm_substream *substream,
* If mcasp is BCLK master, and a BCLK divider was not provided by
* the machine driver, we need to calculate the ratio.
*/
- if (mcasp->bclk_master && mcasp->bclk_div == 0 && mcasp->sysclk_freq) {
- int slots = mcasp->tdm_slots;
+ if (mcasp->bclk_master && mcasp_get_bclk_div(mcasp, substream->stream) == 0 &&
+ sysclk_freq) {
+ int slots, slot_width;
int rate = params_rate(params);
int sbits = params_width(params);
unsigned int bclk_target;
- if (mcasp->slot_width)
- sbits = mcasp->slot_width;
+ slots = mcasp_get_tdm_slots(mcasp, substream->stream);
+
+ slot_width = mcasp_get_slot_width(mcasp, substream->stream);
+ if (slot_width)
+ sbits = slot_width;
if (mcasp->op_mode == DAVINCI_MCASP_IIS_MODE)
bclk_target = rate * sbits * slots;
else
bclk_target = rate * 128;
- davinci_mcasp_calc_clk_div(mcasp, mcasp->sysclk_freq,
- bclk_target, true);
+ davinci_mcasp_calc_clk_div(mcasp, sysclk_freq,
+ bclk_target, substream->stream, true);
}
ret = mcasp_common_hw_param(mcasp, substream->stream,
@@ -1291,9 +1508,10 @@ static int davinci_mcasp_hw_params(struct snd_pcm_substream *substream,
if (ret)
return ret;
- davinci_config_channel_size(mcasp, word_length);
+ davinci_config_channel_size(mcasp, word_length, substream->stream);
- if (mcasp->op_mode == DAVINCI_MCASP_IIS_MODE) {
+ /* Channel constraints are disabled for async mode */
+ if (mcasp->op_mode == DAVINCI_MCASP_IIS_MODE && !mcasp->async_mode) {
mcasp->channels = channels;
if (!mcasp->max_format_width)
mcasp->max_format_width = word_length;
@@ -1337,7 +1555,7 @@ static int davinci_mcasp_hw_rule_slot_width(struct snd_pcm_hw_params *params,
snd_pcm_format_t i;
snd_mask_none(&nfmt);
- slot_width = rd->mcasp->slot_width;
+ slot_width = mcasp_get_slot_width(rd->mcasp, rd->stream);
pcm_for_each_format(i) {
if (snd_mask_test_format(fmt, i)) {
@@ -1387,12 +1605,15 @@ static int davinci_mcasp_hw_rule_rate(struct snd_pcm_hw_params *params,
struct snd_interval *ri =
hw_param_interval(params, SNDRV_PCM_HW_PARAM_RATE);
int sbits = params_width(params);
- int slots = rd->mcasp->tdm_slots;
+ int slots, slot_width;
struct snd_interval range;
int i;
- if (rd->mcasp->slot_width)
- sbits = rd->mcasp->slot_width;
+ slots = mcasp_get_tdm_slots(rd->mcasp, rd->stream);
+
+ slot_width = mcasp_get_slot_width(rd->mcasp, rd->stream);
+ if (slot_width)
+ sbits = slot_width;
snd_interval_any(&range);
range.empty = 1;
@@ -1402,16 +1623,17 @@ static int davinci_mcasp_hw_rule_rate(struct snd_pcm_hw_params *params,
uint bclk_freq = sbits * slots *
davinci_mcasp_dai_rates[i];
unsigned int sysclk_freq;
+ unsigned int ratio;
int ppm;
- if (rd->mcasp->auxclk_fs_ratio)
- sysclk_freq = davinci_mcasp_dai_rates[i] *
- rd->mcasp->auxclk_fs_ratio;
+ ratio = mcasp_get_auxclk_fs_ratio(rd->mcasp, rd->stream);
+ if (ratio)
+ sysclk_freq = davinci_mcasp_dai_rates[i] * ratio;
else
- sysclk_freq = rd->mcasp->sysclk_freq;
+ sysclk_freq = mcasp_get_sysclk_freq(rd->mcasp, rd->stream);
ppm = davinci_mcasp_calc_clk_div(rd->mcasp, sysclk_freq,
- bclk_freq, false);
+ bclk_freq, rd->stream, false);
if (abs(ppm) < DAVINCI_MAX_RATE_ERROR_PPM) {
if (range.empty) {
range.min = davinci_mcasp_dai_rates[i];
@@ -1437,30 +1659,34 @@ static int davinci_mcasp_hw_rule_format(struct snd_pcm_hw_params *params,
struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT);
struct snd_mask nfmt;
int rate = params_rate(params);
- int slots = rd->mcasp->tdm_slots;
+ int slots;
int count = 0;
snd_pcm_format_t i;
+ slots = mcasp_get_tdm_slots(rd->mcasp, rd->stream);
+
snd_mask_none(&nfmt);
pcm_for_each_format(i) {
if (snd_mask_test_format(fmt, i)) {
uint sbits = snd_pcm_format_width(i);
unsigned int sysclk_freq;
- int ppm;
+ unsigned int ratio;
+ int ppm, slot_width;
- if (rd->mcasp->auxclk_fs_ratio)
- sysclk_freq = rate *
- rd->mcasp->auxclk_fs_ratio;
+ ratio = mcasp_get_auxclk_fs_ratio(rd->mcasp, rd->stream);
+ if (ratio)
+ sysclk_freq = rate * ratio;
else
- sysclk_freq = rd->mcasp->sysclk_freq;
+ sysclk_freq = mcasp_get_sysclk_freq(rd->mcasp, rd->stream);
- if (rd->mcasp->slot_width)
- sbits = rd->mcasp->slot_width;
+ slot_width = mcasp_get_slot_width(rd->mcasp, rd->stream);
+ if (slot_width)
+ sbits = slot_width;
ppm = davinci_mcasp_calc_clk_div(rd->mcasp, sysclk_freq,
sbits * slots * rate,
- false);
+ rd->stream, false);
if (abs(ppm) < DAVINCI_MAX_RATE_ERROR_PPM) {
snd_mask_set_format(&nfmt, i);
count++;
@@ -1497,7 +1723,7 @@ static int davinci_mcasp_startup(struct snd_pcm_substream *substream,
&mcasp->ruledata[substream->stream];
u32 max_channels = 0;
int i, dir, ret;
- int tdm_slots = mcasp->tdm_slots;
+ int tdm_slots;
u8 *numevt;
/* Do not allow more then one stream per direction */
@@ -1506,6 +1732,8 @@ static int davinci_mcasp_startup(struct snd_pcm_substream *substream,
mcasp->substreams[substream->stream] = substream;
+ tdm_slots = mcasp_get_tdm_slots(mcasp, substream->stream);
+
if (mcasp->tdm_mask[substream->stream])
tdm_slots = hweight32(mcasp->tdm_mask[substream->stream]);
@@ -1527,6 +1755,7 @@ static int davinci_mcasp_startup(struct snd_pcm_substream *substream,
}
ruledata->serializers = max_channels;
ruledata->mcasp = mcasp;
+ ruledata->stream = substream->stream;
max_channels *= tdm_slots;
/*
* If the already active stream has less channels than the calculated
@@ -1534,9 +1763,13 @@ static int davinci_mcasp_startup(struct snd_pcm_substream *substream,
* is in use we need to use that as a constraint for the second stream.
* Otherwise (first stream or less allowed channels or more than one
* serializer in use) we use the calculated constraint.
+ *
+ * However, in async mode, TX and RX have independent clocks and can
+ * use different configurations, so don't apply the constraint.
*/
if (mcasp->channels && mcasp->channels < max_channels &&
- ruledata->serializers == 1)
+ ruledata->serializers == 1 &&
+ !mcasp->async_mode)
max_channels = mcasp->channels;
/*
* But we can always allow channels upto the amount of
@@ -1553,10 +1786,10 @@ static int davinci_mcasp_startup(struct snd_pcm_substream *substream,
0, SNDRV_PCM_HW_PARAM_CHANNELS,
&mcasp->chconstr[substream->stream]);
- if (mcasp->max_format_width) {
+ if (mcasp->max_format_width && !mcasp->async_mode) {
/*
* Only allow formats which require same amount of bits on the
- * bus as the currently running stream
+ * bus as the currently running stream to ensure sync mode
*/
ret = snd_pcm_hw_rule_add(substream->runtime, 0,
SNDRV_PCM_HW_PARAM_FORMAT,
@@ -1565,8 +1798,7 @@ static int davinci_mcasp_startup(struct snd_pcm_substream *substream,
SNDRV_PCM_HW_PARAM_FORMAT, -1);
if (ret)
return ret;
- }
- else if (mcasp->slot_width) {
+ } else if (mcasp_get_slot_width(mcasp, substream->stream)) {
/* Only allow formats require <= slot_width bits on the bus */
ret = snd_pcm_hw_rule_add(substream->runtime, 0,
SNDRV_PCM_HW_PARAM_FORMAT,
@@ -1581,7 +1813,8 @@ static int davinci_mcasp_startup(struct snd_pcm_substream *substream,
* If we rely on implicit BCLK divider setting we should
* set constraints based on what we can provide.
*/
- if (mcasp->bclk_master && mcasp->bclk_div == 0 && mcasp->sysclk_freq) {
+ if (mcasp->bclk_master && mcasp_get_bclk_div(mcasp, substream->stream) == 0 &&
+ mcasp_get_sysclk_freq(mcasp, substream->stream)) {
ret = snd_pcm_hw_rule_add(substream->runtime, 0,
SNDRV_PCM_HW_PARAM_RATE,
davinci_mcasp_hw_rule_rate,
@@ -1758,8 +1991,6 @@ static struct snd_soc_dai_driver davinci_mcasp_dai[] = {
.formats = DAVINCI_MCASP_PCM_FMTS,
},
.ops = &davinci_mcasp_dai_ops,
-
- .symmetric_rate = 1,
},
{
.name = "davinci-mcasp.1",
@@ -1921,18 +2152,33 @@ static int davinci_mcasp_get_config(struct davinci_mcasp *mcasp,
goto out;
}
+ /* Parse TX-specific TDM slot and use it as default for RX */
if (of_property_read_u32(np, "tdm-slots", &val) == 0) {
if (val < 2 || val > 32) {
- dev_err(&pdev->dev, "tdm-slots must be in rage [2-32]\n");
+ dev_err(&pdev->dev, "tdm-slots must be in range [2-32]\n");
return -EINVAL;
}
- pdata->tdm_slots = val;
+ pdata->tdm_slots_tx = val;
+ pdata->tdm_slots_rx = val;
} else if (pdata->op_mode == DAVINCI_MCASP_IIS_MODE) {
mcasp->missing_audio_param = true;
goto out;
}
+ /* Parse RX-specific TDM slot count if provided */
+ if (of_property_read_u32(np, "tdm-slots-rx", &val) == 0) {
+ if (val < 2 || val > 32) {
+ dev_err(&pdev->dev, "tdm-slots-rx must be in range [2-32]\n");
+ return -EINVAL;
+ }
+
+ pdata->tdm_slots_rx = val;
+ }
+
+ if (pdata->op_mode != DAVINCI_MCASP_DIT_MODE)
+ mcasp->async_mode = of_property_read_bool(np, "ti,async-mode");
+
of_serial_dir32 = of_get_property(np, "serial-dir", &val);
val /= sizeof(u32);
if (of_serial_dir32) {
@@ -1958,8 +2204,15 @@ static int davinci_mcasp_get_config(struct davinci_mcasp *mcasp,
if (of_property_read_u32(np, "rx-num-evt", &val) == 0)
pdata->rxnumevt = val;
- if (of_property_read_u32(np, "auxclk-fs-ratio", &val) == 0)
- mcasp->auxclk_fs_ratio = val;
+ /* Parse TX-specific auxclk/fs ratio and use it as default for RX */
+ if (of_property_read_u32(np, "auxclk-fs-ratio", &val) == 0) {
+ mcasp->auxclk_fs_ratio_tx = val;
+ mcasp->auxclk_fs_ratio_rx = val;
+ }
+
+ /* Parse RX-specific auxclk/fs ratio if provided */
+ if (of_property_read_u32(np, "auxclk-fs-ratio-rx", &val) == 0)
+ mcasp->auxclk_fs_ratio_rx = val;
if (of_property_read_u32(np, "dismod", &val) == 0) {
if (val == 0 || val == 2 || val == 3) {
@@ -1988,19 +2241,51 @@ static int davinci_mcasp_get_config(struct davinci_mcasp *mcasp,
mcasp->op_mode = pdata->op_mode;
/* sanity check for tdm slots parameter */
if (mcasp->op_mode == DAVINCI_MCASP_IIS_MODE) {
- if (pdata->tdm_slots < 2) {
- dev_warn(&pdev->dev, "invalid tdm slots: %d\n",
- pdata->tdm_slots);
- mcasp->tdm_slots = 2;
- } else if (pdata->tdm_slots > 32) {
- dev_warn(&pdev->dev, "invalid tdm slots: %d\n",
- pdata->tdm_slots);
- mcasp->tdm_slots = 32;
+ if (pdata->tdm_slots_tx < 2) {
+ dev_warn(&pdev->dev, "invalid tdm tx slots: %d\n",
+ pdata->tdm_slots_tx);
+ mcasp->tdm_slots_tx = 2;
+ } else if (pdata->tdm_slots_tx > 32) {
+ dev_warn(&pdev->dev, "invalid tdm tx slots: %d\n",
+ pdata->tdm_slots_tx);
+ mcasp->tdm_slots_tx = 32;
} else {
- mcasp->tdm_slots = pdata->tdm_slots;
+ mcasp->tdm_slots_tx = pdata->tdm_slots_tx;
+ }
+
+ if (pdata->tdm_slots_rx < 2) {
+ dev_warn(&pdev->dev, "invalid tdm rx slots: %d\n",
+ pdata->tdm_slots_rx);
+ mcasp->tdm_slots_rx = 2;
+ } else if (pdata->tdm_slots_rx > 32) {
+ dev_warn(&pdev->dev, "invalid tdm rx slots: %d\n",
+ pdata->tdm_slots_rx);
+ mcasp->tdm_slots_rx = 32;
+ } else {
+ mcasp->tdm_slots_rx = pdata->tdm_slots_rx;
}
} else {
- mcasp->tdm_slots = 32;
+ mcasp->tdm_slots_tx = 32;
+ mcasp->tdm_slots_rx = 32;
+ }
+
+ /* Different TX/RX slot counts require async mode */
+ if (pdata->op_mode != DAVINCI_MCASP_DIT_MODE &&
+ mcasp->tdm_slots_tx != mcasp->tdm_slots_rx && !mcasp->async_mode) {
+ dev_err(&pdev->dev,
+ "Different TX (%d) and RX (%d) TDM slots require ti,async-mode\n",
+ mcasp->tdm_slots_tx, mcasp->tdm_slots_rx);
+ return -EINVAL;
+ }
+
+ /* Different TX/RX auxclk-fs-ratio require async mode */
+ if (pdata->op_mode != DAVINCI_MCASP_DIT_MODE &&
+ mcasp->auxclk_fs_ratio_tx && mcasp->auxclk_fs_ratio_rx &&
+ mcasp->auxclk_fs_ratio_tx != mcasp->auxclk_fs_ratio_rx && !mcasp->async_mode) {
+ dev_err(&pdev->dev,
+ "Different TX (%d) and RX (%d) auxclk-fs-ratio require ti,async-mode\n",
+ mcasp->auxclk_fs_ratio_tx, mcasp->auxclk_fs_ratio_rx);
+ return -EINVAL;
}
mcasp->num_serializer = pdata->num_serializer;
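The per-direction sanity checks above reduce to a clamp plus a consistency rule; a minimal standalone model (hypothetical function names, not driver functions) of that logic:

```c
#include <assert.h>

/* Model of the TDM slot checks: clamp each direction to [2, 32] (the
 * driver also emits a warning when clamping), and reject differing
 * TX/RX slot counts unless async mode is enabled. */
static int clamp_tdm_slots(int slots)
{
	if (slots < 2)
		return 2;
	if (slots > 32)
		return 32;
	return slots;
}

/* Different TX/RX slot counts are only valid in async mode, since sync
 * mode shares one frame sync and bit clock between both directions. */
static int tdm_config_valid(int tx_slots, int rx_slots, int async_mode)
{
	return tx_slots == rx_slots || async_mode;
}
```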
diff --git a/sound/soc/ti/davinci-mcasp.h b/sound/soc/ti/davinci-mcasp.h
index 5de2b8a31061..4eba8c918c5f 100644
--- a/sound/soc/ti/davinci-mcasp.h
+++ b/sound/soc/ti/davinci-mcasp.h
@@ -298,10 +298,20 @@
/* Source of High-frequency transmit/receive clock */
#define MCASP_CLK_HCLK_AHCLK 0 /* AHCLKX/R */
#define MCASP_CLK_HCLK_AUXCLK 1 /* Internal functional clock */
+#define MCASP_CLK_HCLK_AHCLK_TXONLY 2 /* AHCLKX for TX only */
+#define MCASP_CLK_HCLK_AHCLK_RXONLY 3 /* AHCLKR for RX only */
+#define MCASP_CLK_HCLK_AUXCLK_TXONLY 4 /* AUXCLK for TX only */
+#define MCASP_CLK_HCLK_AUXCLK_RXONLY 5 /* AUXCLK for RX only */
/* clock divider IDs */
#define MCASP_CLKDIV_AUXCLK 0 /* HCLK divider from AUXCLK */
#define MCASP_CLKDIV_BCLK 1 /* BCLK divider from HCLK */
#define MCASP_CLKDIV_BCLK_FS_RATIO 2 /* to set BCLK FS ration */
+#define MCASP_CLKDIV_AUXCLK_TXONLY 3 /* AUXCLK divider for TX only */
+#define MCASP_CLKDIV_AUXCLK_RXONLY 4 /* AUXCLK divider for RX only */
+#define MCASP_CLKDIV_BCLK_TXONLY 5 /* BCLK divider for TX only */
+#define MCASP_CLKDIV_BCLK_RXONLY 6 /* BCLK divider for RX only */
+#define MCASP_CLKDIV_BCLK_FS_RATIO_TXONLY 7 /* BCLK/FS ratio for TX only */
+#define MCASP_CLKDIV_BCLK_FS_RATIO_RXONLY 8 /* BCLK/FS ratio for RX only */
#endif /* DAVINCI_MCASP_H */
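The choice between the common and the per-direction divider IDs above (as made in davinci_mcasp_calc_clk_div() in the C file) can be modeled standalone; the macro values are copied from this header, the helper itself is hypothetical.

```c
#include <assert.h>

/* Divider IDs as defined in davinci-mcasp.h above. */
#define MCASP_CLKDIV_AUXCLK		0
#define MCASP_CLKDIV_BCLK		1
#define MCASP_CLKDIV_AUXCLK_TXONLY	3
#define MCASP_CLKDIV_AUXCLK_RXONLY	4
#define MCASP_CLKDIV_BCLK_TXONLY	5
#define MCASP_CLKDIV_BCLK_RXONLY	6

enum stream_dir { STREAM_PLAYBACK, STREAM_CAPTURE };

/* In sync mode both directions share the common dividers; in async mode
 * each direction uses its own pair. pick_div_ids() is illustrative. */
static void pick_div_ids(int async_mode, enum stream_dir stream,
			 int *bclk_id, int *auxclk_id)
{
	if (async_mode && stream == STREAM_CAPTURE) {
		*bclk_id = MCASP_CLKDIV_BCLK_RXONLY;
		*auxclk_id = MCASP_CLKDIV_AUXCLK_RXONLY;
	} else if (async_mode && stream == STREAM_PLAYBACK) {
		*bclk_id = MCASP_CLKDIV_BCLK_TXONLY;
		*auxclk_id = MCASP_CLKDIV_AUXCLK_TXONLY;
	} else {
		*bclk_id = MCASP_CLKDIV_BCLK;
		*auxclk_id = MCASP_CLKDIV_AUXCLK;
	}
}
```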
--
2.43.0
|
{
"author": "Sen Wang <sen@ti.com>",
"date": "Thu, 29 Jan 2026 23:10:44 -0600",
"thread_id": "d7ed59c4-2262-4cd5-978f-e9e5c0e8a9a9@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH 0/4] ASoC: ti: davinci-mcasp: Add asynchronous mode support for McASP
|
This series adds asynchronous mode support to the McASP driver, which
enables independent configuration of bit clocks, frame sync, and audio
parameters between TX (playback) and RX (record), and achieves
simultaneous playback and record using different audio configurations.
It also adds two cleanup patches to the McASP driver that disambiguate
and simplify the logic, keeping the async enhancement from becoming
too convoluted to review and analyze.
The implementation is based on vendor documentation, and the patches were
tested on both SK-AM62P-LP (sync mode, McASP slave) and AM62D-EVM
(async mode, McASP master, RX and TX with different TDM configs).
Testing verifies async mode functionality while maintaining backward
compatibility with the default sync mode.
Bootlog and Async mode tests on AM62D-EVM: [0]
[0]: https://gist.github.com/SenWang125/f31f9172b186d414695e37c8b9ef127d
Signed-off-by: Sen Wang <sen@ti.com>
Sen Wang (4):
dt-bindings: sound: davinci-mcasp: Add optional properties for asynchronous mode
ASoC: ti: davinci-mcasp: Disambiguate mcasp_is_synchronous function
ASoC: ti: davinci-mcasp: Streamline pdir behavior across rx & tx streams
ASoC: ti: davinci-mcasp: Add asynchronous mode support
.../bindings/sound/davinci-mcasp-audio.yaml | 71 ++-
include/linux/platform_data/davinci_asp.h | 3 +-
sound/soc/ti/davinci-mcasp.c | 510 ++++++++++++++----
sound/soc/ti/davinci-mcasp.h | 10 +
4 files changed, 479 insertions(+), 115 deletions(-)
base-commit: dbf8fe85a16a33d6b6bd01f2bc606fc017771465
--
2.43.0
|
On Thu, Jan 29, 2026 at 11:10:41PM -0600, Sen Wang wrote:
Please submit patches using subject lines reflecting the style for the
subsystem, this makes it easier for people to identify relevant patches.
Look at what existing commits in the area you're changing are doing and
make sure your subject lines visually resemble what they're doing.
There's no need to resubmit to fix this alone.
|
{
"author": "Mark Brown <broonie@kernel.org>",
"date": "Mon, 2 Feb 2026 12:44:32 +0000",
"thread_id": "d7ed59c4-2262-4cd5-978f-e9e5c0e8a9a9@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH 0/4] ASoC: ti: davinci-mcasp: Add asynchronous mode support for McASP
|
|
On 30/01/2026 07:10, Sen Wang wrote:
True, the naming was not too precise. It is tasked to decide if the TX
clock needs to be enabled for RX operation, which is precisely when McASP
is in synchronous mode _and_ it is the clock provider.
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
--
Péter
|
{
"author": "=?UTF-8?Q?P=C3=A9ter_Ujfalusi?= <peter.ujfalusi@gmail.com>",
"date": "Mon, 2 Feb 2026 18:42:20 +0200",
"thread_id": "d7ed59c4-2262-4cd5-978f-e9e5c0e8a9a9@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH 0/4] ASoC: ti: davinci-mcasp: Add asynchronous mode support for McASP
|
|
On 30/01/2026 07:10, Sen Wang wrote:
I'm not sure about this, but the sequence should be preserved, PDIR
change first.
--
Péter
|
{
"author": "=?UTF-8?Q?P=C3=A9ter_Ujfalusi?= <peter.ujfalusi@gmail.com>",
"date": "Mon, 2 Feb 2026 18:49:40 +0200",
"thread_id": "d7ed59c4-2262-4cd5-978f-e9e5c0e8a9a9@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH 0/4] ASoC: ti: davinci-mcasp: Add asynchronous mode support for McASP
|
|
On 30/01/2026 07:10, Sen Wang wrote:
static void mcasp_start_rx(struct davinci_mcasp *mcasp)
In new code - while it might not match with old code - use producer
instead of master.
Otherwise it looks nice, I trust you have tested the sync and DIT mode.
With this nitpick addressed:
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
--
Péter
|
{
"author": "=?UTF-8?Q?P=C3=A9ter_Ujfalusi?= <peter.ujfalusi@gmail.com>",
"date": "Mon, 2 Feb 2026 19:02:31 +0200",
"thread_id": "d7ed59c4-2262-4cd5-978f-e9e5c0e8a9a9@gmail.com.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
When freeing or unsharing page tables we send an IPI to synchronize with
concurrent lockless page table walkers (e.g. GUP-fast). Today we broadcast
that IPI to all CPUs, which is costly on large machines and hurts RT
workloads[1].
This series makes those IPIs targeted. We track which CPUs are currently
doing a lockless page table walk for a given mm (per-CPU
active_lockless_pt_walk_mm). When we need to sync, we only IPI those CPUs.
GUP-fast and perf_get_page_size() set/clear the tracker around their walk;
tlb_remove_table_sync_mm() uses it and replaces the previous broadcast in
the free/unshare paths.
On x86, when the TLB flush path already sends IPIs (native without INVLPGB,
or KVM), the extra sync IPI is redundant. We add a property on pv_mmu_ops
so each backend can declare whether its flush_tlb_multi sends real IPIs; if
so, tlb_remove_table_sync_mm() is a no-op. We also have tlb_flush() pass
both freed_tables and unshared_tables so lazy-TLB CPUs get IPIs during
hugetlb unshare.
David Hildenbrand did the initial implementation. I built on his work and
relied on off-list discussions to push it further - thanks a lot David!
[1] https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/
v3 -> v4:
- Rework based on David's two-step direction and per-CPU idea:
1) Targeted IPIs: per-CPU variable when entering/leaving lockless page
table walk; tlb_remove_table_sync_mm() IPIs only those CPUs.
2) On x86, pv_mmu_ops property set at init to skip the extra sync when
flush_tlb_multi() already sends IPIs.
https://lore.kernel.org/linux-mm/bbfdf226-4660-4949-b17b-0d209ee4ef8c@kernel.org/
- https://lore.kernel.org/linux-mm/20260106120303.38124-1-lance.yang@linux.dev/
v2 -> v3:
- Complete rewrite: use dynamic IPI tracking instead of static checks
(per Dave Hansen, thanks!)
- Track IPIs via mmu_gather: native_flush_tlb_multi() sets flag when
actually sending IPIs
- Motivation for skipping redundant IPIs explained by David:
https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/
- https://lore.kernel.org/linux-mm/20251229145245.85452-1-lance.yang@linux.dev/
v1 -> v2:
- Fix cover letter encoding to resolve send-email issues. Apologies for
any email flood caused by the failed send attempts :(
RFC -> v1:
- Use a callback function in pv_mmu_ops instead of comparing function
pointers (per David)
- Embed the check directly in tlb_remove_table_sync_one() instead of
requiring every caller to check explicitly (per David)
- Move tlb_table_flush_implies_ipi_broadcast() outside of
CONFIG_MMU_GATHER_RCU_TABLE_FREE to fix build error on architectures
that don't enable this config.
https://lore.kernel.org/oe-kbuild-all/202512142156.cShiu6PU-lkp@intel.com/
- https://lore.kernel.org/linux-mm/20251213080038.10917-1-lance.yang@linux.dev/
Lance Yang (3):
mm: use targeted IPIs for TLB sync with lockless page table walkers
mm: switch callers to tlb_remove_table_sync_mm()
x86/tlb: add architecture-specific TLB IPI optimization support
arch/x86/hyperv/mmu.c | 5 ++
arch/x86/include/asm/paravirt.h | 5 ++
arch/x86/include/asm/paravirt_types.h | 6 +++
arch/x86/include/asm/tlb.h | 20 +++++++-
arch/x86/kernel/kvm.c | 6 +++
arch/x86/kernel/paravirt.c | 18 +++++++
arch/x86/kernel/smpboot.c | 1 +
arch/x86/xen/mmu_pv.c | 2 +
include/asm-generic/tlb.h | 28 +++++++++--
include/linux/mm.h | 34 +++++++++++++
kernel/events/core.c | 2 +
mm/gup.c | 2 +
mm/khugepaged.c | 2 +-
mm/mmu_gather.c | 69 ++++++++++++++++++++++++---
14 files changed, 187 insertions(+), 13 deletions(-)
--
2.49.0
|
From: Lance Yang <lance.yang@linux.dev>
Now that we have tlb_remove_table_sync_mm(), convert callers from
tlb_remove_table_sync_one() to enable targeted IPIs instead of broadcast.
Three callers updated:
1) collapse_huge_page() - after flushing the old PMD, only IPIs CPUs
walking this mm instead of all CPUs.
2) tlb_flush_unshared_tables() - when unsharing hugetlb page tables,
use tlb->mm for targeted IPIs.
3) __tlb_remove_table_one() - updated to take mmu_gather parameter so
it can use tlb->mm when batch allocation fails.
Note that pmdp_get_lockless_sync() (PAE only) also calls
tlb_remove_table_sync_one() under PTL to ensure all ongoing PMD split-reads
complete between pmdp_get_lockless_{start,end}; the critical section is
very short. I'm inclined not to convert it since PAE systems typically
don't have many cores.
Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
include/asm-generic/tlb.h | 11 ++++++-----
mm/khugepaged.c | 2 +-
mm/mmu_gather.c | 12 ++++++------
3 files changed, 13 insertions(+), 12 deletions(-)
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index b6b06e6b879f..40eb74b28f9d 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -831,17 +831,18 @@ static inline void tlb_flush_unshared_tables(struct mmu_gather *tlb)
/*
* Similarly, we must make sure that concurrent GUP-fast will not
* walk previously-shared page tables that are getting modified+reused
- * elsewhere. So broadcast an IPI to wait for any concurrent GUP-fast.
+ * elsewhere. So send an IPI to wait for any concurrent GUP-fast.
*
- * We only perform this when we are the last sharer of a page table,
- * as the IPI will reach all CPUs: any GUP-fast.
+ * We only perform this when we are the last sharer of a page table.
+ * Use targeted IPI to CPUs actively walking this mm instead of
+ * broadcast.
*
- * Note that on configs where tlb_remove_table_sync_one() is a NOP,
+ * Note that on configs where tlb_remove_table_sync_mm() is a NOP,
* the expectation is that the tlb_flush_mmu_tlbonly() would have issued
* required IPIs already for us.
*/
if (tlb->fully_unshared_tables) {
- tlb_remove_table_sync_one();
+ tlb_remove_table_sync_mm(tlb->mm);
tlb->fully_unshared_tables = false;
}
}
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index fa1e57fd2c46..7781d6628649 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1173,7 +1173,7 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
_pmd = pmdp_collapse_flush(vma, address, pmd);
spin_unlock(pmd_ptl);
mmu_notifier_invalidate_range_end(&range);
- tlb_remove_table_sync_one();
+ tlb_remove_table_sync_mm(mm);
pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
if (pte) {
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 35c89e4b6230..76573ec454e5 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -378,7 +378,7 @@ static inline void __tlb_remove_table_one_rcu(struct rcu_head *head)
__tlb_remove_table(ptdesc);
}
-static inline void __tlb_remove_table_one(void *table)
+static inline void __tlb_remove_table_one(struct mmu_gather *tlb, void *table)
{
struct ptdesc *ptdesc;
@@ -386,16 +386,16 @@ static inline void __tlb_remove_table_one(void *table)
call_rcu(&ptdesc->pt_rcu_head, __tlb_remove_table_one_rcu);
}
#else
-static inline void __tlb_remove_table_one(void *table)
+static inline void __tlb_remove_table_one(struct mmu_gather *tlb, void *table)
{
- tlb_remove_table_sync_one();
+ tlb_remove_table_sync_mm(tlb->mm);
__tlb_remove_table(table);
}
#endif /* CONFIG_PT_RECLAIM */
-static void tlb_remove_table_one(void *table)
+static void tlb_remove_table_one(struct mmu_gather *tlb, void *table)
{
- __tlb_remove_table_one(table);
+ __tlb_remove_table_one(tlb, table);
}
static void tlb_table_flush(struct mmu_gather *tlb)
@@ -417,7 +417,7 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT);
if (*batch == NULL) {
tlb_table_invalidate(tlb);
- tlb_remove_table_one(table);
+ tlb_remove_table_one(tlb, table);
return;
}
(*batch)->nr = 0;
--
2.49.0
|
{
"author": "Lance Yang <lance.yang@linux.dev>",
"date": "Mon, 2 Feb 2026 15:45:56 +0800",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
When freeing or unsharing page tables we send an IPI to synchronize with
concurrent lockless page table walkers (e.g. GUP-fast). Today we broadcast
that IPI to all CPUs, which is costly on large machines and hurts RT
workloads[1].
This series makes those IPIs targeted. We track which CPUs are currently
doing a lockless page table walk for a given mm (per-CPU
active_lockless_pt_walk_mm). When we need to sync, we only IPI those CPUs.
GUP-fast and perf_get_page_size() set/clear the tracker around their walk;
tlb_remove_table_sync_mm() uses it and replaces the previous broadcast in
the free/unshare paths.
On x86, when the TLB flush path already sends IPIs (native without INVLPGB,
or KVM), the extra sync IPI is redundant. We add a property on pv_mmu_ops
so each backend can declare whether its flush_tlb_multi sends real IPIs; if
so, tlb_remove_table_sync_mm() is a no-op. We also have tlb_flush() pass
both freed_tables and unshared_tables so lazy-TLB CPUs get IPIs during
hugetlb unshare.
David Hildenbrand did the initial implementation. I built on his work and
relied on off-list discussions to push it further - thanks a lot David!
[1] https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/
v3 -> v4:
- Rework based on David's two-step direction and per-CPU idea:
1) Targeted IPIs: per-CPU variable when entering/leaving lockless page
table walk; tlb_remove_table_sync_mm() IPIs only those CPUs.
2) On x86, pv_mmu_ops property set at init to skip the extra sync when
flush_tlb_multi() already sends IPIs.
https://lore.kernel.org/linux-mm/bbfdf226-4660-4949-b17b-0d209ee4ef8c@kernel.org/
- https://lore.kernel.org/linux-mm/20260106120303.38124-1-lance.yang@linux.dev/
v2 -> v3:
- Complete rewrite: use dynamic IPI tracking instead of static checks
(per Dave Hansen, thanks!)
- Track IPIs via mmu_gather: native_flush_tlb_multi() sets flag when
actually sending IPIs
- Motivation for skipping redundant IPIs explained by David:
https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/
- https://lore.kernel.org/linux-mm/20251229145245.85452-1-lance.yang@linux.dev/
v1 -> v2:
- Fix cover letter encoding to resolve send-email issues. Apologies for
any email flood caused by the failed send attempts :(
RFC -> v1:
- Use a callback function in pv_mmu_ops instead of comparing function
pointers (per David)
- Embed the check directly in tlb_remove_table_sync_one() instead of
requiring every caller to check explicitly (per David)
- Move tlb_table_flush_implies_ipi_broadcast() outside of
CONFIG_MMU_GATHER_RCU_TABLE_FREE to fix build error on architectures
that don't enable this config.
https://lore.kernel.org/oe-kbuild-all/202512142156.cShiu6PU-lkp@intel.com/
- https://lore.kernel.org/linux-mm/20251213080038.10917-1-lance.yang@linux.dev/
Lance Yang (3):
mm: use targeted IPIs for TLB sync with lockless page table walkers
mm: switch callers to tlb_remove_table_sync_mm()
x86/tlb: add architecture-specific TLB IPI optimization support
arch/x86/hyperv/mmu.c | 5 ++
arch/x86/include/asm/paravirt.h | 5 ++
arch/x86/include/asm/paravirt_types.h | 6 +++
arch/x86/include/asm/tlb.h | 20 +++++++-
arch/x86/kernel/kvm.c | 6 +++
arch/x86/kernel/paravirt.c | 18 +++++++
arch/x86/kernel/smpboot.c | 1 +
arch/x86/xen/mmu_pv.c | 2 +
include/asm-generic/tlb.h | 28 +++++++++--
include/linux/mm.h | 34 +++++++++++++
kernel/events/core.c | 2 +
mm/gup.c | 2 +
mm/khugepaged.c | 2 +-
mm/mmu_gather.c | 69 ++++++++++++++++++++++++---
14 files changed, 187 insertions(+), 13 deletions(-)
--
2.49.0
|
From: Lance Yang <lance.yang@linux.dev>
Currently, tlb_remove_table_sync_one() broadcasts IPIs to all CPUs to wait
for any concurrent lockless page table walkers (e.g., GUP-fast). This is
inefficient on systems with many CPUs, especially for RT workloads[1].
This patch introduces a per-CPU tracking mechanism to record which CPUs are
actively performing lockless page table walks for a specific mm_struct.
When freeing/unsharing page tables, we can now send IPIs only to the CPUs
that are actually walking that mm, instead of broadcasting to all CPUs.
This is in preparation for targeted IPIs: a follow-up patch will switch
callers over to tlb_remove_table_sync_mm().
Note that the tracking adds ~3% latency to GUP-fast, as measured on a
64-core system.
[1] https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/
Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
include/asm-generic/tlb.h | 2 ++
include/linux/mm.h | 34 ++++++++++++++++++++++++++
kernel/events/core.c | 2 ++
mm/gup.c | 2 ++
mm/mmu_gather.c | 50 +++++++++++++++++++++++++++++++++++++++
5 files changed, 90 insertions(+)
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 4aeac0c3d3f0..b6b06e6b879f 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -250,6 +250,7 @@ static inline void tlb_remove_table(struct mmu_gather *tlb, void *table)
#endif
void tlb_remove_table_sync_one(void);
+void tlb_remove_table_sync_mm(struct mm_struct *mm);
#else
@@ -258,6 +259,7 @@ void tlb_remove_table_sync_one(void);
#endif
static inline void tlb_remove_table_sync_one(void) { }
+static inline void tlb_remove_table_sync_mm(struct mm_struct *mm) { }
#endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f8a8fd47399c..d92df995fcd1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2995,6 +2995,40 @@ long memfd_pin_folios(struct file *memfd, loff_t start, loff_t end,
pgoff_t *offset);
int folio_add_pins(struct folio *folio, unsigned int pins);
+/*
+ * Track CPUs doing lockless page table walks to avoid broadcast IPIs
+ * during TLB flushes.
+ */
+DECLARE_PER_CPU(struct mm_struct *, active_lockless_pt_walk_mm);
+
+static inline void pt_walk_lockless_start(struct mm_struct *mm)
+{
+ lockdep_assert_irqs_disabled();
+
+ /*
+ * Tell other CPUs we're doing lockless page table walk.
+ *
+ * Full barrier needed to prevent page table reads from being
+ * reordered before this write.
+ *
+ * Pairs with smp_rmb() in tlb_remove_table_sync_mm().
+ */
+ this_cpu_write(active_lockless_pt_walk_mm, mm);
+ smp_mb();
+}
+
+static inline void pt_walk_lockless_end(void)
+{
+ lockdep_assert_irqs_disabled();
+
+ /*
+ * Clear the pointer so other CPUs no longer see this CPU as walking
+ * the mm. Use smp_store_release to ensure page table reads complete
+ * before the clear is visible to other CPUs.
+ */
+ smp_store_release(this_cpu_ptr(&active_lockless_pt_walk_mm), NULL);
+}
+
int get_user_pages_fast(unsigned long start, int nr_pages,
unsigned int gup_flags, struct page **pages);
int pin_user_pages_fast(unsigned long start, int nr_pages,
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 5b5cb620499e..6539112c28ff 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -8190,7 +8190,9 @@ static u64 perf_get_page_size(unsigned long addr)
mm = &init_mm;
}
+ pt_walk_lockless_start(mm);
size = perf_get_pgtable_size(mm, addr);
+ pt_walk_lockless_end();
local_irq_restore(flags);
diff --git a/mm/gup.c b/mm/gup.c
index 8e7dc2c6ee73..6748e28b27f2 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3154,7 +3154,9 @@ static unsigned long gup_fast(unsigned long start, unsigned long end,
* that come from callers of tlb_remove_table_sync_one().
*/
local_irq_save(flags);
+ pt_walk_lockless_start(current->mm);
gup_fast_pgd_range(start, end, gup_flags, pages, &nr_pinned);
+ pt_walk_lockless_end();
local_irq_restore(flags);
/*
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 2faa23d7f8d4..35c89e4b6230 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -285,6 +285,56 @@ void tlb_remove_table_sync_one(void)
smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
}
+DEFINE_PER_CPU(struct mm_struct *, active_lockless_pt_walk_mm);
+EXPORT_PER_CPU_SYMBOL_GPL(active_lockless_pt_walk_mm);
+
+/**
+ * tlb_remove_table_sync_mm - send IPIs to CPUs doing lockless page table
+ * walk for @mm
+ *
+ * @mm: target mm; only CPUs walking this mm get an IPI.
+ *
+ * Like tlb_remove_table_sync_one() but only targets CPUs in
+ * active_lockless_pt_walk_mm.
+ */
+void tlb_remove_table_sync_mm(struct mm_struct *mm)
+{
+ cpumask_var_t target_cpus;
+ bool found_any = false;
+ int cpu;
+
+ if (WARN_ONCE(!mm, "NULL mm in %s\n", __func__)) {
+ tlb_remove_table_sync_one();
+ return;
+ }
+
+ /* If we can't, fall back to broadcast. */
+ if (!alloc_cpumask_var(&target_cpus, GFP_ATOMIC)) {
+ tlb_remove_table_sync_one();
+ return;
+ }
+
+ cpumask_clear(target_cpus);
+
+ /* Pairs with smp_mb() in pt_walk_lockless_start(). */
+ smp_rmb();
+
+ /* Find CPUs doing lockless page table walks for this mm */
+ for_each_online_cpu(cpu) {
+ if (per_cpu(active_lockless_pt_walk_mm, cpu) == mm) {
+ cpumask_set_cpu(cpu, target_cpus);
+ found_any = true;
+ }
+ }
+
+ /* Only send IPIs to CPUs actually doing lockless walks */
+ if (found_any)
+ smp_call_function_many(target_cpus, tlb_remove_table_smp_sync,
+ NULL, 1);
+
+ free_cpumask_var(target_cpus);
+}
+
static void tlb_remove_table_rcu(struct rcu_head *head)
{
__tlb_remove_table_free(container_of(head, struct mmu_table_batch, rcu));
--
2.49.0
|
{
"author": "Lance Yang <lance.yang@linux.dev>",
"date": "Mon, 2 Feb 2026 15:45:55 +0800",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
From: Lance Yang <lance.yang@linux.dev>
When the TLB flush path already sends IPIs (e.g. native without INVLPGB,
or KVM), tlb_remove_table_sync_mm() does not need to send another round.
Add a property on pv_mmu_ops so each paravirt backend can indicate whether
its flush_tlb_multi sends real IPIs; if so, tlb_remove_table_sync_mm() is
a no-op.
Native sets it in native_pv_tlb_init() when flush_tlb_multi is still
native_flush_tlb_multi() and INVLPGB is disabled. KVM sets it when INVLPGB
is unavailable, since its flush implementation ends up in
native_flush_tlb_multi(); Xen and Hyper-V leave it false because they
flush via hypercalls.
Also pass both freed_tables and unshared_tables from tlb_flush() into
flush_tlb_mm_range() so lazy-TLB CPUs get IPIs during hugetlb unshare.
Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
arch/x86/hyperv/mmu.c | 5 +++++
arch/x86/include/asm/paravirt.h | 5 +++++
arch/x86/include/asm/paravirt_types.h | 6 ++++++
arch/x86/include/asm/tlb.h | 20 +++++++++++++++++++-
arch/x86/kernel/kvm.c | 6 ++++++
arch/x86/kernel/paravirt.c | 18 ++++++++++++++++++
arch/x86/kernel/smpboot.c | 1 +
arch/x86/xen/mmu_pv.c | 2 ++
include/asm-generic/tlb.h | 15 +++++++++++++++
mm/mmu_gather.c | 7 +++++++
10 files changed, 84 insertions(+), 1 deletion(-)
diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
index cfcb60468b01..fc8fb275f295 100644
--- a/arch/x86/hyperv/mmu.c
+++ b/arch/x86/hyperv/mmu.c
@@ -243,4 +243,9 @@ void hyperv_setup_mmu_ops(void)
pr_info("Using hypercall for remote TLB flush\n");
pv_ops.mmu.flush_tlb_multi = hyperv_flush_tlb_multi;
+ /*
+ * Hyper-V uses hypercalls for TLB flush, not real IPIs.
+ * Keep the property as false.
+ */
+ pv_ops.mmu.flush_tlb_multi_implies_ipi_broadcast = false;
}
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 13f9cd31c8f8..1fdbe3736f41 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -698,6 +698,7 @@ static __always_inline unsigned long arch_local_irq_save(void)
extern void default_banner(void);
void native_pv_lock_init(void) __init;
+void native_pv_tlb_init(void) __init;
#else /* __ASSEMBLER__ */
@@ -727,6 +728,10 @@ void native_pv_lock_init(void) __init;
static inline void native_pv_lock_init(void)
{
}
+
+static inline void native_pv_tlb_init(void)
+{
+}
#endif
#endif /* !CONFIG_PARAVIRT */
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 3502939415ad..d8aa519ef5e3 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -133,6 +133,12 @@ struct pv_mmu_ops {
void (*flush_tlb_multi)(const struct cpumask *cpus,
const struct flush_tlb_info *info);
+ /*
+ * Indicates whether flush_tlb_multi IPIs provide sufficient
+ * synchronization during TLB flush when freeing or unsharing page tables.
+ */
+ bool flush_tlb_multi_implies_ipi_broadcast;
+
/* Hook for intercepting the destruction of an mm_struct. */
void (*exit_mmap)(struct mm_struct *mm);
void (*notify_page_enc_status_changed)(unsigned long pfn, int npages, bool enc);
diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 866ea78ba156..1e524d8e260a 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -5,10 +5,23 @@
#define tlb_flush tlb_flush
static inline void tlb_flush(struct mmu_gather *tlb);
+#define tlb_table_flush_implies_ipi_broadcast tlb_table_flush_implies_ipi_broadcast
+static inline bool tlb_table_flush_implies_ipi_broadcast(void);
+
#include <asm-generic/tlb.h>
#include <linux/kernel.h>
#include <vdso/bits.h>
#include <vdso/page.h>
+#include <asm/paravirt.h>
+
+static inline bool tlb_table_flush_implies_ipi_broadcast(void)
+{
+#ifdef CONFIG_PARAVIRT
+ return pv_ops.mmu.flush_tlb_multi_implies_ipi_broadcast;
+#else
+ return !cpu_feature_enabled(X86_FEATURE_INVLPGB);
+#endif
+}
static inline void tlb_flush(struct mmu_gather *tlb)
{
@@ -20,7 +33,12 @@ static inline void tlb_flush(struct mmu_gather *tlb)
end = tlb->end;
}
- flush_tlb_mm_range(tlb->mm, start, end, stride_shift, tlb->freed_tables);
+ /*
+ * During TLB flushes, pass both freed_tables and unshared_tables
+ * so lazy-TLB CPUs receive IPIs.
+ */
+ flush_tlb_mm_range(tlb->mm, start, end, stride_shift,
+ tlb->freed_tables || tlb->unshared_tables);
}
static inline void invlpg(unsigned long addr)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 37dc8465e0f5..6a5e47ee4eb6 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -856,6 +856,12 @@ static void __init kvm_guest_init(void)
#ifdef CONFIG_SMP
if (pv_tlb_flush_supported()) {
pv_ops.mmu.flush_tlb_multi = kvm_flush_tlb_multi;
+ /*
+ * KVM's flush implementation calls native_flush_tlb_multi(),
+ * which sends real IPIs when INVLPGB is not available.
+ */
+ if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
+ pv_ops.mmu.flush_tlb_multi_implies_ipi_broadcast = true;
pr_info("KVM setup pv remote TLB flush\n");
}
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index ab3e172dcc69..1af253c9f51d 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -60,6 +60,23 @@ void __init native_pv_lock_init(void)
static_branch_enable(&virt_spin_lock_key);
}
+void __init native_pv_tlb_init(void)
+{
+ /*
+ * Check if we're still using native TLB flush (not overridden by
+ * a PV backend) and don't have INVLPGB support.
+ *
+ * In this case, native IPI-based TLB flush provides sufficient
+ * synchronization for GUP-fast.
+ *
+ * PV backends (KVM, Xen, HyperV) should set this property in their
+ * own initialization code if their flush implementation sends IPIs.
+ */
+ if (pv_ops.mmu.flush_tlb_multi == native_flush_tlb_multi &&
+ !cpu_feature_enabled(X86_FEATURE_INVLPGB))
+ pv_ops.mmu.flush_tlb_multi_implies_ipi_broadcast = true;
+}
+
struct static_key paravirt_steal_enabled;
struct static_key paravirt_steal_rq_enabled;
@@ -173,6 +190,7 @@ struct paravirt_patch_template pv_ops = {
.mmu.flush_tlb_kernel = native_flush_tlb_global,
.mmu.flush_tlb_one_user = native_flush_tlb_one_user,
.mmu.flush_tlb_multi = native_flush_tlb_multi,
+ .mmu.flush_tlb_multi_implies_ipi_broadcast = false,
.mmu.exit_mmap = paravirt_nop,
.mmu.notify_page_enc_status_changed = paravirt_nop,
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 5cd6950ab672..3cdb04162843 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1167,6 +1167,7 @@ void __init native_smp_prepare_boot_cpu(void)
switch_gdt_and_percpu_base(me);
native_pv_lock_init();
+ native_pv_tlb_init();
}
void __init native_smp_cpus_done(unsigned int max_cpus)
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 7a35c3393df4..b6d86299cf10 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2185,6 +2185,8 @@ static const typeof(pv_ops) xen_mmu_ops __initconst = {
.flush_tlb_kernel = xen_flush_tlb,
.flush_tlb_one_user = xen_flush_tlb_one_user,
.flush_tlb_multi = xen_flush_tlb_multi,
+ /* Xen uses hypercalls for TLB flush, not real IPIs */
+ .flush_tlb_multi_implies_ipi_broadcast = false,
.pgd_alloc = xen_pgd_alloc,
.pgd_free = xen_pgd_free,
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 40eb74b28f9d..fae97c8bcceb 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -240,6 +240,21 @@ static inline void tlb_remove_table(struct mmu_gather *tlb, void *table)
}
#endif /* CONFIG_MMU_GATHER_TABLE_FREE */
+/*
+ * Architectures can override this to indicate whether TLB flush operations
+ * send IPIs that are sufficient to synchronize with lockless page table
+ * walkers (e.g., GUP-fast). If true, tlb_remove_table_sync_mm() becomes
+ * a no-op as the TLB flush already provided the necessary IPI.
+ *
+ * Default is false, meaning we need explicit IPIs via tlb_remove_table_sync_mm().
+ */
+#ifndef tlb_table_flush_implies_ipi_broadcast
+static inline bool tlb_table_flush_implies_ipi_broadcast(void)
+{
+ return false;
+}
+#endif
+
#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
/*
* This allows an architecture that does not use the linux page-tables for
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 76573ec454e5..9620480c11ce 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -303,6 +303,13 @@ void tlb_remove_table_sync_mm(struct mm_struct *mm)
bool found_any = false;
int cpu;
+ /*
+ * If the architecture's TLB flush already sent IPIs that are sufficient
+ * for synchronization, we don't need to send additional IPIs.
+ */
+ if (tlb_table_flush_implies_ipi_broadcast())
+ return;
+
if (WARN_ONCE(!mm, "NULL mm in %s\n", __func__)) {
tlb_remove_table_sync_one();
return;
--
2.49.0
|
{
"author": "Lance Yang <lance.yang@linux.dev>",
"date": "Mon, 2 Feb 2026 15:45:57 +0800",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
On Mon, Feb 02, 2026 at 03:45:55PM +0800, Lance Yang wrote:
What architecture, and that is acceptable?
One thing to try is something like:
xchg(this_cpu_ptr(&active_lockless_pt_walk_mm), mm);
That *might* be a little better on x86_64, on anything else you really
don't want to use this_cpu_() ops when you *know* IRQs are already
disabled.
Why the heck is this exported? Both users are firmly core code.
Pairs how? The start thing does something like:
[W] active_lockless_pt_walk_mm = mm
MB
[L] page-tables
So this is:
[L] page-tables
RMB
[L] active_lockless_pt_walk_mm
?
You really don't need this to be atomic.
Coding style wants { } here. Also, isn't this what we have
smp_call_function_many_cond() for?
|
{
"author": "Peter Zijlstra <peterz@infradead.org>",
"date": "Mon, 2 Feb 2026 10:42:45 +0100",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On Mon, Feb 02, 2026 at 03:45:54PM +0800, Lance Yang wrote:
I'm confused. This only happens when !PT_RECLAIM, because if PT_RECLAIM
__tlb_remove_table_one() actually uses RCU.
So why are you making things more expensive for no reason?
|
{
"author": "Peter Zijlstra <peterz@infradead.org>",
"date": "Mon, 2 Feb 2026 10:54:14 +0100",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On Mon, 2 Feb 2026 10:54:14 +0100, Peter Zijlstra wrote:
You're right that when CONFIG_PT_RECLAIM is set, __tlb_remove_table_one()
uses call_rcu() and we never call any sync there — this series doesn't
touch that path.
In the !PT_RECLAIM table-free path (same __tlb_remove_table_one() branch
that calls tlb_remove_table_sync_mm(tlb->mm) before __tlb_remove_table),
we're not adding any new sync; we're replacing the existing broadcast IPI
(tlb_remove_table_sync_one()) with targeted IPIs (tlb_remove_table_sync_mm()).
One thing I just realized: when CONFIG_MMU_GATHER_RCU_TABLE_FREE is not
set, the sync path isn't used at all (tlb_remove_table_sync_one() and
friends aren't even compiled), so we don't need the tracker in that config.
Thanks for raising this!
Lance
|
{
"author": "Lance Yang <lance.yang@linux.dev>",
"date": "Mon, 2 Feb 2026 19:00:16 +0800",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
Hi Peter,
Thanks for taking time to review!
On 2026/2/2 17:42, Peter Zijlstra wrote:
x86-64.
I ran ./gup_bench which spawns 60 threads, each doing 500k GUP-fast
operations (pinning 8 pages per call) via the gup_test ioctl.
Results for pin pages:
- Before: avg 1.489s (10 runs)
- After: avg 1.533s (10 runs)
Given we avoid broadcast IPIs on large systems, I think this is a
reasonable trade-off :)
Ah, good to know that. Thanks!
IIUC, xchg() provides the full barrier we need ;)
OK. Will drop this export.
On the walker side (pt_walk_lockless_start):
[W] active_lockless_pt_walk_mm = mm
MB
[L] page-tables (walker reads page tables)
So the walker publishes "I'm walking this mm" before reading page tables.
On the sync side we don't read page-tables. We do:
RMB
[L] active_lockless_pt_walk_mm (we read the per-CPU pointer below)
We need to observe the walker's store of active_lockless_pt_walk_mm before
we decide which CPUs to IPI.
So on the sync side we do smp_rmb(), then read active_lockless_pt_walk_mm.
That pairs with the full barrier in pt_walk_lockless_start().
Right! That would be better, something like:
static bool tlb_remove_table_sync_mm_cond(int cpu, void *mm)
{
	return per_cpu(active_lockless_pt_walk_mm, cpu) == (struct mm_struct *)mm;
}

on_each_cpu_cond_mask(tlb_remove_table_sync_mm_cond,
		      tlb_remove_table_smp_sync,
		      (void *)mm, true, cpu_online_mask);
Thanks,
Lance
|
{
"author": "Lance Yang <lance.yang@linux.dev>",
"date": "Mon, 2 Feb 2026 20:14:32 +0800",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On Mon, Feb 02, 2026 at 07:00:16PM +0800, Lance Yang wrote:
Right, but if we can use full RCU for PT_RECLAIM, why can't we do so
unconditionally and not add overhead?
|
{
"author": "Peter Zijlstra <peterz@infradead.org>",
"date": "Mon, 2 Feb 2026 13:50:30 +0100",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On Mon, Feb 02, 2026 at 08:14:32PM +0800, Lance Yang wrote:
No it doesn't; this is not how memory barriers work.
|
{
"author": "Peter Zijlstra <peterz@infradead.org>",
"date": "Mon, 2 Feb 2026 13:51:46 +0100",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On 2026/2/2 20:50, Peter Zijlstra wrote:
The sync (IPI) is mainly needed for unshare (e.g. hugetlb) and collapse
(khugepaged) paths, regardless of whether table free uses RCU, IIUC.
|
{
"author": "Lance Yang <lance.yang@linux.dev>",
"date": "Mon, 2 Feb 2026 20:58:59 +0800",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On 2026/2/2 20:58, Lance Yang wrote:
In addition: We need the sync when we modify page tables (e.g. unshare,
collapse), not only when we free them. RCU can defer freeing but does
not prevent lockless walkers from seeing concurrent in-place
modifications, so we need the IPI to synchronize with those walkers
first.
|
{
"author": "Lance Yang <lance.yang@linux.dev>",
"date": "Mon, 2 Feb 2026 21:07:10 +0800",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On 2026/2/2 20:51, Peter Zijlstra wrote:
Hmm... we need MB rather than RMB on the sync side. Is that correct?
Walker:
[W]active_lockless_pt_walk_mm = mm -> MB -> [L]page-tables
Sync:
[W]page-tables -> MB -> [L]active_lockless_pt_walk_mm
Thanks,
Lance
|
{
"author": "Lance Yang <lance.yang@linux.dev>",
"date": "Mon, 2 Feb 2026 21:23:07 +0800",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On Mon, Feb 02, 2026 at 09:07:10PM +0800, Lance Yang wrote:
Currently PT_RECLAIM=y has no IPI; are you saying that is broken? If
not, then why do we need this at all?
|
{
"author": "Peter Zijlstra <peterz@infradead.org>",
"date": "Mon, 2 Feb 2026 14:37:13 +0100",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
pointers (per David)
- Embed the check directly in tlb_remove_table_sync_one() instead of
requiring every caller to check explicitly (per David)
- Move tlb_table_flush_implies_ipi_broadcast() outside of
CONFIG_MMU_GATHER_RCU_TABLE_FREE to fix build error on architectures
that don't enable this config.
https://lore.kernel.org/oe-kbuild-all/202512142156.cShiu6PU-lkp@intel.com/
- https://lore.kernel.org/linux-mm/20251213080038.10917-1-lance.yang@linux.dev/
Lance Yang (3):
mm: use targeted IPIs for TLB sync with lockless page table walkers
mm: switch callers to tlb_remove_table_sync_mm()
x86/tlb: add architecture-specific TLB IPI optimization support
arch/x86/hyperv/mmu.c | 5 ++
arch/x86/include/asm/paravirt.h | 5 ++
arch/x86/include/asm/paravirt_types.h | 6 +++
arch/x86/include/asm/tlb.h | 20 +++++++-
arch/x86/kernel/kvm.c | 6 +++
arch/x86/kernel/paravirt.c | 18 +++++++
arch/x86/kernel/smpboot.c | 1 +
arch/x86/xen/mmu_pv.c | 2 +
include/asm-generic/tlb.h | 28 +++++++++--
include/linux/mm.h | 34 +++++++++++++
kernel/events/core.c | 2 +
mm/gup.c | 2 +
mm/khugepaged.c | 2 +-
mm/mmu_gather.c | 69 ++++++++++++++++++++++++---
14 files changed, 187 insertions(+), 13 deletions(-)
--
2.49.0
|
On Mon, Feb 02, 2026 at 09:23:07PM +0800, Lance Yang wrote:
This can work -- but only if the walker and sync touch the same
page-table address.
Now, typically I would imagine they both share the p4d/pud address at
the very least, right?
|
{
"author": "Peter Zijlstra <peterz@infradead.org>",
"date": "Mon, 2 Feb 2026 14:42:33 +0100",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On 2026/2/2 21:42, Peter Zijlstra wrote:
Thanks. I think I see the confusion ...
To be clear, the goal is not to make the walker see page-table writes
through the MB pairing, but to wait for any concurrent lockless page
table walkers to finish.
The flow is:
1) Page tables are modified
2) TLB flush is done
3) Read active_lockless_pt_walk_mm (with MB to order page-table writes
   before this read) to find which CPUs are locklessly walking this mm
4) IPI those CPUs
5) The IPI forces them to sync, so after the IPI returns, any in-flight
   lockless page table walk has finished (or will restart and see the
   new page tables)
The synchronization relies on the IPI to ensure walkers stop before
continuing.
I would assume the TLB flush (step 2) should imply some barrier.
Does that clarify?
|
{
"author": "Lance Yang <lance.yang@linux.dev>",
"date": "Mon, 2 Feb 2026 22:28:47 +0800",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On 2026/2/2 21:37, Peter Zijlstra wrote:
PT_RECLAIM=y does have IPI for unshare/collapse — those paths call
tlb_flush_unshared_tables() (for hugetlb unshare) and collapse_huge_page()
(in khugepaged collapse), which already send IPIs today (broadcast to all
CPUs via tlb_remove_table_sync_one()).
What PT_RECLAIM=y doesn't need IPI for is table freeing (
__tlb_remove_table_one() uses call_rcu() instead). But table modification
(unshare, collapse) still needs IPI to synchronize with lockless walkers,
regardless of PT_RECLAIM.
So PT_RECLAIM=y is not broken; it already has IPI where needed. This series
just makes those IPIs targeted instead of broadcast. Does that clarify?
Thanks,
Lance
|
{
"author": "Lance Yang <lance.yang@linux.dev>",
"date": "Mon, 2 Feb 2026 22:37:39 +0800",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On Mon, Feb 02, 2026 at 10:37:39PM +0800, Lance Yang wrote:
Oh bah, reading is hard. I had missed they had more table_sync_one() calls,
rather than remove_table_one().
So you *can* replace table_sync_one() with rcu_sync(), that will provide
the same guarantees. It's just a 'little' bit slower on the update side,
but does not incur the read side cost.
I really think anything here needs to better explain the various
requirements. Because now everybody gets to pay the price for hugetlb
shared crud, while 'nobody' will actually use that.
|
{
"author": "Peter Zijlstra <peterz@infradead.org>",
"date": "Mon, 2 Feb 2026 16:09:57 +0100",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On 2026/2/2 23:09, Peter Zijlstra wrote:
Yep, we could replace the IPI with synchronize_rcu() on the sync side:
- Currently: TLB flush -> send IPI -> wait for walkers to finish
- With synchronize_rcu(): TLB flush -> synchronize_rcu() -> waits for
  grace period
Lockless walkers (e.g. GUP-fast) use local_irq_disable();
synchronize_rcu() also waits for regions with preemption/interrupts
disabled, so it should work, IIUC.
And then, the trade-off would be:
- Read side: zero cost (no per-CPU tracking)
- Write side: wait for RCU grace period (potentially slower)
For collapse/unshare, that write-side latency might be acceptable :)
@David, what do you think?
Right. If we go with synchronize_rcu(), the read-side cost goes away ...
Thanks,
Lance
|
{
"author": "Lance Yang <lance.yang@linux.dev>",
"date": "Mon, 2 Feb 2026 23:52:31 +0800",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH v4 0/3] targeted TLB sync IPIs for lockless page table walkers
|
|
On 2/2/26 04:14, Lance Yang wrote:
I thought the big databases were really sensitive to GUP-fast latency.
They like big systems, too. Won't they howl when this finally hits their
testing?
Also, two of the "write" side here are:
* collapse_huge_page() (khugepaged)
* tlb_remove_table() (in an "-ENOMEM" path)
Those are quite slow paths, right? Shouldn't the design here favor
keeping gup-fast as fast as possible as opposed to impacting those?
|
{
"author": "Dave Hansen <dave.hansen@intel.com>",
"date": "Mon, 2 Feb 2026 08:20:13 -0800",
"thread_id": "20260202074557.16544-1-lance.yang@linux.dev.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
Hi
Here are patches related to enabling IBI while runtime suspended for Intel
controllers.
Intel LPSS I3C controllers can wake from runtime suspend to receive
in-band interrupts (IBIs).
It is non-trivial to implement because the parent PCI device has 2 I3C bus
instances (MIPI I3C HCI Multi-Bus Instance capability) represented by
platform devices with a separate driver, but the IBI-wakeup is shared by
both, which means runtime PM has to be managed by the parent PCI driver.
To make that work, the PCI driver handles runtime PM, but leverages the
mipi-i3c-hci platform driver's functionality for saving and restoring
controller state.
Adrian Hunter (7):
i3c: mipi-i3c-hci-pci: Set d3hot_delay to 0 for Intel controllers
i3c: master: Allow controller drivers to select runtime PM device
i3c: master: Mark last_busy on IBI when runtime PM is allowed
i3c: mipi-i3c-hci: Add quirk to allow IBI while runtime suspended
i3c: mipi-i3c-hci: Allow parent to manage runtime PM
i3c: mipi-i3c-hci-pci: Add optional ability to manage child runtime PM
i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
drivers/i3c/master.c | 14 +-
drivers/i3c/master/mipi-i3c-hci/core.c | 30 ++--
drivers/i3c/master/mipi-i3c-hci/hci.h | 7 +
drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 158 ++++++++++++++++++++-
include/linux/i3c/master.h | 2 +
5 files changed, 194 insertions(+), 17 deletions(-)
Regards
Adrian
|
Set d3hot_delay to 0 for Intel controllers because a delay is not needed.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
index 0f05a15c14c7..bc83caad4197 100644
--- a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
+++ b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
@@ -164,6 +164,7 @@ static int intel_i3c_init(struct mipi_i3c_hci_pci *hci)
dma_set_mask_and_coherent(&hci->pci->dev, DMA_BIT_MASK(64));
hci->pci->d3cold_delay = 0;
+ hci->pci->d3hot_delay = 0;
hci->private = host;
host->priv = priv;
--
2.51.0
|
{
"author": "Adrian Hunter <adrian.hunter@intel.com>",
"date": "Thu, 29 Jan 2026 20:18:35 +0200",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
|
Some I3C controller drivers need runtime PM to operate on a device other
than the parent device. To support that, add an rpm_dev pointer to
struct i3c_master_controller so drivers can specify which device should
be used for runtime power management.
If a driver does not set rpm_dev explicitly, default to using the parent
device to maintain existing behaviour.
Update the runtime PM helpers to use rpm_dev instead of dev.parent.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/i3c/master.c | 9 ++++++---
include/linux/i3c/master.h | 2 ++
2 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
index 49fb6e30a68e..bcc493dc9d04 100644
--- a/drivers/i3c/master.c
+++ b/drivers/i3c/master.c
@@ -108,10 +108,10 @@ static struct i3c_master_controller *dev_to_i3cmaster(struct device *dev)
static int __must_check i3c_master_rpm_get(struct i3c_master_controller *master)
{
- int ret = master->rpm_allowed ? pm_runtime_resume_and_get(master->dev.parent) : 0;
+ int ret = master->rpm_allowed ? pm_runtime_resume_and_get(master->rpm_dev) : 0;
if (ret < 0) {
- dev_err(master->dev.parent, "runtime resume failed, error %d\n", ret);
+ dev_err(master->rpm_dev, "runtime resume failed, error %d\n", ret);
return ret;
}
return 0;
@@ -120,7 +120,7 @@ static int __must_check i3c_master_rpm_get(struct i3c_master_controller *master)
static void i3c_master_rpm_put(struct i3c_master_controller *master)
{
if (master->rpm_allowed)
- pm_runtime_put_autosuspend(master->dev.parent);
+ pm_runtime_put_autosuspend(master->rpm_dev);
}
int i3c_bus_rpm_get(struct i3c_bus *bus)
@@ -2975,6 +2975,9 @@ int i3c_master_register(struct i3c_master_controller *master,
INIT_LIST_HEAD(&master->boardinfo.i2c);
INIT_LIST_HEAD(&master->boardinfo.i3c);
+ if (!master->rpm_dev)
+ master->rpm_dev = parent;
+
ret = i3c_master_rpm_get(master);
if (ret)
return ret;
diff --git a/include/linux/i3c/master.h b/include/linux/i3c/master.h
index af2bb48363ba..4be67a902dd8 100644
--- a/include/linux/i3c/master.h
+++ b/include/linux/i3c/master.h
@@ -501,6 +501,7 @@ struct i3c_master_controller_ops {
* registered to the I2C subsystem to be as transparent as possible to
* existing I2C drivers
* @ops: master operations. See &struct i3c_master_controller_ops
+ * @rpm_dev: Runtime PM device
* @secondary: true if the master is a secondary master
* @init_done: true when the bus initialization is done
* @hotjoin: true if the master support hotjoin
@@ -526,6 +527,7 @@ struct i3c_master_controller {
struct i3c_dev_desc *this;
struct i2c_adapter i2c;
const struct i3c_master_controller_ops *ops;
+ struct device *rpm_dev;
unsigned int secondary : 1;
unsigned int init_done : 1;
unsigned int hotjoin: 1;
--
2.51.0
|
{
"author": "Adrian Hunter <adrian.hunter@intel.com>",
"date": "Thu, 29 Jan 2026 20:18:36 +0200",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
Hi
Here are patches related to enabling IBI while runtime suspended for Intel
controllers.
Intel LPSS I3C controllers can wake from runtime suspend to receive
in-band interrupts (IBIs).
It is non-trivial to implement because the parent PCI device has 2 I3C bus
instances (MIPI I3C HCI Multi-Bus Instance capability) represented by
platform devices with a separate driver, but the IBI-wakeup is shared by
both, which means runtime PM has to be managed by the parent PCI driver.
To make that work, the PCI driver handles runtime PM, but leverages the
mipi-i3c-hci platform driver's functionality for saving and restoring
controller state.
Adrian Hunter (7):
i3c: mipi-i3c-hci-pci: Set d3hot_delay to 0 for Intel controllers
i3c: master: Allow controller drivers to select runtime PM device
i3c: master: Mark last_busy on IBI when runtime PM is allowed
i3c: mipi-i3c-hci: Add quirk to allow IBI while runtime suspended
i3c: mipi-i3c-hci: Allow parent to manage runtime PM
i3c: mipi-i3c-hci-pci: Add optional ability to manage child runtime PM
i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
drivers/i3c/master.c | 14 +-
drivers/i3c/master/mipi-i3c-hci/core.c | 30 ++--
drivers/i3c/master/mipi-i3c-hci/hci.h | 7 +
drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 158 ++++++++++++++++++++-
include/linux/i3c/master.h | 2 +
5 files changed, 194 insertions(+), 17 deletions(-)
Regards
Adrian
|
When an IBI can be received after pm_runtime_put_autosuspend() has been
called for the controller, the interrupt may occur just before the
device is auto-suspended. In such cases, the runtime PM core may not see
any recent activity and may suspend the device earlier than intended.
Mark the controller as last busy whenever an IBI is queued (when
rpm_ibi_allowed is set) so that the auto-suspend delay correctly reflects
recent bus activity and avoids premature suspension.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/i3c/master.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
index bcc493dc9d04..dcc07ebc50a2 100644
--- a/drivers/i3c/master.c
+++ b/drivers/i3c/master.c
@@ -2721,9 +2721,14 @@ static void i3c_master_unregister_i3c_devs(struct i3c_master_controller *master)
*/
void i3c_master_queue_ibi(struct i3c_dev_desc *dev, struct i3c_ibi_slot *slot)
{
+ struct i3c_master_controller *master = i3c_dev_get_master(dev);
+
if (!dev->ibi || !slot)
return;
+ if (master->rpm_ibi_allowed)
+ pm_runtime_mark_last_busy(master->rpm_dev);
+
atomic_inc(&dev->ibi->pending_ibis);
queue_work(dev->ibi->wq, &slot->work);
}
--
2.51.0
|
{
"author": "Adrian Hunter <adrian.hunter@intel.com>",
"date": "Thu, 29 Jan 2026 20:18:37 +0200",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
Hi
Here are patches related to enabling IBI while runtime suspended for Intel
controllers.
Intel LPSS I3C controllers can wake from runtime suspend to receive
in-band interrupts (IBIs).
It is non-trivial to implement because the parent PCI device has 2 I3C bus
instances (MIPI I3C HCI Multi-Bus Instance capability) represented by
platform devices with a separate driver, but the IBI-wakeup is shared by
both, which means runtime PM has to be managed by the parent PCI driver.
To make that work, the PCI driver handles runtime PM, but leverages the
mipi-i3c-hci platform driver's functionality for saving and restoring
controller state.
Adrian Hunter (7):
i3c: mipi-i3c-hci-pci: Set d3hot_delay to 0 for Intel controllers
i3c: master: Allow controller drivers to select runtime PM device
i3c: master: Mark last_busy on IBI when runtime PM is allowed
i3c: mipi-i3c-hci: Add quirk to allow IBI while runtime suspended
i3c: mipi-i3c-hci: Allow parent to manage runtime PM
i3c: mipi-i3c-hci-pci: Add optional ability to manage child runtime PM
i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
drivers/i3c/master.c | 14 +-
drivers/i3c/master/mipi-i3c-hci/core.c | 30 ++--
drivers/i3c/master/mipi-i3c-hci/hci.h | 7 +
drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 158 ++++++++++++++++++++-
include/linux/i3c/master.h | 2 +
5 files changed, 194 insertions(+), 17 deletions(-)
Regards
Adrian
|
Some I3C controllers can be automatically runtime-resumed in order to
handle in-band interrupts (IBIs), meaning that runtime suspend does not
need to be blocked when IBIs are enabled.
For example, a PCI-attached controller in a low-power state may generate
a Power Management Event (PME) when the SDA line is pulled low to signal
the START condition of an IBI. The PCI subsystem will then runtime-resume
the device, allowing the IBI to be received without requiring the
controller to remain active.
Introduce a new quirk, HCI_QUIRK_RPM_IBI_ALLOWED, so that drivers can
opt-in to this capability via driver data.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/i3c/master/mipi-i3c-hci/core.c | 3 +++
drivers/i3c/master/mipi-i3c-hci/hci.h | 1 +
2 files changed, 4 insertions(+)
diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
index e925584113d1..ec4dbe64c35e 100644
--- a/drivers/i3c/master/mipi-i3c-hci/core.c
+++ b/drivers/i3c/master/mipi-i3c-hci/core.c
@@ -959,6 +959,9 @@ static int i3c_hci_probe(struct platform_device *pdev)
if (hci->quirks & HCI_QUIRK_RPM_ALLOWED)
i3c_hci_rpm_enable(&pdev->dev);
+ if (hci->quirks & HCI_QUIRK_RPM_IBI_ALLOWED)
+ hci->master.rpm_ibi_allowed = true;
+
return i3c_master_register(&hci->master, &pdev->dev, &i3c_hci_ops, false);
}
diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h
index 6035f74212db..819328a85b84 100644
--- a/drivers/i3c/master/mipi-i3c-hci/hci.h
+++ b/drivers/i3c/master/mipi-i3c-hci/hci.h
@@ -146,6 +146,7 @@ struct i3c_hci_dev_data {
#define HCI_QUIRK_OD_PP_TIMING BIT(3) /* Set OD and PP timings for AMD platforms */
#define HCI_QUIRK_RESP_BUF_THLD BIT(4) /* Set resp buf thld to 0 for AMD platforms */
#define HCI_QUIRK_RPM_ALLOWED BIT(5) /* Runtime PM allowed */
+#define HCI_QUIRK_RPM_IBI_ALLOWED BIT(6) /* IBI and Hot-Join allowed while runtime suspended */
/* global functions */
void mipi_i3c_hci_resume(struct i3c_hci *hci);
--
2.51.0
|
{
"author": "Adrian Hunter <adrian.hunter@intel.com>",
"date": "Thu, 29 Jan 2026 20:18:38 +0200",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
Hi
Here are patches related to enabling IBI while runtime suspended for Intel
controllers.
Intel LPSS I3C controllers can wake from runtime suspend to receive
in-band interrupts (IBIs).
It is non-trivial to implement because the parent PCI device has 2 I3C bus
instances (MIPI I3C HCI Multi-Bus Instance capability) represented by
platform devices with a separate driver, but the IBI-wakeup is shared by
both, which means runtime PM has to be managed by the parent PCI driver.
To make that work, the PCI driver handles runtime PM, but leverages the
mipi-i3c-hci platform driver's functionality for saving and restoring
controller state.
Adrian Hunter (7):
i3c: mipi-i3c-hci-pci: Set d3hot_delay to 0 for Intel controllers
i3c: master: Allow controller drivers to select runtime PM device
i3c: master: Mark last_busy on IBI when runtime PM is allowed
i3c: mipi-i3c-hci: Add quirk to allow IBI while runtime suspended
i3c: mipi-i3c-hci: Allow parent to manage runtime PM
i3c: mipi-i3c-hci-pci: Add optional ability to manage child runtime PM
i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
drivers/i3c/master.c | 14 +-
drivers/i3c/master/mipi-i3c-hci/core.c | 30 ++--
drivers/i3c/master/mipi-i3c-hci/hci.h | 7 +
drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 158 ++++++++++++++++++++-
include/linux/i3c/master.h | 2 +
5 files changed, 194 insertions(+), 17 deletions(-)
Regards
Adrian
|
Some platforms implement the MIPI I3C HCI Multi-Bus Instance capability,
where a single parent device hosts multiple I3C controller instances. In
such designs, the parent - not the individual child instances - may need to
coordinate runtime PM so that all controllers enter low-power states
together, and all runtime suspend callbacks are invoked in a controlled
and synchronized manner.
For example, if the parent enables IBI-wakeup when transitioning into a
low-power state, every bus instance must remain able to receive IBIs up
until that point. This requires deferring the individual controllers’
runtime suspend callbacks (which disable bus activity) until the parent
decides it is safe for all instances to suspend together.
To support this usage model:
* Export the controller's runtime PM suspend/resume callbacks so that
the parent can invoke them directly.
* Add a new quirk, HCI_QUIRK_RPM_PARENT_MANAGED, which designates the
parent device as the controller’s runtime PM device (rpm_dev). When
used without HCI_QUIRK_RPM_ALLOWED, this also prevents the child
instance’s system-suspend callbacks from using
pm_runtime_force_suspend()/pm_runtime_force_resume(), since runtime
PM is managed entirely by the parent.
* Move DEFAULT_AUTOSUSPEND_DELAY_MS into the header so it can be shared
by parent-managed PM implementations.
The new quirk allows platforms with multi-bus parent-managed PM
infrastructure to correctly coordinate runtime PM across all I3C HCI
instances.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/i3c/master/mipi-i3c-hci/core.c | 25 ++++++++++++++++---------
drivers/i3c/master/mipi-i3c-hci/hci.h | 6 ++++++
2 files changed, 22 insertions(+), 9 deletions(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
index ec4dbe64c35e..cb974b0f9e17 100644
--- a/drivers/i3c/master/mipi-i3c-hci/core.c
+++ b/drivers/i3c/master/mipi-i3c-hci/core.c
@@ -733,7 +733,7 @@ static int i3c_hci_reset_and_init(struct i3c_hci *hci)
return 0;
}
-static int i3c_hci_runtime_suspend(struct device *dev)
+int i3c_hci_runtime_suspend(struct device *dev)
{
struct i3c_hci *hci = dev_get_drvdata(dev);
int ret;
@@ -746,8 +746,9 @@ static int i3c_hci_runtime_suspend(struct device *dev)
return 0;
}
+EXPORT_SYMBOL_GPL(i3c_hci_runtime_suspend);
-static int i3c_hci_runtime_resume(struct device *dev)
+int i3c_hci_runtime_resume(struct device *dev)
{
struct i3c_hci *hci = dev_get_drvdata(dev);
int ret;
@@ -768,6 +769,7 @@ static int i3c_hci_runtime_resume(struct device *dev)
return 0;
}
+EXPORT_SYMBOL_GPL(i3c_hci_runtime_resume);
static int i3c_hci_suspend(struct device *dev)
{
@@ -784,12 +786,14 @@ static int i3c_hci_resume_common(struct device *dev, bool rstdaa)
struct i3c_hci *hci = dev_get_drvdata(dev);
int ret;
- if (!(hci->quirks & HCI_QUIRK_RPM_ALLOWED))
- return 0;
+ if (!(hci->quirks & HCI_QUIRK_RPM_PARENT_MANAGED)) {
+ if (!(hci->quirks & HCI_QUIRK_RPM_ALLOWED))
+ return 0;
- ret = pm_runtime_force_resume(dev);
- if (ret)
- return ret;
+ ret = pm_runtime_force_resume(dev);
+ if (ret)
+ return ret;
+ }
ret = i3c_master_do_daa_ext(&hci->master, rstdaa);
if (ret)
@@ -812,8 +816,6 @@ static int i3c_hci_restore(struct device *dev)
return i3c_hci_resume_common(dev, true);
}
-#define DEFAULT_AUTOSUSPEND_DELAY_MS 1000
-
static void i3c_hci_rpm_enable(struct device *dev)
{
struct i3c_hci *hci = dev_get_drvdata(dev);
@@ -962,6 +964,11 @@ static int i3c_hci_probe(struct platform_device *pdev)
if (hci->quirks & HCI_QUIRK_RPM_IBI_ALLOWED)
hci->master.rpm_ibi_allowed = true;
+ if (hci->quirks & HCI_QUIRK_RPM_PARENT_MANAGED) {
+ hci->master.rpm_dev = pdev->dev.parent;
+ hci->master.rpm_allowed = true;
+ }
+
return i3c_master_register(&hci->master, &pdev->dev, &i3c_hci_ops, false);
}
diff --git a/drivers/i3c/master/mipi-i3c-hci/hci.h b/drivers/i3c/master/mipi-i3c-hci/hci.h
index 819328a85b84..d0e7ad58ac15 100644
--- a/drivers/i3c/master/mipi-i3c-hci/hci.h
+++ b/drivers/i3c/master/mipi-i3c-hci/hci.h
@@ -147,6 +147,7 @@ struct i3c_hci_dev_data {
#define HCI_QUIRK_RESP_BUF_THLD BIT(4) /* Set resp buf thld to 0 for AMD platforms */
#define HCI_QUIRK_RPM_ALLOWED BIT(5) /* Runtime PM allowed */
#define HCI_QUIRK_RPM_IBI_ALLOWED BIT(6) /* IBI and Hot-Join allowed while runtime suspended */
+#define HCI_QUIRK_RPM_PARENT_MANAGED BIT(7) /* Runtime PM managed by parent device */
/* global functions */
void mipi_i3c_hci_resume(struct i3c_hci *hci);
@@ -156,4 +157,9 @@ void amd_set_od_pp_timing(struct i3c_hci *hci);
void amd_set_resp_buf_thld(struct i3c_hci *hci);
void i3c_hci_sync_irq_inactive(struct i3c_hci *hci);
+#define DEFAULT_AUTOSUSPEND_DELAY_MS 1000
+
+int i3c_hci_runtime_suspend(struct device *dev);
+int i3c_hci_runtime_resume(struct device *dev);
+
#endif
--
2.51.0
|
{
"author": "Adrian Hunter <adrian.hunter@intel.com>",
"date": "Thu, 29 Jan 2026 20:18:39 +0200",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
Hi
Here are patches related to enabling IBI while runtime suspended for Intel
controllers.
Intel LPSS I3C controllers can wake from runtime suspend to receive
in-band interrupts (IBIs).
It is non-trivial to implement because the parent PCI device has 2 I3C bus
instances (MIPI I3C HCI Multi-Bus Instance capability) represented by
platform devices with a separate driver, but the IBI-wakeup is shared by
both, which means runtime PM has to be managed by the parent PCI driver.
To make that work, the PCI driver handles runtime PM, but leverages the
mipi-i3c-hci platform driver's functionality for saving and restoring
controller state.
Adrian Hunter (7):
i3c: mipi-i3c-hci-pci: Set d3hot_delay to 0 for Intel controllers
i3c: master: Allow controller drivers to select runtime PM device
i3c: master: Mark last_busy on IBI when runtime PM is allowed
i3c: mipi-i3c-hci: Add quirk to allow IBI while runtime suspended
i3c: mipi-i3c-hci: Allow parent to manage runtime PM
i3c: mipi-i3c-hci-pci: Add optional ability to manage child runtime PM
i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
drivers/i3c/master.c | 14 +-
drivers/i3c/master/mipi-i3c-hci/core.c | 30 ++--
drivers/i3c/master/mipi-i3c-hci/hci.h | 7 +
drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 158 ++++++++++++++++++++-
include/linux/i3c/master.h | 2 +
5 files changed, 194 insertions(+), 17 deletions(-)
Regards
Adrian
|
Intel LPSS I3C controllers can wake from runtime suspend to receive
in-band interrupts (IBIs), and they also implement the MIPI I3C HCI
Multi-Bus Instance capability. When multiple I3C bus instances share the
same PCI wakeup, the PCI parent must coordinate runtime PM so that all
instances suspend together and their mipi-i3c-hci runtime suspend
callbacks are invoked in a consistent manner.
Enable IBI-based wakeup by setting HCI_QUIRK_RPM_IBI_ALLOWED for the
intel-lpss-i3c platform device. Replace HCI_QUIRK_RPM_ALLOWED with
HCI_QUIRK_RPM_PARENT_MANAGED so that the mipi-i3c-hci core driver expects
runtime PM to be controlled by the PCI parent rather than by individual
instances. For all Intel HCI PCI configurations, enable the corresponding
control_instance_pm flag in the PCI driver.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
drivers/i3c/master/mipi-i3c-hci/core.c | 2 +-
drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 3 +++
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/core.c b/drivers/i3c/master/mipi-i3c-hci/core.c
index cb974b0f9e17..67ae7441ce97 100644
--- a/drivers/i3c/master/mipi-i3c-hci/core.c
+++ b/drivers/i3c/master/mipi-i3c-hci/core.c
@@ -992,7 +992,7 @@ static const struct acpi_device_id i3c_hci_acpi_match[] = {
MODULE_DEVICE_TABLE(acpi, i3c_hci_acpi_match);
static const struct platform_device_id i3c_hci_driver_ids[] = {
- { .name = "intel-lpss-i3c", HCI_QUIRK_RPM_ALLOWED },
+ { .name = "intel-lpss-i3c", HCI_QUIRK_RPM_IBI_ALLOWED | HCI_QUIRK_RPM_PARENT_MANAGED },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(platform, i3c_hci_driver_ids);
diff --git a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
index f7f776300a0f..2f72cf48e36c 100644
--- a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
+++ b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
@@ -200,6 +200,7 @@ static const struct mipi_i3c_hci_pci_info intel_mi_1_info = {
.id = {0, 1},
.instance_offset = {0, 0x400},
.instance_count = 2,
+ .control_instance_pm = true,
};
static const struct mipi_i3c_hci_pci_info intel_mi_2_info = {
@@ -209,6 +210,7 @@ static const struct mipi_i3c_hci_pci_info intel_mi_2_info = {
.id = {2, 3},
.instance_offset = {0, 0x400},
.instance_count = 2,
+ .control_instance_pm = true,
};
static const struct mipi_i3c_hci_pci_info intel_si_2_info = {
@@ -218,6 +220,7 @@ static const struct mipi_i3c_hci_pci_info intel_si_2_info = {
.id = {2},
.instance_offset = {0},
.instance_count = 1,
+ .control_instance_pm = true,
};
static int mipi_i3c_hci_pci_find_instance(struct mipi_i3c_hci_pci *hci, struct device *dev)
--
2.51.0
|
{
"author": "Adrian Hunter <adrian.hunter@intel.com>",
"date": "Thu, 29 Jan 2026 20:18:41 +0200",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
Hi
Here are patches related to enabling IBI while runtime suspended for Intel
controllers.
Intel LPSS I3C controllers can wake from runtime suspend to receive
in-band interrupts (IBIs).
It is non-trivial to implement because the parent PCI device has 2 I3C bus
instances (MIPI I3C HCI Multi-Bus Instance capability) represented by
platform devices with a separate driver, but the IBI-wakeup is shared by
both, which means runtime PM has to be managed by the parent PCI driver.
To make that work, the PCI driver handles runtime PM, but leverages the
mipi-i3c-hci platform driver's functionality for saving and restoring
controller state.
Adrian Hunter (7):
i3c: mipi-i3c-hci-pci: Set d3hot_delay to 0 for Intel controllers
i3c: master: Allow controller drivers to select runtime PM device
i3c: master: Mark last_busy on IBI when runtime PM is allowed
i3c: mipi-i3c-hci: Add quirk to allow IBI while runtime suspended
i3c: mipi-i3c-hci: Allow parent to manage runtime PM
i3c: mipi-i3c-hci-pci: Add optional ability to manage child runtime PM
i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
drivers/i3c/master.c | 14 +-
drivers/i3c/master/mipi-i3c-hci/core.c | 30 ++--
drivers/i3c/master/mipi-i3c-hci/hci.h | 7 +
drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 158 ++++++++++++++++++++-
include/linux/i3c/master.h | 2 +
5 files changed, 194 insertions(+), 17 deletions(-)
Regards
Adrian
|
Some platforms implement the MIPI I3C HCI Multi-Bus Instance capability,
where a single parent device hosts multiple I3C controller instances. In
such designs, the parent - not the individual child instances - may need to
coordinate runtime PM so that all controllers enter low-power states
together, and all runtime suspend callbacks are invoked in a controlled
and synchronized manner.
For example, if the parent enables IBI-wakeup when transitioning into a
low-power state, every bus instance must remain able to receive IBIs up
until that point. This requires deferring the individual controllers’
runtime suspend callbacks (which disable bus activity) until the parent
decides it is safe for all instances to suspend together.
To support this usage model:
* Add runtime PM and system PM callbacks in the PCI driver to invoke
the mipi-i3c-hci driver’s runtime PM callbacks for each instance.
* Introduce a driver-data flag, control_instance_pm, which opts into
the new parent-managed PM behaviour.
* Ensure the callbacks are only used when the corresponding instance is
operational at suspend time. This is reliable because the operational
state cannot change while the parent device is undergoing a PM
transition, and PCI always performs a runtime resume before system
suspend on current configurations, so that suspend and resume alternate
irrespective of whether it is runtime or system PM.
By that means, parent-managed runtime PM coordination for multi-instance
MIPI I3C HCI PCI devices is provided without altering existing behaviour on
platforms that do not require it.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
.../master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 154 +++++++++++++++++-
1 file changed, 150 insertions(+), 4 deletions(-)
diff --git a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
index bc83caad4197..f7f776300a0f 100644
--- a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
+++ b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
@@ -9,6 +9,7 @@
#include <linux/acpi.h>
#include <linux/bitfield.h>
#include <linux/debugfs.h>
+#include <linux/i3c/master.h>
#include <linux/idr.h>
#include <linux/iopoll.h>
#include <linux/kernel.h>
@@ -20,16 +21,24 @@
#include <linux/pm_qos.h>
#include <linux/pm_runtime.h>
+#include "hci.h"
+
/*
* There can up to 15 instances, but implementations have at most 2 at this
* time.
*/
#define INST_MAX 2
+struct mipi_i3c_hci_pci_instance {
+ struct device *dev;
+ bool operational;
+};
+
struct mipi_i3c_hci_pci {
struct pci_dev *pci;
void __iomem *base;
const struct mipi_i3c_hci_pci_info *info;
+ struct mipi_i3c_hci_pci_instance instance[INST_MAX];
void *private;
};
@@ -40,6 +49,7 @@ struct mipi_i3c_hci_pci_info {
int id[INST_MAX];
u32 instance_offset[INST_MAX];
int instance_count;
+ bool control_instance_pm;
};
#define INTEL_PRIV_OFFSET 0x2b0
@@ -210,14 +220,148 @@ static const struct mipi_i3c_hci_pci_info intel_si_2_info = {
.instance_count = 1,
};
-static void mipi_i3c_hci_pci_rpm_allow(struct device *dev)
+static int mipi_i3c_hci_pci_find_instance(struct mipi_i3c_hci_pci *hci, struct device *dev)
+{
+ for (int i = 0; i < INST_MAX; i++) {
+ if (!hci->instance[i].dev)
+ hci->instance[i].dev = dev;
+ if (hci->instance[i].dev == dev)
+ return i;
+ }
+
+ return -1;
+}
+
+#define HC_CONTROL 0x04
+#define HC_CONTROL_BUS_ENABLE BIT(31)
+
+static bool __mipi_i3c_hci_pci_is_operational(struct device *dev)
+{
+ const struct mipi_i3c_hci_platform_data *pdata = dev->platform_data;
+ u32 hc_control = readl(pdata->base_regs + HC_CONTROL);
+
+ return hc_control & HC_CONTROL_BUS_ENABLE;
+}
+
+static bool mipi_i3c_hci_pci_is_operational(struct device *dev, bool update)
+{
+ struct mipi_i3c_hci_pci *hci = dev_get_drvdata(dev->parent);
+ int pos = mipi_i3c_hci_pci_find_instance(hci, dev);
+
+ if (pos < 0) {
+ dev_err(dev, "%s: I3C instance not found\n", __func__);
+ return false;
+ }
+
+ if (update)
+ hci->instance[pos].operational = __mipi_i3c_hci_pci_is_operational(dev);
+
+ return hci->instance[pos].operational;
+}
+
+struct mipi_i3c_hci_pci_pm_data {
+ struct device *dev[INST_MAX];
+ int dev_cnt;
+};
+
+static bool mipi_i3c_hci_pci_is_mfd(struct device *dev)
+{
+ return dev_is_platform(dev) && mfd_get_cell(to_platform_device(dev));
+}
+
+static int mipi_i3c_hci_pci_suspend_instance(struct device *dev, void *data)
+{
+ struct mipi_i3c_hci_pci_pm_data *pm_data = data;
+ int ret;
+
+ if (!mipi_i3c_hci_pci_is_mfd(dev) ||
+ !mipi_i3c_hci_pci_is_operational(dev, true))
+ return 0;
+
+ ret = i3c_hci_runtime_suspend(dev);
+ if (ret)
+ return ret;
+
+ pm_data->dev[pm_data->dev_cnt++] = dev;
+
+ return 0;
+}
+
+static int mipi_i3c_hci_pci_resume_instance(struct device *dev, void *data)
{
+ struct mipi_i3c_hci_pci_pm_data *pm_data = data;
+ int ret;
+
+ if (!mipi_i3c_hci_pci_is_mfd(dev) ||
+ !mipi_i3c_hci_pci_is_operational(dev, false))
+ return 0;
+
+ ret = i3c_hci_runtime_resume(dev);
+ if (ret)
+ return ret;
+
+ pm_data->dev[pm_data->dev_cnt++] = dev;
+
+ return 0;
+}
+
+static int mipi_i3c_hci_pci_suspend(struct device *dev)
+{
+ struct mipi_i3c_hci_pci *hci = dev_get_drvdata(dev);
+ struct mipi_i3c_hci_pci_pm_data pm_data = {};
+ int ret;
+
+ if (!hci->info->control_instance_pm)
+ return 0;
+
+ ret = device_for_each_child_reverse(dev, &pm_data, mipi_i3c_hci_pci_suspend_instance);
+ if (ret) {
+ if (ret == -EAGAIN || ret == -EBUSY)
+ pm_runtime_mark_last_busy(&hci->pci->dev);
+ for (int i = 0; i < pm_data.dev_cnt; i++)
+ i3c_hci_runtime_resume(pm_data.dev[i]);
+ }
+
+ return ret;
+}
+
+static int mipi_i3c_hci_pci_resume(struct device *dev)
+{
+ struct mipi_i3c_hci_pci *hci = dev_get_drvdata(dev);
+ struct mipi_i3c_hci_pci_pm_data pm_data = {};
+ int ret;
+
+ if (!hci->info->control_instance_pm)
+ return 0;
+
+ ret = device_for_each_child(dev, &pm_data, mipi_i3c_hci_pci_resume_instance);
+ if (ret)
+ for (int i = 0; i < pm_data.dev_cnt; i++)
+ i3c_hci_runtime_suspend(pm_data.dev[i]);
+
+ return ret;
+}
+
+static void mipi_i3c_hci_pci_rpm_allow(struct mipi_i3c_hci_pci *hci)
+{
+ struct device *dev = &hci->pci->dev;
+
+ if (hci->info->control_instance_pm) {
+ pm_runtime_set_autosuspend_delay(dev, DEFAULT_AUTOSUSPEND_DELAY_MS);
+ pm_runtime_use_autosuspend(dev);
+ }
+
pm_runtime_put(dev);
pm_runtime_allow(dev);
}
-static void mipi_i3c_hci_pci_rpm_forbid(struct device *dev)
+static void mipi_i3c_hci_pci_rpm_forbid(struct mipi_i3c_hci_pci *hci)
{
+ struct device *dev = &hci->pci->dev;
+
+ if (hci->info->control_instance_pm)
+ pm_runtime_dont_use_autosuspend(dev);
+
pm_runtime_forbid(dev);
pm_runtime_get_sync(dev);
}
@@ -299,7 +443,7 @@ static int mipi_i3c_hci_pci_probe(struct pci_dev *pci,
pci_set_drvdata(pci, hci);
- mipi_i3c_hci_pci_rpm_allow(&pci->dev);
+ mipi_i3c_hci_pci_rpm_allow(hci);
return 0;
@@ -316,13 +460,15 @@ static void mipi_i3c_hci_pci_remove(struct pci_dev *pci)
if (hci->info->exit)
hci->info->exit(hci);
- mipi_i3c_hci_pci_rpm_forbid(&pci->dev);
+ mipi_i3c_hci_pci_rpm_forbid(hci);
mfd_remove_devices(&pci->dev);
}
/* PM ops must exist for PCI to put a device to a low power state */
static const struct dev_pm_ops mipi_i3c_hci_pci_pm_ops = {
+ RUNTIME_PM_OPS(mipi_i3c_hci_pci_suspend, mipi_i3c_hci_pci_resume, NULL)
+ SYSTEM_SLEEP_PM_OPS(mipi_i3c_hci_pci_suspend, mipi_i3c_hci_pci_resume)
};
static const struct pci_device_id mipi_i3c_hci_pci_devices[] = {
--
2.51.0
|
{
"author": "Adrian Hunter <adrian.hunter@intel.com>",
"date": "Thu, 29 Jan 2026 20:18:40 +0200",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
Hi
Here are patches related to enabling IBI while runtime suspended for Intel
controllers.
Intel LPSS I3C controllers can wake from runtime suspend to receive
in-band interrupts (IBIs).
It is non-trivial to implement because the parent PCI device has 2 I3C bus
instances (MIPI I3C HCI Multi-Bus Instance capability) represented by
platform devices with a separate driver, but the IBI-wakeup is shared by
both, which means runtime PM has to be managed by the parent PCI driver.
To make that work, the PCI driver handles runtime PM, but leverages the
mipi-i3c-hci platform driver's functionality for saving and restoring
controller state.
Adrian Hunter (7):
i3c: mipi-i3c-hci-pci: Set d3hot_delay to 0 for Intel controllers
i3c: master: Allow controller drivers to select runtime PM device
i3c: master: Mark last_busy on IBI when runtime PM is allowed
i3c: mipi-i3c-hci: Add quirk to allow IBI while runtime suspended
i3c: mipi-i3c-hci: Allow parent to manage runtime PM
i3c: mipi-i3c-hci-pci: Add optional ability to manage child runtime PM
i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
drivers/i3c/master.c | 14 +-
drivers/i3c/master/mipi-i3c-hci/core.c | 30 ++--
drivers/i3c/master/mipi-i3c-hci/hci.h | 7 +
drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 158 ++++++++++++++++++++-
include/linux/i3c/master.h | 2 +
5 files changed, 194 insertions(+), 17 deletions(-)
Regards
Adrian
|
On Thu, Jan 29, 2026 at 08:18:35PM +0200, Adrian Hunter wrote:
Reviewed-by: Frank Li <Frank.Li@nxp.com>
|
{
"author": "Frank Li <Frank.li@nxp.com>",
"date": "Thu, 29 Jan 2026 14:43:45 -0500",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
Hi
Here are patches related to enabling IBI while runtime suspended for Intel
controllers.
Intel LPSS I3C controllers can wake from runtime suspend to receive
in-band interrupts (IBIs).
It is non-trivial to implement because the parent PCI device has 2 I3C bus
instances (MIPI I3C HCI Multi-Bus Instance capability) represented by
platform devices with a separate driver, but the IBI-wakeup is shared by
both, which means runtime PM has to be managed by the parent PCI driver.
To make that work, the PCI driver handles runtime PM, but leverages the
mipi-i3c-hci platform driver's functionality for saving and restoring
controller state.
Adrian Hunter (7):
i3c: mipi-i3c-hci-pci: Set d3hot_delay to 0 for Intel controllers
i3c: master: Allow controller drivers to select runtime PM device
i3c: master: Mark last_busy on IBI when runtime PM is allowed
i3c: mipi-i3c-hci: Add quirk to allow IBI while runtime suspended
i3c: mipi-i3c-hci: Allow parent to manage runtime PM
i3c: mipi-i3c-hci-pci: Add optional ability to manage child runtime PM
i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
drivers/i3c/master.c | 14 +-
drivers/i3c/master/mipi-i3c-hci/core.c | 30 ++--
drivers/i3c/master/mipi-i3c-hci/hci.h | 7 +
drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 158 ++++++++++++++++++++-
include/linux/i3c/master.h | 2 +
5 files changed, 194 insertions(+), 17 deletions(-)
Regards
Adrian
|
On Thu, Jan 29, 2026 at 08:18:37PM +0200, Adrian Hunter wrote:
It looks like this can't resolve the problem. pm_runtime_mark_last_busy()
just changes dev->power.last_busy; if the suspend happens before it,
nothing happens.
The IRQ uses a threaded handler; the IRQ thread can call
pm_runtime_resume() if needed.
Also, this function is called from the IRQ handler and just queues work,
so what is the impact of doing nothing here?
Frank
|
{
"author": "Frank Li <Frank.li@nxp.com>",
"date": "Thu, 29 Jan 2026 14:56:01 -0500",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
Hi
Here are patches related to enabling IBI while runtime suspended for Intel
controllers.
Intel LPSS I3C controllers can wake from runtime suspend to receive
in-band interrupts (IBIs).
It is non-trivial to implement because the parent PCI device has 2 I3C bus
instances (MIPI I3C HCI Multi-Bus Instance capability) represented by
platform devices with a separate driver, but the IBI-wakeup is shared by
both, which means runtime PM has to be managed by the parent PCI driver.
To make that work, the PCI driver handles runtime PM, but leverages the
mipi-i3c-hci platform driver's functionality for saving and restoring
controller state.
Adrian Hunter (7):
i3c: mipi-i3c-hci-pci: Set d3hot_delay to 0 for Intel controllers
i3c: master: Allow controller drivers to select runtime PM device
i3c: master: Mark last_busy on IBI when runtime PM is allowed
i3c: mipi-i3c-hci: Add quirk to allow IBI while runtime suspended
i3c: mipi-i3c-hci: Allow parent to manage runtime PM
i3c: mipi-i3c-hci-pci: Add optional ability to manage child runtime PM
i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
drivers/i3c/master.c | 14 +-
drivers/i3c/master/mipi-i3c-hci/core.c | 30 ++--
drivers/i3c/master/mipi-i3c-hci/hci.h | 7 +
drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c | 158 ++++++++++++++++++++-
include/linux/i3c/master.h | 2 +
5 files changed, 194 insertions(+), 17 deletions(-)
Regards
Adrian
|
On Thu, Jan 29, 2026 at 08:18:39PM +0200, Adrian Hunter wrote:
Does your hardware support receiving an IBI while runtime suspended?
Frank
|
{
"author": "Frank Li <Frank.li@nxp.com>",
"date": "Thu, 29 Jan 2026 15:00:14 -0500",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
|
On 29/01/2026 22:00, Frank Li wrote:
When runtime suspended (in D3), the hardware first triggers a Power Management
Event (PME) when the SDA line is pulled low to signal the START condition of an IBI.
The PCI subsystem will then runtime-resume the device. When the bus is enabled,
the clock is started and the IBI is received.
|
{
"author": "Adrian Hunter <adrian.hunter@intel.com>",
"date": "Thu, 29 Jan 2026 22:28:14 +0200",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
|
On 29/01/2026 21:56, Frank Li wrote:
It should be effective.
rpm_suspend() recalculates the autosuspend expiry time based on
last_busy (see pm_runtime_autosuspend_expiration()) and restarts
the timer if it is in the future.
The impact of doing nothing would just be premature runtime suspension,
inconsistent with autosuspend_delay.
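The autosuspend bookkeeping described here can be sketched as a small userspace simulation. The class and method names below are invented for illustration; the real logic lives in drivers/base/power/runtime.c:

```python
import time

class FakeDev:
    """Minimal model of the runtime-PM autosuspend bookkeeping (illustrative only)."""
    def __init__(self, autosuspend_delay_ms):
        self.autosuspend_delay_ms = autosuspend_delay_ms
        self.last_busy = time.monotonic()
        self.suspended = False

    def mark_last_busy(self):
        # Like pm_runtime_mark_last_busy(): only updates a timestamp.
        self.last_busy = time.monotonic()

    def autosuspend_expiration(self):
        # Like pm_runtime_autosuspend_expiration(): returns the expiry
        # time, or None if the delay has already elapsed.
        expires = self.last_busy + self.autosuspend_delay_ms / 1000.0
        return expires if expires > time.monotonic() else None

    def rpm_suspend(self):
        # A premature suspend attempt recomputes the expiry from
        # last_busy and, if it is still in the future, restarts the
        # timer instead of suspending.
        if self.autosuspend_expiration() is not None:
            return "timer restarted"
        self.suspended = True
        return "suspended"

dev = FakeDev(autosuspend_delay_ms=100)
dev.mark_last_busy()        # an IBI arrived: push the expiry out
print(dev.rpm_suspend())    # too early, so the timer restarts
time.sleep(0.15)
print(dev.rpm_suspend())    # delay elapsed, so the device suspends
```

This is why updating last_busy is effective even without touching the timer directly: the suspend path itself re-checks the expiry.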
|
{
"author": "Adrian Hunter <adrian.hunter@intel.com>",
"date": "Thu, 29 Jan 2026 22:42:32 +0200",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
|
On Thu, Jan 29, 2026 at 10:42:32PM +0200, Adrian Hunter wrote:
CPU 0 CPU 1
1. rpm_suspend() 2. pm_runtime_mark_last_busy(master->rpm_dev)
If 2 happens before 1, it can extend the suspend. If 2 happens after 1, it
does nothing.
Frank
|
{
"author": "Frank Li <Frank.li@nxp.com>",
"date": "Thu, 29 Jan 2026 15:55:40 -0500",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
|
On Thu, Jan 29, 2026 at 10:28:14PM +0200, Adrian Hunter wrote:
That aligns with my assumption, so why is a complex solution needed?
SDA -> PME -> IRQ should be handled by hardware, so the IRQ handler queues the
IBI to a workqueue.
The IBI work will try to do a transfer, which will call runtime resume and then
transfer the data.
What is the issue?
Frank
|
{
"author": "Frank Li <Frank.li@nxp.com>",
"date": "Thu, 29 Jan 2026 16:00:20 -0500",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
|
On 29/01/2026 23:00, Frank Li wrote:
The PME indicates an I3C START (SDA line pulled low). The controller is
in a low power state, unable to operate the bus. At this point it is not
known which I3C device has pulled the SDA line low, or even whether it is an
IBI, since it is indistinguishable from a hot-join at this point.
The PCI PME IRQ is not the device's IRQ. It is handled by acpi_irq()
which ultimately informs the PCI subsystem to wake the PCI device.
The PCI subsystem performs pm_request_resume(); see pci_acpi_wake_dev().
When the controller is resumed, it enables the I3C bus and the IBI is
finally delivered normally.
However, none of that is related to this patch.
This patch is because the PCI device has 2 I3C bus instances and only 1 PME
wakeup. The PME becomes active when the PCI device is put to a low
power state. Both I3C bus instances must be runtime suspended then.
Similarly, upon resume the PME is no longer active, so both I3C bus instances
must make their buses operational - we don't know which may have received
an IBI. And there may be further IBIs which can't be received unless the
associated bus is operational. The PCI device is no longer in a low power
state, so there will be no PME in that case.
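The shared-wakeup constraint described here (one PME for two bus instances, armed only once the parent reaches D3hot) can be modeled roughly as follows. The class is a sketch with invented names, not the driver's actual structure:

```python
class FakeParentPci:
    """Parent PCI device with one PME shared by two I3C bus instances."""
    def __init__(self):
        self.children_suspended = [False, False]
        self.in_d3hot = False
        self.pme_armed = False

    def child_suspend(self, idx):
        self.children_suspended[idx] = True
        # Only when both instances are suspended may the parent enter
        # D3hot, at which point the PCI core arms the PME.
        if all(self.children_suspended):
            self.in_d3hot = True
            self.pme_armed = True

    def pme_wake(self):
        # A START on either bus triggers the single shared PME; it is
        # not known which instance it was for, so both must be resumed.
        assert self.pme_armed
        self.in_d3hot = False
        self.pme_armed = False
        self.children_suspended = [False, False]

pci = FakeParentPci()
pci.child_suspend(0)
assert not pci.pme_armed           # one instance still running: no PME
pci.child_suspend(1)
assert pci.in_d3hot and pci.pme_armed
pci.pme_wake()                     # IBI START: resume both instances
assert pci.children_suspended == [False, False]
```

The asserts trace the design point: the wakeup only exists in D3hot, and a wake event must make both buses operational again.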
|
{
"author": "Adrian Hunter <adrian.hunter@intel.com>",
"date": "Fri, 30 Jan 2026 09:00:33 +0200",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
|
On 29/01/2026 22:55, Frank Li wrote:
2 happening after 1 is a separate issue. It will never happen
in the wakeup case because the wakeup does a runtime resume:
pm_runtime_put_autosuspend()
IBI -> pm_runtime_mark_last_busy()
another IBI -> pm_runtime_mark_last_busy() and so on
<autosuspend_delay finally elapses>
rpm_suspend() -> device suspended, PME activated
IBI START -> PME -> pm_request_resume()
IBI is delivered after controller runtime resumes
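The ordering above can be replayed as a tiny discrete-event sketch (hypothetical helper and event names), showing why "2 after 1" cannot occur in the wakeup case:

```python
def wakeup_trace(events):
    """Replay the sequence: 'ibi' = device busy while active,
    'tick' = autosuspend delay elapses, 'start' = I3C START while
    suspended (triggers the PME wakeup)."""
    state = "active"
    trace = []
    for ev in events:
        if ev == "ibi" and state == "active":
            trace.append("mark_last_busy")          # extends autosuspend
        elif ev == "tick" and state == "active":
            state = "suspended"
            trace.append("rpm_suspend -> PME armed")
        elif ev == "start" and state == "suspended":
            state = "active"                        # resume precedes delivery
            trace.append("PME -> pm_request_resume -> IBI delivered")
    return trace

print(wakeup_trace(["ibi", "ibi", "tick", "start"]))
```

Because an IBI received while suspended is only delivered after the resume, every mark_last_busy necessarily happens in the active state, before the next rpm_suspend().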
|
{
"author": "Adrian Hunter <adrian.hunter@intel.com>",
"date": "Fri, 30 Jan 2026 09:48:07 +0200",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
|
On Fri, Jan 30, 2026 at 09:00:33AM +0200, Adrian Hunter wrote:
If instance 1 is suspended and instance 2 is running, the PME is inactive.
What happens if a device on instance 1 requests an IBI?
Will the IBI be missed?
Is the PME activated automatically by hardware, or does it need software
configuration?
Frank
|
{
"author": "Frank Li <Frank.li@nxp.com>",
"date": "Fri, 30 Jan 2026 10:04:24 -0500",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
|
On 30/01/2026 17:04, Frank Li wrote:
Nothing will happen. Instance 1's I3C bus is not operational, and there can
be no PME when the PCI device is not in a low power state (D3hot).
Possibly not if instance 1 is eventually resumed and the I3C device
requesting the IBI has not yet given up.
PCI devices (hardware) advertise their PME capability in terms of
which states are capable of PMEs. Currently the Intel LPSS I3C
device lists only D3hot.
The PCI subsystem (software) automatically enables the PME before
runtime suspend if the target power state allows it.
|
{
"author": "Adrian Hunter <adrian.hunter@intel.com>",
"date": "Fri, 30 Jan 2026 18:34:37 +0200",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|
lkml
|
[PATCH 0/7] i3c: mipi-i3c-hci-pci: Enable IBI while runtime suspended for Intel controllers
|
|
On Fri, Jan 30, 2026 at 06:34:37PM +0200, Adrian Hunter wrote:
Okay, I think I understand your situation; let me check the patches again.
Frank
|
{
"author": "Frank Li <Frank.li@nxp.com>",
"date": "Fri, 30 Jan 2026 12:11:19 -0500",
"thread_id": "20260129181841.130864-1-adrian.hunter@intel.com.mbox.gz"
}
|